No.
What you refer to (the difference between the means of groups A and B) is actually the effect size, and it has absolutely nothing to do with the p-value.
The situation is nicely summarized in the (highly recommended) paper Using Effect Size - or Why the P Value Is Not Enough (emphasis mine):
Why Report Effect Sizes?
The effect size is the main finding of a quantitative study. While a P value can inform the reader whether an effect exists, the P value will not reveal the size of the effect. In reporting and interpreting studies, both the substantive significance (effect size) and statistical significance (P value) are essential results to be reported.
Why Isn't the P Value Enough?
Statistical significance is the probability that the observed difference between two groups is due to chance. If the P value is larger than the alpha level chosen (eg, .05), any observed difference is assumed to be explained by sampling variability. With a sufficiently large sample, a statistical test will almost always demonstrate a significant difference, unless there is no effect whatsoever, that is, when the effect size is exactly zero; yet very small differences, even if significant, are often meaningless. Thus, reporting only the significant P value for an analysis is not adequate for readers to fully understand the results.
In other words, the p-value reflects our confidence that the effect indeed exists (i.e. that it is not due to chance), but it says absolutely nothing about its magnitude (size).
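To make the "large sample, tiny effect" point in the quote concrete, here is a minimal simulation sketch (my own illustration, not from the paper): two groups whose means differ by only 0.01 standard deviations produce a vanishingly small p-value once the sample is huge, while the effect size (Cohen's d) remains negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000                                  # very large sample per group
a = rng.normal(loc=0.00, scale=1.0, size=n)    # group A
b = rng.normal(loc=0.01, scale=1.0, size=n)    # group B: mean shifted by only 0.01 SD

# Two-sample t-test: "significant" purely because n is huge
t, p = stats.ttest_ind(a, b)

# Cohen's d: difference in means divided by the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p-value   = {p:.2e}")   # far below 0.05
print(f"Cohen's d = {d:.3f}")   # yet the effect is tiny (~0.01)
```

Flip the setup (a small sample with a real difference) and you get the mirror image: a large effect size with a non-significant p-value, which is why both numbers need to be reported.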
In fact, the practice of focusing on the p-values instead of the effect size has been the source of much controversy and the subject of fierce criticism lately; see the (again, highly recommended) book The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives.
The following threads at Cross Validated may also be useful: