The number of the trial on which the first success occurs, in a sequence of independent Bernoulli trials with success probability $p$, follows a geometric distribution: a discrete random variable $X$ with parameter $p$ and $\Pr[X=n]=(1-p)^{n-1}\cdot p$. A classical and intuitive result is that $E(X)=1/p$ for $0<p\le1$. I'm after an analytic proof of that.
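(As a sanity check on the setup, these probabilities do sum to $1$, by the geometric series formula:)
$$\sum_{n=1}^\infty(1-p)^{n-1}\cdot p=p\cdot\frac{1}{1-(1-p)}=1.$$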
By definition of expected value as $E(X)=\displaystyle\lim_{m\to\infty}\sum_{n=1}^m\Pr[X=n]\cdot n$, if the limit exists, I get $$E(X)=\lim_{m\to\infty}\sum_{n=1}^m (1-p)^{n-1}\cdot p\cdot n$$ I prove convergence of the series using the ratio test and the fact that $1-p<1$ (for $0<p<1$; the case $p=1$ is trivial), obtaining $$E(X)=p\sum_{n=1}^\infty(1-p)^{n-1}\cdot n$$ But I'm stuck there. Please hint at a method.
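Not a proof, of course, but here is a quick numerical sanity check I ran (a minimal Python sketch; the function names are my own). Both the partial sums of the series above and a Monte Carlo simulation of the first-success trial agree with $1/p$:

```python
import random

def partial_expectation(p, m):
    """Partial sum p * sum_{n=1}^m n*(1-p)^(n-1) of the series for E(X)."""
    return sum(p * (1 - p) ** (n - 1) * n for n in range(1, m + 1))

def simulate_geometric_mean(p, trials=200_000, seed=0):
    """Estimate E(X) by simulating the index of the first success."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1
        while rng.random() >= p:  # failure, with probability 1 - p
            n += 1
        total += n
    return total / trials

print(partial_expectation(0.25, 1000))   # very close to 1/0.25 = 4
print(simulate_geometric_mean(0.25))     # roughly 4, up to sampling noise
```

For $p=0.25$ the partial sum is $4$ to machine precision and the simulation lands within a few hundredths of it, so the target value $1/p$ looks right; what I'm missing is the closed-form evaluation of the series.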