Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function. Given the following two definitions of convexity of $f$, prove that (i) implies (ii):
(i) $\forall x, y \in \mathbb{R} : f(x) \ge f(y) + f'(y)(x - y)$
(ii) $\forall x, y \in \mathbb{R}, \forall \lambda \in [0, 1] : f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y)$
First I rewrote (i), for $x > y$, as $$f'(y) \leq \frac{f(x)-f(y)}{x-y} \,\,\, (*)$$ (for $x < y$ the inequality flips, since we divide by a negative number). So the slope of the secant from $y$ to a point $x > y$ is always at least the slope of the tangent at $y$.
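As a quick numerical sanity check of (i), here is a small sketch with the assumed example $f(x) = x^2$, $f'(x) = 2x$ (my own choice, not from the problem statement); the tangent line at any $y$, evaluated at any $x$, should never exceed $f(x)$:

```python
import random

# Assumed convex example: f(x) = x^2, so f'(x) = 2x.
f = lambda t: t * t
fprime = lambda t: 2 * t

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # (i): f(x) >= f(y) + f'(y) * (x - y), i.e. the graph lies
    # above every tangent line (small tolerance for float error)
    assert f(x) >= f(y) + fprime(y) * (x - y) - 1e-9
print("gradient inequality (i) holds on all sampled pairs")
```

For $f(x)=x^2$ the inequality reduces to $(x-y)^2 \ge 0$, so it holds exactly.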
I have tried writing $f(\lambda x + (1 - \lambda)y) = f(y+\lambda(x-y))$ as $f(y)$ plus, so to speak, the sum of all $f'(y + \epsilon \cdot n) \cdot \epsilon$, where $\epsilon \rightarrow 0$ and $n$ needs to be defined correctly, of course. So just the "starting point" $f(y)$ and then every point with its slope until we reach $f(y+\lambda(x-y))$. I would then estimate each of those slopes using $(*)$ and obtain an inequality. But this doesn't work, since I would need to apply $(*)$ at the actual points $x$ and $y$, and $(*)$ only bounds the slope at the intermediate points.
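The "sum of all $f'(y + \epsilon \cdot n) \cdot \epsilon$" idea is essentially a Riemann sum for the fundamental theorem of calculus, $f(z) - f(y) = \int_y^z f'(t)\,dt$. A small numerical sketch of that step, again with the assumed example $f(x)=x^2$ and arbitrarily chosen $x$, $y$, $\lambda$:

```python
# Assumed example: f(x) = x^2, f'(x) = 2x.
f = lambda t: t * t
fprime = lambda t: 2 * t

# Arbitrary sample values for the illustration.
y, x, lam = 1.0, 4.0, 0.3
z = lam * x + (1 - lam) * y      # note z = y + lam * (x - y)

# Left Riemann sum: f(y) plus the slopes f'(y + k*eps) times eps
# should approach f(z) as eps -> 0.
n = 100_000
eps = (z - y) / n
riemann = sum(fprime(y + k * eps) * eps for k in range(n))
print(abs(f(y) + riemann - f(z)))   # should be tiny
```

This confirms the decomposition itself is sound; the difficulty is, as described above, that $(*)$ bounds the slope at the intermediate points, not at $x$ and $y$ themselves.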
I can't come up with a different attempt, and the other questions I found online don't involve the derivative; they all use different definitions of convexity than the ones here.