I am reading "A Course in Metric Geometry" (Burago, Burago, and Ivanov). The authors define arc lengths of continuous curves in metric spaces in the same way as here: https://en.wikipedia.org/wiki/Curve#Length_of_a_curve. They then say that the arc length function on a metric space $(X,d)$ is lower semicontinuous on the space of continuous paths into $(X,d)$, equipped with the topology of pointwise convergence. More specifically (with $L$ being the function that assigns each continuous path its arc length):
This means that if a sequence of rectifiable paths $\gamma_i$ (all with the same domain) is such that $\gamma_i(t)$ converges to $\gamma(t)$ as $i\rightarrow \infty$ for every $t$ in the domain, then $\liminf_{i\rightarrow\infty} L(\gamma_i)\geq L(\gamma)$.
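(As a sanity check that this is really only *semi*continuity, i.e. that the inequality can be strict, here is the standard staircase example in $\mathbb{R}^2$ computed numerically. The helper names `poly_length` and `staircase_vertices` are my own, not from the book: staircase paths of constant length $2$ converge pointwise to the diagonal, which has length $\sqrt{2}$.)

```python
import math

def poly_length(points):
    """Length of the polygonal path through the given points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def staircase_vertices(n):
    """Vertices of the n-step staircase from (0,0) to (1,1):
    right 1/n then up 1/n, repeated n times.  The staircase stays
    within distance ~1/n of the diagonal, so these paths converge
    (uniformly, hence pointwise) to the diagonal as n grows."""
    pts = [(0.0, 0.0)]
    for i in range(n):
        pts.append(((i + 1) / n, i / n))        # horizontal step
        pts.append(((i + 1) / n, (i + 1) / n))  # vertical step
    return pts

for n in (1, 4, 16, 64):
    print(n, poly_length(staircase_vertices(n)))   # ≈ 2.0 for every n
print(poly_length([(0.0, 0.0), (1.0, 1.0)]))       # sqrt(2) ≈ 1.4142
```

So here $\liminf L(\gamma_i) = 2 > \sqrt{2} = L(\gamma)$, consistent with the claimed inequality.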
This seems true to me, but the proof the authors give does not look right to me somehow. They say:
Let paths $\gamma_i$ converge pointwise to $\gamma$. Take $\epsilon>0$ and fix a partition $Y$ for $\gamma$ such that $L(\gamma)-\Sigma(Y)<\epsilon$. Now consider the sums $\Sigma_j(Y)$ for paths $\gamma_j$ corresponding to the same partition $Y$. Choose $j$ to be so large that the inequality $d(\gamma_j(y_i),\gamma(y_i))<\epsilon$ holds for all $y_i\in Y$. Then $$L(\gamma)\leq \Sigma(Y)+\epsilon\leq \Sigma_j(Y)+\epsilon+(N+1)\epsilon\leq L(\gamma_j)+(N+2)\epsilon$$ Since $\epsilon$ is arbitrary, this implies [the desired result].
To clarify, $Y$ is a partition of the interval serving as the domain of $\gamma$ (say $[0,1]$), with points $y_1,y_2,\dots,y_m$ (I have introduced the variable $m$ myself). $\Sigma(Y)$ denotes the arc length approximation $\sum_{i=1}^{m-1} d(\gamma(y_i),\gamma(y_{i+1}))$, and $\Sigma_j(Y)$ is the same sort of approximation with $\gamma$ replaced by the sequence member $\gamma_j$.
The authors do not explain what $N$ is, but I think $N$ is supposed to be what I have called $m$ (the number of points in $Y$)? But I am not sure how the authors get the inequality
$$\Sigma(Y)+\epsilon\leq \Sigma_j(Y)+\epsilon+(N+1)\epsilon$$
The closest inequality to this I could get was
$$\Sigma(Y)+\epsilon\leq \Sigma_j(Y)+\epsilon+(2N-2)\epsilon$$
basically using the fact that $$d(\gamma(y_i),\gamma(y_{i+1}))\leq d(\gamma(y_i),\gamma_j(y_i))+d(\gamma_j(y_i),\gamma_j(y_{i+1}))+d(\gamma_j(y_{i+1}),\gamma(y_{i+1}))$$
$$\Rightarrow d(\gamma(y_i),\gamma(y_{i+1}))<d(\gamma_j(y_i),\gamma_j(y_{i+1}))+2\epsilon$$ and then summing over all values of $i$ less than $N$. But this does not substantially change the argument, assuming $N$ really is the number of points in $Y$. The problem is that as $\epsilon$ decreases, $Y$ would also have to change in order to sustain the chain of inequalities the authors use, which would in turn mean $N$ changes, so quantities like $(N+1)\epsilon$ or $(2N-1)\epsilon$ are essentially indeterminate and not guaranteed to vanish. In essence, I don't get how the authors can reason by saying "since $\epsilon$ is arbitrary" when $Y$ is fixed, and $Y$ is determined by $\epsilon$.
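(For completeness, here is my own expansion of the summation step that produces the $(2N-2)\epsilon$, taking $N=m$, so that there are $m-1$ consecutive pairs in $Y$; this is my reconstruction, not the authors' text:)

```latex
\Sigma(Y)
  = \sum_{i=1}^{m-1} d(\gamma(y_i),\gamma(y_{i+1}))
  \le \sum_{i=1}^{m-1} \Bigl[\, d(\gamma_j(y_i),\gamma_j(y_{i+1})) + 2\epsilon \,\Bigr]
  = \Sigma_j(Y) + 2(m-1)\epsilon .
```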
I have not been able to find any other proof of this claim or any clarification about this proof's details. Any help clarifying this proof's details or providing another proof is appreciated.