From this Wiki page: given a Goppa code $\Gamma(g, L)$ and a binary word $v=(v_0,...,v_{n-1})$, its syndrome is defined as $$s(x)=\sum_{i=0}^{n-1}\frac{v_i}{x-L_i} \mod g(x).$$ To do error correction, Patterson's algorithm goes as follows:
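To make the syndrome definition concrete, here is a tiny worked instance over $GF(2^4)$. All the specific choices below (the field, the Goppa polynomial $g(x)=x^2+x+1$, the support $L$, and the received word) are my own illustrative assumptions, not taken from the question; it uses the characteristic-2 fact that $(x+a)^{-1} \equiv g(a)^{-1}\,\frac{g(x)+g(a)}{x+a} \pmod{g(x)}$ to invert each linear factor:

```python
# Tiny worked syndrome s(x) = sum v_i/(x - L_i) mod g(x) over GF(2^4).
M = 0b10011          # reduction polynomial for GF(2^4): x^4 + x + 1

def gf_mul(a, b):
    """Multiply two GF(2^4) elements given as 4-bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= M
    return r

def gf_inv(a):
    """Invert via a^(2^4 - 2) = a^14 (order of GF(16)* is 15)."""
    r = 1
    for _ in range(14):
        r = gf_mul(r, a)
    return r

# Goppa polynomial g(x) = x^2 + x + 1; its two roots in GF(16) are 6 and 7,
# so the support L must avoid them (the definition requires g(L_i) != 0).
def g_eval(a):
    return gf_mul(a, a) ^ a ^ 1

L = [0, 1, 2, 3, 4, 5, 8, 9]
assert all(g_eval(a) != 0 for a in L)

# In char 2, (x + a)^(-1) mod g = g(a)^(-1) * (g(x) + g(a))/(x + a);
# for this quadratic g the exact division gives x + (a + 1).
def inv_linear(a):
    c = gf_inv(g_eval(a))
    return [gf_mul(c, a ^ 1), c]       # [constant term, x coefficient]

def mul_mod_g(p, q):
    """Multiply two degree-<2 polynomials over GF(16), reduce by x^2 = x + 1."""
    c0 = gf_mul(p[0], q[0])
    c1 = gf_mul(p[0], q[1]) ^ gf_mul(p[1], q[0])
    c2 = gf_mul(p[1], q[1])
    return [c0 ^ c2, c1 ^ c2]

for a in L:                            # sanity: (x + a) * (x + a)^(-1) = 1
    assert mul_mod_g([a, 1], inv_linear(a)) == [1, 0]

v = [1, 0, 1, 0, 0, 0, 0, 1]           # received word with three set bits
s = [0, 0]
for vi, Li in zip(v, L):
    if vi:
        t = inv_linear(Li)
        s[0] ^= t[0]
        s[1] ^= t[1]
assert s != [0, 0]                     # nonzero syndrome: v is not a codeword
```

Note that the sum only ranges over the nonzero positions of $v$, so the syndrome of a codeword is $0$ by the defining parity checks of $\Gamma(g,L)$.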
Calculate $$v(x)=\sqrt{s(x)^{-1}-x}\mod g(x)$$ (this assumes that $s(x)\ne 0$, which always holds unless $v$ already belongs to $\Gamma(g,L)$ and no correction is needed; the square root exists and is unique because $g$ is irreducible, so $\mathbb{F}_{2^m}[x]/(g)$ is a field of characteristic 2 in which squaring is a bijection).
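The square root in this step comes from the Frobenius map: in a finite field of characteristic 2 with $q$ elements, squaring is a bijection and $\sqrt{T}=T^{q/2}$. Patterson applies this in the field $\mathbb{F}_{2^m}[x]/(g(x))$, which has $2^{mt}$ elements for $t=\deg g$; the sketch below illustrates the same idea in the small field $GF(2^4)$, where $\sqrt{a}=a^{2^3}=a^8$:

```python
# Square roots via Frobenius in characteristic 2, illustrated in GF(2^4).
M = 0b10011                      # reduction polynomial x^4 + x + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= M
    return r

def gf_sqrt(a):
    # Squaring is a field automorphism in char 2, hence a bijection;
    # sqrt(a) = a^(2^3) = a^8 because (a^8)^2 = a^16 = a in GF(16).
    for _ in range(3):
        a = gf_mul(a, a)
    return a

for a in range(16):              # every element has a unique square root
    assert gf_mul(gf_sqrt(a), gf_sqrt(a)) == a
```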
Use the EEA to obtain polynomials $a(x)$ and $b(x)$ such that $$a(x)=b(x)v(x)\mod g(x),$$ stopping as soon as $\deg a\le\lfloor t/2\rfloor$, which also guarantees $\deg b\le\lfloor (t-1)/2\rfloor$, where $t=\deg g$ (without this stopping rule, $a$ and $b$ are not uniquely determined).
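A minimal sketch of this EEA step with the stopping rule made explicit. To keep it short, polynomials here have coefficients in $GF(2)$ and are stored as Python int bitmasks (bit $i$ = coefficient of $x^i$); real Patterson runs the same loop with coefficients in $GF(2^m)$, and the example values of $g$ and $v$ are my own:

```python
# Half-GCD-style EEA over GF(2)[x]: stop once deg(remainder) <= floor(deg g / 2).
def deg(p):
    return p.bit_length() - 1          # deg(0) = -1 by convention

def pmul(a, b):                        # carry-less product in GF(2)[x]
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):                     # polynomial long division over GF(2)
    q = 0
    while deg(a) >= deg(b):
        s = deg(a) - deg(b)
        q ^= 1 << s
        a ^= b << s
    return q, a

def eea_half(g, v):
    """Run the EEA on (g, v), keeping only the Bezout coefficient of v.
    The invariant r1 = b1*v (mod g) holds at every step, so the returned
    pair satisfies a(x) = b(x) v(x) mod g(x)."""
    r0, r1 = g, v
    b0, b1 = 0, 1
    while deg(r1) > deg(g) // 2:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        b0, b1 = b1, b0 ^ pmul(q, b1)
    return r1, b1

# Example: g(x) = x^4 + x + 1, v(x) = x^3 + x
a, b = eea_half(0b10011, 0b1010)
assert pdivmod(a ^ pmul(b, 0b1010), 0b10011)[1] == 0   # a = b*v (mod g)
assert deg(a) <= 2 and deg(b) <= 1                     # the degree bounds
```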
Calculate the polynomial $p(x)=a(x)^2+xb(x)^2$. Assuming that the original word is decodable, this coincides with the error locator polynomial $$\sigma(x)=\prod_{i\in B}(x-L_i) $$ where $B=\{i \text{ s.t. } e_i\ne 0\}$ and $e$ is the error vector, i.e. the difference between $v$ and the nearest codeword.
Assuming that the original word is decodable (that is, $|B|\le t$ where $t=\deg g$; for an irreducible binary Goppa code the minimum distance satisfies $d\ge 2t+1$, so up to $t$ errors can be corrected), we simply calculate the roots of $\sigma(x)$: if $L_i$ is a root, then an error has occurred in the $i$-th position.
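One purely characteristic-2 ingredient behind the shape of $\sigma(x)$: any polynomial over $GF(2)$ splits into its even-degree and odd-degree parts, and each part is a perfect square (times $x$ for the odd part), because $(\sum c_i x^i)^2=\sum c_i x^{2i}$. A quick self-check with $GF(2)[x]$ polynomials stored as bitmasks (the example $\sigma$ is arbitrary; over $GF(2^m)$ the same split works because squaring also squares each coefficient):

```python
# In GF(2)[x], squaring just interleaves zero bits, so every polynomial
# sigma decomposes as sigma(x) = A(x)^2 + x*B(x)^2 with A^2 the
# even-degree part and x*B^2 the odd-degree part.
def psq(p):
    """Square a GF(2)[x] polynomial: coefficient of x^i moves to x^(2i)."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (2 * i)
        p >>= 1
        i += 1
    return r

def split_even_odd(sigma):
    """Return (A, B) with sigma = A^2 + x*B^2 over GF(2)."""
    a = b = i = 0
    while sigma:
        bit = sigma & 1
        if i % 2 == 0:
            a |= bit << (i // 2)       # even-degree coefficient -> A
        else:
            b |= bit << (i // 2)       # odd-degree coefficient  -> B
        sigma >>= 1
        i += 1
    return a, b

sigma = 0b1011011                      # x^6 + x^4 + x^3 + x + 1, arbitrary
A, B = split_even_odd(sigma)
assert psq(A) ^ (psq(B) << 1) == sigma # sigma = A^2 + x*B^2
```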
I only have one question: how do we show that the polynomial $p(x)$ obtained in the third step corresponds exactly to $\sigma(x)$ as defined above?