Obviously, an element of the dual can never be equal to an element of the original vector space. The easiest thing is of course to just write things out basis-free. Given a vector space $V$ and a metric tensor on it, i.e. a bilinear map $g:V\times V\to\Bbb{R}$ which is symmetric and non-degenerate, we can do the following:
- to each $v\in V$, we assign the covector $g(v,\cdot)$. This mapping $v\mapsto g(v,\cdot)$ from $V$ into $V^*$ is denoted $g^{\flat}:V\to V^*$; non-degeneracy of $g$ says precisely that $g^{\flat}$ is injective, and hence an isomorphism, since $\dim V=\dim V^*$ is finite. A quick example is given below.
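For concreteness (an illustrative example of my own, not from the lecture): take $V=\Bbb{R}^2$ with $g$ the standard dot product. Then
\begin{align}
g^{\flat}(v)=g(v,\cdot):w\mapsto v\cdot w,
\end{align}
and $g^{\flat}(v)=0$ forces $v\cdot v=0$, i.e. $v=0$ (plug in $w=v$), which is exactly the injectivity coming from non-degeneracy.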
Now, one way of making sense of the indices is that you can take a basis $\{e_1,\dots, e_n\}$ of $V$, and the dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ of $V^*$ (whose defining property is that for all $i,j\in\{1,\dots, n\}$, $\epsilon^i(e_j)=\delta^i_j$). Now, $v\in V$ can be written as a linear combination of basis vectors
\begin{align}
v=\sum_{i=1}^nv^ie_i,
\end{align}
for some unique numbers $v^1,\dots, v^n\in\Bbb{R}$ (actually it's easy to see from the definitions that $v^i=\epsilon^i(v)$, the value of the covector $\epsilon^i$ on the vector $v$). Next, $g^{\flat}(v)\in V^*$ is a covector, so we must be able to write it as a linear combination of the $\epsilon$'s, i.e.
\begin{align}
g^{\flat}(v)&=\sum_{i=1}^nc_i\epsilon^i,\tag{$*$}
\end{align}
for some unique $c_1,\dots, c_n\in\Bbb{R}$. In fact, you can convince yourself that
\begin{align}
c_j = [g^{\flat}(v)](e_j)=g(v,e_j)=g\left(\sum\limits_{i=1}^nv^ie_i,e_j\right)=\sum_{i=1}^nv^ig(e_i,e_j)\equiv g_{ij}v^i
\end{align}
(first equality follows by evaluating both sides of $(*)$ on the vector $e_j$, and using the property of the dual basis).
Or equivalently (multiplying by the inverse matrix $(g^{ij})$ and renaming indices), $v^i=g^{ij}c_j$. Finally, it is traditional to write these coefficients not as $c_j$, but as $v_j$ instead, which thus gives the equality $v^i=g^{ij}v_j$. So, $v^1,\dots, v^n\in\Bbb{R}$ are the coefficients when you write the vector $v$ as a linear combination of the basis $\{e_1,\dots, e_n\}$ of $V$, whereas $v_1,\dots, v_n\in\Bbb{R}$ are the coefficients when you write the covector $g^{\flat}(v)$ as a linear combination of the corresponding dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ of $V^*$.
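Here is a small worked example (my own numbers, just to illustrate the bookkeeping): take $\dim V=2$ and a basis with $g_{ij}=\operatorname{diag}(1,2)$, so $g^{ij}=\operatorname{diag}(1,\tfrac{1}{2})$. For $v=3e_1+5e_2$ (i.e. $v^1=3$, $v^2=5$),
\begin{align}
v_1=g_{i1}v^i=3,\qquad v_2=g_{i2}v^i=2\cdot 5=10,\qquad\text{so}\qquad g^{\flat}(v)=3\epsilon^1+10\epsilon^2,
\end{align}
and raising the indices back gives $g^{11}v_1=3=v^1$ and $g^{22}v_2=\tfrac{1}{2}\cdot 10=5=v^2$, as it should.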
Now, another way of saying this is to use the (I believe Penrose's) abstract index notation (e.g. as explained in Wald's GR book). Here, the $(0,2)$ tensor $g$ is written as $g_{\alpha\beta}$. The subscripts $\alpha,\beta$ do not indicate the components with respect to a basis, but simply that $g$ is an object which has two slots where you can feed it elements of $V$. An element of $V$ is written as $v^{\alpha}$ (instead of $v\in V$). In this notation, the symbol $g_{\alpha\beta}\,v^{\alpha}$ stands for $g(v,\cdot)$. The repeated index in the up-down position does not refer to a summation of components, but rather to a tensor contraction, so $v^{\alpha}\in V$, $g_{\alpha\beta}\in T^0_2(V)$, $g_{\alpha\beta}\,v^{\alpha}\in V^*$, etc. But then again, this doesn't seem to be what the lecturer is referring to.
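(For completeness, a few more sample entries of this abstract-index dictionary, in my own words: $\omega_{\alpha}$ stands for a covector $\omega\in V^*$, and
\begin{align}
v^{\alpha}\omega_{\alpha}\leftrightarrow \omega(v)\in\Bbb{R},\qquad g_{\alpha\beta}v^{\alpha}w^{\beta}\leftrightarrow g(v,w)\in\Bbb{R},
\end{align}
while $g_{\alpha\beta}v^{\alpha}$ is often abbreviated as $v_{\beta}$, which is again just the covector $g^{\flat}(v)$.)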
So, what is going on in that lecture is that one is literally fixing a basis $\{e_1,\dots, e_n\}$ of $V$, then considering the metric tensor components $g_{ij}=g(e_i,e_j)$ for $i,j\in\{1,\dots, n\}$ and defining a new set of basis elements of $V$ as $e^i=g^{ij}e_j$, so that $\{e^1,\dots, e^n\}$ is still a basis of $V$ in that notation. To me this is extremely confusing and unnatural. For the sake of avoiding dual spaces, everything is being squished down into the original vector space (it's like writing, drawing, painting all on a single piece of paper; very messy and cluttered).
In an abstract manner what has happened here is that using a basis $\{e_1,\dots, e_n\}$ of $V$, one can obviously consider the dual basis $\{\epsilon^1,\dots, \epsilon^n\}$ as I mentioned above. With this, we can construct a linear map $\psi:V\to V^*$ defined by $\psi(e_i)=\epsilon^i$ for all $i\in\{1,\dots, n\}$, and extending linearly (this $\psi$ is an isomorphism because it sends a basis to a basis). So, we now have two isomorphisms between $V$ and $V^*$, namely $g^{\flat}:V\to V^*$ (whose inverse is denoted $g^{\sharp}:V^*\to V$) and $\psi:V\to V^*$. What is happening in this lecture is that these are being composed to yield $g^{\sharp}\circ \psi:V\to V$. So, the value of this isomorphism on the vector $e_i$ is:
\begin{align}
(g^{\sharp}\circ\psi)(e_i)=g^{\sharp}(\epsilon^i)= g^{ij}e_j \equiv e^i
\end{align}
(The penultimate equality is because $g^{\flat}(e_j)=g(e_j,\cdot)=g_{ji}\epsilon^i$, so by applying $g^{\sharp}$ to both sides we get $e_j=g^{\sharp}(g_{ji}\epsilon^i)=g_{ji}g^{\sharp}(\epsilon^i)$, and hence, juggling the indices, we get $g^{\sharp}(\epsilon^i)=g^{ij}e_j$.) So, $g^{\sharp}\circ \psi$ is the isomorphism $V\to V$ which, for each $i\in\{1,\dots, n\}$, sends $e_i\in V$ to $e^i=g^{ij}e_j\in V$.
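Continuing the small numerical example from before ($g_{ij}=\operatorname{diag}(1,2)$, so $g^{ij}=\operatorname{diag}(1,\tfrac{1}{2})$): we have $\psi(e_2)=\epsilon^2$ and $g^{\flat}(e_2)=g_{2i}\epsilon^i=2\epsilon^2$, so $g^{\sharp}(\epsilon^2)=\tfrac{1}{2}e_2$, and therefore
\begin{align}
e^1=(g^{\sharp}\circ\psi)(e_1)=e_1,\qquad e^2=(g^{\sharp}\circ\psi)(e_2)=\tfrac{1}{2}e_2.
\end{align}
So $\{e^1,e^2\}=\{e_1,\tfrac{1}{2}e_2\}$ is indeed still a basis of $V$, but a different one from $\{e_1,e_2\}$.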
If you start out with an orthonormal basis $\{e_1,\dots, e_n\}$ of $V$, then indeed this procedure will spit out $e^i=e_i$ for all $i\in\{1,\dots, n\}$, the equality being that of elements of $V$ (assuming $g$ is positive-definite).
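To see why the positive-definite caveat matters (again an illustrative example of my own): if instead $g$ is the Minkowski metric and $\{e_0,\dots,e_3\}$ is an orthonormal basis in the signature $(-,+,+,+)$, then $g_{ij}=g^{ij}=\operatorname{diag}(-1,1,1,1)$, so
\begin{align}
e^0=g^{0j}e_j=-e_0,\qquad e^k=e_k\ \text{ for }k=1,2,3,
\end{align}
i.e. even for an orthonormal basis, $e^i$ and $e_i$ need not coincide once the metric has mixed signature.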