I'm gonna go ahead and try to talk about this all in $\mathbb R^2$. Let's look at a point $P = (3, 5)$, just to be concrete. A typical tangent vector at $P$ is something like $$
v = \begin{pmatrix} 2 \\ 1 \end{pmatrix},
$$
which we tend to think of as an arrow pointing about ENE, with the arrow starting at $P$. Some authors instead say that a tangent vector at $P$ is a pair like $(P, v)$, with the rule that the "$P$" is "just along for the ride", so that addition is defined by a rule like
$$
(P, v) + (P, w) = (P, v+w)
$$
where that last addition is just ordinary addition of vectors in $\mathbb R^2$.
Presumably you can see that this set of pairs is in a 1-1 correspondence with $\mathbb R^2$, with $u \leftrightarrow (P, u)$. So it doesn't really matter which representation you choose.
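To make the "just along for the ride" rule concrete, here's a minimal sketch (names and representation my own) of tangent vectors as $(P, v)$ pairs:

```python
# A minimal sketch of tangent vectors at P as pairs (P, v),
# where the base point P is "just along for the ride".
P = (3.0, 5.0)

def add_tangent(pv, pw):
    # (P, v) + (P, w) = (P, v + w): ordinary vector addition in R^2,
    # allowed only when both vectors are based at the same point.
    (p, v), (q, w) = pv, pw
    assert p == q, "tangent vectors at different points cannot be added"
    return (p, (v[0] + w[0], v[1] + w[1]))

print(add_tangent((P, (2.0, 1.0)), (P, (1.0, -1.0))))
# -> ((3.0, 5.0), (3.0, 0.0))
```

The correspondence $u \leftrightarrow (P, u)$ is visible here: the first slot never changes, so all the structure lives in the second slot.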
As a practical matter, one use of a vector like $v$ (or $(P, v)$) is to describe 'directional derivatives': if we have a function
$$
f: \mathbb R^2 \to \mathbb R
$$
that's smooth, we can ask "at the point $P$, what's the directional derivative of $f$ in the direction $v$?" This gives us a function -- let's call it $Q$ just to have a name -- defined by
$$
Q(f) = \text{directional derivative of $f$ at $P$ in direction $v$}
$$
which is defined on the set of differentiable functions. The function $Q$ has some nice properties: $Q(cf) = c Q(f)$ (where $c$ is a constant), and $Q(f + g) = Q(f) + Q(g)$, so $Q$ is actually linear. Both of these can be proved with fairly basic calculus. Slightly more surprising is that
$$
Q(fg) = f(P) Q(g) + g(P) Q(f)
$$
which is proved with the product rule for derivatives.
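As a quick sanity check, here's a numerical sketch (my own, using a central-difference approximation and two sample functions) of the linearity and Leibniz properties for $P = (3, 5)$ and $v = (2, 1)$:

```python
# Numerical sketch: approximate the directional derivative Q at P = (3, 5)
# in direction v = (2, 1) by central differences, then check linearity
# and the Leibniz rule. The sample functions f, g are illustrative.
P = (3.0, 5.0)
v = (2.0, 1.0)

def Q(f, h=1e-6):
    # central-difference approximation of d/dt f(P + t v) at t = 0
    return (f(P[0] + h * v[0], P[1] + h * v[1])
            - f(P[0] - h * v[0], P[1] - h * v[1])) / (2 * h)

f = lambda x, y: x * x + y   # grad f at P is (6, 1), so Q(f) = 6*2 + 1*1 = 13
g = lambda x, y: x * y       # grad g at P is (5, 3), so Q(g) = 5*2 + 3*1 = 13

# Linearity: Q(f + g) = Q(f) + Q(g)
print(abs(Q(lambda x, y: f(x, y) + g(x, y)) - (Q(f) + Q(g))) < 1e-6)   # True

# Leibniz: Q(fg) = f(P) Q(g) + g(P) Q(f)
print(abs(Q(lambda x, y: f(x, y) * g(x, y))
          - (f(*P) * Q(g) + g(*P) * Q(f))) < 1e-4)                     # True
```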
Whenever we have a function with these properties, we call it a 'derivation.'
Here's the first small surprise: not only does every vector at $P$ (not just $v$) define such a derivation, but every derivation at $P$ actually turns out to come from some vector in this way. [This actually takes a bit of proof!]
So "tangent vectors at $P$" and "derivations at $P$" are in 1-1 correspondence.
What's nice about that is that when we talk about more general shapes, there might not be an obvious way to write down a tangent vector, but we can still think about derivations -- functions that have certain properties. And then we can say that the tangent space at $P$ is just the set of all "derivations at P".
Now what about curves? Every differentiable curve $\gamma$ with $\gamma(0) = P$ has a derivative $\gamma'(0)$ which is a "vector at $P$". But multiple curves can have the SAME derivative at $0$. For instance
$$
\alpha(t) = (t, 0)
$$
and
$$
\beta(t) = (t, t^2)
$$
both have $(1, 0)$ as their derivative at $t = 0$. (These two happen to pass through the origin rather than $P$; for curves with $\gamma(0) = P$, shift everything over.) So we make a rule that we're going to treat any two such curves as "equivalent". That divides the set of curves through $P$ into a bunch of piles: the ones whose derivative is $(1, 0)$; the ones whose derivative is $(-3, \pi)$; the ones whose derivative is $(-11, -11)$, and so on.
We call these piles "equivalence classes", and it now becomes clear that such equivalence classes are in 1-1 correspondence with vectors. So we can forget vectors and talk about equivalence classes of curves instead. Again, not terribly helpful in the plane, but really nice when you want to talk about some infinite-dimensional manifold that isn't yet 'embedded' in a nice space like Euclidean space.
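And the two curves above really do land in the same pile; here's a quick numerical check (my own sketch, again via central differences):

```python
# Sketch: alpha(t) = (t, 0) and beta(t) = (t, t^2) have the same
# derivative at t = 0, so they belong to the same equivalence class.
def derivative_at_zero(gamma, h=1e-6):
    # central-difference approximation of gamma'(0)
    (x1, y1), (x0, y0) = gamma(h), gamma(-h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

alpha = lambda t: (t, 0.0)
beta = lambda t: (t, t * t)

print(derivative_at_zero(alpha))  # approximately (1.0, 0.0)
print(derivative_at_zero(beta))   # approximately (1.0, 0.0)
```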
Summary: there are many different sets whose elements are in 1-1 correspondence with tangent vectors at $P$. Which one you decide to call "tangent vectors" depends on your approach, and after a differential geometry/topology course or two, you find yourself switching among them pretty liberally.