In this excellent post, the following can be found:
... we're used to considering vectors as column vectors, and dual vectors as row vectors. So, when we write something like $$u^\top A v,$$ our notation suggests that $u^\top \in T^1(V)$ is a dual vector and that $v \in T_1(V)$ is a vector. This means that the bilinear map $V \times V^* \to \mathbb{R}$ given by $$(v, u^\top) \mapsto u^\top A v$$ is a type $(1,1)$-tensor.
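As a concrete sanity check (a minimal NumPy sketch with made-up values, not taken from the quoted post), the expression $u^\top A v$ is a single number, and the map $(v, u^\top) \mapsto u^\top A v$ is linear in each argument separately, which is what "bilinear" means here:

```python
import numpy as np

# Hypothetical small example: V = R^3, A a fixed 3x3 matrix.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 1.0])
w = np.array([1.0, 1.0, 0.0])

# The map (v, u^T) -> u^T A v produces a scalar.
B = lambda u, v: u @ A @ v

# Bilinearity: linear in each slot while the other is held fixed.
assert np.isclose(B(u, v + 2 * w), B(u, v) + 2 * B(u, w))
assert np.isclose(B(u + 2 * w, v), B(u, v) + 2 * B(w, v))
```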
Elements of $V^*$, or covectors, are linear functions (functionals). As such, I can picture a covector as a matrix, because a matrix acts as a transformation on vectors (elements of $V$). So it would have been less surprising if the writer had claimed that the matrix $A$ is an element of $V^*$.
Instead, it is the row vector $u^\top$ that is called a dual vector. Why $u^\top$ and not $A$?
Further (and this may just be a notation issue), I thought that $T^0_1(V)\equiv V^*$ for finite-dimensional vector spaces, rather than $T^1(V)$.
Updated question, after the answer and comments posted at the time of writing:
Can I interpret this as follows: since an element of $V^*$ is a function
$v=\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}\in \underbrace{\quad V\quad}_{n\text{-dim. space}} \overbrace{{\Large{\longrightarrow}}}^{\Large\color{red}{V^*}} \underbrace{\quad \mathbb R\quad}_{1\text{-dim.}}$
then, from the point of view of matching dimensions, it makes sense to picture it as a row vector in the dot product
$\underbrace{\color{red}{\begin{bmatrix}x^*_1,x^*_2,\cdots,x^*_n \end{bmatrix}}}_{[1\times n]}\cdot\underbrace{\begin{bmatrix}x_1\\x_2\\\vdots\\x_n\end{bmatrix}}_{[n\times 1]}=[1\times 1]\,?$
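This dimension-count picture can be checked numerically (a NumPy sketch with made-up entries, purely illustrative): a $1\times n$ row applied to an $n\times 1$ column yields a $1\times 1$ result, i.e. a single real number.

```python
import numpy as np

# Hypothetical covector (row vector) and vector (column vector) in R^4.
u_T = np.array([[1.0, -2.0, 0.5, 3.0]])       # shape (1, 4): plays the role of an element of V*
v = np.array([[2.0], [1.0], [4.0], [0.0]])    # shape (4, 1): an element of V

result = u_T @ v        # (1, 4) @ (4, 1) -> (1, 1)
print(result.shape)     # (1, 1): a single scalar, as the dimension matching suggests
```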
So what is the role of $A$? And, again, can I get some insight into the $T$ superscript/subscript notation?