
Let's say you have the standard basis $\beta = \{ (1,0), (0,1) \}$ for $\mathbb{R}^2$ and a linear transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$ that is diagonalizable with two distinct eigenvalues, so you have a basis of eigenvectors $\gamma = \{ w_1, w_2 \}$.

The eigenvectors are not guaranteed to be orthogonal, and if you apply Gram-Schmidt (G-S) to them, the results may no longer be eigenvectors. However, you can always write the eigenvectors in terms of the basis $\gamma$, and their coordinate vectors will be orthonormal.

To be concrete, I mean that $[w_1]_\gamma = (1, 0)_\gamma$, and likewise $[w_2]_\gamma = (0, 1)_\gamma$.
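To make the setup concrete, here is a minimal numpy sketch (my own illustration; the particular matrix and eigenvectors are assumptions chosen for the example) of a diagonalizable map whose eigenvectors are not orthogonal, and of what G-S does to them:

```python
# A minimal numpy sketch (illustrative values, not from the post):
# T is diagonalizable with distinct eigenvalues but non-orthogonal
# eigenvectors; Gram-Schmidt makes them orthonormal but destroys
# the eigenvector property.
import numpy as np

T = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # eigenvalues 1 and 2
w1 = np.array([1.0, 0.0])           # eigenvector for eigenvalue 1
w2 = np.array([1.0, 1.0])           # eigenvector for eigenvalue 2

print(w1 @ w2)                      # 1.0 -> not orthogonal

# Gram-Schmidt on {w1, w2}:
u1 = w1 / np.linalg.norm(w1)
u2 = w2 - (w2 @ u1) * u1
u2 /= np.linalg.norm(u2)

print(u1 @ u2)                      # 0.0 -> orthonormal
print(T @ u2)                       # (1.0, 2.0): not a multiple of u2,
                                    # so u2 is no longer an eigenvector
```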

So is orthogonality entirely relative to the basis you are using, so that any basis you choose can be made orthogonal simply by taking that basis as the reference?

This doesn't make complete sense to me, since we have the G-S process, and if this were the case it would seem to make the notion of orthogonality almost useless.

So for a finite-dimensional inner product space $(V, \langle \cdot , \cdot \rangle)$ over some field $\mathbb{F}$, is the notion of orthogonality ($\langle x , y \rangle = 0$ for $x, y \in V$) defined independently of any basis?

  • It is not entirely clear what you are asking. Any basis can be modified using GS to make it orthonormal. If you take a basis of eigenvectors and then apply GS, then the resulting vectors are no longer guaranteed to be eigenvectors (but they will be orthonormal). – copper.hat Jun 11 '23 at 22:04
  • Yes exactly. I'm asking this because of the self-adjointness/normality conditions for an orthonormal eigenbasis to exist, and I wanted to first get a better understanding of how orthogonality relates to basis. – user129393192 Jun 11 '23 at 22:07
  • Orthogonality is a property of a collection of vectors, not just a basis. But orthonormal bases are very convenient for computational purposes, numerical and otherwise. – copper.hat Jun 11 '23 at 22:14
  • And is orthogonality dependent on choice of basis? That is my question. For example, if two vectors are not orthogonal with respect to the standard basis, can they be orthogonal with respect to a different basis? – user129393192 Jun 11 '23 at 22:15
  • Orthogonality is related to (defined by) a specific inner product. It has nothing to do with a basis. So, for a fixed inner product, two vectors are orthogonal or not, regardless of any basis. – copper.hat Jun 11 '23 at 22:18
  • I see. The thing I misunderstand is that my professor says that a vector cannot exist without a basis. – user129393192 Jun 11 '23 at 22:19
  • I do not know what your professor means by that. A basis is a subset of a vector space. Perhaps your professor did not mean it literally, or they were constructing a vector space from a collection of vectors. – copper.hat Jun 11 '23 at 22:24
  • No, he meant it literally. He said a vector is always with respect to a basis and does not exist without one; I asked pretty persistently because I was confused. I believe I understand now: orthogonality of vectors relates to the "raw" underlying vector itself, with no basis specified. – user129393192 Jun 11 '23 at 22:29
  • You need to take that up with your professor, but in a general mathematical context, such a statement runs contrary to every linear algebra textbook that I have encountered. A basis always exists (assuming the axiom of choice), but (again in a mathematical context) a basis is a subset of a vector space, so the vector space must come first. – copper.hat Jun 11 '23 at 22:36

2 Answers


Arbitrary linear transformations do not preserve orthogonality. "Expressing a vector in the basis $\gamma$" is a linear transformation $\phi_\gamma : \mathbb R^2 \to \mathbb R^2$ with $\phi_\gamma(v) = [v]_\gamma$. Linear transformations which preserve the inner product (and hence orthogonality) are called orthogonal transformations, and are represented by orthogonal matrices relative to an orthonormal basis. $\phi_\gamma$ is not orthogonal because it takes a nonorthogonal basis ($\gamma$) to an orthonormal one (the standard basis).

As you say, orthogonality is defined via $\langle x,y\rangle = 0$. This has nothing to do with a basis. "Preserving the inner product" for linear $T : \mathbb R^2 \to \mathbb R^2$ means $$ \langle Tx, Ty\rangle = \langle x,y\rangle $$ for all $x,y\in\mathbb R^2$, so this is the formal definition for "$T$ is orthogonal". This also has nothing to do with a basis. You will find that $\phi_\gamma$ does not satisfy this condition for the standard inner product.
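To see this concretely, here is a short numpy check (my own sketch; the basis $\gamma = \{(1,0), (1,1)\}$ is an assumption chosen for illustration) that such a coordinate map fails this condition under the standard inner product:

```python
# Sketch: the coordinate map phi_gamma(v) = B^{-1} v for a
# non-orthonormal basis gamma does not preserve the standard
# inner product (illustrative basis, not from the answer).
import numpy as np

B = np.array([[1.0, 1.0],           # columns are the basis gamma:
              [0.0, 1.0]])          # w1 = (1,0), w2 = (1,1)
phi = np.linalg.inv(B)              # phi_gamma as a matrix

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

print(x @ y)                        # 1.0
print((phi @ x) @ (phi @ y))        # 0.0 -> <phi x, phi y> != <x, y>
```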

Just because nonorthogonal transformations exist does not mean that orthogonality is useless.

  • Sorry, I did not get your definition for "T is orthogonal". Could you please reiterate? In class, we only stated that for an inner product space $(V, \langle \cdot, \cdot \rangle)$, two vectors are orthogonal if $\langle u, v \rangle = 0$, and said nothing of transformations, so I am not totally clear on your explanation. – user129393192 Jun 11 '23 at 21:54
  • First, be sure to distinguish the orthogonality of a pair of vectors, given by a specific result of their inner product and totally independent of basis, from the orthogonality of a linear transformation matrix, such as that of a change of basis, which requires its transpose to equal its inverse in order to preserve orthogonality. – bonif Jun 12 '23 at 08:36
  • If you mean conjugate transpose, would that not be the self-adjoint property @bonif ? – user129393192 Jun 12 '23 at 19:09
  • @bonif A linear transformation $T$ is called orthogonal if $$\langle Tx, Ty\rangle = \langle x,y\rangle$$ for all vectors $x,y$. In other words, it preserves the inner product. – Nicholas Todoroff Jun 12 '23 at 20:30
  • @NicholasTodoroff Right, I don't think I implied otherwise. It appeared to me the OP had difficulty separating a matrix property from the product operation that renders two vectors orthogonal. – bonif Jun 13 '23 at 10:02
  • @user129393192 No. A square matrix $A$ is orthogonal iff $A^T=A^{-1}$ – bonif Jun 13 '23 at 10:12
  • @bonif I'm sorry, I meant user129393192 – Nicholas Todoroff Jun 13 '23 at 16:22

"The eigenvectors are not guaranteed to be orthogonal, and if you use G-S, they may no longer be eigenvectors. However, you can always write the eigenvectors in terms of the basis $\gamma$ and they will be orthonormal."

You are confusing a vector in $V = \mathbb R^2$ with its coordinates (which also live in $W = \mathbb R^2$)! Consider a general vector space $V$ with an inner product, which is a map $V \times V \to \mathbb{R}$. This map is defined for $V$ and not for the space $W = \mathbb R^{\dim V}$ obtained by fixing a basis. You can only take the dot product between vectors in the vector space $V$. That is, the dot product is basis-independent!

What you are doing is taking two basis vectors $w_1, w_2$, looking at their coordinates in the basis $\gamma$, and taking the dot product of those coordinates as if they were points of the vector space $V$. But the coordinates live in the space $W$, which is not equipped with a dot product.


If you want to equip $\mathbb R^{\dim V}$ with a dot product, then the only natural way to do so is to define a map $\phi_\gamma : V \to \mathbb R^{\dim V}$ which maps $v$ to its coordinates in the basis $\gamma$, and define the inner product on $\mathbb R^{\dim V}$ by

$$ \langle \phi_\gamma(v), \phi_\gamma(v')\rangle_{\mathbb R^{\dim V}} := \langle v,v' \rangle_{V}$$

Doing so in your example, you will see that the dot product of $[w_1]_\gamma = (1,0)$ and $[w_2]_\gamma = (0,1)$ is not the componentwise value $1 \times 0 + 0 \times 1 = 0$ but is, by definition, the dot product of the vectors $w_1, w_2$. This is because the dot product is by definition

$$ \langle [w_1]_\gamma, [w_2]_\gamma \rangle := \langle w_1, w_2 \rangle $$
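Equivalently (a standard reformulation, not spelled out in the original answer): in coordinates, this pulled-back inner product is computed by the Gram matrix $G$ of the basis,

$$ \langle [v]_\gamma, [v']_\gamma \rangle := [v]_\gamma^T \, G \, [v']_\gamma, \qquad G_{ij} = \langle w_i, w_j \rangle_V, $$

and it reduces to the componentwise dot product precisely when $\gamma$ is orthonormal, i.e. when $G = I$.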


In order to avoid confusion it is probably best to use a different notation for coordinates. You might want to write the coordinates of a vector as a column vector.


For a specific example, suppose that $w_1 = (0,1)$ and $w_2 = (1,1)$ and $V = \mathbb R^2$ with the usual dot product. In the basis $\{w_1,w_2\}$ the basis vectors have the coordinates $(1,0)^T$ and $(0,1)^T$, and a point with coordinates $(x,y)^T$ corresponds to the point in $V$ given by $ xw_1 + yw_2 = (y,y+x)$. It follows that the dot product on the coordinate space is defined by \begin{align*} \langle (x,y)^T,(x',y')^T \rangle &= \langle (y,y+x), (y',y'+x') \rangle \\ &= yy' + (y+x)(y'+x'). \end{align*} This means that $\langle (1,0)^T,(0,1)^T \rangle \neq 1 \times 0 + 0 \times 1$. Rather $$ \langle (1,0)^T,(0,1)^T \rangle = 0 \times 1 + (0+1)(1+0) = 1. $$
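As a quick sanity check, here is a numpy sketch (my own, not part of the answer) that reproduces this value via the Gram matrix $B^T B$, where the columns of $B$ are $w_1$ and $w_2$:

```python
# Verify the worked example: pull the standard dot product on R^2
# back through the basis {w1, w2} via the Gram matrix B^T B.
import numpy as np

B = np.array([[0.0, 1.0],           # columns: w1 = (0,1), w2 = (1,1)
              [1.0, 1.0]])
G = B.T @ B                         # G[i, j] = <w_i, w_j>

e1 = np.array([1.0, 0.0])           # coordinates of w1 in {w1, w2}
e2 = np.array([0.0, 1.0])           # coordinates of w2

print(e1 @ G @ e2)                  # 1.0, matching <w1, w2> = 1
```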

  • But does that not rely on your definition of the inner product in the coordinate space? From my understanding, when you say basis independent, you mean that you treat the vectors themselves, not with respect to any coordinate system, when you consider orthogonality. – user129393192 Jun 12 '23 at 19:12
  • @user129393192 The way I define the dot product on the coordinate space is: "The dot product of the coordinates of $v$ and the coordinates of $w$ is defined to be the dot product of $v$ and $w$ in $V$." This is by definition basis-independent. However, it is NOT necessary to consider a basis to define a dot product. I am simply addressing the mistake you made by taking the dot product of the coordinates without defining an appropriate dot product on the coordinate space. – Digitallis Jun 12 '23 at 19:26
  • @user129393192 don't hesitate to ask for clarifications if needed ;) – Digitallis Jun 12 '23 at 19:58