Algebra and Projections in Rn, Cross Product in R3

Abstract:
This series is the direct continuation of the series on the n-dimensional Euclidean space. Here we will review some linear algebra concepts that help to better understand the n-dimensional Euclidean space, examine the projection of one vector onto another, prove the Pythagorean theorem, and conclude with a review of the cross product in \mathbb{R}^3 and its relationship with the other products of the 3-dimensional Euclidean space.

INDEX
Linear Independence, Orthogonality, and Projections
The Pythagorean Theorem and Projection onto a Subspace
The Dot and Cross Product in \mathbb{R}^3


Linear Independence, Orthogonality, and Projections

Linear combination and linear independence

A nonzero vector \vec{z} can be written as a linear combination of two other nonzero vectors \vec{x} and \vec{y} if there exists a pair of real numbers \alpha and \beta, not both zero, such that:

\vec{z} = \alpha \vec{x} + \beta\vec{y}

That is, the vector \vec{z} can be constructed as a weighted sum of the vectors \vec{x} and \vec{y}.

In a related sense, the vectors \vec{x} and \vec{y} are said to be linearly independent if

(\alpha \vec{x} + \beta\vec{y} = \vec{0} ) \longleftrightarrow (\alpha=0 \wedge \beta=0 )

Linear independence between the vectors \vec{x} and \vec{y} tells us that \vec{y} cannot be obtained as a (nonzero) scalar multiple of \vec{x} and vice versa.

The concept of linear independence we have just reviewed can be extended to larger sets of vectors. The set of nonzero vectors \{\vec{x}_1, \cdots, \vec{x}_n\} is said to be linearly independent when

\displaystyle \left[\left(\sum_{i=1}^n \alpha_i \vec{x}_i \right) = \vec{0} \right] \longleftrightarrow \left[\bigwedge_{i=1}^n (\alpha_i = 0) \right]
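As a quick illustration, here is a minimal Python sketch for the two-vector case in \mathbb{R}^2. It uses the determinant test, which in \mathbb{R}^2 is a standard equivalent of the definition above (an assumption of the sketch, not something derived in this text):

```python
# Sketch: in R^2, vectors x and y are linearly independent exactly when
# the determinant of the 2x2 matrix with columns x and y is nonzero,
# i.e. neither vector is a scalar multiple of the other.

def linearly_independent_2d(x, y, tol=1e-12):
    """True when alpha*x + beta*y = 0 forces alpha = beta = 0."""
    det = x[0] * y[1] - x[1] * y[0]
    return abs(det) > tol

print(linearly_independent_2d((1.0, 0.0), (1.0, 1.0)))  # True
print(linearly_independent_2d((1.0, 2.0), (2.0, 4.0)))  # False: (2,4) = 2(1,2)
```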

The angle formed by two vectors and orthogonality

If we recall the Cauchy-Schwarz inequality, it tells us that (\forall \vec{x},\vec{y}\in\mathbb{R}^n)(|\vec{x}\cdot\vec{y}| \leq \|\vec{x}\| \|\vec{y}\|). Keeping this in mind, it is easy to verify that for any pair of vectors \vec{x},\vec{y}\in\mathbb{R}^n\setminus\{\vec{0}\} the following relation holds:

\displaystyle -1 \leq \frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\|\|\vec{y}\|}\leq 1

We can now intuit a relationship between the dot product and the angle formed by the vectors \vec{x} and \vec{y}, since the two of them lie in a plane isometric to \mathbb{R}^2. Therefore, without loss of generality, we can regard them as elements of \mathbb{R}^2 forming angles \theta_x and \theta_y with the \hat{x} axis, respectively, so that the vectors are written in polar form as:

\begin{array}{rl} \vec{x} &= \|\vec{x}\|(\cos(\theta_x) , \sin(\theta_x)) \\ \\ \vec{y} &= \|\vec{y}\|(\cos(\theta_y) , \sin(\theta_y)) \end{array}

Thus we can assume (again without loss of generality) that \theta_x \lt \theta_y, and then calculate the dot product \vec{x}\cdot\vec{y}. Doing so, we obtain the following result:

\begin{array}{rl}\vec{x}\cdot \vec{y} &= \|\vec{x}\| \|\vec{y}\| (\cos(\theta_x)\cos(\theta_y) + \sin(\theta_x)\sin(\theta_y)) \\ \\ &= \|\vec{x}\| \|\vec{y}\| \cos(\theta_y-\theta_x) \end{array}

Now, by taking the difference between the greater and smaller angular positions, we obtain the angle between the vectors, \angle(\vec{x},\vec{y})=\theta_y - \theta_x. And with this we can now write:

\displaystyle \cos\left(\angle(\vec{x},\vec{y}) \right) = \frac{\vec{x} \cdot \vec{y}}{\|\vec{x}\|\|\vec{y}\|}

Here we must emphasize that \angle(\vec{x},\vec{y})\in [0, \pi]

From this we can connect the Cauchy-Schwarz inequality with the geometry of angles, and it also gives us a rigorous notion of orthogonality: two vectors are said to be orthogonal when they form an angle of \pi/2 radians between them, in the sense explained in the previous paragraph. This is equivalent to saying that \cos\left(\angle(\vec{x},\vec{y})\right) = 0, which in turn is equivalent to saying that \vec{x}\cdot\vec{y} = 0.
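The relation \cos(\angle(\vec{x},\vec{y})) = (\vec{x}\cdot\vec{y})/(\|\vec{x}\|\|\vec{y}\|) is easy to compute directly. A small self-contained Python sketch (the clamp against floating-point rounding is a practical addition, not part of the mathematics):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle in [0, pi] between nonzero vectors x and y."""
    c = dot(x, y) / (norm(x) * norm(y))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

# Orthogonality: zero dot product, angle of pi/2.
print(dot((1, 0, 0), (0, 3, 0)))    # 0
print(angle((1, 0, 0), (0, 3, 0)))  # 1.5707963... = pi/2
```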

If two nonzero vectors are orthogonal, then they are linearly independent

This is a somewhat intuitive property of vectors in \mathbb{R}^n whose formal proof is not so direct, and it is a property that sometimes causes confusion: the orthogonality of two vectors implies their linear independence, but the linear independence of two vectors does not imply their orthogonality. For the latter, a simple counterexample suffices:

If we take the vectors \vec{A}=(1,0) and \vec{B}=(1,1), which are clearly not orthogonal because \vec{A}\cdot\vec{B}=1, we see that if we do

\alpha\vec{A} + \beta\vec{B} = \vec{0}

Then we have

\begin{array}{rl} \alpha + \beta &= 0 \\ \beta &= 0 \end{array}

and therefore: \alpha = 0 \wedge \beta=0. And with this we conclude that:

\alpha\vec{A} + \beta\vec{B} = \vec{0} \longleftrightarrow \alpha = 0 \wedge \beta=0

Which is equivalent to saying that \vec{A} and \vec{B} are linearly independent. This makes it very explicit that it is not true that linear independence implies orthogonality. However, orthogonality does imply linear independence, and this is what I will formally demonstrate below. For this, let us consider the following set of premises:

\mathcal{H}= \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0, \alpha\vec{x}+\beta\vec{y} = \vec{0}\}

From this we can produce the following reasoning:

\begin{array}{rll} (1) &\mathcal{H}\vdash \vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\} &{;\;Assumption}\\ \\ (2) &\mathcal{H}\vdash \vec{x}\cdot\vec{y}=0 &{;\;Assumption} \\ \\ (3) &\mathcal{H}\vdash \alpha\vec{x} + \beta\vec{y} = \vec{0} &{;\;Assumption} \\ \\ (4) &\mathcal{H}\vdash (\alpha\vec{x} + \beta\vec{y})\cdot\vec{x} = \alpha\|\vec{x}\|^2 + \beta(\vec{x}\cdot\vec{y}) &{;\; Bilinearity} \\ \\ (5) &\mathcal{H}\vdash \alpha\|\vec{x}\|^2 = 0 & {;\; From(2,3,4)} \\ \\ (6) &\mathcal{H}\vdash \alpha = 0 & {;\; From(1,5)} \\ \\ (7) &\mathcal{H}\vdash (\alpha\vec{x} + \beta\vec{y})\cdot\vec{y} = \alpha(\vec{x}\cdot\vec{y}) + \beta\|\vec{y}\|^2 & {;\;Bilinearity} \\ \\ (8) &\mathcal{H}\vdash \beta\|\vec{y}\|^2 = 0 &{;\;From(2,3,7)} \\ \\ (9) &\mathcal{H}\vdash \beta = 0 &{;\;From(1,8)} \\ \\ (10) &\mathcal{H}\vdash \alpha= 0 \wedge \beta = 0 &{;\;\wedge-int(6,9)} \end{array}

With this we conclude that

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0, \alpha\vec{x}+\beta\vec{y} = \vec{0}\} \vdash \alpha= 0 \wedge \beta = 0

Finally, by applying the deduction theorem to this last expression we have:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0\} \vdash (\alpha\vec{x}+\beta\vec{y} = \vec{0}) \rightarrow (\alpha= 0 \wedge \beta = 0)

The proof that yields the arrow in the opposite direction is trivial.

That is: if \vec{x} and \vec{y} are nonzero and orthogonal vectors, then they are linearly independent.

The projection of one vector onto another

Suppose we have two nonzero vectors \vec{x} and \vec{y} that form between them an angle \angle(\vec{x},\vec{y}) and we ask ourselves “To what extent is the vector \vec{x} found along the vector \vec{y}?” or “What is the size of the shadow of the vector \vec{x} when projected onto the direction of the vector \vec{y}?”. This question can be resolved through trigonometry, and with this we define the projection of a vector \vec{x} onto another \vec{y}, Proy_{\vec{y}}(\vec{x}), through the expression:

Proy_{\vec{y}}(\vec{x}) = \| \vec{x}\| \cos(\angle(\vec{x},\vec{y})) \hat{y}

If we combine this with what was seen in previous paragraphs, we can write:

\displaystyle Proy_{\vec{y}}(\vec{x}) = {\| \vec{x}\|} \left(\frac{\vec{x}\cdot\vec{y}}{{\|\vec{x}\|} \|\vec{y}\|}\right)\color{red}{\hat{y}} = \left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|} \right)\color{red}{\frac{\vec{y}}{\|\vec{y}\|}} = \left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|^2}\right)\vec{y} = \left(\frac{\vec{x}\cdot\vec{y}}{\vec{y}\cdot\vec{y}}\right)\vec{y}

since, let us recall

\displaystyle \cos(\angle(\vec{x},\vec{y})) = \frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\| \|\vec{y}\|}
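The last form of the projection formula, Proy_{\vec{y}}(\vec{x}) = \left((\vec{x}\cdot\vec{y})/(\vec{y}\cdot\vec{y})\right)\vec{y}, translates directly into code; a minimal Python sketch:

```python
def proy(y, x):
    """Proy_y(x) = ((x . y) / (y . y)) y — the projection of x onto y."""
    num = sum(a * b for a, b in zip(x, y))
    den = sum(b * b for b in y)
    return tuple((num / den) * b for b in y)

# The "shadow" of (3, 4) on the direction of (1, 0) is (3, 0):
print(proy((1.0, 0.0), (3.0, 4.0)))  # (3.0, 0.0)
```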

Projections are important because they allow us to express vectors in terms of any basis as the sum of their projections:

\vec{x} = \displaystyle \sum_{i=1}^n \alpha_i \hat{u}_i

Where \{\hat{u}_i\}_{i=1,\cdots, n} is an orthonormal basis of \mathbb{R}^n and the coefficients \alpha_i = \vec{x}\cdot\hat{u}_i are precisely the projections onto each element of the basis, which constitute the coordinates of \vec{x} with respect to the basis \{\hat{u}_i\}_{i=1,\cdots, n} of \mathbb{R}^n. (For a basis that is merely linearly independent, the coefficients are not given by these simple dot products; it is orthonormality that makes them decouple.)
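This decomposition can be checked numerically. The sketch below uses a rotated orthonormal basis of \mathbb{R}^2 (the specific basis and vector are illustrative choices, not taken from the text):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# A rotated orthonormal basis of R^2, chosen for illustration:
t = math.pi / 6
u1 = (math.cos(t), math.sin(t))
u2 = (-math.sin(t), math.cos(t))

x = (2.0, 1.0)
alphas = [dot(x, u) for u in (u1, u2)]  # coordinates of x in the basis
rebuilt = tuple(sum(a * u[i] for a, u in zip(alphas, (u1, u2)))
                for i in range(2))
print(rebuilt)  # ~ (2.0, 1.0): the projections reassemble x
```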


The Pythagorean Theorem and Projection onto a Subspace

The Pythagorean theorem is a result well known to all, with countless proofs. One possible proof of this theorem emerges precisely from the topics we have developed for the Euclidean space, with the added value of being valid for any number of dimensions.

Proving the Pythagorean Theorem

If we have a right triangle with legs a and b, and hypotenuse c, the Pythagorean theorem tells us that a^2+b^2=c^2. With this understood, we can represent each leg through a pair of orthogonal vectors \vec{x} and \vec{y} and write the Pythagorean theorem as follows:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \leftrightarrow (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2)

Where the expression \vec{x}\bot\vec{y} indicates that both vectors are orthogonal, that is: nonzero and such that \vec{x}\cdot\vec{y}=0. In this way, a biconditional relationship is established between orthogonality and the sum of the squared magnitudes of two vectors.

This vector form of representing the Pythagorean theorem can be proved through the following two reasonings:

First, in the forward direction:

\begin{array}{rll} (1) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \vec{x}\bot\vec{y} & {;\;Assumption} \\ \\ (2) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \vec{x}\cdot\vec{y}= 0 & {;\;From(1)} \\ \\ (3) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \|\vec{x} + \vec{y}\|^2 = (\vec{x} + \vec{y})\cdot(\vec{x} + \vec{y}) = \|\vec{x}\|^2 + 2(\vec{x}\cdot\vec{y}) + \|\vec{y}\|^2 & \\ &;\; Property\;of\;the\;Euclidean\;norm\;and\;the\;dot\;product & \\ \\ (4) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2 & {;\;From(2,3)} \\ \\ (5) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \rightarrow ( \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2) & {;\;DT(4)} \end{array}

And now in the reverse direction:

\begin{array}{rll} (1) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2 & {;\;Assumption} \\ \\ (2) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 +2(\vec{x}\cdot\vec{y}) + \|\vec{y}\|^2 & \\ &;\; Property\;of\;the\;Euclidean\;norm\;and\;the\;dot\;product &\\ \\ (3) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \vec{x}\cdot\vec{y}=0 & {;\;From(1,2)} \\ \\ (4) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \vec{x}\bot\vec{y} & {;\;From(3)} \\ \\ (5) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2) \rightarrow \vec{x}\bot\vec{y} & {;\;DT(4)} \end{array}

And finally, by combining both reasonings we obtain what we wanted to demonstrate:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \leftrightarrow (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2)
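Since the theorem holds in any number of dimensions, a quick numerical check in \mathbb{R}^4 is instructive (the sample vectors are a hypothetical orthogonal pair chosen for illustration):

```python
def norm_sq(v):
    """Squared Euclidean norm ||v||^2 = v . v."""
    return sum(a * a for a in v)

# A hypothetical orthogonal pair in R^4: x . y = 2 - 2 + 0 + 0 = 0.
x = (1.0, 2.0, 2.0, 0.0)
y = (2.0, -1.0, 0.0, 3.0)
assert sum(a * b for a, b in zip(x, y)) == 0.0  # orthogonal

s = tuple(a + b for a, b in zip(x, y))
print(norm_sq(s))               # 23.0
print(norm_sq(x) + norm_sq(y))  # 23.0 as well
```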

The Projection of a Vector onto a Subspace of \mathbb{R}^n

Let us consider a subspace H of \mathbb{R}^n spanned by an orthonormal basis \{\hat{v}_1, \cdots, \hat{v}_k\}. If we take a vector \vec{x}\in\mathbb{R}^n\setminus\{\vec{0}\}, the projection of the vector \vec{x} onto the space H is defined by the expression:

Proy_{H}(\vec{x}) = \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j

That a set is orthonormal means that all its elements are orthogonal to each other and each one has norm equal to one.

This is, so to speak, the shadow cast by a vector onto each of the components of the subspace H of \mathbb{R}^n
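The defining sum \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j)\hat{v}_j is straightforward to compute; a minimal Python sketch, using the xy-plane inside \mathbb{R}^3 as an illustrative subspace:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proy_H(x, basis):
    """Sum of (x . v_j) v_j over an orthonormal basis of H."""
    p = [0.0] * len(x)
    for v in basis:
        c = dot(x, v)
        for i, vi in enumerate(v):
            p[i] += c * vi
    return tuple(p)

# H = the xy-plane inside R^3, with its canonical orthonormal basis:
H = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(proy_H((3.0, 4.0, 5.0), H))  # (3.0, 4.0, 0.0): the shadow on the plane
```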

Distance between a Point or Vector of \mathbb{R}^n and a Subspace of \mathbb{R}^n

From the projection of a vector \vec{x}\in\mathbb{R}^n\setminus\{\vec{0}\} onto a subspace H of \mathbb{R}^n, we can construct a vector of the form

\vec{x} - Proy_{H}(\vec{x})

The vector formed in this way connects a point of the subspace H with the point of coordinates \vec{x}, and it meets the subspace H orthogonally. This is not difficult to prove: if we take any vector \vec{z}\in H and calculate the dot product (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z}, it suffices to see that the result is zero. Let us do the calculations to verify this:

If \vec{z}\in H, then it will be of the form

\vec{z}=\displaystyle \sum_{j=1}^k \beta_j\hat{v}_j

Where \{\hat{v}_j\}_{j=1}^k is an orthonormal basis of H and \beta_j \in\mathbb{R} are the coefficients of \vec{z} in H. Taking this into account, the calculation of the dot product (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z}, yields:

\begin{array}{rl} (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z} &= \left(\vec{x} - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \right) \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \vec{x} \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j \end{array}

But since \vec{x} is a vector of \mathbb{R}^n of which H is a subspace, it is possible to find a set of n-k vectors orthonormal to each other and also orthonormal to all the vectors of H, say \{\hat{v}_{k+1}, \cdots, \hat{v}_n\}, such that together with the basis of H they form a basis for \mathbb{R}^n and we can write

\vec{x} = \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j + \sum_{j=k+1}^n \alpha_j \hat{v}_j

So that the development above continues as follows:

\begin{array}{rl} (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z} &= \displaystyle \left( \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j + \sum_{j=k+1}^n \alpha_j \hat{v}_j\right) \cdot \sum_{j=1}^k \beta_j\hat{v}_j - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j + \underbrace{\color{red}{\sum_{j=k+1}^n \alpha_j \hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j}}_{(*)} - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= 0 \end{array}

(*) This sum is zero because \{\hat{v}_j\}_{j=1}^n is an orthonormal basis of \mathbb{R}^n, so \hat{v}_i \cdot \hat{v}_j = 0 whenever i \neq j.

From this we can show that the distance between the subspace H and the vector \vec{x} is given by:

\|\vec{x} - Proy_{H}(\vec{x})\|

Proof

To prove this result, it will be shown that for all \vec{z}\in H it always holds that \|\vec{x} - Proy_{H}(\vec{x})\| \leq \|\vec{x} - \vec{z}\|. For this we will use the Pythagorean theorem in the following way:

\begin{array}{rl} \|\vec{x} - \vec{z}\|^2 &= \| \left(\vec{x} -Proy_{H}(\vec{x}) \right) + \left(Proy_{H}(\vec{x}) - \vec{z}\right)\|^2 \\ \\ &= \| \vec{x} -Proy_{H}(\vec{x}) \|^2 + \|Proy_{H}(\vec{x}) - \vec{z}\|^2 \\ \\ \end{array}

This last equality holds because the vector Proy_{H}(\vec{x}) - \vec{z} lies in H, while we showed above that \vec{x} -Proy_{H}(\vec{x}) is orthogonal to every vector of H. And therefore:

\|\vec{x} - Proy_{H}(\vec{x})\|^2 \leq \|\vec{x} - \vec{z}\|^2

which is what we wanted to demonstrate.

With this result in hand, we can say that the distance between a point \vec{x}\in\mathbb{R}^n and a subspace H of \mathbb{R}^n generated by the orthonormal vectors \{\hat{v}_1, \cdots, \hat{v}_k\} is given by:

dist(\vec{x},H) =\left\|\vec{x} - Proy_{H}(\vec{x})\right\|= \left\|\vec{x} - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j\right\|
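Putting the pieces together, dist(\vec{x},H) is one short computation away from the projection formula. A sketch, again using the xy-plane in \mathbb{R}^3 as the illustrative subspace:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def proy_H(x, basis):
    """Projection of x onto the span of an orthonormal basis."""
    p = [0.0] * len(x)
    for v in basis:
        c = dot(x, v)
        for i, vi in enumerate(v):
            p[i] += c * vi
    return p

def dist(x, basis):
    """dist(x, H) = ||x - Proy_H(x)||."""
    p = proy_H(x, basis)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, p)))

H = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # the xy-plane in R^3
print(dist((3.0, 4.0, 5.0), H))  # 5.0: the height of the point above the plane
```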


The Dot and Cross Product in \mathbb{R}^3

We will now shift our focus to vectors in \mathbb{R}^3. Here, in addition to the operations we have already reviewed in general for \mathbb{R}^n, it is also possible to define the cross product, which takes two vectors and produces another vector as a result. This product is exclusive to \mathbb{R}^3 (and possibly to \mathbb{R}^7, a case we will not analyze here). Generally, the vectors of the canonical basis of \mathbb{R}^3 are represented by the letters \hat{x}, \hat{y}, \hat{z} or as \hat{\imath}, \hat{\jmath}, \hat{k}; the preference for one or the other is a matter of taste.

\begin{array}{rl} \hat{\imath} = \hat{x}&=(1,0,0)\\ \hat{\jmath} =\hat{y}&=(0,1,0)\\ \hat{k} =\hat{z}&=(0,0,1)\\ \end{array}

Thus, if we have a vector of the form (a,b,c), it can be written algebraically as follows:

(a,b,c) = a\hat{x} + b\hat{y} + c\hat{z}

The Cross Product in \mathbb{R}^3

Let \vec{x}=(x_1,x_2,x_3) and \vec{y}=(y_1,y_2,y_3) be vectors in \mathbb{R}^3. The cross product of \vec{x} with \vec{y}, \vec{x}\times\vec{y} is defined by:

\begin{array}{rl} \vec{x}\times\vec{y} &= \left|\begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{array}\right| \\ \\ &=\hat{x}x_2y_3 + \hat{y}x_3y_1 + \hat{z} x_1y_2 - \left( \hat{z} x_2 y_1 + \hat{y} x_1 y_3 + \hat{x}x_3y_2\right) \\ \\ &=\hat{x}(x_2y_3 - x_3y_2) + \hat{y}(x_3y_1 - x_1y_3) + \hat{z}(x_1y_2 - x_2y_1) \end{array}
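The final componentwise form of the expansion carries over directly to code; a minimal Python sketch:

```python
def cross(x, y):
    """x × y componentwise, matching the determinant expansion above."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2,
            x3 * y1 - x1 * y3,
            x1 * y2 - x2 * y1)

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): x-hat × y-hat = z-hat
print(cross((0, 1, 0), (1, 0, 0)))  # (0, 0, -1): the product is anticommutative
```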

Lagrange’s Identity

For the case of vectors in \mathbb{R}^3 we can recognize three types of “products”: the dot product \vec{x}\cdot\vec{y}, the cross product \vec{x}\times\vec{y}, and the product of the norms \|\vec{x}\|\|\vec{y}\|. These three products are related to each other through Lagrange’s identity

\|\vec{x}\times\vec{y}\|^2 = \|\vec{x}\|^2\|\vec{y}\|^2- (\vec{x}\cdot\vec{y})^2
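The identity is easy to spot-check numerically. The sketch below uses arbitrary sample vectors (any pair works, by the identity):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cross(x, y):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

# Arbitrary sample vectors, chosen only for illustration:
x = (1.0, 2.0, 3.0)
y = (4.0, 5.0, 6.0)
c = cross(x, y)
lhs = dot(c, c)                               # ||x × y||^2
rhs = dot(x, x) * dot(y, y) - dot(x, y) ** 2  # ||x||^2 ||y||^2 - (x . y)^2
print(lhs, rhs)  # 54.0 54.0
```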

Proof of Lagrange’s Identity

Let \vec{x}=(x_1,x_2,x_3) and \vec{y}=(y_1,y_2,y_3) be vectors in \mathbb{R}^3, then we have:

\begin{array}{rl} \vec{x}\times\vec{y} &=(x_2y_3 - x_3y_2) \hat{x} + (x_3y_1 - x_1y_3)\hat{y} + (x_1y_2 - x_2y_1)\hat{z} \end{array}

So that:

\begin{array}{rl} \|\vec{x}\times\vec{y}\|^2 &=(x_2y_3 - x_3y_2)^2 + (x_3y_1 - x_1y_3)^2 + (x_1y_2 - x_2y_1)^2 \\ \\ &= \color{green}{x_2^2y_3^2 - 2x_2x_3y_3y_2 + x_3^2y_2^2} + \cdots\\ \\ &\cdots + \color{blue}{x_3^2y_1^2 - 2x_3x_1y_1y_3 + x_1^2y_3^2} + \cdots \\ \\ &\cdots + \color{red}{x_1^2y_2^2 - 2x_1x_2y_2y_1 + x_2^2y_1^2} \end{array}

On the other hand:

\begin{array}{rl} \|\vec{x}\|^2 \|\vec{y}\|^2 - (\vec{x}\cdot\vec{y})^2 &= (x_1^2 + x_2^2 + x_3^2)(y_1^2+y_2^2 + y_3^2) - (x_1y_1 + x_2y_2 + x_3 y_3)^2 \\ \\ \\ &= {x_1^2y_1^2} + \color{red}{x_1^2y_2^2} + \color{blue}{x_1^2y_3^2} + \cdots \\ \\ &\cdots + \color{red}{x_2^2y_1^2} + {x_2^2y_2^2} + \color{green}{x_2^2y_3^2} + \cdots \\ \\ &\cdots + \color{blue}{x_3^2y_1^2} + \color{green}{x_3^2y_2^2} + {x_3^2y_3^2} + \cdots \\ \\ &\cdots - \left[ {x_1^2y_1^2} + {x_2^2y_2^2} + {x_3^2y_3^2} + \right. \cdots \\ \\ &\cdots + 2\left(\color{red}{x_1x_2y_1y_2} + \color{blue}{x_1x_3y_1y_3} + \color{green}{x_2x_3y_2y_3} \right)\left.\right] \\ \\ \\ &= \color{red}{x_1^2y_2^2 - 2x_1x_2y_2y_1 + x_2^2y_1^2} + \cdots \\ \\ & \cdots + \color{blue}{x_1^2y_3^2 - 2x_1x_3y_3y_1 + x_3^2y_1^2} + \cdots \\ \\ & \cdots + \color{green}{x_2^2y_3^2 - 2x_2x_3y_3y_2 + x_3^2y_2^2} \end{array}

Finally, by comparing the colored expressions we obtain what we wanted to demonstrate.

The Cross Product and the Angle Between Vectors

Earlier we saw that there is a close relationship between the angle subtended by two vectors and the result of the dot product, given by the relation \vec{x}\cdot\vec{y} = \|\vec{x}\|\|\vec{y}\|\cos(\angle(\vec{x},\vec{y})). It turns out that something similar occurs with the cross product, given by the following relation:

\|\vec{x}\times\vec{y}\| = \|\vec{x}\|\|\vec{y}\| \sin(\angle(\vec{x},\vec{y}))

This expression is a direct consequence of Lagrange's identity demonstrated above, and the proof goes as follows:

\begin{array}{rl} \|\vec{x}\times\vec{y}\|^2 &= \|\vec{x}\|^2\|\vec{y}\|^2 - (\vec{x}\cdot\vec{y})^2 \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 - (\|\vec{x}\|\|\vec{y}\|\cos(\angle(\vec{x},\vec{y})))^2 \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 - \|\vec{x}\|^2\|\vec{y}\|^2\cos^2(\angle(\vec{x},\vec{y})) \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 (1 - \cos^2(\angle(\vec{x},\vec{y}))) \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 \sin^2(\angle(\vec{x},\vec{y})) \end{array}

Finally, taking square roots we arrive at:

\|\vec{x}\times\vec{y}\| = \|\vec{x}\|\|\vec{y}\|\; |\sin(\angle(\vec{x},\vec{y}))|

But recall that \angle(\vec{x},\vec{y})\in[0,\pi], and in that range of values the sine function is always non-negative, so we can remove the absolute value and arrive at what we wanted to demonstrate.

From this expression we can infer that \|\vec{x}\times\vec{y}\| gives us the area of the parallelogram spanned by the vectors \vec{x} and \vec{y}.
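For a concrete instance of this area interpretation, consider the parallelogram spanned by (3,0,0) and (0,4,0), a rectangle of base 3 and height 4:

```python
import math

def cross(x, y):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

# Parallelogram spanned by (3,0,0) and (0,4,0): base 3, height 4, area 12.
c = cross((3.0, 0.0, 0.0), (0.0, 4.0, 0.0))
area = math.sqrt(sum(t * t for t in c))
print(area)  # 12.0
```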
