The Euclidean Space \mathbb{R}^n

In this class, we explore the Euclidean space \mathbb{R}^n, its algebraic structure, and metric properties. You will learn about vector operations, the dot product, the norm, and the Euclidean distance, essential concepts in geometry and analysis. With clear explanations and intuitive examples, this material will help you understand how space is mathematically modeled in multiple dimensions.

Learning Objectives:
By the end of this class, you will be able to:

  1. Define the Euclidean space \mathbb{R}^n and its fundamental properties.
  2. Explain the vector structure of \mathbb{R}^n through its basic operations.
  3. Apply the dot product to compute angles and projections between vectors.
  4. Demonstrate algebraic and metric properties of the dot product in \mathbb{R}^n.
  5. Use the Euclidean norm to determine the magnitude of a vector.
  6. Calculate the Euclidean distance between two points in \mathbb{R}^n and analyze its geometric meaning.
  7. Verify the validity of fundamental inequalities such as Cauchy-Schwarz and the triangle inequality.

INDEX
The Vector Space \mathbb{R}^n
The Dot Product
The Norm and the Euclidean Distance
Conclusion

The Vector Space \mathbb{R}^n

Before reaching this point, you were surely already familiar with the properties of \mathbb{R}, the plane \mathbb{R}^2, or space \mathbb{R}^3. All these ideas are useful for understanding the space \mathbb{R}^n. In particular, the set \mathbb{R}^n = \{\vec{x} = (x_1, \cdots, x_n) \;|\; x_1, \cdots, x_n \in \mathbb{R}\}, equipped with the usual operations of vector addition and scalar multiplication, is a vector space. Let’s delve into this by reviewing the basic operations of \mathbb{R}^n.

Basic operations in \mathbb{R}^n

If \vec{x}=(x_1, \cdots, x_n), \vec{y}=(y_1, \cdots, y_n) are vectors in \mathbb{R}^n and \alpha is any real scalar, then the operations of vector addition and scalar multiplication are as described below:

Vector addition: Vector addition is described by the function:

\begin{array}{rcrl} +:& \mathbb{R}^n \times \mathbb{R}^n & \longrightarrow & \mathbb{R}^n \\ & (\vec{x},\vec{y}) & \longmapsto & \vec{x}+\vec{y} = (x_1+y_1, \cdots, x_n + y_n) \end{array}

Scalar multiplication: Scalar multiplication is described by the function:

\begin{array}{rcrl} \cdot:& \mathbb{R} \times \mathbb{R}^n & \longrightarrow & \mathbb{R}^n \\ & (\alpha,\vec{x}) & \longmapsto & \alpha\vec{x} = (\alpha x_1, \cdots, \alpha x_n) \end{array}
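
To make these definitions concrete, here is a minimal Python sketch of both operations, representing vectors as plain tuples (the helper names add and scale are ours, chosen only for illustration):

    def add(x, y):
        """Componentwise vector addition in R^n."""
        return tuple(xi + yi for xi, yi in zip(x, y))

    def scale(alpha, x):
        """Multiplication of the vector x by the scalar alpha."""
        return tuple(alpha * xi for xi in x)

    # Example in R^3:
    x = (1.0, 2.0, 3.0)
    y = (4.0, 5.0, 6.0)
    print(add(x, y))      # (5.0, 7.0, 9.0)
    print(scale(2.0, x))  # (2.0, 4.0, 6.0)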

Vector space properties of \mathbb{R}^n

The space \mathbb{R}^n equipped with the operations described above is a vector space because its operations of addition and scalar multiplication satisfy the following properties:

First, we have the commutativity and associativity of vector addition, together with the compatibility of scalar multiplication with the product of scalars.

\vec{x} + \vec{y} = \vec{y} + \vec{x} \\ \vec{x} + (\vec{y} + \vec{z}) = (\vec{x} + \vec{y}) + \vec{z} \\ (\alpha \beta) \vec{x} = \alpha (\beta \vec{x}) = \beta (\alpha \vec{x}) = (\beta\alpha) \vec{x}

Scalar multiplication distributes over the sum of scalars, and over vector addition; that is, the following equalities hold:

(\alpha + \beta) \vec{x} = \alpha\vec{x} + \beta\vec{x} \\ \alpha(\vec{x} + \vec{y}) = \alpha\vec{x} + \alpha\vec{y}

There exists an additive identity \vec{0}=(0,\cdots, 0) that satisfies the property:

\vec{x} + \vec{0} = \vec{x}

There exists a multiplicative identity for scalar multiplication:

1 \vec{x} = \vec{x}

And every vector \vec{x}\in\mathbb{R}^n has an additive inverse -\vec{x}, which satisfies the property:

\vec{x} + (-\vec{x}) = \vec{0}
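
As a quick sanity check, the sketch below verifies a few of these axioms on concrete vectors, reusing the add and scale helpers from the earlier sketch (the values are chosen so that floating-point arithmetic is exact):

    x, y = (1.0, -2.0), (3.0, 0.5)
    alpha, beta = 2.0, -3.0
    zero = (0.0, 0.0)

    # Commutativity of vector addition
    assert add(x, y) == add(y, x)
    # Distributivity over the sum of scalars
    assert scale(alpha + beta, x) == add(scale(alpha, x), scale(beta, x))
    # Additive identity and additive inverse
    assert add(x, zero) == x
    assert add(x, scale(-1.0, x)) == zero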

The Dot Product

When examining the construction of \mathbb{R}^n as a vector space, we notice that it lacks a multiplication operation between vectors; initially, we cannot “multiply” vectors as we do with real numbers. However, it is possible to define such an operation, and one way to do so is through what is known as the dot product.

The dot product should not be confused with scalar multiplication: the former is a product between two vectors that yields a scalar, while the latter is the multiplication of a scalar by a vector, resulting in another vector. Consider two vectors in \mathbb{R}^n: \vec{x}=(x_1, \cdots, x_n) and \vec{y}=(y_1, \cdots, y_n). The dot product of \vec{x} with \vec{y}, denoted as \vec{x}\cdot\vec{y}, is defined as the real number given by the formula:

\vec{x}\cdot\vec{y} =\displaystyle \sum_{i=1}^n x_i y_i = x_1y_1 + \cdots + x_ny_n
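
In code, the defining formula translates directly into a sum over paired components; a minimal Python sketch (the helper name dot is ours):

    def dot(x, y):
        """Dot product in R^n: the sum of the products x_i * y_i."""
        return sum(xi * yi for xi, yi in zip(x, y))

    print(dot((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0 = 1*4 + 2*5 + 3*6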

There are many ways to represent the dot product of vectors in \mathbb{R}^n, one being the formula above. Another is obtained by considering a basis of \mathbb{R}^n and using the Einstein summation convention: If \{\hat{e}_i\}_{i=\overline{1,n}} is a basis of \mathbb{R}^n (usually the canonical basis), then the vectors \vec{x} and \vec{y} can be written as:

\vec{x}=\displaystyle\sum_{i=1}^n x_i\hat{e}_i = x_1\hat{e}_1 + \cdots + x_n\hat{e}_n

\vec{y}=\displaystyle\sum_{i=1}^n y_i\hat{e}_i = y_1\hat{e}_1 + \cdots + y_n\hat{e}_n

This explicitly indicates that the coefficients x_i and y_i of the vectors are relative to the basis of the space.

The Einstein Summation Convention

The Einstein summation convention allows us to simplify the representation of vectors in general and the dot product in particular. Observing the two expressions above, we see that the subscript i appears both in the vector coefficient and in the basis vector; for Einstein, the presence of repeated indices is enough to assume the existence of the summation in the expression, so we can write:

\vec{x}= x_i\hat{e}_i

\vec{y}= y_i\hat{e}_i

Using this notation convention, the dot product is expressed as follows:

\vec{x}\cdot\vec{y} = (x_i\hat{e}_i) \cdot (y_j\hat{e}_j) = x_iy_j \underbrace{(\hat{e}_i \cdot \hat{e}_j)}_{=\delta_{ij}} = x_iy_i

In this last equality, we have used distinct dummy indices for the two vectors and assumed that we are working with the canonical basis, which is orthonormal: \hat{e}_i \cdot \hat{e}_j = \delta_{ij}, the Kronecker delta, equal to 1 when i=j and 0 otherwise.
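
The summation-over-repeated-indices convention has a direct computational counterpart in NumPy's einsum, whose subscript string mirrors the index notation; a small illustration, assuming NumPy is available:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    # 'i,i->' sums over the repeated index i, exactly as in x_i y_i
    print(np.einsum('i,i->', x, y))  # 32.0
    print(x @ y)                     # same result via NumPy's built-in dot product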

Other Notations for the Dot Product

The notation for vectors and their operations is not always the same in all contexts; the one I have used in the first paragraphs of this entry is the most commonly seen when working in calculus. When working in linear algebra, sometimes a distinction is made between vectors and covectors:

When we talk about vectors, we refer to what is known as a “column vector,” which is represented in matrix form as:

\alpha^i = \left( \begin{array}{c}\alpha^1 \\ \vdots \\ \alpha^n \end{array} \right)

Whereas when we talk about covectors, we refer to what is called a “row vector,” which is represented in matrix form as:

\beta_i = \left( \beta_1 \; \cdots \; \beta_n \right)

Thus, the dot product of two vectors \vec{x}=(x_1,\cdots,x_n) and \vec{y}=(y_1,\cdots,y_n) is interpreted as the matrix product of the “covector” x_i with the vector y^i, yielding the following real number:

\left( x_1 \; \cdots \; x_n \right) \left( \begin{array}{c}y^1 \\ \vdots \\ y^n \end{array} \right) = x_iy^i

Observe that in this last equality, the Einstein summation convention appears again, as the repeated indices indicate that the final result is a sum.
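
This covector-times-vector picture is literally a 1 x n by n x 1 matrix product; in NumPy it can be sketched as follows:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0])
    y = np.array([4.0, 5.0, 6.0])

    row = x.reshape(1, 3)  # "covector": a 1 x n row matrix
    col = y.reshape(3, 1)  # "vector":   an n x 1 column matrix

    # The product of a row with a column is a 1 x 1 matrix whose
    # single entry is the dot product x_i y^i.
    print(row @ col)  # [[32.]]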

The notation that distinguishes vectors and covectors using subscripts and superscripts is known as “covariant notation” or “tensor notation” and is widely used in studying special and general relativity. This notation also has the advantage of facilitating work with tensors, a concept that generalizes the ideas we have just reviewed and that we will examine in more detail on another occasion. In other disciplines, such as quantum mechanics, the Bra-Ket notation is preferred, where:

\left< x \right| =\left( x_1 \; \cdots \; x_n \right) \\ \\ \left|y\right> = \left( \begin{array}{c}y_1 \\ \vdots \\ y_n \end{array} \right)

Thus, the dot product is represented as \left<x|y\right>.

Properties of the Dot Product

From the definition of the dot product, we can derive a whole series of properties that will be highly relevant in the future.

If we use the dot product to define the function \tilde{\omega}(\vec{x})=\vec{\omega} \cdot \vec{x} = \omega_i x^i, we see that the function \tilde{\omega} defined in this way possesses all the properties of linear functions, as it is straightforward to prove that

\tilde{\omega}(\alpha \vec{x} + \beta\vec{y}) = \alpha \tilde{\omega}(\vec{x}) + \beta\tilde{\omega}(\vec{y})

For this reason, objects such as \tilde{\omega} that are defined using the dot product are called linear functionals. As we already know, \vec{x} is a vector belonging to the vector space \mathbb{R}^n, and as we will see in other circumstances, \tilde{\omega} is an object in the dual space of \mathbb{R}^n.
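
To see a linear functional in action, here is a sketch in which a fixed vector induces the map \tilde{\omega} via the dot product; it reuses the dot, add, and scale helpers from earlier, and the name functional is ours:

    def functional(omega):
        """Return the linear functional x -> omega . x induced by omega."""
        return lambda x: dot(omega, x)

    w = functional((1.0, -1.0))
    x, y = (2.0, 3.0), (0.5, 4.0)
    a, b = 2.0, 3.0

    # Linearity: w(a*x + b*y) == a*w(x) + b*w(y)
    lhs = w(add(scale(a, x), scale(b, y)))
    rhs = a * w(x) + b * w(y)
    print(lhs, rhs)  # -12.5 -12.5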

From this, it follows that there is a close relationship between the dot product and linear functions; in fact, a statement that summarizes all the important properties of the dot product is: “The dot product is a bilinear, symmetric, positive, and non-degenerate form.” Let’s examine what each part of this statement means:

When we say that the dot product is a bilinear form, we mean that if \vec{x},\vec{y} and \vec{z} are vectors in \mathbb{R}^n and \alpha,\beta \in \mathbb{R}, then the following two equalities hold:

\begin{array}{rl} \vec{x}\cdot(\alpha \vec{y} + \beta\vec{z}) = \alpha (\vec{x}\cdot\vec{y}) + \beta(\vec{x}\cdot\vec{z}) \\ \\ (\alpha \vec{x} + \beta\vec{y})\cdot\vec{z} = \alpha (\vec{x} \cdot \vec{z}) + \beta(\vec{y}\cdot\vec{z}) \end{array}

The dot product is symmetric because:

(\forall \vec{x},\vec{y}\in\mathbb{R}^n)(\vec{x}\cdot\vec{y} = \vec{y}\cdot\vec{x})

It is positive because:

(\forall\vec{x}\in\mathbb{R}^n)(\vec{x}\cdot\vec{x} \geq 0)

And finally, it is non-degenerate because:

\vec{x}\cdot\vec{x} = 0 \leftrightarrow \vec{x}=\vec{0}
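
All four properties are easy to confirm on sample vectors; a small check with the dot, add, and scale helpers from earlier (the values are chosen so floating-point arithmetic is exact):

    x, y, z = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
    a, b = 2.0, -1.0

    # Bilinearity (in the second argument)
    assert dot(x, add(scale(a, y), scale(b, z))) == a * dot(x, y) + b * dot(x, z)
    # Symmetry
    assert dot(x, y) == dot(y, x)
    # Positivity, and vanishing at the zero vector
    assert dot(x, x) >= 0
    assert dot((0.0, 0.0), (0.0, 0.0)) == 0.0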

The Norm and the Euclidean Distance

A norm is a way to measure the magnitude of a vector; when a vector space has a norm, it is called a Normed Vector Space. If \vec{x},\vec{y}\in\mathbb{R}^n and \lambda\in\mathbb{R}, then the function Norm( . ) is a norm if it satisfies the following properties:

  1. Norm(\vec{x})\geq 0
  2. Norm(\vec{x}) = 0 \leftrightarrow \vec{x}=\vec{0}
  3. Norm(\lambda\vec{x}) = |\lambda| Norm(\vec{x})
  4. Norm(\vec{x} + \vec{y}) \leq Norm(\vec{x}) + Norm(\vec{y})

An important aspect of the dot product is that it is particularly useful for defining a mathematical concept of distance that intuitively aligns with our natural understanding of distances between two points. For each \vec{x}\in\mathbb{R}^n, its Euclidean Norm \|\vec{x}\| is defined by the equation:

\|\vec{x}\| = \sqrt{\vec{x}\cdot\vec{x}}

From this, we say that the Euclidean norm is the norm induced by the dot product.
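
A direct implementation of the induced norm, built on the dot helper sketched earlier:

    import math

    def norm(x):
        """Euclidean norm: the square root of x . x."""
        return math.sqrt(dot(x, x))

    print(norm((3.0, 4.0)))  # 5.0, the classic 3-4-5 right triangle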

A distance, or metric, is a function that tells us about “the separation between two elements of a set.” If \vec{x}, \vec{y}, \vec{z}\in\mathbb{R}^n, then the function Dist( . ) is a distance if it satisfies the following properties:

  1. Dist(\vec{x},\vec{y})=0 \leftrightarrow \vec{x}=\vec{y}
  2. Dist(\vec{x},\vec{y})=Dist(\vec{y},\vec{x})\geq 0
  3. Dist(\vec{x},\vec{z})\leq Dist(\vec{x},\vec{y}) + Dist(\vec{y},\vec{z})

The last expression is known as the Triangle Inequality. If property 1 is relaxed, so that two distinct points may be at distance zero from each other, the function Dist( . ) is what is called a “pseudo-distance” or “pseudo-metric.” Any set equipped with a distance is known as a Metric Space.

From the Euclidean Norm, we define the Euclidean Distance between two vectors. If we have two vectors \vec{x},\vec{y}\in\mathbb{R}^n, then the Euclidean distance between these two vectors, dist_e(\vec{x},\vec{y}), is given by:

dist_e(\vec{x},\vec{y}) = \|\vec{x} - \vec{y}\|

If \vec{x}=(x_1,\cdots,x_n) and \vec{y}=(y_1,\cdots, y_n), then it is easy to prove from the properties of the dot product and the norm that:

dist_e(\vec{x},\vec{y}) = \sqrt{\displaystyle \sum_{i=1}^n (x_i - y_i)^2}

If we equip the vector space \mathbb{R}^n with the Euclidean distance, what we obtain is a Euclidean Space.

From this, we say that the metric of Euclidean space is the metric induced by the Euclidean norm.
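
Putting the pieces together, the Euclidean distance is just the norm of the difference vector; a sketch reusing the norm, add, and scale helpers from earlier:

    def dist_e(x, y):
        """Euclidean distance: the norm of x - y."""
        return norm(add(x, scale(-1.0, y)))

    print(dist_e((1.0, 2.0), (4.0, 6.0)))  # 5.0, since (-3, -4) has norm 5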

Properties of the Euclidean Norm

Since our study focuses specifically on Euclidean Space, it will be useful to review the properties of the Euclidean norm.

Cauchy-Schwarz Inequality

If \vec{x},\vec{y}\in\mathbb{R}^n, then the following property holds:

|\vec{x}\cdot\vec{y}|\leq \|\vec{x}\|\|\vec{y}\|

PROOF:

Assume \vec{y}\neq\vec{0} (if \vec{y}=\vec{0}, both sides of the inequality are zero and there is nothing to prove), and let \lambda = (\vec{x}\cdot\vec{y})/\|\vec{y}\|^2. Then we have:

\begin{array}{rl} 0\leq \|\vec{x} - \lambda \vec{y}\|^2 &= (\vec{x} - \lambda\vec{y}) \cdot (\vec{x} - \lambda\vec{y}) \\ \\ \displaystyle &= \vec{x}\cdot\vec{x} - \lambda(\vec{x}\cdot\vec{y}) - \lambda(\vec{y}\cdot\vec{x}) + \lambda^2(\vec{y}\cdot\vec{y})\\ \\ &= \|\vec{x}\|^2 - 2\lambda(\vec{x}\cdot\vec{y}) + \lambda^2 \|\vec{y}\|^2 \\ \\ \displaystyle &= \|\vec{x}\|^2 - 2\left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|^2}\right)(\vec{x}\cdot\vec{y}) + \left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|^2}\right)^2 \|\vec{y}\|^2\\ \\ \displaystyle &= \|\vec{x}\|^2 - 2\,\frac{(\vec{x}\cdot\vec{y})^2}{\|\vec{y}\|^2} + \frac{\left(\vec{x}\cdot\vec{y}\right)^2}{\|\vec{y}\|^2}\\ \\ &= \|\vec{x}\|^2 - \frac{\left(\vec{x}\cdot\vec{y}\right)^2}{\|\vec{y}\|^2} \end{array}

Thus, we can state:

\displaystyle 0 \leq \|\vec{x}\|^2 - \frac{\left(\vec{x}\cdot\vec{y}\right)^2}{\|\vec{y}\|^2}

And therefore:

\left(\vec{x}\cdot\vec{y}\right)^2 \leq \|\vec{x}\|^2 \|\vec{y}\|^2

Finally, taking square roots, we arrive at what we wanted to prove:

|\vec{x}\cdot\vec{y}| \leq \|\vec{x}\| \|\vec{y}\|
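
Beyond the proof, the inequality can be spot-checked numerically on random vectors, using the dot and norm helpers from earlier (a tiny tolerance absorbs floating-point rounding):

    import random

    for _ in range(1000):
        x = [random.uniform(-10.0, 10.0) for _ in range(5)]
        y = [random.uniform(-10.0, 10.0) for _ in range(5)]
        assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-9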

Triangle Inequality

Let \vec{x},\vec{y}\in\mathbb{R}^n; these vectors satisfy the relation:

\|\vec{x} + \vec{y}\| \leq \|\vec{x}\| + \|\vec{y}\|

PROOF:

First, let’s note that:

\begin{array}{rl} \|\vec{x} + \vec{y}\|^2 &= (\vec{x} + \vec{y})\cdot(\vec{x} + \vec{y}) \\ \\ &=\|\vec{x}\|^2 + 2(\vec{x}\cdot\vec{y}) + \|\vec{y}\|^2 \end{array}

Since the following inequalities hold:

\vec{x}\cdot\vec{y}\leq |\vec{x}\cdot\vec{y}| \leq \|\vec{x}\|\|\vec{y}\|

We can write the following:

\begin{array}{rl} \|\vec{x} + \vec{y}\|^2 &\leq \|\vec{x}\|^2 + 2\|\vec{x}\|\|\vec{y}\| + \|\vec{y}\|^2 \\ \\ &= \left(\|\vec{x}\| + \|\vec{y}\| \right)^2 \end{array}

Finally, taking square roots, we arrive at what we wanted to prove:

\|\vec{x} + \vec{y}\|\leq \|\vec{x}\| + \|\vec{y}\|
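
As with Cauchy-Schwarz, a quick numerical spot check of the triangle inequality, with the same helpers (add returns a tuple, which norm accepts):

    import random

    for _ in range(1000):
        x = [random.uniform(-10.0, 10.0) for _ in range(5)]
        y = [random.uniform(-10.0, 10.0) for _ in range(5)]
        assert norm(add(x, y)) <= norm(x) + norm(y) + 1e-9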

Conclusion

Throughout this class, we have explored the fundamental properties of Euclidean space \mathbb{R}^n, addressing its algebraic and metric structures. We began by defining its basic operations, such as vector addition and scalar multiplication, thereby establishing its nature as a vector space. Then, we delved into the concept of the dot product and its relevance to the geometry of \mathbb{R}^n, highlighting its matrix interpretation and its relationship with linear functions.

Subsequently, we analyzed the Euclidean norm and the distance it induces, emphasizing how these tools allow us to quantify lengths and distances in this space. Additionally, we reviewed fundamental properties such as the Cauchy-Schwarz inequality:

|\vec{x}\cdot\vec{y}| \leq \|\vec{x}\| \|\vec{y}\|

and the triangle inequality:

\|\vec{x} + \vec{y}\|\leq \|\vec{x}\| + \|\vec{y}\|

which are key to the development of more advanced theories in analysis and geometry.
