Algebra and Projections in Rn, the Cross Product in R3

Summary:
This entry is a direct continuation of the series on n-dimensional Euclidean space. Here we review some notions from linear algebra that help us better understand n-dimensional Euclidean space, review the notion of projecting one vector onto another, prove the Pythagorean theorem, and conclude with a review of the cross product in \mathbb{R}^3 and its relation to the other products of 3-dimensional Euclidean space.

CONTENTS
Linear Independence, Orthogonality, and Projections
The Pythagorean Theorem and Projection onto a Subspace
Dot and Cross Products in \mathbb{R}^3


Linear Independence, Orthogonality, and Projections

Linear combination and linear independence

A nonzero vector \vec{z} can be written as a linear combination of other nonzero vectors \vec{x} and \vec{y} if there exists a pair of real numbers \alpha and \beta, not both zero, such that:

\vec{z} = \alpha \vec{x} + \beta\vec{y}

That is, the vector \vec{z} can be written as a weighted sum of the vectors \vec{x} and \vec{y}.

Similarly, the vectors \vec{x} and \vec{y} are said to be linearly independent if

(\alpha \vec{x} + \beta\vec{y} = \vec{0} ) \longleftrightarrow (\alpha=0 \wedge \beta=0 )

Linear independence of the vectors \vec{x} and \vec{y} means that \vec{y} cannot be obtained as a (nonzero) scalar multiple of \vec{x}, nor vice versa.

The notion of linear independence we have just reviewed extends to larger collections of vectors. A collection of nonzero vectors \{\vec{x}_1, \cdots, \vec{x}_n\} is said to be linearly independent when

\displaystyle \left[\left(\sum_{i=1}^n \alpha_i \vec{x}_i \right) = \vec{0} \right] \longleftrightarrow \left[\bigwedge_{i=1}^n (\alpha_i = 0) \right]
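The condition above can be checked mechanically: a collection of vectors is linearly independent exactly when the matrix having them as rows has full row rank. A minimal Python sketch (an illustration added here, not from the original text; the helper name `linearly_independent` is ours), using NumPy:

```python
import numpy as np

# A collection of vectors is linearly independent exactly when the only
# solution of sum_i alpha_i x_i = 0 is alpha_i = 0 for all i, i.e. when
# the matrix whose rows are the vectors has full row rank.
def linearly_independent(vectors):
    m = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(m) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))             # False
```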

The angle formed by two vectors, and orthogonality

If we recall the Cauchy-Schwarz inequality, it tells us that (\forall \vec{x},\vec{y}\in\mathbb{R}^n)(|\vec{x}\cdot\vec{y}| \leq \|\vec{x}\| \|\vec{y}\|). With this in mind, it is easy to see that for any pair of vectors \vec{x},\vec{y}\in\mathbb{R}^n\setminus\{\vec{0}\} the following relation holds:

\displaystyle -1 \leq \frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\|\|\vec{y}\|}\leq 1

We can now examine the relation between the dot product and the angle formed by the vectors \vec{x} and \vec{y}, since these two span a plane isometric to \mathbb{R}^2. Because of this, without loss of generality, we may picture them as elements of \mathbb{R}^2 making angles \theta_x and \theta_y with the \hat{x} axis, respectively, so that the vectors can be written in polar form as:

\begin{array}{rl} \vec{x} &= \|\vec{x}\|(\cos(\theta_x) , \sin(\theta_x)) \\ \\ \vec{y} &= \|\vec{y}\|(\cos(\theta_y) , \sin(\theta_y)) \end{array}

We may then assume (again without loss of generality) that \theta_x \lt \theta_y, and compute the dot product \vec{x}\cdot\vec{y}. Doing so yields the following result:

\begin{array}{rl}\vec{x}\cdot \vec{y} &= \|\vec{x}\| \|\vec{y}\| (\cos(\theta_x)\cos(\theta_y) + \sin(\theta_x)\sin(\theta_y)) \\ \\ &= \|\vec{x}\| \|\vec{y}\| \cos(\theta_y-\theta_x) \end{array}

Now, taking the difference between the larger and smaller angular positions gives the angle between the vectors, \angle(\vec{x},\vec{y})=\theta_y - \theta_x. With this we can now write:

\displaystyle \cos\left(\angle(\vec{x},\vec{y}) \right) = \frac{\vec{x} \cdot \vec{y}}{\|\vec{x}\|\|\vec{y}\|}

It is worth noting here that \angle(\vec{x},\vec{y})\in [0, \pi].
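To make the formula concrete, here is a small Python sketch (an illustration added here; the helper names `dot`, `norm`, and `angle` are ours) that recovers \angle(\vec{x},\vec{y}) from the dot product:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Angle between two nonzero vectors, in [0, pi]. The argument of acos is
# clamped to [-1, 1] to guard against floating-point round-off; by
# Cauchy-Schwarz that is its true range anyway.
def angle(x, y):
    c = dot(x, y) / (norm(x) * norm(y))
    return math.acos(max(-1.0, min(1.0, c)))

print(angle((1, 0), (0, 1)))  # pi/2
print(angle((1, 0), (1, 1)))  # pi/4
```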

This lets us connect the Cauchy-Schwarz inequality with the geometry of angles, and moreover it gives us a rigorous notion of orthogonality. Two vectors are said to be Orthogonal when they form an angle of \pi/2 radians with each other, in the sense explained in the preceding paragraph. This is the same as saying \cos\left(\angle(\vec{x},\vec{y})\right) = 0, which in turn is the same as saying \vec{x}\cdot\vec{y} = 0. For this reason, asserting the orthogonality of the vectors \vec{x} and \vec{y} is the same as asserting \vec{x}\cdot\vec{y}=0.

If two nonzero vectors are orthogonal, then they are linearly independent

This is a somewhat intuitive property of vectors in \mathbb{R}^n whose formal proof is not so direct, and it is also a property that can sometimes cause confusion: orthogonality of two vectors implies their linear independence, but linear independence of two vectors does not necessarily imply their orthogonality. To see the latter, a simple counterexample suffices:

If we take the vectors \vec{A}=(1,0) and \vec{B}=(1,1), which are clearly not orthogonal since \vec{A}\cdot\vec{B}=1, we see that if we set

\alpha\vec{A} + \beta\vec{B} = \vec{0}

then we obtain

\begin{array}{rl} \alpha + \beta &= 0 \\ \beta &= 0 \end{array}

and therefore \alpha = 0 \wedge \beta=0. From this we conclude that:

\alpha\vec{A} + \beta\vec{B} = \vec{0} \longleftrightarrow \alpha = 0 \wedge \beta=0

which is the same as saying that \vec{A} and \vec{B} are linearly independent. This shows very explicitly that it is not true that linear independence implies orthogonality. However, orthogonality does imply linear independence, which I will prove formally below. To that end, consider the following collection of premises:

\mathcal{H}= \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0, \alpha\vec{x}+\beta\vec{y} = \vec{0}\}

From this we can produce the following derivation:

\begin{array}{rll} (1) &\mathcal{H}\vdash \vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\} &{;\;Premise}\\ \\ (2) &\mathcal{H}\vdash \vec{x}\cdot\vec{y}=0 &{;\;Premise} \\ \\ (3) &\mathcal{H}\vdash \alpha\vec{x} + \beta\vec{y} = \vec{0} &{;\;Premise} \\ \\ (4) &\mathcal{H}\vdash (\alpha\vec{x} + \beta\vec{y})\cdot\vec{x} = \alpha\|\vec{x}\|^2 + \beta(\vec{x}\cdot\vec{y}) &{;\; Bilinearity} \\ \\ (5) &\mathcal{H}\vdash \alpha\|\vec{x}\|^2 = 0 & {;\; From(2,3,4)} \\ \\ (6) &\mathcal{H}\vdash \alpha = 0 & {;\; From(1,5)} \\ \\ (7) &\mathcal{H}\vdash (\alpha\vec{x} + \beta\vec{y})\cdot\vec{y} = \alpha(\vec{x}\cdot\vec{y}) + \beta\|\vec{y}\|^2 & {;\;Bilinearity} \\ \\ (8) &\mathcal{H}\vdash \beta\|\vec{y}\|^2 = 0 &{;\;From(2,3,7)} \\ \\ (9) &\mathcal{H}\vdash \beta = 0 &{;\;From(1,8)} \\ \\ (10) &\mathcal{H}\vdash \alpha= 0 \wedge \beta = 0 &{;\;\wedge\text{-}intro(6,9)} \end{array}

From this we conclude that

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0, \alpha\vec{x}+\beta\vec{y} = \vec{0}\} \vdash \alpha= 0 \wedge \beta = 0

Finally, applying the deduction theorem to this last expression yields:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\cdot\vec{y}=0\} \vdash (\alpha\vec{x}+\beta\vec{y} = \vec{0}) \rightarrow (\alpha= 0 \wedge \beta = 0)

The proof of the other arrow of the biconditional, namely that \alpha=0 \wedge \beta=0 implies \alpha\vec{x}+\beta\vec{y}=\vec{0}, is trivial.

That is: if \vec{x} and \vec{y} are nonzero orthogonal vectors, then they are linearly independent.

Projection of one vector onto another

Suppose we have two nonzero vectors \vec{x} and \vec{y} forming an angle \angle(\vec{x},\vec{y}) between them, and we ask: “How much of the vector \vec{x} lies along the vector \vec{y}?” or “How large is the shadow cast by the vector \vec{x} when projected in the direction of the vector \vec{y}?”. We can answer this with trigonometry, and thus define the projection of a vector \vec{x} onto another vector \vec{y}, Proy_{\vec{y}}(\vec{x}), by the expression:

Proy_{\vec{y}}(\vec{x}) = \| \vec{x}\| \cos(\angle(\vec{x},\vec{y})) \hat{y}

Combining this with what we saw in the preceding paragraphs, we can write:

\displaystyle Proy_{\vec{y}}(\vec{x}) = {\| \vec{x}\|} \left(\frac{\vec{x}\cdot\vec{y}}{{\|\vec{x}\|} \|\vec{y}\|}\right)\color{red}{\hat{y}} = \left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|} \right)\color{red}{\frac{\vec{y}}{\|\vec{y}\|}} = \left(\frac{\vec{x}\cdot\vec{y}}{\|\vec{y}\|^2}\right)\vec{y} = \left(\frac{\vec{x}\cdot\vec{y}}{\vec{y}\cdot\vec{y}}\right)\vec{y}

since, as we recall,

\displaystyle \cos(\angle(\vec{x},\vec{y})) = \frac{\vec{x}\cdot\vec{y}}{\|\vec{x}\| \|\vec{y}\|}

Projections are important because they allow us to express a vector with respect to an orthonormal basis as the sum of its projections:

\vec{x} = \displaystyle \sum_{i=1}^n \alpha_i \hat{u}_i

where \{\hat{u}_i\}_{i=1,\cdots, n} is an orthonormal basis of \mathbb{R}^n and the coefficients \alpha_i = (\vec{x}\cdot\vec{u}_i)/\|\vec{u}_i\| = \vec{x}\cdot\hat{u}_i are precisely the projections onto each basis element; they constitute the coordinates of \vec{x} with respect to the basis \{\hat{u}_i\}_{i=1,\cdots, n} of \mathbb{R}^n. (Orthogonality of the basis is essential here; for a merely linearly independent basis the coefficients are not given by these simple dot products.)
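As an illustration, the final form Proy_{\vec{y}}(\vec{x}) = ((\vec{x}\cdot\vec{y})/(\vec{y}\cdot\vec{y}))\vec{y} is easy to compute without any square roots; a minimal Python sketch (added here; the helper name `proy` mirrors the document's Proy notation):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Projection of x onto y: ((x . y) / (y . y)) y -- no square roots needed.
def proy(y, x):
    s = dot(x, y) / dot(y, y)
    return tuple(s * c for c in y)

# Projecting (3, 4, 5) onto the x-axis keeps only the first component.
print(proy((1, 0, 0), (3, 4, 5)))  # (3.0, 0.0, 0.0)
```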


The Pythagorean Theorem and Projection onto a Subspace

The Pythagorean theorem is a result known to everyone, and one with countless proofs. One possible proof of this theorem emerges from the material we have developed about Euclidean space, with the bonus that it holds in any number of dimensions.

Proof of the Pythagorean Theorem

If we have a right triangle with legs a and b and hypotenuse c, the Pythagorean theorem tells us that a^2+b^2=c^2. With this understood, we can represent the legs by a pair of orthogonal vectors \vec{x} and \vec{y} and write the Pythagorean theorem as follows:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \leftrightarrow (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2)

where the expression \vec{x}\bot\vec{y} means that the two vectors are orthogonal, that is: nonzero and such that \vec{x}\cdot\vec{y}=0. In this way, a biconditional relation is established between orthogonality and the sum of the squared magnitudes of the two vectors.

This vector form of the Pythagorean theorem can be proved by the following two derivations:

First, the forward direction:

\begin{array}{rll} (1) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \vec{x}\bot\vec{y} & {;\;Premise} \\ \\ (2) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \vec{x}\cdot\vec{y}= 0 & {;\;From(1)} \\ \\ (3) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \|\vec{x} + \vec{y}\|^2 = (\vec{x} + \vec{y})\cdot(\vec{x} + \vec{y}) = \|\vec{x}\|^2 + 2(\vec{x}\cdot\vec{y}) + \|\vec{y}\|^2 & \\ &;\;Properties\;of\;the\;Euclidean\;norm\;and\;dot\;product & \\ \\ (4) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \vec{x}\bot\vec{y}\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2 & {;\;From(2,3)} \\ \\ (5) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \rightarrow ( \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2) & {;\;Deduction\;theorem\;(4)} \end{array}

And now the reverse direction:

\begin{array}{rll} (1) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2 & {;\;Premise} \\ \\ (2) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 +2(\vec{x}\cdot\vec{y}) + \|\vec{y}\|^2 & \\ &;\;Properties\;of\;the\;Euclidean\;norm\;and\;dot\;product &\\ \\ (3) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \vec{x}\cdot\vec{y}=0 & {;\;From(1,2)} \\ \\ (4) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}, \|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2\} \vdash \vec{x}\bot\vec{y} & {;\;From(3)} \\ \\ (5) & \{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2) \rightarrow \vec{x}\bot\vec{y} & {;\;Deduction\;theorem\;(4)} \end{array}

And finally, combining the two derivations gives what was to be shown:

\{\vec{x},\vec{y}\in \mathbb{R}^n\setminus\{\vec{0}\}\} \vdash \vec{x}\bot\vec{y} \leftrightarrow (\|\vec{x} + \vec{y}\|^2 = \|\vec{x}\|^2 + \|\vec{y}\|^2)
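A quick numeric sanity check of the vector form just proved; a small Python sketch (illustrative, added here, with a hypothetical `dot` helper):

```python
# Vector form of the Pythagorean theorem: for orthogonal x and y,
# ||x + y||^2 = ||x||^2 + ||y||^2.
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

x = (3, 0, 0)   # one leg
y = (0, 4, 0)   # the other leg, orthogonal to x
s = tuple(a + b for a, b in zip(x, y))  # the "hypotenuse" vector x + y

print(dot(x, y))                         # 0: the legs are orthogonal
print(dot(s, s), dot(x, x) + dot(y, y))  # 25 25: both sides agree
```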

Projection of a vector onto a subspace of \mathbb{R}^n

Consider a subspace H of \mathbb{R}^n spanned by an orthonormal basis \{\hat{v}_1, \cdots, \hat{v}_k\}. If we take a vector \vec{x}\in\mathbb{R}^n\setminus\{\vec{0}\}, the projection of the vector \vec{x} onto the space H is defined by the expression:

Proy_{H}(\vec{x}) = \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j

Saying that the collection is orthonormal means that all of its elements are mutually orthogonal and that each has norm equal to one.

This is, so to speak, the shadow the vector casts onto each direction of the subspace H of \mathbb{R}^n.

Distance between a Point or Vector of \mathbb{R}^n and a Subspace of \mathbb{R}^n

From the projection of a vector \vec{x}\in\mathbb{R}^n\setminus\{\vec{0}\} onto a subspace H of \mathbb{R}^n we can build a vector of the form

\vec{x} - Proy_{H}(\vec{x})

The vector formed in this way joins a point of the subspace H to the point with coordinates \vec{x}, and it leaves the subspace H orthogonally. This is not hard to prove: if we take any vector \vec{z}\in H and compute the dot product (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z}, it suffices to see that the result of this operation is zero. Let us do the computation to see whether this really is so:

If \vec{z}\in H, then it is of the form

\vec{z}=\displaystyle \sum_{j=1}^k \beta_j\hat{v}_j

where \{\hat{v}_j\}_{j=1}^k is the orthonormal basis of H and \beta_j \in\mathbb{R} are the coefficients of \vec{z} in H. With this understood, computing the dot product (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z} gives:

\begin{array}{rl} (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z} &= \left(\vec{x} - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \right) \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \vec{x} \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \displaystyle \sum_{j=1}^k \beta_j\hat{v}_j \end{array}

But since \vec{x} is a vector of \mathbb{R}^n, of which H is a subspace, it is possible to find a collection of n-k vectors orthonormal to one another and to all vectors of H, say \{\hat{v}_{k+1}, \cdots, \hat{v}_n\}, so that together with the basis of H they form a basis of \mathbb{R}^n and we can write

\vec{x} = \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j + \sum_{j=k+1}^n \alpha_j \hat{v}_j

so that the computation above continues as follows:

\begin{array}{rl} (\vec{x}-Proy_{H}(\vec{x}))\cdot \vec{z} &= \displaystyle \left( \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j + \sum_{j=k+1}^n \alpha_j \hat{v}_j\right) \cdot \sum_{j=1}^k \beta_j\hat{v}_j - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j + \underbrace{\color{red}{\sum_{j=k+1}^n \alpha_j \hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j}}_{(*)} - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= \displaystyle \sum_{j=1}^k (\vec{x}\cdot\hat{v}_j )\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j - \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j \cdot \sum_{j=1}^k \beta_j\hat{v}_j \\ \\ &= 0 \end{array}

(*) This sum vanishes because \{\hat{v}_j\}_{j=1}^n is an orthonormal basis of \mathbb{R}^n.
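The projection formula and the orthogonality of the residual can be checked numerically; a Python sketch (added here; the helper names `dot` and `proy_h` are ours), using the xy-plane inside \mathbb{R}^3 as the subspace H:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Projection of x onto the subspace H spanned by an orthonormal basis
# {v_j}: Proy_H(x) = sum_j (x . v_j) v_j
def proy_h(basis, x):
    p = [0.0] * len(x)
    for v in basis:
        c = dot(x, v)
        p = [pi + c * vi for pi, vi in zip(p, v)]
    return p

# H = the xy-plane inside R^3, with orthonormal basis {e1, e2}.
H = [(1, 0, 0), (0, 1, 0)]
x = (2.0, 3.0, 7.0)
p = proy_h(H, x)
r = [xi - pi for xi, pi in zip(x, p)]  # residual x - Proy_H(x)

print(p)                       # [2.0, 3.0, 0.0]
print([dot(r, v) for v in H])  # [0.0, 0.0]: orthogonal to every basis vector
```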

From this we can show that the distance between the subspace H and the vector \vec{x} is given by:

\|\vec{x} - Proy_{H}(\vec{x})\|

Proof

To prove this result, we show that for every \vec{z}\in H we always have \|\vec{x} - Proy_{H}(\vec{x})\| \leq \|\vec{x} - \vec{z}\|; for this we use the Pythagorean theorem as follows:

\begin{array}{rl} \|\vec{x} - \vec{z}\|^2 &= \| \left(\vec{x} -Proy_{H}(\vec{x}) \right) + \left(Proy_{H}(\vec{x}) - \vec{z}\right)\|^2 \\ \\ &= \| \vec{x} -Proy_{H}(\vec{x}) \|^2 + \|Proy_{H}(\vec{x}) - \vec{z}\|^2 \\ \\ \end{array}

This last equality holds because the vectors \vec{x} -Proy_{H}(\vec{x}) and Proy_{H}(\vec{x}) - \vec{z} are orthogonal. Hence:

\|\vec{x} - Proy_{H}(\vec{x})\|^2 \leq \|\vec{x} - \vec{z}\|^2

which was to be shown.

With this result in hand, we can say that the distance between a point \vec{x}\in\mathbb{R}^n and a subspace H of \mathbb{R}^n generated by orthonormal vectors \{\hat{v}_1, \cdots, \hat{v}_k\} is given by:

dist(\vec{x},H) =\left\|\vec{x} - Proy_{H}(\vec{x})\right\|= \left\|\vec{x} - \displaystyle \sum_{j=1}^k (\vec{x} \cdot \hat{v}_j)\hat{v}_j\right\|


Dot and Cross Products in \mathbb{R}^3

Now we shift our perspective a little to focus on vectors in \mathbb{R}^3. Here, besides the operations we have already reviewed for \mathbb{R}^n in general, there is also the cross product, which takes two vectors and yields another vector. This product is particular to \mathbb{R}^3 (and perhaps \mathbb{R}^7, a case we will not examine here). The canonical basis vectors of \mathbb{R}^3 are generally denoted by the letters \hat{x}, \hat{y}, \hat{z} or by \hat{\imath}, \hat{\jmath}, \hat{k}. The preference for one or the other is personal.

\begin{array}{rl} \hat{\imath} = \hat{x}&=(1,0,0)\\ \hat{\jmath} =\hat{y}&=(0,1,0)\\ \hat{k} =\hat{z}&=(0,0,1)\\ \end{array}

Thus, if we have a vector of the form (a,b,c), it can be written algebraically as:

(a,b,c) = a\hat{x} + b\hat{y} + c\hat{z}

The cross product in \mathbb{R}^3

Let \vec{x}=(x_1,x_2,x_3) and \vec{y}=(y_1,y_2,y_3) be vectors in \mathbb{R}^3. We define the cross product of \vec{x} with \vec{y}, \vec{x}\times\vec{y}, by:

\begin{array}{rl} \vec{x}\times\vec{y} &= \left|\begin{array}{ccc} \hat{x} & \hat{y} & \hat{z} \\ x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \end{array}\right| \\ \\ &=\hat{x}x_2y_3 + \hat{y}x_3y_1 + \hat{z} x_1y_2 - \left( \hat{z} x_2 y_1 + \hat{y} x_1 y_3 + \hat{x}x_3y_2\right) \\ \\ &=\hat{x}(x_2y_3 - x_3y_2) + \hat{y}(x_3y_1 - x_1y_3) + \hat{z}(x_1y_2 - x_2y_1) \end{array}
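The component formula in the last line translates directly into code; a minimal Python sketch (added here; the helper name `cross` is ours):

```python
# Cross product in R^3, from the component formula
# x × y = (x2*y3 - x3*y2, x3*y1 - x1*y3, x1*y2 - x2*y1).
def cross(x, y):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2,
            x3 * y1 - x1 * y3,
            x1 * y2 - x2 * y1)

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1): x-hat × y-hat = z-hat
```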

Lagrange's Identity

For vectors in \mathbb{R}^3 we can recognize three kinds of “products”: the dot product \vec{x}\cdot\vec{y}, the cross product \vec{x}\times\vec{y}, and the product of norms \|\vec{x}\|\|\vec{y}\|. These three products are connected by Lagrange's identity

\|\vec{x}\times\vec{y}\|^2 = \|\vec{x}\|^2\|\vec{y}\|^2- (\vec{x}\cdot\vec{y})^2

Proof of Lagrange's Identity

Let \vec{x}=(x_1,x_2,x_3) and \vec{y}=(y_1,y_2,y_3) be vectors in \mathbb{R}^3; then we have:

\begin{array}{rl} \vec{x}\times\vec{y} &=(x_2y_3 - x_3y_2) \hat{x} + (x_3y_1 - x_1y_3)\hat{y} + (x_1y_2 - x_2y_1)\hat{z} \end{array}

Hence:

\begin{array}{rl} \|\vec{x}\times\vec{y}\|^2 &=(x_2y_3 - x_3y_2)^2 + (x_3y_1 - x_1y_3)^2 + (x_1y_2 - x_2y_1)^2 \\ \\ &= \color{green}{x_2^2y_3^2 - 2x_2x_3y_3y_2 + x_3^2y_2^2} + \cdots\\ \\ &\cdots + \color{blue}{x_3^2y_1^2 - 2x_3x_1y_1y_3 + x_1^2y_3^2} + \cdots \\ \\ &\cdots + \color{red}{x_1^2y_2^2 - 2x_1x_2y_2y_1 + x_2^2y_1^2} \end{array}

On the other hand:

\begin{array}{rl} \|\vec{x}\|^2 \|\vec{y}\|^2 - (\vec{x}\cdot\vec{y})^2 &= (x_1^2 + x_2^2 + x_3^2)(y_1^2+y_2^2 + y_3^2) - (x_1y_1 + x_2y_2 + x_3 y_3)^2 \\ \\ \\ &= {x_1^2y_1^2} + \color{red}{x_1^2y_2^2} + \color{blue}{x_1^2y_3^2} + \cdots \\ \\ &\cdots + \color{red}{x_2^2y_1^2} + {x_2^2y_2^2} + \color{green}{x_2^2y_3^2} + \cdots \\ \\ &\cdots + \color{blue}{x_3^2y_1^2} + \color{green}{x_3^2y_2^2} + {x_3^2y_3^2} + \cdots \\ \\ &\cdots - \left[ {x_1^2y_1^2} + {x_2^2y_2^2} + {x_3^2y_3^2} + \right. \cdots \\ \\ &\cdots + 2\left(\color{red}{x_1x_2y_1y_2} + \color{blue}{x_1x_3y_1y_3} + \color{green}{x_2x_3y_2y_3} \right)\left.\right] \\ \\ \\ &= \color{red}{x_1^2y_2^2 - 2x_1x_2y_2y_1 + x_2^2y_1^2} + \cdots \\ \\ & \cdots + \color{blue}{x_1^2y_3^2 - 2x_1x_3y_3y_1 + x_3^2y_1^2} + \cdots \\ \\ & \cdots + \color{green}{x_2^2y_3^2 - 2x_2x_3y_3y_2 + x_3^2y_2^2} \end{array}

Finally, comparing the color-matched expressions gives what was to be shown.
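The identity can also be spot-checked numerically; a small Python sketch (illustrative, added here, with hypothetical `dot` and `cross` helpers):

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cross(x, y):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

# Lagrange's identity: ||x × y||^2 = ||x||^2 ||y||^2 - (x . y)^2
x = (1, 2, 3)
y = (4, 5, 6)
c = cross(x, y)
lhs = dot(c, c)
rhs = dot(x, x) * dot(y, y) - dot(x, y) ** 2
print(lhs, rhs)  # 54 54: both sides agree
```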

The Cross Product and the angle between vectors

We saw above that there is a close relation between the angle subtended by two vectors and the result of the dot product, given by \vec{x}\cdot\vec{y} = \|\vec{x}\|\|\vec{y}\|\cos(\angle(\vec{x},\vec{y})). Something similar happens with the cross product, given by the following relation:

\|\vec{x}\times\vec{y}\| = \|\vec{x}\|\|\vec{y}\| \sin(\angle(\vec{x},\vec{y}))

This expression is a direct consequence of the Lagrange identity proved above; the proof goes as follows:

\begin{array}{rl} \|\vec{x}\times\vec{y}\|^2 &= \|\vec{x}\|^2\|\vec{y}\|^2 - (\vec{x}\cdot\vec{y})^2 \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 - (\|\vec{x}\|\|\vec{y}\|\cos(\angle(\vec{x},\vec{y})))^2 \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 - \|\vec{x}\|^2\|\vec{y}\|^2\cos^2(\angle(\vec{x},\vec{y})) \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 (1 - \cos^2(\angle(\vec{x},\vec{y}))) \\ \\ &= \|\vec{x}\|^2\|\vec{y}\|^2 \sin^2(\angle(\vec{x},\vec{y})) \end{array}

Finally, taking square roots we arrive at:

\|\vec{x}\times\vec{y}\| = \|\vec{x}\|\|\vec{y}\|\; |\sin(\angle(\vec{x},\vec{y}))|

But recall that \angle(\vec{x},\vec{y})\in[0,\pi], and on this interval the sine function is always non-negative, so we can drop the absolute value and arrive at what was to be shown.

From this expression we can see that the result of the operation \|\vec{x}\times\vec{y}\| gives the area of the parallelogram generated by the vectors \vec{x} and \vec{y}.
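The area interpretation is easy to verify on a rectangle; a final Python sketch (illustrative, added here, with hypothetical `dot` and `cross` helpers):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cross(x, y):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x2 * y3 - x3 * y2, x3 * y1 - x1 * y3, x1 * y2 - x2 * y1)

# A 3-by-4 rectangle in the xy-plane: its area should be 12.
x = (3, 0, 0)
y = (0, 4, 0)
c = cross(x, y)
area = math.sqrt(dot(c, c))  # ||x × y||
print(area)  # 12.0
```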
