diff --git a/Chapter2/LinearCombinations.md b/Chapter2/LinearCombinations.md index 0c2ebf1be869c4305dbf0cd91411584df0fb4353..9eedcca866d2dc7b0a5e09b4e7723389663f6cfd 100644 --- a/Chapter2/LinearCombinations.md +++ b/Chapter2/LinearCombinations.md @@ -1,9 +1,10 @@ (Sec:LinearCombinations)= -# Linear Combinations -::::{prf:definition} +# Linear Combinations -Let $\mathbf{v}_1, \ldots, \mathbf{v}_n$ be vectors in $\mathbb{R}^m$. Any expression of the form +::::{prf:definition} + +Let $\mathbf{v}_1, \ldots, \mathbf{v}_n$ be vectors in $\mathbb{R}^m$. Any expression of the form $$ x_1 \mathbf{v_1}+\cdots+x_n \mathbf{v_n}, @@ -13,24 +14,24 @@ where $x_1, \ldots, x_n$ are real numbers, is called a **linear combination** of :::: - -::::{prf:example} +::::{prf:example} The vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are two vectors in the plane $\mathbb{R}^2$. As we can see in {numref}`Figure %s <Fig:LinearCombinations:LinearCombinations>`, the vector $\mathbf{u}$ is a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ since it can be written as $\mathbf{u}=2\mathbf{v}_1+\mathbf{v}_2$. The vector $\mathbf{w}$ is a linear combination of these two vectors as well. It can be written as $\mathbf{w}=-3\mathbf{v}_1+2\mathbf{v}_2$. -:::{figure} Images/Fig-LinearCombinations-LinComb.svg +```{applet} +:url: linear_combinations/linearcombinations +:fig: Images/Fig-LinearCombinations-LinComb.svg :name: Fig:LinearCombinations:LinearCombinations +:status: approved Linear combinations of vectors in the plane. - -::: +``` :::: -If we want to determine whether a given vector is a linear combination of other vectors, then we can do that using systems of equations. +If we want to determine whether a given vector is a linear combination of other vectors, then we can do that using systems of equations. - -::::{prf:example} +::::{prf:example} $$ \mathbf{v_1}= @@ -41,7 +42,7 @@ $$ Is the vector $\mathbf{b}$ a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$? We can use the definition of a linear combination to solve this problem. If $\mathbf{b}$ is in fact a linear combination of the two other vectors, then it can be written as $x_1 \mathbf{v}_1+x_2 \mathbf{v}_2$. This means that we should verify whether the system of equations $x_1 \mathbf{v}_1+x_2 \mathbf{v}_2=\mathbf{b}$ has a solution. -The equation +The equation $$ x_1 @@ -50,26 +51,25 @@ x_1 \begin{bmatrix} -1 \\ 3 \\ 0 \end{bmatrix} $$ -is equivalent to the system +is equivalent to the system $$ \left\{\begin{array}{l} x_1+3x_2=-1 \\ 2x_1+x_2=3 \\ x_1+2x_2=0\end{array} \right. $$ -The augmented matrix of this system of equations is equal to - +The augmented matrix of this system of equations is equal to $$ \left[\begin{array}{cc|c} 1 & 3 & -1 \\ 2 & 1 & 3 \\ 1 & 2 & 0 \end{array}\right] $$ -and its reduced echelon form is equal to +and its reduced echelon form is equal to $$ \left[\begin{array}{cc|c} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right]. $$ -This means that $\mathbf{b}$ is indeed a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$. +This means that $\mathbf{b}$ is indeed a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$. $$ 2 @@ -82,8 +82,7 @@ We have found that $\mathbf{b}$ can be written as $2\mathbf{v}_1-\mathbf{v_2}$. 
:::: - -::::{prf:example} +::::{prf:example} $$ \mathbf{v_1}= @@ -96,9 +95,7 @@ In this case it is a lot easier to decide whether $\mathbf{b}$ is a linear combi :::: - - -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/ac63b286-09e1-46e5-91fc-952b54436293?id=78560 :label: grasple_exercise_2_2_A :dropdown: @@ -106,7 +103,7 @@ In this case it is a lot easier to decide whether $\mathbf{b}$ is a linear combi :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/bd263ac1-b906-48dc-a898-d959254d9681?id=70163 :label: grasple_exercise_2_2_B :dropdown: @@ -114,14 +111,11 @@ In this case it is a lot easier to decide whether $\mathbf{b}$ is a linear combi :::: - - -## Span +## Span In linear algebra it is often important to know whether each vector in $\mathbb{R}^n$ can be written as a linear combination of a set of given vectors. In order to investigate when it is possible to write any given vector as a linear combination of a set of given vectors we introduce the notion of a **span**. - -::::{prf:definition} +::::{prf:definition} :label: Dfn:LinearCombinations:Span Let $S$ be a set of vectors. The set of all linear combinations $a_1\mathbf{v}_1+a_2\mathbf{v}_2+ \cdots +a_k \mathbf{v}_k$, where $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are vectors in $S$, will be called the **span** of those vectors and will be denoted as $\Span{S}$. @@ -132,16 +126,15 @@ The span of an empty collection of vectors will be defined as the set that only :::: +::::{prf:remark} -::::{prf:remark} - -The collection $\Span{\mathbf{v}_1, \ldots, \mathbf{v}_k}$ always contains all of the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$. This is true since each vector $\mathbf{v}_i$ can be written as the linear combination +The collection $\Span{\mathbf{v}_1, \ldots, \mathbf{v}_k}$ always contains all of the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$. This is true since each vector $\mathbf{v}_i$ can be written as the linear combination $$ 0\mathbf{v}_1+\cdots+1\mathbf{v}_i+\cdots +0\mathbf{v}_k. $$ -Moreover, the span of any set of vectors always contains the zero vector. Whatever set of vectors we start with, we can always write +Moreover, the span of any set of vectors always contains the zero vector. Whatever set of vectors we start with, we can always write $$ \mathbf{0}=0\mathbf{v}_1+0\mathbf{v}_2+\cdots +0\mathbf{v}_k. @@ -151,27 +144,26 @@ $$ The following examples will give us a bit of an idea what spans look like. - -::::{prf:example} +::::{prf:example} :label: Ex:LinearCombinations:SpanOfOneVector What does the span of a single non-zero vector look like? A linear combination of a vector $\mathbf{v}$ is of the form $x\mathbf{v}$, where $x$ is some real number. Linear combinations of a single vector $\mathbf{v}$ are thus just multiples of that vector. This means that $\Span{\mathbf{v}}$ is simply the collection of all vectors on the line through the origin and with directional vector $\mathbf{v}$ as we can see in {numref}`Figure %s <Fig:LinearCombinations:SpanOneVectors>`. -:::{figure} Images/Fig-LinearCombinations-SpanOne.svg +```{applet} +:url: linear_combinations/span_one +:fig: Images/Fig-LinearCombinations-SpanOne.svg :name: Fig:LinearCombinations:SpanOneVectors +:status: approved The span of a single non-zero vector. 
- -::: +``` :::: +::::{prf:example} +:label: Ex:LinearCombinations:SpanOfTwoVectors -::::{prf:example} -:label: Ex:LinearCombinations:SpanOfTwoVectors - -Let $\mathbf{u}$ and $\mathbf{v}$ be two non-zero vectors in $\mathbb{R}^3$, as depicted in {numref}`Figure %s <Fig:LinearCombinations:SpanTwoVectors>`. What does the span of these vectors look like? By definition, $\Span{\mathbf{u}, \mathbf{v}}$ contains all linear combinations of $\mathbf{u}$ and $\mathbf{v}$. Each of these linear combinations is of the form - +Let $\mathbf{u}$ and $\mathbf{v}$ be two non-zero vectors in $\mathbb{R}^3$, as depicted in {numref}`Figure %s <Fig:LinearCombinations:SpanTwoVectors>`. What does the span of these vectors look like? By definition, $\Span{\mathbf{u}, \mathbf{v}}$ contains all linear combinations of $\mathbf{u}$ and $\mathbf{v}$. Each of these linear combinations is of the form $$ x_1\mathbf{u}+x_2\mathbf{v} \quad \textrm{$x_1$, $x_2$ in $\mathbb{R}$}. @@ -179,60 +171,73 @@ $$ This looks like the parametric vector equation of a plane. Since the span must contain the zero vector we find that we obtain a plane through the origin like in {numref}`Figure %s <Fig:LinearCombinations:SpanTwoVectors>`. - -:::{figure} Images/Fig-LinearCombinations-SpanTwoPlane.svg -:name: Fig:LinearCombinations:SpanTwoVectors - -The span of two non-zero, non-parallel vectors. +:::{figure} +:name: ::: -:::: - +```{applet} +:url: linear_combinations/span_two_plane +:fig: Images/Fig-LinearCombinations-SpanTwoPlane.svg +:name: Fig:LinearCombinations:SpanTwoVectors +:status: approved -::::{prf:example} +The span of two non-zero, non-parallel vectors. +``` +:::: +::::{prf:example} The span of two non-zero vectors does not need to be a plane through the origin. If $\mathbf{u}$ and $\mathbf{v}$ are parallel, as in {numref}`Figure %s <Fig:LinearCombinations:SpanTwoParallelVectors>`, then the span is actually a line through the origin. -:::{figure} Images/Fig-LinearCombinations-SpanTwoLine.svg +```{applet} +:url: linear_combinations/span_two_line +:fig: Images/Fig-LinearCombinations-SpanTwoLine.svg :name: Fig:LinearCombinations:SpanTwoParallelVectors +:status: approved The span of two non-zero, parallel vectors. - -::: +``` If two non-zero vectors $\mathbf{u}$ and $\mathbf{v}$ are parallel, then $\mathbf{v}$ can be written as a multiple of $\mathbf{u}$. Assume for example that $\mathbf{v}=2\mathbf{u}$. Any linear combination $x_1\mathbf{u}+x_2\mathbf{v}$ can then be written as $x_1\mathbf{u}+2x_2\mathbf{u}$ or $(x_1+2x_2)\mathbf{u}$. This means that in this case each vector in the span of $\mathbf{u}$ and $\mathbf{v}$ is a multiple of $\mathbf{u}$. Therefore, the span will be a line through the origin. :::: - -::::{prf:example} +::::{prf:example} If we start with three non-zero vectors in $\mathbb{R}^3$, then the resulting span may take on different forms. The span of the three vectors in {numref}`Figure %s <Fig:LinearCombinations:SpanThreeVectors1>`, for example, is equal to the entire space $\mathbb{R}^3$. In {numref}`Sec:BasisDim` we will see why this is the case. -:::{figure} Images/Fig-LinearCombinations-SpanThreeR3.svg -:name: Fig:LinearCombinations:SpanThreeVectors1 - -The span of three vectors. +:::{figure} +:name: ::: +```{applet} +:url: linear_combinations/span_three +:fig: Images/Fig-LinearCombinations-SpanThreeR3.svg +:name: Fig:LinearCombinations:SpanThreeVectors1 +:status: approved + +The span of three vectors. 
+``` + On the other hand, if we start with the three vectors that you can see in {numref}`Figure %s <Fig:LinearCombinations:SpanThreeVectors2>`, then the span is equal to a plane through the origin. -:::{figure} Images/Fig-LinearCombinations-SpanThreePlane.svg +```{applet} +:url: linear_combinations/span_three_plane +:fig: Images/Fig-LinearCombinations-SpanThreePlane.svg :name: Fig:LinearCombinations:SpanThreeVectors2 +:status: approved -The span of three vectors lying in the same plane. - -::: +The span of three vectors lying in the same plane. +``` There is also a possibility where the span of three non-zero vectors in $\mathbb{R}^3$ is equal to a line through the origin. Can you figure out when this happens? :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/676d672c-74fc-4545-99ba-6b308af566ce?id=78542 :label: grasple_exercise_2_2_C :dropdown: @@ -240,23 +245,17 @@ There is also a possibility where the span of three non-zero vectors in $\mathbb :::: - - - - We will now look at a very specific set of vectors in $\mathbb{R}^n$ of which the span is always the entire space $\mathbb{R}^n$. - -::::{prf:definition} +::::{prf:definition} Suppose we are working in $\mathbb{R}^n$. Let $\mathbf{e}_k$ be the vector of which all components are equal to 0, with the exception that the entry on place $k$ is equal to 1. The vectors $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ will be called the **standard basis** of $\mathbb{R}^n$. :::: +::::{prf:example} -::::{prf:example} - -The following vectors form the standard basis for $\mathbb{R}^2$. +The following vectors form the standard basis for $\mathbb{R}^2$. $$ \mathbf{e}_1= @@ -264,15 +263,14 @@ $$ \begin{bmatrix} 0 \\ 1 \end{bmatrix} \nonumber $$ -Each vector $\mathbf{v}$ can be written as a linear combination of the vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ in a unique way. Later on we will call each collection of vectors with this property a **basis** for $\mathbb{R}^2$. If +Each vector $\mathbf{v}$ can be written as a linear combination of the vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ in a unique way. Later on we will call each collection of vectors with this property a **basis** for $\mathbb{R}^2$. If $$ \mathbf{v}= \begin{bmatrix} a \\ b \end{bmatrix}, \nonumber $$ -then clearly we have that - +then clearly we have that $$ \mathbf{v}=a @@ -284,10 +282,9 @@ It is easy to see that this is the only linear combination of $\mathbf{e}_1$ and :::: +::::{prf:example} -::::{prf:example} - -The three vectors below form the standard basis for $\mathbb{R}^3$. +The three vectors below form the standard basis for $\mathbb{R}^3$. $$ \mathbf{e}_1= @@ -300,31 +297,30 @@ Here too, it is true that each vector in $\mathbb{R}^3$ can be written as a uniq :::: - -::::{prf:proposition} +::::{prf:proposition} :label: Prop:LinearCombinations:SpanStandardBasis If $(\mathbf{e}_1, \ldots, \mathbf{e}_n)$ is the standard basis for $\mathbb{R}^n$, then $\Span{\mathbf{e}_1, \ldots, \mathbf{e}_n}$ is equal to $\mathbb{R}^n$. 
:::: -::::{prf:proof} +::::{prf:proof} -Take an arbitrary vector $\mathbf{v}$ in $\mathbb{R}^n$ with +Take an arbitrary vector $\mathbf{v}$ in $\mathbb{R}^n$ with $$ \mathbf{v}= \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}.\nonumber $$ -The vector $\mathbf{v}$ can be written as +The vector $\mathbf{v}$ can be written as \begin{align*} \mathbf{v} &= a_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}+a_2 \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}+ \ldots a_n \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} \\ -&= a_n\mathbf{e}_1+a_2\mathbf{v}_2+\ldots +a_n\mathbf{e}_n. +&= a_n\mathbf{e}\_1+a_2\mathbf{v}\_2+\ldots +a_n\mathbf{e}\_n. \end{align*} This means that $\mathbf{v}$ is in the span of $\mathbf{e}_1, \ldots, \mathbf{e}_n$. @@ -337,7 +333,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of ## Grasple Exercises -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/9c780d10-9a8f-4fd6-9471-3f1a0e46c009?id=70171 :label: grasple_exercise_2_2_1 :dropdown: @@ -345,7 +341,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.f74168ff-a448-4420-88d9-ebe7365a00a9?id=70172com/exercises/ :label: grasple_exercise_2_2_2 :dropdown: @@ -353,7 +349,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/b760d9b9-d0ba-4875-b828-397e7a045283?id=70175 :label: grasple_exercise_2_2_3 :dropdown: @@ -363,7 +359,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of % ------------------------------------------------ -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/a8175390-3844-408c-b192-c4b05f9beb7b?id=70170 :label: grasple_exercise_2_2_4 :dropdown: @@ -371,8 +367,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of :::: - -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/fab5c526-91ed-407b-9faa-645f40c22b8b?id=70169 :label: grasple_exercise_2_2_5 :dropdown: @@ -380,36 +375,33 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/2167085c-2498-4694-9eac-abfeeb0ec307?id=70162 :label: grasple_exercise_2_2_6 :dropdown: -:description: About the interpretation of Span$\{\vect{a}_1,\vect{a}_2\}$. +:description: About the interpretation of Span$\{\vect{a}_1,\vect{a}_2\}$. :::: - % ------------------------------------------------ -::::{grasple} -:url: https://embed.grasple.com/exercises/493831d9-ab4a-4f78-b9ea-7b707aa9f4c2?id=70174 +::::{grasple} +:url: https://embed.grasple.com/exercises/493831d9-ab4a-4f78-b9ea-7b707aa9f4c2?id=70174 :label: grasple_exercise_2_2_7 :dropdown: :description: Checking whether a vector is a linear combination of the columns of a matrix $A$. :::: - -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/c008320d-9d0e-463f-8bb7-344988f10438?id=70176 -:label: grasple_exercise_2_2_8 +:label: grasple_exercise_2_2_8 :dropdown: :description: About the difference between $\{\vect{a}_1,\vect{a}_2,\vect{a}_3\}$ and Span$\{\vect{a}_1,\vect{a}_2,\vect{a}_3\}$. 
:::: - -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/b4f4dc1f-4f56-41e8-b16d-a2694e90890c?id=70181 :label: grasple_exercise_2_2_9 :dropdown: @@ -419,7 +411,7 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of % ------------------------------------------------ -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/45bc5527-e79b-4198-b6b7-9b3168d9d1ff?id=70182 :label: grasple_exercise_2_2_10 :dropdown: @@ -429,11 +421,10 @@ In {prf:ref}`Prop:LinearCombinations:SpanStandardBasis` we saw that the span of % ------------------------------------------------ -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/7fcebe18-474c-4995-9c81-f1da7ab4cc5e?id=70360 :label: grasple_exercise_2_2_11 :dropdown: :description: Conversion between vector equation and linear system. - -:::: \ No newline at end of file +:::: diff --git a/Chapter3/GeometryofLinearTransformations.md b/Chapter3/GeometryofLinearTransformations.md index e5ff00a393547cfef79bb30681e123f980f11b39..b96197bc9b2e33294e0ce5cbc41fadc0f87372c4 100644 --- a/Chapter3/GeometryofLinearTransformations.md +++ b/Chapter3/GeometryofLinearTransformations.md @@ -1,34 +1,29 @@ (Sec:GeomLinTrans)= -# Some Important Classes of Linear Transformations - - - -We have seen in {numref}`Subsec:LinTrafo:LinTrafo` that any matrix corresponds to a linear transformation and that vice versa every linear transformation corresponds to a matrix. In this section, we will study some particularly noteworthy classes of linear transformations in more depth. +# Some Important Classes of Linear Transformations +We have seen in {numref}`Subsec:LinTrafo:LinTrafo` that any matrix corresponds to a linear transformation and that vice versa every linear transformation corresponds to a matrix. In this section, we will study some particularly noteworthy classes of linear transformations in more depth. (Subsec:GeomLinTrans:Proj)= -## Projections +## Projections -One of the simplest types of linear transformation takes a vector and sets one of its entries equal to $0$. For example, we can look at the linear transformation +One of the simplest types of linear transformation takes a vector and sets one of its entries equal to $0$. For example, we can look at the linear transformation $$ T:\mathbb{R}^{2}\to\mathbb{R}^{2},\quad\begin{bmatrix}a_{1}\\a_{2}\end{bmatrix}\mapsto \begin{bmatrix}a_{1}\\0\end{bmatrix}. $$ -Geometrically, this is the linear transformation which squashes the plane flat onto the $x$-axis. In slightly less informal terms, it is the transformation which projects the plane onto the $x$-axis. - -Using the orthogonal projections defined -in {prf:ref}`Dfn:InnerProduct:OrthoProjectionOntoVector`, this can be generalised as follows. If $\mathbf{v}$ if a vector in $\mathbb{R}^{n}$, then - +Geometrically, this is the linear transformation which squashes the plane flat onto the $x$-axis. In slightly less informal terms, it is the transformation which projects the plane onto the $x$-axis. +Using the orthogonal projections defined +in {prf:ref}`Dfn:InnerProduct:OrthoProjectionOntoVector`, this can be generalised as follows. If $\mathbf{v}$ if a vector in $\mathbb{R}^{n}$, then $$ T_{\mathbf{v}}:\mathbb{R}^{n}\to\mathbb{R}^{n},\quad\mathbf{w}\mapsto\text{proj}_{\mathbf{v}}(\mathbf{w}) $$ -is the linear transformation which projects the vector $\mathbf{w}$ onto the line through $\mathbf{v}$. In other words, it maps a vector $\mathbf{w}$ to the closest multiple of $\mathbf{v}$. 
This transformation with +is the linear transformation which projects the vector $\mathbf{w}$ onto the line through $\mathbf{v}$. In other words, it maps a vector $\mathbf{w}$ to the closest multiple of $\mathbf{v}$. This transformation with $$ \mathbf{v}=\begin{bmatrix}2\\1\end{bmatrix} @@ -36,79 +31,54 @@ $$ can be seen on the left in {numref}`Figure %s <Fig:GeomLinTrans:ProjinR2>`. Let us briefly verify that it really is a linear transformation. - - ::::::{prf:proposition} For any vector $\mathbf{v}$ in $\mathbb{R}^{n}$, the map - $$ T_{\mathbf{v}}:\mathbf{w}\mapsto\proj_{\mathbf{v}}(\mathbf{w}) $$ is a linear transformation. - :::::: - - - - ::::::{prf:proof} -The proof is a simple application of the definitions. For any $\mathbf{w}_{1},\mathbf{w}_{2}$ in $\mathbb{R}^{n}$, we have +The proof is a simple application of the definitions. For any $\mathbf{w}_{1},\mathbf{w}_{2}$ in $\mathbb{R}^{n}$, we have \begin{align*} - T_{\mathbf{v}}(\mathbf{w}_{1}+\mathbf{w}_{2})&=\proj_{\mathbf{v}}(\mathbf{w}_{1}+\mathbf{w}_{2})=\frac{(\mathbf{w}_{1}+\mathbf{w}_{2})\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}=\frac{\mathbf{w}_{1}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}+\frac{\mathbf{w}_{2}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}\\ - &=\proj_{\mathbf{v}}(\mathbf{w}_{1})+\proj_{\mathbf{v}}(\mathbf{w}_{2})=T_{\mathbf{v}}(\mathbf{w}_{1})+T_{\mathbf{v}}(\mathbf{w}_{2}). -\end{align*} +T*{\mathbf{v}}(\mathbf{w}_{1}+\mathbf{w}_{2})&=\proj*{\mathbf{v}}(\mathbf{w}*{1}+\mathbf{w}_{2})=\frac{(\mathbf{w}_{1}+\mathbf{w}_{2})\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}=\frac{\mathbf{w}_{1}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}+\frac{\mathbf{w}_{2}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}\\ +&=\proj_{\mathbf{v}}(\mathbf{w}_{1})+\proj_{\mathbf{v}}(\mathbf{w}_{2})=T_{\mathbf{v}}(\mathbf{w}_{1})+T_{\mathbf{v}}(\mathbf{w}_{2}). +\end{align_} Similarly, for any $\mathbf{w}$ in $\mathbb{R}^{n}$ and any $c$ in $\mathbb{R}$ we have \begin{align*} - T_{\mathbf{v}}(c\mathbf{w})&=\proj_{\mathbf{v}}(c\mathbf{w})=\frac{(c\mathbf{w})\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}=c\,\frac{\mathbf{w}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}\\ - &=c\,\proj_{\mathbf{v}}(\mathbf{w})=c\,T_{\mathbf{v}}(\mathbf{w}), +T*{\mathbf{v}}(c\mathbf{w})&=\proj*{\mathbf{v}}(c\mathbf{w})=\frac{(c\mathbf{w})\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}=c\,\frac{\mathbf{w}\ip\mathbf{v}}{\mathbf{v}\ip\mathbf{v}}\mathbf{v}\\ +&=c\,\proj*{\mathbf{v}}(\mathbf{w})=c\,T*{\mathbf{v}}(\mathbf{w}), \end{align*} which finishes the proof. - :::::: - - - -The following proposition allows us to quickly find the standard matrix of the projection onto an arbitrary line in $\mathbb{R}^{2}$. This will be useful later on in this section, e.g. in the proof of {prf:ref}`Prop:GeomLinTrans:MatofReflinPlane`. - - +The following proposition allows us to quickly find the standard matrix of the projection onto an arbitrary line in $\mathbb{R}^{2}$. This will be useful later on in this section, e.g. in the proof of {prf:ref}`Prop:GeomLinTrans:MatofReflinPlane`. ::::::{prf:proposition} :label: Prop:GeomLinTrans:MatofProjonLine -Let $\mathcal{L}$ be the line in the plane that passes through the origin and that makes an angle of $\theta$ with the positive $x$-axis. The projection $T_{\mathcal{L}}$ on $\mathcal{L}$ has standard matrix +Let $\mathcal{L}$ be the line in the plane that passes through the origin and that makes an angle of $\theta$ with the positive $x$-axis. 
The projection $T_{\mathcal{L}}$ on $\mathcal{L}$ has standard matrix $$ P=\begin{bmatrix}\cos^{2}(\theta)&\sin(\theta)\cos(\theta)\\\sin(\theta)\cos(\theta)&\sin^{2}(\theta)\end{bmatrix}. $$ - - - :::::: - - - - ::::::{prf:proof} - - -The vector +The vector $$ \mathbf{v}=\begin{bmatrix}\cos(\theta)\\\sin(\theta)\end{bmatrix} $$ - is a unit vector on $\mathcal{L}$. Using the fact that $\mathbf{u}-\proj_{\mathcal{L}}(\mathbf{u})$ makes a right angle with $\mathcal{L}$ for any vector $\mathbf{u}$, we find that $\proj_{\mathcal{L}}(\mathbf{e}_{1})$ has length $\cos(\theta)$ (cf. {numref}`Figure %s <Fig:GeomLinTrans:MatofProjonLine>`). Since $\proj_{\mathcal{L}}(\mathbf{e}_{1})$ is a vector in the direction of $\mathbf{v}$ and since $\mathbf{v}$ has length $1$, the first column of $P$ is as claimed. - - +is a unit vector on $\mathcal{L}$. Using the fact that $\mathbf{u}-\proj_{\mathcal{L}}(\mathbf{u})$ makes a right angle with $\mathcal{L}$ for any vector $\mathbf{u}$, we find that $\proj_{\mathcal{L}}(\mathbf{e}_{1})$ has length $\cos(\theta)$ (cf. {numref}`Figure %s <Fig:GeomLinTrans:MatofProjonLine>`). Since $\proj_{\mathcal{L}}(\mathbf{e}_{1})$ is a vector in the direction of $\mathbf{v}$ and since $\mathbf{v}$ has length $1$, the first column of $P$ is as claimed. ::::{figure} Images/Fig-GeomLinTrans-MatofProjonLine.svg :name: Fig:GeomLinTrans:MatofProjonLine @@ -116,22 +86,14 @@ $$ The projection of $\mathbf{e}_{1}$ on the line $\mathcal{L}$ that makes an angle $\theta$ with the positive $x$-axis. Note that the length of $T_{\mathcal{L}}(\mathbf{e}_{1})$ is $\cos(\theta)$ since the length of $\mathbf{e}_{1}$ is $1$. :::: - - -That the second column is as claimed, too, can be shown analogously. We leave it as an exercise for the interested reader. - +That the second column is as claimed, too, can be shown analogously. We leave it as an exercise for the interested reader. :::::: - - - Often, you might have not the angle $\mathcal{L}$ makes with the positive $x$ axis, but rather a vector $\mathbf{v}$ on $\mathcal{L}$. In this case, too, you can find the standard matrix of the projection on $\mathcal{L}$ quite easily. - - ::::::{prf:proposition} -Let $\mathcal{L}$ be a line that passes through the origin in the direction of $\mathbf{v}=\begin{bmatrix} v_{1}\\v_{2}\end{bmatrix}$. The projection $T_{\mathcal{L}}$ has standard matrix +Let $\mathcal{L}$ be a line that passes through the origin in the direction of $\mathbf{v}=\begin{bmatrix} v_{1}\\v_{2}\end{bmatrix}$. The projection $T_{\mathcal{L}}$ has standard matrix $$ P=\frac{1}{v_{1}^{2}+v_{2}^{2}}\begin{bmatrix} @@ -140,52 +102,35 @@ v_{1}v_{2}&v_{2}^{2} \end{bmatrix}. $$ - :::::: - ::::::{prf:proof} It suffices to find the cosine and sine of the angle $\mathcal{L}$ makes with the positive $x$-axis in terms of $v_{1}$ and $v_{2}$. We leave this as an exercise. :::::: - One salient fact about these projections is that they act as the identity on their range. That is, for any vector $\mathbf{w}$ in the range of $T$ we have $T(T(\mathbf{w}))=T(\mathbf{w})$. This leads us to the following definition: - - ::::::{prf:definition} A linear transformation $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is called a **projection** if $T\circ T=T$. - :::::: - - - ::::::{prf:proposition} :label: Prop:GeomLinTrans:ProjSquaredisProj -An $n\times n$-matrix $P$ is the standard matrix of a projection if and only if $P^{2}=P$. - +An $n\times n$-matrix $P$ is the standard matrix of a projection if and only if $P^{2}=P$. 
:::::: - - ::::::{prf:proof} - We leave this as an exercise. - :::::: - - - -It turns out that not all projections look like the ones discussed in Section {numref}`Sec:DotProduct`, not even if we restrict ourselves to a plane. Consider for example the following construction. Let $\mathbf{v}$ be any non-zero vector in $\mathbb{R}^{2}$ and let $\mathcal{L}$ be the line through $\mathbf{v}$ and the origin. Let $\mathbf{w}$ be a vector in $\mathbb{R}^{2}$ which does not lie on $\mathcal{L}$. For any vector $\mathbf{u}$, we define $\mathcal{L}_{\mathbf{u}}$ as the line through $\mathbf{u}$ in the direction $\mathbf{w}$. We now define the transformation $T$ which maps a vector $\mathbf{u}$ to the intersection of $\mathcal{L}_{\mathbf{u}}$ and $\mathcal{L}$. For +It turns out that not all projections look like the ones discussed in Section {numref}`Sec:DotProduct`, not even if we restrict ourselves to a plane. Consider for example the following construction. Let $\mathbf{v}$ be any non-zero vector in $\mathbb{R}^{2}$ and let $\mathcal{L}$ be the line through $\mathbf{v}$ and the origin. Let $\mathbf{w}$ be a vector in $\mathbb{R}^{2}$ which does not lie on $\mathcal{L}$. For any vector $\mathbf{u}$, we define $\mathcal{L}_{\mathbf{u}}$ as the line through $\mathbf{u}$ in the direction $\mathbf{w}$. We now define the transformation $T$ which maps a vector $\mathbf{u}$ to the intersection of $\mathcal{L}_{\mathbf{u}}$ and $\mathcal{L}$. For $$ \mathbf{v}=\begin{bmatrix}2\\1\end{bmatrix}\quad\text{and}\quad\mathbf{w}=\begin{bmatrix}-2\\1\end{bmatrix} @@ -193,69 +138,50 @@ $$ this projection is depicted on the right in {numref}`Figure %s <Fig:GeomLinTrans:ProjinR2>`. It is an example of a non-orthogonal (or **oblique**) projection. Of course, we again have to check that this is really a linear transformation. - - ::::{figure} Images/Fig-GeomLinTrans-ProjinR2.svg :name: Fig:GeomLinTrans:ProjinR2 On the left an orthogonal projection $T_{1}$ acting on a few selected vectors $\mathbf{u}_{1}$, $\mathbf{u}_{2}$, and $\mathbf{u}_{3}$. On the right a non-orthogonal projection $T_{2}$ acting on some selected vectors $\mathbf{v}_{1}$, $\mathbf{v}_{2}$, and $\mathbf{v}_{3}$. In both cases, the blue line represents the line $\mathcal{L}$ in the direction of $\begin{bmatrix}2\\1\end{bmatrix}$. On the left, every vector $\mathbf{u}_{i}$ is mapped to the closest vector that lies on $\mathcal{L}$. On the right, every vector $\mathbf{v}_{i}$ is mapped to the intersection of $\mathcal{L}$ wih the line through $\mathbf{v}_{i}$ in the direction given by $\begin{bmatrix}-2\\1\end{bmatrix}$. :::: - - - - - ::::::{prf:proposition} Let $\mathcal{L}$ be a line through the origin and let $\mathbf{w}$ be a vector not on $\mathcal{L}$. The transformation $T:\mathbb{R}^{2}\to\mathbb{R}^{2}$ which maps a vector $\mathbf{u}$ to the intersection of $\mathcal{L}$ and the line through $\mathbf{u}$ in the direction of $\mathbf{w}$ is a linear transformation. - :::::: - - - - ::::::{prf:proof} - - - For any vector $\mathbf{u}$, there is a unique pair of real numbers $(c_{\mathbf{u}},d_{\mathbf{u}})$ such that $\mathbf{u}+c_{\mathbf{u}}\mathbf{w}=d_{\mathbf{u}}\mathbf{v}$. What $T$ does is map $\mathbf{u}$ to $d_{\mathbf{u}}\mathbf{v}$. Hence, for any two vectors $\mathbf{u}_{1},\mathbf{u}_{2}$ in $\mathbb{R}^{2}$, we have +For any vector $\mathbf{u}$, there is a unique pair of real numbers $(c_{\mathbf{u}},d_{\mathbf{u}})$ such that $\mathbf{u}+c_{\mathbf{u}}\mathbf{w}=d_{\mathbf{u}}\mathbf{v}$. 
What $T$ does is map $\mathbf{u}$ to $d_{\mathbf{u}}\mathbf{v}$. Hence, for any two vectors $\mathbf{u}_{1},\mathbf{u}_{2}$ in $\mathbb{R}^{2}$, we have \begin{align*} - \mathbf{u}_{1}+c_{\mathbf{u}_{1}}\mathbf{w}&=d_{\mathbf{u}_{1}}\mathbf{v}=T(\mathbf{u}_{1})\quad\text{and}\\ - \mathbf{u}_{2}+c_{\mathbf{u}_{2}}\mathbf{w}&=d_{\mathbf{u}_{2}}\mathbf{v}=T(\mathbf{u}_{2}).\\ -\end{align*} -Clearly, we also have +\mathbf{u}*{1}+c*{\mathbf{u}*{1}}\mathbf{w}&=d*{\mathbf{u}*{1}}\mathbf{v}=T(\mathbf{u}_{1})\quad\text{and}\\ +\mathbf{u}_{2}+c*{\mathbf{u}*{2}}\mathbf{w}&=d*{\mathbf{u}*{2}}\mathbf{v}=T(\mathbf{u}_{2}).\\ +\end{align_} +Clearly, we also have $$ (\mathbf{u}_{1}+\mathbf{u}_{2})+(c_{\mathbf{u}_{1}}+c_{\mathbf{u}_{2}})\mathbf{w}=(d_{\mathbf{u}_{1}}+d_{\mathbf{u}_{2}}) \mathbf{v} $$ - so $T(\mathbf{u}_{1}+\mathbf{u}_{2})=(d_{\mathbf{u}_{1}}+d_{\mathbf{u}_{2}}) +so $T(\mathbf{u}_{1}+\mathbf{u}_{2})=(d_{\mathbf{u}_{1}}+d_{\mathbf{u}_{2}}) \mathbf{v}=T(\mathbf{u_{1}})+T(\mathbf{u}_{2})$. The proof that $T(c\mathbf{u})=cT(\mathbf{u})$ for any $\mathbf{u}$ in $\mathbb{R}^{2}$ and any scalar $c$ is analogous. We leave it as an exercise. - :::::: - - - -Let us try to find the standard matrix of the transformation $T$ we just defined. Its first column is the intersection of $\mathcal{L}$ with $\mathcal{L}_{e_{1}}$. This intersection is given by: +Let us try to find the standard matrix of the transformation $T$ we just defined. Its first column is the intersection of $\mathcal{L}$ with $\mathcal{L}_{e_{1}}$. This intersection is given by: $$ \begin{bmatrix}1\\0\end{bmatrix}+t\begin{bmatrix}-2\\1\end{bmatrix}=s\begin{bmatrix}2\\1\end{bmatrix} \Longleftrightarrow \begin{cases} 1=2s+2t\\0=s-t\end{cases}\Longleftrightarrow s=t=\frac{1}{4} $$ - so $T(e_{1})=\begin{bmatrix}\frac{1}{2}\\\frac{1}{4}\end{bmatrix}$. The second column of the standard matrix of $T$ is the intersection of $\mathcal{L}$ with $\mathcal{L}_{e_{2}}$. We find this intersection in a similar fashion: - +so $T(e_{1})=\begin{bmatrix}\frac{1}{2}\\\frac{1}{4}\end{bmatrix}$. The second column of the standard matrix of $T$ is the intersection of $\mathcal{L}$ with $\mathcal{L}_{e_{2}}$. We find this intersection in a similar fashion: $$ \begin{bmatrix}0\\1\end{bmatrix}+t\begin{bmatrix}-2\\1\end{bmatrix}=s\begin{bmatrix}2\\1\end{bmatrix} \Longleftrightarrow \begin{cases}0=2s+2t\\1=s-t\end{cases}\Longleftrightarrow \frac{1}{2}=s=-t $$ - so $T(e_{2})=\begin{bmatrix}1\\\frac{1}{2}\end{bmatrix}$ and we conclude that the standard matrix of $T$ is +so $T(e_{2})=\begin{bmatrix}1\\\frac{1}{2}\end{bmatrix}$ and we conclude that the standard matrix of $T$ is $$ P=\begin{bmatrix} @@ -263,110 +189,85 @@ P=\begin{bmatrix} \end{bmatrix}. $$ - - - -We can also consider projections in three dimensional space (cf. {numref}`Figure %s <Fig:GeomLinTrans:3DProj>`). If $\mathbf{v}$ is a vector in $\mathbb{R}^{3}$ and $\mathcal{L}$ is the line in the direction of $\mathbf{v}$, then +We can also consider projections in three dimensional space (cf. {numref}`Figure %s <Fig:GeomLinTrans:3DProj>`). If $\mathbf{v}$ is a vector in $\mathbb{R}^{3}$ and $\mathcal{L}$ is the line in the direction of $\mathbf{v}$, then $$ P:\mathbb{R}^{3}\to\mathbb{R}^{3},\quad\mathbf{w}\mapsto \proj_{\mathbf{v}}(\mathbf{w}) $$ -gives the orthogonal projection of the vector $\mathbf{w}$ on $\mathcal{L}$. -We can also consider the orthogonal projection on a plane in three dimensional space. 
Suppose the plane $\mathcal{P}$ is spanned by $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ and assume $\mathbf{v}_{1}\ip\mathbf{v}_{2}=0$, that is, assume $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ are orthogonal and non-zero. Then +gives the orthogonal projection of the vector $\mathbf{w}$ on $\mathcal{L}$. +We can also consider the orthogonal projection on a plane in three dimensional space. Suppose the plane $\mathcal{P}$ is spanned by $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ and assume $\mathbf{v}_{1}\ip\mathbf{v}_{2}=0$, that is, assume $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ are orthogonal and non-zero. Then $$ P:\mathbb{R}^{3}\to\mathbb{R}^{3},\mathbf{w}\mapsto \proj_{\mathbf{v}_{1}}(\mathbf{w})+\proj_{\mathbf{v}_{2}}(\mathbf{w}) $$ - gives the projection on $\mathcal{P}$. +gives the projection on $\mathcal{P}$. - - -::::{figure} Images/Fig-GeomLinTrans-3DProjonLine.svg +```{applet} +:url: geom_lin_trans/3d_proj_on_line +:fig: Images/Fig-GeomLinTrans-3DProjonLine.svg :name: Fig:GeomLinTrans:3DProj +:status: reviewed Projections in three dimensional space. On the left, the projection on a line $\mathcal{L}$, on the right the projection on a plane $\mathcal{P}$. -:::: - +``` - - -Let us briefly discuss what happens in higher dimensions. -Suppose $\mathbf{v}_{1},...,\mathbf{v}_{k}$ are non-zero -vectors in $\mathbb{R}^{n}$ which are all orthogonal to each other, -i.e. $\mathbf{v}_{i} \ip\mathbf{v}_{j}=0$ for all $i$ and $j$. -Put $V=\Span{\mathbf{v}_{1},...,\mathbf{v}_{k}}$. -For any vector $\mathbf{w}$ in $\mathbb{R}^{n}$ we can define the -**orthogonal projection** on $V$ as +Let us briefly discuss what happens in higher dimensions. +Suppose $\mathbf{v}_{1},...,\mathbf{v}_{k}$ are non-zero +vectors in $\mathbb{R}^{n}$ which are all orthogonal to each other, +i.e. $\mathbf{v}_{i} \ip\mathbf{v}_{j}=0$ for all $i$ and $j$. +Put $V=\Span{\mathbf{v}_{1},...,\mathbf{v}_{k}}$. +For any vector $\mathbf{w}$ in $\mathbb{R}^{n}$ we can define the +**orthogonal projection** on $V$ as $$ \proj_{V}(\mathbf{w})=\proj_{\mathbf{v}_{1}}(\mathbf{w})+\proj_{\mathbf{v}_{2}}(\mathbf{w})+\cdots+\proj_{\mathbf{v}_{k}}(\mathbf{w}). $$ - -Projections, especially orthogonal projections, play a very important role in linear algebra and we will encounter them quite a bit more in later sections. - - +Projections, especially orthogonal projections, play a very important role in linear algebra and we will encounter them quite a bit more in later sections. ## Reflections - - -A second important class of linear transformations with a very natural geometric interpretation is that of reflections. Let us consider a simple example. Suppose we let $\mathcal{L}$ be the line in the plane through +A second important class of linear transformations with a very natural geometric interpretation is that of reflections. Let us consider a simple example. Suppose we let $\mathcal{L}$ be the line in the plane through $$ \mathbf{0}\quad\text{and in the direction of}\quad \mathbf{v}=\begin{bmatrix}1\\1\end{bmatrix}. $$ - -We can define a transformation $T$ which reflects points in the plane along $\mathcal{L}$. (See {numref}`Figure %s <Fig:GeomLinTrans:ReflinR2>`.) It is easy to find the standard matrix $R$ of this transformation: the first standard basis vector $\mathbf{e}_{1}$ is mapped to $\mathbf{e}_{2}$ and, similarly, $\mathbf{e}_{2}$ is mapped to $\mathbf{e}_{1}$, so +We can define a transformation $T$ which reflects points in the plane along $\mathcal{L}$. (See {numref}`Figure %s <Fig:GeomLinTrans:ReflinR2>`.) 
It is easy to find the standard matrix $R$ of this transformation: the first standard basis vector $\mathbf{e}_{1}$ is mapped to $\mathbf{e}_{2}$ and, similarly, $\mathbf{e}_{2}$ is mapped to $\mathbf{e}_{1}$, so $$ R=\begin{bmatrix}0&1\\1&0\end{bmatrix}. $$ - - - - -::::{figure} Images/Fig-GeomLinTrans-ReflinR2.svg +```{applet} +:url: geom_lin_trans/reflect_in_r2 +:fig: Images/Fig-GeomLinTrans-ReflinR2.svg :name: Fig:GeomLinTrans:ReflinR2 +:status: reviewed -The reflection $R$ along the line $\mathcal{L}$ in the direction of $\mathbf{v}=\begin{bmatrix}1\\1\end{bmatrix}$. The vectors in red are mapped to the vector in blue by this reflection. -:::: +The reflection $R$ along the line $\mathcal{L}$ in the direction of $\mathbf{v}=\begin{bmatrix}1\\1\end{bmatrix}$. The vectors in red are mapped to the vector in blue by this reflection. +``` +So far so good. But how do we find the reflection over an arbitrary line $\mathcal{L}$? It turns out that the projections we have seen in Section {ref}`Subsec:GeomLinTrans:Proj` will help us out. Consider a line $\mathcal{L}$ and a vector $\mathbf{v}$ not in $\mathcal{L}$, as in {numref}`Figure %s <Fig:GeomLinTrans:ReflFromDoubleProj>`. In order to reflect $\mathbf{v}$ over $\mathcal{L}$, we first move it to the closest point on $\mathcal{L}$ and then move it the same distance again in the same direction. - - -So far so good. But how do we find the reflection over an arbitrary line $\mathcal{L}$? It turns out that the projections we have seen in Section {ref}`Subsec:GeomLinTrans:Proj` will help us out. Consider a line $\mathcal{L}$ and a vector $\mathbf{v}$ not in $\mathcal{L}$, as in {numref}`Figure %s <Fig:GeomLinTrans:ReflFromDoubleProj>`. In order to reflect $\mathbf{v}$ over $\mathcal{L}$, we first move it to the closest point on $\mathcal{L}$ and then move it the same distance again in the same direction. - -The closest point to $\mathbf{v}$ on $\mathcal{L}$ is the orthogonal projection $\text{proj}_{\mathcal{L}}(\mathbf{v})$. To get from $\mathbf{v}$ to the closest point on $\mathcal{L}$, we therefore have to subtract $\mathbf{v}-\text{proj}_{\mathcal{L}}(\mathbf{v})$ from $\mathbf{v}$ (See {numref}`Figure %s <Fig:GeomLinTrans:ReflFromDoubleProj>`.). So in order to reflect $\mathbf{v}$ over $\mathcal{L}$, we have to subtract the vector $\mathbf{v}-\text{proj}_{\mathcal{L}}(\mathbf{v})$ twice from our starting vector $\mathbf{v}$. This means that any $\mathbf{v}$ is mapped to $2\proj_{\mathcal{L}}(\mathbf{v})-\mathbf{v}$, so if we write $T$ for this transformation we find - +The closest point to $\mathbf{v}$ on $\mathcal{L}$ is the orthogonal projection $\text{proj}_{\mathcal{L}}(\mathbf{v})$. To get from $\mathbf{v}$ to the closest point on $\mathcal{L}$, we therefore have to subtract $\mathbf{v}-\text{proj}_{\mathcal{L}}(\mathbf{v})$ from $\mathbf{v}$ (See {numref}`Figure %s <Fig:GeomLinTrans:ReflFromDoubleProj>`.). So in order to reflect $\mathbf{v}$ over $\mathcal{L}$, we have to subtract the vector $\mathbf{v}-\text{proj}_{\mathcal{L}}(\mathbf{v})$ twice from our starting vector $\mathbf{v}$. This means that any $\mathbf{v}$ is mapped to $2\proj_{\mathcal{L}}(\mathbf{v})-\mathbf{v}$, so if we write $T$ for this transformation we find $$ T(\mathbf{v})=2\proj_{\mathcal{L}}(\mathbf{v})-\mathbf{v}=(2\proj_{\mathcal{L}}-I)\mathbf{v}. $$ - - - - ::::{figure} Images/Fig-GeomLinTrans-ReflFromDoubleProj.svg :name: Fig:GeomLinTrans:ReflFromDoubleProj Reflection along the line $\mathcal{L}$ can be seen as the transformation $2\proj_{\mathcal{L}}-I$. 
:::: - - - Keeping this in mind, it makes sense to define general reflections as follows. - - ::::::{prf:definition} -If $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is the orthogonal projection on $\text{range}(T)$ with standard matrix $P$, then +If $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is the orthogonal projection on $\text{range}(T)$ with standard matrix $P$, then $$ S:\mathbb{R}^{n}\to\mathbb{R}^{n},\mathbf{v}\mapsto (2P-I)\mathbf{v} @@ -374,49 +275,27 @@ $$ is the **reflection** over $\text{range}(T)$. - :::::: - - - Since any reflection is a linear combination of some projection and the identity, we arrive at the following proposition. - - ::::::{prf:proposition} Any reflection is a linear transformation. - :::::: - - - - ::::::{prf:proof} A reflection is by definition a sum of scaled linear transformations. As such, it is again a linear transformation. - :::::: - - - The following proposition guarantees that, as you would expect, applying a reflection twice leaves you back where you started. - - ::::::{prf:proposition} If $R$ is the standard matrix of a reflection, then $R^{2}=I$. - :::::: - - - - ::::::{prf:proof} We know that $R=2P-I$ where $P$ is the standard matrix of some projection. By {prf:ref}`Prop:GeomLinTrans:ProjSquaredisProj`, we have $P^{2}=P$ and therefore @@ -424,105 +303,73 @@ $$ R^{2}=(2P-I)(2P-I)=4P^{2}-4P+I=I. $$ - - :::::: - - - The definition of a reflection in combination with {prf:ref}`Prop:GeomLinTrans:MatofProjonLine` allows us to find the standard matrix for the reflection along any line through the origin in $\mathbb{R}^{2}$. - - ::::::{prf:proposition} :label: Prop:GeomLinTrans:MatofReflinPlane Let $\mathcal{L}$ be the line in the plane that passes through the origin and that makes an angle $\theta$ with the positive $x$-axis. The standard matrix of the reflection along $\mathcal{L}$ is - $$ A_{R_{\mathcal{L}}}=2\begin{bmatrix}\cos^{2}(\theta)&\sin(\theta)\cos(\theta)\\\sin(\theta)\cos(\theta)&\sin^{2}(\theta)\end{bmatrix}-I_{2}=\begin{bmatrix}\cos(2\theta)&\sin(2\theta)\\\sin(2\theta)&-\cos(2\theta)\end{bmatrix}. $$ - - - :::::: - - - - ::::::{prf:proof} - - -Exercise. For the second equality, remember the trigonometric identities +Exercise. For the second equality, remember the trigonometric identities $$ \sin(2\theta)=2\sin(\theta)\cos(\theta)\quad\text{and}\quad\cos(2\theta)=2\cos^{2}(\theta)-1=1-2\sin^{2}(\theta). $$ - - - :::::: - - - For large $n$ it is hard to picture what a reflection in $n$-dimensional space does. But for $n=3$ it is still doable. In fact, it is done in {numref}`Figure %s <Fig:GeomLinTrans:3DReflalongPlane>`. - - -::::{figure} Images/Fig-GeomLinTrans-3DReflalongPlane.svg +```{applet} +:url: geom_lin_trans/3d_refl_along_plane +:fig: Images/Fig-GeomLinTrans-3DReflalongPlane.svg :name: Fig:GeomLinTrans:3DReflalongPlane +:status: reviewed Reflection along the plane $\mathcal{P}$ in $\mathbb{R}^{3}$. -:::: - +``` One particularly interesting aspect of reflections is that they preserve lengths of vectors and angles between vectors. This is a consequence of {prf:ref}`Prop:GeomLinTrans:ReflDotProd`. - ::::{prf:proposition} :label: Prop:GeomLinTrans:ReflDotProd - If $S$ is a defined on $\R^{n}$, then for any $\vect{w}_{1},\vect{w}_{2}$ in $\R^{n}$ we have: $$S(\vect{w}_{1})\cdot S(\vect{w}_{2})=\vect{w}_{1}\cdot\vect{w}_{2}.$$ - :::: :::{prf:proof} -By definition, there is an orthogonal projection with standard matrix $P$ such that $S(\vect{w})=(2P-I)\vect{w}$. We assume $P$ is the projection on the span of a single vector $\vect{v}$. 
If there are more, the computations become considerably messier, but neither harder nor more enlightening. - +By definition, there is an orthogonal projection with standard matrix $P$ such that $S(\vect{w})=(2P-I)\vect{w}$. We assume $P$ is the projection on the span of a single vector $\vect{v}$. If there are more, the computations become considerably messier, but neither harder nor more enlightening. \begin{align*} -S(\vect{w}_{1})\cdot S(\vect{w}_{2})&=(2P-I)\vect{w}_{1}\cdot(2P-I)\vect{w}_{2}\\ +S(\vect{w}*{1})\cdot S(\vect{w}_{2})&=(2P-I)\vect{w}_{1}\cdot(2P-I)\vect{w}_{2}\\ &=(2\left(\frac{\vect{w_{1}}\cdot\vect{v}}{\vect{v}\cdot\vect{v}}\right)\vect{v}-\vect{w}_{1})\cdot (2\left(\frac{\vect{w_{2}}\cdot\vect{v}}{\vect{v}\cdot\vect{v}}\right)\vect{v}-\vect{w}_{2})\\ &=4\left(\frac{\vect{w}_{1}\cdot \vect{v}}{\vect{v}\cdot\vect{v}}\right)\left(\frac{\vect{w}_{2}\cdot \vect{v}}{\vect{v}\cdot\vect{v}}\right)\vect{v}\cdot\vect{v}-2\left(\frac{\vect{w}_{2}\cdot \vect{v}}{\vect{v}\cdot\vect{v}}\right)\vect{w}_{1}\cdot\vect{v}-2\left(\frac{\vect{w}_{1}\cdot \vect{v}}{\vect{v}\cdot\vect{v}}\right)\vect{w}_{2}\cdot\vect{v}+\vect{w}_{1}\cdot\vect{w}_{2}\\ &=\vect{w}_{1}\cdot\vect{w}_{2} -\end{align*} - +\end{align_} which proves the claim. ::: - ## Rotations - As we have seen in {prf:ref}`Prop:GeomLinTrans:ReflDotProd`, reflections preserve the dot product and therefore lengths of vectors and the angles between vectors. However, there are other transformations that do so. These other transformations are the rotations. Let us start with the definition. - - ::::::{prf:definition} A **rotation** is a transformation $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ that is not a reflection but such that for any $\mathbf{v}_{1},\mathbf{v}_{2}$ in $\mathbb{R}^{n}$ we have: @@ -532,28 +379,23 @@ For convenience, we will also call the identity transformation a rotation, even :::::: - - - ::::::{prf:proposition} :label: Prop:GeomLinTrans:RotsAreLinTrans Rotations are linear transformations. - :::::: - ::::::{prf:proof} -Let $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a rotation. Because of {prf:ref}`Prop:InnerProduct:DotProdGeometric`, we have $\mathbf{v}_{1}\ip \mathbf{v}_{2}=T(\mathbf{v}_{1})\ip T(\mathbf{v}_{2})$. A tedious but not terribly hard calculation now shows that, for every $\mathbf{v}_{1},\mathbf{v}_{2}$ in $\mathbb{R}^{n}$, +Let $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a rotation. Because of {prf:ref}`Prop:InnerProduct:DotProdGeometric`, we have $\mathbf{v}_{1}\ip \mathbf{v}_{2}=T(\mathbf{v}_{1})\ip T(\mathbf{v}_{2})$. A tedious but not terribly hard calculation now shows that, for every $\mathbf{v}_{1},\mathbf{v}_{2}$ in $\mathbb{R}^{n}$, $$ -\lVert T(\mathbf{v}_{1}+\mathbf{v}_{2})-T(\mathbf{v}_{1})-T(\mathbf{v}_{2})\rVert ^{2}=\lVert \mathbf{v}_{1}+\mathbf{v}_{2}-\mathbf{v}_{1}-\mathbf{v}_{2}\rVert^{2}=0. +\lVert T(\mathbf{v}_{1}+\mathbf{v}_{2})-T(\mathbf{v}_{1})-T(\mathbf{v}_{2})\rVert ^{2}=\lVert \mathbf{v}_{1}+\mathbf{v}_{2}-\mathbf{v}_{1}-\mathbf{v}_{2}\rVert^{2}=0. $$ This implies that $T(\mathbf{v}_{1}+\mathbf{v}_{2})-T(\mathbf{v}_{1})-T(\mathbf{v}_{2})=\mathbf{0}$, hence $T(\mathbf{v}_{1}+\mathbf{v}_{2})=T(\mathbf{v}_{1})+T(\mathbf{v}_{2})$. -Similarly, one can show that for any vector $\mathbf{v}$ in $\mathbb{R}^{n}$ and any scalar $c$, we have +Similarly, one can show that for any vector $\mathbf{v}$ in $\mathbb{R}^{n}$ and any scalar $c$, we have $$ \lVert T(c\mathbf{v})-cT(\mathbf{v})\rVert^{2}=\lVert c\mathbf{v}-c\mathbf{v}\rVert^{2}=0. 
@@ -561,24 +403,23 @@ $$ This then implies $T(c\mathbf{v})-cT(\mathbf{v})=\mathbf{0}$ whence $T(c\mathbf{v})=cT(\mathbf{v})$. In conclusion: $T$ is a linear transformation. - :::::: -In fact, the proof of {prf:ref}`Prop:GeomLinTrans:RotsAreLinTrans` only uses the fact that rotations preserve the inner product. It therefore also shows that reflections are linear transformations, but we have already established that using a simpler argument. +In fact, the proof of {prf:ref}`Prop:GeomLinTrans:RotsAreLinTrans` only uses the fact that rotations preserve the inner product. It therefore also shows that reflections are linear transformations, but we have already established that using a simpler argument. -The name *rotation* is inspired by the following observation about rotations in the plane. +The name _rotation_ is inspired by the following observation about rotations in the plane. ::::::{prf:proposition} For any real number $\theta$, the rotation over the angle $\theta$ in the plane has standard matrix - $$ R_{\theta}=\begin{bmatrix} \cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta) \end{bmatrix}. + $$ This is indeed the standard matrix of a rotation. @@ -587,8 +428,7 @@ This is indeed the standard matrix of a rotation. ::::::{prf:proof} - Suppose we take the vector $\mathbf{e}_{1}$ and rotate it (counterclockwise) over an angle $\theta$. Where do we end up? By definition, the $x$-coordinate of our new location will be $\cos(\theta)$ and its $y$-coordinate will be $\sin(\theta)$. Similarly, if we start with the vector $\mathbf{e}_{2}$ and rotate that over the angle $\theta$, the $x$-coordinate of our new point will be $-\sin(\theta)$. This is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:RotinPlane>`. - +Suppose we take the vector $\mathbf{e}_{1}$ and rotate it (counterclockwise) over an angle $\theta$. Where do we end up? By definition, the $x$-coordinate of our new location will be $\cos(\theta)$ and its $y$-coordinate will be $\sin(\theta)$. Similarly, if we start with the vector $\mathbf{e}_{2}$ and rotate that over the angle $\theta$, the $x$-coordinate of our new point will be $-\sin(\theta)$. This is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:RotinPlane>`. ::::{figure} Images/Fig-GeomLinTrans-RotinPlane.svg :name: Fig:GeomLinTrans:RotinPlane @@ -599,8 +439,7 @@ The rotation over the angle $\theta$ working on $\mathbf{e}_{1}$ and $\mathbf{e} To show that $R_{\theta}$ is indeed the standard matrix of a rotation, we first note that it is only a reflection if it is the identity matrix. We leave it as an exercise to check that $(R_{\theta}\vect{v}_{1})\cdot (R_{\theta}\vect{v}_{2})=\vect{v}_{1}\cdot \vect{v}_{2}$ for any $\vect{v}_{1},\vect{v}_{2}$ in $\R^{2}$. :::::: - -Alternatively, one can describe a rotation in the plane as a combination of two reflections. We make this precise in the following proposition, which is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:RotisDoubleRefl>`. +Alternatively, one can describe a rotation in the plane as a combination of two reflections. We make this precise in the following proposition, which is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:RotisDoubleRefl>`. ::::::{prf:proposition} :label: Prop:GeomLinTrans:RotisDoubleRefl @@ -609,139 +448,110 @@ Any rotation in the plane is the composition of two reflections. :::::: - ::::::{prf:proof} We will show that the standard matrix $R_{\theta}$ of the rotation over an angle $\theta$ is the product of the standard matrices of two reflections. 
The claim follows then from the definition of the matrix product. -Let $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ be two lines in the plane through the origin that make an angle of $\theta/2$ with each other. If we call $\phi/2$ the angle $\mathcal{L}_{1}$ makes with the positive $x$-axis, we can conclude that $\mathcal{L}_{2}$ makes an angle of $\phi/2+\theta/2$ with the positive $x$-axis. From {prf:ref}`Prop:GeomLinTrans:MatofReflinPlane`, we know that the standard matrices of the reflections along $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are +Let $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ be two lines in the plane through the origin that make an angle of $\theta/2$ with each other. If we call $\phi/2$ the angle $\mathcal{L}_{1}$ makes with the positive $x$-axis, we can conclude that $\mathcal{L}_{2}$ makes an angle of $\phi/2+\theta/2$ with the positive $x$-axis. From {prf:ref}`Prop:GeomLinTrans:MatofReflinPlane`, we know that the standard matrices of the reflections along $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are $$ R_{\mathcal{L}_{1}}=\begin{bmatrix}\cos(\phi)&\sin(\phi)\\\sin(\phi)&-\cos(\phi)\end{bmatrix}\quad \text{and}\quad R_{\mathcal{L}_{2}}=\begin{bmatrix}\cos(\theta+\phi)&\sin(\theta+\phi)\\ \sin(\theta+\phi)&-\cos(\theta+\phi)\end{bmatrix}, -$$ +$$ -respectively. Using the fact that, for any angles $\alpha$ and $\beta$, we have the identities +respectively. Using the fact that, for any angles $\alpha$ and $\beta$, we have the identities \begin{align*} \cos(\alpha-\beta)&= \cos(\alpha)\cos(\beta)+\sin(\alpha)\sin(\beta)\quad\text{and}\\ \sin(\alpha-\beta)&=\sin(\alpha)\cos(\beta)-\sin(\beta)\cos(\alpha), \end{align*} -we find +we find $$ R_{\mathcal{L}_{2}}R_{\mathcal{L}_{1}}=\begin{bmatrix} \cos(\theta)&\sin(-\theta)\\\sin(\theta)&\cos(\theta) \end{bmatrix}=R_{\theta}. -$$ +$$ :::::: - - - - - ::::{figure} Images/Fig-GeomLinTrans-RotisDoubleRefl.svg :name: Fig:GeomLinTrans:RotisDoubleRefl {prf:ref}`Prop:GeomLinTrans:RotisDoubleRefl` illustrated. $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are arbitrary lines that make an angle of $\theta/2$ with each other. Composing the reflections along $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ then gives the rotation over the angle $\theta$. This is shown for the particular vector $\mathbf{v}$. Note that the angle $\phi/2$ that $\mathcal{L}_{1}$ makes with the positive $x$ axis is irrelevant to the proof. :::: - - - - In the plane, you can only rotate around the origin. Things get considerably more complicated if we move to $\mathbb{R}^{3}$, because there you can rotate around any arbitrary line. We will not get into that here. - - ## Shear Transformations - - The last class of linear transformation we will deal with in this section are the shear transformations. These are transformations which fix a certain line through the origin, $\mathcal{L}$ say, and shift all other points parallel to $\mathcal{L}$. - - ::::::{prf:example} :label: Ex:GeomLinTrans:ShearTrans -Consider the linear transformation +Consider the linear transformation $$ T:\mathbb{R}^{2}\to\mathbb{R}^{2},\quad \mathbf{v}\mapsto \begin{bmatrix}2&-1\\1&0\end{bmatrix}\mathbf{v}. -$$ +$$ -The action of $T$ is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:ShearTrans>`. Consider furthermore the line +The action of $T$ is illustrated in {numref}`Figure %s <Fig:GeomLinTrans:ShearTrans>`. 
Consider furthermore the line $$ \mathcal{L}=\left\{\begin{bmatrix}c\\c\end{bmatrix}\mid c\text{ in }\mathbb{R}\right\}=\left\{c\mathbf{w}\mid c\text{ in }\mathbb{R}\right\}\quad \text{where}\quad\mathbf{w}=\begin{bmatrix}1\\1\end{bmatrix}, -$$ +$$ i.e. the line through the origin in the direction of $\mathbf{w}$. Any vector $c\mathbf{w}$ in $\mathcal{L}$ is fixed: - $$ T(c\mathbf{w})=\begin{bmatrix}2&-1\\1&0\end{bmatrix}\begin{bmatrix}c\\c\end{bmatrix}=\begin{bmatrix}c\\c \end{bmatrix}. -$$ +$$ What happens with vectors not in $\mathcal{L}$? Take two scalars $c$ and $d$ which are not equal. Then - $$ T\left(\begin{bmatrix}c\\d\end{bmatrix}\right)=\begin{bmatrix}2&-1\\1&0\end{bmatrix}\begin{bmatrix}c\\d \end{bmatrix}=\begin{bmatrix}2c-d\\c\end{bmatrix}=\begin{bmatrix}c\\d\end{bmatrix}+\begin{bmatrix}c-d\\c-d\end{bmatrix}, -$$ - - so $T$ moves points not on $\mathcal{L}$ parallel to $\mathcal{L}$. Points closer to $\mathcal{L}$ get moved a smaller distance than points further away from $\mathcal{L}$. Points to the left of $\mathcal{L}$, i.e. points for which $c<d$, get moved to the left. Points to the right of $\mathcal{L}$, i.e. points for which $c>d$, get moved to the right. +$$ +so $T$ moves points not on $\mathcal{L}$ parallel to $\mathcal{L}$. Points closer to $\mathcal{L}$ get moved a smaller distance than points further away from $\mathcal{L}$. Points to the left of $\mathcal{L}$, i.e. points for which $c<d$, get moved to the left. Points to the right of $\mathcal{L}$, i.e. points for which $c>d$, get moved to the right. :::::: - - - - - ::::{figure} Images/Fig-GeomLinTrans-ShearTrans.svg :name: Fig:GeomLinTrans:ShearTrans -The shear transformation $T$ from Example {numref}`Figure %s <Fig:GeomLinTrans:ShearTrans>` working on the vectors +The shear transformation $T$ from Example {numref}`Figure %s <Fig:GeomLinTrans:ShearTrans>` working on the vectors $\mathbf{e}_{1}=\begin{bmatrix}1\\0\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1\\1\end{bmatrix}$. Note how the distance between a vector and the line $\mathcal{L}$ is preserved by $T$. As a consequence, the area of the green and blue parallelogams on the left is the same as that of their respective images on the right. :::: - - - - - ::::::{prf:definition} :label: Dfn:GeomLinTrans:ShearScale A linear transformation $T:\mathbb{R}^{2}\to\mathbb{R}^{2}$ is called a **shear transformation**, or simply a **shear**, if there is some line $\mathcal{L}$ and some vector $\mathbf{w}$ in $\mathcal{L}$ such that + <ol type="i"> <li> $T(\mathbf{v})=\mathbf{v}$ for all $\mathbf{v}$ on $\mathcal{L}$; - </li> <li id="Item:GeomLinTrans:ShearScale"> - for every vector $\mathbf{v}$ in $\mathbb{R}^{2}$ there is a scalar $c$ such that we have +for every vector $\mathbf{v}$ in $\mathbb{R}^{2}$ there is a scalar $c$ such that we have $$ T(\mathbf{v})=\mathbf{v}+c\mathbf{w}. @@ -749,61 +559,42 @@ $$ </li> </ol> - - :::::: - - - -Note that the scalar $c$ in [ii.](#Item:GeomLinTrans:ShearScale) from {prf:ref}`Dfn:GeomLinTrans:ShearScale` is different for different vectors. For vectors lying on one side of $\mathcal{L}$, it will be positive. For vectors on the other side of $\mathcal{L}$ it will be negative. Moreover, for vectors further away from $\mathcal{L}$, $c$ will be larger than for vectors closer to $\mathcal{L}$. - - +Note that the scalar $c$ in [ii.](#Item:GeomLinTrans:ShearScale) from {prf:ref}`Dfn:GeomLinTrans:ShearScale` is different for different vectors. For vectors lying on one side of $\mathcal{L}$, it will be positive. 
::::::{prf:proposition}

If $T$ is a shear transformation fixing the line $\mathcal{L}$ and $\mathbf{v}$ is an arbitrary vector in $\mathbb{R}^{2}$, then the distance from $\mathbf{v}$ to $\mathcal{L}$ is the same as the distance from $T(\mathbf{v})$ to $\mathcal{L}$.

-
::::::

-
-
-
-
::::::{prf:proof}

-The distance between a vector $\mathbf{v}$ and a line $\mathcal{L}$ is the length of $\mathbf{v}-\proj_{\mathcal{L}}(\mathbf{v})$. We find, for arbitrary $\mathbf{v}$ in $\mathbb{R}^{2}$,
+The distance between a vector $\mathbf{v}$ and a line $\mathcal{L}$ is the length of $\mathbf{v}-\proj_{\mathcal{L}}(\mathbf{v})$. We find, for arbitrary $\mathbf{v}$ in $\mathbb{R}^{2}$,

\begin{align*}
- \lVert T(\mathbf{v})-\proj_{\mathcal{L}}(T(\mathbf{v}))\rVert &= \lVert \mathbf{v}+c\mathbf{w}-\frac{(\mathbf{v}+c\mathbf{w})\ip\mathbf{w}}{\mathbf{w}\ip\mathbf{w}}\mathbf{w}\rVert=\lVert \mathbf{v}-\frac{\mathbf{v}\ip\mathbf{w}}{\mathbf{w}\ip\mathbf{w}}\mathbf{w}\rVert\\
-&=\lVert \mathbf{v} -\proj_{\mathcal{L}}\mathbf{v}\rVert
+\lVert T(\mathbf{v})-\proj_{\mathcal{L}}(T(\mathbf{v}))\rVert &= \lVert \mathbf{v}+c\mathbf{w}-\frac{(\mathbf{v}+c\mathbf{w})\ip\mathbf{w}}{\mathbf{w}\ip\mathbf{w}}\mathbf{w}\rVert=\lVert \mathbf{v}-\frac{\mathbf{v}\ip\mathbf{w}}{\mathbf{w}\ip\mathbf{w}}\mathbf{w}\rVert\\
+&=\lVert \mathbf{v} -\proj_{\mathcal{L}}\mathbf{v}\rVert
\end{align*}

which is what had to be proven. In the first step we used that $T(\mathbf{v})=\mathbf{v}+c\mathbf{w}$ for some scalar $c$; the term $c\mathbf{w}$ then cancels against its own projection, since $\mathbf{w}$ lies on $\mathcal{L}$.

-
::::::

-
-
-
We can introduce a similar concept in $\mathbb{R}^{3}$, but now we need the transformation $T$ to fix not a line but rather a plane.

-
-
::::::{prf:definition}
:label: Dfn:GeomLinTrans:ShearScale3D

A linear transformation $T:\mathbb{R}^{3}\to\mathbb{R}^{3}$ is called a **shear transformation**, or simply a **shear**, if there is some plane $\mathcal{P}$ in $\mathbb{R}^{3}$ and some vector $\mathbf{w}$ in $\mathcal{P}$ such that
+
<ul>
<li>

$T(\mathbf{v})=\mathbf{v}$ for all $\mathbf{v}$ on $\mathcal{P}$;

-
-
</li>
<li id="Item:GeomLinTrans:ShearScale3D">

-for every vector $\mathbf{v}$ in $\mathbb{R}^{3}$ there is a scalar $c$ such that we have
+for every vector $\mathbf{v}$ in $\mathbb{R}^{3}$ there is a scalar $c$ such that we have

$$
T(\mathbf{v})=\mathbf{v}+c\mathbf{w}.
@@ -811,72 +602,58 @@ $$

</li>
</ul>

-
-
::::::

-
-
-
-
-
::::::{prf:example} Application

-Suppose we have a standard deck of $52$ playing cards placed in a stack on a table. A standard playing card is about $87$ by $56$ millimeters, so we can assume that the corners of the lowest card are on
+Suppose we have a standard deck of $52$ playing cards placed in a stack on a table. A standard playing card is about $87$ by $56$ millimeters, so we can assume that the corners of the lowest card are on

$$
\begin{bmatrix}0\\0\\0\end{bmatrix},\quad\begin{bmatrix}87\\0\\0\end{bmatrix},\quad\begin{bmatrix}0\\56\\0\end{bmatrix},\quad\text{and}\quad\begin{bmatrix}87\\56\\0\end{bmatrix},
-$$
-
-respectively. A playing card typically has a thickness of about $0.2$ millimeter, so the coordinates of the top card of the stack will be
+$$
+respectively.
A playing card typically has a thickness of about $0.2$ millimeter, so the coordinates of the top card of the stack will be

$$
\begin{bmatrix}0\\0\\10.4\end{bmatrix},\quad\begin{bmatrix}87\\0\\10.4\end{bmatrix},\quad\begin{bmatrix}0\\56\\10.4\end{bmatrix},\quad\text{and}\quad\begin{bmatrix}87\\56\\10.4\end{bmatrix},
-$$
+$$

respectively. If we now move the top card along the $x$-axis, then, due to friction, the second card will also move. This in turn will make the third card move and so on. If we assume friction with the table is high enough, the bottom card will approximately remain in place. This situation is depicted in {numref}`Figure %s <Fig:GeomLinTrans:CardsStack>`.

-The movement of the cards can be described by a shear transformation. If the top card is moved 6 millimeters along the $x$-axis, then the edges parallel to the $y$-axis of the cards will make an angle of $\phi=\arctan(10.4/6)\approx \frac{\pi}{3}$ with the positive $x$-axis. A card at height $h$ will be moved a distance of about $h\frac{1}{\sqrt{3}}$ along the $x$-axis. We therefore find that the standard matrix associated to the linear transformation that describes the movement of the cards is
+The movement of the cards can be described by a shear transformation. If the top card is moved 6 millimeters along the $x$-axis, then the sides of the stack, which were vertical before, will make an angle of $\phi=\arctan(10.4/6)\approx \frac{\pi}{3}$ with the positive $x$-axis. A card at height $h$ will be moved a distance of about $h/\tan(\phi)=\frac{h}{\sqrt{3}}$ along the $x$-axis. We therefore find that the standard matrix associated to the linear transformation that describes the movement of the cards is

$$
\begin{bmatrix}1&0&\frac{1}{\sqrt{3}}\\0&1&0\\0&0&1\end{bmatrix}.
-$$
+$$

Shear transformations are widely used to model this kind of displacement of layered media, for example in materials science or crystallography.

-
::::::

-
-
-
-
::::{figure} Images/Fig-GeomLinTrans-CardsStack.svg
:name: Fig:GeomLinTrans:CardsStack

A shear transformation applied to a stack of cards.
::::

-
## Grasple Exercises

-::::{grasple}
+::::{grasple}
:url: https://embed.grasple.com/exercises/1531c9ba-540c-4d64-bddf-169105eaa5ff?id=70393
-:label: grasple_exercise_3_3_1
+:label: grasple_exercise_3_3_1
:dropdown:
:description: Give a geometric description for transformation with given standard matrix.

::::

-::::{grasple}
+::::{grasple}
:url: https://embed.grasple.com/exercises/51095023-d860-483f-8758-44d2b83d7c9e?id=70394
:label: grasple_exercise_3_3_2
:dropdown:
@@ -884,7 +661,7 @@ A shear transformation applied to a stack of cards.

::::

-::::{grasple}
+::::{grasple}
:url: https://embed.grasple.com/exercises/5eae3328-453b-4065-9829-be8acb10f0fa?id=70421
:label: grasple_exercise_3_3_3
:dropdown:
@@ -892,7 +669,7 @@ A shear transformation applied to a stack of cards.

::::

-::::{grasple}
+::::{grasple}
:url: https://embed.grasple.com/exercises/cf49c839-9eee-4f7b-b459-cfe3edcf530b?id=70422
:label: grasple_exercise_3_3_4
:dropdown:
@@ -900,7 +677,7 @@ A shear transformation applied to a stack of cards.

::::

-::::{grasple}
+::::{grasple}
:url: https://embed.grasple.com/exercises/aca8c030-4392-4e22-be38-2316f9c483c4?id=70425
:label: grasple_exercise_3_3_5
:dropdown:
@@ -908,7 +685,7 @@ A shear transformation applied to a stack of cards.
:::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/e67fb238-1b3e-4cad-bbb2-4126579fa97f?id=78593 :label: grasple_exercise_3_3_6 :dropdown: @@ -916,7 +693,7 @@ A shear transformation applied to a stack of cards. :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/795b979c-e3d9-4f24-80ad-0dfad38c84d2?id=83137 :label: grasple_exercise_3_3_7 :dropdown: @@ -924,7 +701,7 @@ A shear transformation applied to a stack of cards. :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/7d10562b-929a-4b1f-9a90-6280e12b9c98?id=85261 :label: grasple_exercise_3_3_8 :dropdown: @@ -932,10 +709,10 @@ A shear transformation applied to a stack of cards. :::: -::::{grasple} +::::{grasple} :url: https://embed.grasple.com/exercises/ee24dead-8281-493e-9ced-b2f0f9cb1421?id=85263 :label: grasple_exercise_3_3_9 :dropdown: :description: Understanding shear transformations. -:::: \ No newline at end of file +:::: diff --git a/Chapter3/Linear_Transformations.md b/Chapter3/Linear_Transformations.md index 4f2b993456f37116f48ef347f26746895bf4ab86..e07215fc5e4ae2a7aaab7a9cda4496d223d8bf33 100644 --- a/Chapter3/Linear_Transformations.md +++ b/Chapter3/Linear_Transformations.md @@ -1,8 +1,9 @@ (Sec:LinTrafo)= -# Linear Transformations +# Linear Transformations (Subsec:LinTrafo:MatrixTrafo)= + ## Introduction Until now we have used matrices in the context of linear systems. The equation @@ -11,70 +12,60 @@ $$ A\mathbf{x} = \mathbf{b}, $$ -where $A$ is an $m \times n$ matrix, is just a concise way to write down a system of $m$ linear equations in $n$ +where $A$ is an $m \times n$ matrix, is just a concise way to write down a system of $m$ linear equations in $n$ unknowns. -A different way to look at this matrix equation is to consider it as an input-output system: +A different way to look at this matrix equation is to consider it as an input-output system: the left-hand side $A\mathbf{x}$ -can be seen as a mapping that sends an "input" $\mathbf{x}$ to an "output" $\mathbf{y}= A\mathbf{x}$. +can be seen as a mapping that sends an "input" $\mathbf{x}$ to an "output" $\mathbf{y}= A\mathbf{x}$. -For instance, in computer graphics, typically points describing a 3D object have to be converted to points in 2D, to be able to visualize them on a screen. Or, in a dynamical system, a -matrix $A$ may describe how a system evolves from a "state" $\mathbf{x}_{k}$ at time $k$ to a state $\mathbf{x}_{k+1}$ at time $k+1$ via : +For instance, in computer graphics, typically points describing a 3D object have to be converted to points in 2D, to be able to visualize them on a screen. Or, in a dynamical system, a +matrix $A$ may describe how a system evolves from a "state" $\mathbf{x}_{k}$ at time $k$ to a state $\mathbf{x}_{k+1}$ at time $k+1$ via : $$ \mathbf{x}_{k+1} = A\mathbf{x}_{k}. $$ -A "state" may be anything ranging from a set of particles at certain positions, a set of pixels describing a minion, concentrations of chemical substances in a reactor tank, to population sizes of different species. -Thinking mathematically we would describe such an input-output interpretation as a -transformation (or: function, map, mapping, operator) +A "state" may be anything ranging from a set of particles at certain positions, a set of pixels describing a minion, concentrations of chemical substances in a reactor tank, to population sizes of different species. 
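+In code, such an input-output system is nothing more than a repeated matrix-vector product. A minimal sketch (assuming NumPy; the matrix and the initial state are made up for illustration):
+
+```python
+import numpy as np
+
+A = np.array([[0.5, 0.2],
+              [0.1, 0.7]])   # an illustrative 2x2 "evolution" matrix
+x = np.array([1.0, 3.0])     # initial state x_0
+
+for k in range(3):
+    x = A @ x                # x_{k+1} = A x_k
+    print(k + 1, x)
+```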
+Thinking mathematically we would describe such an input-output interpretation as a +transformation (or: function, map, mapping, operator) $$ T: \mathbb{R}^n \to \mathbb{R}^m. $$ -We will see that these matrix transformations have two characteristic properties -which makes them the protagonists of the more general linear algebra concept of a **linear transformation**. - - +We will see that these matrix transformations have two characteristic properties +which makes them the protagonists of the more general linear algebra concept of a **linear transformation**. (Subsec:MatrixTrafo)= -## Matrix Transformations -Let $A$ be an $m\times n$ matrix. We can in a natural way associate a transformation $T_A:\mathbb{R}^n \to \mathbb{R}^m$ to the matrix $A$. +## Matrix Transformations +Let $A$ be an $m\times n$ matrix. We can in a natural way associate a transformation $T_A:\mathbb{R}^n \to \mathbb{R}^m$ to the matrix $A$. ::::::{prf:definition} -The transformation $T_A$ corresponding to the $m\times n$ matrix $A$ - is the mapping defined by +The transformation $T_A$ corresponding to the $m\times n$ matrix $A$ +is the mapping defined by $$ T_A(\mathbf{x}) = A\mathbf{x} \quad \text{or } \quad T_A:\mathbf{x} \mapsto A\mathbf{x}, $$ -where $\mathbf{x} \in \mathbb{R}^n$. - -We call such a mapping a **matrix transformation**. Conversely we say that the matrix $A$ **represents** the transformation $T_A$. +where $\mathbf{x} \in \mathbb{R}^n$. +We call such a mapping a **matrix transformation**. Conversely we say that the matrix $A$ **represents** the transformation $T_A$. :::::: - - - - As a first example consider the following. - - ::::::{prf:example} :label: Ex:LinTrafo:FirstMatrixTrafo - -The transformation corresponding to the matrix +The transformation corresponding to the matrix $A = \begin{bmatrix} 1 & 2 & 0\\ 1 & 2 & 1 \end{bmatrix}$ is defined by $$ - T_A(\mathbf{x}) = + T_A(\mathbf{x}) = \begin{bmatrix} 1 & 2 & 0\\ 1 & 2 & 1 \end{bmatrix}\mathbf{x}. @@ -84,104 +75,81 @@ We have, for instance $$ \begin{bmatrix} - 1 & 2 & 0\\ 1 & 2 & 1 + 1 & 2 & 0\\ 1 & 2 & 1 \end{bmatrix} \begin{bmatrix} - 1\\1\\1 -\end{bmatrix} = + 1\\1\\1 +\end{bmatrix} = \begin{bmatrix} - 3 \\ 4 + 3 \\ 4 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} - 1 & 2 & 0\\ 1 & 2 & 1 + 1 & 2 & 0\\ 1 & 2 & 1 \end{bmatrix} \begin{bmatrix} - 2\\-1\\0 -\end{bmatrix} = + 2\\-1\\0 +\end{bmatrix} = \begin{bmatrix} - 0\\ 0 + 0\\ 0 \end{bmatrix}. $$ According to the definition of the matrix-vector product we can also write - - :::{math} :label: Eq:LinTrafo:AxIsLinearCombination - - A\mathbf{x} = \begin{bmatrix} - 1 & 2 & 0\\ 1 & 2 & 1 - \end{bmatrix} - \begin{bmatrix} - x_1\\x_2\\x_3 - \end{bmatrix} = - x_1 - \begin{bmatrix} - 1\\ 1 - \end{bmatrix}+ - x_2 - \begin{bmatrix} - 2 \\ 2 - \end{bmatrix}+ - x_3 - \begin{bmatrix} - 0\\ 1 - \end{bmatrix}. +A\mathbf{x} = \begin{bmatrix} +1 & 2 & 0\\ 1 & 2 & 1 + \end{bmatrix} +\begin{bmatrix} +x_1\\x_2\\x_3 + \end{bmatrix} = +x_1 + \begin{bmatrix} +1\\ 1 + \end{bmatrix}+ +x_2 + \begin{bmatrix} +2 \\ 2 + \end{bmatrix}+ +x_3 + \begin{bmatrix} +0\\ 1 + \end{bmatrix}. ::: - - - - :::::: - - - -We recall that for a transformation $T$ from a domain $D$ to a codomain $E$ the range $R= R_T$ is defined as the set of all images of elements of $D$ in $E$: +We recall that for a transformation $T$ from a domain $D$ to a codomain $E$ the range $R= R_T$ is defined as the set of all images of elements of $D$ in $E$: $$ R_T = \{\text{ all images } T(x), \, \text{ for } x \text{ in }D\}. 
$$ - - ::::::{prf:remark} :label: Ex:LinTrafo:FirstMatrixTrafoContinued - - -From Equation {eq}`Eq:LinTrafo:AxIsLinearCombination` +From Equation {eq}`Eq:LinTrafo:AxIsLinearCombination` it is clear that the range of the matrix transformation in {prf:ref}`Ex:LinTrafo:FirstMatrixTrafo` consists of all linear combinations of the three columns of $A$: $$ -\text{Range}(T_A) = +\text{Range}(T_A) = \Span{ \begin{bmatrix} 1\\ 1 \end{bmatrix}, \begin{bmatrix} 2 \\ 2 \end{bmatrix}, \begin{bmatrix} 0\\ 1 \end{bmatrix}}. $$ -In a later chapter ({numref}`Sec:SubspacesRn`, <FONT color ="#0076C2"> Subspaces of $\R^n$</FONT>) we will call this the **column space** of the matrix $A$. - +In a later chapter ({numref}`Sec:SubspacesRn`, <FONT color ="#0076C2"> Subspaces of $\R^n$</FONT>) we will call this the **column space** of the matrix $A$. :::::: - - - - The first example leads to a first property of matrix transformations: - - ::::::{prf:proposition} :label: Prop:LinTrafo:RangeTA - - Suppose $$ @@ -198,37 +166,32 @@ $$ \text{Range}(T_A) = \Span{\mathbf{a}_1, \mathbf{a}_2,\ldots,\mathbf{a}_n }. $$ - - :::::: - - ::::::{prf:example} :label: Ex:LinTrafo:SecondMatrixTrafo - -The matrix +The matrix $$ A = \begin{bmatrix} - 1 & 0 \\ 0 & 1 \\ 0 & 0 + 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} $$ leads to the transformation $$ - T: \mathbb{R}^2 \to \mathbb{R}^3, \quad + T: \mathbb{R}^2 \to \mathbb{R}^3, \quad T \left(\begin{bmatrix} - x \\ y -\end{bmatrix}\right)= + x \\ y +\end{bmatrix}\right)= \begin{bmatrix} - 1 & 0 \\ 0 & 1 \\ 0 & 0 + 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} - x \\ y -\end{bmatrix} = + x \\ y +\end{bmatrix} = \begin{bmatrix} x \\ y \\0 \end{bmatrix}. @@ -236,49 +199,41 @@ $$ This transformation "embeds" the plane $\mathbb{R}^2$ into the space $\mathbb{R}^3$, as depicted in {numref}`Figure %s <Fig:LinTrafo:EmbedR2R3>`. - - - -::::{figure} Images/Fig-LinTrafo-EmbedR2R3.svg +```{applet} +:url: linear_transformation/embed_r2_r3 +:fig: Images/Fig-LinTrafo-EmbedR2R3.svg :name: Fig:LinTrafo:EmbedR2R3 -$T$: embedding $\mathbb{R}^2$ into $\mathbb{R}^3$. -:::: - - +$T$: embedding $\mathbb{R}^2$ into $\mathbb{R}^3$. +``` The range of this transformation is the span of the two vectors $$ - \mathbf{e}_1 = + \mathbf{e}_1 = \begin{bmatrix} - 1\\ 0 \\ 0 -\end{bmatrix} \quad \text{and} \quad - \mathbf{e}_2 = + 1\\ 0 \\ 0 +\end{bmatrix} \quad \text{and} \quad + \mathbf{e}_2 = \begin{bmatrix} - 0\\ 1 \\ 0 + 0\\ 1 \\ 0 \end{bmatrix}, $$ -which is the $xy$-plane in $\mathbb{R}^3$. - +which is the $xy$-plane in $\mathbb{R}^3$. :::::: - -For $2\times2$ and $3\times3$ matrices the transformations often have a geometric interpretation, as the following example illustrates. - - +For $2\times2$ and $3\times3$ matrices the transformations often have a geometric interpretation, as the following example illustrates. ::::::{prf:example} :label: Eq:LinTrafo:SkewProjection - -The transformation corresponding to the matrix +The transformation corresponding to the matrix $$ A = \begin{bmatrix} - 1 & 1 \\ 0 & 0 + 1 & 1 \\ 0 & 0 \end{bmatrix} $$ @@ -286,8 +241,8 @@ is the mapping $$ T: \mathbb{R}^2 \to \mathbb{R}^2, \quad T\left(\begin{bmatrix} - x \\ y -\end{bmatrix}\right)= + x \\ y +\end{bmatrix}\right)= \begin{bmatrix} x +y \\ 0 \end{bmatrix}. @@ -296,40 +251,38 @@ $$ First we observe that the range of this transformation consists of all multiples of the vector $ \begin{bmatrix} 1 \\ 0 \end{bmatrix} $, i.e. the $x$-axis in the plane. 
-Second, let us find the set of points/vectors that is mapped to an arbitrary point
-$\begin{bmatrix} c \\ 0 \end{bmatrix}$ in the range. For this we solve
+Second, let us find the set of points/vectors that is mapped to an arbitrary point
+$\begin{bmatrix} c \\ 0 \end{bmatrix}$ in the range. For this we solve

$$
  A\mathbf{x} =   \begin{bmatrix}
-    1 & 1 \\ 0 & 0
-  \end{bmatrix}
+    1 & 1 \\ 0 & 0
+  \end{bmatrix}
\begin{bmatrix}
-   x \\ y
-\end{bmatrix} =
+   x \\ y
+\end{bmatrix} =
\begin{bmatrix}
  c \\ 0
\end{bmatrix}
-  \quad \iff \quad
+  \quad \iff \quad
\begin{bmatrix}
   x+y \\ 0
-\end{bmatrix} =
+\end{bmatrix} =
\begin{bmatrix}
  c \\ 0
\end{bmatrix}.
$$

-The points whose coordinates satisfy this equation all lie on the line described by the equation
+The points whose coordinates satisfy this equation all lie on the line described by the equation

$$
  x + y = c.
$$

-So what the mapping does is to send all points on a line $\mathcal{L}:x + y = c$ to the point $(c,0)$, which is the intersecting of this line with the $x$-axis. <BR>
-An alternative way to describe it: it is the skew projection, in the direction $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ onto the $x$-axis.
+So what the mapping does is to send all points on a line $\mathcal{L}:x + y = c$ to the point $(c,0)$, which is the intersection point of this line with the $x$-axis. <BR>
+An alternative way to describe it: it is the skew projection in the direction $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ onto the $x$-axis.
See {numref}`Figure %s <Fig:LinTrafo:SkewProjection>`.

-
-
::::{figure} Images/Fig-LinTrafo-SkewProjection.svg
:name: Fig:LinTrafo:SkewProjection

The transformation of {prf:ref}`Eq:LinTrafo:SkewProjection`
@@ -338,20 +291,17 @@ The transformation of {prf:ref}`Eq:LinTrafo:SkewProjection`

::::::

-
-
::::::{exercise}
:label: Exc:Lintrafo:VectorInRange?

-
-Find out whether the vectors
+Find out whether the vectors

$$
-  \mathbf{y}_1 =
+  \mathbf{y}_1 =
\begin{bmatrix}
    2 \\ 1 \\ 0
\end{bmatrix}
  \quad \text{and} \quad
-  \mathbf{y}_2 =
+  \mathbf{y}_2 =
\begin{bmatrix}
    2 \\ 0 \\ 1
\end{bmatrix}
@@ -360,28 +310,20 @@ $$

are in the range of the matrix transformation

$$
-  T(\mathbf{x}) = A\mathbf{x} =
+  T(\mathbf{x}) = A\mathbf{x} =
\begin{bmatrix}
-    1 &1&1 \\ 1 &-1&3 \\ -1&2&-4
+    1 &1&1 \\ 1 &-1&3 \\ -1&2&-4
\end{bmatrix}\mathbf{x}.
$$

-
-
::::::

-
-
-
We close this subsection with an example of a matrix transformation representing a very elementary dynamical system.

-
-
::::::{prf:example}
:label: Ex:LinTrafo:MigrationModel

-
-Consider a model with two cities between which over a fixed period of time migrations take place. Say in a period of ten years 90\% of the inhabitants in city $A$ stay in city $A$ and 10\% move to city $B$. From city $B$ 20\% of the citizens move to $A$, so 80\% stay in city $B$. <BR>
+Consider a model with two cities between which over a fixed period of time migrations take place. Say in a period of ten years 90\% of the inhabitants in city $A$ stay in city $A$ and 10\% move to city $B$. From city $B$ 20\% of the citizens move to $A$, so 80\% stay in city $B$. <BR>
The following table contains the relevant statistics:

$$
@@ -391,7 +333,7 @@ $$
 \end{array}
$$

-For instance, if at time 0 the population in city $A$ amounts to 50 (thousand) and in city $B$ live 100 (thousand) people, then at the end of one period the population in city $A$
+For instance, if at time 0 the population in city $A$ amounts to 50 (thousand) and in city $B$ live 100 (thousand) people, then at the end of one period the population in city $A$
amounts to

$$
  0.9 \times 50 + 0.2\times 100 = 65.
$$

Likewise for city $B$.
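+In a small NumPy sketch (the names are ours) this one-step update reads:
+
+```python
+import numpy as np
+
+M = np.array([[0.9, 0.2],
+              [0.1, 0.8]])     # stay/move fractions from the table
+x0 = np.array([50.0, 100.0])   # populations of A and B, in thousands
+
+print(M @ x0)                  # [65. 85.]
+```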
-If we denote the population sizes after $k$ periods by a vector
+If we denote the population sizes after $k$ periods by a vector

$$
  \mathbf{x}_k =
\begin{bmatrix}
-    x_k \\ y_k
+    x_k \\ y_k
\end{bmatrix}
$$

it follows that

$$
\begin{bmatrix}
-   x_{k+1} \\ y_{k+1}
-\end{bmatrix} =
+   x_{k+1} \\ y_{k+1}
+\end{bmatrix} =
\begin{bmatrix}
-   0.9x_k + 0.2y_k \\0.1x_k + 0.8y_k
+   0.9x_k + 0.2y_k \\0.1x_k + 0.8y_k
\end{bmatrix}, \quad
-  \text{i.e., }
-  \mathbf{x}_{k+1} =
+  \text{i.e., }
+  \mathbf{x}_{k+1} =
\begin{bmatrix}
-   0.9 & 0.2 \\ 0.1 & 0.8
+   0.9 & 0.2 \\ 0.1 & 0.8
\end{bmatrix}
\begin{bmatrix}
-   x_k \\ y_k
+   x_k \\ y_k
\end{bmatrix} = M \mathbf{x}_{k}.
$$

-The $M$ stands for migration matrix.
-
-Obviously this model can be generalized to a "world" with any number of cities.
+The $M$ stands for migration matrix.
+Obviously this model can be generalized to a "world" with any number of cities.

::::::

-
-
(Subsec:LinTrafo:LinTrafo)=
+
## Linear Transformations

In the previous section we saw that the matrix transformation $\mathbf{y}=A\mathbf{x}$ can also be seen as a mapping $T(\mathbf{x}) = A\mathbf{x}$. This mapping has two characteristic properties on which we will focus in this section.

-
-
::::::{prf:definition}
:label: Dfn:LinTrafo:LinTrafo

-A **linear transformation** is a function $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ that has the following properties
+A **linear transformation** is a function $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ that has the following properties
+
<ol type="i">
<li>

-For all vectors $\mathbf{v}_1,\,\mathbf{v}_2$ in $\mathbb{R}^n$:
-
+For all vectors $\mathbf{v}_1,\,\mathbf{v}_2$ in $\mathbb{R}^n$:
+
<BR>

$$
@@ -463,7 +402,7 @@
</li>
<li>

-For all vectors $\mathbf{v}$ in $\mathbb{R}^n$ and all scalars $c$ in $\mathbb{R}$:
+For all vectors $\mathbf{v}$ in $\mathbb{R}^n$ and all scalars $c$ in $\mathbb{R}$:

<BR>

@@ -476,54 +415,50 @@
$$

::::::

-
::::::{exercise}
:label: Exc:LinTrafo:ImageofZeroVector

-
Show that a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ always sends the zero vector in $\R^n$ to the zero vector in $\R^m$. <BR>
-Thus, if $ T:\mathbb{R}^n \to\mathbb{R}^m$ is a linear transformation, then $T(\mathbf{0}_n) = \mathbf{0}_m$.
+Thus, if $ T:\mathbb{R}^n \to\mathbb{R}^m$ is a linear transformation, then $T(\mathbf{0}_n) = \mathbf{0}_m$.

::::::

::::::{solution} Exc:LinTrafo:ImageofZeroVector
:class: dropdown

-If $ T:\mathbb{R}^n \to\mathbb{R}^m$ is linear, and $\vect{v}$ is any vector in $\R^n$, then $\mathbf{0}_n = 0\vect{v}$. From the second property in {prf:ref}`Dfn:LinTrafo:LinTrafo` it follows that
+If $ T:\mathbb{R}^n \to\mathbb{R}^m$ is linear, and $\vect{v}$ is any vector in $\R^n$, then $\mathbf{0}_n = 0\vect{v}$. From the second property in {prf:ref}`Dfn:LinTrafo:LinTrafo` it follows that

$$
  T(\mathbf{0}_n) = T(0\vect{v}) = 0\,T(\vect{v}) = \mathbf{0}_m.
$$

-::::::
-
-
+::::::

::::::{prf:example}
:label: Ex:LinTrafo:FirstLinearMap

-Consider the map $T:\mathbb{R}^2\rightarrow\mathbb{R}^3$ that sends each vector
+Consider the map $T:\mathbb{R}^2\rightarrow\mathbb{R}^3$ that sends each vector
$\begin{bmatrix}
  x \\ y
-\end{bmatrix}$
-in $\mathbb{R}^2$ to the vector
+\end{bmatrix}$
+in $\mathbb{R}^2$ to the vector
$\begin{bmatrix}
  x \\ y \\ 0
-\end{bmatrix}$ in $\mathbb{R}^3$.
-Let us check that this a linear map.
+\end{bmatrix}$ in $\mathbb{R}^3$.
+Let us check that this is a linear map.

-For that, we need to check the two properties in the definition.
+For that, we need to check the two properties in the definition. <BR> -For property (i) we take two arbitrary vectors +For property (i) we take two arbitrary vectors -$$ +$$ \begin{bmatrix} - x_1 \\ y_1 + x_1 \\ y_1 \end{bmatrix} \quad \text{ and }\quad \begin{bmatrix} - x_2 \\ y_2 -\end{bmatrix} \quad \text{in} \quad \mathbb{R}^2, + x_2 \\ y_2 +\end{bmatrix} \quad \text{in} \quad \mathbb{R}^2, $$ and see: @@ -534,16 +469,16 @@ $$ \end{bmatrix} + \begin{bmatrix} x_2 \\ y_2 -\end{bmatrix} \right)= +\end{bmatrix} \right)= T \left(\begin{bmatrix} - x_1+x_2 \\ y_1+y_2 -\end{bmatrix}\right)= + x_1+x_2 \\ y_1+y_2 +\end{bmatrix}\right)= \begin{bmatrix} x_1 + x_2 \\ y_1 + y_2 \\ 0 -\end{bmatrix} = +\end{bmatrix} = \begin{bmatrix} x_1 \\ y_1 \\ 0 -\end{bmatrix} + +\end{bmatrix} + \begin{bmatrix} x_2 \\ y_2 \\ 0 \end{bmatrix}. @@ -554,7 +489,7 @@ This last vector indeed equals $$ T\left(\begin{bmatrix} x_1 \\ y_1 -\end{bmatrix}\right)+ +\end{bmatrix}\right)+ T\left(\begin{bmatrix} x_2 \\ y_2 \end{bmatrix}\right). @@ -565,16 +500,16 @@ Similarly, for the second property, given any scalar $c$, $$ T\left(c \begin{bmatrix} x_1 \\ y_1 -\end{bmatrix}\right)= +\end{bmatrix}\right)= T \left(\begin{bmatrix} c x_1 \\ cy_1 -\end{bmatrix}\right)= +\end{bmatrix}\right)= \begin{bmatrix} c x_1 \\ c y_1 \\ 0 -\end{bmatrix} = +\end{bmatrix} = c \begin{bmatrix} - x_1 \\ y_1 \\ 0 -\end{bmatrix}= + x_1 \\ y_1 \\ 0 +\end{bmatrix}= cT \left(\begin{bmatrix} x_1 \\ y_1 \end{bmatrix}\right). @@ -582,40 +517,37 @@ $$ So indeed $T$ has the two properties of a linear transformation. - :::::: - - ::::::{prf:example} Consider the mapping - $T:\mathbb{R}^2\rightarrow\mathbb{R}^2$ that sends each vector $ \begin{bmatrix} - x \\ y -\end{bmatrix}$ +$T:\mathbb{R}^2\rightarrow\mathbb{R}^2$ that sends each vector $ \begin{bmatrix} +x \\ y +\end{bmatrix}$ in $\mathbb{R}^2$ to the vector $\begin{bmatrix} x+y \\ xy \end{bmatrix}$: $$ T: \begin{bmatrix} - x \\ y -\end{bmatrix} \mapsto + x \\ y +\end{bmatrix} \mapsto \begin{bmatrix} - x+y \\ xy + x+y \\ xy \end{bmatrix} $$ -This mapping is **not** a linear transformation. +This mapping is **not** a linear transformation. $$ T \left(\begin{bmatrix} - 1 \\ 1 -\end{bmatrix} + + 1 \\ 1 +\end{bmatrix} + \begin{bmatrix} - 1 \\ 2 + 1 \\ 2 \end{bmatrix}\right)= T \left(\begin{bmatrix} -2 \\ 3 -\end{bmatrix}\right) = +2 \\ 3 +\end{bmatrix}\right) = \begin{bmatrix} 5 \\ 6 \end{bmatrix}, @@ -625,19 +557,19 @@ whereas $$ T \left(\begin{bmatrix} - 1 \\ 1 -\end{bmatrix}\right)+ + 1 \\ 1 +\end{bmatrix}\right)+ T \left(\begin{bmatrix} - 1 \\ 2 -\end{bmatrix}\right)= + 1 \\ 2 +\end{bmatrix}\right)= \begin{bmatrix} - 2 \\ 1 -\end{bmatrix} + + 2 \\ 1 +\end{bmatrix} + \begin{bmatrix} - 3 \\ 2 -\end{bmatrix} = + 3 \\ 2 +\end{bmatrix} = \begin{bmatrix} - 5 \\ 3 + 5 \\ 3 \end{bmatrix} \,\neq\, \begin{bmatrix} @@ -650,43 +582,35 @@ The second requirement of a linear transformation is violated as well: $$ T\left(3 \begin{bmatrix} - 1 \\ 1 -\end{bmatrix}\right)= + 1 \\ 1 +\end{bmatrix}\right)= T \left(\begin{bmatrix} - 3 \\ 3 -\end{bmatrix}\right)= + 3 \\ 3 +\end{bmatrix}\right)= \begin{bmatrix} - 6 \\ 9 + 6 \\ 9 \end{bmatrix} \,\,\neq\,\, 3\,T \left(\begin{bmatrix} - 1 \\ 1 -\end{bmatrix} \right)= - 3 + 1 \\ 1 +\end{bmatrix} \right)= + 3 \begin{bmatrix} - 2 \\ 1 -\end{bmatrix} = + 2 \\ 1 +\end{bmatrix} = \begin{bmatrix} - 6 \\ 3 + 6 \\ 3 \end{bmatrix}. $$ - - :::::: - - - - - ::::::{exercise} :label: Exc:LinTrafo:T(x)=x+p - -Let $\mathbf{p}$ be a nonzero vector in $\mathbb{R}^2$. 
Is the translation +Let $\mathbf{p}$ be a nonzero vector in $\mathbb{R}^2$. Is the translation $$ T\!:\mathbb{R}^2 \to \mathbb{R}^2, \quad \mathbf{x} \mapsto \mathbf{x} + \mathbf{p} @@ -699,51 +623,44 @@ a linear transformation? ::::::{solution} Exc:LinTrafo:T(x)=x+p :class: dropdown -The transformation defined by $T(\vect{x}) = \vect{x} + \vect{p}$, with $\vect{p}\neq \vect{0}$ does not have any of the two properties of a linear transformation. +The transformation defined by $T(\vect{x}) = \vect{x} + \vect{p}$, with $\vect{p}\neq \vect{0}$ does not have any of the two properties of a linear transformation. -For instance, since $\vect{p}+\vect{p} \neq \vect{p}$, +For instance, since $\vect{p}+\vect{p} \neq \vect{p}$, $$ T(\vect{x}+\vect{y}) = \vect{x}+\vect{y} + \vect{p} \neq T(\vect{x})+T(\vect{y}) = \vect{x}+ \vect{p} +\vect{y} + \vect{p}. $$ - :::::: Note that {prf:ref}`Ex:LinTrafo:FirstLinearMap` was in fact the first example of a matrix transformation in the {ref}`Subsec:LinTrafo:MatrixTrafo`: $$ \begin{bmatrix} - x \\ y -\end{bmatrix} \mapsto + x \\ y +\end{bmatrix} \mapsto \begin{bmatrix} - x \\ y \\ 0 + x \\ y \\ 0 \end{bmatrix} - = + = \begin{bmatrix} - 1 & 0 \\ 0&1 \\ 0&0 -\end{bmatrix} + 1 & 0 \\ 0&1 \\ 0&0 +\end{bmatrix} \begin{bmatrix} - x \\ y -\end{bmatrix} + x \\ y +\end{bmatrix} $$ - -As we will see: **any** linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ is a matrix transformation. The converse is true as well. This is the content of the next proposition. - - +As we will see: **any** linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ is a matrix transformation. The converse is true as well. This is the content of the next proposition. ::::::{prf:proposition} :label: Prop:LinTrafo:MatrixTrafoIsLinear - Each matrix transformation is a linear transformation. - :::::: - ::::::{prf:proof} This is a direct consequence of the two properties of the matrix-vector product ({prf:ref}`Prop:MatVecProduct:Linearity`) that say @@ -754,15 +671,11 @@ $$ :::::: - - - ::::::{prf:proposition} :label: Prop:LinTrafo:CompositionLintrafos - -Suppose $T: \mathbb{R}^n\to\mathbb{R}^m$ and $S:\mathbb{R}^m\to\mathbb{R}^p$ are linear transformations. -Then the transformation $S\circ T:\mathbb{R}^n\to\mathbb{R}^p $ defined by +Suppose $T: \mathbb{R}^n\to\mathbb{R}^m$ and $S:\mathbb{R}^m\to\mathbb{R}^p$ are linear transformations. +Then the transformation $S\circ T:\mathbb{R}^n\to\mathbb{R}^p $ defined by $$ S\circ T(\mathbf{x}) = S(T(\mathbf{x})) @@ -774,23 +687,22 @@ is a linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^p$. ::::::{prf:remark} -The transformation $S\circ T$ is called the **composition** of the two transformations $S$ and $T$. It is best read as *"$S$ after $T$"*. +The transformation $S\circ T$ is called the **composition** of the two transformations $S$ and $T$. It is best read as _"$S$ after $T$"_. :::::: - ::::::{prf:proof} Suppose that $$ - T(\mathbf{x}+\mathbf{y}) = T(\mathbf{x})+T(\mathbf{y})\quad \text{and} \quad T(c\mathbf{x}) = cT(\mathbf{x}), \quad \text{for}\,\, \mathbf{x}, \mathbf{y} \quad \text{in } \mathbb{R}^n, + T(\mathbf{x}+\mathbf{y}) = T(\mathbf{x})+T(\mathbf{y})\quad \text{and} \quad T(c\mathbf{x}) = cT(\mathbf{x}), \quad \text{for}\,\, \mathbf{x}, \mathbf{y} \quad \text{in } \mathbb{R}^n, \,\, c \text{ in } \mathbb{R} $$ -and likewise for $S$. Then +and likewise for $S$. 
Then

$$
\begin{array}{rl}
-   S\circ T(\mathbf{x}+\mathbf{y}) = S(T(\mathbf{x}+\mathbf{y})) = S( T(\mathbf{x})+T(\mathbf{y})) \!\!\!\!&
+   S\circ T(\mathbf{x}+\mathbf{y}) = S(T(\mathbf{x}+\mathbf{y})) = S( T(\mathbf{x})+T(\mathbf{y})) \!\!\!\!&
  =  S(T(\mathbf{x})) + S(T(\mathbf{y}))  \\
  & = S\circ T(\mathbf{x}) + S\circ T(\mathbf{y})
\end{array}
$$

@@ -803,30 +715,25 @@ $$

Hence $S\circ T$ satisfies the two requirements of a linear transformation.

-
::::::

+In words: the composition/concatenation of two linear transformations is itself a linear transformation.

-
-
-In words: the composition/concatenation of two linear transformations is itself a linear transformation.
-
-
-::::::{exercise}
+::::::{exercise}
:label: Exc:LinTrafo:CombiningLinTrafos

-There are other ways to combine linear transformations.
+There are other ways to combine linear transformations.

-The sum $S = T_1 + T_2$ of two linear transformation $T_1,T_2: \mathbb{R}^n \to \mathbb{R}^m$  is defined as follows
+The sum $S = T_1 + T_2$ of two linear transformations $T_1,T_2: \mathbb{R}^n \to \mathbb{R}^m$ is defined as follows

$$
  S: \mathbb{R}^n \to \mathbb{R}^m, \quad S(\mathbf{x}) = T_1(\mathbf{x}) + T_2(\mathbf{x}).
$$

-And the (scalar) multiple $T_3 = cT_1$ is the transformation
+And the (scalar) multiple $T_3 = cT_1$ is the transformation

$$
-  T_3: \mathbb{R}^n \to \mathbb{R}^m, \quad T_3(\mathbf{x}) = cT_1(\mathbf{x}).
+  T_3: \mathbb{R}^n \to \mathbb{R}^m, \quad T_3(\mathbf{x}) = cT_1(\mathbf{x}).
$$

Show that $S$ and $T_3$ are again linear transformations.

@@ -837,15 +744,14 @@ Show that $S$ and $T_3$ are again linear transformations.
:class: dropdown

The properties of the linear transformations $T_1$ and $T_2$ carry over to $S$ and $T_3$ in the following way.
-We check the properties one by one.
+We check the properties one by one.

-For the sum $S$ we have
+For the sum $S$ we have

<ol type="i">
<li>

-For all vectors $\mathbf{v}_1,\,\mathbf{v}_2$ in $\R^n$ <BR>
-
+For all vectors $\mathbf{v}_1,\,\mathbf{v}_2$ in $\R^n$ <BR>

$$
\begin{array}{rcl}
@@ -860,7 +766,7 @@ $$

</li>
<li>

-And likewise, for all vectors $\mathbf{v}$ in $\mathbb{R}^n$ and all scalars $c$ in $\mathbb{R}$: <BR>
+And likewise, for all vectors $\mathbf{v}$ in $\mathbb{R}^n$ and all scalars $c$ in $\mathbb{R}$: <BR>

%$S(c\mathbf{v}) = T_1(c\mathbf{v})+T_2(c\mathbf{v}) = cT_1(\mathbf{v})+cT_2(\mathbf{v}) = c\big(T_1(\mathbf{v})+T_2(\mathbf{v})\big)= cS(\mathbf{v})$.

$$
@@ -878,43 +784,35 @@ $$

The linearity of $T_3$ is verified in a similar manner.
::::::

-
And now, let us return to matrix transformations.

-
(Subsec:LinTrafo:LinTrafoeqMatrixTrafo)=

-## Standard Matrix for a Linear Transformation
+## Standard Matrix for a Linear Transformation

We have seen that every matrix transformation is a linear transformation. In this subsection we will show that conversely
-every linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$ can be represented by a matrix transformation.
+every linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$ can be represented by a matrix transformation.

The key to constructing a matrix that represents a given linear transformation lies in the following proposition.

-
-
::::::{prf:proposition}
:label: Prop:LinTrafo:ExtendedLinearity

-
-Suppose $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$ is a linear transformation. Then the following property holds: for
-each set of vectors $\mathbf{x}_1, \ldots, \mathbf{x}_k$ in $\mathbb{R}^n$ and each set of numbers $c_1,\ldots,c_k$ in $\mathbb{R}$:
+Suppose $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$ is a linear transformation. Then the following property holds: for
+each set of vectors $\mathbf{x}_1, \ldots, \mathbf{x}_k$ in $\mathbb{R}^n$ and each set of numbers $c_1,\ldots,c_k$ in $\mathbb{R}$:

:::::{math}
:label: Eq:LinTrafo:LinComb

-   T(c_1\mathbf{x}_1+c_2 \mathbf{x}_2+\ldots +c_k \mathbf{x}_k)  =
-   c_1T(\mathbf{x}_1)+c_2T(\mathbf{x}_2)+\ldots +c_kT( \mathbf{x}_k).
+T(c_1\mathbf{x}_1+c_2 \mathbf{x}_2+\ldots +c_k \mathbf{x}_k) =
+c_1T(\mathbf{x}_1)+c_2T(\mathbf{x}_2)+\ldots +c_kT( \mathbf{x}_k).
:::::

-
::::::

-In words: for any linear transformation
-*the image of a linear combination of vectors is equal to the linear combination of their images*.
-
-
+In words: for any linear transformation
+_the image of a linear combination of vectors is equal to the linear combination of their images_.

::::::{prf:proof}
Suppose $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$ is a linear transformation.

@@ -933,160 +831,145 @@ $$
   T(c_1\mathbf{x}_1+c_2 \mathbf{x}_2+\ldots +c_k \mathbf{x}_k) &=& T(c_1\mathbf{x}_1)+T(c_2 \mathbf{x}_2+\ldots +c_k \mathbf{x}_k) \\
  &=& \quad \ldots \\
-  &=&  T(c_1\mathbf{x}_1)+T(c_2 \mathbf{x}_2)+\ldots + T(c_k \mathbf{x}_k)
+ &=& T(c_1\mathbf{x}_1)+T(c_2 \mathbf{x}_2)+\ldots + T(c_k \mathbf{x}_k)
\end{array}
$$

and then apply rule (ii) to each term.

-
::::::

-
-
-
-
-
::::::{prf:example}
:label: Ex:LinTrafo:ExtendedLinearity

-
-Suppose $T: \mathbb{R}^3 \to \mathbb{R}^2$ is a linear transformation, and we know that for
+Suppose $T: \mathbb{R}^3 \to \mathbb{R}^2$ is a linear transformation, and we know that for

$$
  \vect{a}_1 =
\begin{bmatrix}
    1 \\ 0 \\ 0
-\end{bmatrix}, \quad
-  \vect{a}_2 =
+\end{bmatrix}, \quad
+  \vect{a}_2 =
\begin{bmatrix}
    1 \\ 1 \\ 0
\end{bmatrix},
  \quad \vect{a}_3 =
\begin{bmatrix}
    1 \\ 1 \\ 1
\end{bmatrix}
$$

-the images under $T$ are given by
+the images under $T$ are given by

$$
  T(\vect{a}_1) = \vect{b}_1 =
\begin{bmatrix}
    1 \\ 2
-\end{bmatrix}, \quad T(\vect{a}_2) = \vect{b}_2 =
+\end{bmatrix}, \quad T(\vect{a}_2) = \vect{b}_2 =
\begin{bmatrix}
    3 \\ -1
\end{bmatrix},
  \quad \text{and} \quad T(\vect{a}_3) = \vect{b}_3 =
\begin{bmatrix}
    2 \\ -2
\end{bmatrix}.
$$

-Then for the vector
+Then for the vector

$$
  \vect{v} =
\begin{bmatrix}
    4 \\ 1 \\ -1
\end{bmatrix} = 3 \vect{a}_1 + 2 \vect{a}_2 - 1 \vect{a}_3
$$

it follows that

$$
  T(\vect{v}) = 3 \vect{b}_1 + 2 \vect{b}_2 + (-1) \vect{b}_3 =
  3
\begin{bmatrix}
    1 \\ 2
-\end{bmatrix}
-  + 2
+\end{bmatrix} +
+ 2
\begin{bmatrix}
    3 \\ -1
-\end{bmatrix}
-  + (-1)
+\end{bmatrix} +
+ (-1)
\begin{bmatrix}
    2 \\ -2
-\end{bmatrix}=
+\end{bmatrix}=
\begin{bmatrix}
    7 \\ 6
\end{bmatrix}.
$$

-
-
::::::
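+Numerically, this computation is a single matrix-vector product; a sketch (assuming NumPy, with names of our own choosing):
+
+```python
+import numpy as np
+
+B = np.array([[1,  3,  2],
+              [2, -1, -2]])   # columns are the images b_1, b_2, b_3
+c = np.array([3, 2, -1])      # coefficients of v = 3a_1 + 2a_2 - a_3
+
+print(B @ c)                  # [7 6], i.e. T(v)
+```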
+The central idea illustrated in {prf:ref}`Ex:LinTrafo:ExtendedLinearity`, which is in fact a direct consequence of {prf:ref}`Prop:LinTrafo:ExtendedLinearity`, is the following:
+a linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is completely specified by the images
+$ T(\mathbf{a}_1), T(\mathbf{a}_2), \ldots , T(\mathbf{a}_n)$ of a set of vectors $\{\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n\}$ that spans $\mathbb{R}^n$.

+The simplest set of vectors that spans the whole space $\mathbb{R}^n$ is
+the standard basis for $\mathbb{R}^n$ which was introduced in the section {ref}`Sec:LinearCombinations`.

-The central idea illustrated in {prf:ref}`Ex:LinTrafo:ExtendedLinearity`, which is in fact a direct consequence of {prf:ref}`Prop:LinTrafo:ExtendedLinearity`, is the following:
-
-a linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is completely specified by the images
-$ T(\mathbf{a}_1), T(\mathbf{a}_2), \ldots , T(\mathbf{a}_n)$ of a set of vectors $\{\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_n\}$ that spans $\mathbb{R}^n$. <BR>
-This idea is also hovering over {prf:ref}`Ex:LinTrafo:ExtendedLinearity`.
-
-The simplest set of vectors that spans the whole space $\mathbb{R}^n$ is
-the standard basis for $\mathbb{R}^n$ which was introduced in the section {ref}`Sec:LinearCombinations`.
-
-Recall that this is the set of vectors
+Recall that this is the set of vectors

:::{math}
:label: Eq:LinTrafo:StandardBasis

-\left(\vect{e}_1,\mathbf{e}_2, \ldots, \mathbf{e}_n\right)=
+\left(\vect{e}_1,\mathbf{e}_2, \ldots, \mathbf{e}_n\right)=
\left(\begin{bmatrix}
 1 \\ 0 \\ 0 \\ \vdots \\ 0
\end{bmatrix},
\begin{bmatrix}
 0 \\ 1 \\ 0 \\ \vdots \\ 0
-\end{bmatrix},
-  \quad \cdots \quad ,
+\end{bmatrix},
+\quad \cdots \quad ,
\begin{bmatrix}
 0 \\ 0 \\ 0 \\ \vdots \\ 1
\end{bmatrix}\right).
:::

-The next example gives an illustration of the above, and it also leads the way to
+The next example gives an illustration of the above, and it also leads the way to
the construction of a matrix for an arbitrary linear transformation.

-
-
::::::{prf:example}
:label: Ex:LinTrafo:StandardMatrixIntro

-
-Suppose $T$ is a linear transformation from $\mathbb{R}^2$ to $\mathbb{R}^2$ for which
+Suppose $T$ is a linear transformation from $\mathbb{R}^2$ to $\mathbb{R}^2$ for which

$$
  T(\mathbf{e}_1) = \mathbf{a}_1 =
\begin{bmatrix}
1 \\2
-\end{bmatrix},
-  \quad
-  T(\mathbf{e}_2) = \mathbf{a}_2 =
+\end{bmatrix},
+ \quad
+ T(\mathbf{e}_2) = \mathbf{a}_2 =
\begin{bmatrix}
4 \\3
\end{bmatrix}.
$$

-Then for an arbitrary vector
+Then for an arbitrary vector

$$
  \mathbf{x} =
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}
 =
  x_1
\begin{bmatrix}
1\\0
-\end{bmatrix} +
-  x_2
+\end{bmatrix} +
+ x_2
\begin{bmatrix}
0\\1
-\end{bmatrix}
+\end{bmatrix}
 = x_1\mathbf{e}_1 + x_2\mathbf{e}_2,
$$

@@ -1097,11 +980,11 @@ $$
   T(\mathbf{x}) &=& x_1T(\mathbf{e}_1) + x_2T(\mathbf{e}_2) \\
  &=& x_1
\begin{bmatrix}
-1 \\2
-\end{bmatrix}
+1 \\2
+\end{bmatrix}
 + x_2
\begin{bmatrix}
-4 \\3
+4 \\3
\end{bmatrix} \,\,=\,\,\,
\begin{bmatrix}
 1 &4 \\2 &3
@@ -1113,52 +996,40 @@ So we see that

$$
  T(\mathbf{x}) = A \mathbf{x}, \quad\text{where} \quad
-  A =
+ A =
\begin{bmatrix}
-   T(\mathbf{e}_1) & T(\mathbf{e}_2)
+ T(\mathbf{e}_1) & T(\mathbf{e}_2)
\end{bmatrix}.
$$

-
-
::::::

-
-
-
-
-
-
::::::{exercise}
:label: Exc:LinTrafo:MatrixForFirstExample

-
Show that the procedure of {prf:ref}`Ex:LinTrafo:StandardMatrixIntro` applied to the linear transformation of {prf:ref}`Ex:LinTrafo:FirstLinearMap` indeed yields the matrix

$$
  A =  \begin{bmatrix}
-     1 & 0 \\ 0 & 1 \\ 0 & 0
+ 1 & 0 \\ 0 & 1 \\ 0 & 0
\end{bmatrix}.
$$

::::::

-
-
-
-::::::{solution} Exc:LinTrafo:MatrixForFirstExample
+::::::{solution} Exc:LinTrafo:MatrixForFirstExample
:class: dropdown

-Consider the linear transformation
+Consider the linear transformation
$T:\mathbb{R}^2\rightarrow\mathbb{R}^3$ that sends each vector $ \begin{bmatrix}
x \\ y
-\end{bmatrix}$
+\end{bmatrix}$
in $\mathbb{R}^2$ to the vector $\begin{bmatrix}
  x \\ y \\ 0
\end{bmatrix}$.

It holds that

$$
  T(\vect{e}_1) = T\left(\begin{bmatrix} 1\\ 0 \end{bmatrix}\right) =
\begin{bmatrix} 1 \\ 0 \\ 0  \end{bmatrix}, \quad
-  T(\vect{e}_2) = T\left(\begin{bmatrix} 0\\ 1 \end{bmatrix}\right) =
+ T(\vect{e}_2) = T\left(\begin{bmatrix} 0\\ 1 \end{bmatrix}\right) =
\begin{bmatrix} 0 \\ 1 \\ 0  \end{bmatrix}.
$$

@@ -1172,57 +1043,49 @@ $$

::::::

-The reasoning of {prf:ref}`Ex:LinTrafo:StandardMatrixIntro` can be generalized. This is the content of the next theorem.
-
-
+The reasoning of {prf:ref}`Ex:LinTrafo:StandardMatrixIntro` can be generalized. This is the content of the next theorem.

::::::{prf:theorem}
:label: Thm:LinTrafo:LinTrafo=MatrixTrafo

+Each linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is a matrix transformation.

-Each linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is a matrix transformation.
-
-More specific, if $T: \mathbb{R}^n \to \mathbb{R}^m$ is linear, then for each $\mathbf{x}$ in $\mathbb{R}^n$
+More specifically, if $T: \mathbb{R}^n \to \mathbb{R}^m$ is linear, then for each $\mathbf{x}$ in $\mathbb{R}^n$

:::::{math}
:label: Eq:Lintrafo:StandardMatrix

-   T(\mathbf{x}) = A\mathbf{x}, \quad \text{where} \quad
-   A =
+T(\mathbf{x}) = A\mathbf{x}, \quad \text{where} \quad
+A =
\begin{bmatrix}
-      T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
+T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
\end{bmatrix}.
:::::

-
-
::::::

-
-
-
::::::{prf:proof}

We can more or less copy the derivation in {prf:ref}`Ex:LinTrafo:StandardMatrixIntro`.

First of all, any vector $\mathbf{x}$ is a linear combination of the standard basis:

$$
  \mathbf{x} =
\begin{bmatrix}
x_1\\x_2\\ \vdots \\ x_n
\end{bmatrix}
 =
  x_1
\begin{bmatrix}
1 \\ 0 \\ \vdots \\ 0
\end{bmatrix} +
  x_2
\begin{bmatrix}
0 \\ 1 \\ \vdots \\ 0
\end{bmatrix} + \ldots +
  x_n
\begin{bmatrix}
0 \\ 0 \\ \vdots \\ 1
\end{bmatrix},
$$

@@ -1243,182 +1106,146 @@ The last expression is a linear combination of $n$ vectors in $\mathbb{R}^m$, an

$$
  x_1 T(\mathbf{e}_1) + x_2 T(\mathbf{e}_2) + \ldots + x_n T(\mathbf{e}_n) = \begin{bmatrix}
-     T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
+ T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
\end{bmatrix} \mathbf{x}.
$$

-
-
::::::

-
-
-
-
-
::::::{prf:definition}
:label: Dfn:LinTrafo:StandardMatrix

-
-For a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$, the matrix
-
-
+For a linear transformation $T:\mathbb{R}^n \to \mathbb{R}^m$, the matrix

:::{math}
:label: Eq:LinTrafo:StandardMatrix

-
-  \begin{bmatrix}
-     T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
+\begin{bmatrix}
+T(\mathbf{e}_1) & T(\mathbf{e}_2) & \ldots & T(\mathbf{e}_n)
\end{bmatrix}
:::

-
-
-is called the **standard matrix** of $T$.
-
+is called the **standard matrix** of $T$.

::::::

-
-
-
-In the section {ref}`Sec:GeomLinTrans` you will learn how to build standard matrices for rotations, reflections and other geometrical mappings.
+In the section {ref}`Sec:GeomLinTrans` you will learn how to build standard matrices for rotations, reflections and other geometrical mappings.
For now let us look at a more "algebraic" example.

-
-
::::::{prf:example}
:label: Ex:LinTrafo:MatrixToLinearMap

-
-Consider the transformation
+Consider the transformation

$$
  T:
\begin{bmatrix}
x \\ y \\ z
-\end{bmatrix} \mapsto
+\end{bmatrix} \mapsto
\begin{bmatrix}
x-y \\ 2y+3z \\ x+y-z
\end{bmatrix}.
$$

-It can be checked that the transformation has the two properties of a linear transformation according to the definition.
-Note that
+It can be checked that the transformation has the two properties of a linear transformation according to the definition.
+Note that

$$
  T(\mathbf{e}_1) =
\begin{bmatrix}
1 \\ 0 \\ 1
-\end{bmatrix}, \quad
-  T(\mathbf{e}_2) =
+\end{bmatrix}, \quad
+ T(\mathbf{e}_2) =
\begin{bmatrix}
-1 \\ 2 \\ 1
\end{bmatrix}, \quad \text{and} \quad
  T(\mathbf{e}_3) =
\begin{bmatrix}
0 \\ 3 \\ -1
\end{bmatrix}.
$$

-So we find that the matrix $[T]$ of $T$ is given by
+So we find that the matrix $[T]$ of $T$ is given by

$$
  [T] =
\begin{bmatrix}
1 & -1 & 0 \\ 0 &2&3 \\ 1 & 1 & -1
\end{bmatrix}
$$

This is the standard matrix of $T$.

-
::::::

-
-
-
-
-
::::::{exercise}
:label: Exc:LinTrafo:FillBlanks

+In the previous example we could have found the matrix just by inspection.

-In the previous example we could have found the matrix just by inspection.
-
-For the slightly different transformation $T:\R \to \R$ given by
+For the slightly different transformation $T:\R^3 \to \R^3$ given by

$$
  T:
\begin{bmatrix}
x \\ y \\ z
-\end{bmatrix} \mapsto
+\end{bmatrix} \mapsto
\begin{bmatrix}
3x-z \\ y+4z \\ x-y+2z
\end{bmatrix},
$$

-can you fill in the blanks in the following equation?
-
+can you fill in the blanks in the following equation?

$$
\begin{bmatrix}
3x-z \\ y+4z \\ x-y+2z
-\end{bmatrix} =
+\end{bmatrix} =
\begin{bmatrix}
.. & .. & .. \\ .. & .. & .. \\ .. & .. & ..
\end{bmatrix}
\begin{bmatrix}
x \\ y \\ z
-\end{bmatrix}.
+\end{bmatrix}.
$$

If you can, you will have shown that $T$ is a matrix transformation, and as a direct consequence $T$ is a linear transformation.

-
::::::
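+The recipe behind {prf:ref}`Ex:LinTrafo:MatrixToLinearMap` is easy to automate: feed the standard basis vectors to $T$ and place the images side by side as columns. A sketch (assuming NumPy; the function `T` below encodes the transformation of that example):
+
+```python
+import numpy as np
+
+def T(v):
+    x, y, z = v
+    return np.array([x - y, 2*y + 3*z, x + y - z])
+
+# The standard matrix has the images of e_1, e_2, e_3 as its columns.
+M = np.column_stack([T(e) for e in np.eye(3)])
+print(M)
+# [[ 1. -1.  0.]
+#  [ 0.  2.  3.]
+#  [ 1.  1. -1.]]
+```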
-
-
-
To conclude we consider an example that refers back to {prf:ref}`Prop:LinTrafo:CompositionLintrafos`, and which will to a large extent pave the road for the product of two matrices.

-
-
::::::{prf:example}
:label: Ex:LinTrafo:ProductOfMatrices

-
-Suppose $T:\mathbb{R}^2 \to \mathbb{R}^3$ and $S:\mathbb{R}^3 \to \mathbb{R}^3$ are the matrix transformations given by
+Suppose $T:\mathbb{R}^2 \to \mathbb{R}^3$ and $S:\mathbb{R}^3 \to \mathbb{R}^3$ are the matrix transformations given by

$$
  T(\mathbf{x}) = A\mathbf{x} =
\begin{bmatrix}
    1&2 \\ 3&4 \\ 1&0
\end{bmatrix} \mathbf{x}
  \quad \text{and} \quad
  S(\mathbf{y}) = B\mathbf{y} = \begin{bmatrix}
    1&0 &1 \\ 1 & -1 &2 \\ -1&1&-3
\end{bmatrix} \mathbf{y}.
$$

-From {prf:ref}`Prop:LinTrafo:CompositionLintrafos` we know that the composition
-$S\circ T: \mathbb{R}^2 \to \mathbb{R}^3$ is also a linear transformation. What is the (standard) matrix of $S\circ T$?
+From {prf:ref}`Prop:LinTrafo:CompositionLintrafos` we know that the composition
+$S\circ T: \mathbb{R}^2 \to \mathbb{R}^3$ is also a linear transformation. What is the (standard) matrix of $S\circ T$?

-For this we need the images of the unit vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ in $\mathbb{R}^2$.
+For this we need the images of the unit vectors $\mathbf{e}_1$ and $\mathbf{e}_2$ in $\mathbb{R}^2$.
For each vector we first apply $T$ and then $S$. For $\mathbf{e}_1$ this gives

$$
  T(\mathbf{e}_1) =
\begin{bmatrix}
    1&2 \\ 3&4 \\ 1&0
-\end{bmatrix}
+\end{bmatrix}
\begin{bmatrix}
    1\\0
-\end{bmatrix} =
+\end{bmatrix} =
\begin{bmatrix}
 1 \\ 3 \\ 1
\end{bmatrix},
$$

and then

$$
  S (T(\mathbf{e}_1)) =
\begin{bmatrix}
    1&0 &1 \\ 1 & -1 &2 \\ -1&1&-3
\end{bmatrix}
\begin{bmatrix}
 1 \\ 3 \\ 1
@@ -1442,19 +1269,19 @@ $$
Likewise for $\mathbf{e}_2$:

$$
  T(\mathbf{e}_2) =
\begin{bmatrix}
    1&2 \\ 3&4 \\ 1&0
-\end{bmatrix}
+\end{bmatrix}
\begin{bmatrix}
    0\\1
-\end{bmatrix} =
+\end{bmatrix} =
\begin{bmatrix}
 2\\4\\0
\end{bmatrix} \,\,\Longrightarrow\,\,
  S (T(\mathbf{e}_2)) =
\begin{bmatrix}
    1&0 &1 \\ 1 & -1 &2 \\ -1&1&-3
\end{bmatrix}
\begin{bmatrix}
 2 \\ 4 \\ 0
@@ -1473,217 +1300,191 @@

S\circ T(\mathbf{e_1})&S\circ T(\mathbf{e_2})
\end{bmatrix} \,\,=\,\,
\begin{bmatrix}
 2 &2 \\ 0&-2 \\ -1&2
-\end{bmatrix}.
+\end{bmatrix}.
$$

In the section {ref}`Sec:MatrixOps` we will define the product of two matrices precisely in such a way that

$$
\begin{bmatrix}
    1&0 &1 \\ 1 & -1 &2 \\ -1&1&-3
\end{bmatrix}
\begin{bmatrix}
    1&2 \\ 3&4 \\ 1&0
-\end{bmatrix} =
+\end{bmatrix} =
\begin{bmatrix}
 2 &2 \\ 0&-2 \\ -1&2
\end{bmatrix}.
$$

-
-
::::::
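+The same computation can be carried out numerically; a sketch (assuming NumPy) that builds the matrix of $S\circ T$ column by column and compares it with the product $BA$:
+
+```python
+import numpy as np
+
+A = np.array([[1, 2], [3, 4], [1, 0]])
+B = np.array([[1, 0, 1], [1, -1, 2], [-1, 1, -3]])
+
+# The images of e_1 and e_2 under S o T become the columns of its matrix.
+C = np.column_stack([B @ (A @ e) for e in np.eye(2)])
+print(C)                        # [[ 2.  2.] [ 0. -2.] [-1.  2.]]
+print(np.allclose(C, B @ A))    # True
+```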
## Grasple Exercises

%::::::{grasple}
%:url: https://embed.grasple.com/exercises/97a589a8-54f9-4688-bd4d-a17a9585813b?id=69465
-%:label: grasple_exercise_3_1_1
+%:label: grasple_exercise_3_1_1
%:dropdown:
%:description: This is {prf:ref}`Ex:LinTrafo:SecondMatrixTrafo`.

%::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/3f14573a-1d4c-4a4b-ae48-ccb168005702?id=70373
:label: grasple_exercise_3_1_2
:dropdown:
:description: To specify the domain and the codomain of a linear transformation

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/b80d9889-bd46-45c6-a9cb-d056aa315232?id=70374
:label: grasple_exercise_3_1_3
:dropdown:
:description: To find the size of the matrix of a linear transformation

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/be6a768d-c60d-4ed6-81a7-5dea71b4a1a5?id=70375
:label: grasple_exercise_3_1_4
:dropdown:
-:description: To find image of two vectors under $T(\vect{x}) = A\vect{x}$.
+:description: To find image of two vectors under $T(\vect{x}) = A\vect{x}$.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/
:label: grasple_exercise_3_1_5
:dropdown:
-:description: For linear map $T$, find $T(c\vect{u})$ and $T(\vect{u}+\vect{v})$ if $T(\vect{u})$ and $T(\vect{v})$ are given.
+:description: For linear map $T$, find $T(c\vect{u})$ and $T(\vect{u}+\vect{v})$ if $T(\vect{u})$ and $T(\vect{v})$ are given.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/93048f7c-b755-4445-a532-949f34136096?id=70398
:label: grasple_exercise_3_1_6
:dropdown:
-:description: For linear map $T:\R^2 \to \R^2$, find $T((x1,x2))$ if $T(\vect{e}_1)$ and $T(\vect{e}_2)$ are given
+:description: For linear map $T:\R^2 \to \R^2$, find $T((x1,x2))$ if $T(\vect{e}_1)$ and $T(\vect{e}_2)$ are given

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/2af6559f-8871-494d-abce-d4263d530c69?id=70381
:label: grasple_exercise_3_1_7
:dropdown:
:description: Find all vectors $\vect{w}$ for which $T(\vect{w}) = \vect{u}$.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/ce6e4a52-c985-43ee-92cb-2762a467ac5a?id=70383
:label: grasple_exercise_3_1_8
:dropdown:
-:description: Find vectors $\vect{w}$ for which $T(\vect{w}) = \vect{u}$.
+:description: Find vectors $\vect{w}$ for which $T(\vect{w}) = \vect{u}$.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/37b6bd46-8cfc-4c98-a5e8-53aa41c87dcf?id=70384
:label: grasple_exercise_3_1_9
:dropdown:
:description: Find vectors $\vect{w}$ for which $T(\vect{w}) = \vect{u}$.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/c5b2a642-fd50-43f6-9346-c37a0ffe1a40?id=70386
:label: grasple_exercise_3_1_10
:dropdown:
:description: Find vectors $\vect{w}$ for which $T(\vect{w}) = \vect{u}$.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/c3d009c0-62d6-4ae3-8ca1-04a5d2730455?id=70406
:label: grasple_exercise_3_1_11
:dropdown:
:description: To show that a given transformation is non-linear.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/b9a4b128-f2c2-4612-a7f5-271c4e69aa70?id=70418
:label: grasple_exercise_3_1_12
:dropdown:
-:description: Finding an image and a pre-image of $T:\R^2 \to \R^2$ using a picture.
+:description: Finding an image and a pre-image of $T:\R^2 \to \R^2$ using a picture.

::::::

-::::::{grasple}
+::::::{grasple}
:url: https://embed.grasple.com/exercises/4058e54a-74f2-414e-9693-420abbc62677?id=70391
:label: grasple_exercise_3_1_13
:dropdown:
:description: 'To give a geometric description of $T: \vect{x} \mapsto A\vect{x}$.'
:::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/990bf561-629e-430f-b8d0-e757c63fe15c?id=70392 :label: grasple_exercise_3_1_14 :dropdown: :description: 'To give a geometric description of $T: \vect{x} \mapsto A\vect{x}$.' :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/4e5d3f55-9257-4023-9739-5df0a1a9f277?id=70410 :label: grasple_exercise_3_1_15 :dropdown: :description: To find the matrix of the transformation that sends $(x,y)$ to $x\vect{a}_1 + y\vect{a}_2$. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/9efa96e2-483d-4b2c-a58a-ba197bc09a81?id=70411 :label: grasple_exercise_3_1_16 :dropdown: :description: To find the matrix of the transformation that sends $(x,y)$ to $x\vect{a}_1 + y\vect{a}_2$. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/729cba57-72d1-4d54-8cf9-c9946952bf9d?id=70412 :label: grasple_exercise_3_1_17 :dropdown: -:description: To rewrite $T:\R^3 \to \R^2$ to standard form. +:description: To rewrite $T:\R^3 \to \R^2$ to standard form. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/b4bb3730-f14c-4a60-a8b8-6b895cf93ac5?id=70413 :label: grasple_exercise_3_1_18 :dropdown: :description: To find the standard matrix for $T:\R^4 \to \R$. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/34bb6386-7e7c-411b-83a1-09bbaf1106c5?id=70415 :label: grasple_exercise_3_1_19 :dropdown: -:description: To find the standard matrix for $T:\R^2 \to \R^2$ if $T(\vect{v}_1)$ and $T(\vect{v}_2)$ are given. +:description: To find the standard matrix for $T:\R^2 \to \R^2$ if $T(\vect{v}_1)$ and $T(\vect{v}_2)$ are given. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/ce8ba17c-0a17-4d5e-b4b7-5c277c7e8df8?id= :label: grasple_exercise_3_1_20 :dropdown: -:description: To find the standard matrix for $T:\R^2 \to \R^3$ if $T(\vect{v}_1)$ and $T(\vect{v}_2)$ are given. +:description: To find the standard matrix for $T:\R^2 \to \R^3$ if $T(\vect{v}_1)$ and $T(\vect{v}_2)$ are given. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/ce8ba17c-0a17-4d5e-b4b7-5c277c7e8df8?id=70416 :label: grasple_exercise_3_1_21 :dropdown: -:description: If $T(\vect{0}) = \vect{0}$, is $T$ (always) linear? +:description: If $T(\vect{0}) = \vect{0}$, is $T$ (always) linear? :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/3f992e7a-19e3-4b83-8d90-db86e323ea94?id=69296 :label: grasple_exercise_3_1_22 :dropdown: -:description: To show that $T(\vect{0}) = \vect{0}$ for a linear transformation. +:description: To show that $T(\vect{0}) = \vect{0}$ for a linear transformation. :::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/94d618e0-de21-491c-ad44-8e29974e0303?id=71098 :label: grasple_exercise_3_1_23 :dropdown: -:description: (T/F) If $\{\vect{v}_1,\vect{v}_2,\vect{v}_3\}$ is linearly dependent, then $\{T(\vect{v}_1),T(\vect{v}_2),T(\vect{v}_3)\}$ is also linearly dependent? +:description: (T/F) If $\{\vect{v}_1,\vect{v}_2,\vect{v}_3\}$ is linearly dependent, then $\{T(\vect{v}_1),T(\vect{v}_2),T(\vect{v}_3)\}$ is also linearly dependent? 
:::::: - -::::::{grasple} +::::::{grasple} :url: https://embed.grasple.com/exercises/f983b627-10c2-4dd6-a273-2a33e99d0ded?id=71101 :label: grasple_exercise_3_1_24 :dropdown: -:description: (T/F) If $\{\vect{v}_1,\vect{v}_2,\vect{v}_3\}$ is linearly independent, then $\{T(\vect{v}_1),T(\vect{v}_2),T(\vect{v}_3)\}$ is also linearly independent? +:description: (T/F) If $\{\vect{v}_1,\vect{v}_2,\vect{v}_3\}$ is linearly independent, then $\{T(\vect{v}_1),T(\vect{v}_2),T(\vect{v}_3)\}$ is also linearly independent? ::::::