Eigenvalues and Eigenvectors Explained

4 Minute Read

Eigenvalues and eigenvectors take their name from the German word eigen, meaning "its own". Together they describe the directions a linear transformation preserves. Imagine an exercise band being stretched: an arrow drawn straight along the length of the band does not change direction during the stretch, while a line drawn diagonally across the band does. The first arrow is an eigenvector, because its direction is preserved by the transformation.

Eigenvalues, represented by \(\lambda\), are the scalars for which some nonzero vector \(x\) satisfies:

\[ Ax = \lambda x\]

... where the \(Ax\) refers to matrix-vector multiplication, and \(\lambda x\) refers to scalar multiplication.

An "enlargement" means \(\lambda > 1\), shrinking means \(0 < \lambda < 1\), \(\lambda = 1\) means no change, and a negative \(\lambda\) reverses the vector's direction. Rotating the plane by \(60^{\circ}\) changes the direction of every vector, so that transformation has no real eigenvalues or eigenvectors at all. In the example below, the vector \(x\) is an eigenvector of the matrix \(A\) with eigenvalue \(\lambda = 4\), so \(Ax = \lambda x\):

\[ Ax = \begin{bmatrix}3 & 1\\1 & 3\end{bmatrix} \begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}4\\4\end{bmatrix} \hspace{1.5em} \hspace{1.5em} 4x = 4 \begin{bmatrix}1\\1\end{bmatrix} = \begin{bmatrix}4\\4\end{bmatrix}\]
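This check is easy to reproduce numerically; here is a quick sketch with NumPy:

```python
import numpy as np

A = np.array([[3, 1], [1, 3]])
x = np.array([1, 1])

# A x and lambda x give the same vector, so x is an
# eigenvector of A with eigenvalue 4.
print(A @ x)   # [4 4]
print(4 * x)   # [4 4]
```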

Deriving eigenvalues

The eigenvalues of a square matrix are the solutions \(\lambda\) of the equation:

\[ \det (A - \lambda I) = 0\]

For example:

\[ \det \bigg( \begin{bmatrix}4 & 3\\2 & 3\end{bmatrix} - \begin{bmatrix}\lambda & 0\\0 & \lambda\end{bmatrix} \bigg) = \det \bigg( \begin{bmatrix}(4 - \lambda) & 3\\2 & (3 - \lambda)\end{bmatrix} \bigg)\]

\[ = (4 - \lambda)(3 - \lambda) - (2 \cdot 3) = \lambda^2 - 7\lambda + 6\]

\(\lambda^2 - 7\lambda + 6\) is the characteristic polynomial. Factoring gives \((\lambda - 1)(\lambda - 6)\), so the eigenvalues are \(1\) and \(6\). (Higher-degree characteristic polynomials can be factored by other means, including polynomial long division and graphing.)
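If NumPy is available, `np.linalg.eigvals` recovers the same roots of the characteristic polynomial directly:

```python
import numpy as np

A = np.array([[4, 3], [2, 3]])

# The eigenvalues are the roots of det(A - lambda I) = lambda^2 - 7 lambda + 6.
eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues)   # [1. 6.]
```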

Deriving eigenvectors

To find a corresponding eigenvector, substitute a \(\lambda\) value into \(B = A - \lambda I\) and solve \(B\vec{x} = 0\). For \(\lambda = 1\):

\[ B = \begin{bmatrix}(4 - 1) & 3\\2 & (3 - 1)\end{bmatrix} = \begin{bmatrix}3 & 3\\2 & 2\end{bmatrix}\]

\[ \begin{bmatrix}3 & 3\\2 & 2\end{bmatrix} \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix} \rightarrow \begin{bmatrix}3 & 3 & 0\\2 & 2 & 0\end{bmatrix} \rightarrow RREF \rightarrow \begin{bmatrix}1 & 1 & 0\\0 & 0 & 0\end{bmatrix}\]

From \(x_1 + x_2 = 0\), let \(x_2 = 1\); then \(x_1 = -1\). The eigenvector is \(\begin{bmatrix}-1\\\phantom{-}1\end{bmatrix}\). We can check this:

\[ \begin{bmatrix}4 & 3\\2 & 3\end{bmatrix} \begin{bmatrix}-1\\\phantom{-}1\end{bmatrix} = \begin{bmatrix}-1\\\phantom{-}1\end{bmatrix} \hspace{1.5em} \hspace{1.5em} 1x = 1 \begin{bmatrix}-1\\\phantom{-}1\end{bmatrix} = \begin{bmatrix}-1\\\phantom{-}1\end{bmatrix}\]
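The same eigenvector can be recovered with `np.linalg.eig`. Note that NumPy returns unit-length eigenvectors, so the hand-derived \((-1, 1)\) shows up scaled; rescaling makes the match visible:

```python
import numpy as np

A = np.array([[4, 3], [2, 3]])
w, v = np.linalg.eig(A)

# Column i of v is an eigenvector for eigenvalue w[i].
i = np.argmin(np.abs(w - 1))   # pick the lambda = 1 eigenpair
x = v[:, i]
x = x / x[1]                   # rescale so x2 = 1, matching the derivation
print(x)                       # [-1.  1.]
```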

Deriving eigenspaces

An eigenspace is the null space of \(A - \lambda I\): all of the eigenvectors for a given \(\lambda\), together with the zero vector. So for \(\lambda = 6\):

\[ E_6 = null(A - 6I) = null\bigg(\begin{bmatrix}(4 - 6) & 3\\2 & (3 - 6)\end{bmatrix}\bigg) = null\bigg(\begin{bmatrix}-2 & \phantom{-}3\\\phantom{-}2 & -3\end{bmatrix}\bigg)\]

\[ \begin{bmatrix}-2 & \phantom{-}3 & 0\\\phantom{-}2 & -3 & 0\end{bmatrix} \rightarrow RREF \rightarrow \begin{bmatrix}1 & -\frac{3}{2} & 0\\0 & \phantom{-}0 & 0\end{bmatrix}\]

\[ x_1 = \frac{3}{2}x_2\text{, }x_2 = \text{(arbitrary, let it be 1)}\]

\[ \begin{bmatrix}x_1\\x_2\end{bmatrix} = x_2 \begin{bmatrix}\frac{3}{2}\\1\end{bmatrix}\text{. Thus, the eigenspace } E_6 = \text{span}\Bigg\{\begin{bmatrix}\frac{3}{2}\\1\end{bmatrix}\Bigg\}.\]
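One way to compute this null space numerically is from the SVD of \(A - \lambda I\): the right-singular vectors whose singular values are (near) zero span the null space. A sketch, with the tolerance `1e-10` chosen arbitrarily:

```python
import numpy as np

A = np.array([[4.0, 3.0], [2.0, 3.0]])
lam = 6.0
B = A - lam * np.eye(2)

# Rows of vt with (near-)zero singular values span null(B), i.e. E_6.
_, s, vt = np.linalg.svd(B)
null_basis = vt[s < 1e-10]
v = null_basis[0]
v = v / v[1]           # rescale so x2 = 1, matching the derivation
print(v)               # [1.5 1. ]
```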

Computing matrix powers

If \(\lambda\) is an eigenvalue of \(A\) with eigenvector \(x\), then for any positive integer \(n\), \(\lambda^n\) is an eigenvalue of \(A^n\) with the same eigenvector \(x\).

Thus, \(A^{n}x = \lambda^{n}x\).

\[ A^{10}x = \begin{bmatrix}0 & 1\\2 & 1\end{bmatrix}^{10} \begin{bmatrix}5\\1\end{bmatrix}\]

\[ \begin{matrix} \lambda_1 = -1 & \lambda_2 = 2\\ v_1 = \begin{bmatrix}\phantom{-}1\\-1\end{bmatrix} & v_2 = \begin{bmatrix}1\\2\end{bmatrix} \end{matrix}\]

Note that \(v_1\) and \(v_2\) are linearly independent, so they form a basis for \(\mathbb{R}^2\); in particular, our vector \(x\) can be written as a combination of them:

\[ \begin{bmatrix}5\\1\end{bmatrix} = 3 \begin{bmatrix}\phantom{-}1\\-1\end{bmatrix} + 2 \begin{bmatrix}1\\2\end{bmatrix}\]
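The coefficients 3 and 2 come from solving the linear system whose columns are \(v_1\) and \(v_2\); a quick NumPy sketch:

```python
import numpy as np

v1 = np.array([1, -1])
v2 = np.array([1, 2])
x = np.array([5, 1])

# Solve [v1 v2] @ c = x for the coordinates of x in the eigenbasis.
c = np.linalg.solve(np.column_stack([v1, v2]), x)
print(c)   # [3. 2.]
```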

So, we can expand our original equation:

\[ A^{10}x = A^{10}(3v_1 + 2v_2)\]

Now, since \(A^n\) is linear and \(A^{n}v_i = \lambda_i^{n}v_i\), we have:

\[ A^{10}(3v_1 + 2v_2) \hspace{0.5em} = \hspace{0.5em} 3(A^{10}v_1) + 2(A^{10}v_2) \hspace{0.5em} = \hspace{0.5em} 3 (\lambda_1^{10})v_1 + 2 (\lambda_2^{10})v_2\]

All that remains is to evaluate the final expression:

\[ 3(-1)^{10}\begin{bmatrix}\phantom{-}1\\-1\end{bmatrix} + 2(2^{10})\begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}\phantom{-}3 + 2^{11}\\-3 + 2^{12}\end{bmatrix} = \begin{bmatrix}2051\\4093\end{bmatrix}\]
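`np.linalg.matrix_power` confirms the result by brute force:

```python
import numpy as np

A = np.array([[0, 1], [2, 1]])
x = np.array([5, 1])

# Direct computation of A^10 x, matching the eigenvalue shortcut.
result = np.linalg.matrix_power(A, 10) @ x
print(result)   # [2051 4093]
```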


Updated May 22, 2020.