Basis and dimension of a vector space

Before we start explaining these two terms mentioned in the heading, let’s recall what a vector space is.

A vector space is defined as a set of vectors that is closed under two algebraic operations, called vector addition and scalar multiplication, and satisfies several axioms. A more detailed explanation of vector spaces can be found in the article on vector spaces.

Now that we have recalled what a vector space is, we are ready to explain some terms connected to vector spaces. First, we give the definition of a basis of a vector space.

Definition. A finite set $B = \{v_1, v_2, \ldots, v_n \}$, $n \in \mathbb{N}$, in a vector space $V$ is called a basis of $V$ if $B$ is linearly independent and spans $V$. If either of these criteria is not satisfied, then the collection is not a basis for $V$. If a collection of vectors spans $V$, then it contains enough vectors so that every vector in $V$ can be written as a linear combination of those in the collection. If the collection is linearly independent, then it does not contain so many vectors that some become dependent on the others. Intuitively, then, a basis has just the right size: it is big enough to span the space, but not so big as to be dependent.

Using this definition, we see that the set of unit vectors $B = \{e_1, e_2, e_3 \}$ is a basis for $\mathbb{R}^3$. Notice that the same set is also a basis for $\mathbb{C}^3$.

In general, the standard basis for $\mathbb{R}^n$ is

$$B= \{(1, 0, 0, \ldots, 0), (0, 1, 0, \ldots,0), \ldots,  (0, 0, 0, \ldots, 1) \}.$$

The matrices $\begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix}$ form a basis for $\mathbf{M}_{2,2}(\mathbb{R})$.

The set $\{1, t, t^2, \ldots, t^n \}$ is a basis of the space $P_n$ of all polynomials of degree at most $n$.

In each of these examples it is easy to check that the given set is a linearly independent spanning set for the corresponding space. It is also true that every vector space has a basis; moreover, a vector space can have many different bases.

For example, both $\{i, j\}$ and $\{i + j, i - j\}$ are bases for $\mathbb{R}^2$.
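The second claim can also be checked numerically. The following minimal sketch (in Python with NumPy, an environment we assume purely for illustration) verifies that the matrix with columns $i + j$ and $i - j$ has nonzero determinant, which for two vectors in $\mathbb{R}^2$ is equivalent to being a basis.

```python
import numpy as np

# Two vectors in R^2 form a basis exactly when the matrix having them
# as columns is invertible, i.e. has nonzero determinant.
i, j = np.array([1.0, 0.0]), np.array([0.0, 1.0])
candidates = np.column_stack([i + j, i - j])   # columns are i + j and i - j

print(np.linalg.det(candidates))   # -2.0, nonzero, so {i + j, i - j} is a basis
```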

Definition. We say that a vector space $V$ is finite dimensional  if $V$ has a basis consisting of finitely many elements.

Otherwise, we say that $V$ is infinite dimensional.

Note that this definition does not yet use the notion of dimension, which will be defined below.

Notice that the examples above show that $\mathbb{R}^n, \mathbb{C}^n, \mathbf{M}_{mn}, P_n$ are finite dimensional; hereafter we will talk only about finite dimensional spaces.

The following is a very useful proposition.

Proposition 1. Let $V \neq \{0\}$ be a vector space and let $S = \{v_1, v_2, \ldots, v_m \}, m \in \mathbb{N}$, be a spanning set for $V$.  Then there exists a basis of $V$ which is a subset of $S$.

Proposition 2.  Every finite dimensional vector space $V \neq \{0\}$ has a basis.

Change of basis

Let $B= \{v_1, v_2, \ldots, v_n \}$ be a basis for a finite dimensional vector space $V$. Then every vector $x \in V$ can be written uniquely as a linear combination of the vectors $v_1, \ldots, v_n$, that is,

$$x = a_1v_1 + a_2v_2 + \ldots + a_nv_n,$$

for some scalars $a_1, a_2, \ldots, a_n$.

The scalars $a_i, i=1, \ldots, n,$ can be recorded in a column vector, called the coordinate column vector of $x$ with respect to basis $B$:

$$[x]_B = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \\ \end{bmatrix}.$$

Now suppose that $B' = \{v_1', v_2', \ldots, v_n'\}$ is another basis for $V$. Then the same vector $x$ can also be written uniquely as a linear combination of these vectors:

$$x = a_1'v_1' + a_2'v_2' + \ldots + a_n'v_n'.$$

The coordinate vector of $x$ with respect to basis $B'$ is

$$[x]_{B'} = \begin{bmatrix} a_1' \\ a_2' \\ \vdots \\ a_n' \\ \end{bmatrix}.$$

The change of basis matrix from $B$ to $B'$ is

$$\mathbf{P} = \begin{bmatrix} f_{11}& f_{12} & \cdots & f_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \\ \end{bmatrix},$$

where

$$v_1' = f_{11}v_1 + f_{21}v_2 + \ldots + f_{n1}v_n$$

$$\vdots $$

$$v_n' = f_{1n}v_1 + f_{2n}v_2 + \ldots + f_{nn}v_n.$$

In other words, the columns of the matrix $\mathbf{P}$ contain the components of the vectors of the basis $B'$, written by using the vectors of the basis $B$. The components of the vector $x$ then transform as

$$[x]_B = \mathbf{P} \cdot [x]_{B'}.$$

This means that

$$[x]_{B'} = \mathbf{P}^{-1} \cdot [x]_B.$$
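As a concrete illustration of these two formulas, here is a small sketch (again in Python with NumPy, assumed only for illustration) for $V = \mathbb{R}^2$ with the bases $B = \{i, j\}$ and $B' = \{i + j, i - j\}$ mentioned earlier.

```python
import numpy as np

# Columns of P are the B-coordinates of the B' vectors;
# here B is the standard basis {i, j} of R^2.
P = np.column_stack([[1, 1], [1, -1]])   # B' = {i + j, i - j} expressed in B

x_B = np.array([3.0, 1.0])               # coordinates of some vector x in B
x_Bprime = np.linalg.solve(P, x_B)       # [x]_{B'} = P^{-1} [x]_B
print(x_Bprime)                          # [2. 1.], i.e. x = 2(i + j) + 1(i - j)

print(P @ x_Bprime)                      # back again: [x]_B = P [x]_{B'} gives [3. 1.]
```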

We can now define the dimension of a vector space.

Definition. Let $V \neq \{0 \}$ be a finite dimensional vector space. We say that the dimension of $V$ is the number of elements of any basis of $V$. In addition, the dimension of the zero vector space is $0$.

We will denote the dimension of $V$ by $\dim V$.

To find the dimension of a vector space $V$, it suffices to find a basis of $V$ and count its elements.

We know that every finite set that spans a vector space $V$ can be reduced to a basis, by discarding vectors if necessary. The contention of the following proposition is, in a sense, dual.

Proposition 3. Let $S = \{v_1, v_2, \ldots, v_k \}, k \in \mathbb{N}$, be a linearly independent set in a finite dimensional vector space $V$. Then $S$ can be extended to a basis, by adding more vectors if necessary.

If we know the dimension of a vector space $V$, it is easy to check that some set is a basis of $V$. We have the following corollary.

Corollary 1. Let $V$ be a vector space with $\dim V = n$. Then

(1) Every linearly independent set in $V$ has at most $n$ elements. Every linearly independent set in $V$ that has exactly $n$ elements is a basis of $V$.

(2) Every spanning set of $V$ has at least $n$ elements. Every spanning set of $V$ that has exactly $n$ elements is a basis of $V$.
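When $V = \mathbb{R}^n$, part (1) of the corollary gives a practical test: $n$ vectors form a basis exactly when they are linearly independent, which can be checked with a single rank computation. The sketch below is our own illustration in Python with NumPy, not part of the corollary itself.

```python
import numpy as np

def is_basis(vectors):
    """Check whether the given vectors form a basis of R^n, using Corollary 1 (1)."""
    A = np.column_stack(vectors)
    n = A.shape[0]
    # Exactly n vectors that are linearly independent (full rank) form a basis.
    return A.shape[1] == n and np.linalg.matrix_rank(A) == n

print(is_basis([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))   # True: the standard basis
print(is_basis([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))   # False: linearly dependent
```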

Example 1. Let $S = \{(2, 1, -3), (3, 2, -5), (1, -1, 1) \}$ be a set of vectors in $\mathbb{R}^3$ and let $x = (6, 2, -7)$ be a vector given in the standard basis $E = \{e_1, e_2, e_3 \} = \{(1, 0, 0), (0, 1, 0), (0, 0, 1) \}$. Show that $S$ is a basis for $\mathbb{R}^3$ and express the vector $x$ in the basis $S$.

Solution. We need to show that $S$ spans $\mathbb{R}^3$ and is linearly independent; by Corollary 1 it is in fact enough to check linear independence, since $S$ has exactly $3 = \dim \mathbb{R}^3$ elements.

Let

$$\alpha \cdot (2, 1, -3 ) + \beta \cdot (3, 2, -5) + \gamma \cdot (1, -1, 1) = (0, 0, 0), $$

for some scalars $\alpha, \beta, \gamma$.

This gives the three equations:

$$\begin{aligned}
2 \alpha + 3 \beta + \gamma &= 0, \\
\alpha + 2 \beta - \gamma & = 0, \\
- 3 \alpha - 5 \beta + \gamma & = 0. \\
\end{aligned}$$

This is a homogeneous system of equations whose determinant is nonzero (it equals $1$), so only the trivial solution exists, that is, $\alpha = \beta = \gamma = 0$. Therefore, the set $S$ is linearly independent.

Now, by Corollary 1, the set $S$ is a basis for $\mathbb{R}^3$.
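The determinant of this system can be computed by hand or checked numerically; the following Python/NumPy sketch (our own verification, not part of the original solution) confirms that it is nonzero.

```python
import numpy as np

# Coefficient matrix of the homogeneous system; its columns are the vectors of S.
A = np.array([[ 2,  3,  1],
              [ 1,  2, -1],
              [-3, -5,  1]])

print(np.linalg.det(A))   # approximately 1.0, nonzero, so S is linearly independent
```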

The coordinate vector of $x$ in the basis $E$ is given by

$$[x]_E = \begin{bmatrix} 6 \\ 2 \\ -7 \\ \end{bmatrix}, \quad \text{that is,} \quad x = 6 \cdot e_1 + 2 \cdot e_2 - 7 \cdot e_3.$$

To determine the coordinate vector of $x$ in the basis $S$, we need to find the scalars $a_1, a_2, a_3$ such that

$$x = a_1 \cdot f_1 + a_2 \cdot f_2 + a_3 \cdot f_3, \quad \text{that is,} \quad [x]_S = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix},$$

where $f_1 = (2, 1, -3)$, $f_2 = (3, 2, -5)$, $f_3 = (1, -1, 1)$.

Since the vector $x$ is given in the standard basis $E$, it is enough to express the vectors of $E$ in terms of the vectors of $S$, that is, to determine the scalars $a_{ij}$ such that

$$e_1 = a_{11} f_1 + a_{12}f_2 + a_{13}f_3$$

$$e_2 = a_{21} f_1 + a_{22}f_2 + a_{23}f_3$$

$$e_3 = a_{31} f_1 + a_{32}f_2 + a_{33}f_3.$$

By the definition above, the change of basis matrix from $S$ to $E$ is the matrix whose $j$-th column contains the coefficients $a_{j1}, a_{j2}, a_{j3}$ of $e_j$ with respect to the basis $S$. We will denote this matrix by $\mathbf{P}_{[S, E]}$.

The change of basis matrix from $E$ to $S$, denoted by $\mathbf{P}_{[E, S]}$, is the inverse of $\mathbf{P}_{[S, E]}$, and it can be written down directly: its columns are the vectors $f_1, f_2, f_3$ expressed in the standard basis, that is,

$$\mathbf{P}_{[E, S]} = \begin{bmatrix} 2 & 3 & 1 \\ 1 & 2 & -1 \\ -3 & -5 & 1 \\ \end{bmatrix}.$$

We determine the inverse of $\mathbf{P}_{[E, S]}$ using elementary row operations on the augmented matrix $\left[ \mathbf{P}_{[E, S]} \mid \mathbf{I} \right]$ and obtain

$$

\left[\begin{array}{ccc|ccc}

1 & 0 & 0 & -3 & -8 & -5 \\

0 & 1 & 0 & 2 & 5 & 3 \\

0 & 0 & 1 & 1 & 1 & 1

\end{array} \right],$$

that is,

$$\mathbf{P}_{[E, S]}^{-1} = \mathbf{P}_{[S, E]} = \begin{bmatrix} -3 & -8 & -5 \\ 2 & 5 & 3 \\ 1 & 1 & 1 \\ \end{bmatrix}.$$

Finally, the coordinate vector of $x$ in the basis $S$ equals

$$\begin{aligned}

\left[x \right]_S &= \mathbf{P}_{[S, E]} \cdot [x]_E \\

&\quad \\

&=  \begin{bmatrix} -3 & -8 & -5 \\ 2 & 5 & 3 \\ 1 & 1 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} 6 \\ 2 \\ -7 \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ \end{bmatrix}. \\

\end{aligned}$$
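The whole computation can be reproduced numerically; the short sketch below (in Python with NumPy, our own addition to the worked solution) builds $\mathbf{P}_{[E, S]}$ from the vectors of $S$ and solves for $[x]_S$.

```python
import numpy as np

# Columns of P_ES are the vectors of S written in the standard basis E.
P_ES = np.column_stack([(2, 1, -3), (3, 2, -5), (1, -1, 1)])

x_E = np.array([6, 2, -7])
x_S = np.linalg.solve(P_ES, x_E)   # [x]_S = P_ES^{-1} [x]_E = P_{[S,E]} [x]_E
print(x_S)                         # [1. 1. 1.]

# Sanity check: 1*f1 + 1*f2 + 1*f3 recovers x in the standard basis.
print(P_ES @ x_S)                  # [ 6.  2. -7.]
```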

The notion of a subspace of a vector space $V$ is defined independently of the dimension of $V$. If $V$ is a finite dimensional vector space with $\dim V = n$ and $W$ is a subspace of $V$, then $\dim W \le n$, as the following proposition states.

Proposition 4. Let $V$ be a finite dimensional vector space such that $\dim V = n$ and $W$ be a subspace of $V$. Then $\dim W \le n$. If $W$ is a subspace of $V$ such that $\dim W = n$, then $W = V$.

If $V$ is a finite dimensional vector space and $W_1$ and $W_2$ are its subspaces, then $W_1$ and $W_2$ are finite dimensional, as are $W_1 \cap W_2$ and $W_1 + W_2$. It is clear that $\dim (W_1 \cap W_2) \le \dim W_1$ and $\dim W_2 \le \dim (W_1 + W_2)$. The following theorem tells us more about these dimensions.

Theorem 1. Let $V$ be a finite dimensional vector space and $W_1$ and $W_2$ be two subspaces of $V$. Then

$$\dim (W_1 + W_2) + \dim (W_1 \cap W_2) = \dim W_1 + \dim W_2.$$

The following is the formula for the dimension of the direct sum.

Corollary 2. Let $W_1$ and $W_2$ be two subspaces of a finite dimensional vector space whose sum is direct. Then

$$\dim (W_1 + W_2) = \dim W_1 + \dim W_2.$$

Definition. Let $V$ be a vector space and $W_1$ be a subspace of $V$. A subspace $W_2$ of $V$ is called a direct complement of $W_1$ if $W_1 \cap W_2 = \{0\}$ and $W_1 + W_2 = V$, that is, if $V$ is the direct sum of $W_1$ and $W_2$.

In general, for any two subspaces of $V$ we have $W_1 + W_2 = W_2 + W_1$; therefore, the above definition is symmetric: if $W_2$ is a direct complement of $W_1$ in $V$, then $W_1$ is also a direct complement of $W_2$ in $V$.

The question of the existence of a direct complement of an arbitrary subspace of a given vector space is settled by the following theorem.

Theorem 2. Let $V$ be a finite dimensional vector space and $W$ be a subspace of $V$. Then there exists a direct complement of $W$ in $V$.

Example 2. Let $K = \{(v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_2 - 2v_3 + v_4 = 0 \}$ and $L = \{(v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_1 = v_4, v_2 = 2v_3 \}$ be two subspaces of $\mathbb{R}^4$. Find a basis and the dimension of $K$, $L$ and $K \cap L$.

Solution. We have

$$\begin{aligned}
K & = \{(v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_2 - 2v_3 + v_4 = 0 \} \\
&= \{(v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_2 = 2v_3 - v_4 \} \\
&= \{(v_1, 2v_3 - v_4, v_3, v_4) : v_1, v_3, v_4 \in \mathbb{R} \}. \\
\end{aligned}$$

Each vector in $K$ can be written in the form

$$(v_1, 2v_3 - v_4, v_3, v_4) = v_1 (1, 0, 0, 0) + v_3 (0, 2, 1, 0) + v_4 (0, -1, 0, 1).$$

We need to check that the set $\{(1, 0, 0, 0), (0, 2, 1, 0), (0, -1, 0, 1)\}$ is linearly independent, that is, we need to determine all scalars $\alpha, \beta, \gamma$ such that

$$\alpha (1, 0, 0, 0) + \beta (0, 2, 1, 0) + \gamma ( 0, -1, 0, 1) = (0, 0, 0, 0).$$

The equation above gives the system of equations

$$ \alpha = 0$$

$$2 \beta – \gamma = 0$$

$$\beta = 0$$

$$\gamma = 0.$$

Solving yields $\alpha = \beta = \gamma = 0$, so the set $\{(1, 0, 0, 0), (0, 2, 1, 0), (0, -1, 0, 1)\}$ is linearly independent and is thus a basis for $K$. It follows that $\dim K = 3$.
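The same basis can also be found mechanically, since $K$ is the null space of the single linear condition $v_2 - 2v_3 + v_4 = 0$. The sketch below uses Python with SymPy (an environment we assume only for illustration); the basis it returns agrees with the one above up to the choice of free variables.

```python
from sympy import Matrix

# K is the null space of the 1x4 coefficient matrix of v2 - 2*v3 + v4 = 0.
A = Matrix([[0, 1, -2, 1]])

basis_K = A.nullspace()    # list of column vectors spanning K
for v in basis_K:
    print(list(v))         # [1, 0, 0, 0], [0, 2, 1, 0], [0, -1, 0, 1]
print(len(basis_K))        # 3, so dim K = 3
```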

Now we need to find a basis for $L$. We have

$$\begin{aligned}

L &= \{(v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_1 = v_4, v_2 = 2v_3 \} \\

&= \{ (v_4, 2v_3, v_3, v_4) : v_3, v_4 \in \mathbb{R} \} \\

\end{aligned}$$

Each vector in $L$ can be written in the form

$$(v_4, 2 v_3, v_3, v_4) = v_3 (0, 2, 1, 0) + v_4 ( 1, 0, 0, 1).$$

The set $\{(0, 2, 1, 0), ( 1, 0, 0, 1) \}$ is linearly independent and is thus a basis for $L$. It follows that $\dim L = 2$.

What remains is to find a basis for $K \cap L$. We have:

$$\begin{aligned}
K \cap L &= \{ (v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_2 = 2v_3 - v_4 , \ v_1 = v_4, \ v_2 = 2v_3 \} \\
&= \{ (v_1, v_2, v_3, v_4) \in \mathbb{R}^4 : v_1 = v_4 = 0, \ v_2 = 2v_3 \} \\
&= \{ ( 0, 2v_3, v_3, 0) : v_3 \in \mathbb{R} \} \\
& = \{v_3 ( 0, 2, 1, 0): v_3 \in \mathbb{R}\} \\
&= \operatorname{span} \{(0, 2, 1, 0)\}. \\
\end{aligned}$$

Therefore, a basis for $K \cap L$ is $\{ (0, 2, 1, 0) \}$ and $\dim ( K \cap L) = 1$.
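The bases found above also let us check Theorem 1 for these two subspaces. The sketch below (Python with NumPy, our own addition) computes the dimensions as matrix ranks: since $K + L$ is spanned by the union of the two bases, its dimension is the rank of all five vectors taken together.

```python
import numpy as np

K_basis = [(1, 0, 0, 0), (0, 2, 1, 0), (0, -1, 0, 1)]
L_basis = [(0, 2, 1, 0), (1, 0, 0, 1)]

dim_K = np.linalg.matrix_rank(np.array(K_basis))
dim_L = np.linalg.matrix_rank(np.array(L_basis))
dim_sum = np.linalg.matrix_rank(np.array(K_basis + L_basis))   # dim(K + L)

print(dim_K, dim_L, dim_sum)       # 3 2 4
print(dim_K + dim_L - dim_sum)     # 1 = dim(K ∩ L), in agreement with Theorem 1
```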