
In this lesson we examine in detail the procedure of associating matrices with vectors and linear operators. From here on, all vector spaces are finite dimensional and all bases are ordered.
Let $\beta = \{v_1, \ldots, v_n \}$ be a basis for a vector space $V$ over the field $\mathbb{F}$. Each vector $x \in V$ can be written uniquely as a linear combination of the basis vectors:
$$x = \sum_{i = 1}^n \alpha_iv_i, \quad \alpha_i \in \mathbb{F}.$$
Now we can form the coordinate vector of $x$ relative to the basis $\beta$:
$$[x]_\beta = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \in \mathbb{F}^n.$$
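For example, in $\mathbb{R}^2$ with the basis $\beta = \{(1,1), (1,-1)\}$, the vector $x = (3,1)$ decomposes uniquely as $x = 2 \cdot (1,1) + 1 \cdot (1,-1)$, so
$$[x]_\beta = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$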
Before we proceed, we have the following proposition.
Proposition 1. Let $V$ be a vector space over the field $\mathbb{F}$ and $\beta = \{v_1, \ldots, v_n \}$ be any basis for $V$. Then the mapping $\varphi: V \to \mathbb{F}^n$, defined by $\varphi(x) = [x]_\beta$, is an isomorphism.
Now let $A: V \to W$ be a linear operator and let $\beta = \{v_1, \ldots, v_n \}$ and $\gamma = \{ w_1, \ldots, w_m \}$ be bases for $V$ and $W$, respectively. Each of the vectors $Av_1, \ldots, Av_n$ can be written in the form
$$Av_j = \sum_{i=1}^m \alpha_{ij}w_i, \quad 1 \le j \le n.$$
Thus, the linear operator leads in a natural way to a matrix $\mathbf{A} = [\alpha_{ij}]$ defined with respect to the given bases $\beta$ and $\gamma$. This matrix is called the matrix representation of the linear operator $A$ in the ordered bases $\beta$ and $\gamma$, denoted by $[A]_\beta^\gamma$; we write:
$$\mathbf{A} = [A]_\beta^\gamma = \begin{bmatrix} \alpha_{11} & \cdots & \alpha_{1n} \\ \vdots & \ddots & \vdots \\ \alpha_{m1} & \cdots & \alpha_{mn}\\ \end{bmatrix} \in M_{mn}(\mathbb{F}).$$
Notice that the $j$-th column of the matrix $[A]_{\beta}^{\gamma}$ is exactly $[Av_j]_{\gamma}$, for all $1 \le j \le n$, that is, the coordinate vector of $Av_j$ in the basis $\gamma$.
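For example, let $D: P_2(\mathbb{R}) \to P_1(\mathbb{R})$ be the differentiation operator on polynomials, with the bases $\beta = \{1, x, x^2\}$ and $\gamma = \{1, x\}$. Since $D1 = 0$, $Dx = 1$ and $Dx^2 = 2x$, writing each image in the basis $\gamma$ and placing its coordinate vector in the corresponding column gives
$$[D]_\beta^\gamma = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} \in M_{23}(\mathbb{R}).$$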
Proposition 2. Let $V$ and $W$ be vector spaces over the field $\mathbb{F}$ and let $\beta = \{v_1, \ldots, v_n\}$ and $\gamma = \{w_1, \ldots, w_m\}$ be bases for $V$ and $W$, respectively. Then the mapping $\psi: L(V, W) \to M_{mn}(\mathbb{F})$, defined by $\psi(A) = [A]_\beta^\gamma$, is an isomorphism.
Let us return to the space $L(V)$, where $\dim V = n$. In this case each linear operator $A \in L(V)$ is represented by an $n \times n$ matrix; the space $M_{n}(\mathbb{F})$ of all such square matrices is closed under addition, multiplication and scalar multiplication, and $L(V)$ is isomorphic to $M_n(\mathbb{F})$.
The mapping $\psi$ from Proposition 2 has the following three useful properties.
Proposition 3. Let $\beta =\{v_1, \ldots, v_n \}$ and $\gamma = \{w_1, \ldots, w_m \}$ be bases for the vector spaces $V$ and $W$, respectively, let $A \in L(V, W)$ be a linear operator, and let $x \in V$. Then
$$[Ax]_\gamma = [A]_\beta^\gamma [x]_\beta.$$
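For the differentiation operator above, take $p(x) = a + bx + cx^2$, so that $Dp = b + 2cx$. Indeed,
$$[D]_\beta^\gamma [p]_\beta = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} b \\ 2c \end{bmatrix} = [Dp]_\gamma.$$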
Proposition 4. Let $V, W$ and $Z$ be finite dimensional vector spaces with ordered bases $\beta, \gamma$ and $\delta$, respectively. Let $A: V \to W$ and $B: W \to Z$ be linear operators. Then
$$[BA]_\beta^\delta = [B]_\gamma^\delta [A]_\beta^\gamma.$$
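For example, composing the differentiation operator $D: P_2(\mathbb{R}) \to P_1(\mathbb{R})$ from above with $D: P_1(\mathbb{R}) \to P_0(\mathbb{R})$, where $\delta = \{1\}$, we have $[D]_\gamma^\delta = \begin{bmatrix} 0 & 1 \end{bmatrix}$, and
$$[D^2]_\beta^\delta = [D]_\gamma^\delta [D]_\beta^\gamma = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 2 \end{bmatrix},$$
which agrees with $D^2(a + bx + cx^2) = 2c$.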
The following proposition shows how the rank of a linear operator is related to the rank of a matrix.
Proposition 5. If the linear operator $A \in L(V, W)$ is represented by $\mathbf{A} = [\alpha_{ij}] \in M_{mn}(\mathbb{F})$, then $r(A) = r(\mathbf{A})$.
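In the differentiation example, $r(\mathbf{A}) = 2$ for $\mathbf{A} = [D]_\beta^\gamma$, and indeed the range of $D$ is all of $P_1(\mathbb{R})$, which has dimension $2$.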
Suppose now that we have a system of linear equations in $n$ unknowns, written in matrix form as $\mathbf{A} x = \mathbf{b}$, where $\mathbf{A}$ is the matrix representation of $A \in L(V)$ and $\dim V = n$.
If we want this system to have a unique solution $x$, the matrix $\mathbf{A}$ must be of rank $n$, which means that $\operatorname{null}(A) = 0$; it follows that $\ker A = \{0\}$ and thus the linear operator $A$ is nonsingular.
This leads to the following characterization.
Proposition 6. A linear operator $A \in L(V)$ is nonsingular if and only if $\det \mathbf{A} \neq 0$.
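For example, if $A \in L(\mathbb{R}^2)$ is represented by $\mathbf{A} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, then $\det \mathbf{A} = -2 \neq 0$, so $A$ is nonsingular and $\mathbf{A}x = \mathbf{b}$ has a unique solution for every $\mathbf{b}$; on the other hand, $\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ has determinant $0$ and represents a singular operator.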
If $\beta'$ and $\gamma'$ are some other bases for $V$ and $W$, respectively, we want to know how the coordinate vectors $[x]_\beta$ and $[x]_{\beta'}$ are related, and likewise the matrix representations $[A]_\beta^\gamma$ and $[A]_{\beta'}^{\gamma'}$. We have the following theorem.
Theorem 1. Let $A \in L(V, W)$, let $\beta = \{v_1, \ldots, v_n \}$ and $\beta' = \{v_1', \ldots, v_n' \}$ be two bases for $V$, and let $\gamma = \{w_1, \ldots, w_m \}$ and $\gamma' = \{w_1', \ldots, w_m'\}$ be two bases for $W$. Let the operators $T \in L(W)$ and $S \in L(V)$ be defined on the bases $\gamma$ and $\beta$ by $Tw_i = w_i'$, $i = 1, \ldots, m$, and $Sv_j = v_j'$, $j = 1, \ldots, n$. Then
$$[A]_{\beta'}^{\gamma'} = \left( [T]_\gamma^\gamma \right)^{-1} [A]_\beta^\gamma [S]_\beta^\beta.$$
The matrices $[S]_\beta^\beta$ and $[T]_\gamma^\gamma$ are the matrix representations of the nonsingular operators $S$ and $T$.
Definition. The matrix $$[S]_\beta^\beta = [I]_{\beta'}^\beta$$ is called the transition matrix from the basis $\beta$ to the basis $\beta'$. (Indeed, the $j$-th column of $[S]_\beta^\beta$ is $[Sv_j]_\beta = [v_j']_\beta$, which is exactly the $j$-th column of $[I]_{\beta'}^\beta$.)
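For example, if $\beta = \{e_1, e_2\}$ is the standard basis of $\mathbb{R}^2$ and $\beta' = \{(1,1), (1,-1)\}$, the columns of the transition matrix are the coordinate vectors $[v_j']_\beta$:
$$[S]_\beta^\beta = [I]_{\beta'}^\beta = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}.$$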
Corollary 1. Let $A \in L(V)$, let $\beta = \{v_1, \ldots, v_n \}$, $\beta' = \{v_1', \ldots, v_n' \}$ be two bases for $V$, and let $[S]_\beta^\beta = [I]_{\beta'}^\beta$ be the transition matrix from the basis $\beta$ to the basis $\beta'$. Then
$$[A]_{\beta'}^{\beta'} = \left( [S]_\beta^\beta \right)^{-1} [A]_\beta^\beta [S]_\beta^\beta.$$
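For example, let $A \in L(\mathbb{R}^2)$ be given in the standard basis $\beta$ by $[A]_\beta^\beta = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$, and let $\beta' = \{(1,1), (1,-1)\}$ with the transition matrix $[S]_\beta^\beta = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ as above. Then
$$[A]_{\beta'}^{\beta'} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$
so in the basis $\beta'$ the operator is diagonal.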
Corollary 2. Let $\beta = \{v_1, \ldots, v_n \}$, $\beta' = \{v_1', \ldots, v_n' \}$ be two bases for $V$, and let $[S]_\beta^\beta = [I]_{\beta'}^\beta$ be the transition matrix from the basis $\beta$ to the basis $\beta'$. Then
$$\left( [S]_\beta^\beta \right)^{-1} = [I]_\beta^{\beta'},$$
that is, the inverse of the transition matrix from $\beta$ to $\beta'$ is the transition matrix from $\beta'$ to $\beta$.
Corollary 3. Let $\beta = \{v_1, \ldots, v_n \}$, $\beta' = \{v_1', \ldots, v_n' \}$ be two bases for $V$, and let $[S]_\beta^\beta = [I]_{\beta'}^\beta$ be the transition matrix from the basis $\beta$ to the basis $\beta'$. Then, for each vector $x \in V$,
$$[x]_{\beta'} = \left( [S]_\beta^\beta \right)^{-1} [x]_\beta.$$
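Returning to the first example, for $x = (3,1)$ with $\beta$ the standard basis of $\mathbb{R}^2$ and $\beta' = \{(1,1), (1,-1)\}$,
$$[x]_{\beta'} = \frac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix},$$
in agreement with the decomposition $x = 2 \cdot (1,1) + 1 \cdot (1,-1)$.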