Definition. Let $V$ be a vector space over the field $\mathbb{F}$. Any vector $v$ of the form
$$v = \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_k v_k,$$
where $v_1, \ldots, v_k \in V$, $\alpha_1, \ldots, \alpha_k \in \mathbb{F}$ and $k \in \mathbb{N}$, is called a linear combination of the vectors $v_1, \ldots, v_k$ with coefficients $\alpha_1, \ldots, \alpha_k$.
A linear combination is defined only for finitely many vectors.
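For instance, a linear combination can be evaluated numerically. The following is a minimal sketch using NumPy; the vectors and coefficients are arbitrary illustrations, not taken from the text.

```python
import numpy as np

# Three vectors in R^3 (chosen arbitrarily for illustration).
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])

# Coefficients alpha_1, alpha_2, alpha_3.
coeffs = [2.0, -1.0, 3.0]

# v = alpha_1*v1 + alpha_2*v2 + alpha_3*v3
v = sum(a * u for a, u in zip(coeffs, [v1, v2, v3]))
print(v)  # [5. 2. 3.]
```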
Example 1. Let $v_1 = ( 1, -2, -5)$ and $v_2 = (2, 5, 6)$. Express the vector $x = ( 7, 4, -3)$ as a linear combination of $v_1$ and $v_2$.
Solution. By the definition of a linear combination, we write
$$x = \alpha \cdot v_1 + \beta \cdot v_2,$$
that is,
$$(7, 4, -3) = \alpha \cdot ( 1, -2, -5) + \beta \cdot ( 2, 5, 6)$$
We need to find scalars $\alpha$ and $\beta$ such that the above equality holds.
The equation above gives the following system of equations:
$$ 7= \alpha + 2 \beta$$
$$ 4 = -2 \alpha + 5 \beta$$
$$-3 = -5 \alpha + 6 \beta,$$
which we can solve using row reduction:
$$
\left[\begin{array}{cc|c}
1 & 2 & 7 \\
-2 & 5 & 4 \\
-5 & 6 & -3 \\
\end{array} \right] \sim
\left[\begin{array}{cc|c}
1 & 2 & 7 \\
0 & 1 & 2 \\
0 & 1 & 2 \\
\end{array} \right] \sim
\left[\begin{array}{cc|c}
1 & 0 & 3 \\
0 & 1 & 2 \\
0 & 0 & 0 \\
\end{array} \right]
$$
We read off $\alpha = 3$ and $\beta = 2$; that is,
$$ (7, 4, -3) = 3 \cdot ( 1, -2, -5) + 2 \cdot ( 2, 5, 6).$$
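The row reduction above can be double-checked with SymPy, which performs exact Gauss-Jordan elimination (a sketch, assuming SymPy is available):

```python
import sympy as sp

# Augmented matrix [v1 v2 | x] from Example 1.
M = sp.Matrix([[ 1, 2,  7],
               [-2, 5,  4],
               [-5, 6, -3]])

# rref() returns the reduced row echelon form and the pivot columns.
R, pivots = M.rref()
print(R)       # Matrix([[1, 0, 3], [0, 1, 2], [0, 0, 0]])
print(pivots)  # (0, 1): no pivot in the augmented column, so the system is consistent
```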
Definition. Let $S = \{v_1, v_2, \ldots, v_k \}$, $k \in \mathbb{N}$, be a set of vectors in a vector space $V$ over the field $\mathbb{F}$. The set $S$ is linearly independent if the implication
$$\sum_{i=1}^k \alpha_i v_i= 0 \Longrightarrow \alpha_1 = \alpha_2 = \ldots = \alpha_k = 0$$
is valid, where $\alpha_1, \ldots, \alpha_k \in \mathbb{F}$.
Otherwise, we say that $S$ is linearly dependent; in other words, if the above equation has at least one nontrivial solution, then $S$ is linearly dependent.
In what follows we will often omit the attribute "linearly" and simply speak of dependent and independent sets.
Thus, a set $S$ is independent precisely when the zero vector can be represented as a linear combination of the elements of $S$ only in the trivial way.
Every set that contains the zero vector is dependent; in particular, the set $\{0\}$ is dependent.
For every $v \in V, v \neq 0$, the set $\{v\}$ is independent.
Dependence (and likewise independence) does not depend on the order of the vectors in the set $S$; this is a direct consequence of the commutativity of addition in a vector space.
Every subset of an independent set is independent.
The question of whether a set is dependent or independent is thus the question of whether the homogeneous system of equations $\mathbf{A} \cdot \alpha = 0$, where $\mathbf{A}$ is the associated matrix whose columns are the given vectors, has a nontrivial solution. Let $\mathbf{A}$ be an $m \times n$ matrix. From before we know that the homogeneous system has only the trivial solution if and only if the rank of the matrix $\mathbf{A}$ is equal to $n$; in all other cases it has infinitely many solutions. We have the following corollary, which connects linear dependence with the rank of the matrix of the system.
Corollary 1. A set of $k$ column vectors $\{v_1, v_2, \ldots, v_k \}$ is independent if and only if the associated matrix $\mathbf{A} = [v_1, v_2, \ldots, v_k]$ has rank $k$.
We also have the following useful result. Consider a homogeneous system of $n$ equations in $n$ unknowns. A set of $n$ column vectors $\{v_1, v_2, \ldots, v_n \}$ is independent if and only if the associated matrix $\mathbf{A}$ is regular, that is, $\det \mathbf{A} \neq 0$.
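Both criteria are easy to apply in practice. The sketch below uses SymPy; the helper name `is_independent` is illustrative, not a library function.

```python
import sympy as sp

def is_independent(vectors):
    """The columns v_1, ..., v_k are independent iff rank(A) = k (Corollary 1)."""
    A = sp.Matrix.hstack(*[sp.Matrix(v) for v in vectors])
    return A.rank() == len(vectors)

print(is_independent([[1, 2], [3, 1]]))  # True: rank 2 for 2 vectors

# In the square case the determinant criterion applies as well.
A = sp.Matrix([[1, 3],
               [2, 1]])
print(A.det() != 0)  # True, so the columns of A are independent
```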
Strictly speaking, dependence and independence are properties of sets of vectors rather than of individual vectors; nevertheless, the following proposition shows in what sense the vectors of a dependent set can be said to depend on one another.
Proposition 1. A set $S = \{v_1, v_2, \ldots, v_k \}$, $k \ge 2$, in a vector space $V$ is linearly dependent if and only if there exists $j \in \{1, 2, \ldots, k \}$ such that $v_j$ can be written as a linear combination of the remaining elements of $S$.
Furthermore, let $S = \{v_1, v_2, \ldots, v_k \}$, $k \ge 2$, be a linearly dependent ordered set with $v_1 \neq 0$. Then there exists $l \in \{2, 3, \ldots, k \}$ such that $v_l$ can be written as a linear combination of the vectors $v_1, v_2, \ldots, v_{l-1}$.
In particular, if a set is dependent, then at least one of its vectors can be written as a linear combination of the others. In this sense the second claim tells us: if we know that a set is dependent, that the order of its elements is fixed, and that the first of them is not the zero vector, then there exists an element of $S$ that can be written as a linear combination of its predecessors, as the sketch below illustrates.
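A sketch of such a scan in SymPy (the helper name `first_dependent_index` is hypothetical): it adds the column vectors one by one and reports the first vector whose inclusion does not increase the rank, i.e. the first vector that is a linear combination of its predecessors.

```python
import sympy as sp

def first_dependent_index(vectors):
    """Return the first index l (1-based) such that v_l is a linear
    combination of v_1, ..., v_{l-1}, or None if the set is independent."""
    A = sp.Matrix.hstack(*[sp.Matrix(v) for v in vectors])
    prev_rank = 0
    for l in range(1, A.cols + 1):
        r = A[:, :l].rank()
        if r == prev_rank:  # adding v_l did not enlarge the span
            return l
        prev_rank = r
    return None

# Here v3 = v1 + v2, so the scan stops at the third vector.
print(first_dependent_index([[1, 0], [0, 1], [1, 1]]))  # 3
```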
Example 2. Let $S= \{(6, 2, 3, 4), (0, 5, -3, 1), (0, 0, 7, -2) \}$. Determine if $S$ is linearly dependent or independent in $\mathbb{R}^4$.
Solution. Following the definition, we consider the equation
$$\alpha \cdot (6, 2, 3, 4) + \beta \cdot (0, 5, -3, 1) + \gamma \cdot (0, 0, 7, -2) = (0, 0, 0, 0).$$
This equation gives the following system of linear equations:
$$6\alpha = 0 $$
$$2 \alpha + 5 \beta = 0$$
$$3 \alpha - 3 \beta + 7 \gamma = 0 $$
$$4 \alpha + \beta - 2 \gamma = 0 $$
From the first equation we have $\alpha = 0$. Substituting this into the second equation, we obtain $\beta = 0$. From the third and fourth equations it now follows that $\gamma = 0$. Therefore, the above system of linear equations has only the trivial solution, so $S$ is linearly independent.
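The conclusion of Example 2 can be confirmed by the rank criterion (a sketch using SymPy):

```python
import sympy as sp

# The associated matrix of Example 2: the three vectors as columns.
A = sp.Matrix([[6,  0,  0],
               [2,  5,  0],
               [3, -3,  7],
               [4,  1, -2]])

print(A.rank())       # 3 = number of vectors, so S is independent
print(A.nullspace())  # []: the homogeneous system has only the trivial solution
```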
Example 3. Determine if the set $S$ is linearly dependent or independent in $\mathbb{R}^5$, if
$$S = \{(1, 2, -1, 0, 1), (3, 2, -1, 2, 2), (4, 4, -2, 2, 3), (1, 2, -1, -2, 0) \}.$$
Solution.
We need to specify the scalars $\alpha, \beta, \gamma, \delta$ so that
$$\alpha \cdot (1, 2, -1, 0, 1) + \beta \cdot (3, 2, -1, 2, 2) + \gamma \cdot (4, 4, -2, 2, 3) + \delta \cdot(1, 2, -1, -2, 0) = (0, 0, 0, 0, 0).$$
The equation above gives the following homogeneous system of equations:
$$\alpha + 3 \beta + 4 \gamma + \delta = 0$$
$$2 \alpha + 2 \beta + 4 \gamma + 2 \delta = 0$$
$$- \alpha - \beta - 2 \gamma - \delta = 0$$
$$2 \beta + 2 \gamma - 2 \delta = 0$$
$$\alpha + 2 \beta + 3 \gamma = 0$$
Let
$$\mathbf{A} = \begin{bmatrix} 1 & 3 & 4 & 1 \\ 2 & 2 & 4 & 2 \\ -1 & -1 & -2 & -1 \\ 0 & 2 & 2 & -2 \\ 1 & 2 & 3 & 0 \\ \end{bmatrix}$$
be the associated matrix of the system.
Then, row reducing the matrix $\mathbf{A}$ to the row echelon form, we obtain:
$$\begin{bmatrix} 1 & 3 & 4 & 1 \\ 2 & 2 & 4 & 2 \\ -1 & -1 & -2 & -1 \\ 0 & 2 & 2 & -2 \\ 1 & 2 & 3 & 0 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 4 & 1 \\ 0 & -4 & -4 & 0 \\ 0 & 2 & 2 & 0 \\ 0 & 2 & 2 & -2 \\ 0 & -1 & -1 & -1 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 4 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix}.$$
From the last matrix we read $r (\mathbf{A}) = 3 < 4$, so by Corollary 1 the given set is linearly dependent.
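The same conclusion, together with an explicit dependence, can be obtained with SymPy (a sketch):

```python
import sympy as sp

# The associated matrix of Example 3: the four vectors as columns.
A = sp.Matrix([[ 1,  3,  4,  1],
               [ 2,  2,  4,  2],
               [-1, -1, -2, -1],
               [ 0,  2,  2, -2],
               [ 1,  2,  3,  0]])

print(A.rank())       # 3 < 4, so the columns are dependent
print(A.nullspace())  # [Matrix([[-1], [-1], [1], [0]])]
```

The nullspace vector $(-1, -1, 1, 0)$ exhibits the dependence explicitly: $-v_1 - v_2 + v_3 = 0$, that is, $(4, 4, -2, 2, 3) = (1, 2, -1, 0, 1) + (3, 2, -1, 2, 2)$.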
Definition. Let $V$ be a vector space over the field $\mathbb{F}$ and let $S = \{v_1, v_2, \ldots, v_k \} \subseteq V$, $S \neq \emptyset$, be a finite set of vectors in $V$. The span of the vectors $v_1, v_2, \ldots, v_k$ is the set of all linear combinations of these $k$ vectors:
$$[S] = span \{v_1, v_2, \ldots, v_k \} = \{\alpha_1 v_1 + \alpha_2 v_2 + \ldots + \alpha_k v_k : \alpha_1, \alpha_2, \ldots, \alpha_k \in \mathbb{F} \}.$$
We also say that $[S]$ is spanned by the vectors $v_1, v_2, \ldots, v_k$.
A subset $S$ of a vector space $V$ is called a spanning set for $V$ if $[S] = V$.
It is obvious that every superset (within $V$) of a spanning set is still a spanning set for the same vector space.
It is therefore desirable to find as small a spanning set as possible, and thus describe all vectors of the space using as few elements as possible.
For example, a real vector space $\mathbb{R}^3$ is spanned by the vectors
$$v_1 = (1, 0, 0), \quad v_2 = (0, 1, 0), \quad v_3 = (0, 0, 1),$$
that is, a set $S = \{(1, 0, 0), (0, 1, 0), (0, 0, 1) \}$ is a spanning set for $\mathbb{R}^3$. If we remove any element of the set $S$, then the resulting set is no longer a spanning set for $\mathbb{R}^3$.
On the other hand, consider a set
$$S = \{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ \end{bmatrix}, \begin{bmatrix} 1 & 2 \\ 1 & 0 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix} \}.$$
$S$ is a spanning set for $\mathbf{M_2}(\mathbb{R})$. If we remove the matrix $\begin{bmatrix} 0 & 0 \\ 0 & 1 \\ \end{bmatrix}$, then the resulting set is no longer a spanning set for $\mathbf{M_2}(\mathbb{R})$. However, if we remove the matrix $\begin{bmatrix} 1 & 2 \\ 1 & 0 \\ \end{bmatrix}$, then the reduced set is still a spanning set for $\mathbf{M_2}(\mathbb{R})$.
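The claims about this set can be checked by flattening each matrix into a vector of $\mathbb{R}^4$: the matrices span $\mathbf{M_2}(\mathbb{R})$ exactly when the flattened vectors span $\mathbb{R}^4$, i.e. when their matrix has rank $4$. A sketch in SymPy (the helper name `spans_M2` is illustrative):

```python
import sympy as sp

S = [sp.Matrix([[1, 0], [0, 0]]),
     sp.Matrix([[0, 1], [0, 0]]),
     sp.Matrix([[0, 0], [1, 0]]),
     sp.Matrix([[1, 2], [1, 0]]),
     sp.Matrix([[0, 0], [0, 1]])]

def spans_M2(mats):
    # Each 2x2 matrix becomes a column of a 4 x k matrix.
    A = sp.Matrix.hstack(*[m.reshape(4, 1) for m in mats])
    return A.rank() == 4

print(spans_M2(S))               # True
print(spans_M2(S[:4]))           # False: removing the last matrix breaks the span
print(spans_M2(S[:3] + S[4:]))   # True: the fourth matrix is redundant
```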
We have the following proposition.
Proposition 2. Let $S$ be a spanning set for a vector space $V$, and suppose $S$ contains a vector $y$ which can be represented as a linear combination of (some of) the other elements of $S$. Then $S \setminus \{y \}$ is still a spanning set for $V$.
If $S = \{v_1, \ldots, v_k \}$ is a finite set of vectors in a vector space $V$, then the following are equivalent:
(1) $S$ is a linearly independent set.
(2) Every vector $v \in span (S)$ has a unique expression as a linear combination of vectors in $S$.
(3) No vector in $S$ is a linear combination of the other elements in $S$.
Proposition 3. Let $V$ be a vector space and $S = \{v_1, \ldots, v_k \} \subseteq V$ a finite subset of vectors in $V$. Then the span of $\{v_1, \ldots, v_k \}$ is a subspace of $V$.
Further, $span (S)$ is the smallest subspace of $V$ that contains $S$.
Example 4. Determine a spanning set for
$$S = \{(v_1, v_2, v_3) \in \mathbb{C}^3 : v_1 + v_2 = v_3, v_2 - v_3 = v_1 - v_2 \} \le \mathbb{C}_{\mathbb{R}}^3.$$
Solution. We have
$$\begin{aligned}
v = (v_1, v_2, v_3) \in S &\Leftrightarrow \begin{cases} v_1 + v_2 = v_3 \\ v_2 - v_3 = v_1 - v_2 \\ \end{cases} \Leftrightarrow \begin{cases} v_2 = 2 v_1 \\ v_3 = 3v_1\\ \end{cases}\\
& \quad \\
&\Leftrightarrow v = (v_1, 2v_1, 3v_1), \quad v_1 \in \mathbb{C}\\
& \quad \\
& \Leftrightarrow v = (x + yi, 2(x + yi), 3(x + yi)), \quad x, y \in \mathbb{R} \\
& \quad \\
&\Leftrightarrow v = x (1, 2, 3) + y ( i, 2i, 3i), \quad x, y \in \mathbb{R} \\
\end{aligned}
$$
Therefore, a spanning set for $S$ is $\{(1, 2, 3), (i, 2i, 3i) \}$.
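This answer can be sanity-checked symbolically: every real combination $x (1, 2, 3) + y (i, 2i, 3i)$ must satisfy both defining conditions of $S$ identically in $x$ and $y$ (a sketch using SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# A generic real combination of the two proposed spanning vectors.
v = x * sp.Matrix([1, 2, 3]) + y * sp.Matrix([sp.I, 2*sp.I, 3*sp.I])
v1, v2, v3 = v

print(sp.simplify(v1 + v2 - v3))           # 0, so v_1 + v_2 = v_3 holds
print(sp.simplify((v2 - v3) - (v1 - v2)))  # 0, so v_2 - v_3 = v_1 - v_2 holds
```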
Example 5. Determine whether the vector $x = ( 14, 3, 15)$ lies in the span of vectors $a = (1, 2, 5)$ and $b = (3, 1, 4)$.
Solution.
The vector $x$ lies in the span of the vectors $a$ and $b$ if and only if it can be expressed as a linear combination of $a$ and $b$. So we write
$$\alpha \cdot a + \beta \cdot b = x,$$
that is,
$$ \alpha \cdot ( 1, 2, 5) + \beta \cdot (3, 1, 4) = (14, 3, 15).$$
The equation above gives the following system of equations
$$\alpha + 3 \beta = 14$$
$$2 \alpha + \beta = 3$$
$$5 \alpha + 4 \beta = 15$$
Solving the first two equations yields $\alpha = -1$ and $\beta = 5$, and these values also satisfy the third equation; that is,
$$(14, 3, 15) = -1 \cdot ( 1, 2, 5) + 5 \cdot (3, 1, 4).$$
Since the vector $x$ is a linear combination of the vectors $a$ and $b$, $x$ is in the span of $a$ and $b$.
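As a final check, the system of Example 5 can be solved directly (a sketch; `linsolve` returns the empty set precisely when $x$ lies outside the span):

```python
import sympy as sp

a = sp.Matrix([1, 2, 5])
b = sp.Matrix([3, 1, 4])
x = sp.Matrix([14, 3, 15])

# Solve alpha*a + beta*b = x; a nonempty solution set means x lies
# in the span of a and b.
alpha, beta = sp.symbols('alpha beta')
sol = sp.linsolve((sp.Matrix.hstack(a, b), x), [alpha, beta])
print(sol)  # {(-1, 5)}
```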