Vector spaces

As an introduction, let us briefly say that a vector space is a set of vectors that is closed under two algebraic operations, called vector addition and scalar multiplication, and satisfies several axioms, which will be given shortly. To build intuition, take the vector space in this section to be $ {\mathbb{R}}^n$, in which the scalars are real numbers and a vector is represented as a sequence of $ n$ real numbers. Scalar multiplication multiplies each component of the vector by the scalar value, and vector addition forms a new vector by adding the corresponding components of two vectors. The precise definition of a vector space is given later in this section.
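
To make these componentwise operations concrete, here is a minimal Python sketch (the particular vectors and the use of NumPy are our own illustrative choices, not part of the definition):

```python
import numpy as np

# Two vectors in R^3, each a sequence of 3 real numbers
u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 4.0, -1.0])

# Vector addition: add the corresponding components
print(u + v)       # components (1+3, -2+4, 0.5-1) = (4.0, 2.0, -0.5)

# Scalar multiplication: multiply every component by the scalar
print(2.0 * u)     # components (2.0, -4.0, 1.0)
```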

To understand the definition of a vector space, we first need to recall the rules for computing with numbers.

The binary operations of addition $+: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ and multiplication $\cdot: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ satisfy the following properties:

(1) $$a + (b + c) = (a + b) + c, \quad \forall a, b, c \in \mathbb{R};$$

(2) there exists $0 \in \mathbb{R}$ such that

$$a + 0 = 0 + a = a, \quad \forall a \in \mathbb{R};$$

(3) for each $a \in \mathbb{R}$ there exists $-a \in \mathbb{R}$ such that

$$a +(-a) = (-a) + a =0;$$

(4) $$a + b = b+ a, \quad \forall a, b \in \mathbb{R};$$

(5) $$a \cdot ( b \cdot c) = (a \cdot b) \cdot c, \quad \forall a, b, c \in \mathbb{R};$$

(6) there exists $1 \in \mathbb{R}$ such that

$$1 \cdot a = a \cdot 1 = a, \quad \forall a \in \mathbb{R};$$

(7) for each $a \in \mathbb{R}, a \neq 0$, there exists $a^{-1} \in \mathbb{R}$ such that

$$a \cdot a^{-1} = a^{-1} \cdot a = 1;$$

(8) $$a \cdot b = b \cdot a, \quad \forall a, b \in \mathbb{R};$$

(9) $$a \cdot (b + c) = a \cdot b + a \cdot c, \quad \forall a, b, c \in \mathbb{R}.$$

Whenever we have a set $\mathbb{F}$ with two binary operations, addition $+: \mathbb{F} \times \mathbb{F} \to \mathbb{F}$ and multiplication $ \cdot :  \mathbb{F} \times \mathbb{F} \to \mathbb{F}$, that satisfy the above $9$ properties (with $\mathbb{R}$ replaced by $\mathbb{F}$), we say that $\mathbb{F}$ is a field. More precisely, the triple $( \mathbb{F}, +, \cdot)$ is a field.

We can now conclude that the set of rational numbers and the set of complex numbers are fields, because they satisfy all of the properties listed above. The set of integers, however, is not a field, because property (7) fails: for example, $2$ has no multiplicative inverse in $\mathbb{Z}$.
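
A tiny Python illustration of this difference, using the standard fractions module (the particular numbers are arbitrary):

```python
from fractions import Fraction

a = Fraction(2)          # the integer 2, viewed inside Q
inv = 1 / a              # its multiplicative inverse 1/2 exists in Q ...
assert a * inv == 1      # ... and a * a^{-1} = 1, as property (7) requires
print(inv.denominator)   # ... but 1/2 is not an integer (denominator is 2)
```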

In the precise definition of a vector space we allow the scalar field to be any field $\mathbb{F}$ (not just $\mathbb{R}$).

Definition. Let $\mathbb{F}$ be a field and let $V$ be a non-empty set endowed with two operations: addition $+: V \times V \to V$ and scalar multiplication $\cdot : \mathbb{F} \times V \to V$. The triple $(V, +, \cdot)$ is a vector space over the field $\mathbb{F}$ if the following properties hold:

(1) Additive Associativity

for any $v_1, v_2, v_3 \in V$, $$v_1 + ( v_2 + v_3) = (v_1 + v_2) + v_3;$$

(2) Zero Vector

there exists a zero vector $0 \in V$ such that $$v + 0 = 0 + v = v, \quad \forall v \in V;$$

(3) Additive Inverses

for any $ v \in V$, there exists $-v \in V$ satisfying $$-v + v = v + (-v) =0;$$

(4) Commutativity

for any $v_1, v_2 \in V$, $$v_1 + v_2 = v_2 + v_1;$$

(5) Distributivity across Scalar Addition

for any $v \in V$ and  $\alpha, \beta \in \mathbb{F}$, $$(\alpha + \beta) \cdot v = \alpha \cdot v + \beta \cdot v;$$

(6) Distributivity across Vector Addition

$$\alpha \cdot (v_1 + v_2) = \alpha \cdot v_1 + \alpha \cdot v_2,$$

for any $v_1, v_2 \in V$ and $\alpha \in \mathbb{F}$;

(7) Scalar Multiplication Associativity

$$(\alpha \cdot \beta) \cdot v = \alpha \cdot ( \beta \cdot v),$$ for any $v \in V$ and $\alpha, \beta \in \mathbb{F}$;

(8) One

for any $v \in V$, $$1 \cdot v = v.$$

The elements in $V$ are called vectors and the elements in $\mathbb{F}$ are called scalars.
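
The eight axioms can be spot-checked numerically for $\mathbb{R}^n$. The following Python sketch checks them on randomly chosen vectors and scalars (a sanity check, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v1, v2, v3 = (rng.standard_normal(n) for _ in range(3))
alpha, beta = rng.standard_normal(2)

# (1) additive associativity and (4) commutativity
assert np.allclose(v1 + (v2 + v3), (v1 + v2) + v3)
assert np.allclose(v1 + v2, v2 + v1)

# (2) zero vector and (3) additive inverses
zero = np.zeros(n)
assert np.allclose(v1 + zero, v1)
assert np.allclose(v1 + (-v1), zero)

# (5), (6) distributivity and (7) scalar multiplication associativity
assert np.allclose((alpha + beta) * v1, alpha * v1 + beta * v1)
assert np.allclose(alpha * (v1 + v2), alpha * v1 + alpha * v2)
assert np.allclose((alpha * beta) * v1, alpha * (beta * v1))

# (8) multiplication by the scalar 1
assert np.allclose(1.0 * v1, v1)
```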

From the definition of a vector space, one can deduce the following:

Proposition 1.  Let $V$ be a vector space over the field $\mathbb{F}$. Then:

(1) if $\alpha \cdot v = 0$, for $\alpha \in \mathbb{F}$ and $v \in V$, then either $\alpha = 0$ or $v =0$;

(2) $$(-\alpha) \cdot v = \alpha \cdot (-v) = -(\alpha \cdot v), \quad \forall \alpha \in \mathbb{F}, \forall v \in V;$$

(3) $$\alpha \cdot (v_1 - v_2) = \alpha \cdot v_1 - \alpha \cdot v_2, \quad \forall \alpha \in \mathbb{F}, \forall v_1, v_2 \in V;$$

(4) $$(\alpha - \beta) \cdot v = \alpha \cdot v - \beta \cdot v, \quad \forall \alpha, \beta \in \mathbb{F}, \forall v \in V.$$

The proposition above tells us that the rules to which we are accustomed when computing with numbers remain valid in every vector space.
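
For instance, identity (2) can be deduced directly from the axioms. First, $0 \cdot v = (0 + 0) \cdot v = 0 \cdot v + 0 \cdot v$, and adding $-(0 \cdot v)$ to both sides gives $0 \cdot v = 0$. Then

$$\alpha \cdot v + (-\alpha) \cdot v = (\alpha + (-\alpha)) \cdot v = 0 \cdot v = 0,$$

so $(-\alpha) \cdot v$ is the additive inverse of $\alpha \cdot v$, that is, $(-\alpha) \cdot v = -(\alpha \cdot v)$.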

The following are the most important examples of vector spaces:

(1) $V^2 (O)$ and $V^3 (O)$ are vector spaces;

(2) for any positive integer $n$, $\mathbb{R}^n$, that is, the set of real $n$-tuples, is a real vector space;

(3) for any positive integer $n$, $\mathbb{C}^n$ is a complex vector space (we multiply vectors from $\mathbb{C}^n$ by complex scalars);

(4) taking $n=1$ in the previous two examples, we obtain that $\mathbb{R}$ and $\mathbb{C}$ are themselves vector spaces;

(5) examples (2) and (3) have their natural generalization in vector spaces of matrices: $\mathbf{M}_{mn}(\mathbb{F})$, the set of $m \times n$ matrices with entries in $\mathbb{F}$, is a vector space over the field $\mathbb{F}$; we write $\mathbf{M}_{mn}(\mathbb{R})$ when talking about real matrices and $\mathbf{M}_{mn}(\mathbb{C})$ when talking about complex matrices;

(6) let $P$ be the set of all polynomials. Then $P$ is a real or complex vector space with the usual operations on polynomials (a short sketch of examples (5) and (6) follows this list);

(7) the set of all continuous functions on $\mathbb{R}$ is a vector space.
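
As a small illustration of examples (5) and (6), matrices and polynomials are added and scaled just like $n$-tuples. The Python sketch below represents a polynomial by its list of coefficients, which is one common (but by no means the only) choice:

```python
import numpy as np

# Example (5): M_{2,3}(R) -- 2 x 3 real matrices, added and scaled entrywise
A = np.array([[1.0, 0.0, 2.0],
              [3.0, -1.0, 4.0]])
Z = np.zeros((2, 3))          # the zero matrix plays the role of the zero vector
print(A + Z)                  # adding the zero matrix gives back A
print(-1.0 * A)               # the additive inverse of A

# Example (6): polynomials stored as coefficient lists [a_0, a_1, a_2, ...]
# (both lists padded to the same length for simplicity)
p = np.array([1.0, 2.0, 0.0, 3.0])    # p(x) = 1 + 2x + 3x^3
q = np.array([0.0, -2.0, 5.0, 0.0])   # q(x) = -2x + 5x^2
print(p + q)                  # coefficients of (p + q)(x) = 1 + 5x^2 + 3x^3
print(0.5 * p)                # coefficients of (0.5 p)(x) = 0.5 + x + 1.5x^3
```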

The concept of a subspace is another important one. Roughly speaking, a subspace of a vector space $V$ is a subset of $V$ that is closed under addition and scalar multiplication.

The proper definition follows.

Definition. Let $V$ be a vector space over the field $\mathbb{F}$ and let $W \subseteq V$ be a nonempty subset of $V$. The set $W$ is a vector subspace of $V$ if it is itself a vector space with the induced operations, that is, if $(W, +, \cdot)$ satisfies properties (1) – (8) from the definition of a vector space, where $+$ and $\cdot$ are the operations of $V$ restricted to $W$.

If $W$ is a vector subspace of $V$, we write $W \le V$.

To determine whether a nonempty subset of $V$ is a subspace of $V$, we could check that the subset satisfies all the properties from the definition of a vector space. An easier way is to apply the following proposition:

Proposition 2. Let $V$ be a vector space over the field $\mathbb{F}$ and let $W$ be a nonempty subset of $V$. Then $W$ is a subspace of $V$ if and only if

(1) $$w_1 + w_2 \in W,$$

for any $w_1, w_2 \in W$;

(2) $$\alpha \cdot w \in W,$$

for any $\alpha \in \mathbb{F}$ and $w \in W$.

The conditions from the previous proposition can be merged, that is, the properties (1) and (2) are equivalent to

$$\alpha \cdot w_1 + \beta \cdot w_2 \in W,$$

for any $w_1, w_2 \in W$ and $\alpha, \beta \in \mathbb{F}$.

The smallest possible subspace of $V$ is $\{0\}$, the subset containing only the zero vector, and the largest is the whole space $V$.

For example, by applying Proposition 2, we can check that the set of all symmetric matrices is a subspace of $\mathbf{M}_n$, the space of square $n \times n$ matrices.
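
Here is a short numerical spot check of this fact (random symmetric matrices and arbitrary scalars; it checks closure under linear combinations, i.e. the merged condition above, but is of course not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_symmetric(n):
    """Build a random symmetric n x n matrix as M + M^T."""
    M = rng.standard_normal((n, n))
    return M + M.T

A, B = random_symmetric(n), random_symmetric(n)
alpha, beta = 2.5, -0.7

# Closure: a linear combination of symmetric matrices is again symmetric,
# so the subset satisfies the condition of Proposition 2
C = alpha * A + beta * B
assert np.allclose(C, C.T)
```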

Proposition 3. Let $V$ be a vector space and $W_1$ and $W_2$ two subspaces of $V$. Then the intersection $W_1 \cap W_2$ is still a subspace of $V$.

Unlike the intersection, the union $W_1 \cup W_2$ might not be a subspace of $V$ in general. However, we can find the smallest subspace of $V$ containing the union $W_1 \cup W_2$: it is the sum of the subspaces $W_1$ and $W_2$, defined as

$$W_1 + W_2 := \{ w_1 + w_2 : w_1 \in W_1, w_2 \in W_2 \}.$$

Definition. Let $V$ be a vector space and $W_1$ and $W_2$ two subspaces of $V$. We say that the sum $W_1 + W_2$ is a direct sum if $W_1 \cap W_2 = \{0\}$, and in that case it is denoted by $W_1 \oplus W_2$.

Proposition 4. Let $W_1$ and $W_2$ be two subspaces of a vector space $V$. Then the sum $W_1 + W_2$ is direct if and only if every vector $w \in W_1 + W_2$ can be written in a unique way as $w= w_1 + w_2$, where $w_1 \in W_1$ and $w_2 \in W_2$.
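
To illustrate Proposition 4, take $V = \mathbb{R}^3$ with $W_1 = \{(x, y, 0) : x, y \in \mathbb{R}\}$ and $W_2 = \{(0, 0, z) : z \in \mathbb{R}\}$; here $W_1 \cap W_2 = \{0\}$, so the sum is direct and every vector splits uniquely. If instead we take $W_2' = \{(0, y, z) : y, z \in \mathbb{R}\}$, the intersection with $W_1$ contains the whole line $\{(0, y, 0)\}$ and the decomposition is no longer unique. A small Python sketch of both situations (the specific vector is arbitrary):

```python
import numpy as np

w = np.array([1.0, 2.0, 3.0])

# Direct sum: W1 = {(x, y, 0)}, W2 = {(0, 0, z)}, with W1 ∩ W2 = {0}.
# The decomposition of w is unique: keep the first two components in W1
# and the last component in W2.
w1 = np.array([w[0], w[1], 0.0])    # component in W1
w2 = np.array([0.0, 0.0, w[2]])     # component in W2
assert np.allclose(w1 + w2, w)

# Not direct: W1 = {(x, y, 0)} and W2' = {(0, y, z)} share the line {(0, y, 0)},
# so the same w admits more than one decomposition, for example:
assert np.allclose(np.array([1.0, 2.0, 0.0]) + np.array([0.0, 0.0, 3.0]), w)
assert np.allclose(np.array([1.0, 0.0, 0.0]) + np.array([0.0, 2.0, 3.0]), w)
```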