Column vectors are coordinate vectors
In this short section we make clear the idea of viewing column vectors as coordinate vectors with respect to a basis. This perspective becomes critical when we move on to discuss linear maps between vector spaces and their associated matrices, and of course when we discuss changes of bases.
To begin, let’s consider the vector space \(V = \mathbb{R}^2\). Now let us pick a vector from this vector space, say \(v = \begin{pmatrix} 2 \\ 1 \end{pmatrix}\). If we were asked to draw this vector we might produce something resembling the image below:

What have we done here? Well, we have taken the vector \(\begin{pmatrix} 2 \\ 1 \end{pmatrix}\) and interpreted it as “move 2 units to the right followed by 1 unit up”. Or in other words, “move 2 units in the direction of the basis vector \(e_1\) followed by 1 unit in the direction of the basis vector \(e_2\)”. Or to put it in mathematical terms, we have expressed \(v\) as \(2e_1 + e_2\) and plotted it on a graph accordingly.
In general then, if we have a vector \(\begin{pmatrix} \alpha \\ \beta \end{pmatrix}\) we should interpret this as \(\alpha e_1 + \beta e_2\). In other words, “travel \(\alpha\) units in the direction of \(e_1\), followed by \(\beta\) units in the direction of \(e_2\)”. Here \(\alpha, \beta\) are known as the coordinates (with respect to the basis \(\mathcal{E}\)).
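The interpretation above can be sketched numerically. Here is a minimal illustration using NumPy (the array values are taken from the example above):

```python
import numpy as np

# The standard basis vectors of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Reading the column vector (2, 1) as "2 units along e1, then 1 unit along e2"
v = 2 * e1 + 1 * e2
print(v)  # the point (2, 1)
```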
However, there is no reason why we need to do our travelling along the standard basis \(\mathcal{E} = \{e_1, e_2\}\). In fact, travelling along any set of basis vectors is just as good! Let us take a concrete example to explore this idea. Suppose we also allow ourselves to think of travel along the basis \(\mathcal{B}\) consisting of vectors \(b_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}\) and \(b_2 = \begin{pmatrix} -1 \\ 1 \end{pmatrix}\). If we wish to travel to the point \(w = \begin{pmatrix} 5 \\ 3 \end{pmatrix}\) (that is, 5 units in the direction \(e_1\), followed by 3 units in the direction \(e_2\)) we now have two ways to do this: we can travel in the old way, along the standard basis vectors, or we can travel along our new basis vectors \(b_1\) and \(b_2\). On a graph this looks something like this:


So we can write \(w = 5e_1 + 3e_2\) and also \(w = 2b_1 + b_2\). We wish to record both sets of coordinates in a nice compact way. We know that we can write \(5e_1 + 3e_2\) as \(\begin{pmatrix} 5 \\ 3 \end{pmatrix}\), but how do we write \(2b_1 + b_2\)? Surely it cannot be \(\begin{pmatrix} 2 \\ 1 \end{pmatrix}\) as this would result in us writing things like \(w = \begin{pmatrix} 5 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}\), which is nothing short of a mess.
In fact, this idea is almost correct; there is just one thing missing. If, alongside the column vector, we include a way to tell which basis we are working with, then this works nicely. We do this by labelling the column vector with a subscript. Therefore we have \(w = \begin{pmatrix} 5 \\ 3 \end{pmatrix}_{\mathcal{E}}\), and also \(w = \begin{pmatrix} 2 \\ 1 \end{pmatrix}_{\mathcal{B}}\). Now we can write things like \(w = \begin{pmatrix} 5 \\ 3 \end{pmatrix}_{\mathcal{E}} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}_{\mathcal{B}}\) with no issues.
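Finding the \(\mathcal{B}\)-coordinates of \(w\) amounts to solving the linear system whose coefficient matrix has the basis vectors as columns. A quick sketch, assuming NumPy is available:

```python
import numpy as np

# Basis B from the text: columns are b1 = (3, 1) and b2 = (-1, 1)
B = np.array([[3.0, -1.0],
              [1.0,  1.0]])
w = np.array([5.0, 3.0])  # w in standard coordinates

# The B-coordinates of w solve B @ coords = w
coords = np.linalg.solve(B, w)
print(coords)  # the coordinates (2, 1), i.e. w = 2*b1 + 1*b2
```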
Note: It is usual to not include the subscript \(\mathcal{E}\) when discussing coordinate vectors with respect to the standard basis. So, for example, \(\begin{pmatrix} \alpha \\ \beta \end{pmatrix}\) should be interpreted as \(\alpha e_1 + \beta e_2 \). However, when coordinate vectors with respect to other bases are being used the subscript (or some other clear indicator) must be included.
There is no reason why we must restrict ourselves to vector spaces of the form \(\mathbb{F}^n\), so let us take this idea and apply it to other vector spaces. Given any vector space \(V\) and a basis \(\mathcal{B}\), we may write coordinate vectors (that is, column vectors with a subscript) to describe vectors in this space. Let us state this as a definition.
Definition
Let \(V\) be a vector space and let \(\mathcal{B} = \{b_1,\dots,b_n\}\) be a basis. Let \(v = \alpha_1 b_1 + \dots + \alpha_n b_n \in V\). Then \(\begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{pmatrix}_{\mathcal{B}} \) is the coordinate vector of \(v\) with respect to the basis \(\mathcal{B}\).
Let us see an example of writing vectors as coordinate vectors in terms of a basis for a vector space not of the form \(\mathbb{F}^n\).
Example 1
Let \(V = \mathbb{R}[x]_{\leq 3}\) denote the vector space of polynomials of degree less than or equal to \(3\). Let \(\mathcal{B} = \{1, x+1, x^2 + 1, x^3 + x + 1 \}\) be a basis of \(V\) (labelled by \(b_1,b_2,b_3,b_4\) respectively). We will write \(a)\,\, v = 3x^2 + 2\) and \(b)\,\, w = x^3\) as coordinate vectors with respect to this basis.
\(a)\) We have \(v = 3(x^2 + 1) - 1(1) = -1b_1 + 3b_3\) and so \(v = \begin{pmatrix} -1 \\ 0 \\ 3 \\ 0 \end{pmatrix}_{\mathcal{B}}\).
\(b)\) We have \(w = (x^3+x+1) - (x+1) = b_4 - b_2\) and so \(w = \begin{pmatrix} 0 \\ -1 \\ 0 \\ 1 \end{pmatrix}_{\mathcal{B}}\).
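Both parts of Example 1 can be checked by the same linear-system idea: identify each polynomial with its vector of coefficients \((a_0, a_1, a_2, a_3)\) and solve against the matrix whose columns are the basis polynomials. A sketch in NumPy:

```python
import numpy as np

# Identify a0 + a1*x + a2*x^2 + a3*x^3 with (a0, a1, a2, a3).
# The columns below are the basis polynomials of Example 1:
# b1 = 1, b2 = x + 1, b3 = x^2 + 1, b4 = x^3 + x + 1.
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

v = np.array([2.0, 0.0, 3.0, 0.0])  # v = 3x^2 + 2
w = np.array([0.0, 0.0, 0.0, 1.0])  # w = x^3

coords_v = np.linalg.solve(B, v)  # coordinates (-1, 0, 3, 0)
coords_w = np.linalg.solve(B, w)  # coordinates (0, -1, 0, 1)
print(coords_v, coords_w)
```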
As mentioned at the beginning of the section, this notion is key when it comes to discussing linear maps and change of bases. We will revisit the ideas discussed here when we cover these topics. For now, let’s see some exercises to further practice writing vectors as coordinate vectors. As always, full solutions are given.
Exercises
Exercise 1
Let \(V = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : a,b,c,d \in \mathbb{R} \right\}\) with basis
Express the following vectors as coordinate vectors with respect to the basis \(\mathcal{B}\):
- \(v_1 = \begin{pmatrix} 5 & 5 \\ 5 & 5 \end{pmatrix}\)
- \(v_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\)
- \(v_3 = \begin{pmatrix} 2 & 2 \\ 2 & -1 \end{pmatrix}\)
Solution
- Observe that \(b_3 + b_4 = \begin{pmatrix} 0 & 2 \\ 2 & 0 \end{pmatrix}\) and also \(2b_1 - b_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\). Therefore \(v_1 = 10b_1 - 5b_2 + 2.5b_3 + 2.5b_4 = \begin{pmatrix} 10 \\ -5 \\ 2.5 \\ 2.5 \end{pmatrix}_{\mathcal{B}}\).
- We have \(v_2 = -b_1 – b_2 = \begin{pmatrix} -1 \\ -1 \\ 0 \\ 0 \end{pmatrix}_{\mathcal{B}}.\)
- We have \(v_3 = b_1 + b_2 + b_3 + b_4 = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}_{\mathcal{B}}.\)
The next exercise works over a finite field. If this is not familiar to you, feel free to skip it, or read the first few lines of the solution where the idea behind the field is explained, then give it a go!
Exercise 2
Let \(V = \mathbb{F}_5[x]_{\leq 2}\) be the vector space of polynomials of degree less than or equal to \(2\) with coefficients in the finite field \(\mathbb{F}_5\). Let \(\mathcal{B} = \{1, 3x, x^2 + 2\}\) be a basis. Write the following vectors as coordinate vectors with respect to the basis \(\mathcal{B} \).
- \(v_1 = 3 + x\)
- \(v_2 = 3x^2\)
- \(v_3 = 2 + 2x + 2x^2\)
Solution
We first explain the idea behind the finite field \(\mathbb{F}_5\) (for more details see my post about finite fields, specifically part 1). Roughly, this can be thought of as the integers \(\mathbb{Z}\) with the relation \(5 = 0\). Both the addition and multiplication are given by the addition and multiplication on the integers. Now, the reason why the integers are not a field is the absence of multiplicative inverses; however, the relation \(5 = 0\) fixes this. For example, the multiplicative inverse of \(2\) is \(3\) since we have \(3 \cdot 2 = 6 = 5 + 1 = 1\) (using \(5 = 0\)).
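The inverse computation described above can be checked in a couple of lines of Python; `pow(a, -1, p)` (available since Python 3.8) returns the multiplicative inverse of \(a\) modulo \(p\):

```python
p = 5  # work in F_5: the integers with the relation 5 = 0

inv2 = pow(2, -1, p)   # multiplicative inverse of 2 in F_5
print(inv2)            # 3
print((inv2 * 2) % p)  # 1, confirming 3 * 2 = 6 = 1 in F_5
```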
- We have \(2 \cdot (3x) = 6x = x\). Therefore \(v_1 = 3(1) + 2(3x) = \begin{pmatrix} 3 \\ 2 \\ 0 \end{pmatrix}_{\mathcal{B}}\).
- We have \(3 \cdot (x^2 + 2) = 3x^2 + 6 = 3x^2 + 1\). So \(v_2 = 3(x^2 + 2) - 1(1) = \begin{pmatrix} -1 \\ 0 \\ 3 \end{pmatrix}_{\mathcal{B}} = \begin{pmatrix} 4 \\ 0 \\ 3 \end{pmatrix}_{\mathcal{B}}\).
(The last equals sign uses \(5 = 0\).)
- We have \(2(x^2 + 2) + 4(3x) + 3(1) = 2x^2 + 12x + 7 = 2x^2 + 2x + 2,\) therefore we have \(v_3 = \begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix}_{\mathcal{B}} \).
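The three computations in this solution all follow one pattern: match the coefficients of \(1\), \(x\) and \(x^2\) against the basis polynomials, then reduce modulo \(5\). A small sketch of that pattern (the helper name `coords_B` is my own, not standard notation):

```python
p = 5  # arithmetic in F_5

def coords_B(c0, c1, c2):
    """Coordinates of c0 + c1*x + c2*x^2 with respect to B = {1, 3x, x^2 + 2} over F_5."""
    a3 = c2 % p                    # x^2 only appears in x^2 + 2
    a2 = (c1 * pow(3, -1, p)) % p  # x only appears in 3x, so divide by 3
    a1 = (c0 - 2 * a3) % p         # constant term: a1 + 2*a3 = c0
    return (a1, a2, a3)

print(coords_B(3, 1, 0))  # (3, 2, 0)  <- v1 = 3 + x
print(coords_B(0, 0, 3))  # (4, 0, 3)  <- v2 = 3x^2
print(coords_B(2, 2, 2))  # (3, 4, 2)  <- v3 = 2 + 2x + 2x^2
```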