Linear Independence Calculator

Created by Maciej Kowalski, PhD candidate
Reviewed by Bogna Szyk and Jack Bowater
Last updated: Nov 07, 2022


Welcome to the linear independence calculator, where we'll learn how to check if you're dealing with linearly independent vectors or not.

In essence, the world around us is a vector space and sometimes it is useful to limit ourselves to a smaller section of it. For example, a sphere is a 3-dimensional shape, but a circle exists in just two dimensions, so why bother with calculations in three?

Linear dependence allows us to do just that - work in a smaller space, the so-called span of the vectors in question. But don't you worry if you've found all these fancy words fuzzy so far. In a second, we'll slowly go through all of this together.

So grab your morning/evening snack for the road, and let's get going!

What is a vector?

When you ask someone, "What is a vector?" quite often, you'll get the answer "an arrow." After all, we usually denote them with an arrow over a small letter:

\vec{v}

Well, let's just say that this answer will not score you 100 on a test. Formally, a vector is an element of vector space. End of definition. Easy enough. We can finish studying. Everything is clear now.

But what is a vector space, then? Again, the mathematical definition leaves a lot to be desired: it's a set of elements with some operations (addition and multiplication by scalar), which must have several specific properties. So, why don't we just leave the formalism and look at some real examples?

The Cartesian space is an example of a vector space. This means that the numerical line, the plane, and the 3-dimensional space we live in are all vector spaces. Their elements are, respectively, numbers, pairs of numbers, and triples of numbers, which, in each case, describe the location of a point (an element of the space). For instance, the number -1 and the point A = (2, 3) are elements of (different!) vector spaces. Often, when drawing the forces that act on an object, like velocity or gravitational pull, we use straight arrows to describe their direction and magnitude, and that's where the "arrow definition" comes from.

What is quite important is that we have well-defined operations on the vectors mentioned above. There are some slightly more sophisticated ones, like the dot product and the cross product (if you need to learn the difference between them, visit the cross product calculator and the dot product calculator). Fortunately, we'll limit ourselves to the two basic ones, which follow rules similar to the corresponding matrix operations (vectors are, in fact, one-row matrices). First of all, we can add them:

-1 + 4 = 3

Or:

(2, 3) + (-3, 11) = (2 + (-3), 3 + 11) = (-1, 14)

And we can multiply them by a scalar (a real or complex number) to change their magnitude:

3 \times (-1) = -3

Or:

7 \times (2, 3) = (7 \times 2, 7 \times 3) = (14, 21)
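If you'd like to double-check such arithmetic on a computer, here's a minimal sketch in Python with NumPy (our choice for these examples; the article itself doesn't prescribe any tools) reproducing the two operations above:

    import numpy as np

    # The two basic vector operations from the examples above.
    v = np.array([2, 3])
    w = np.array([-3, 11])

    print(v + w)  # component-wise addition: [-1 14]
    print(7 * v)  # multiplication by the scalar 7: [14 21]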

Truth be told, a vector space doesn't have to contain numbers. It can be a space of sequences, functions, or permutations. Even the scalars don't have to be numerical! But let's leave that abstract mumbo-jumbo to scientists. We're quite fine with just the numbers, aren't we?

Linear combination of vectors

Let's say that we're given a bunch of vectors (from the same space): \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n. As we've seen in the above section, we can add them and multiply them by scalars. Any expression obtained this way is called a linear combination of the vectors. In other words, any vector \vec{w} that can be written as

\vec{w} = \alpha_1 \times \vec{v}_1 + \alpha_2 \times \vec{v}_2 + \alpha_3 \times \vec{v}_3 + ... + \alpha_n \times \vec{v}_n

where \alpha_1, \alpha_2, \alpha_3, ..., \alpha_n are arbitrary real numbers, is said to be a linear combination of the vectors \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n. Note that \vec{w} is indeed a vector, since it's a sum of scalar multiples of vectors.
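As a quick illustration, here's a minimal NumPy sketch of the same formula; the three vectors and the coefficients \alpha_1, \alpha_2, \alpha_3 are made up for the example:

    import numpy as np

    # Three example vectors from the plane and three arbitrary scalars.
    vectors = [np.array([1, 0]), np.array([0, 1]), np.array([2, -1])]
    alphas = [3.0, -1.5, 0.5]

    # w = alpha_1*v_1 + alpha_2*v_2 + alpha_3*v_3
    w = sum(a * v for a, v in zip(alphas, vectors))
    print(w)  # [ 4. -2.]

Any \vec{w} built this way stays inside the same space as the original vectors, which is exactly what makes linear combinations so handy.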

Okay, so why do all that? There are several things in life, like helium balloons and hammocks, that are fun to have but aren't all that useful on a daily basis. Is it the case here?

Let's consider the Cartesian plane, i.e., the 2-dimensional space of points A = (x, y) with two coordinates, where x and y are arbitrary real numbers. We already know that such points are vectors, so why don't we take two very special ones: \vec{e}_1 = (1, 0) and \vec{e}_2 = (0, 1). Now, observe that:

A = (x, y) = (x, 0) + (0, y) = x \times (1, 0) + y \times (0, 1) = x \times \vec{e}_1 + y \times \vec{e}_2

In other words, any point (vector) of our space is a linear combination of the vectors \vec{e}_1 and \vec{e}_2. These vectors then form a basis (and an orthonormal basis at that) of the space. And believe us, in applications and calculations, it's often easier to work with a basis you know rather than some random vectors you don't.
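To see the decomposition at work, here's a small sketch; the point (5, -7) is just a made-up example:

    import numpy as np

    e1, e2 = np.array([1, 0]), np.array([0, 1])

    A = np.array([5, -7])  # an arbitrary point of the plane
    x, y = A               # its coordinates are exactly the coefficients

    # A = x*e1 + y*e2, so A is a linear combination of e1 and e2.
    assert np.array_equal(A, x * e1 + y * e2)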

🙋 The vectors \vec{e}_1 and \vec{e}_2 are unit vectors. They have some special features we analyzed in detail at our unit vector calculator!

But what if we added another vector to the pile and wanted to describe linear combinations of the vectors \vec{e}_1, \vec{e}_2, and, say, \vec{v}? We've seen that \vec{e}_1 and \vec{e}_2 proved enough to find all points. So adding \vec{v} shouldn't change anything, should it? Actually, it seems quite redundant. And that's exactly where linear dependence comes into play.

Linearly independent vectors

We say that \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n are linearly independent vectors if the equation

\alpha_1 \times \vec{v}_1 + \alpha_2 \times \vec{v}_2 + \alpha_3 \times \vec{v}_3 + ... + \alpha_n \times \vec{v}_n = \vec{0}

(here \vec{0} is the vector with zeros in all coordinates) holds if and only if \alpha_1 = \alpha_2 = \alpha_3 = ... = \alpha_n = 0. Otherwise, we say that the vectors are linearly dependent.

The above definition can be understood as follows: the only linear combination of the vectors that gives the zero vector is the trivial one. For instance, recall the vectors from the above section: \vec{e}_1 = (1, 0), \vec{e}_2 = (0, 1), and then also take \vec{v} = (2, -1). Then

(-2) \times \vec{e}_1 + 1 \times \vec{e}_2 + 1 \times \vec{v} = (-2) \times (1, 0) + 1 \times (0, 1) + 1 \times (2, -1) = (-2, 0) + (0, 1) + (2, -1) = (0, 0)

so we've found a non-trivial linear combination of the vectors that gives the zero vector. Therefore, they are linearly dependent. Also, we can easily see that \vec{e}_1 and \vec{e}_2 on their own, without the problematic \vec{v}, are linearly independent vectors.
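In practice, a handy way to run this check on a computer is the rank criterion: stack the vectors as the rows of a matrix, and they are linearly independent exactly when the rank of that matrix equals the number of vectors. The article hasn't introduced this criterion explicitly, but it follows from the definition above; here's a sketch in NumPy:

    import numpy as np

    e1 = np.array([1, 0])
    e2 = np.array([0, 1])
    v = np.array([2, -1])

    # Rank 2 with 3 vectors: e1, e2, and v are linearly dependent.
    print(np.linalg.matrix_rank(np.vstack([e1, e2, v])))  # 2

    # Rank 2 with 2 vectors: e1 and e2 alone are linearly independent.
    print(np.linalg.matrix_rank(np.vstack([e1, e2])))     # 2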

The span of vectors in linear algebra

The set of all elements that can be written as a linear combination of the vectors \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n is called the span of the vectors and is denoted \mathrm{span}(\vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n). Coming back to the vectors from the above section, i.e., \vec{e}_1 = (1, 0), \vec{e}_2 = (0, 1), and \vec{v} = (2, -1), we see that

\mathrm{span}(\vec{e}_1, \vec{e}_2, \vec{v}) = \mathrm{span}(\vec{e}_1, \vec{e}_2) = \mathbb{R}^2
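The same rank computation also measures the span: a standard linear algebra fact (not spelled out in the article) is that the rank of the matrix of vectors equals the dimension of their span. Here's a short sketch confirming that both spans above are 2-dimensional, i.e., the whole plane:

    import numpy as np

    e1, e2, v = np.array([1, 0]), np.array([0, 1]), np.array([2, -1])

    # dim span = matrix rank; both come out 2, the dimension of R^2.
    print(np.linalg.matrix_rank(np.vstack([e1, e2, v])))  # 2
    print(np.linalg.matrix_rank(np.vstack([e1, e2])))     # 2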