
Null Space Calculator

Created by Maciej Kowalski, PhD candidate
Reviewed by Steven Wooding
Last updated: May 20, 2024


Welcome to Omni's null space calculator, where we'll study the topic of how to find the null space of a matrix. In essence, it means finding all the vectors that are mapped to zero by the given array A. This notion comes from treating A as a linear map, and as such, the null space of a matrix is also often called the kernel of that matrix. In general, it will have infinitely many elements, so "finding all the vectors" basically means determining the basis for the null space (whose dimension we call the nullity of the matrix).

So sit back, bring along a cup of coffee, and let's get to it!

What is a matrix?

A long time ago in a galaxy far, far away... No, wait! This doesn't seem right for us talking about what a matrix is. Let's try again.

Several years ago, in the virtual world that we call reality, we used to go to primary school. And it was then when we first saw what a matrix is. That's right; they had taught us about these high-school-algebra objects long before we learned the most memorable information from our teenage education - that mitochondria are the powerhouses of cells.

We've known about matrices for years.

We begin our mathematical education with learning numbers by counting dogs and kitties. Soon enough, the thing gets more complicated, and they tell us to add and subtract those numbers. But it's not that bad; we have our fingers to help us with that.

However, to our wonder, it appears that mathematics doesn't end there. The next topic they introduce is multiplication, and that one is trickier. Don't get us wrong; the operation makes sense. After all, if there are five trees and each has seven apples, then it's useful to know that, in total, we have thirty-five apples. The problem is that we don't have that many fingers! How do we grasp this if our most reliable tool failed us?

That's where matrices come in! Well, in a sense, at least.

To help us with our multiplication (and division), we are shown what is called a multiplication table - some rows and columns with numbers in it. Precisely like a matrix.

  ·  |   1    2    3    4    5    6    7    8    9   10
-----+--------------------------------------------------
  1  |   1    2    3    4    5    6    7    8    9   10
  2  |   2    4    6    8   10   12   14   16   18   20
  3  |   3    6    9   12   15   18   21   24   27   30
  4  |   4    8   12   16   20   24   28   32   36   40
  5  |   5   10   15   20   25   30   35   40   45   50
  6  |   6   12   18   24   30   36   42   48   54   60
  7  |   7   14   21   28   35   42   49   56   63   70
  8  |   8   16   24   32   40   48   56   64   72   80
  9  |   9   18   27   36   45   54   63   72   81   90
 10  |  10   20   30   40   50   60   70   80   90  100

A matrix is an array of elements (usually numbers) with a set number of rows and columns. It is the main object of study in linear algebra. An example of a matrix would be:

A = \begin{bmatrix} 3 & -1 \\ 0 & 2 \\ 1 & -1 \end{bmatrix}

Moreover, we say that a matrix has cells, or boxes, into which we write the elements of our array. For example, the matrix A above has the value 2 in the cell that is in the second row and the second column. The starting point here is 1-cell matrices, which are the same thing as real numbers for all intents and purposes.

As you can see, matrices came to be when scientists decided that they needed to write a few numbers concisely and operate with the whole lot as a single object. As such, they naturally appear when dealing with:

  • Systems of equations, especially with Cramer's rule (Check out our dedicated Cramer's rule calculator to see how it works in practice);
  • Vectors and vector spaces;
  • 3-dimensional geometry (e.g., the dot product and the cross product);
  • Linear transformations (translation and rotation); and
  • Graph theory and discrete mathematics.

We can look at matrices as an extension of the numbers as we know them. After all, the multiplication table above is just a simple example, but, in general, we can have any numbers we like in the cells: positive, negative, fractions, and decimals. If you're feeling especially brainy, you can even have some complex numbers in there too.

The usefulness of matrices comes from the fact that they contain more information than a single value (i.e., they contain many of them). Arguably, it makes them fairly complicated objects, but it's still possible to define some basic operations on them, like, for example, addition and subtraction.

However, the possibilities don't end there! Matrices have an extremely rich structure. To illustrate this with an example, let us mention that to each such matrix, we can associate several important values, such as the determinant, which you can quickly evaluate using our matrix determinant calculator.

But it's just the beginning of our journey! We're here to learn how to find the null space of a matrix, and the word "null" hasn't even appeared yet. So let's not waste another minute and jump to the next section to learn what it is that we're trying to calculate.

Null space/kernel of a matrix

As mentioned in the above section, we can view a matrix as a linear map (a translation or rotation) in a Euclidean space, i.e., the one-dimensional line, the two-dimensional plane, the three-dimensional space, or the four-dimensional hyperspace (the list goes on, but we're focusing here on arrays of size up to 4×4). In other words, the elements of that space are vectors with one, two, three, or four coordinates, respectively.

The null space or the kernel of a matrix is the subspace consisting of all the vectors mapped to the zero vector of that space. In mathematical notation, for an array A with n columns, this means all the vectors v = (v₁, v₂, …, vₙ) for which

A · v = 0,

where · here denotes matrix multiplication, and 0 is the zero vector, i.e., 0 = (0, 0, …, 0) with n coordinates.
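To make the definition concrete, here's a quick numerical check with NumPy. The matrix and vector below are made up for illustration and don't come from this article:

```python
import numpy as np

# A hypothetical 2x3 matrix, viewed as a linear map from 3D space to 2D space.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# The vector v = (1, 1, -1) lies in the null space of this particular A:
# multiplying A by v gives the zero vector.
v = np.array([1.0, 1.0, -1.0])

print(A @ v)  # [0. 0.]
```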

Observe that:

  1. The zero vector is always in the null space. After all, whatever matrix we have, we'll get zero if we multiply it by zero.
  2. In general, the kernel of a matrix can have infinitely many elements. In fact, if (v₁, v₂, …, vₙ) is in the null space, then so is (av₁, av₂, …, avₙ) for any real number a.
  3. You can extend Point 2. above further: if x₁, x₂, …, xₖ all belong to the null space, then so does every linear combination of these vectors.
  4. The above points boil down to the fact that the null space of a matrix is a vector subspace of the big Euclidean space. Therefore, if we want to study it in-depth, it'd be best to be able to describe what its elements look like in general. That's where the notion of a basis for the null space comes into play - more on that in the next section.
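Points 2 and 3 are easy to verify numerically. A minimal sketch, again with a made-up matrix and two null-space vectors checked by hand:

```python
import numpy as np

# A hypothetical matrix whose second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Two vectors from the null space of this particular A.
x1 = np.array([1.0, 1.0, -1.0])
x2 = np.array([2.0, -1.0, 0.0])

# Any linear combination of null-space vectors stays in the null space.
w = 3.0 * x1 - 2.0 * x2
print(A @ w)  # [0. 0.]
```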

Alright, so the null space of a matrix is just a subspace of elements that satisfy some formula. Fair enough. But how is it useful? What does it tell us about the matrix itself? Well, how fortunate of you to ask, we were just getting to that!

Nullity of a matrix

The nullity of a matrix is the dimension of its null space. Simple as that, no strings attached. But although the nullity definition takes one line, let's try to go one step further and actually try to understand it.

Generally, the dimension of a space is equal to the size of its basis. Yup, yet another definition that uses words and notions that are not too easy themselves. Well, it looks like we'll have to go deeper than that.

To understand nullity, we first need to understand what the basis for the null space is.

Let x₁, x₂, …, xₖ be a collection of vectors from the same space. Then any vector w of the form:

w = α₁·x₁ + α₂·x₂ + α₃·x₃ + … + αₖ·xₖ,

where α₁, α₂, α₃, …, αₖ are arbitrary real numbers, is called a linear combination of the vectors x₁, x₂, x₃, …, xₖ. The space of all such vectors w is called the span of x₁, x₂, x₃, …, xₖ.

At times, it may happen that among x₁, x₂, …, xₖ, some of the vectors are redundant. This means that you can construct all elements of the space from only a part of the k vectors given. In that case, we say that x₁, x₂, …, xₖ are linearly dependent. Otherwise, we call them linearly independent. (Please note that here, we mention all these notions only briefly. For a more in-depth study, be sure to check out our linear independence calculator.)

Any linearly independent set of vectors that generates a space is called its basis. In particular, the basis for the null space of a matrix is a linearly independent collection of vectors that generates that matrix's kernel. Therefore, coming back to the nullity definition at the beginning of this section, we can reformulate it as follows:

💡 The nullity of a matrix is the number of linearly independent vectors that generate the null space of that matrix.

All in all, given a matrix (i.e., a linear map) A that has n columns (i.e., acts on elements of an n-dimensional space), it is beneficial to know which elements are vanishing (that is, are mapped to zero) under A. In other words, to know the null space of A. For instance, if A has, say, 4 columns and the nullity of that matrix is 3, then we know that A "kills" three dimensions along the way, and the fewer dimensions there are, the easier the calculations.

A matrix's nullity is connected to its rank by the following theorem, called simply the rank-nullity theorem.

💡 For a matrix A, the sum of its rank and its nullity is equal to the number of columns in A.

In other words, if you know a matrix's nullity, then you know its rank and vice versa. For more information on that topic, be sure to check out our matrix rank calculator.
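You can watch the rank-nullity theorem in action with SymPy; the matrix below is our own made-up example, not one from this article:

```python
import sympy as sp

# A hypothetical 3x4 matrix; its third row is the sum of the first two,
# so the rank is smaller than the number of rows.
A = sp.Matrix([[1, 2, 0, 1],
               [2, 4, 1, 3],
               [3, 6, 1, 4]])

rank = A.rank()
nullity = len(A.nullspace())  # nullity = size of the kernel's basis

# Rank + nullity equals the number of columns.
print(rank, nullity, A.cols)  # 2 2 4
```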

Phew, it's been quite a few definitions, don't you think? Fortunately, the nullity definition was the last one for today, and it made us more than ready to finally learn how to find the null space of a matrix. Let's waste not a second longer and see what we came here for!

How to find the null space of a matrix?

When we're trying to determine the kernel and nullity of a matrix, the primary tool to use is the Gauss-Jordan elimination. It is a handy algorithm that transforms a given matrix into its reduced row echelon form, which is so much easier to work with.

The idea is to "kill" (i.e., make zero) as many entries of the matrix as possible using the so-called elementary row operations. These are:

  • Exchanging two rows of the matrix;
  • Multiplying a row by a non-zero constant; and
  • Adding to a row a non-zero multiple of a different row.

The crucial property here is that the initial matrix and its reduced row echelon form have the same rank and null space. Because the reduced form is so useful, our null space calculator can show you what the input array looks like after the Gauss-Jordan elimination: it's enough to go into the advanced mode and choose the right option under "Show the reduced matrix?".
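If you'd like to experiment with the elimination yourself, SymPy's `rref()` method performs exactly this reduction. The matrix below is a made-up example:

```python
import sympy as sp

# rref() runs Gauss-Jordan elimination and returns the reduced row echelon
# form together with the indices of the columns that contain a leading one.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 7],
               [1, 2, 4]])

R, pivot_cols = A.rref()
print(R)           # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_cols)  # (0, 2)
```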

For example, suppose we have a matrix with three rows and four columns.

A = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \end{bmatrix}

Then, the first step in the Gauss-Jordan elimination is to take the first cell in the first row, i.e., a₁ (provided that it is non-zero), and use the elementary row operations to kill the entries below it. This means that we add a suitable multiple of the top row to the other two so that we obtain a matrix of the form:

A = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ 0 & s_2 & s_3 & s_4 \\ 0 & t_2 & t_3 & t_4 \end{bmatrix}

Next, we do something similar with the middle row. We take s₂ (as long as it is non-zero) and use it to kill the entries below it. As a result, we get an array of the form:

A = \begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ 0 & s_2 & s_3 & s_4 \\ 0 & 0 & k_3 & k_4 \end{bmatrix}

Now comes the distinction between the Gauss-Jordan elimination and its simplified form, the Gaussian elimination: we divide each row by the first non-zero entry in that row. This gives:

A = \begin{bmatrix} 1 & p_2 & p_3 & p_4 \\ 0 & 1 & q_3 & q_4 \\ 0 & 0 & 1 & r_4 \end{bmatrix}

And that's where we often end the algorithm, like, for instance, when we're looking for the column space of a matrix. In fact, for our purposes, we can already read off some useful information from the array we have.

🔎 Don't know what a column space is? No worries - head to Omni's column space calculator for a brief introduction to the topic!

The ones that appear as the first non-zero elements in each row are called the leading ones. In our example, we have them in the first, second, and third columns out of four. The number of columns that do not contain a leading one is equal to the nullity of that matrix. As a bonus, if we recall the rank-nullity theorem from the above section, we get that the number of columns with a leading one is the rank of our matrix.

However, to find the basis for the null space, we'll modify the array some more. Again, we use elementary row operations, but this time we go from the bottom upwards. Firstly, we use the 1 in the third row to kill the entries above it.

A = \begin{bmatrix} 1 & p_2 & 0 & n_4 \\ 0 & 1 & 0 & m_4 \\ 0 & 0 & 1 & r_4 \end{bmatrix}

Next, we do the same with the 1 in the middle row to kill the cell above it.

A = \begin{bmatrix} 1 & 0 & 0 & u_4 \\ 0 & 1 & 0 & m_4 \\ 0 & 0 & 1 & r_4 \end{bmatrix}

This, finally, is the matrix that will give us the basis for the null space. To determine it in detail, we follow a few simple rules.

  • If the matrix has no columns without a leading one, the null space is trivial, i.e., it is of dimension 0 and contains only the zero vector.
  • If the array contains a column with nothing but zeros, say, the k-th one, then the elementary vector eₖ is an element of the basis, i.e., the vector with 1 in the k-th coordinate and zeros otherwise.
  • If a column, say, the k-th one, doesn't have a leading one and has at least one non-zero entry, then we construct a vector v of the basis by taking all the non-zero cells of that column and putting their opposites (i.e., the value times −1) in the corresponding coordinates of v. Moreover, we put 1 in its k-th coordinate and zeros in the remaining ones. Observe that this is what happens in the example above: from the matrix that we've obtained, we know that the vector (−u₄, −m₄, −r₄, 1) is in the basis for the null space.
  • No other vectors belong to the basis.
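These rules are easy to check on a concrete matrix. Below, we plug made-up numbers into the RREF shape from this section (concrete stand-ins for u₄, m₄, r₄) and verify that the vector built by the rules really lands in the null space:

```python
import sympy as sp

# An RREF with leading ones in the first three columns
# and a free fourth column.
R = sp.Matrix([[1, 0, 0, 5],
               [0, 1, 0, -2],
               [0, 0, 1, 3]])

# Rule: negate the free column's entries in the pivot coordinates
# and put a 1 in the fourth coordinate.
v = sp.Matrix([-5, 2, -3, 1])

print(R * v)  # Matrix([[0], [0], [0]])
```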

Alright, we admit that although informative, this section was no walk in the park. But no worries - we hereby declare the end of theory! (Of course, for the rest of this article, not forever.) We've spent so much time on it that we deserve an excellent example with numbers rather than letters, don't you think?

Example: using the null space calculator

Say that you're sitting through the very last class before the winter break, and it just so happens that it's the linear algebra class. You can already see yourself running home to start packing for the mountain trip you have organized for the weekend. Oh, you just can't wait to go skiing, but first, you have to endure this last half an hour of mathematics.

You can see the teacher at the edge of dozing off. Apparently, not only could you use the break from school. However, they can't just end the class there and then, so they decide to give you one last task, and whoever finishes it is free to leave. So the only thing separating you from the winter break is finding the null space of the following matrix:

A = \begin{bmatrix} 2 & -4 & 8 & 2 \\ 6 & -12 & 3 & 13 \end{bmatrix}

Oh, how lucky we are that Omni is here with the null space calculator! Let's see how fast it will give us the answer to our problem.

The array we have at hand has two rows and four columns, so we begin by choosing the correct options under "Number of rows" and "Number of columns". This will trigger a symbolic matrix to appear with its cells denoted with symbols used under it. We see that the first row has entries a₁, a₂, a₃, and a₄, so looking back at the array A, we input

a₁ = 2, a₂ = −4, a₃ = 8, and a₄ = 2.

Similarly, the second row has b-s in it, so we get

b₁ = 6, b₂ = −12, b₃ = 3, and b₄ = 13.

Once we input the last number, the null space calculator will spit out the answer. However, let's not get ahead of ourselves and take the time to grab a piece of paper and calculate the whole thing ourselves.

We begin by applying the Gauss-Jordan elimination to A. It has only two rows, so it shouldn't be too difficult.

We want to use the first element in the first row, i.e., the 2, to kill the entry below it. Since 6 + (−3)·2 = 0, we add (−3) times the top row to the bottom one. This gives:

\begin{bmatrix} 2 & -4 & 8 & 2 \\ 6+(-3)\cdot 2 & -12+(-3)\cdot(-4) & 3+(-3)\cdot 8 & 13+(-3)\cdot 2 \end{bmatrix} = \begin{bmatrix} 2 & -4 & 8 & 2 \\ 0 & 0 & -21 & 7 \end{bmatrix}

Next, just as the Gauss-Jordan elimination states, we divide each row by its first non-zero entry. In our case, this translates to dividing the first row by 2 and the second by −21.

\begin{bmatrix} \frac{2}{2} & \frac{-4}{2} & \frac{8}{2} & \frac{2}{2} \\ 0 & 0 & \frac{-21}{-21} & \frac{7}{-21} \end{bmatrix} = \begin{bmatrix} 1 & -2 & 4 & 1 \\ 0 & 0 & 1 & -\frac{1}{3} \end{bmatrix}

A piece of cake, wasn't it?

Next, following the instructions from the above section, we use elementary row operations from the bottom upwards. For us, this means using the 1 from the second row to kill the 4 above it. Since 4 + (−4)·1 = 0, we add (−4) times the bottom row to the top one and obtain

\begin{bmatrix} 1 & -2 & 4+(-4)\cdot 1 & 1+(-4)\cdot(-\frac{1}{3}) \\ 0 & 0 & 1 & -\frac{1}{3} \end{bmatrix} = \begin{bmatrix} 1 & -2 & 0 & 2\frac{1}{3} \\ 0 & 0 & 1 & -\frac{1}{3} \end{bmatrix}

We got a matrix with two leading ones - in the first and third columns. Therefore, the nullity of our matrix is 2 (since there are four columns and 4 − 2 = 2).

To determine a basis for the null space, we again recall the instructions from the above section. We have no columns with zeros only, so the basis will have no elementary vectors eₖ. We move on to check the columns that don't have leading ones: the second and the fourth.

For the second, we construct a vector v₁ with four coordinates (since the matrix has four columns). The first coordinate is −2·(−1) = 2 (because we have −2 in the row with the leading one in the first column), the second coordinate is 1 (because we're considering the second column), and the rest are zero. All in all, we have v₁ = (2, 1, 0, 0).

Similarly, for the fourth, we get v₂ whose first coordinate is 2⅓·(−1) = −2⅓ (because we have 2⅓ in the row with the leading one in the first column), the third is −⅓·(−1) = ⅓ (because we have −⅓ in the row with the leading one in the third column), the fourth one is 1 (because we're considering the fourth column), and the rest are zero. This gives v₂ = (−2⅓, 0, ⅓, 1).

To sum up, the basis for the null space consists of two vectors: v₁ = (2, 1, 0, 0) and v₂ = (−2⅓, 0, ⅓, 1). The nullity of the matrix is 2.
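As a sanity check, SymPy's `nullspace()` method reproduces this basis from the article's matrix (possibly up to the scaling and ordering of the vectors):

```python
import sympy as sp

# The matrix from the worked example above.
A = sp.Matrix([[2, -4, 8, 2],
               [6, -12, 3, 13]])

basis = A.nullspace()

# Two basis vectors, so the nullity is 2, and each one
# really is mapped to the zero vector by A.
print(len(basis))  # 2
for v in basis:
    assert A * v == sp.zeros(2, 1)
    print(v.T)
```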

Task done! And thanks to Omni Calculator, there was no need for any overtime. Even better - we feel that when the time comes for a test in null spaces, you're more than ready.

Keep the engine running; we're ready to hit the road and begin the winter break!
