
Diagonalize Matrix Calculator

Created by Maciej Kowalski, PhD candidate
Reviewed by Bogna Szyk and Jack Bowater
Last updated: May 22, 2023


Welcome to the diagonalize matrix calculator, where we'll take you on a mathematical journey to the land of matrix diagonalization. We'll go through the topic of how to diagonalize a matrix using its eigenvalues and eigenvectors together. This process is extremely useful in advanced array calculations since it's so much easier to deal with a diagonal matrix rather than a full one.

But is it a simple algorithm? Is every array a diagonalizable matrix? Sit back, grab your morning/afternoon coffee, and let's find out, shall we?

What is a matrix?

Do you remember the good old days of primary school mathematics? You counted how many oranges Mr. Smith had if he bought eight and ate two, and they told you that these were called integer numbers, and math seemed simple enough.

Then Mr. Smith opened up a little shop and sold his own fruit – $1.20 per pound for apples and $0.90 per pound for bananas, including taxes. They told you that these new values were called rational numbers, and you spent a few months getting the hang of them, multiplying and adding them together. Still, it wasn't too bad.

But they had to go further, didn't they? As long as geometry consisted of shapes such as squares or rectangles, the calculations were pretty simple and straightforward. But once they introduced triangles, especially right triangles, and the Pythagorean theorem, some weird values appeared, which were called roots and, apparently, couldn't be described in the form of a good old fraction. And don't even get us started on the π that spoiled all the circle calculations.

All these numbers were called real numbers because they appear naturally in the real world. Well, fair enough, but it's so much easier to look at an approximation of the cube root of three, 1.44, than the flimsy ∛3 sign.

And you know what? The big-headed mathematicians didn't stop there and decided that it's quite a shame that -1 doesn't have a square root, so why not imagine that it does and call it i. Funnily enough, they actually named it the imaginary number and called this whole new extension complex numbers. But, to give credit where credit is due, complex numbers are extremely useful in all kinds of sciences, including physics and statistics.

So, now that we have all these fancy numbers, what good can we do with them? That's right – take a boatload of them, write them in a table, and treat them as one thing.

A matrix is an array of elements (usually numbers) that has a set number of rows and columns. An example of a matrix would be:

A=\begin{pmatrix} 3 & -1 \\ 0 & 2\\ 1 & -1 \end{pmatrix}

Moreover, we say that a matrix has cells, or boxes, into which we write the elements of our array. For example, the matrix A above has the value 2 in the cell in the second row and the second column. The simplest case is 1-cell matrices, which are, for all intents and purposes, the same thing as real numbers.
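In code, such an array is usually stored as a two-dimensional structure. Here's a minimal sketch, assuming the NumPy library is available:

```python
import numpy as np

# The 3-row, 2-column matrix A from the example above.
A = np.array([[3, -1],
              [0,  2],
              [1, -1]])

print(A.shape)   # (rows, columns): (3, 2)
print(A[1, 1])   # the cell in the second row, second column: 2
```

Note that code indexes rows and columns from 0, so the "second row, second column" cell is `A[1, 1]`.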

As you can see, matrices came to be when scientists decided that they needed to write a few numbers concisely and operate on the whole lot as a single object. As such, they naturally appear when dealing with:

  • Systems of equations, especially when calculating Cramer's rule and the (reduced) row echelon form;
  • Vectors and vector spaces;
  • 3-dimensional geometry (e.g., the dot product and the cross product);
  • Linear transformations (translation and rotation); and
  • Graph theory and discrete mathematics.

We can look at matrices as an extension of the numbers as we know them (real or complex). As such, it would make sense to define some basic operations on them, like, for example, addition and subtraction. And indeed, it can be easily done, but these new objects have a much more complex structure, and, for each array, we can compute several important values. For example, we can calculate the matrix's rank.

However, we've gathered here today to look at something quite different: matrix diagonalization. We will need some tricks to define it, but how about we start with what exactly a diagonal matrix is and why such matrices are easier to deal with?

Diagonal matrix: definition and properties

We call a square array of numbers a diagonal matrix if it is of the form:

A=\begin{pmatrix} x_1 & 0 & \ldots & 0\\ 0&x_2&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&x_n \end{pmatrix}

where x₁, x₂, ..., xₙ are some numbers. In other words, a diagonal matrix is an array whose non-zero entries only appear on the main diagonal.

Now let's list a few useful properties of diagonal matrices to convince you that they are fairly easy objects.

  1. The sum and product of diagonal matrices are again diagonal matrices. In other words, if A and B are diagonal matrices, then A + B, A·B, and A∘B are also diagonal. What's the last symbol? The Hadamard product: learn how to calculate it with our Hadamard product calculator!

  2. Diagonal matrices are transpose-invariant. This means that if A is a diagonal matrix, then its transpose is the same object: Aᵀ = A.

  3. The k-th power of a diagonal matrix is a diagonal matrix with the same entries individually raised to the k-th power. This one might be easier to understand symbolically. Recall the array A above with the xᵢ's on the diagonal. Then:

\qquad A^k=\begin{pmatrix} x_1^k & 0 & \ldots & 0\\ 0&x_2^k&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&x_n^k \end{pmatrix}

Hopefully, you can see some advantages in learning how to diagonalize a matrix. The third property, especially, shows that if we need to raise a matrix to some high power, then matrix diagonalization makes the task quite a lot easier. And why spend ages doing gruesome calculations if a simple trick can make the task so much simpler?
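These properties are easy to check numerically. A quick sketch, assuming NumPy is available (the diagonal entries 2, 3, 5 are just an illustrative choice):

```python
import numpy as np

# An illustrative diagonal matrix with 2, 3, 5 on the diagonal.
D = np.diag([2.0, 3.0, 5.0])

# Property 2: a diagonal matrix equals its own transpose.
assert np.array_equal(D, D.T)

# Property 3: the k-th power just raises each diagonal entry
# to the k-th power.
k = 4
direct = np.linalg.matrix_power(D, k)   # repeated matrix multiplication
shortcut = np.diag(np.diag(D) ** k)     # three scalar powers
assert np.allclose(direct, shortcut)
```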

We now need to define eigenvalues and eigenvectors, which are the most important tools when studying matrix diagonalization, and that's exactly what the next section is all about.

Eigenvalues and eigenvectors

🙋 Checkpoint! If you need to refresh your knowledge about eigenvectors and eigenvalues, visit our eigenvalue and eigenvector calculator!

Suppose that we have a square matrix A of size n×n. If there exists some number λ and a non-zero vector v of length n for which:

A\cdot \boldsymbol{v} = \lambda \boldsymbol{v}

then we call λ an eigenvalue of A and v its corresponding eigenvector.

The above is a formal definition, so let's now try to translate it into everyday language. You know, the kind of language that you use to talk about eigenvalues and eigenvectors on a daily basis.

Suppose that you have a square matrix that has some ugly entries and is generally difficult to look at. But maybe its beauty is on the inside? Perhaps we can use some not-so-terrible smaller objects to describe it thoroughly?

Eigenvalues and eigenvectors are exactly that. Once we know them, we know the initial matrix quite well and are able to simplify any further calculations (as an example, recall taking a power of a matrix described in the section above).

So, how do we get these eigen-thingies? Well, let's take a second look at the equation that describes them above. If we move the right side to the left:

A\cdot \boldsymbol{v} - \lambda\boldsymbol{v} = 0

and write it using the identity matrix, I (the diagonal matrix with 1s on the diagonal), then we'll get an equivalent matrix equation:

\left(A-\lambda I\right)\cdot \boldsymbol{v} = 0

Here we multiply the matrix A − λI by a non-zero vector (a one-column matrix), and the result should be 0, or rather the zero vector. Since v is non-zero, this is only possible if A − λI is singular (not invertible), which means we need to have:

\text{det}\left(A - \lambda I\right)=0

where det denotes the determinant of a matrix (it's also written |A − λI|, not to be confused with the absolute value of a number).

Since we know the entries of A, this equality gives us an equation with λ as an unknown variable, called the characteristic polynomial of A. Its degree is equal to the size of the array, i.e., it's a quadratic equation for 2×2 matrices and a cubic equation for 3×3 matrices.
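If you'd rather not expand the determinant by hand, the characteristic polynomial and its roots can be computed numerically. A sketch assuming NumPy, with a hypothetical 2×2 matrix chosen for illustration:

```python
import numpy as np

# A hypothetical 2x2 example matrix (chosen for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of det(lambda*I - A), highest power
# first; it has the same roots as det(A - lambda*I).
# Here: lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)

# The roots of the characteristic polynomial are the eigenvalues of A.
eigenvalues = np.roots(coeffs)
print(sorted(eigenvalues.real))  # approximately [1.0, 3.0]
```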

As the last thing in this section, let's point out a few important things concerning eigenvalues and eigenvectors.

  1. Eigenvalues don't always exist. As we've mentioned above, we have quadratic and cubic equations for 2×2 and 3×3 matrices, respectively. Such things don't always have a real solution (e.g., no real number satisfies the equation x² + 1 = 0). However, if we instead consider the complex numbers, then every equation will have at least one solution.

  2. Eigenvalues have multiplicities. It can happen that, say, a quadratic equation has only one solution. For example, the equation x² + 2x + 1 = 0 is satisfied only for x = -1 because x² + 2x + 1 = (x + 1)². This small 2 in the exponent is called the multiplicity of x = -1. What is more, in the field of complex numbers, the sum of multiplicities of an equation's solutions is always equal to the degree of the polynomial. For instance, a cubic equation (an equation of degree 3) over the complex numbers can have three solutions of multiplicity 1; one solution of multiplicity 2 and one of multiplicity 1; or one solution of multiplicity 3.

  3. A matrix with too few eigenvalues (counted with multiplicities) is not a diagonalizable matrix. As points 1. and 2. suggest, this can only happen if we don't consider complex numbers. In particular, a matrix with no real eigenvalues is not a diagonalizable matrix (in the field of real numbers).

  4. One eigenvalue can have multiple eigenvectors. For example, the identity matrix (the diagonal matrix with 1s on the diagonal) has only one eigenvalue, λ = 1, and it corresponds to as many (linearly independent) eigenvectors as the size of the matrix (which is equal to the multiplicity of λ = 1).

  5. A matrix with too few eigenvectors is not a diagonalizable matrix. One example of when that happens is point 3. above. But there's more! As opposed to eigenvalues, a matrix's eigenvectors don't have multiplicities. It may, however, happen that, say, an eigenvalue of multiplicity 2 has only one eigenvector, even if we take complex numbers. Such matrices are called non-diagonalizable. They are rather rare, but be sure to keep an eye out for them!

Phew, that was quite a lot of theory, wouldn't you say? We keep defining things and their properties, and minute after minute passes without a clear set of instructions on what we're here for: how to diagonalize a matrix. It's high time we moved on to that, isn't it?

How to diagonalize a matrix?

Say that you're given a square array, A, of size n×n, and you know that it's a diagonalizable matrix. We've seen in the section Diagonal matrix: definition and properties what a diagonal matrix is, so, at first glance, it may seem a bit too much like magic to transform one thing into the other. Well, we might need some help with that.

The matrix diagonalization of AA is given by:

A=S\cdot D\cdot S^{-1},

where:

  • D is diagonal; and
  • S is our little helper (on the right, we have the inverse of S, S⁻¹).

Finding S and D may seem like no easy task. Fortunately, as mentioned in the above section, once we have the eigenvalues and eigenvectors of A, we have everything we need.

Below we list the main points of the procedure whenever you want to find matrix diagonalization. We should warn you, however, that, in full generality, some of them are no walk in the park and can take some time to finish. Oh, how lucky we are that the diagonalize matrix calculator exists!

  1. Take your square matrix A of size n×n and calculate the determinant, det(A − λI), i.e., the determinant of A with λ's subtracted from the diagonal entries.

  2. Find the solutions of det(A − λI) = 0 and figure out their multiplicities. If the sum of the multiplicities is not equal to n, then A is not a diagonalizable matrix. Otherwise, proceed.

  3. The solutions of det(A − λI) = 0 are the eigenvalues of A. Take each eigenvalue λ separately and write the matrix A − λI.

  4. To find the eigenvectors, define a vector v = (x₁, x₂, ..., xₙ) of length n and solve the matrix equation (A − λI)·v = 0. (It will have infinitely many solutions that depend on finitely many parameters.)

  5. There are as many eigenvectors corresponding to λ as there are parameters in the solution of (A − λI)·v = 0. Each of them is given by substituting in the solution 1 for one of the parameters and 0 for all the other ones.

  6. If an eigenvalue has fewer eigenvectors than its multiplicity, then A is not a diagonalizable matrix.

In the end, if A is diagonalizable, we should obtain eigenvalues λ₁, λ₂, ..., λₙ (some may be repeated according to their multiplicities) and eigenvectors v₁, v₂, ..., vₙ. Then:

A=S\cdot D\cdot S^{-1}

With:

D = \begin{pmatrix} \lambda_1&0&\ldots&0\\ 0&\lambda_2&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\lambda_n \end{pmatrix}

And S is the matrix whose first column contains the entries of v₁, whose second has the entries of v₂, and so on.
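The whole procedure can be sketched in a few lines, assuming NumPy is available. Here `np.linalg.eig` does the eigenvalue/eigenvector work in one call, and the rank test (with its tolerance) is our own way of catching defective matrices, not part of the article's step-by-step method:

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (S, D) such that A = S @ D @ inv(S), or raise if A is defective."""
    eigenvalues, S = np.linalg.eig(A)  # columns of S are the eigenvectors
    # If the eigenvectors don't span the whole space, A is not diagonalizable.
    if np.linalg.matrix_rank(S, tol=tol) < A.shape[0]:
        raise ValueError("matrix is not diagonalizable")
    return S, np.diag(eigenvalues)

# A hypothetical 2x2 example with eigenvalues 2 and 5.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
S, D = diagonalize(A)
assert np.allclose(S @ D @ np.linalg.inv(S), A)
```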

Phew, that was a lot of symbols, fancy scientific language, and generally a lot to get our heads around. But if you've made it this far, you deserve a cold one and a nice numerical example to see the diagonalize matrix calculator in practice.

Let's not waste another second and get to it!

Example: using the diagonalize matrix calculator

It's the last algebra class of the year. All the students already feel the laziness creeping in, and even the teacher seems like she's about to doze off. Unfortunately, she still has enough energy to give the class one last test, but tells you that whoever finishes it can leave and celebrate the beginning of summer.

With a little smirk on her face, the teacher writes a 3×3 matrix on the blackboard and tells you to find its 20th power. You hear a quiet murmur go up all around you and think about how long it will take to do so many matrix multiplications. Well, let's at least appreciate that it's not of size 4×4.

You suddenly remember the trick with matrix diagonalization that should help speed up the calculations. Excited in spite of yourself, you grab a piece of paper and begin your last exercise of the year.

The matrix is:

A=\begin{pmatrix}1&0&0\\2&1&-1\\0&-1&1\end{pmatrix}

Let's first see how easy it is to find the answer with our diagonalize matrix calculator.

We have a 3×3 matrix, so the first thing we need to do is tell the calculator that by choosing the correct option under "Matrix size". This will show us a symbolic example of such a matrix that tells us what notation we use for its entries. For example, the first row has elements a₁, a₂, and a₃, so we look back at our array and input

a₁ = 1, a₂ = 0, and a₃ = 0.

Similarly, the other two rows give

b₁ = 2, b₂ = 1, b₃ = -1,

c₁ = 0, c₂ = -1, c₃ = 1.

Once we write the last value, the diagonalize matrix calculator will spit out all the information we need: the eigenvalues, the eigenvectors, and the matrices S and D in the decomposition A = S·D·S⁻¹.

Now let's see how we can arrive at this answer ourselves. We begin by finding the matrix's eigenvalues. To do that, we first find the characteristic polynomial of A. We obtain it by calculating the determinant, det(A − λI), which in our case is:

\begin{split} &\text{det}\left(A-\lambda I\right)\\[.5em] &=\begin{vmatrix}1-\lambda&0&0\\2&1-\lambda&-1\\0&-1&1-\lambda\end{vmatrix}\\[20px] &=(1-\lambda)^3+0+0-(1-\lambda)\cdot(-1)\cdot(-1)\\[.5em] &=(1-\lambda)^3-(1-\lambda)\\[.5em] &=(1-\lambda)\cdot\left((1-\lambda)^2-1\right)\\[.5em] &=(1-\lambda)\cdot(1-2\lambda+\lambda^2-1)\\[.5em] &=\lambda\cdot(1-\lambda)\cdot(\lambda-2) \end{split}

That went quite well! Even though we have a polynomial of degree 3, we managed to describe it in a nice multiplicative form. The advantage here is that the cubic equation:

\lambda \cdot(1-\lambda)\cdot(\lambda -2)=0

is satisfied if and only if one of the factors is zero, i.e., if λ = 0, 1 − λ = 0, or λ − 2 = 0. This translates to λ = 0, λ = 1, or λ = 2, and these are exactly the eigenvalues of A.
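We can double-check these eigenvalues numerically (a quick sketch, assuming NumPy):

```python
import numpy as np

# The matrix from the teacher's exercise.
A = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, -1.0],
              [0.0, -1.0, 1.0]])

eigenvalues = np.linalg.eigvals(A)
print(sorted(eigenvalues.real))  # approximately [0.0, 1.0, 2.0]
```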

Now we need to find the eigenvectors. For that, let's take a vector v = (x, y, z) and describe the matrix equation (A − λI)·v = 0 as a system of equations for each eigenvalue λ.

For λ = 0 we have:

A-\lambda I = A = \begin{pmatrix}1&0&0\\2&1&-1\\0&-1&1\end{pmatrix}

Which translates (A − λI)·v = 0 into the following three (coordinate) equations:

\begin{cases} 1\cdot x + 0\cdot y + 0\cdot z = 0\\ 2\cdot x + 1\cdot y+(-1)\cdot z = 0\\ 0\cdot x + (-1)\cdot y + 1\cdot z=0 \end{cases}

Which is:

\begin{cases} x=0\\ 2x+y-z=0\\ -y+z=0 \end{cases}

The solutions to that system are of the form x = 0, y = t, z = t, where t is an arbitrary number. Taking t = 1, we get the first eigenvector: v = (0, 1, 1).

If we repeat this reasoning for the other two eigenvalues, we'll obtain eigenvectors (0.5, 0, 1) and (0, -1, 1) for λ = 1 and λ = 2, respectively.
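Each of these pairs should satisfy the defining equation A·v = λv, which is easy to verify (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, -1.0],
              [0.0, -1.0, 1.0]])

# The eigenvalue/eigenvector pairs found above.
pairs = [
    (0.0, np.array([0.0, 1.0, 1.0])),
    (1.0, np.array([0.5, 0.0, 1.0])),
    (2.0, np.array([0.0, -1.0, 1.0])),
]

# Each pair satisfies the defining equation A @ v == lam * v.
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)
```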

Now that we have the eigenvalues λ₁ = 0, λ₂ = 1, λ₃ = 2 and their eigenvectors v₁ = (0, 1, 1), v₂ = (0.5, 0, 1), v₃ = (0, -1, 1), we use the first to form the diagonal matrix:

\begin{split} D&=\begin{pmatrix} \lambda_1&0&0\\ 0&\lambda_2&0\\ 0&0&\lambda_3 \end{pmatrix}\\[20px] &=\begin{pmatrix} 0&0&0\\ 0&1&0\\ 0&0&2 \end{pmatrix} \end{split}

And the second to create the S matrix:

\begin{split} S &= (\boldsymbol{v}_1,\boldsymbol{v}_2,\boldsymbol{v}_3)\\[10px] &=\begin{pmatrix} 0&0.5&0\\ 1&0&-1\\ 1&1&1 \end{pmatrix} \end{split}

Note how the matrices differ slightly from what the diagonalize matrix calculator gives. The difference is only a matter of reordering the eigenvalues (and hence also the eigenvectors). Both answers are equally correct.

These matrices satisfy A = S·D·S⁻¹, so by the rules of matrix multiplication and the matrix inverse, we have:

\begin{split} A^{20} &= \left(S\cdot D\cdot S^{-1}\right)^{20} \\[.5em] &= S\cdot D^{20}\cdot S^{-1} \end{split}

Now we use some properties of a diagonal matrix and observe that:

\begin{split} D^{20}&=\begin{pmatrix} 0&0&0\\ 0&1&0\\ 0&0&2 \end{pmatrix}^{20}\\[20px] &=\begin{pmatrix} 0^{20}&0&0\\ 0&1^{20}&0\\ 0&0&2^{20} \end{pmatrix}\\[20px] &=\begin{pmatrix} 0&0&0\\ 0&1&0\\ 0&0&2^{20} \end{pmatrix} \end{split}

So now we just need to figure out what 2²⁰ is and multiply D²⁰ by S on one side and by S⁻¹ on the other.
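Putting the whole example together (a NumPy sketch; the cross-check against brute-force matrix powering is just for reassurance):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, -1.0],
              [0.0, -1.0, 1.0]])
D = np.diag([0.0, 1.0, 2.0])
S = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, -1.0],
              [1.0, 1.0, 1.0]])

# D^20 costs only three scalar powers instead of 19 matrix multiplications.
D20 = np.diag(np.diag(D) ** 20)
A20 = S @ D20 @ np.linalg.inv(S)

# Cross-check against brute-force matrix powering.
assert np.allclose(A20, np.linalg.matrix_power(A, 20))
print(2 ** 20)  # the value appearing in D^20: 1048576
```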

Hmm, why don't we again turn to Omni Calculator with that? They're sure to have some tools that can help us. And once that is done, we're off to the beach so that we can get that dream tan. Oh, did we forget about getting that beach body again? No matter; next year will be the year.
