
Matrix Power Calculator

Created by Maciej Kowalski, PhD candidate
Reviewed by Dominik Czernia, PhD and Jack Bowater
Last updated: May 22, 2023


Welcome to the matrix power calculator, where we'll study the topic of raising a matrix to an integer exponent. In essence, taking the power of a matrix works the same as with regular numbers: you apply multiplication (as in the matrix multiplication calculator) several times.

Learning how to square a matrix is quite simple, but when the exponent increases, the task gets tiresome and tricky. That's why we also show how to calculate matrix powers using eigenvalues and eigenvectors.

Let's not waste a second longer and go straight into the nitty-gritty, shall we?

What is a matrix?

Say that you were sent to the supermarket to do some grocery shopping. Apparently, the fridge is half empty, and you have a BBQ party planned for tomorrow. Still, you don't want to be a hoarder and buy too much, so you decide to make a shopping list. You need a dozen eggs, four pounds of potatoes, a couple of large water bottles, a bar of chocolate…

When you write the items you need one after the other, together with how much you should buy, what you end up with is a table. If you want to keep your hand on your home budget, you might even want to add one more column with the prices. Or yet another one with a tax on the products. In the end, you get a table that is very concise but carries a lot of information. The idea behind a matrix is very similar.

A matrix is an array of elements (usually numbers) that has a set number of rows and columns. An example of a matrix would be:

A = \begin{bmatrix} 3 & -1 \\ 0 & 2 \\ 1 & -1 \end{bmatrix}

Moreover, we say that a matrix has cells, or boxes, into which we write the elements of our array. For example, the above matrix, $A$, has the value $2$ in the cell in the second row and the second column. The starting point here is 1-cell matrices, which are, for all intents and purposes, the same thing as real numbers.

As you can see, matrices came to be when a scientist decided that they needed to write a few numbers concisely and operate with the whole lot as a single object. As such, they naturally appear when dealing with:

  • Systems of equations, especially with Cramer's rule and the (reduced) row echelon form;
  • Vectors and vector spaces;
  • 3-dimensional geometry (e.g., the dot product and the cross product);
  • Linear transformations (translation and rotation); and
  • Graph theory and discrete mathematics.

🔎 If you want to add or subtract several matrices, our matrix addition calculator will come in handy!

We can look at matrices as an extension of the numbers as we know them (real or complex) because they carry more information than a single value (i.e., they contain many of them). As such, it makes sense to define some basic operations on them, such as addition and subtraction. And indeed, we can easily do so, but the truth is that their structure is much richer. To illustrate that, let us mention that to every matrix, we can associate several important values, such as its rank or determinant, which allow us to do many interesting, useful things.

We, however, are more interested in taking an exponent of a matrix. Intuitively, it should be connected with matrix multiplication, and indeed it is. So let's start small and see first how to square a matrix.

Matrix multiplication: how to square a matrix?

The most important thing we need to know about matrix multiplication is that sometimes it can't be done. In fact, a product $A \cdot B$ of two matrices exists if and only if the first matrix has as many columns as the second has rows. For example, recall the matrix $A$ from the above section. Since $A$ has three rows, we can multiply it from the left by a matrix of size $3\times3$ (whose columns match $A$'s rows), but not by a matrix of size $3\times2$.

Fortunately for us, the above matrix multiplication condition translates into something very simple in our case: the power (with an integer exponent of at least $2$) of a matrix exists if and only if it is a square matrix. Observe that it indeed makes sense: if $A^2 = A \cdot A$ exists, then $A$ (the first factor) must have as many columns as $A$ (the second factor) has rows.

However, before we get to our very special case, let's see how matrix multiplication works in general.

Say that $A$ has entries $a_{n,m}$, where $n$ denotes the number of the row, and $m$ denotes the column. This means that the entry $a_{2,4}$ refers to the number in the second row and the fourth column. Similarly, let $B$ have entries $b_{n,m}$. If the product $A \cdot B$ is a matrix with entries $c_{n,m}$, then we have:

c_{n,m} = a_{n,1} \cdot b_{1,m} + a_{n,2} \cdot b_{2,m} + a_{n,3} \cdot b_{3,m} + \ldots

In other words, to obtain the entry in row $n$ and column $m$ of the matrix product, we take the $n$-th row of the first matrix and the $m$-th column of the second matrix, multiply their elements in pairs one by one, and then add it all up. Note that, in particular, this means that multiplying two square matrices of size $n\times n$ again gives an array of size $n\times n$.
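As a sketch of how this entry formula plays out in code, here's a minimal Python version (plain lists, no libraries; the function name is our own):

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows.

    Entry c[n][m] is the sum of A[n][k] * B[k][m] over k, exactly
    as in the formula above: n-th row of A times m-th column of B.
    """
    inner = len(B)  # rows of B, which must equal the columns of A
    assert len(A[0]) == inner, "A needs as many columns as B has rows"
    return [[sum(A[n][k] * B[k][m] for k in range(inner))
             for m in range(len(B[0]))]
            for n in range(len(A))]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 result:
print(mat_mul([[1, 2, 3], [4, 5, 6]],
              [[1, 0], [0, 1], [1, 1]]))  # → [[4, 5], [10, 11]]
```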

Well, that sure looks more complicated than regular number multiplication, doesn't it? To get a firmer grasp of the topic, let's look at an example of a $2\times2$ array and check how to square such a matrix.
Let $A$ be given by:

A = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix}

and let the cells of its second power be denoted by:

A^2 = \begin{bmatrix} x_1 & x_2 \\ y_1 & y_2 \end{bmatrix}

Then, according to the above matrix multiplication rule, we have:

  • $x_1 = a_1 \cdot a_1 + a_2 \cdot b_1$;
  • $x_2 = a_1 \cdot a_2 + a_2 \cdot b_2$;
  • $y_1 = b_1 \cdot a_1 + b_2 \cdot b_1$;
  • $y_2 = b_1 \cdot a_2 + b_2 \cdot b_2$.
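To convince ourselves these four formulas are right, here's a quick numerical check in Python (the helper name is our own):

```python
def square_2x2(a1, a2, b1, b2):
    """Square the matrix [[a1, a2], [b1, b2]] entry by entry,
    using the four formulas listed above."""
    x1 = a1 * a1 + a2 * b1
    x2 = a1 * a2 + a2 * b2
    y1 = b1 * a1 + b2 * b1
    y2 = b1 * a2 + b2 * b2
    return [[x1, x2], [y1, y2]]

# For A = [[1, 2], [3, 4]] we should get A^2 = [[7, 10], [15, 22]]:
print(square_2x2(1, 2, 3, 4))  # → [[7, 10], [15, 22]]
```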

But what if we increase the exponent? What comes with greater powers?

Higher matrix powers can be costly.

We could just repeat this multiplication as many times as we need. However, if the matrix power is, say, $50$, it may take us a whole day to get it. Lucky for us, there is a trick to save us from such tedious calculations. Do you want to see it?
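The brute-force route is easy to write down; the pain is only in carrying it out by hand. A minimal sketch (NumPy's `@` product is our choice of tool here; the function is our own):

```python
import numpy as np

def naive_power(A, k):
    """Raise a square matrix to the k-th power (k >= 1) by
    multiplying it by itself k - 1 times -- exactly the tedious
    route described above."""
    A = np.array(A)
    result = A
    for _ in range(k - 1):
        result = result @ A
    return result

# The shear matrix [[1, 1], [0, 1]] raised to the 50th power:
print(naive_power([[1, 1], [0, 1]], 50))  # → [[1, 50], [0, 1]]
```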

Calculating matrix power using eigenvalues and eigenvectors

In general, matrices can be difficult to work with. Their entries can be some large numbers or ugly fractions (who even remembers how to multiply those?). Fortunately, a way to bring their beauty to the surface involves eigenvalues and eigenvectors.

For more information about eigenvalues and eigenvectors, please check our eigenvalue and eigenvector calculator. Here, however, we'll focus on how useful they are.

Say that you have a matrix $A$ of size $n\times n$, and you want to find $A^{30}$. If you choose to do regular matrix multiplication that many times, then we wish you good luck and suggest that you might need to find a hobby.

For those of us who care about our mental health, we'll now show how to finish this task quicker, more efficiently, and in general, in a way that could impress that pretty girl in your class. Note, however, that in order for this to work, we need to know that $A$ is diagonalizable. This means that it must have $n$ eigenvalues (counted with their multiplicities) and $n$ linearly independent eigenvectors.

If $\lambda_1$, $\lambda_2$, $\lambda_3$, …, $\lambda_n$ are the eigenvalues and $\boldsymbol{v_1}$, $\boldsymbol{v_2}$, $\boldsymbol{v_3}$, …, $\boldsymbol{v_n}$ are the corresponding eigenvectors, then:

A = S \cdot D \cdot S^{-1}

where:

D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}

and $S$ is the matrix whose first column consists of the coordinates of $\boldsymbol{v_1}$, the second of $\boldsymbol{v_2}$, and so on. Note also that the last factor on the right, i.e., $S^{-1}$, denotes the matrix inverse of $S$. You can quickly evaluate the inverse of any matrix with our inverse matrix calculator.

Well, fair enough, we have some new matrices that don't reveal much at first glance. However, you'll find they help us a lot. To see this, let's check how to square such a matrix decomposition:

\begin{split} A^2 =&\ A \cdot A = \left(S \cdot D \cdot S^{-1}\right) \cdot \left(S \cdot D \cdot S^{-1}\right) \\ =&\ \left(S \cdot D\right) \cdot \left(S^{-1} \cdot S\right) \cdot \left(D \cdot S^{-1}\right) \\ =&\ \left(S \cdot D\right) \cdot I \cdot \left(D \cdot S^{-1}\right) \\ =&\ \left(S \cdot D\right) \cdot \left(D \cdot S^{-1}\right) \\ =&\ S \cdot D^2 \cdot S^{-1} \end{split}

Let's analyze what happened here step by step.

  1. We know that the matrix power $A^2$ is just a multiplication of two copies of $A$, so that's what we wrote.

  2. Using eigenvalues and eigenvectors, we can decompose:

    $A = S \cdot D \cdot S^{-1}$.

  3. Matrix multiplication is associative. This means that we can arbitrarily change the grouping of the multiplications. Or, in other words, we can put the brackets wherever we like. Note, however, that in general, matrix multiplication is not commutative, so we can't have the factors change places. For instance, $S \cdot D$ and $D \cdot S$ are not the same thing.

  4. We rearrange the brackets so that we have $S^{-1}$ next to $S$. These are inverse elements (just like inverse fractions), so their product gives us the identity element: the matrix $I$, which has $1$'s along the main diagonal and $0$'s elsewhere. Think of it as the matrix equivalent of the number $1$.

  5. The identity element $I$ changes nothing in matrix multiplication (just like $1$ in regular number multiplication), so we can forget about it.

  6. We get rid of the brackets, which leaves two copies of $D$ next to each other. This is nothing else but the square of $D$.

You can see that once we know how to square such a matrix, we can do the same thing with higher powers, i.e., the diagonalization will move the exponent to the diagonal matrix $D$ and leave the $S$ and $S^{-1}$ on the sides. In our case, this means that:

A^{30} = (S \cdot D \cdot S^{-1})^{30} = S \cdot D^{30} \cdot S^{-1}

"But what does it change? I had to get $A^{30}$, and now I need $D^{30}$." Well, the thing is that diagonal matrices (such as $D$) are very simple to raise to some power. In fact, we just need to transfer the exponent to each of the values on the diagonal, i.e.,

D^{30} = \begin{bmatrix} \lambda_1^{30} & 0 & \cdots & 0 \\ 0 & \lambda_2^{30} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^{30} \end{bmatrix}

This way, instead of multiplying thirty matrices, we simply multiply three: $S$, $D^{30}$ (which we get immediately from $D$), and $S^{-1}$. Pretty cool, isn't it? And it saves a lot of time and a lot of work!
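Here's a sketch of the whole shortcut in Python (NumPy is our choice of tool, and the example matrix is made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # a small diagonalizable matrix

# np.linalg.eig gives the eigenvalues and a matrix S whose columns
# are the corresponding eigenvectors, so that A = S * D * S^-1.
eigvals, S = np.linalg.eig(A)

# Raising the diagonal D to the 30th power just raises each
# eigenvalue to the 30th:
D30 = np.diag(eigvals ** 30)

A30 = S @ D30 @ np.linalg.inv(S)

# Same result as multiplying A by itself thirty times:
print(np.allclose(A30, np.linalg.matrix_power(A, 30)))  # → True
```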

🙋 Omni's diagonalize matrix calculator is a dedicated tool that helps you diagonalize any matrix in a second.

Why don't we leave all those mathematical symbols aside and finally see an example? After all, the theory is useful to start with, but it's numbers that will appear on the test.

Example: Using the matrix power calculator

Say that your teacher decides to test how much you've learned about matrix multiplication and divides the class into groups. Your task is to find $A^{13}$ for the matrix:

A = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & -1 \\ 0 & -1 & 1 \end{bmatrix}

Well, the entries aren't too large, so regular matrix multiplication applied several times shouldn't be too bad. However, you can already see that the other members of your group don't feel like helping. Do they even know how to square a matrix? Eh, how is it that it's always you that does all the dirty work? Never mind, today we've learned a trick or two, so let's get to work.

First of all, let's see how the matrix power calculator can give us an answer in a few simple clicks. To begin with, we need to tell it the size of the matrix we have by choosing the right option under "Matrix size" - in our case, $3\times3$. This will show us a symbolic image of such an array with its entries denoted $a_1$, $a_2$, $b_1$, and so on. Before we move on to those, we input the matrix power that we'd like to calculate, which for us is $13$.

The picture shows that the first row of $A$ has entries $a_1$, $a_2$, and $a_3$. We look back at the task at hand and input the relevant numbers into the matrix power calculator:

  • $a_1 = 1$, $a_2 = 0$, and $a_3 = 0$.

Similarly, the two other rows give:

  • $b_1 = 2$, $b_2 = 1$, $b_3 = -1$;
  • $c_1 = 0$, $c_2 = -1$, $c_3 = 1$.

Once we input the last entry, the matrix power calculator will show the answer at the bottom.

However, we won't spoil the answer, and we'll calculate the matrix power ourselves first. So, grab a piece of paper, and let's get to it.

We might just use regular matrix multiplication several times, but where's the fun in that? Now that we know about eigenvalues and eigenvectors, let's use them to speed things up!

Note that the matrix power calculator can give us all the tools we need - we only need to ask it nicely. Just go to the Advanced mode and choose "Yes" under "Show diagonalization?". (Note, however, that this option is not available for $4\times4$ matrices.) For our matrix $A$, it will tell us that its eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 0$, and $\lambda_3 = 1$, while the corresponding eigenvectors are $\boldsymbol{v_1} = (0,-1,1)$, $\boldsymbol{v_2} = (0,1,1)$, and $\boldsymbol{v_3} = (0.5,0,1)$. This means that:

A = S \cdot D \cdot S^{-1}

where:

D = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}

and:

S = \begin{bmatrix} 0 & 0 & 0.5 \\ -1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}

Now, if we recall the previous section, then we can use this decomposition to write:

A^{13} = (S \cdot D \cdot S^{-1})^{13} = S \cdot D^{13} \cdot S^{-1}

What is more, we have:

\begin{split} D^{13} =& \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}^{13} = \begin{bmatrix} 2^{13} & 0 & 0 \\ 0 & 0^{13} & 0 \\ 0 & 0 & 1^{13} \end{bmatrix} \\ =& \begin{bmatrix} 8192 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{split}

Now it's just a matter of multiplying that matrix by $S$ from the left and by $S^{-1}$ from the right. As extra motivation, let us mention that matrix multiplication is a piece of cake when one of the matrices is diagonal (like $D^{13}$) because of all the zeros outside of the diagonal. Let's see this in practice and calculate $S \cdot D^{13}$:

\begin{split} S \cdot D^{13} =& \begin{bmatrix} 0 & 0 & 0.5 \\ -1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 8192 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \\ =& \begin{bmatrix} 0 & 0 & 0.5 \cdot 1 \\ -1 \cdot 8192 & 0 & 0 \\ 1 \cdot 8192 & 0 & 1 \cdot 1 \end{bmatrix} \\ =& \begin{bmatrix} 0 & 0 & 0.5 \\ -8192 & 0 & 0 \\ 8192 & 0 & 1 \end{bmatrix} \end{split}

Lastly, we need to find the inverse of $S$ and multiply the above matrix by it from the right. By the way, the other group members are on their phones, so don't expect any help there. Luckily, if you don't feel like doing that or just want to check your solution, all the tools you may need are easily available on the Omni Calculator website. We always have your back!
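If you'd rather let code handle this last step (or double-check your hand computation), here's a sketch with NumPy using the $S$ and $D$ from the example above:

```python
import numpy as np

# The diagonalization found above: A = S * D * S^-1.
S = np.array([[0, 0, 0.5], [-1, 1, 0], [1, 1, 1]])
D13 = np.diag([2**13, 0**13, 1**13])  # = diag(8192, 0, 1)

A13 = S @ D13 @ np.linalg.inv(S)

# Cross-check against thirteen direct multiplications of A:
A = np.array([[1, 0, 0], [2, 1, -1], [0, -1, 1]])
print(np.allclose(A13, np.linalg.matrix_power(A, 13)))  # → True
```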
