Inverse Matrix Calculator
Welcome to the inverse matrix calculator, where you'll have the chance to learn all about inverting matrices. This operation is similar to searching for the reciprocal of a given number, except now we're multiplying matrices and want to obtain the identity matrix as a result.
But don't worry. Before we give, say, the inverse of a $4\times4$ matrix, we'll look at some basic definitions, including those of a singular and a nonsingular matrix. Then we'll move on to the general inverse matrix formula with a neat simplification for the inverse of a $2\times2$ matrix and some useful matrix inverse properties. Last but not least, we give an example with thorough calculations of how to find the inverse of a $3\times3$ matrix.
What is a matrix?
In primary school, they teach you the natural numbers, $1$, $2$, or $143$, and they make perfect sense – you have $1$ toy car, $2$ comic books, and terribly long $143$ days until Christmas. Then they tell you that there are also fractions (or rational numbers, as they call them), such as $1/2$, or decimals, like $1.25$, which still seems reasonable. After all, you gave $1/2$ of your chocolate bar to your brother, and it cost $\text{\textdollar}1.25$. Next, you meet the negative numbers like $-2$ or $-30$, and they're a bit harder to grasp. But, once you think about it, one guy from your class got $-2$ points on a test for cheating, and there was a $\text{\textdollar}30$ discount on jeans on Black Friday.
Lastly, the school introduces real numbers and some weird wormlike symbols that they keep calling square roots. What's even worse, while $\sqrt{4}$ is a simple $2$, $\sqrt{3}$ is something like $1.73205...$ and the digits go on forever. They convince you that such numbers describe, for example, the diagonal of a rectangle. And then there's $\pi$, which somehow appeared out of nowhere when you talked about circles. Fair enough, maybe those numbers are real in some sense. But that's just about as far as it can go, right?
Wrong. Mathematicians are busy figuring out various interesting and, believe it or not, useful extensions of real numbers. The most important one is complex numbers, which are the starting point for any modern physicist. Fortunately, that's not the direction we're taking here. There is another.
A matrix is an array of elements (usually numbers) that has a set number of rows and columns. An example of a matrix would be:
Moreover, we say that a matrix has cells, or boxes, in which we write the elements of our array. For example, matrix $A$ above has the value $2$ in the cell that is in the second row and the second column. The starting point here is one-cell matrices, which are basically the same thing as real numbers.
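To make this concrete, here's a minimal sketch in Python of a matrix as a list of rows. The values are hypothetical stand-ins, since the example matrix itself isn't reproduced above; all we know from the text is that it has a $2$ in the second row and second column.

```python
# A matrix represented as a list of rows in plain Python.
# Hypothetical values -- the text only tells us that A has a 2
# in row 2, column 2.
A = [
    [1, 0, 3],
    [4, 2, 7],
]

rows = len(A)
cols = len(A[0])
print(rows, cols)   # 2 3 -- two rows, three columns

# The cell in the second row and second column (1-based in the text,
# 0-based in Python):
print(A[1][1])      # 2
```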
As you can see, matrices are a tool used to write a few numbers concisely and operate with the whole lot as a single object. As such, they are extremely useful when dealing with:
 Systems of equations, especially when using Cramer's rule or as we've seen in our condition numbers calculator;
 Vectors and vector spaces;
 3-dimensional geometry (e.g., the dot product and the cross product);
 Eigenvalues and eigenvectors; and
 Graph theory and discrete mathematics.
Calculations with matrices are a great deal trickier than with numbers. For instance, if we want to add them, we first have to make sure that we can. But, since we're here on the inverse matrix calculator, we leave addition for later. First, however, let's familiarize ourselves with a few definitions.
Singular and nonsingular matrix, the identity matrix
Whether you want to find the inverse of a $2\times2$ matrix or the inverse of a $4\times4$ matrix, you have to understand one thing first: it doesn't always exist. Think of a fraction, say $a / b$. Such a thing is perfectly fine as long as $b$ is nonzero. If $b$ is zero, the expression doesn't make sense, and a similar thing happens for matrices.
A singular matrix is one that doesn't have an inverse. A nonsingular matrix is (surprise, surprise) one that does. Therefore, whenever you face an exercise with an inverse matrix, you should begin by checking if it's nonsingular. Otherwise, there's no point sweating over calculations. It just cannot be done.
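The check for singularity boils down to computing a determinant, which we'll meet formally below. As a small sketch in Python (with made-up $2\times2$ examples), a matrix whose rows are proportional has determinant zero and is singular:

```python
# For a 2x2 matrix [[a, b], [c, d]], the determinant is a*d - b*c.
# If it's 0, the matrix is singular: no inverse exists.
def det_2x2(a, b, c, d):
    return a * d - b * c

# Hypothetical example: the second row is twice the first,
# so the determinant vanishes and the matrix is singular.
print(det_2x2(1, 2, 2, 4))   # 0

# This one is nonsingular -- its determinant is nonzero.
print(det_2x2(2, 1, 1, 1))   # 1
```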
You can still get pretty close to a singular matrix's inverse by instead calculating its Moore-Penrose pseudoinverse. If you don't know what the pseudoinverse is, wait no more and jump to the pseudoinverse calculator!
By definition, the inverse of a matrix $A$ is a matrix $A^{-1}$ for which:

$$A\cdot A^{-1} = A^{-1}\cdot A = \mathbb{I}$$

Where $\mathbb{I}$ denotes the identity matrix, i.e., a square matrix that has $1$s on the main diagonal and $0$s elsewhere. For example, the $3\times3$ identity matrix is:

$$\mathbb{I} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
In other words, when given an arbitrary matrix $A$, we want to find another one for which the product of the two (in whatever order) gives the identity matrix. Think of $\mathbb{I}$ as $1$ (the identity element) in the world of matrices. After all, for a fraction $a / b$, its inverse is $b / a$ but not just because we "flip it" (at least, not by definition). It's because of a similar multiplication property:

$$\frac{a}{b}\cdot\frac{b}{a} = 1$$
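The defining property is easy to check by hand, or with a short Python sketch. The matrix below is a hypothetical example (not from the text) chosen so that its inverse has whole-number entries:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

# Hypothetical example: this 2x2 matrix happens to have an integer inverse.
A     = [[2, 1],
         [1, 1]]
A_inv = [[1, -1],
         [-1, 2]]

I = [[1, 0],
     [0, 1]]

print(matmul(A, A_inv) == I)   # True
print(matmul(A_inv, A) == I)   # True -- the order doesn't matter here
```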
That was enough time spent reading through definitions, don't you think? Let's finally see the inverse matrix formula and learn how to find the inverse of a $2\times2$, $3\times3$, and $4\times4$ matrix.
How to find the inverse of a matrix: inverse matrix formula
Before we go into special cases, like the inverse of a $2\times2$ matrix, let's take a look at the general definition.
Let $A$ be a square nonsingular matrix of size $n$. Then the inverse $A^{-1}$ (if it exists) is given by the formula:

$$A^{-1} = \frac{1}{|A|}\begin{pmatrix} A_{11} & -A_{12} & \ldots & \pm A_{1n} \\ -A_{21} & A_{22} & \ldots & \mp A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \pm A_{n1} & \mp A_{n2} & \ldots & A_{nn} \end{pmatrix}^{\mathrm{T}}$$
The $|A|$ is the determinant of $A$ (not to be confused with the absolute value of a number). The $A_{ij}$ denotes the $(i,j)$-minor of $A$, i.e., the determinant of the matrix obtained from $A$ by forgetting about its $i^{\mathrm{th}}$ row and $j^{\mathrm{th}}$ column (it is a square matrix of size $n-1$). What we have obtained is called the cofactor matrix of $A$. Lastly, the $^{\mathrm{T}}$ outside the array is the transposition. It means that once we know the cells inside, we have to "flip them" so that the $i^{\mathrm{th}}$ row will become the $i^{\mathrm{th}}$ column and vice versa, as we taught you at the matrix transpose calculator. This leads to the adjoint matrix of $A$. All these steps are detailed at Omni's adjoint matrix calculator, in case you need a more formal explanation.
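The formula translates directly into code. Here's a sketch in plain Python (the function names are our own, and exact rational arithmetic from the standard library avoids rounding) that computes the minors, the signed cofactors, and finally the transposed-and-divided result:

```python
from fractions import Fraction

def minor(A, i, j):
    """Matrix A with row i and column j removed."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def inverse(A):
    """Inverse via the cofactor formula: transposed cofactors over det(A)."""
    d = det(A)
    if d == 0:
        raise ValueError("singular matrix -- no inverse exists")
    n = len(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
           for i in range(n)]
    # Transpose the cofactor matrix (giving the adjoint) and divide by det(A).
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

print(inverse([[1, 2], [3, 4]]))
```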
Phew, that was a lot of symbols and a lot of technical mumbo-jumbo, but that's just the way mathematicians like it. Some of us wind down by watching rom-coms, and others write down definitions that sound smart. Who are we to judge them?
In the next section, we point out a few important facts to take into account when looking for the inverse of a $4\times4$ matrix, or whatever size it is. But before we see them, let's take some time to look at what the above matrix inverse formula becomes when it's the inverse of a $2\times2$ matrix that we're looking for.
Let:

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

Then the minors (the $A_{ij}$s above) come from crossing out one of the rows and one of the columns. But if we do that, we'll be left with a single cell! And the determinant of such a thing (a $1\times1$ matrix) is just the number in that cell. For example, $A_{12}$ comes from forgetting the first row and the second column, which means that only $c$ remains (or rather $\begin{pmatrix}c\end{pmatrix}$ since it's a matrix). Therefore,

$$A_{11} = d,\quad A_{12} = c,\quad A_{21} = b,\quad A_{22} = a$$
Also, in this special case, the determinant is simple enough: $|A| = a\times d - b\times c$. So after taking the minuses and the transposition, we arrive at a nice and pretty formula for the inverse of a $2\times2$ matrix:

$$A^{-1} = \frac{1}{a\times d - b\times c}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$$
Arguably, the inverse of a $4\times4$ matrix is not as easy to calculate as the $2\times2$ case. There is an alternative way of calculating the inverse of a matrix; the method involves elementary row operations and the so-called Gaussian elimination (for more information, be sure to check out the (reduced) row echelon form calculator). As an example, we describe below how to find the inverse of a $3\times3$ matrix using the alternative algorithm.
Say that you want to calculate the inverse of a matrix:

$$A = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}$$

We then construct a matrix with three rows and twice as many columns like the one below:

$$\left(\begin{array}{ccc|ccc} a_1 & a_2 & a_3 & 1 & 0 & 0 \\ b_1 & b_2 & b_3 & 0 & 1 & 0 \\ c_1 & c_2 & c_3 & 0 & 0 & 1 \end{array}\right)$$

and use Gaussian elimination on the 6-element rows of the matrix to transform it into something of the form:

$$\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & x_1 & x_2 & x_3 \\ 0 & 1 & 0 & y_1 & y_2 & y_3 \\ 0 & 0 & 1 & z_1 & z_2 & z_3 \end{array}\right)$$

where the $x$'s, $y$'s, and $z$'s are obtained along the way from the transformations. Then:

$$A^{-1} = \begin{pmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ z_1 & z_2 & z_3 \end{pmatrix}$$
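The elimination procedure can be sketched in Python as well. This is a minimal implementation of the $(A\,|\,\mathbb{I})$ recipe with exact fractions, assuming a well-formed square input:

```python
from fractions import Fraction

def inverse_gauss(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix (A | I)."""
    n = len(A)
    # Build the n x 2n augmented matrix, using exact rational arithmetic.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("singular matrix -- no inverse exists")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row, then clear the column in every other row.
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    # The left half is now the identity; the right half is A^(-1).
    return [row[n:] for row in M]

# Hypothetical example with an integer inverse:
print(inverse_gauss([[2, 1], [1, 1]]))
```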
Whichever method you prefer, it might be useful to check out a few matrix inverse properties to make our studies a little easier.
Matrix inverse properties
Below we list a few observations and matrix inverse properties.

The inverse of a matrix doesn't always exist. Let's take a closer look at the inverse matrix formula in the section above. It contains the determinant of the matrix. This means that, first of all, we need to have a square matrix even to start thinking about its inverse. Secondly, the determinant appears in the denominator of a fraction in the inverse matrix formula. Therefore, if that determinant is equal to $0$, then that expression doesn't make any sense, and the inverse doesn't exist.

The inverse of an inverse is the initial matrix. In other words, if you invert a matrix twice, you'll obtain what you started with. Symbolically, we can write this property as $(A^{-1})^{-1} = A$ for an arbitrary nonsingular matrix $A$.

The inverse of a product is the product of the inverses in the reverse order. This means that if you have two square matrices $A$ and $B$ of the same size and want to calculate the inverse of their product, then, alternatively, you can find their individual inverses and multiply them but in the reverse order. In short, $(A\cdot B)^{-1} = B^{-1}\cdot A^{-1}$.

The inverse of the transpose is the transpose of the inverse. In essence, it doesn't matter if you first transpose a matrix and then calculate its inverse or first find the inverse and only transpose it then. In symbolic notation, this translates to $(A^{\mathrm{T}})^{-1} = (A^{-1})^{\mathrm{T}}$. In particular, observe that this relies on the fact that the determinant of a matrix stays the same after transposition.
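The three algebraic properties above can be spot-checked with a small Python sketch on exact $2\times2$ arithmetic. The matrices $A$ and $B$ below are hypothetical examples, not taken from the text:

```python
from fractions import Fraction

def matmul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    """Flip rows and columns of a 2x2 matrix."""
    return [[X[j][i] for j in range(2)] for i in range(2)]

def inv(X):
    """2x2 inverse by the shortcut formula."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

A = [[2, 1], [0, 1]]
B = [[1, 2], [0, 1]]

print(inv(inv(A)) == A)                              # (A^-1)^-1 = A
print(inv(matmul(A, B)) == matmul(inv(B), inv(A)))   # (AB)^-1 = B^-1 A^-1
print(inv(transpose(A)) == transpose(inv(A)))        # (A^T)^-1 = (A^-1)^T
```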
We hope that you're sufficiently intrigued by the theory and can't wait to tell your friends about it over a cup of coffee. However, before you go spreading knowledge, let's go together through an example and see how to find the inverse of a $3\times3$ matrix in practice.
Example: using the inverse matrix calculator
We'll now study step by step how to find the inverse of a $3\times3$ matrix. Say that you're given an array:
Before we move on to the calculations, let's see how we can use the inverse matrix calculator to do it all for us.
First of all, we're dealing with a $3\times3$ matrix, so we have to tell the calculator that by choosing the proper option under "Matrix size." This will show us a symbolic example of such an array with cells denoted $a_1$, $a_2$, and so on. We have to input the numbers given by our matrix under the correct symbols from the picture. For example, $a_3$ is in the first row in the third column, so we find the corresponding cell in our matrix and check that it has $5$ in there. Therefore, we put $a_3 = 5$ into the inverse matrix calculator. Similarly, we get the other cells:
We define the other cells:
Then:
And:
The moment we input the last number, the inverse matrix calculator will spit out the answer or tell us that the inverse doesn't exist. But, if you don't want any spoilers, we can also do the calculations by hand.
A priori, we don't even know if $A^{-1}$ exists, maybe it's just a fairytale like vampires? To make sure, let's calculate its determinant:
Phew, no vampires today, just a nonsingular matrix and good ol' mathematics.
Recall the matrix inverse formula and observe that it's now time to calculate the $A_{ij}$s for $i$ and $j$ between $1$ and $3$. As an example, let's take, say, $A_{11}$, and $A_{23}$. The first of the two is the determinant of what we get by forgetting the first row and the first column of $A$. This means that:
Similarly, $A_{23}$ comes from crossing out the second row and the third column:
This gives:
And:
The complete first row is:
For the second row, we find:
And the third row is:
It only remains to use the inverse matrix formula and plug in all the numbers we've calculated above:
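Since the example's numbers aren't reproduced above, here's the same recipe carried out in Python on a hypothetical $3\times3$ matrix, so you can check your own hand calculations against it: compute the determinant, then each signed minor, then transpose and divide.

```python
from fractions import Fraction

# Hypothetical 3x3 matrix standing in for the article's example.
A = [[1, 2, 0],
     [0, 1, 1],
     [2, 0, 1]]

def minor_det(A, i, j):
    """Determinant of the 2x2 matrix left after deleting row i and column j."""
    rows = [r for k, r in enumerate(A) if k != i]
    (a, b), (c, d) = [[x for m, x in enumerate(r) if m != j] for r in rows]
    return a * d - b * c

# Determinant by cofactor expansion along the first row.
det_A = sum((-1) ** j * A[0][j] * minor_det(A, 0, j) for j in range(3))
print(det_A)   # 5 -- nonzero, so A is nonsingular

# Signed minors (cofactors), transposed into the adjoint, divided by det(A).
A_inv = [[Fraction((-1) ** (i + j) * minor_det(A, j, i), det_A)
          for j in range(3)] for i in range(3)]
print(A_inv)
```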