Welcome to our pseudoinverse calculator, where we'll learn all there is to know about the Moore-Penrose pseudoinverse. We'll show you how to calculate the pseudoinverse for any matrix, and we'll cover some of its important properties. We'll even show you how to calculate the pseudoinverse of a 3-by-2 matrix. So, grab a cup of coffee and let's get started!

A short introduction to matrices

Before we can properly explain what the pseudoinverse is, we have to cover some basics — and what can be a more fundamental building block of linear algebra than a matrix?

A matrix is an array of numbers, symbols, or expressions (collectively called its elements). A matrix's elements are arranged in rows and columns. Matrices can be added, subtracted, or multiplied together, but they can also be subjected to other operations:

  • The matrix transpose of a matrix M involves swapping its rows with its columns, and we denote it as M^T.

  • The matrix inverse is the matrix analog of a number's reciprocal: multiplying a matrix by its inverse yields the identity matrix. Inverting a matrix M is denoted as M^{-1}.

What is a matrix pseudoinverse?

Now that we know what a matrix is, we can start talking about its pseudoinverse.

The Moore-Penrose pseudoinverse A^+ of a matrix A is a generalization of its inverse A^{-1}. It's also known as the Moore-Penrose inverse or just the pseudoinverse. When a square matrix's determinant is zero, it cannot be inverted; we'd call the matrix singular, as it has no inverse (and a non-square matrix has no regular inverse at all). This is unfortunate, because the inverse is valuable in solving systems of equations. However, there is good news: if we can find some value that is almost a solution, we can still do some really useful things. Finding this approximate solution is precisely what the matrix pseudoinverse allows us to do.

A system of equations is defined as A\cdot\vec{x} = \vec{b}, where A is a known matrix and \vec{b} is a known vector. We need to solve for the unknown vector \vec{x}. In an ideal linear problem that you'd find in your algebra class, there is only one true solution \vec{x}; we'd call such a problem "well-defined". However, real-world problems restated as A\cdot\vec{x} = \vec{b} are rarely well-defined and instead have either no solution or infinitely many solutions for \vec{x}. How can we solve this problem? You guessed it: with the pseudoinverse of A!

  • If the system has no solution for \vec{x}, we can use A^+ to find the system's best-fitting solution, \vec{x}_\text{approx} = A^+\cdot\vec{b}. While A\cdot\vec{x}_\text{approx} \ne \vec{b} (as no exact solution exists), we've still found the best approximation of the solution: it minimizes the error |A\cdot\vec{x}_\text{approx} - \vec{b}|.

  • If the system has more than one solution, we can also find the best one with \vec{x}_\text{best} = A^+\cdot\vec{b}; among all the solutions, this is the one with the smallest norm |\vec{x}|.
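To make the no-solution case concrete, here's a short NumPy sketch (the matrix and right-hand side are made up for illustration; NumPy's `np.linalg.pinv` computes the Moore-Penrose pseudoinverse):

```python
import numpy as np

# An overdetermined system: three equations, two unknowns, no exact solution.
A = np.array([[1., 3.],
              [2., 4.],
              [3., 3.]])
b = np.array([1., 2., 5.])

# Best-fitting solution via the pseudoinverse: x_approx = A⁺·b
x_approx = np.linalg.pinv(A) @ b

# NumPy's least-squares solver minimizes |A·x - b| directly...
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

# ...and the two answers coincide.
print(np.allclose(x_approx, x_lstsq))
```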

Here's another way of thinking about the Moore-Penrose inverse. As we said before, a singular matrix A has no inverse A^{-1}. A nonsingular matrix B does have an inverse B^{-1}, and from the definition of the inverse matrix, B^{-1}\cdot B = I (with I as the identity matrix) will always be true. The pseudoinverse A^+ is defined so that A\cdot A^+ will be as close to I as possible. We denote this objective as A\cdot A^+ \approx I. This is why we call A^+ the generalized inverse of A: it performs the same function as the normal inverse does for matrices that don't have an inverse. Under this definition, A doesn't have to be square, unlike with the normal inverse. Lastly, it's interesting to note that if A is actually invertible, then A^+ = A^{-1}.
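Both facts are easy to verify numerically. In this illustrative NumPy sketch (the matrices are arbitrary examples of ours), the pseudoinverse of an invertible matrix equals its inverse, while a singular matrix only achieves A·A⁺ ≈ I:

```python
import numpy as np

# For an invertible matrix, the pseudoinverse equals the inverse exactly.
B = np.array([[1., 2.],
              [3., 4.]])
print(np.allclose(np.linalg.pinv(B), np.linalg.inv(B)))

# A singular matrix (row 2 is twice row 1) has no inverse, but its
# pseudoinverse still exists; A·A⁺ is then only approximately the identity.
A = np.array([[1., 2.],
              [2., 4.]])
print(A @ np.linalg.pinv(A))
```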

How to calculate the pseudoinverse?

By now, we've learned what the pseudoinverse is and why exactly it is so valuable. But how do we calculate the pseudoinverse of A?

We can evaluate the pseudoinverse A^+ in many ways.
If you use singular value decomposition to obtain the terms of A = U\cdot S\cdot V^T, then you can pretty easily calculate A's pseudoinverse with A^+ = V\cdot S^+\cdot U^T, where S^+ is obtained by replacing each nonzero singular value on S's diagonal with its reciprocal and transposing the result.
But that's a lot of effort, and so mathematicians have discovered some shortcuts.
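Here's how the SVD route looks in NumPy, as an illustrative sketch (the matrix is our own example; `np.linalg.svd` returns U, the singular values, and Vᵀ):

```python
import numpy as np

A = np.array([[1., 3.],
              [2., 4.],
              [3., 3.]])

# Full SVD: A = U · S · Vᵀ, with the singular values returned in s.
U, s, Vt = np.linalg.svd(A)

# Build S⁺: transpose S's shape and reciprocate the nonzero singular values.
S_plus = np.zeros((A.shape[1], A.shape[0]))
for i, sv in enumerate(s):
    if sv > 1e-12:
        S_plus[i, i] = 1.0 / sv

# A⁺ = V · S⁺ · Uᵀ
A_plus = Vt.T @ S_plus @ U.T
print(np.round(A_plus, 3))
```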

  • If A has linearly independent columns, you can calculate the Moore-Penrose pseudoinverse A^+ with A^+ = (A^T\cdot A)^{-1}\cdot A^T.

  • Similarly, if A has linearly independent rows, A^+ = A^T\cdot (A\cdot A^T)^{-1}.
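Both shortcuts can be checked in a few lines of NumPy. In this sketch (matrices chosen by us for illustration), the wide matrix is simply the transpose of the tall one so that it has linearly independent rows:

```python
import numpy as np

# A tall matrix with linearly independent columns...
A = np.array([[1., 3.],
              [2., 4.],
              [3., 3.]])
A_plus = np.linalg.inv(A.T @ A) @ A.T   # A⁺ = (AᵀA)⁻¹·Aᵀ

# ...and a wide matrix with linearly independent rows.
C = A.T
C_plus = C.T @ np.linalg.inv(C @ C.T)   # C⁺ = Cᵀ·(C·Cᵀ)⁻¹

# Both shortcuts agree with NumPy's built-in pseudoinverse.
print(np.allclose(A_plus, np.linalg.pinv(A)),
      np.allclose(C_plus, np.linalg.pinv(C)))
```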

If neither A's columns nor its rows are linearly independent, it gets a little bit trickier!

  1. Start by calculating A\cdot A^T and row-reduce it to reduced row echelon form.

  2. Take the non-zero rows of the result and make them the columns of a new matrix P.

  3. Similarly, row-reduce A^T\cdot A and use its non-zero rows for the columns of a new matrix Q.

  4. With your newly found P and Q, calculate M = P^T\cdot A\cdot Q.

  5. Finally, calculate the pseudoinverse A^+ = Q\cdot M^{-1}\cdot P^T.
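The five steps translate almost line for line into code. Here's an illustrative NumPy sketch (the `rref` and `pinv_extended` helpers are our own names, not a standard API):

```python
import numpy as np

def rref(M, tol=1e-10):
    """Reduce M to reduced row echelon form via Gauss-Jordan elimination."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue                      # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]     # swap the pivot row into place
        M[r] /= M[r, c]                   # scale the pivot to 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]    # clear the rest of the column
        r += 1
    return M

def pinv_extended(A, tol=1e-10):
    """Pseudoinverse via the five RREF-based steps described above."""
    nonzero = lambda M: M[np.abs(M).max(axis=1) > tol]
    P = nonzero(rref(A @ A.T)).T          # steps 1-2: columns of P
    Q = nonzero(rref(A.T @ A)).T          # step 3: columns of Q
    M = P.T @ A @ Q                       # step 4
    return Q @ np.linalg.inv(M) @ P.T     # step 5

B = np.array([[1., 2.], [2., 4.], [3., 6.]])
print(np.round(pinv_extended(B), 3))
```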

How do I use the pseudoinverse calculator?

Lucky for us, our pseudoinverse calculator is much quicker to use than the formula! With your matrix handy, follow these easy steps.

  1. Select your matrix's dimensions. A matrix with n rows and m columns is commonly referred to as an n×m matrix. So, if your matrix is three numbers tall and two numbers wide, its dimensions are 3×2.
  2. Input your matrix's values row-by-row. Use the symbolic matrix at the top of the calculator as a reference when deciding which value goes where.
  3. Find your visualized results at the bottom of the calculator.

Both your complete matrix and its pseudoinverse are shown, just in case you want to make sure you entered your values correctly.

What is the pseudoinverse used for?

As we've said before, we use the Moore-Penrose pseudoinverse in linear algebra to find approximate solutions to poorly defined systems of equations. But what good is an approximate solution?

Finding the best-fitting solution to a poorly defined system of equations is a crucial element of many real-world technologies. Approximate solutions power the entire concept of data fitting, and so the pseudoinverse can help you predict the weather, forecast business and economic trends, and diagnose medical problems.

The best-fitting solution gets even better when you realize the resulting line of best fit doesn't have to be linear, but can be quadratic or exponential. If you can see a pattern in your data, you can fit any kind of line to it, and the Moore-Penrose inverse can help you find that line.
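As a concrete sketch of data fitting with the pseudoinverse, here's a quadratic fit in NumPy (the data points are invented for illustration):

```python
import numpy as np

# Noisy data that roughly follows y = 1 + x².
x = np.array([0., 1., 2., 3., 4.])
y = np.array([1.1, 2.0, 4.9, 10.2, 17.1])

# Design matrix: one row [1, x, x²] per data point.
A = np.column_stack([np.ones_like(x), x, x**2])

# The pseudoinverse gives the least-squares coefficients [c0, c1, c2]
# of the best-fitting parabola y ≈ c0 + c1·x + c2·x².
c = np.linalg.pinv(A) @ y
print(np.round(c, 3))
```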

Because a non-invertible matrix has no inverse, we can't calculate its condition number, depriving us of valuable information. In cases such as these, the pseudoinverse can stand in for the regular inverse in the condition number formula.

An example of calculating the Moore-Penrose pseudoinverse

The calculations are all good in theory, but what about practice? Is it really as hard as it looks?

Let's look at two examples: one where we use the shortcuts that linear independence grants us, and one where we have to take the longer path.

We start with the matrix A:

\footnotesize A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \\ 3 & 3 \\ \end{bmatrix}

A has linearly independent columns, and so to calculate A^+, we can use our first shortcut, A^+ = (A^T\cdot A)^{-1}\cdot A^T.

Let's start by calculating the transpose of A, denoted as A^T.

\footnotesize A^T = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 4 & 3 \\ \end{bmatrix}

We need to multiply A^T with A…

\footnotesize A^T\cdot A = \begin{bmatrix} 14 & 20 \\ 20 & 34 \\ \end{bmatrix}

…invert it…

\footnotesize (A^T\cdot A)^{-1} = \begin{bmatrix} 0.447 & -0.263 \\ -0.263 & 0.184 \\ \end{bmatrix}

…and multiply the whole thing with A^T again.

\footnotesize \begin{split} A^+ &= (A^T \cdot A)^{-1}\cdot A^T \\ &= \begin{bmatrix} -0.342 & -0.158 & 0.553 \\ 0.289 & 0.211 & -0.237 \\ \end{bmatrix} \end{split}

And there we have it: the pseudoinverse of our 3-by-2 matrix.
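If you'd like to double-check the arithmetic, the same shortcut takes three lines of NumPy:

```python
import numpy as np

A = np.array([[1., 3.],
              [2., 4.],
              [3., 3.]])
A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.round(A_plus, 3))  # [[-0.342 -0.158  0.553]
                            #  [ 0.289  0.211 -0.237]]
```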

That was simple enough, but now let's tackle a linearly dependent 3-by-2 matrix and its pseudoinverse, where we're forced to use the extended method. Consider our next matrix B:

\footnotesize B = \begin{bmatrix} 1 & 2 \\ 2 & 4 \\ 3 & 6 \end{bmatrix}

Neither B's rows nor its columns are linearly independent. You can obtain column #2 by multiplying column #1 by 2, and you can obtain rows #2 and #3 by multiplying row #1 by 2 and 3, respectively. As there is no linear independence, we have to use the extended method. Let's take it step by step!

Step 1: We calculate B\cdot B^T…

\footnotesize B\cdot B^T = \begin{bmatrix} 5 & 10 & 15 \\ 10 & 20 & 30 \\ 15 & 30 & 45 \\ \end{bmatrix}

…and we use row reductions to obtain…

\footnotesize \text{RREF}(B\cdot B^T) = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{bmatrix}

Step 2: We take the non-zero rows in the above result and use them to make the columns of P.

\footnotesize P = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}

Step 3: We do the same thing with B^T\cdot B. We calculate it, row-reduce it, and use the non-zero rows for the columns of Q.

\footnotesize \begin{split} B^T\!\cdot\!B &= \begin{bmatrix} 14 & 28 \\ 28 & 56 \\ \end{bmatrix} \\ \text{RREF}(B^T\!\cdot\!B) &= \begin{bmatrix} 1 & 2 \\ 0 & 0 \\ \end{bmatrix} \\ Q &= \begin{bmatrix} 1 \\ 2 \end{bmatrix} \end{split}

Step 4: Now that we have P and Q, we can calculate M. It might only have one element, but M is still a matrix!

\footnotesize M = P^T\!\cdot\!B\!\cdot\!Q = \begin{bmatrix} 70 \end{bmatrix}

Step 5: The last step is to use M, P, and Q to finally calculate B^+.

\footnotesize \begin{split} M^{-1} &= \begin{bmatrix} \frac{1}{70} \end{bmatrix} = \begin{bmatrix} 0.014 \end{bmatrix} \\ \therefore B^+ &= Q\cdot M^{-1}\cdot P^T \\ &= \begin{bmatrix} 0.014 & 0.029 & 0.043 \\ 0.029 & 0.057 & 0.086 \\ \end{bmatrix} \end{split}

And so, we've found the pseudoinverse of a linearly dependent matrix!
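NumPy's built-in pseudoinverse confirms the hand-computed result:

```python
import numpy as np

B = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
B_plus = np.linalg.pinv(B)
print(np.round(B_plus, 3))   # [[0.014 0.029 0.043]
                             #  [0.029 0.057 0.086]]
```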


What's the difference between a normal inverse and a pseudoinverse?

There isn't really a "difference" between the pseudoinverse and the inverse. The pseudoinverse is just a generalization of the inverse; it tries to do the same job. The pseudoinverse A+ strives to satisfy A·A+ ≈ I, where I is the identity matrix.

  • If the inverse doesn't exist, the pseudoinverse is the closest we can get to the inverse.
  • If the inverse exists, the pseudoinverse is exactly equal to the inverse.

Is the pseudoinverse square?

The pseudoinverse A+ will have the transposed shape of its original matrix. An n×m matrix has an m×n pseudoinverse. In other words, if A has, e.g., 2 rows and 3 columns, then A+ will have 3 rows and 2 columns. A+ will therefore only be square if A is square.

What is the pseudoinverse of a zero matrix?

A zero matrix Z is a matrix that contains only zeros. It has no inverse, as its determinant is always 0. A zero matrix's pseudoinverse is its transpose, Z+ = ZT, which is itself a zero matrix.
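A quick NumPy check (any zero matrix works; we use a 2×3 one here):

```python
import numpy as np

Z = np.zeros((2, 3))
Z_plus = np.linalg.pinv(Z)
print(Z_plus)   # a 3×2 matrix of zeros, i.e. the shape of Zᵀ
```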

What is the pseudoinverse of a diagonal matrix?

A diagonal matrix D is a matrix whose only nonzero elements sit on its diagonal; all other elements are zero. Because of D's unique structure, computing D+ is very easy: replace each nonzero element on the diagonal with its reciprocal, and leave any zero diagonal elements as they are (if D isn't square, also transpose the result).
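For example, in NumPy (the diagonal values are arbitrary; note how the zero entry stays zero rather than becoming 1/0):

```python
import numpy as np

# Diagonal matrix with one zero on the diagonal.
D = np.diag([2., 0., 5.])
D_plus = np.linalg.pinv(D)
print(np.diag(D_plus))   # the reciprocals 0.5 and 0.2; the zero stays 0
```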

Rijk de Wet
Copyright by Omni Calculator sp. z o.o.