Pseudoinverse Calculator
Welcome to our pseudoinverse calculator, where we'll learn all there is to know about the Moore-Penrose pseudoinverse. We'll show you how to calculate the pseudoinverse for any matrix, and we'll cover some of its important properties. We'll even show you how to calculate the pseudoinverse of a 3-by-2 matrix. So, grab a cup of coffee and let's get started!
A short introduction to matrices
Before we can properly explain what the pseudoinverse is, we have to cover some basics — and what can be a more fundamental building block of linear algebra than a matrix?
A matrix is an array of numbers, symbols, or expressions (collectively called its elements). A matrix's elements are arranged in rows and columns. Matrices can be added, subtracted, or multiplied together, but they can also be subjected to other operations:

The matrix transpose of a matrix $M$ involves swapping its rows with its columns, and we denote it as $M^T$.

The matrix inverse is best understood as the matrix counterpart of a number's reciprocal. We denote the inverse of a matrix $M$ as $M^{-1}$.
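As a quick illustration of these two operations (a NumPy sketch, not part of the calculator itself):

```python
import numpy as np

M = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Transpose: rows become columns
M_T = M.T

# Inverse: multiplying M by its inverse gives the identity matrix
# (M is invertible here, since its determinant is 10, not 0)
M_inv = np.linalg.inv(M)
print(np.allclose(M_inv @ M, np.eye(2)))  # True
```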
What is a matrix pseudoinverse?
Now that we know what a matrix is, we can start talking about its pseudoinverse.
The Moore-Penrose pseudoinverse $A^+$ of a matrix $A$ is a generalization of its inverse $A^{-1}$. It's also known as the Moore-Penrose inverse or just the pseudoinverse. When a matrix's determinant is zero, it cannot be inverted. We'd call the matrix singular — it has no inverse. This is unfortunate, because the inverse is valuable in solving a system of equations. However, there is good news — if we can find some value that is almost a solution, we can still do some really useful things. Finding this approximate solution is precisely what the matrix pseudoinverse allows us to do.
A system of equations is defined as $A\cdot\vec{x} = \vec{b}$, where $A$ is a known matrix, and $\vec{b}$ is a known vector. We need to solve for the unknown vector $\vec{x}$. In an ideal linear problem that you'd find in your algebra class, there is only one true solution, $\vec{x}$. We'd call such a problem "well-defined". However, real-world problems that are restated with $A\cdot\vec{x} = \vec{b}$ are rarely well-defined and instead have either no solutions or multiple solutions for $\vec{x}$. How can we solve this problem? You guessed it — with the pseudoinverse of $A$!

If the system has no solutions for $\vec{x}$, we can use $A^+$ to find the system's best-fitting solution, $\vec{x}_\text{approx}=A^+\cdot\vec{b}$. While $A\cdot \vec{x}_\text{approx} \ne \vec{b}$ as no solution exists, we've still found the best approximation of the solution — we can also say it minimizes the error $\|A\cdot\vec{x}_\text{approx} - \vec{b}\|$.

If the system has more than one solution, we can also find the best one with $\vec{x}_\text{best} = A^+\cdot\vec{b}$.
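Both cases above boil down to the same recipe: multiply $\vec{b}$ by $A^+$. As a sanity check, here's a small NumPy sketch for an overdetermined system with no exact solution, comparing the pseudoinverse result against NumPy's dedicated least-squares solver:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns -> no exact solution in general
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Best-fitting (least-squares) solution via the pseudoinverse
x_approx = np.linalg.pinv(A) @ b

# np.linalg.lstsq minimizes the same error, so the answers should agree
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_approx, x_lstsq))  # True
```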
Here's another way of thinking about the Moore-Penrose inverse. As we said before, $A$ is singular and has no inverse $A^{-1}$. The nonsingular matrix $B$ does have an inverse $B^{-1}$, and from the definition of the inverse matrix, $B^{-1}\cdot B = I$ (with $I$ as the identity matrix) will always be true. The pseudoinverse $A^+$ is defined so that $A\cdot A^+$ will be as close to $I$ as possible. We denote this objective as $A\cdot A^+ \approx I$. This is why we call $A^+$ the generalized inverse of $A$: it tries to perform the same function as the normal inverse does for matrices that don't have an inverse. Under this condition, the pseudoinverse doesn't have to be square, unlike the normal inverse. Lastly, it's interesting to note that if $A$ is actually invertible, then $A^+ = A^{-1}$.
How to calculate the pseudoinverse?
By now, we've learned what the pseudoinverse is and why exactly it is so valuable. But how do we calculate the pseudoinverse of $A$?
We can evaluate the pseudoinverse $A^+$ in many ways.
If you use singular value decomposition to obtain the terms of $A = U\cdot S\cdot V^T$, then you can pretty easily calculate $A$'s pseudoinverse with $A^+ = V\cdot S^+\cdot U^T$.
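Here, $S^+$ is obtained by replacing each nonzero singular value in $S$ with its reciprocal. A minimal NumPy sketch of this SVD route, checked against NumPy's built-in `pinv`:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Thin SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# S^+: reciprocate the nonzero singular values, leave (near-)zeros at zero
s_plus = np.array([1.0 / v if v > 1e-12 else 0.0 for v in s])

# A^+ = V @ S^+ @ U^T
A_plus = Vt.T @ np.diag(s_plus) @ U.T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
```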
But that's a lot of effort, and so mathematicians have discovered some shortcuts.

If $A$ has linearly independent columns, you can calculate the Moore-Penrose pseudoinverse $A^+$ with $A^+ = (A^T\cdot A)^{-1}\cdot A^T$.

Similarly, if $A$ has linearly independent rows, $A^+ = A^T\cdot (A\cdot A^T)^{-1}$.
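The first shortcut is easy to verify numerically. A NumPy sketch for a matrix with linearly independent columns (note that $A^+\cdot A = I$ exactly in this case):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])  # columns are linearly independent

# Shortcut for independent columns: A^+ = (A^T A)^{-1} A^T
A_plus = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
print(np.allclose(A_plus @ A, np.eye(2)))      # True: A^+ A = I
```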
If neither $A$'s columns nor its rows are linearly independent, it gets a little bit trickier!

Start by calculating $A\cdot A^T$ and row-reduce it to reduced row echelon form.

Take the nonzero rows of the result and make them the columns of a new matrix $P$.

Similarly, row-reduce $A^T\cdot A$ and use its nonzero rows for the columns of the new matrix $Q$.

With your newly found $P$ and $Q$, calculate $M = P^T\cdot A\cdot Q$.

Finally, calculate the pseudoinverse $A^+ = Q\cdot M^{-1}\cdot P^T$.
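The steps above can be sketched in a few lines of SymPy (assuming SymPy is available; its `rref` method does the row reduction for us, and `pinv_rref` is just an illustrative name):

```python
import sympy as sp

def pinv_rref(A):
    """Pseudoinverse via the row-reduction method described above:
    P's columns are the nonzero rows of rref(A*A^T), Q's columns are the
    nonzero rows of rref(A^T*A), and A^+ = Q * (P^T*A*Q)^{-1} * P^T."""
    R, _ = (A * A.T).rref()
    P = sp.Matrix([row for row in R.tolist() if any(row)]).T
    R, _ = (A.T * A).rref()
    Q = sp.Matrix([row for row in R.tolist() if any(row)]).T
    M = P.T * A * Q
    return Q * M.inv() * P.T

# Rank-deficient example: neither the rows nor the columns are independent
B = sp.Matrix([[1, 2],
               [2, 4],
               [3, 6]])
print(pinv_rref(B) == B.pinv())  # True: matches SymPy's built-in pseudoinverse
```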
How do I use the pseudoinverse calculator?
Lucky for us, our pseudoinverse calculator is much quicker to use than the formula! With your matrix handy, follow these easy steps.
 Select your matrix's dimensions. A matrix with $n$ rows and $m$ columns is commonly referred to as an $n\times m$ matrix. So, if your matrix is three numbers tall and two numbers wide, its dimensions are $3\times 2$.
 Input your matrix's values row-by-row. Use the symbolic matrix at the top of the calculator as a reference when deciding which value goes where.
 Find your visualized results at the bottom of the calculator.
Both your complete matrix and its pseudoinverse are shown, just in case you want to make sure you entered your values correctly.
What is the pseudoinverse used for?
As we've said before, we use the Moore-Penrose pseudoinverse in linear algebra to find approximate solutions to poorly defined systems of equations. But what good is an approximate solution?
Finding the bestfitting solution to a poorly defined system of equations is a crucial element of many realworld technologies. Approximate solutions power the entire concept of data fitting, and so the pseudoinverse can help you predict the weather, predict business and economic trends, and diagnose medical problems.
The best-fitting solution gets even better when you realize the resulting line of best fit doesn't have to be linear, but can be quadratic or exponential. If you can see a pattern in your data, you can fit any kind of line to it, and the Moore-Penrose inverse can help you find that line.
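For instance, a quadratic fit is still a linear least-squares problem in the coefficients, so the pseudoinverse handles it directly. A NumPy sketch with made-up illustrative data, cross-checked against `np.polyfit`:

```python
import numpy as np

# Fit y = a + b*x + c*x^2 to some sample data points
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 7.2, 12.8, 21.1])

# Design matrix: each row is [1, x, x^2] -- the model is linear in a, b, c
A = np.column_stack([np.ones_like(x), x, x**2])

# Best-fitting coefficients [a, b, c] via the pseudoinverse
coeffs = np.linalg.pinv(A) @ y

# np.polyfit solves the same problem (it returns highest power first)
print(np.allclose(coeffs, np.polyfit(x, y, 2)[::-1]))  # True
```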
Because a non-invertible matrix has no inverse, we can't calculate its condition number, depriving us of valuable information. In cases such as these, the pseudoinverse can stand in for the regular inverse in the condition number formula.
An example of calculating the Moore-Penrose pseudoinverse
The calculations are all good in theory, but what about practice? Is it really as hard as it looks?
Let's look at two examples: one where we use the shortcuts that linear independence grants us, and one where we have to take the longer path.
We start with the matrix $A$:
$A$ has linearly independent columns, and so to calculate $A^+$, we can use our first shortcut $A^+ = (A^T\cdot A)^{-1}\cdot A^T$.
Let's start by calculating the transpose of $A$, denoted as $A^T$.
We need to multiply $A^T$ with $A$…
…invert it…
…and multiply the whole thing with $A^T$ again.
And there we have it: the pseudoinverse of our 3-by-2 matrix.
That was simple enough, but now let's tackle a linearly dependent 3-by-2 matrix and its pseudoinverse, where we're forced to use the extended method. Consider our next matrix $B$:
Neither $B$'s rows nor its columns are linearly independent. You can obtain column #2 by multiplying column #1 by $2$, and you can obtain rows #2 and #3 by multiplying row #1 by $2$ and $3$, respectively. As there is no linear independence, we have to use the extended method. Let's take it step by step!
Step 1: We calculate $B\cdot B^T$...
…and we use row reductions to obtain…
Step 2: We take the nonzero rows in the above result and use them to make the columns of $P$.
Step 3: We do the same thing with $B^T\cdot B$. We calculate it, row-reduce it, and use the nonzero rows for the columns of $Q$.
Step 4: Now that we have $P$ and $Q$, we can calculate $M$. It might only have one element, but $M$ is still a matrix!
Step 5: The last step is to use $M$, $P$, and $Q$ to finally calculate $B^+$.
And so, we've found the pseudoinverse of a linearly dependent matrix!
FAQ
What's the difference between a normal inverse and a pseudoinverse?
There isn't a "difference" between the pseudoinverse and the inverse. The pseudoinverse is just a generalization of the inverse — it tries to do the same job. The pseudoinverse A^{+} strives to satisfy A·A^{+} ≈ I, where I is the identity matrix.
 If the inverse doesn't exist, the pseudoinverse is the closest we can get to the inverse.
 If the inverse exists, the pseudoinverse is exactly equal to the inverse.
Is the pseudoinverse square?
The pseudoinverse A^{+} will have the transposed shape of its original matrix. An n×m matrix has an m×n pseudoinverse. In other words, if A has, e.g., 2 rows and 3 columns, then A^{+} will have 3 rows and 2 columns. A^{+} will therefore only be square if A is square.
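You can confirm the shape rule in one line of NumPy:

```python
import numpy as np

A = np.ones((2, 3))             # 2 rows, 3 columns
print(np.linalg.pinv(A).shape)  # (3, 2) -- the transposed shape
```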
What is the pseudoinverse of a zero matrix?
A zero matrix Z is a matrix that contains only zeros. It has no inverse, as its determinant is always 0. A zero matrix's pseudoinverse is generally the zero matrix's transpose, i.e., Z^{+} = Z^{T}.
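A quick NumPy check of this fact:

```python
import numpy as np

Z = np.zeros((2, 3))
# The pseudoinverse of a zero matrix is its transpose: the 3x2 zero matrix
print(np.allclose(np.linalg.pinv(Z), Z.T))  # True
```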
What is the pseudoinverse of a diagonal matrix?
A diagonal matrix D is a matrix whose only nonzero elements are on its diagonal; all other elements are zero. Because of D's unique structure, computing D^{+} is very easy: simply replace each nonzero element on the diagonal with its reciprocal, and leave any zeros on the diagonal in place.
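Here's a NumPy sketch of that recipe, including a zero on the diagonal to show that it stays zero:

```python
import numpy as np

D = np.diag([2.0, 4.0, 0.0])  # diagonal matrix with one zero entry

# Reciprocate the nonzero diagonal entries; zeros stay zero
d_plus = np.array([1.0 / v if v != 0 else 0.0 for v in np.diag(D)])
D_plus = np.diag(d_plus)

print(np.allclose(D_plus, np.linalg.pinv(D)))  # True
```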