Condition Number Calculator
Welcome to the condition number calculator. Need to determine whether your linear algebra problem is well-conditioned or unstable? Will incorrect measurements or poor rounding be the downfall of your matrix equation? Here, we'll show you what a matrix condition number is and how to find the condition number of any matrix, so that you can protect yourself against any errors that may creep in.
What is the condition number of a matrix?
Before we can make sense of any result our condition number calculator produces, let's first define the matrix condition number and what it represents. We usually denote the condition number of a matrix $A$ as $\text{cond}(A)$ or $\kappa(A)$. We can define it mathematically as follows:

$$\text{cond}(A) = \Vert A\Vert \cdot \Vert A^{-1}\Vert$$
In this equation, $\Vert\cdot\Vert$ is any matrix norm. When we want to specify which norm we used, we can use the relevant subscript in the condition number symbol, such as $\text{cond}_2(A)$ for the matrix 2-norm $\Vert\cdot\Vert_2$. In addition, $A^{-1}$ is the matrix inverse of $A$.
The matrix $A$ is non-invertible if its matrix determinant is zero (i.e., $\det(A) = 0$). In this case, it has an infinite condition number. To still gain some insight into the matrix's conditioning, some mathematicians redefine the condition number with the pseudoinverse $A^+$ as $\text{cond}(A) = \Vert A\Vert\cdot \Vert A^+\Vert$. This alternate definition still delivers huge, near-infinite condition numbers for nearly non-invertible matrices, thereby still honoring our initial definition.
We can interpret the condition number in multiple ways. Firstly, $\text{cond}(A)$ measures the ratio of maximum stretching to maximum shrinking of a unit vector $\vec{x}$ (i.e., $\Vert\vec{x}\Vert = 1$) when $A$ is multiplied with it. Therefore, an equivalent definition of the condition number is:

$$\text{cond}(A) = \frac{\max_{\Vert\vec{x}\Vert = 1}\Vert A\cdot\vec{x}\Vert}{\min_{\Vert\vec{x}\Vert = 1}\Vert A\cdot\vec{x}\Vert}$$
In pure mathematics, a matrix is either invertible or not. But, as the second way of interpreting the condition number, $\text{cond}(A)$ is a measure of how invertible $A$ is. As $\text{cond}(A)$ increases, $A$ gets closer to being noninvertible.
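To make these interpretations concrete, here is a short NumPy sketch that computes a condition number both ways. The matrix is a hypothetical example chosen just for illustration; any invertible square matrix works.

```python
import numpy as np

# Hypothetical example matrix (any invertible square matrix works here).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# np.linalg.cond uses the matrix 2-norm by default; pass p=1, np.inf,
# or 'fro' to use a different norm.
kappa = np.linalg.cond(A)

# Equivalent definition: cond(A) = ||A|| * ||A^{-1}|| (2-norm here).
kappa_manual = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

print(kappa, kappa_manual)  # both definitions agree
```

Note that `np.linalg.cond` computes the 2-norm version from the singular values, so it agrees with the norm-product definition without explicitly inverting the matrix.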
The third and most important use of condition numbers is in linear algebra. Let's take a look at why below!
The matrix condition number in linear algebra
When we have a system of linear equations $A\cdot\vec{x} = \vec{b}$, the condition number takes on a special meaning. $\text{cond}(A)$ now becomes the rate at which the solution $\vec{x}$ will change in relation to a change in $\vec{b}$. For this reason, we can call $\text{cond}(A)$ the problem's error magnification factor. Changes in $\vec{b}$ are usually due to errors made in formulating the problem, such as taking erroneous measurements or making rounding errors.
So, suppose some error crept in, and the values contained in $\vec{b}$ are slightly wrong. How far from the truth our newly found solution $\vec{x}$ is depends on $\text{cond}(A)$:

If the condition number of matrix $A$ is large, $\vec{x}$ is vulnerable to errors. The error in $\vec{x}$ resulting from the error in $\vec{b}$ will therefore be large.

Conversely, if $\text{cond}(A)$ is small, $\vec{x}$ will be well-protected against reasonable errors in $\vec{b}$, and so its error will be small.
We can restate this relation mathematically. With $\delta\vec{b}$ representing the error in $\vec{b}$ and $\delta\vec{x}$ the resulting change in $\vec{x}$, we can relate the relative errors $\Vert \delta\vec{b}\Vert \ /\ \Vert \vec{b}\Vert$ and $\Vert \delta\vec{x}\Vert \ /\ \Vert \vec{x}\Vert$ with:

$$\frac{\Vert \delta\vec{x}\Vert}{\Vert \vec{x}\Vert} \le \text{cond}(A)\cdot\frac{\Vert \delta\vec{b}\Vert}{\Vert \vec{b}\Vert}$$
This means that $\vec{x}$'s relative error can be up to as large as $\vec{b}$'s relative error scaled by the condition number.
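We can verify this bound numerically. The sketch below solves a small hypothetical system, perturbs $\vec{b}$, re-solves, and checks that the solution's relative error stays within the condition number times the data's relative error.

```python
import numpy as np

# Hypothetical system A x = b (values chosen only for illustration).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Perturb b slightly and solve again.
db = np.array([0.01, 0.0])
x_err = np.linalg.solve(A, b + db)
dx = x_err - x

rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(dx) / np.linalg.norm(x)

# The bound: ||dx||/||x|| <= cond(A) * ||db||/||b||
print(rel_err_x <= np.linalg.cond(A) * rel_err_b)  # True
```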
How to find the condition number of a matrix?
With our mathematical definition of the condition number as $\text{cond}(A) = \Vert A\Vert \cdot \Vert A^{-1}\Vert$, it is simple to find $\text{cond}(A)$:

Choose a matrix norm. Although the choice is problem-dependent, the matrix 2-norm is typically used.

Evaluate the inverse of $A$. We need the matrix inverse to find the matrix condition number. If $A^{-1}$ does not exist, we can declare that $\text{cond}(A) = \infty$.

Calculate $\Vert A\Vert$ and $\Vert A^{-1}\Vert$. It's crucial to use the same norm as chosen above throughout our calculations.

Multiply the norms to find $\text{cond}(A)$.
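The steps above can be sketched with NumPy as follows; the matrix is a hypothetical example, and the 2-norm is the norm chosen in step 1.

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])   # hypothetical example matrix

# Step 1: we choose the matrix 2-norm.
# Step 2: invert A (np.linalg.inv raises LinAlgError for a singular
#         matrix, in which case cond(A) is infinite).
A_inv = np.linalg.inv(A)

# Step 3: evaluate both norms with the SAME norm chosen in step 1.
norm_A = np.linalg.norm(A, 2)
norm_A_inv = np.linalg.norm(A_inv, 2)

# Step 4: multiply the norms.
cond_A = norm_A * norm_A_inv
print(cond_A)
```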
How to use the condition number calculator?
It's great to know how to calculate the matrix condition number, but sometimes you just need an answer immediately to save time. This is where our matrix condition number calculator comes in handy. Here's how to use it:

Select your matrix's dimensionality. We support $2\times2$ and $3\times3$ matrices.

Enter your matrix, row by row. Feel free to refer to the symbolic representation at the top.

Select a matrix norm, or leave it at the default selection of the matrix 2-norm.

Find $\text{cond}(A)$ at the bottom of our matrix condition number calculator.
How to find the condition number of a matrix? – An example
To drive what we've learned home, let's take a look at an example. Suppose we have the linear system $A\cdot \vec{x} = \vec{b}$ with:
and
We can easily solve for $\vec{x}$ by calculating:
Now, let's see what happens when we add a small error of $0.01$ to the first element of $\vec{b}$ to form $\vec{b}_\text{err}$.
Calculating $\vec{x}\_\text{err}$ in $A\cdot\vec{x}\_\text{err} = \vec{b}_\text{err}$, we get:
There's a huge difference between $\vec{x}$ and $\vec{x}_\text{err}$! As you might have guessed by now, it's because $A$ has a large condition number. In fact, $\text{cond}(A) = 7\ 895$. You can verify this result in our matrix norm calculator!
It's fascinating to note that the relationship between our relative errors and our condition number is adhered to:
$4.573\times 10^{-1} \le 4.014$, and therefore our solution's relative error does not exceed the problem's relative error magnified by $A$'s condition number.
FAQ
What is the condition number of the identity matrix?
The condition number of an identity matrix of any size is 1. Because an identity matrix leaves any vector it's multiplied with untouched, it doesn't magnify an error in b. Therefore, it makes intuitive sense for the identity matrix to have a condition number of 1.
1 is the smallest possible matrix condition number, so the identity matrix can be seen as optimally wellconditioned.
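A quick check of this fact, for identity matrices of several sizes and under several norms:

```python
import numpy as np

# cond(I) = 1 for the identity matrix of any size, under the common
# induced norms (1-norm, 2-norm, infinity-norm).
for n in (2, 3, 5):
    I = np.eye(n)
    assert np.isclose(np.linalg.cond(I, 2), 1.0)
    assert np.isclose(np.linalg.cond(I, 1), 1.0)
    assert np.isclose(np.linalg.cond(I, np.inf), 1.0)
```

This follows directly from the definition: the identity is its own inverse, and its norm is 1, so cond(I) = ‖I‖·‖I⁻¹‖ = 1·1 = 1.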
What is the condition number of a diagonal matrix?
The condition number of a diagonal matrix D is the ratio between the largest and smallest absolute values of its diagonal elements, i.e., cond(D) = max|D_{ii}| / min|D_{ii}|. It's important to note that this is only true when using the matrix 2-norm for computing cond(D). This is because the singular values of D are the absolute values of its diagonal elements.
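Here is a small sketch confirming this shortcut on a hypothetical diagonal matrix, including a negative diagonal entry to show why the absolute values matter:

```python
import numpy as np

# Hypothetical diagonal entries; note the negative one.
d = np.array([10.0, -2.0, 0.5])
D = np.diag(d)

# For the 2-norm, cond(D) = max|d_i| / min|d_i| = 10 / 0.5 = 20.
expected = np.abs(d).max() / np.abs(d).min()
print(np.isclose(np.linalg.cond(D, 2), expected))  # True
```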
Can the condition number of a matrix be zero?
No. If cond(A) = 0, then cond(A)×‖δb⃗‖/‖b⃗‖ = 0 and therefore ‖δx⃗‖/‖x⃗‖ ≤ 0. Therefore, a condition number of 0 would mean that the matrix removes any error, which isn't possible.
In fact, the smallest possible condition number is 1, where an error is neither magnified nor diminished.
Does scaling a matrix affect its condition number?
No. Scaling the matrix by any nonzero factor 𝛾 is canceled out by the matrix inverse being scaled inversely, because (𝛾A)^{−1} = 𝛾^{−1}A^{−1}. So,
cond(𝛾A)
= ‖𝛾A‖ · ‖(𝛾A)^{−1}‖
= |𝛾|·‖A‖ · |𝛾|^{−1}·‖A^{−1}‖
= (|𝛾|·|𝛾|^{−1}) · (‖A‖·‖A^{−1}‖)
= 1 · (‖A‖·‖A^{−1}‖)
= cond(A)
Remember, the condition number measures the ratio of maximum stretch to minimum stretch. If both the maximum and minimum were to increase by 𝛾, the ratio wouldn't change, and therefore we have cond(𝛾A) = cond(A).
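A quick numerical check of this invariance, using a hypothetical matrix and a few scaling factors (including a negative one):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # hypothetical example matrix

# cond(gamma * A) == cond(A) for any nonzero gamma.
for gamma in (0.001, 7.0, -3.0):
    assert np.isclose(np.linalg.cond(gamma * A), np.linalg.cond(A))
```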