Welcome to the condition number calculator. Need to determine whether your linear algebra problem is well-conditioned or unstable? Will incorrect measurements or poor rounding be the downfall of your matrix equation? Here, we'll show you what a matrix condition number is and how to find the condition number of any matrix, so that you can protect yourself against any errors that may creep in.

If you need a revision of the easier topics in mathematics, take a look at our ratio calculator!

What is the condition number of a matrix?

Before we can make sense of any result our condition number calculator produces, let's first define the matrix condition number and what it represents. We usually denote the condition number of a matrix A as cond(A) or κ(A). We can define it mathematically as follows:

cond(A) = ‖A‖·‖A⁻¹‖ if A is invertible
cond(A) = ∞ otherwise

In this equation, ‖·‖ is any matrix norm. When we want to specify which norm we used, we can add the relevant subscript to the condition number symbol, such as cond₂(A) for the matrix 2-norm ‖·‖₂. In addition, A⁻¹ is the inverse of A.

The matrix A is non-invertible if its determinant is zero (i.e., |A| = 0). In this case, it has an infinite condition number. To still gain some insight into such a matrix's conditioning, some mathematicians redefine the condition number with the Moore-Penrose pseudoinverse A⁺ as cond(A) = ‖A‖·‖A⁺‖. This alternative definition still delivers huge condition numbers for nearly non-invertible matrices, thereby honoring our initial definition.

We can interpret the condition number in multiple ways. Firstly, cond(A) measures the ratio of the maximum stretching to the maximum shrinking that A applies to unit vectors. Therefore, an equivalent definition of the condition number is:

cond(A) = ( max‖x‖=1 ‖Ax‖ ) / ( min‖x‖=1 ‖Ax‖ )
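As a quick numerical sanity check of this stretch-ratio definition, here's a Python sketch (not part of the calculator) that samples unit vectors around the circle for a diagonal matrix whose 2-norm condition number is known to be 3:

```python
import math

def stretch(A, theta):
    # Length of A applied to the unit vector (cos(theta), sin(theta))
    x, y = math.cos(theta), math.sin(theta)
    return math.hypot(A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y)

A = [[3.0, 0.0], [0.0, 1.0]]   # diagonal matrix, so cond_2(A) = 3/1 = 3
samples = [stretch(A, 2*math.pi*k/10000) for k in range(10000)]
cond_estimate = max(samples) / min(samples)
print(cond_estimate)           # ≈ 3.0
```

The maximum stretch (3) happens along the first axis and the minimum (1) along the second, so the sampled ratio recovers the condition number.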

In pure mathematics, a matrix is either invertible or not. But, as a second way of interpreting the condition number, cond(A) measures how nearly non-invertible A is: as cond(A) increases, A gets closer to being singular.
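We can see this numerically with a small sketch (using the Frobenius norm here only because it's easy to compute by hand): as ε shrinks, the matrix below approaches a singular one and its condition number blows up.

```python
import math

def cond_frobenius(A):
    # cond(A) = ||A|| * ||A^-1|| with the Frobenius norm, for a 2x2 matrix
    a, b = A[0]
    c, d = A[1]
    det = a*d - b*c
    Ainv = [[d/det, -b/det], [-c/det, a/det]]
    frob = lambda M: math.sqrt(sum(v*v for row in M for v in row))
    return frob(A) * frob(Ainv)

# [[1, 1], [1, 1+eps]] becomes singular as eps -> 0
for eps in (1e-1, 1e-3, 1e-6):
    print(eps, cond_frobenius([[1.0, 1.0], [1.0, 1.0 + eps]]))
```

Each tenfold decrease in ε makes the matrix's rows more nearly parallel and multiplies the condition number roughly tenfold.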

The third and most important use of condition numbers is in linear algebra. Let's take a look at why below!

The matrix condition number in linear algebra

When we have a system of linear equations A·x = b, the condition number takes on a special meaning: cond(A) is now the rate at which the solution x can change in relation to a change in b. For this reason, we can call cond(A) the problem's error magnification factor. Changes in b are usually due to errors made in formulating the problem, such as taking erroneous measurements or making rounding errors.

So, suppose some error crept in and the values contained in b are slightly wrong. How far from the truth our newly-found solution x lands depends on cond(A):

  • If the condition number of matrix A is large, x is vulnerable to errors. The error in x resulting from the error in b will therefore be large.

  • Conversely, if cond(A) is small, x will be well-protected against reasonable errors in b, and so its error will be small.

We can restate this relation mathematically. With δb representing the error in b and δx the resulting change in x, we can relate the relative errors ‖δb‖/‖b‖ and ‖δx‖/‖x‖ with:

‖δx‖/‖x‖ ≤ cond(A) · ‖δb‖/‖b‖

This means that x's relative error can be as large as b's relative error scaled by the condition number.

How to find the condition number of a matrix?

With our mathematical definition of the condition number as cond(A) = ‖A‖·‖A⁻¹‖, it is simple to find cond(A):

  1. Choose a matrix norm. Although the choice is problem-dependent, the matrix 2-norm is typically used.

  2. Evaluate the inverse of A. We need the matrix inverse to find the matrix condition number. If A⁻¹ does not exist, we can declare that cond(A) = ∞.

  3. Calculate ‖A‖ and ‖A⁻¹‖. It's crucial to use the same norm as chosen above throughout our calculations.

  4. Multiply the norms to find cond(A).
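The four steps above can be sketched in Python for a 2×2 matrix with the 2-norm. This uses the fact that ‖A‖₂ is the largest singular value of A, and that the squared singular values are the eigenvalues of AᵀA, which has a closed form in the 2×2 case (a hand-rolled illustration, not the calculator's implementation):

```python
import math

def cond_2x2(A):
    # Steps 1-4 above, for a 2x2 matrix using the matrix 2-norm.
    a, b = A[0]
    c, d = A[1]
    det = a*d - b*c
    if det == 0:
        return math.inf                      # step 2: A is not invertible
    # The squared singular values of A are the eigenvalues of A^T A,
    # whose trace is t and whose determinant is det(A)^2.
    t = a*a + b*b + c*c + d*d
    disc = math.sqrt(max(t*t - 4*det*det, 0.0))
    sigma_max = math.sqrt((t + disc) / 2)    # step 3: ||A||_2
    sigma_min = math.sqrt((t - disc) / 2)    # = 1 / ||A^-1||_2
    return sigma_max / sigma_min             # step 4

print(cond_2x2([[3.7, 0.9], [7.8, 1.9]]))    # ≈ 7895.0
```

Note that in the 2-norm, ‖A⁻¹‖₂ = 1/σ_min, so multiplying the two norms reduces to the ratio σ_max/σ_min.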

How to use the condition number calculator?

It's great to know how to calculate the matrix condition number, but sometimes you just need an answer immediately to save time. This is where our matrix condition number calculator comes in handy. Here's how to use it:

  1. Select your matrix's dimensionality. We support 2×2 and 3×3 matrices.

  2. Enter your matrix, row by row. Feel free to refer to the symbolic representation at the top.

  3. Select a matrix norm, or leave it at the default selection of the matrix 2-norm.

  4. Find cond(A) at the bottom of our matrix condition number calculator.

How to find the condition number of a matrix? – An example

To drive what we've learned home, let's take a look at an example. Suppose we have the linear system A·x = b with:

A = [ 3.7  0.9 ]
    [ 7.8  1.9 ]

and

b = [  7.4 ]
    [ 15.6 ]

We can easily solve for x by calculating:

x = A⁻¹·b = [ 2 ]
            [ 0 ]

Now, let's see what happens when we add a small error of 0.01 to the first element of b to form b₂:

b₂ = [  7.41 ]
     [ 15.6  ]

Calculating x₂ in A·x₂ = b₂, we get:

x₂ = A⁻¹·b₂ = [  3.9 ]
              [ -7.8 ]

There's a huge difference between x and x₂! As you might have guessed by now, it's because A has a large condition number. In fact, cond(A) = 7,895.0. You can verify this result in our matrix norm calculator!
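We can reproduce both solutions in a few lines of Python with the standard 2×2 inverse formula (a hand check of the example, not the calculator itself):

```python
# Invert A = [[3.7, 0.9], [7.8, 1.9]] with the 2x2 inverse formula and
# apply it to the original and the perturbed right-hand sides.
det = 3.7*1.9 - 0.9*7.8                    # = 0.01
Ainv = [[ 1.9/det, -0.9/det],
        [-7.8/det,  3.7/det]]

def solve(b):
    # x = A^-1 * b
    return [Ainv[0][0]*b[0] + Ainv[0][1]*b[1],
            Ainv[1][0]*b[0] + Ainv[1][1]*b[1]]

x  = solve([7.40, 15.6])    # ≈ [2.0, 0.0]
x2 = solve([7.41, 15.6])    # ≈ [3.9, -7.8]
```

Nudging one entry of b by 0.01 moves the solution by several whole units, exactly the sensitivity a huge condition number predicts.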

It's fascinating to note that the relationship between our relative errors and our condition number holds:

‖δx‖/‖x‖ = 4.014
‖δb‖/‖b‖ = 0.00058
cond(A) × ‖δb‖/‖b‖ = 4.573

4.014 ≤ 4.573, and therefore our solution's relative error does not exceed the problem's relative error magnified by A's condition number.
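These three numbers are easy to check in Python, taking 7,895.0 from above as the condition number:

```python
import math

norm = lambda v: math.hypot(v[0], v[1])    # vector 2-norm

x,  x2 = [2.0, 0.0],   [3.9, -7.8]
b,  b2 = [7.40, 15.6], [7.41, 15.6]
dx = [x2[0] - x[0], x2[1] - x[1]]
db = [b2[0] - b[0], b2[1] - b[1]]

rel_x = norm(dx) / norm(x)      # ≈ 4.014
rel_b = norm(db) / norm(b)      # ≈ 0.00058
bound = 7895.0 * rel_b          # ≈ 4.573
assert rel_x <= bound           # the error bound holds
```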

FAQ

What is the condition number of the identity matrix?

The condition number of an identity matrix of any size is 1. Because an identity matrix leaves any vector it's multiplied with untouched, it doesn't magnify an error in b. Therefore, it makes intuitive sense for the identity matrix to have a condition number of 1.

1 is the smallest possible matrix condition number, so the identity matrix can be seen as optimally well-conditioned.

What is the condition number of a diagonal matrix?

The condition number of a diagonal matrix D is the ratio between the largest and smallest absolute values on its diagonal, i.e., cond(D) = max|Dᵢᵢ| / min|Dᵢᵢ|. It's important to note that this is only true when using the matrix 2-norm for computing cond(D). This is because the absolute values of D's diagonal elements are its singular values.
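For example, with a hypothetical diagonal:

```python
# Diagonal of a hypothetical 3x3 diagonal matrix D
diag = [10.0, -0.5, 2.0]

# In the 2-norm, the singular values of D are |D_ii|, so:
cond_D = max(abs(v) for v in diag) / min(abs(v) for v in diag)
print(cond_D)   # 20.0
```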

Can the condition number of a matrix be zero?

No. If cond(A) = 0, then ‖δx‖/‖x‖ ≤ cond(A)·‖δb‖/‖b‖ = 0. Therefore, a condition number of 0 would mean that the matrix removes any error, which isn't possible.

In fact, the smallest possible condition number is 1, where an error is neither magnified nor diminished.

Does scaling a matrix affect its condition number?

No. Any scaling of the matrix will be canceled out by the matrix inverse being scaled inversely. This is because (𝛾A)⁻¹ = 𝛾⁻¹A⁻¹ and ‖𝛾A‖ = |𝛾|·‖A‖. So, for any 𝛾 ≠ 0,

cond(𝛾A)
= ‖𝛾A‖·‖(𝛾A)⁻¹‖
= |𝛾|·‖A‖·|𝛾|⁻¹·‖A⁻¹‖
= (|𝛾|·|𝛾|⁻¹)·(‖A‖·‖A⁻¹‖)
= cond(A)

The condition number measures the ratio of maximum stretch to minimum stretch. If both the maximum and the minimum increase by a factor of |𝛾|, the ratio doesn't change, meaning cond(𝛾A) = cond(A).
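This invariance is easy to confirm numerically for a 2×2 matrix, here via the singular-value formula (a sketch with an arbitrary example matrix and scale factor):

```python
import math

def cond_2x2(A):
    # 2-norm condition number of a 2x2 matrix via its singular values
    a, b = A[0]
    c, d = A[1]
    det = a*d - b*c
    if det == 0:
        return math.inf
    t = a*a + b*b + c*c + d*d                # trace of A^T A
    disc = math.sqrt(max(t*t - 4*det*det, 0.0))
    return math.sqrt((t + disc) / (t - disc))

A = [[3.0, 1.0], [0.0, 2.0]]
gamma = 5.0
gA = [[gamma * v for v in row] for row in A]
assert math.isclose(cond_2x2(gA), cond_2x2(A))   # scaling leaves cond unchanged
```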

Rijk de Wet