# Condition Number Calculator

Created by Rijk de Wet
Last updated: Dec 14, 2021

Welcome to the condition number calculator. Need to determine whether your linear algebra problem is well-conditioned or unstable? Will incorrect measurements or poor rounding be the downfall of your matrix equation? Here, we'll show you what a matrix condition number is and how to find the condition number of any matrix, so that you can protect yourself against any errors that may creep in.

If you need a revision of the easier topics in mathematics, take a look at our ratio calculator!

## What is the condition number of a matrix?

Before we can make sense of any result our condition number calculator produces, let's first define the matrix condition number and what it represents. We usually denote the condition number of a matrix $A$ as $\text{cond}(A)$ or $\kappa(A)$. We can define it mathematically as follows:

$\text{cond}(A) = \begin{cases} \Vert A\Vert \cdot \Vert A^{-1}\Vert & \text{if}\ A\ \text{is invertible} \\ \infty & \text{otherwise} \\ \end{cases}$

In this equation, $\Vert\cdot\Vert$ is any matrix norm. When we want to specify which norm we used, we can use the relevant subscript in the condition number symbol, such as $\text{cond}_2(A)$ for the matrix 2-norm $\Vert\cdot\Vert_2$. In addition, $A^{-1}$ is the inverse of $A$.
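As a quick sketch of this definition (using NumPy, with an example matrix of our own choosing), we can compute $\text{cond}(A)$ directly from the two norms and compare it against NumPy's built-in routine:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# cond(A) = ||A|| * ||A^-1||, here with the matrix 2-norm
cond_A = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# NumPy's np.linalg.cond uses the same definition
print(cond_A, np.linalg.cond(A, 2))  # the two values agree
```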

The matrix $A$ is non-invertible if its determinant is zero (i.e., $|A| = 0$). In this case, it has an infinite condition number. To still gain some insight into such a matrix's conditioning, some mathematicians redefine the condition number with the Moore-Penrose pseudoinverse $A^+$ as $\text{cond}(A) = \Vert A\Vert\cdot \Vert A^+\Vert$. This alternate definition stays finite for non-invertible matrices by ignoring their zero singular values, and it agrees with our initial definition whenever $A$ is invertible, since then $A^+ = A^{-1}$.

We can interpret the condition number in multiple ways. Firstly, $\text{cond}(A)$ measures the ratio of the maximum stretching to the minimum stretching that $A$ applies to unit vectors when multiplied with them. Therefore, an equivalent definition of the condition number is:

$\text{cond}(A) = \max_{\Vert \vec{x}\Vert = 1} \frac{\Vert A\vec{x}\Vert }{\Vert \vec{x}\Vert } \cdot \left ( \min_{\Vert \vec{x}\Vert =1} \frac{\Vert A\vec{x}\Vert }{\Vert \vec{x}\Vert } \right )^{-1}$
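For the matrix 2-norm in particular, these maximum and minimum stretch factors are the largest and smallest singular values of $A$, so $\text{cond}_2(A) = \sigma_{\max}/\sigma_{\min}$. A minimal NumPy sketch, with an example matrix of our own choosing:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# The singular values of A are the stretch factors it applies
# along mutually orthogonal directions
sigma = np.linalg.svd(A, compute_uv=False)
cond_stretch = sigma.max() / sigma.min()  # max stretch / min stretch
```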

In pure mathematics, a matrix is either invertible or not. Numerically, however, the condition number gives us a second interpretation: $\text{cond}(A)$ measures how close $A$ is to being non-invertible. The larger $\text{cond}(A)$ grows, the nearer $A$ is to a singular matrix.

The third and most important use of condition numbers is in linear algebra. Let's take a look at why below!

## The matrix condition number in linear algebra

When we have a system of linear equations $A\cdot\vec{x} = \vec{b}$, the condition number takes on a special meaning. $\text{cond}(A)$ now becomes the rate at which the solution $\vec{x}$ will change in relation to a change in $\vec{b}$. For this reason, we can call $\text{cond}(A)$ the problem's error magnification factor. Changes in $\vec{b}$ are usually due to errors made in formulating the problem, such as taking erroneous measurements or making rounding errors.

So, suppose some error crept in, and the values contained in $\vec{b}$ are slightly wrong. How far our newly-found solution $\vec{x}$ lands from the truth depends on $\text{cond}(A)$:

• If the condition number of matrix $A$ is large, $\vec{x}$ is vulnerable to errors. The error in $\vec{x}$ resulting from the error in $\vec{b}$ will therefore be large.

• Inversely, if $\text{cond}(A)$ is small, $\vec{x}$ will be well-protected against reasonable errors in $\vec{b}$, and so its error will be small.

We can restate this relation mathematically. With $\delta\vec{b}$ representing the error in $\vec{b}$ and $\delta\vec{x}$ the resulting change in $\vec{x}$, we can relate the relative errors $\Vert \delta\vec{b}\Vert \ /\ \Vert \vec{b}\Vert$ and $\Vert \delta\vec{x}\Vert \ /\ \Vert \vec{x}\Vert$ with:

$\frac{ \Vert \delta\vec{x}\Vert }{ \Vert \vec{x}\Vert } \le \text{cond}(A) \cdot \frac{ \Vert \delta\vec{b}\Vert }{ \Vert \vec{b}\Vert }$

This means that $\vec{x}$'s relative error can be as large as $\vec{b}$'s relative error scaled by the condition number.

## How to find the condition number of a matrix?

With our mathematical definition of the condition number as $\text{cond}(A) = \Vert A\Vert \cdot \Vert A^{-1}\Vert$, it is simple to find $\text{cond}(A)$:

1. Choose a matrix norm. Although the choice is problem-dependent, the matrix 2-norm is typically used.

2. Evaluate the inverse of $A$. We need the matrix inverse to find the matrix condition number. If $A^{-1}$ does not exist, we can declare that $\text{cond}(A) = \infty$.

3. Calculate $\Vert A\Vert$ and $\Vert A^{-1}\Vert$. It's crucial to use the same norm as chosen above throughout our calculations.

4. Multiply the norms to find $\text{cond}(A)$.
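Assuming a NumPy environment, the four steps above can be sketched as a small helper function (the name `condition_number` is our own):

```python
import numpy as np

def condition_number(A, p=2):
    """Steps 1-4: pick a norm p, invert A, take both norms, multiply."""
    try:
        A_inv = np.linalg.inv(A)  # step 2: evaluate the inverse
    except np.linalg.LinAlgError:
        return np.inf             # step 2: A is singular, so cond(A) = inf
    # steps 3-4: use the SAME norm p for both factors, then multiply
    return np.linalg.norm(A, p) * np.linalg.norm(A_inv, p)
```

For instance, `condition_number(np.eye(2))` returns 1, while a singular matrix such as `[[1, 2], [2, 4]]` yields infinity.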

## How to use the condition number calculator?

It's great to know how to calculate the matrix condition number, but sometimes you just need an answer immediately to save time. This is where our matrix condition number calculator comes in handy. Here's how to use it:

1. Select your matrix's dimensionality. We support $2\times2$ and $3\times3$ matrices.

2. Enter your matrix, row by row. Feel free to refer to the symbolic representation at the top.

3. Select a matrix norm, or leave it at the default selection of the matrix 2-norm.

4. Find $\text{cond}(A)$ at the bottom of our matrix condition number calculator.

## How to find the condition number of a matrix? – An example

To drive what we've learned home, let's take a look at an example. Suppose we have the linear system $A\cdot \vec{x} = \vec{b}$ with:

$A = \begin{bmatrix} 3.7 & 0.9 \\ 7.8 & 1.9 \\ \end{bmatrix}$

and

$\vec{b} = \begin{bmatrix} 7.4 \\ 15.6 \\ \end{bmatrix}$

We can easily solve for $\vec{x}$ by calculating:

$\begin{split} \vec{x} &= A^{-1}\cdot\vec{b} \\ &= \begin{bmatrix} 2 \\ 0 \end{bmatrix} \end{split}$

Now, let's see what happens when we add a small error of $0.01$ to the first element of $\vec{b}$ to form $\vec{b}_2$.

$\vec{b}_2 = \begin{bmatrix} 7.41 \\ 15.6 \\ \end{bmatrix}$

Calculating $\vec{x}_2$ in $A\cdot\vec{x}_2 = \vec{b}_2$, we get:

$\begin{split} \vec{x}_2 &= A^{-1}\cdot\vec{b}_2 \\ &= \begin{bmatrix} 3.9 \\ -7.8 \end{bmatrix} \\ \end{split}$

There's a huge difference between $\vec{x}$ and $\vec{x}_2$! As you might have guessed by now, it's because $A$ has a large condition number. In fact, $\text{cond}(A) = 7,895.0$. You can verify this result in our matrix norm calculator!

It's fascinating to note that the relationship between our relative errors and our condition number is adhered to:

$\begin{split} \Vert \delta\vec{x}\Vert \ /\ \Vert \vec{x}\Vert &= 4.014 \\ \Vert \delta\vec{b}\Vert \ /\ \Vert \vec{b}\Vert &= 0.00058 \\ \text{cond}(A) \times \Vert \delta\vec{b}\Vert \ /\ \Vert \vec{b}\Vert &= 4.573 \end{split}$

$4.014 \le 4.573$, and therefore our solution's relative error does not exceed the problem's relative error magnified by $A$'s condition number.
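The whole example can be reproduced in a few lines of NumPy (a sketch; the numbers match the ones above up to rounding):

```python
import numpy as np

A  = np.array([[3.7, 0.9],
               [7.8, 1.9]])
b  = np.array([7.4, 15.6])
b2 = np.array([7.41, 15.6])  # a 0.01 error in the first element

x  = np.linalg.solve(A, b)   # approximately [2, 0]
x2 = np.linalg.solve(A, b2)  # approximately [3.9, -7.8]

rel_dx = np.linalg.norm(x2 - x) / np.linalg.norm(x)  # ~4.014
rel_db = np.linalg.norm(b2 - b) / np.linalg.norm(b)  # ~0.00058
bound  = np.linalg.cond(A, 2) * rel_db               # ~4.573
```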

## FAQ

### What is the condition number of the identity matrix?

The condition number of an identity matrix of any size is 1. Because an identity matrix leaves any vector it's multiplied with untouched, it doesn't magnify an error in b. Therefore, it makes intuitive sense for the identity matrix to have a condition number of 1.

1 is the smallest possible matrix condition number, so the identity matrix can be seen as optimally well-conditioned.
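A quick NumPy check of this fact, for a few sizes of our choosing:

```python
import numpy as np

# The identity leaves every vector untouched, so cond(I) = 1 at any size
for n in (2, 3, 5):
    assert np.isclose(np.linalg.cond(np.eye(n), 2), 1.0)
```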

### What is the condition number of a diagonal matrix?

The condition number of a diagonal matrix D is the ratio between the largest and smallest absolute values of its diagonal elements, i.e., cond(D) = max|Dᵢᵢ| / min|Dᵢᵢ|. It's important to note that this is only true when using the matrix 2-norm for computing cond(D). This is because the singular values of D are the absolute values of its diagonal elements.
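We can check this rule against NumPy's general routine (with diagonal entries of our own choosing; note the absolute values, which make the rule hold for negative entries too):

```python
import numpy as np

d = np.array([4.0, -2.0, 0.5])
D = np.diag(d)

# In the 2-norm, the singular values of D are |d|, so:
cond_D = np.abs(d).max() / np.abs(d).min()  # 4 / 0.5 = 8
```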

### Can the condition number of a matrix be zero?

No. If cond(A) = 0, then ‖δx‖/‖x‖ ≤ cond(A)·‖δb‖/‖b‖ = 0. Therefore, a condition number of 0 would mean that the matrix removes any error, which isn't possible.

In fact, the smallest possible condition number is 1, where an error is neither magnified nor diminished.
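For any induced matrix norm (where ‖I‖ = 1), submultiplicativity gives a one-line proof of this lower bound:

$1 = \Vert I\Vert = \Vert A\cdot A^{-1}\Vert \le \Vert A\Vert \cdot \Vert A^{-1}\Vert = \text{cond}(A)$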

### Does scaling a matrix affect its condition number?

No. Any scaling of the matrix is canceled out by its inverse being scaled inversely, because (𝛾A)⁻¹ = 𝛾⁻¹A⁻¹. So:

cond(𝛾A)
= ‖𝛾A‖·‖(𝛾A)⁻¹‖
= |𝛾|‖A‖·|𝛾|⁻¹‖A⁻¹‖
= (|𝛾|·|𝛾|⁻¹)·(‖A‖·‖A⁻¹‖)
= cond(A)

The condition number measures the ratio of maximum stretch to minimum stretch. If both the maximum and minimum are scaled by |𝛾|, their ratio doesn't change, meaning cond(𝛾A) = cond(A).
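A quick numerical confirmation (𝛾 and A chosen arbitrarily):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
gamma = 7.5

# Scaling A by gamma scales its inverse by 1/gamma, so cond is unchanged
cond_scaled = np.linalg.cond(gamma * A, 2)
cond_plain  = np.linalg.cond(A, 2)
```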
