
Gram-Schmidt Calculator

Created by Maciej Kowalski, PhD candidate
Reviewed by Bogna Szyk and Jack Bowater
Last updated: May 28, 2024


Welcome to the Gram-Schmidt calculator, where you'll have the opportunity to learn all about the Gram-Schmidt orthogonalization. This simple algorithm is a way to find an orthonormal basis for the space spanned by a bunch of random vectors. If you're not too sure what orthonormal means, don't worry! It's just an orthogonal basis whose elements are only one unit long. And what does orthogonal mean? Well, we'll cover that one soon enough!

So, just sit back comfortably at your desk, and let's venture into the world of orthogonal vectors!

What is a vector?

One of the first topics in physics classes at school is velocity. Once you learn the magical formula of $v = s/t$, you open up the exercise book and start drawing cars or bikes with an arrow showing their direction parallel to the road. The teacher calls this arrow the velocity vector and interprets it more or less as "the car goes that way."

You can find similar drawings throughout all of physics, and the arrows always show the direction in which a force acts on an object and how large it is. The scenario can describe anything from buoyancy in a swimming pool to the free fall of a bowling ball, but one thing stays the same: whatever the arrow is, we call it a vector.

In full (mathematical) generality, we define a vector to be an element of a vector space. In turn, we say that a vector space is a set of elements with two operations that satisfy some natural properties. Those elements can be quite funky, like sequences, functions, or permutations. Fortunately, for our purposes, regular numbers are funky enough.

Cartesian vector spaces

A Cartesian space is an example of a vector space. This means that the real line (numbers as we know them) is a (1-dimensional) vector space. The plane (anything we draw on a piece of paper), i.e., the space that pairs of numbers occupy, is a vector space as well. And lastly, so is the 3-dimensional space of the world we live in, interpreted as a set of three real numbers.

When dealing with vector spaces, it's important to keep in mind the operations that come with the definition: addition and multiplication by a scalar (a real or complex number). Let's look at some examples of how they work in the Cartesian space.

In one dimension (a line), vectors are just regular numbers, so adding the vector $2$ to the vector $-3$ is just

$$2 + (-3) = -1$$

Similarly, multiplying the vector $2$ by a scalar, say, by $0.5$, is just regular multiplication:

$$0.5 \times 2 = 1$$

In two dimensions, vectors are points on a plane, which are described by pairs of numbers, and we define the operations coordinate-wise. For instance, if $\vec A = (2,1)$ and $\vec B = (-1, 7)$, then

$$\vec A + \vec B = (2,1) + (-1,7) = (2 + (-1),\, 1 + 7) = (1,8)$$

Similarly, if we want to multiply $\vec A$ by, say, $1/2$, then

$$\frac{1}{2}\cdot \vec A = \frac{1}{2}\cdot (2,1) = \left(\frac{1}{2}\cdot 2,\, \frac{1}{2}\cdot 1\right) = \left(1, \frac{1}{2}\right)$$
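If you'd like to see these coordinate-wise rules in code, here's a minimal sketch in plain Python (the tuples and variable names are ours, purely for illustration):

```python
# Coordinate-wise operations on 2-dimensional vectors, as described above.
A = (2, 1)
B = (-1, 7)

# Addition: add the matching coordinates.
vec_sum = tuple(a + b for a, b in zip(A, B))
print(vec_sum)   # (1, 8)

# Scalar multiplication: scale every coordinate by the same number.
half_A = tuple(0.5 * a for a in A)
print(half_A)    # (1.0, 0.5)
```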

As a general rule, the operations described above behave the same way as their corresponding operations on matrices. After all, vectors here are just one-row matrices. Additionally, there are quite a few other useful operations defined on Cartesian vector spaces, like the cross product. Fortunately, we don't need that for this article, so we're happy to leave it for some other time, aren't we?

Now, let's distinguish some very special sets of vectors, namely the orthogonal vectors and the orthogonal basis.

What does orthogonal mean?

Intuitively, to define orthogonal is the same as to define perpendicular. This suggests that the meaning of orthogonal is somehow related to the 90-degree angle between objects. And this intuitive definition does work: in two- and three-dimensional spaces, orthogonal vectors are vectors that meet at a right angle.

But does this mean that whenever we want to check if we have orthogonal vectors, we have to draw out the lines, grab a protractor, and read off the angle? That would be troublesome... And what about 1-dimensional spaces? How do we define orthogonal elements there? Not to mention the spaces of sequences. What does orthogonal mean in such cases? For that, we'll need a new tool.

The dot product (also called the scalar product) of two vectors $\vec v = (a_1, a_2, a_3, \ldots, a_n)$ and $\vec w = (b_1, b_2, b_3, \ldots, b_n)$ is the number $\vec v \cdot \vec w$ given by:

$$\vec v \cdot \vec w = a_1\cdot b_1 + a_2\cdot b_2 + a_3 \cdot b_3 + \ldots + a_n\cdot b_n$$

Observe that indeed the dot product is just a number: we obtain it by regular multiplication and addition of numbers. With this tool, we're now ready to define orthogonal elements in every case.
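For the programmatically inclined, here's the same definition as a minimal Python sketch (the `dot` helper is our own name, not a standard library function):

```python
def dot(v, w):
    """Dot product of two vectors given as equal-length sequences of numbers."""
    return sum(a * b for a, b in zip(v, w))

# Two vectors are orthogonal exactly when their dot product is zero:
print(dot((1, 0), (0, 1)))   # 0 -> orthogonal
print(dot((2, 1), (-1, 7)))  # 5 -> not orthogonal
```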

We say that $\vec v$ and $\vec w$ are orthogonal vectors if $\vec v \cdot \vec w = 0$. For instance, if the vector space is the one-dimensional Cartesian line, then the dot product is the usual multiplication of numbers: $\vec v \cdot \vec w = v \times w$. So what does orthogonal mean in that case? Well, the product of two numbers is zero if, and only if, at least one of them is zero. Therefore, any non-zero number is orthogonal to $0$ and nothing else.

Now that we're familiar with the meaning behind orthogonal, let's go even deeper and distinguish some special cases: the orthogonal basis and the orthonormal basis.

Orthogonal and orthonormal basis

Let $\vec v_1$, $\vec v_2$, $\vec v_3$, ..., $\vec v_n$ be some vectors in a vector space. Every expression of the form

$$\alpha_1\cdot \vec v_1 + \alpha_2\cdot \vec v_2 + \alpha_3\cdot \vec v_3 + \ldots + \alpha_n\cdot \vec v_n$$

where $\alpha_1$, $\alpha_2$, $\alpha_3$, ..., $\alpha_n$ are some arbitrary real numbers, is called a linear combination of vectors. The space of all such combinations is called the span of $\vec v_1$, $\vec v_2$, $\vec v_3$, ..., $\vec v_n$.

Think of the span of vectors as all possible vectors that we can get from the bunch. A keen eye will observe that, quite often, we don't need all $n$ of the vectors to construct all the combinations. The easiest example is when one of the vectors is the zero vector (i.e., the one with zeros on every coordinate). What good is it if it stays zero no matter what we multiply it by, and therefore adds nothing to the expression?

🙋 All vectors of an orthonormal basis have length equal to $1$. We call them unit vectors, and you will need them in the calculations for the Gram-Schmidt orthonormalization. Visit the unit vector calculator to get a solid background before our dive!

A slightly less trivial example of this phenomenon is when we have the vectors $\vec e_1 = (1,0)$, $\vec e_2 = (0,1)$, and $\vec v = (1,1)$. Here we see that $\vec v = \vec e_1 + \vec e_2$, so we don't really need $\vec v$ for the linear combinations since we can already create any multiple of it by using $\vec e_1$ and $\vec e_2$.

All the above observations are connected with the so-called linear independence of vectors. In essence, we say that a bunch of vectors are linearly independent if none of them is redundant when we describe their linear combinations. Otherwise, as you might have guessed, we call them linearly dependent. You can explore this concept in depth at the linear independence calculator and the related angle between two vectors calculator!

Finally, we arrive at the definition that all the above theory has led to. A maximal set of linearly independent vectors among a bunch of them is called a basis of the space spanned by these vectors. We can determine linear dependence and the basis of a space by considering the matrix whose consecutive rows are our consecutive vectors and calculating the rank of such an array.

For example, from the triple $\vec e_1$, $\vec e_2$, and $\vec v$ above, the pair $\vec e_1$, $\vec e_2$ is a basis of the space. Note that a single vector, say $\vec e_1$, is also linearly independent, but it's not a maximal set of such elements.
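If you'd rather let a computer do the rank bookkeeping, here's a quick sketch with NumPy (assuming it's installed), using the triple from the example above:

```python
import numpy as np

# Stack the vectors as rows of a matrix and compute its rank.
vectors = np.array([[1, 0],    # e1
                    [0, 1],    # e2
                    [1, 1]])   # v = e1 + e2

print(np.linalg.matrix_rank(vectors))  # 2 -> only two vectors are independent
```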

Lastly, an orthogonal basis is a basis whose elements are orthogonal vectors to one another. Who'd have guessed, right? And an orthonormal basis is an orthogonal basis whose vectors are of length $1$.

So how do we arrive at an orthonormal basis? Well, how fortunate of you to ask! That's exactly what the Gram-Schmidt process is for, as we'll see in a second.

Gram-Schmidt orthogonalization process

The Gram-Schmidt process is an algorithm that takes whatever set of vectors you give it and spits out an orthonormal basis of the span of these vectors. Its steps are:

  1. Take vectors $\vec v_1$, $\vec v_2$, $\vec v_3$, ..., $\vec v_n$, whose orthonormal basis you'd like to find.

  2. Take $\vec u_1 = \vec v_1$ and set $\vec e_1$ to be the normalization of $\vec u_1$ (the vector with the same direction but of length $1$).

  3. Take $\vec u_2$ to be the part of $\vec v_2$ orthogonal to $\vec u_1$, and set $\vec e_2$ to be the normalization of $\vec u_2$.

  4. Choose $\vec u_3$ from $\vec v_3$ so that $\vec u_1$, $\vec u_2$, and $\vec u_3$ are orthogonal vectors, and set $\vec e_3$ to be the normalization of $\vec u_3$.

  5. Repeat the process vector by vector until you run out of vectors, motivation, or time before something interesting is on the TV.

  6. The non-zero $\vec e_i$'s are your orthonormal basis.

🙋 If you would like to learn more about vector projection, be sure to visit our vector projection calculator.

Now that we see the idea behind the Gram-Schmidt orthogonalization, let's try to describe the algorithm with mathematical precision.

First of all, let's learn how to normalize a vector. To do this, we simply multiply our vector by the inverse of its length, which is usually called its magnitude: you can learn how to calculate it at Omni's vector magnitude calculator. For a vector $\vec v$, we often denote its length by $|\vec v|$ (not to be confused with the absolute value of a number!) and calculate it by:

$$|\vec v| = \sqrt{\vec v \cdot \vec v}$$

i.e., the square root of the dot product of the vector with itself. For instance, if we want to normalize $\vec v = (1,1)$, then we get

$$\vec u = \frac{1}{|\vec v|} \cdot \vec v = \frac{1}{\sqrt{\vec v \cdot \vec v}}\cdot (1,1) = \frac{1}{\sqrt{1\cdot 1 + 1\cdot 1}}\cdot (1,1) = \frac{1}{\sqrt{2}}\cdot (1,1) = \left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right) \approx (0.7,\,0.7)$$
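In code, normalization is just as short. Here's a minimal Python sketch (the `norm` and `normalize` helper names are ours):

```python
import math

def norm(v):
    """Length (magnitude) of a vector: the square root of v . v."""
    return math.sqrt(sum(a * a for a in v))

def normalize(v):
    """Rescale v to length 1 by dividing each coordinate by |v|."""
    length = norm(v)
    return tuple(a / length for a in v)

print(normalize((1, 1)))  # (0.7071..., 0.7071...)
```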

Next, we need to learn how to find the orthogonal vectors of whatever vectors we've obtained in the Gram-Schmidt process so far. Again, the dot product comes to the rescue.

If we have vectors $\vec u_1$, $\vec u_2$, $\vec u_3$, ..., $\vec u_k$, and would like to make $\vec v$ into an element $\vec u$ orthogonal to all of them, then we apply the formula:

$$\vec u = \vec v - \frac{\vec v \cdot \vec u_1}{\vec u_1 \cdot \vec u_1}\cdot \vec u_1 - \frac{\vec v \cdot \vec u_2}{\vec u_2 \cdot \vec u_2}\cdot \vec u_2 - \frac{\vec v \cdot \vec u_3}{\vec u_3 \cdot \vec u_3}\cdot \vec u_3 - \ldots - \frac{\vec v \cdot \vec u_k}{\vec u_k \cdot \vec u_k}\cdot \vec u_k$$
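Translated into code, the formula might look like the following Python sketch (the `make_orthogonal` name is our own, and we re-implement the dot product to keep it self-contained):

```python
def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

def make_orthogonal(v, us):
    """Apply the formula above: subtract from v its projection onto each u_i.

    Note: as in Gram-Schmidt, the u_i are assumed mutually orthogonal."""
    u = list(v)
    for ui in us:
        coeff = dot(v, ui) / dot(ui, ui)
        u = [a - coeff * b for a, b in zip(u, ui)]
    return tuple(u)

# Making v = (1, 1) orthogonal to u1 = (1, 0) leaves only the second coordinate:
print(make_orthogonal((1, 1), [(1, 0)]))  # (0.0, 1.0)
```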

With this, we can rewrite the Gram-Schmidt process in a way that would make mathematicians nod and grunt their approval.

  1. Take vectors $\vec v_1$, $\vec v_2$, $\vec v_3$, ..., $\vec v_n$ whose orthonormal basis you'd like to find.

  2. Take $\vec u_1 = \vec v_1$ and set $\vec e_1 = (1 / |\vec u_1|) \cdot \vec u_1$.

  3. Take $\vec u_2 = \vec v_2 - [(\vec v_2 \cdot \vec u_1)/(\vec u_1 \cdot \vec u_1)] \cdot \vec u_1$, and set $\vec e_2 = (1 / |\vec u_2|) \cdot \vec u_2$.

  4. Take $\vec u_3 = \vec v_3 - [(\vec v_3 \cdot \vec u_1)/(\vec u_1 \cdot \vec u_1)] \cdot \vec u_1 - [(\vec v_3 \cdot \vec u_2)/(\vec u_2 \cdot \vec u_2)] \cdot \vec u_2$, and set $\vec e_3 = (1 / |\vec u_3|) \cdot \vec u_3$.

  5. Repeat the process vector by vector until you run out of vectors, motivation, or patience before finding out what happens next in the novel you're reading.

  6. The non-zero $\vec e_i$'s are your orthonormal basis.
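Putting the pieces together, here's one possible Python sketch of the whole procedure (the `gram_schmidt` function and its `eps` tolerance for deciding when a vector counts as zero are our own choices, not a canonical implementation):

```python
import math

def dot(v, w):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(v, w))

def gram_schmidt(vectors, eps=1e-10):
    """Steps 1-6 above: return an orthonormal basis of the span of `vectors`."""
    basis = []
    for v in vectors:
        u = list(v)
        for e in basis:
            coeff = dot(v, e)  # e already has length 1, so no division is needed
            u = [a - coeff * b for a, b in zip(u, e)]
        length = math.sqrt(dot(u, u))
        if length > eps:  # a (near-)zero u means v was linearly dependent
            basis.append(tuple(a / length for a in u))
    return basis

print(gram_schmidt([(1, 3, -2), (4, 7, 1)]))
```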

Admittedly, the Gram-Schmidt orthogonalization involves only simple operations, but the more vectors you have, the more time-consuming the whole thing becomes.

Alright, it's been ages since we last saw a number rather than a mathematical symbol. It's high time we had some concrete examples, wouldn't you say?

Example: using the Gram-Schmidt calculator

Say that you're a huge Pokemon GO fan but have lately come down with the flu and can't really move that much. Fortunately, your friend decided to help you out by finding a program that you plug into your phone to let you walk around in the game while lying in bed at home. Pretty cool, if you ask us.

The only problem is that in order for it to work, you need to input the vectors that will determine the directions in which your character can move. Since we live in a 3-dimensional world, they must be 3-dimensional vectors. You close your eyes, roll the dice in your head, and choose some random numbers: $(1, 3, -2)$, $(4, 7, 1)$, and $(3, -1, 12)$.

"Error! The vectors have to be orthogonal!" Oh, how troublesome... Well, it's a good thing that we have the Gram-Schmidt calculator to help us with just such problems!

We have $3$ vectors with $3$ coordinates each, so we start by telling the calculator as much by choosing the appropriate options under "Number of vectors" and "Number of coordinates." This will show us a symbolic example of such vectors with the notation used in the Gram-Schmidt calculator. For instance, the first vector is given by $\vec v = (a_1, a_2, a_3)$. Therefore, since in our case the first one is $(1, 3, -2)$, we input:

$$a_1 = 1,\quad a_2 = 3,\quad a_3 = -2$$

Similarly, for the other two, we get:

$$b_1 = 4,\quad b_2 = 7,\quad b_3 = 1$$

And:

$$c_1 = 3,\quad c_2 = -1,\quad c_3 = 12$$

Once we input the last number, the Gram-Schmidt calculator will spit out the answer. Unfortunately, just as you were about to see what it was, your phone froze. Apparently, the program is taking up too much space, and there's not enough memory left for the data transfer. When it rains, it pours... Oh well, it looks like we'll have to calculate it all by hand.

Let's denote our vectors as we did in the above section: $\vec v_1 = (1, 3, -2)$, $\vec v_2 = (4, 7, 1)$, and $\vec v_3 = (3, -1, 12)$. Then, according to the Gram-Schmidt process, the first step is to take $\vec u_1 = \vec v_1 = (1, 3, -2)$ and to find its normalization:

$$\vec e_1 = \frac{1}{|\vec u_1|}\cdot \vec u_1 = \frac{1}{\sqrt{1\cdot 1+3\cdot 3+(-2)\cdot (-2)}}\cdot (1,3,-2) = \frac{1}{\sqrt{14}}\cdot(1,3,-2) \approx (0.27,\,0.8,\,-0.53)$$

Next, we find the vector $\vec u_2$ orthogonal to $\vec u_1$:

$$\vec u_2 = \vec v_2 - \frac{\vec v_2 \cdot \vec u_1}{\vec u_1 \cdot \vec u_1} \cdot \vec u_1 = (4,7,1)-\frac{4\cdot 1 + 7\cdot 3 + 1\cdot (-2)}{1\cdot 1 + 3\cdot 3 + (-2)\cdot (-2)}\cdot (1,3,-2) = (4,7,1)-\frac{23}{14}\cdot(1,3,-2) \approx (4,7,1)-(1.64,4.93,-3.29) \approx (2.36,2.07,4.29)$$

and normalize it:

$$\vec e_2 = \frac{1}{|\vec u_2|}\cdot \vec u_2 = \frac{1}{\sqrt{5.57+4.28+18.4}}\cdot (2.36,2.07,4.29) \approx (0.44,0.39,0.81)$$

Lastly, we find the vector $\vec u_3$ orthogonal to both $\vec u_1$ and $\vec u_2$:

$$\vec u_3 = \vec v_3 -\frac{\vec v_3 \cdot \vec u_1}{\vec u_1\cdot \vec u_1}\cdot \vec u_1 - \frac{\vec v_3 \cdot \vec u_2}{\vec u_2 \cdot \vec u_2}\cdot \vec u_2 = (3,-1,12) - \frac{3+(-3)+(-24)}{14}\cdot (1,3,-2) - \frac{7.08+(-2.07)+51.48}{28.26}\cdot (2.36,2.07,4.29) = (3,-1,12)+\frac{12}{7}\cdot (1,3,-2) - \frac{56.49}{28.26}\cdot(2.36,2.07,4.29) \approx (0,0,0)$$

Oh no, we got the zero vector! That means that the three vectors we chose are linearly dependent, so there's no chance of transforming them into three orthonormal vectors... Well, we'll have to change one of them a little and do the whole thing again.
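If you'd like to double-check this by machine, the `gram_schmidt` sketch from the previous section reproduces the conclusion: it keeps only two orthonormal vectors and quietly drops the dependent third one.

```python
# The three vectors from the story are linearly dependent, so only two survive:
basis = gram_schmidt([(1, 3, -2), (4, 7, 1), (3, -1, 12)])
for e in basis:
    print(tuple(round(x, 2) for x in e))
# (0.27, 0.8, -0.53)
# (0.44, 0.39, 0.81)
```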

FAQ

What is Gram-Schmidt orthogonalization?

The Gram-Schmidt orthogonalization is a mathematical procedure that allows you to find the orthonormal basis of the vector space defined by a set of vectors. The orthonormal basis is a minimal set of vectors whose combinations span the entire space.

How do I perform the Gram-Schmidt orthogonalization?

From a set of vectors v₁, v₂, and v₃, we can find the elements of the orthonormal basis:

  1. Set u₁ = v₁.

  2. Normalize u₁: e₁ = u₁/|u₁|. This is the first element of the orthonormal basis.

  3. Find the second vector of the basis by choosing the vector orthogonal to u₁: u₂ = v₂ - [(v₂ ⋅ u₁)/(u₁ ⋅ u₁)] × u₁.

  4. Normalize u₂: e₂ = u₂/|u₂|.

  5. Repeat the procedure for v₃. Finding the null vector means that the original set is not linearly independent: the vectors span at most a two-dimensional vector space.

How do I find the second base vector if v₂=(4,2,1) and u₁=(3,-2,4)?

Starting from the first orthogonal vector u₁=(3,-2,4), follow these steps:

  1. Calculate the projection of v₂=(4,2,1) onto u₁:

    proj(v₂) = [(v₂ ⋅ u₁)/(u₁ ⋅ u₁)] × u₁ = 12/29 × (3,-2,4) = (1.24,-0.83,1.66)

  2. Subtract the projection from v₂:

    v₂ - proj(v₂) = (4,2,1) - (1.24,-0.83,1.66) = (2.76, 2.83, -0.66)

  3. The result is the vector u₂.

  4. To confirm their orthogonality, you can compute the dot product u₁ ⋅ u₂ and verify that it is equal to zero.
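For the skeptical, here's a quick Python check of the numbers above (a standalone sketch, no helpers assumed):

```python
u1 = (3, -2, 4)
v2 = (4, 2, 1)

# Projection coefficient (v2 . u1)/(u1 . u1) = 12/29.
coeff = sum(a * b for a, b in zip(v2, u1)) / sum(a * a for a in u1)
u2 = tuple(v - coeff * u for v, u in zip(v2, u1))

print(tuple(round(x, 2) for x in u2))                 # (2.76, 2.83, -0.66)
print(round(sum(a * b for a, b in zip(u1, u2)), 10))  # 0.0 -> orthogonal
```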

Can I apply Gram-Schmidt to linearly dependent vectors?

Yes, but applying the Gram-Schmidt orthonormalization to linearly dependent vectors necessarily reduces the number of vectors in the original set.

The orthonormalization procedure gives you a minimal set of vectors that spans the same space as the original set: if there is redundancy in the latter, the process eliminates it.
