# Lagrange Error Bound Calculator

Table of contents

- How to use the Lagrange error bound calculator
- What is the Lagrange error bound?
- A short introduction to the Taylor series
- How do I calculate the Lagrange error bound?
- Calculating the Lagrange error bound: an example
- FAQs

The **Lagrange error bound calculator** will help you determine the **Lagrange error bound**, the largest possible error arising from using the Taylor series to approximate a function. And what's more, this article will show you:

- **What the Lagrange error bound is**;
- How it relates to the **Taylor remainder theorem**;
- The **Lagrange error bound formula**; and
- A worked-out example of **how to calculate the Lagrange error bound**.

🙋 Looking for other calculators on number series? Why not give our **harmonic series calculator** a try?

## How to use the Lagrange error bound calculator

The **Lagrange error bound calculator** is easy to use! Simply enter your Taylor series problem's values into their corresponding fields:

- Enter the **number of terms** from the Taylor series (also known as the resulting Taylor polynomial's **degree**) into `n`.
- Enter **the** $x$**-value** at which you're evaluating the error into `x`.
- Enter **the polynomial's center** in the field labelled `a`.
- Enter **the maximum value** of the $(n+1)$-th derivative of the function you're approximating, $f(x)$, into the field labelled `M`.

The Lagrange error bound, $e_\text{max}$, will be presented **at the bottom of the calculator**.

Now, if you find yourself asking questions such as *"What is the Taylor series?"* or *"What is the Lagrange error bound?"*, then read on!

## What is the Lagrange error bound?

The Lagrange error bound is the **upper bound** on the error that results from approximating a function using the **Taylor series**. Using more terms from the series reduces the error, but it's rarely zero, and it's **hard to calculate directly**. The error bound tells us what the largest possible error is. The Lagrange error bound formula is derived from the **Taylor remainder theorem**.

💡 The Lagrange error bound can be seen as a **margin of error** for the Taylor series approximation. If you want to learn more about that topic, head over to our **margin of error calculator**.

## A short introduction to the Taylor series

If we have some function $f$, then we can perfectly replicate it using an infinite sum of terms, each expressed using a higher-order derivative of $f$. This endless sum is called the **Taylor series**.

Let's clarify the above with a mathematical definition: any function $f$ can be replicated by this infinitely long summation:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n$$

where:

- $f^{(n)}$ is the $n$-th derivative of the function $f$; and
- $a$ is any point of our choosing, as long as $f(x)$ is infinitely differentiable at $a$.

Below, you can see the Taylor series in action, as it approximates $f(x) = e^x$ with more and more terms, i.e., as the value of $n$ becomes higher and higher:

Isn't that amazing? With only a 5th-degree polynomial, we can already approximate $e^x$ to a high degree of accuracy.
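To make this concrete, here's a minimal Python sketch (our own illustration, not part of the calculator) that builds the Taylor polynomial of $e^x$ directly from the definition — convenient because every derivative of $e^x$ is again $e^x$:

```python
import math

def taylor_exp(x, n, a=0.0):
    """Approximate e^x with the degree-n Taylor polynomial centered at a.

    Every derivative of e^x is e^x, so f^(k)(a) = e^a for all k.
    """
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k)
               for k in range(n + 1))

# The degree-5 polynomial is already close to the true value at x = 1:
print(taylor_exp(1.0, 5), math.e)  # 2.71666... vs 2.71828...
```

Raising `n` shrinks the gap further, which is exactly the behavior the animation illustrates.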

However, **we now see a problem**: the Taylor series' definition is only true if we use **an infinite number of terms**. If we stop anywhere short of infinity — and as mortals, we must — then we'd only have **approximated** $f(x)$ with the Taylor polynomial, $P_n(x)$:

$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k$$

Because $P_n(x)$ is just an approximation of $f(x)$, there will be a **remainder**, $R_n(x)$:

$$f(x) = P_n(x) + R_n(x)$$

**What is the value of** $R_n(x)$? We can't know for sure because there are countless terms to calculate. We do know that the **more terms we include from the series** (i.e., the higher the polynomial's degree, $n$), the better we approximate $f(x)$ with $P_n(x)$, and **the smaller that error will become**.

Lucky for us, there is an **upper bound** on $R_n(x)$, a worst-case-scenario estimate of the error. It's called the **Lagrange error bound**, and this calculator and text is all about finding it.

💡 Errors aren't desirable, but measuring them is the first step to fixing them! Try our **relative error calculator** to get started.

## How do I calculate the Lagrange error bound?

The Lagrange error bound formula is as follows:

$$e_\text{max} = \frac{M}{(n+1)!} \left\vert x - a \right\vert^{n+1}$$

where:

- $e_\text{max}$ — Lagrange error bound;
- $x$ — Where we're determining the error between $f(x)$ and $P_n(x)$;
- $a$ — Where the polynomial is centered;
- $n$ — Number of terms from the Taylor series we're including, and thus the degree of the resulting polynomial; and
- $M$ — Largest value of $\left\vert f^{(n+1)}(z) \right\vert$ for all $z$ between $a$ and $x$:

$$M = \max_{z \in [a,\, x]} \left\vert f^{(n+1)}(z) \right\vert$$

Using $e_\text{max}$, we can place a bound on $R_n$:

$$\left\vert R_n(x) \right\vert \le e_\text{max}$$

which simply states that $R_n(x)$ cannot be larger in magnitude than $e_\text{max}$, thereby providing an upper bound on any error that might result from the approximation of $f(x)$.

That's enough formulae — let's look at an example of the Lagrange error bound and how it's calculated.

## Calculating the Lagrange error bound: an example

Let's find the error bound for the function $\sin(x)$ at $\pi/3$ when approximated by a 4th-order Taylor polynomial centered around $\pi/6$.

Right off the bat, we have some values ready to be plugged into our Lagrange error bound formula:

- $f(x) = \sin(x)$;
- $x = \pi/3$;
- $n = 4$; and
- $a = \pi/6$.

The hardest part will be determining $M$, the largest value of the $(n+1) = 5$th derivative of $f(x)$ within our interval $[a, x] = [\pi/6, \pi/3]$. Let's find it.

The higher-order derivatives of $\sin(x)$ are given below:

| Order | Derivative |
| --- | --- |
| 1 | $\cos(x)$ |
| 2 | $-\sin(x)$ |
| 3 | $-\cos(x)$ |
| 4 | $\sin(x)$ |
| 5 | $\cos(x)$ |

Great! $f^{(5)}(x) = \cos(x)$. Now we need to know the biggest value $\cos(x)$ can take at any point in the range $[\pi/6, \pi/3]$. We could use some fancy mathematics for this, but we can also reason that:

- $\cos(0) = 1$;
- $\cos(\pi/2) = 0$; and
- $\cos(x)$ is strictly decreasing over the range $0 \le x \le \pi/2$.

So, the largest value of $\cos(x)$ over $[\pi/6, \pi/3]$ occurs at the endpoint closer to $x = 0$, and therefore $M = \cos(\pi/6) = \sqrt{3}/2 \approx 0.866$.
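This endpoint reasoning is easy to double-check numerically; a quick Python sketch (our own, not part of the article's method):

```python
import math

# Sample cos over [pi/6, pi/3] and confirm its maximum sits at the
# left endpoint, pi/6, as the monotonicity argument predicts.
lo, hi = math.pi / 6, math.pi / 3
M = max(math.cos(lo + (hi - lo) * i / 1000) for i in range(1001))
print(M, math.cos(lo))  # both ~0.8660
```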

Now, we have everything we need to determine $e_\text{max}$:

$$e_\text{max} = \frac{\cos(\pi/6)}{5!} \left\vert \frac{\pi}{3} - \frac{\pi}{6} \right\vert^{5} = \frac{0.866}{120} \left( \frac{\pi}{6} \right)^{5} \approx 0.000284$$

Thanks to this result, we know that our error cannot be greater than 0.000284. Let's see if that's true:

$$\left\vert R_4(\pi/3) \right\vert = \left\vert \sin(\pi/3) - P_4(\pi/3) \right\vert \approx \left\vert 0.866025 - 0.865758 \right\vert \approx 0.000267 \le 0.000284$$

The Lagrange error bound is correct! And what's more, if we take the same steps for other values of $n$, we can see how the error bound decreases as we approximate $f(x)$ better:

| $n$ | $e_\text{max}$ | $R_n(x)$ |
| --- | --- | --- |
| $1$ | $0.118712$ | $0.087424$ |
| $2$ | $0.020719$ | $0.018885$ |
| $3$ | $0.002712$ | $0.001833$ |
| $4$ | $0.000284$ | $0.000267$ |
| $5$ | $2.478505 \cdot 10^{-5}$ | $1.608669 \cdot 10^{-5}$ |
| $6$ | $1.853917 \cdot 10^{-6}$ | $1.777034 \cdot 10^{-6}$ |
| $7$ | $1.213386 \cdot 10^{-7}$ | $7.688356 \cdot 10^{-8}$ |
| $8$ | $7.059195 \cdot 10^{-9}$ | $6.828675 \cdot 10^{-9}$ |
| $9$ | $3.696186 \cdot 10^{-10}$ | $2.305197 \cdot 10^{-10}$ |
| $10$ | $1.759380 \cdot 10^{-11}$ | $1.712037 \cdot 10^{-11}$ |
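The values in this table can be reproduced with a short Python sketch (the helper name is ours; note that on $[\pi/6, \pi/3]$ both $\vert\sin\vert$ and $\vert\cos\vert$ peak at $\sqrt{3}/2$, so the same $M$ works for every $n$ here):

```python
import math

def taylor_sin(x, n, a):
    """Degree-n Taylor polynomial of sin centered at a.

    The k-th derivative of sin cycles through sin, cos, -sin, -cos.
    """
    cycle = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(cycle[k % 4] * (x - a) ** k / math.factorial(k)
               for k in range(n + 1))

a, x = math.pi / 6, math.pi / 3
# On [pi/6, pi/3], |sin| and |cos| are both bounded by cos(pi/6) = sqrt(3)/2.
M = math.cos(math.pi / 6)
for n in range(1, 11):
    e_max = M * abs(x - a) ** (n + 1) / math.factorial(n + 1)
    r_n = abs(math.sin(x) - taylor_sin(x, n, a))
    print(f"n={n:2d}  e_max={e_max:.6e}  R_n={r_n:.6e}")
```

Each printed row should match the table: the true remainder always stays below the bound.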

And there you have it. We've shown you how to calculate the Lagrange error bound — now it's your turn!

### What is the Lagrange error bound for f(x) = 1/(1-x) at x=0.1 with a 2nd-order Maclaurin series?

The Lagrange error bound is 0.0015. We can calculate it with the following steps:

- Find the values for `a`, `x`, and `n`: `a = 0` (since we're using the Maclaurin series); `x = 0.1`; and `n = 2`.
- Find the largest value of the `(n+1)`th derivative of `f` anywhere between `a` and `x`. Here, `f'''(z) = 6/(1-z)^4` is largest at `z = 0.1`, giving `M = 6/0.9^4 ≈ 9.14` — you can do the math to see why.
- Plug these values into the Lagrange error bound formula: `M/(n+1)! × |x-a|^(n+1) = 9.14/6 × 0.1^3 ≈ 0.0015`.
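As a quick sketch, the same calculation in Python (our own illustration):

```python
import math

# FAQ example: f(x) = 1/(1-x), Maclaurin series (a = 0), n = 2, x = 0.1.
# f'''(z) = 6/(1-z)^4 is increasing on [0, 0.1], so it peaks at z = 0.1.
M = 6 / (1 - 0.1) ** 4                            # ≈ 9.14
e_max = M * abs(0.1 - 0) ** 3 / math.factorial(3)
print(round(e_max, 4))  # 0.0015
```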

### What is the Taylor remainder?

The Taylor remainder is the **difference between the Taylor polynomial approximation of a function and the actual function itself**. Because the full Taylor series is infinitely long, any polynomial we can actually compute only approximates the original function, leaving a remainder. However, we can put an upper limit on the Taylor remainder, which is called the Lagrange error bound.

### What is M in the Lagrange error bound?

`M` represents the largest magnitude that the `(n+1)`th derivative of the function `f` can take on between points `a` and `x`, when you're approximating the value of `f` at `x` with an `n`th degree Taylor polynomial centered at `a`.