MSE Calculator
Omni's MSE calculator is here for you whenever you need to quickly determine the sum of squared errors (SSE) and mean squared error (MSE) when searching for the line of best fit. You can also use this tool if you are wondering how to calculate MSE by hand, since it can show you the results of the intermediate calculations.
Not sure what MSE is? Need just the formula for MSE, or rather looking for a precise mathematical definition of MSE and an explanation of the reasoning behind it? You're in the right place! Scroll down to learn everything you need to know about MSE in statistics! An example of MSE calculated step by step is also included for your convenience!
What is MSE in statistics?
In statistics, the mean squared error (MSE) measures how close predicted values are to observed values. Mathematically, MSE is the average of the squared differences between the predicted values and the observed values. We often use the term residuals to refer to these individual differences.
We most often define the predicted values as the values obtained from simple linear regression, or just as the arithmetic mean of the observed values; in the latter case, all the predicted values are equal.
💡 In simple linear regression, the line of best fit found via the method of least squares is exactly the line that minimizes MSE! 
We now have a basic idea of what MSE is, so it's time to quickly explain how to find MSE with the help of our mean square error calculator.
How to use this MSE calculator?
It can't be any simpler! To use our MSE calculator most efficiently, follow these steps:

Choose the mode of the mean square error calculator: should the predicted values be automatically set as the average of the observed values, or do you want to enter custom values?

Next, input your data. You can enter up to 30 values; the fields will appear as you go.

The MSE and SSE of your observations are already there!

Do you want to see some of the details of the calculations? Turn the Show details? option to Yes! Tip: this option allows you to use the calculator to generate examples of MSE!

You can also increase the precision of the calculations; just enter the advanced mode and adjust the Precision value.
As nice as it is to use Omni's MSE calculator, it may happen that you'll have to compute MSE or SSE by hand. In the next section, we'll provide you with all the formulas you need.
How to find MSE and SSE?
Let x_{1}, ..., x_{n} be the observed values and y_{1}, ..., y_{n} be the predicted values.
The equation for MSE is the following:
MSE = (1/n) * Σ_{i}(x_{i} − y_{i})²,
where i runs from 1 to n.
If we ignore the 1/n factor in front of the sum, we arrive at the formula for SSE:
SSE = Σ_{i}(x_{i} − y_{i})²,
where i runs from 1 to n. In other words, the relationship between SSE and MSE is the following:
MSE = SSE / n.
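These formulas translate directly into code. Here is a minimal Python sketch (the function names `sse` and `mse` are our own, not part of any particular library):

```python
def sse(observed, predicted):
    """Sum of squared errors: the sum of the squared residuals."""
    return sum((x - y) ** 2 for x, y in zip(observed, predicted))

def mse(observed, predicted):
    """Mean squared error: SSE divided by the number of observations."""
    return sse(observed, predicted) / len(observed)

# For observed [1, 2, 3] and predicted [1, 2, 4], the only nonzero
# residual is 3 - 4 = -1, so SSE = 1 and MSE = 1/3.
```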
Matrix formula for MSE
Let us consider the column vector e with coefficients defined as
e_{i} = x_{i} − y_{i}
for i = 1, ..., n. That is, e is the vector of residuals.
Using e, we can say that MSE is equal to 1/n times the squared magnitude of e, or 1/n times the dot product of e with itself:
MSE = (1/n) * ‖e‖² = (1/n) * e ∙ e.
Alternatively, we can rewrite this MSE equation as follows:
MSE = (1/n) * e^{T}e,
where e^{T} is the transpose of e, i.e., a row vector, and the operation between e^{T} and e is matrix multiplication.
The above formulas lead us immediately to the following expression for SSE:
SSE = ‖e‖² = e ∙ e = e^{T}e.
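If you work with NumPy, the dot-product form of the formula becomes a one-liner. A short illustration with made-up numbers (the arrays `x` and `y` below are purely illustrative):

```python
import numpy as np

x = np.array([3.0, 5.0, 7.0])  # observed values (illustrative)
y = np.array([2.0, 6.0, 7.0])  # predicted values (illustrative)

e = x - y          # vector of residuals: [1, -1, 0]
n = len(e)
sse = e @ e        # e^T e, the dot product of e with itself
mse = sse / n
```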
Why do we take squares in MSE?
Wouldn't it be simpler and more intuitive to add the differences between actual data and predictions without squaring them first?
No, there are good reasons for taking the squares!
Namely, the predicted values can be greater than or less than the observed values. And when we add together positive and negative differences, individual errors may cancel each other out. As a result, we can get the sum close to (or even equal to) zero even though the terms were relatively large. This could lead us to a false conclusion that our prediction is accurate since the error is low.
In contrast, when we take the square of each difference, we get a positive number, and each individual error increases the sum. In other words, squaring makes both positive and negative differences contribute to the final value in the same way. Thanks to squaring, we can say that the smaller the value of MSE, the better the model.
In particular, if the predicted values coincided perfectly with observed values, then MSE would be zero. This, however, nearly never happens in practice: MSE is almost always strictly positive because there's almost always some noise (randomness) in the observed values.
As you can see, we really can't take simple differences. However, squares are not the only option! In the next section, we will tell you, among other things, about MAE, which uses absolute values instead of squares to achieve exactly the same effect: getting rid of the negative signs of the differences.
Alternatives to MSE in statistics
As we've seen in the formulas, the units of MSE are the square of the original units, exactly as in the case of variance. To return to the original units, we often take the square root of MSE, obtaining the root mean squared error (RMSE):
RMSE = √MSE.
This is analogous to taking the square root of the variance in order to get the standard deviation.
Another (slightly less popular) measure of the quality of prediction is the mean absolute error (MAE), where, instead of squaring the differences between observed and predicted values, we take the absolute differences between them:
MAE = (1/n) * Σ_{i}|x_{i} − y_{i}|,
where i runs from 1 to n. When the predicted values are all equal to the mean of the observed values, we arrive at the mean absolute deviation.
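Both RMSE and MAE take only a few lines of Python. The sketch below uses only the standard library (the function names are our own):

```python
import math

def rmse(observed, predicted):
    """Root mean squared error: the square root of MSE."""
    n = len(observed)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(observed, predicted)) / n)

def mae(observed, predicted):
    """Mean absolute error: the average of the absolute residuals."""
    n = len(observed)
    return sum(abs(x - y) for x, y in zip(observed, predicted)) / n
```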
MSE example
Phew, we're finally done with the definition of MSE and all the formulas. It's high time we looked at an example!
Assume we have the following data:
3, 15, 6, 3, 44, 8, 15, 9, 7, 25, 24, 5, 88, 44, 3, 21.

We see there are sixteen numbers, so n = 16.
Next, we compute the average:
(3 + 15 + 6 + 3 + 44 + 8 + 15 + 9 + 7 + 25 + 24 + 5 + 88 + 44 + 3 + 21) / 16 = 320 / 16 = 20.
Hence, μ = 20.
We compute the differences between each observation and the mean
μ
and also their squares:
x     x − μ     (x − μ)²
3     −17       289
15    −5        25
6     −14       196
3     −17       289
44    24        576
8     −12       144
15    −5        25
9     −11       121
7     −13       169
25    5         25
24    4         16
5     −15       225
88    68        4624
44    24        576
3     −17       289
21    1         1

We sum the numbers from the 3rd column:
289 + 25 + 196 + 289 + 576 + 144 + 25 + 121 + 169 + 25 + 16 + 225 + 4624 + 576 + 289 + 1 = 7590,
to get the SSE: SSE = 7590.
To find MSE, we divide SSE by the sample length n = 16: MSE = 7590 / 16 = 474.375.
To find RMSE, we take the square root of MSE: RMSE = √474.375 ≈ 21.78.
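You can verify every step of this example with a few lines of Python, using the same data as above:

```python
data = [3, 15, 6, 3, 44, 8, 15, 9, 7, 25, 24, 5, 88, 44, 3, 21]

n = len(data)
mu = sum(data) / n                       # average of the observations: 20
sse = sum((x - mu) ** 2 for x in data)   # sum of squared deviations: 7590
mse = sse / n                            # 7590 / 16 = 474.375
rmse = mse ** 0.5                        # about 21.78
```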
FAQ
How do I calculate MSE by hand?
To calculate MSE by hand, follow these instructions:
1. Compute the differences between the observed values and the predictions.
2. Square each of these differences.
3. Add all the squared differences together.
4. Divide this sum by the sample length.
5. That's it, you've found the MSE of your data!
How do I calculate SSE from MSE?
If you're given MSE, just one simple step separates you from finding SSE! The only thing you need to know is the sample length n. Then apply this formula:
SSE = MSE × n
and enjoy your newly computed SSE!
How do I calculate RMSE from MSE?
To calculate RMSE from MSE, you need to remember that RMSE is the abbreviation of root mean squared error, so, as its name indicates, RMSE is just the square root of MSE:
RMSE = √MSE
.
How do I calculate RMSE from SSE?
In order to correctly calculate RMSE from SSE, recall that RMSE is the square root of MSE, which, in turn, is SSE divided by the sample length n
. Combining these two formulas, we arrive at the following direct relationship between RMSE and SSE:
RMSE = √(SSE / n)
.
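Combining the two formulas in code gives a small helper (the name `rmse_from_sse` is our own):

```python
import math

def rmse_from_sse(sse, n):
    """RMSE = sqrt(MSE) = sqrt(SSE / n)."""
    return math.sqrt(sse / n)

# With the SSE and sample length from the worked example above,
# rmse_from_sse(7590, 16) is about 21.78.
```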