
F-statistic calculator

Table of contents

  • What is the F-statistic?
  • How to calculate the F-statistic using an F-statistic table?
  • How to calculate the F-statistic in linear regression?
  • FAQs

The F-statistic calculator (or F-test calculator) helps you test the equality of the variances of two normally distributed populations, based on the ratio of the variances of samples of observations drawn from them.

Read further, and learn the following:

  • What is an F-statistic;
  • What is the F-statistic formula; and
  • How to interpret an F-statistic in regression.

What is the F-statistic?

Broadly speaking, the F-test is a procedure that compares the variances of two given populations. While an F-test may appear in various statistical or econometric problems, we apply it most frequently in regression analysis with multiple explanatory variables. In this vein, the F-statistic is comparable to the T-statistic, the main difference being that an F-test evaluates a linear combination of multiple regression coefficients jointly, whereas a T-test examines only an individual one.

In the following article, we introduce the F-test in its most basic form, using the F-distribution table for better intuition. Then we show how to calculate the F-statistic in linear regressions (see the calculator's Multiple regression mode) and explain how to interpret an F-statistic in regression analysis.

How to calculate the F-statistic using an F-statistic table?

The best way to grasp the essence of the F-test statistic is to consider its most basic form. Let's consider two populations, from each of which we draw a sample with an equal number of observations. If we want to test whether the two populations are likely to have the same variance, based on the sample variances (denoted by $S^2_i$, $i = 1, 2$), we need to follow these steps:

  1. Specify the null hypothesis $H_0$ (which in our simple case is that the two variances are equal) and the alternative hypothesis $H_1$ (which supposes that the two variances are different).

$$\begin{align*} H_0 &: S^2_1 = S^2_2 \\ H_1 &: S^2_1 \neq S^2_2 \end{align*}$$
  2. Determine the variance of the samples (here you may find our variance calculator useful).

  3. Calculate the F-test statistic by dividing the two variances:

$$F = \frac{S^2_1}{S^2_2}$$
  4. Determine the degrees of freedom $(\text{df}_i)$ of the two samples, with $n_i$ being the number of observations taken from population $i$:

$$\text{df}_i = n_i - 1$$
  5. Choose the significance level $(\alpha)$ of the F-statistic; for example, $\alpha = 0.05$ corresponds to a 95 percent confidence level.

  6. Check the critical value of the F-statistic in the F-distribution table as follows:

    • Look for the F-statistic table with the given significance level $(\alpha)$.
    • Find the column at the top of the F-table that corresponds to the degrees of freedom of your first sample (numerator).
    • Check the row on the side that corresponds to the degrees of freedom of your second sample (denominator).
    • Read the F critical value at their intersection, which represents the shaded area on the F-distribution graph below.
[Figure: graph of a typical F-distribution, with the critical region shaded]
  7. Compare the F-statistic critical value to the previously obtained F-value (check our critical value calculator to learn more about the concept). If the F-value is larger than the critical value from the F-table $(F > F_\text{critical})$, you can reject the null hypothesis. That is, we can state with high confidence that the variances of the two populations are not equal.
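
As a minimal sketch of these steps, here is a hypothetical example in Python (the two samples below are made up for illustration, and NumPy and SciPy are assumed to be available):

```python
import numpy as np
from scipy import stats

# Two hypothetical samples drawn from the populations we want to compare
sample_1 = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4])
sample_2 = np.array([3.9, 4.1, 4.0, 4.3, 3.8, 4.2, 4.1, 4.0])

# Steps 2-3: sample variances (ddof=1 for the unbiased estimator) and their ratio;
# by convention, place the larger variance in the numerator.
var_1, var_2 = sample_1.var(ddof=1), sample_2.var(ddof=1)
f_value = var_1 / var_2

# Step 4: degrees of freedom, df_i = n_i - 1
df_1, df_2 = len(sample_1) - 1, len(sample_2) - 1

# Steps 5-6: critical value at significance level alpha = 0.05
alpha = 0.05
f_critical = stats.f.ppf(1 - alpha, df_1, df_2)

# Step 7: reject H0 if F exceeds the critical value
print(f"F = {f_value:.3f}, critical value = {f_critical:.3f}")
print("Reject H0" if f_value > f_critical else "Fail to reject H0")
```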

How to calculate the F-statistic in linear regression?

Analysts mainly apply the F-statistic to multiple regression models (and so can you, with our F-test statistic calculator in Multiple regression mode). It is therefore a good idea to take a step further in this direction from the previous basic analysis.

Let's assume we have the following regression model (full model, or unrestricted model), and we would like to know whether it performs significantly better than its reduced form (restricted model). In other words, we are testing whether the restricted coefficients (or the effects of the restricted variables) are jointly non-significant (equal to zero) in the population:

Full model:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \hat{u}$$

Restricted model:

$$y = \beta_0 + \beta_1 x_1 + \hat{u}$$

where:

  • $\beta_0$ – Constant or intercept;
  • $y$ – Dependent variable (also called the regressand, response variable, explained variable, or output variable);
  • $x_i,\ i = 1, 2, 3$ – Independent variables (also called the regressors, explanatory variables, controlled variables, or input variables);
  • $\beta_i,\ i = 1, 2, 3$ – Coefficients; and
  • $\hat{u}$ – Residual (or error term).

To conduct the F-test and obtain the F-statistic (or F-value), we need to take the following steps:

  1. State the hypotheses we want to test.

    In our case, the null hypothesis $(H_0)$ is that the last two coefficients are jointly equal to zero in the unrestricted model. Or, stating the same differently, the joint effect of the related independent variables is insignificant.

    In turn, the alternative hypothesis $(H_1)$ is that at least one of these coefficients is not equal to zero.

$$\begin{align*} &\text{Specific case} \\ &H_0: \beta_2 = \beta_3 = 0 \\[1em] &\text{General case} \\ &H_0: \beta_{K-J+1} = \cdots = \beta_K = 0 \end{align*}$$
  • where:
    • $J$ is the number of restrictions (in the present case, $J = 2$); and
    • $K$ is the total number of coefficients (in the present case, $K = 3$).
  2. Now, to gain information on which model fits better, we need to obtain the sum of squared residuals $(\text{SSR})$, where we expect that the sum of squared residuals of the restricted model is larger than that of the full model (i.e., $\text{SSR}_R > \text{SSR}_F$):

$$\text{SSR} = \sum^N_{i=1} \hat{u}^2_i$$
  3. However, the real question is whether the sum of squared residuals of the restricted model is significantly larger than that of the full model (i.e., $\text{SSR}_R \gg \text{SSR}_F$). To answer it, we apply the following F-statistic formula to estimate the F-ratio:

$$F = \frac{(\text{SSR}_R - \text{SSR}_F) / J}{\text{SSR}_F / (N - K)}$$
  • where:
    • $F$ – F-statistic;
    • $\text{SSR}_F$ – Sum of squared residuals of the full model;
    • $\text{SSR}_R$ – Sum of squared residuals of the restricted model;
    • $J$ – Number of restrictions;
    • $K$ – Total number of coefficients; and
    • $N$ – Number of observations in the sample.
  4. Naturally, the larger the F-statistic, the more evidence we have to reject the null hypothesis (note that the F-statistic increases as the gap between the two sums of squared residuals grows). However, to be more precise, we need to find the critical value of the F-statistic to decide on the rejection. In other words, if $F$ is larger than its critical value, we can reject the null hypothesis.

  5. Now, we can proceed as described in the previous section: find the critical F-value $(F^J_{N-K;\alpha})$ in the F-distribution table with a specified significance level $(\alpha)$, and look for the intersection corresponding to the degrees of freedom, where $\text{df}_1 = J$ is at the top and $\text{df}_2 = N - K$ is at the side of the table (we can also say that $F$ has an F-distribution with $J$ and $N - K$ degrees of freedom). If $F$ is larger than its critical value, we can reject the null hypothesis.
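
To make these steps concrete, here is a hedged Python sketch with simulated data (the variable names and data-generating process are assumptions for illustration, not part of the calculator). Note that `K` below counts all estimated parameters of the full model, including the intercept, so that N − K equals the residual degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 100  # number of observations

# Simulated data: in the true model, x2 and x3 have no effect (so H0 holds)
x1, x2, x3 = rng.normal(size=(3, N))
y = 1.0 + 2.0 * x1 + rng.normal(size=N)

def ssr(regressors, y):
    """Sum of squared residuals from an OLS fit (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ssr_full = ssr([x1, x2, x3], y)  # full (unrestricted) model
ssr_restr = ssr([x1], y)         # restricted model: beta_2 = beta_3 = 0

J = 2  # number of restrictions
K = 4  # estimated parameters in the full model (intercept + 3 slopes)
f_value = ((ssr_restr - ssr_full) / J) / (ssr_full / (N - K))

f_critical = stats.f.ppf(0.95, J, N - K)  # critical value at alpha = 0.05
print(f"F = {f_value:.3f}, critical value = {f_critical:.3f}")
```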

So how do you interpret the F-statistic in regression?

The F-test can be interpreted as testing whether the reduction in the sum of squared residuals, moving from the restricted model to the more general model, is significant. We may write it formally in the following way:

$$P\{F > F^J_{N-K;\alpha}\} = \alpha$$

where $\alpha$ is the significance level of the test. For example, if $N - K = 40$ and $J = 4$, the critical value at the 5% level is $F^J_{N-K;\alpha} = 2.606$.
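
You can reproduce this critical value with a one-line check (assuming SciPy is installed):

```python
from scipy import stats

# Critical F-value at the 5% level with J = 4 and N - K = 40 degrees of freedom
print(round(stats.f.ppf(0.95, 4, 40), 3))  # 2.606
```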

FAQs

What is the difference between the F-test and the T-test?

There are several differences between the F-test and the T-test.

  • The T-test is applied to test the significance of one explanatory variable, while the F-test assesses the whole model.

  • While the T-test is used to compare the means of two populations, the F-test is applied to compare two population variances.

  • The T-statistic is based on Student's t-distribution, while the F-statistic follows the F-distribution under the null hypothesis.

  • While the T-test is a univariate hypothesis test where the standard deviation is unknown, the F-test is applied to determine the equality of the variances of two normal populations.

Can an F-statistic be negative?

No. Since variances always take a positive value (squared values), both the numerator and the denominator of the F-statistic formula must always be positive, resulting in a positive F-value.

What is a high F-statistic?

A large F-value tends to indicate that the null hypothesis can be rejected; you can confidently reject the null if the F-value is larger than its critical value.

Is the F-distribution symmetric?

No. The curve of the F-distribution is not symmetrical but skewed to the right (the curve has a long tail on its right side), where the shape of the curve depends on the degrees of freedom.

How to calculate F-statistic?

To calculate the F-statistic, in general, you need to follow these steps:

  1. State the null hypothesis and the alternate hypothesis.

  2. Determine the F-value with the formula F = [(SSE₁ − SSE₂) / m] / [SSE₂ / (n − k)], where SSE₁ is the residual sum of squares of the restricted model, SSE₂ is that of the full model, m is the number of restrictions, n is the number of observations, and k is the number of independent variables.

  3. Choose a significance level and determine the degrees of freedom of the numerator (m) and the denominator (n − k).

  4. Find the critical value of the F-statistic in the F-table.

  5. Support or reject the null hypothesis by comparing your F-value with the critical value.
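
For instance, plugging hypothetical numbers into the formula from step 2 (SSE₁ = 120 for the restricted model, SSE₂ = 100 for the full model, m = 2, n = 50, k = 4; all values made up for illustration):

```python
# Hypothetical values for illustration only
sse1, sse2 = 120.0, 100.0  # restricted- and full-model residual sums of squares
m, n, k = 2, 50, 4         # restrictions, observations, independent variables
f_value = ((sse1 - sse2) / m) / (sse2 / (n - k))
print(round(f_value, 2))  # 4.6
```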

What is the F-statistic of two populations with variances of 10 and 5?

The F-statistic of two populations with variances of 10 and 5 is 2. To get this result, it suffices to divide the two variances.
