Critical Value Calculator
Welcome to the critical value calculator! Here you can quickly determine the critical value(s) for two-tailed tests, as well as for one-tailed tests. It works for most common distributions in statistical testing: the standard normal distribution N(0,1) (that is, when you have a Z-score), t-Student, chi-square, and F-distribution.
What is a critical value? And what is the critical value formula? Scroll down: we provide you with the critical value definition and explain how to calculate critical values in order to use them to construct rejection regions (also known as critical regions).
What is a critical value?
In hypothesis testing, critical values are one of the two approaches which allow you to decide whether to retain or reject the null hypothesis. The other approach is to calculate the p-value.
The critical value approach consists of checking if the value of the test statistic generated by your sample belongs to the so-called rejection region, or critical region, which is the region where the test statistic is highly improbable to lie. A critical value is a cutoff value (or two cutoff values in the case of a two-tailed test) that constitutes the boundary of the rejection region(s). In other words, critical values divide the scale of your test statistic into the rejection region and the non-rejection region.
Once you have found the rejection region, check if the value of the test statistic generated by your sample belongs to it:
 if so, it means that you can reject the null hypothesis and accept the alternative hypothesis; and
 if not, then there is not enough evidence to reject H₀.
But how do you calculate critical values? First of all, you need to set a significance level, α, which quantifies the probability of rejecting the null hypothesis when it is actually correct. The choice of α is arbitrary; in practice, we most often use a value of 0.05 or 0.01. Critical values also depend on the alternative hypothesis you choose for your test, as elucidated in the next section.
Critical value definition
To determine critical values, you need to know the distribution of your test statistic under the assumption that the null hypothesis holds. Critical values are then the points on that distribution with the property that the probability of your test statistic taking values at least as extreme as them is equal to the significance level α.
The alternative hypothesis determines what "at least as extreme" means. In particular, if the test is one-sided, there will be just one critical value; if it is two-sided, there will be two of them: one to the left and the other to the right of the median value of the distribution.
Critical values can be conveniently depicted as the points with the property that the area under the density curve of the test statistic from those points to the tails is equal to α:

left-tailed test: the area under the density curve from the critical value to the left is equal to α;

right-tailed test: the area under the density curve from the critical value to the right is equal to α; and

two-tailed test: the area under the density curve from the left critical value to the left is equal to α/2, and the area under the curve from the right critical value to the right is equal to α/2 as well; thus, the total area equals α.
As you can see, finding the critical values for a two-tailed test with significance α boils down to finding both one-tailed critical values with a significance level of α/2.
How to calculate critical values?
The formulae for the critical values involve the quantile function, Q, which is the inverse of the cumulative distribution function (cdf) of the test statistic's distribution (calculated under the assumption that H₀ holds!): Q = cdf^{-1}
Once we have agreed upon the value of α, the critical value formulae are the following:

left-tailed test: (−∞, Q(α)]

right-tailed test: [Q(1 − α), ∞)

two-tailed test: (−∞, Q(α/2)] ∪ [Q(1 − α/2), ∞)
In the case of a distribution symmetric about 0, the critical values for the twotailed test are symmetric as well:
Q(1 − α/2) = −Q(α/2)
Unfortunately, the probability distributions that are the most widespread in hypothesis testing have somewhat complicated cdf formulae. To find critical values by hand, you would need to use specialized software or statistical tables. In these cases, the best option is, of course, our critical value calculator! 😁
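If you prefer to compute critical values in code, here is a minimal sketch of the rejection-region formulae above using only Python's standard library; the function names are illustrative, and the standard normal N(0,1) serves as the example distribution:

```python
# Minimal sketch of the critical value formulae, using only the Python
# standard library. Q is the quantile function (inverse cdf) of the test
# statistic's distribution under H0; here we take the standard normal N(0,1).
from statistics import NormalDist

Q = NormalDist().inv_cdf  # Q = cdf^{-1} for N(0,1)

def rejection_region(alpha, tail):
    """Return the critical value(s) bounding the rejection region."""
    if tail == "left":    # rejection region (-inf, Q(alpha)]
        return Q(alpha)
    if tail == "right":   # rejection region [Q(1 - alpha), inf)
        return Q(1 - alpha)
    if tail == "two":     # (-inf, Q(alpha/2)] U [Q(1 - alpha/2), inf)
        return Q(alpha / 2), Q(1 - alpha / 2)
    raise ValueError("tail must be 'left', 'right', or 'two'")

print(round(rejection_region(0.05, "right"), 4))  # 1.6449
```

Note how the two-tailed case simply reuses the one-tailed formulae at level α/2, exactly as described above.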
How to use this critical value calculator?
Now that you have found our critical value calculator, you no longer need to worry about how to find the critical value for all those complicated distributions! Here are the steps you need to follow:

Tell us the distribution of your test statistic under the null hypothesis: is it a standard normal N(0,1), t-Student, chi-squared, or Snedecor's F? If you are not sure, check the sections below devoted to those distributions, and try to locate the test you need to perform.

Choose the alternative hypothesis: two-tailed, right-tailed, or left-tailed.

If needed, specify the degrees of freedom of the test statistic's distribution. If you are not sure, check the description of the test you are performing.

Set the significance level, α. We preset it to the most common value, 0.05, by default, but you can, of course, adjust it to your needs.
The critical value calculator will then display not only your critical value(s) but also the rejection region(s).
Go to the advanced mode of the critical value calculator if you need to increase the precision with which the critical values are computed.
Z critical values
Use the Z (standard normal) option if your test statistic follows (at least approximately) the standard normal distribution N(0,1).
In the formulae below, u denotes the quantile function of the standard normal distribution N(0,1):

left-tailed Z critical value: u(α)

right-tailed Z critical value: u(1 − α)

two-tailed Z critical value: ±u(1 − α/2)
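As a quick check of these formulae, the standard normal quantile function u is available in Python's standard library (no external packages needed):

```python
# Z critical values at alpha = 0.05, via the stdlib quantile function.
from statistics import NormalDist

u = NormalDist().inv_cdf  # quantile function of N(0,1)
alpha = 0.05

print(round(u(alpha), 4))          # left-tailed:  u(alpha)        = -1.6449
print(round(u(1 - alpha), 4))      # right-tailed: u(1 - alpha)    =  1.6449
print(round(u(1 - alpha / 2), 4))  # two-tailed:  ±u(1 - alpha/2)  =  ±1.96
```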
Check out the Z-test calculator to learn more about the most common Z-test, used on the population mean. There are also Z-tests for the difference between two population means, in particular, one between two proportions.
t critical values
Use the t-Student option if your test statistic follows the t-Student distribution. This distribution is similar to N(0,1), but its tails are fatter; the exact shape depends on the number of degrees of freedom. If this number is large (>30), which generically happens for large samples, the t-Student distribution is practically indistinguishable from N(0,1).
In the formulae below, Q_{t,d} is the quantile function of the t-Student distribution with d degrees of freedom:

left-tailed t critical value: Q_{t,d}(α)

right-tailed t critical value: Q_{t,d}(1 − α)

two-tailed t critical values: ±Q_{t,d}(1 − α/2)
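The t critical value formulae can be evaluated numerically as well; here is a short sketch, assuming SciPy is installed (its `ppf` method is the quantile function Q):

```python
# t critical values via SciPy's quantile function (ppf = Q); assumes SciPy
# is installed. Example: alpha = 0.05 with d = 10 degrees of freedom.
from scipy.stats import t

alpha, d = 0.05, 10
print(round(t.ppf(alpha, d), 3))          # left-tailed:   Q_{t,d}(alpha)
print(round(t.ppf(1 - alpha, d), 3))      # right-tailed:  Q_{t,d}(1 - alpha)
print(round(t.ppf(1 - alpha / 2, d), 3))  # two-tailed:   ±Q_{t,d}(1 - alpha/2) ≈ ±2.228
```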
Visit the t-test calculator to learn more about various t-tests: the one for a population mean with an unknown population standard deviation, those for the difference between the means of two populations (with either equal or unequal population standard deviations), as well as about the t-test for paired samples.
chi-square critical values (χ²)
Use the χ² (chi-square) option when performing a test in which the test statistic follows the χ²-distribution.
You need to determine the number of degrees of freedom of the χ²-distribution of your test statistic; below we list them for the most commonly used χ²-tests.
Here we give the formulae for chi-square critical values; Q_{χ²,d} is the quantile function of the χ²-distribution with d degrees of freedom:

Left-tailed χ² critical value: Q_{χ²,d}(α)

Right-tailed χ² critical value: Q_{χ²,d}(1 − α)

Two-tailed χ² critical values: Q_{χ²,d}(α/2) and Q_{χ²,d}(1 − α/2)
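Numerically, these formulae can be sketched as follows, assuming SciPy is installed (`ppf` is the quantile function Q):

```python
# Chi-square critical values via SciPy (ppf = quantile function Q); assumes
# SciPy is installed. Example: alpha = 0.05 with d = 10 degrees of freedom.
from scipy.stats import chi2

alpha, d = 0.05, 10
print(round(chi2.ppf(1 - alpha, d), 3))      # right-tailed: Q(1 - alpha) ≈ 18.307
print(round(chi2.ppf(alpha / 2, d), 3))      # two-tailed, left:  Q(alpha/2)
print(round(chi2.ppf(1 - alpha / 2, d), 3))  # two-tailed, right: Q(1 - alpha/2)
```

Unlike the Z and t cases, the χ²-distribution is not symmetric, so the two two-tailed critical values must be computed separately.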
Several different tests lead to a χ²-score:

Goodness-of-fit test: does the empirical distribution agree with the expected distribution?
This test is right-tailed. Its test statistic follows the χ²-distribution with k − 1 degrees of freedom, where k is the number of classes into which the sample is divided.

Independence test: is there a statistically significant relationship between two variables?
This test is also right-tailed, and its test statistic is computed from the contingency table. There are (r − 1)(c − 1) degrees of freedom, where r is the number of rows, and c the number of columns in the contingency table.

Test for the variance of normally distributed data: does this variance have some predetermined value?
This test can be one- or two-tailed! Its test statistic has the χ²-distribution with n − 1 degrees of freedom, where n is the sample size.
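As a quick illustration of the degrees-of-freedom rules above, take a hypothetical 3×4 contingency table for an independence test, with the usual α = 0.05 (the sketch assumes SciPy is installed):

```python
# Degrees of freedom and right-tailed critical value for a chi-square
# independence test on a hypothetical 3x4 contingency table; assumes SciPy.
from scipy.stats import chi2

r, c = 3, 4                          # rows and columns of the table
d = (r - 1) * (c - 1)                # (r - 1)(c - 1) degrees of freedom
print(d)                             # 6
print(round(chi2.ppf(0.95, d), 3))   # right-tailed critical value ≈ 12.592
```

If the χ²-score computed from the table exceeds this critical value, the test rejects the hypothesis of independence.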
F critical values
Finally, choose F (Fisher-Snedecor) if your test statistic follows the F-distribution. This distribution has a pair of degrees of freedom.
Let us see how those degrees of freedom arise. Assume that you have two independent random variables, X and Y, that follow χ²-distributions with d_{1} and d_{2} degrees of freedom, respectively. If you now consider the ratio (X/d_{1})/(Y/d_{2}), it turns out it follows the F-distribution with (d_{1}, d_{2}) degrees of freedom. That's the reason why we call d_{1} and d_{2} the numerator and denominator degrees of freedom, respectively.
In the formulae below, Q_{F,d1,d2} stands for the quantile function of the F-distribution with (d_{1}, d_{2}) degrees of freedom:

Left-tailed F critical value: Q_{F,d1,d2}(α)

Right-tailed F critical value: Q_{F,d1,d2}(1 − α)

Two-tailed F critical values: Q_{F,d1,d2}(α/2) and Q_{F,d1,d2}(1 − α/2)
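These formulae can be sketched numerically as well, assuming SciPy is installed; the example degrees of freedom (3, 20) are illustrative:

```python
# F critical values via SciPy (ppf = quantile function Q); assumes SciPy is
# installed. Example: alpha = 0.05 with (d1, d2) = (3, 20) degrees of freedom.
from scipy.stats import f

alpha, d1, d2 = 0.05, 3, 20
print(round(f.ppf(1 - alpha, d1, d2), 3))      # right-tailed: Q(1 - alpha) ≈ 3.10
print(round(f.ppf(alpha / 2, d1, d2), 3))      # two-tailed, left:  Q(alpha/2)
print(round(f.ppf(1 - alpha / 2, d1, d2), 3))  # two-tailed, right: Q(1 - alpha/2)
```

Like the χ²-distribution, the F-distribution is not symmetric, so both two-tailed critical values must be computed separately.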
Here we list the most important tests that produce F-scores: each of them is right-tailed.

ANOVA: tests the equality of means in three or more groups that come from normally distributed populations with equal variances. There are (k − 1, n − k) degrees of freedom, where k is the number of groups, and n is the total sample size (across every group).

Overall significance in regression analysis. The test statistic has (k − 1, n − k) degrees of freedom, where n is the sample size, and k is the number of variables (including the intercept).

Compare two nested regression models. The test statistic follows the F-distribution with (k_{2} − k_{1}, n − k_{2}) degrees of freedom, where k_{1} and k_{2} are the number of variables in the smaller and bigger models, respectively, and n is the sample size.

The equality of variances in two normally distributed populations. There are (n − 1, m − 1) degrees of freedom, where n and m are the respective sample sizes.
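For example, the ANOVA degrees-of-freedom rule above can be applied like this (hypothetical numbers: k = 3 groups and n = 30 observations in total; the sketch assumes SciPy is installed):

```python
# ANOVA degrees of freedom and right-tailed F critical value for a
# hypothetical design with 3 groups and 30 observations; assumes SciPy.
from scipy.stats import f

k, n = 3, 30                          # number of groups, total sample size
d1, d2 = k - 1, n - k                 # (k - 1, n - k) degrees of freedom
print(d1, d2)                         # 2 27
print(round(f.ppf(0.95, d1, d2), 3))  # right-tailed critical value at alpha = 0.05
```

An F-score above this critical value would lead to rejecting the hypothesis that all group means are equal.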