p-value Calculator
Welcome to our p-value calculator! You will never again have to wonder how to find the p-value, as here you can determine the one-sided and two-sided p-values from test statistics, following all the most popular distributions: normal, t-Student, chi-squared, and Snedecor's F.
P-values appear all over science, yet many people find the concept a bit intimidating. Don't worry: in this article we explain not only what the p-value is, but also how to interpret p-values correctly. Have you ever been curious about how to calculate the p-value by hand? We provide you with all the necessary formulae as well!
What is p-value?
Formally, the p-value is the probability that the test statistic will produce values at least as extreme as the value it produced for your sample. It is crucial to remember that this probability is calculated under the assumption that the null hypothesis is true!
More intuitively, the p-value answers the question: Assuming that I live in a world where the null hypothesis holds, how probable is it that, for another sample, the test I'm performing will generate a value at least as extreme as the one I observed for the sample I already have?
It is the alternative hypothesis which determines what "extreme" actually means, so the p-value depends on the alternative hypothesis that you state: left-tailed, right-tailed, or two-tailed.
In the formulas below, S stands for a test statistic, x for the value it produced for a given sample, and Pr(event | H_{0}) is the probability of an event, calculated under the assumption that H_{0} is true:

Left-tailed test:
p-value = Pr(S ≤ x | H_{0})

Right-tailed test:
p-value = Pr(S ≥ x | H_{0})

Two-tailed test:
p-value = 2 * min{Pr(S ≤ x | H_{0}), Pr(S ≥ x | H_{0})}
(By min{a,b} we denote the smaller of the numbers a and b.)
If the distribution of the test statistic under H_{0} is symmetric about 0, then
p-value = 2 * Pr(S ≥ |x| | H_{0})
or, equivalently,
p-value = 2 * Pr(S ≤ −|x| | H_{0})
As a picture is worth a thousand words, let us illustrate these definitions. Here we use the fact that the probability can be neatly depicted as the area under the density curve for a given distribution. We give two sets of pictures: one for a symmetric distribution, and the other for a skewed (non-symmetric) distribution.

Symmetric case: normal distribution

Non-symmetric case: chi-squared distribution
In the last picture (two-tailed p-value for a skewed distribution), the area of the left-hand side is equal to the area of the right-hand side.
How to calculate p-value from test statistic?
To determine the p-value, you need to know the distribution of your test statistic under the assumption that the null hypothesis is true. Then, with the help of the cumulative distribution function (cdf) of this distribution, we can express the probability of the test statistic taking values at least as extreme as the value x it produced for the sample:

Left-tailed test:
p-value = cdf(x)

Right-tailed test:
p-value = 1 − cdf(x)

Two-tailed test:
p-value = 2 * min{cdf(x), 1 − cdf(x)}
If the distribution of the test statistic under H_{0} is symmetric about 0, then the two-sided p-value can be simplified to:
p-value = 2 * cdf(−|x|) = 2 − 2 * cdf(|x|)
The probability distributions that are most widespread in hypothesis testing tend to have complicated cdf formulae, and finding the p-value by hand may not be possible. You'll likely need to resort to a computer, or to a statistical table, where people have gathered approximate cdf values.
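In practice, statistical software evaluates these cdfs for you. Below is a minimal sketch (not this calculator's actual code) of the three tail formulas as a single Python function, illustrated with SciPy's standard normal cdf; the value 1.96 is just an example test statistic:

```python
from scipy.stats import norm

def p_value(cdf, x, tail="two-tailed"):
    """p-value for an observed test statistic value x, given the cdf
    of the statistic's distribution under the null hypothesis."""
    if tail == "left-tailed":
        return cdf(x)
    if tail == "right-tailed":
        return 1 - cdf(x)
    # two-tailed: double the smaller of the two tail probabilities
    return 2 * min(cdf(x), 1 - cdf(x))

# Example with the standard normal distribution N(0,1):
print(round(p_value(norm.cdf, 1.96, "two-tailed"), 3))  # ≈ 0.05
```

The same function works for any distribution, as long as you pass in the appropriate cdf.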
Well, you now know how to calculate the p-value, but... why do you need to calculate this number in the first place? In hypothesis testing, the p-value approach is an alternative to the critical value approach. Recall that the latter requires researchers to preset the significance level, α, which is the probability of rejecting the null hypothesis when it is true (i.e., of committing a type I error). Once you have your p-value, you just need to compare it with any given α to quickly decide whether or not to reject the null hypothesis at that significance level. For details, check the next section, where we explain how to interpret p-values.
How to interpret p-value?
As we have mentioned above, the p-value is the answer to the following question:
Assuming that I live in a world where the null hypothesis holds, how probable is it that, for another sample, the test I'm performing will generate a value at least as extreme as the one I observed for the sample I already have?
What does that mean for you? Well, you've got two options:
A high p-value means that your data is highly compatible with the null hypothesis; and
A small p-value provides evidence against the null hypothesis, as it means that your result would be very improbable if the null hypothesis were true.
However, it may happen that the null hypothesis is true, but your sample is highly unusual! For example, imagine we studied the effect of a new drug and got a p-value of 0.03. This means that, in 3% of similar studies, random chance alone would still be able to produce the value of the test statistic that we obtained, or a value even more extreme, even if the drug had no effect at all!
The question "what is p-value" can also be answered as follows: the p-value is the smallest level of significance at which the null hypothesis would be rejected. So, if you now want to make a decision about the null hypothesis at some significance level α, just compare your p-value with α:

If p-value ≤ α, then you reject the null hypothesis and accept the alternative hypothesis; and
If p-value > α, then you don't have enough evidence to reject the null hypothesis.
Obviously, the fate of the null hypothesis depends on α. For instance, if the p-value was 0.03, we would reject the null hypothesis at a significance level of 0.05, but not at a level of 0.01. That's why the significance level should be stated in advance, and not adapted conveniently after the p-value has been established! A significance level of 0.05 is the most common value, but there's nothing magical about it. Here, you can see what too strong a faith in the 0.05 threshold can lead to. It's always best to report the p-value, and allow the reader to draw their own conclusions.
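This decision rule is a one-liner in code. A hypothetical sketch (the default α = 0.05 is merely the conventional choice, as discussed above):

```python
def decide(p_value, alpha=0.05):
    """Apply the decision rule: reject H0 iff p-value <= alpha.
    alpha = 0.05 is only the conventional default, not a magic number."""
    if p_value <= alpha:
        return "reject the null hypothesis"
    return "not enough evidence to reject the null hypothesis"

# The drug example above, p = 0.03: the verdict depends on alpha.
print(decide(0.03, alpha=0.05))  # reject the null hypothesis
print(decide(0.03, alpha=0.01))  # not enough evidence to reject the null hypothesis
```

Note how the same p-value of 0.03 leads to opposite decisions at the two significance levels, which is exactly why α must be fixed in advance.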
Also, bear in mind that subject area expertise (and common sense) is crucial. Otherwise, mindlessly applying statistical principles, you can easily arrive at statistically significant results that are, nevertheless, 100% untrue.
How to use the p-value calculator to find p-value from test statistic?
As our p-value calculator is here at your service, you no longer need to wonder how to find the p-value from all those complicated test statistics! Here are the steps you need to follow:

Pick the alternative hypothesis: two-tailed, right-tailed, or left-tailed.

Tell us the distribution of your test statistic under the null hypothesis: is it N(0,1), t-Student, chi-squared, or Snedecor's F? If you are unsure, check the sections below, as they are devoted to these distributions.

If needed, specify the degrees of freedom of the test statistic's distribution.

Enter the value of the test statistic computed for your data sample.

Our calculator determines the p-value from the test statistic and provides the decision to be made about the null hypothesis. The standard significance level is 0.05 by default.
Go to the advanced mode if you need to increase the precision with which the calculations are performed, or change the significance level.
p-value from Z-score
Use the Z-score option if your test statistic approximately follows the standard normal distribution N(0,1). Thanks to the central limit theorem, you can count on the approximation if you have a large sample (say, at least 50 data points) and treat your distribution as normal.
How to find the p-value from a Z-score? In terms of the cumulative distribution function (cdf) of the standard normal distribution, which is traditionally denoted by Φ, it is given by the following formulae:

Left-tailed z-test:
p-value = Φ(Z_{score})

Right-tailed z-test:
p-value = 1 − Φ(Z_{score})

Two-tailed z-test:
p-value = 2 * Φ(−|Z_{score}|)
or
p-value = 2 − 2 * Φ(|Z_{score}|)
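Using SciPy as one possible implementation, these formulae translate directly into code; the Z-score below is a made-up example value:

```python
from scipy.stats import norm

z = 2.17  # hypothetical Z-score observed for a sample

p_left = norm.cdf(z)           # left-tailed: Φ(z)
p_right = norm.sf(z)           # right-tailed: 1 − Φ(z); sf is more accurate in the tail
p_two = 2 * norm.cdf(-abs(z))  # two-tailed: 2 * Φ(−|z|)

print(round(p_two, 3))  # ≈ 0.03
```

Here `norm.sf` is the survival function, 1 − cdf, which avoids the loss of precision you'd get from subtracting a number very close to 1.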
A Z-test most often refers to testing the population mean, or the difference between two population means, in particular between two proportions. You can also find Z-tests in maximum likelihood estimation.
p-value from t-score
Use the t-score option if your test statistic follows the t-Student distribution. This distribution has a shape similar to N(0,1) (bell-shaped and symmetric) but has heavier tails; the exact shape depends on a parameter called the degrees of freedom. If the number of degrees of freedom is large (>30), which generally happens for large samples, the t-Student distribution is practically indistinguishable from the normal distribution N(0,1).
The p-value from the t-score is given by the following formulae, in which cdf_{t,d} stands for the cumulative distribution function of the t-Student distribution with d degrees of freedom:

Left-tailed t-test:
p-value = cdf_{t,d}(t_{score})

Right-tailed t-test:
p-value = 1 − cdf_{t,d}(t_{score})

Two-tailed t-test:
p-value = 2 * cdf_{t,d}(−|t_{score}|)
or
p-value = 2 − 2 * cdf_{t,d}(|t_{score}|)
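A possible SciPy sketch of these formulae; the t-score and degrees of freedom are made-up example values:

```python
from scipy.stats import t

t_score, d = 2.5, 10  # hypothetical t-score with d = 10 degrees of freedom

p_left = t.cdf(t_score, d)           # left-tailed
p_right = t.sf(t_score, d)           # right-tailed: 1 − cdf
p_two = 2 * t.cdf(-abs(t_score), d)  # two-tailed

# With many degrees of freedom, the result approaches the N(0,1) case:
p_large_d = 2 * t.cdf(-1.96, 1000)   # close to 0.05, the standard normal answer
```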
The most common tests with a t-score are those for population means with an unknown population standard deviation, or for the difference between the means of two populations, with either equal or unequal yet unknown population standard deviations. There's also a t-test for paired (dependent) samples.
p-value from chi-square score (χ² score)
Use the χ²-score option when performing a test in which the test statistic follows the χ²-distribution.
This distribution arises if, for example, you take the sum of squared independent variables, each following the standard normal distribution N(0,1). Remember to check the number of degrees of freedom of the χ²-distribution of your test statistic!
How to find the p-value from the chi-square score? You can do it with the help of the following formulae, in which cdf_{χ²,d} denotes the cumulative distribution function of the χ²-distribution with d degrees of freedom:

Left-tailed χ²-test:
p-value = cdf_{χ²,d}(χ²_{score})

Right-tailed χ²-test:
p-value = 1 − cdf_{χ²,d}(χ²_{score})
Remember that χ²-tests for goodness-of-fit and independence are right-tailed tests! (see below)

Two-tailed χ²-test:
p-value = 2 * min{cdf_{χ²,d}(χ²_{score}), 1 − cdf_{χ²,d}(χ²_{score})}
(By min{a,b} we denote the smaller of the numbers a and b.)
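In SciPy, for instance, the right-tailed χ² p-value (the kind used in goodness-of-fit and independence tests) is the survival function of the χ²-distribution; the score and degrees of freedom below are made-up example values:

```python
from scipy.stats import chi2

chi2_score, d = 9.49, 4  # hypothetical score; e.g. k = 5 classes gives d = k − 1 = 4

p_right = chi2.sf(chi2_score, d)  # right-tailed: 1 − cdf
p_two = 2 * min(chi2.cdf(chi2_score, d), chi2.sf(chi2_score, d))  # two-tailed

print(round(p_right, 2))  # ≈ 0.05
```

(9.49 is close to the classic 5% critical value of the χ²-distribution with 4 degrees of freedom, which is why the right-tailed p-value lands near 0.05.)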
The most popular tests which lead to a χ² score are the following:

Testing whether the variance of normally distributed data has some predetermined value. In this case, the test statistic has the χ²-distribution with n − 1 degrees of freedom, where n is the sample size. This can be a one-tailed or two-tailed test.

Goodness-of-fit test checks whether the empirical (sample) distribution agrees with some expected probability distribution. In this case, the test statistic follows the χ²-distribution with k − 1 degrees of freedom, where k is the number of classes into which the sample is divided. This is a right-tailed test.

Independence test is used to determine if there is a statistically significant relationship between two variables. In this case, its test statistic is based on the contingency table and follows the χ²-distribution with (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c the number of columns in this contingency table. This also is a right-tailed test.
p-value from F-score
Finally, the F-score option should be used when you perform a test in which the test statistic follows the F-distribution, also known as the Fisher–Snedecor distribution. The exact shape of an F-distribution depends on two degrees of freedom.
To see where those degrees of freedom come from, consider the independent random variables X and Y, which follow the χ²-distributions with d_{1} and d_{2} degrees of freedom, respectively. In that case, the ratio (X/d_{1})/(Y/d_{2}) follows the F-distribution with (d_{1}, d_{2}) degrees of freedom. For this reason, the two parameters d_{1} and d_{2} are also called the numerator and denominator degrees of freedom.
The p-value from the F-score is given by the following formulae, where we let cdf_{F,d1,d2} denote the cumulative distribution function of the F-distribution with (d_{1}, d_{2}) degrees of freedom:

Left-tailed F-test:
p-value = cdf_{F,d1,d2}(F_{score})

Right-tailed F-test:
p-value = 1 − cdf_{F,d1,d2}(F_{score})

Two-tailed F-test:
p-value = 2 * min{cdf_{F,d1,d2}(F_{score}), 1 − cdf_{F,d1,d2}(F_{score})}
(By min{a,b} we denote the smaller of the numbers a and b.)
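Since F-tests in practice are almost always right-tailed, the survival function is usually what you need. A SciPy sketch with made-up example values:

```python
from scipy.stats import f

# Hypothetical F-score, e.g. an ANOVA with k = 5 groups and n = 25 observations,
# giving (k − 1, n − k) = (4, 20) degrees of freedom.
f_score, d1, d2 = 3.0, 4, 20

p_right = f.sf(f_score, d1, d2)  # right-tailed: 1 − cdf
```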
Below we list the most important tests that produce F-scores. All of them are right-tailed tests.

A test for the equality of variances in two normally distributed populations. Its test statistic follows the F-distribution with (n − 1, m − 1) degrees of freedom, where n and m are the respective sample sizes.

ANOVA is used to test the equality of means in three or more groups that come from normally distributed populations with equal variances. We arrive at the F-distribution with (k − 1, n − k) degrees of freedom, where k is the number of groups and n is the total sample size (in all groups together).

A test for the overall significance of regression analysis. The test statistic has an F-distribution with (k − 1, n − k) degrees of freedom, where n is the sample size and k is the number of variables (including the intercept). Once the presence of a linear relationship has been established in your data sample with this test, you can calculate the coefficient of determination, R², which indicates the strength of the relationship.

A test to compare two nested regression models. The test statistic follows the F-distribution with (k_{2} − k_{1}, n − k_{2}) degrees of freedom, where k_{1} and k_{2} are the numbers of variables in the smaller and bigger models, respectively, and n is the sample size. You may notice that the F-test of overall significance is a particular form of the F-test for comparing two nested models: it tests whether our model does significantly better than the model with no predictors (i.e., the intercept-only model).