p-value Calculator
Table of contents
What is p-value?
How do I calculate p-value from test statistic?
How to interpret p-value
How to use the p-value calculator to find p-value from test statistic
How do I find p-value from z-score?
How do I find p-value from t?
p-value from chi-square score (χ² score)
p-value from F-score
FAQs

Welcome to our p-value calculator! You will never again have to wonder how to find the p-value, as here you can determine the one-sided and two-sided p-values from test statistics, following all the most popular distributions: normal, t-Student, chi-squared, and Snedecor's F.
P-values appear all over science, yet many people find the concept a bit intimidating. Don't worry – in this article, we will explain not only what the p-value is but also how to interpret p-values correctly. Have you ever been curious about how to calculate the p-value by hand? We provide you with all the necessary formulae as well!
🙋 If you want to revise some basics from statistics, our normal distribution calculator is an excellent place to start.
What is p-value?
Formally, the p-value is the probability that the test statistic will produce values at least as extreme as the value it produced for your sample. It is crucial to remember that this probability is calculated under the assumption that the null hypothesis H_{0} is true!
More intuitively, the p-value answers the question:
Assuming that I live in a world where the null hypothesis holds, how probable is it that, for another sample, the test I'm performing will generate a value at least as extreme as the one I observed for the sample I already have?
It is the alternative hypothesis that determines what "extreme" actually means, so the p-value depends on the alternative hypothesis that you state: left-tailed, right-tailed, or two-tailed. In the formulae below, S stands for a test statistic, x for the value it produced for a given sample, and Pr(event | H_{0}) is the probability of an event, calculated under the assumption that H_{0} is true:

Left-tailed test: p-value = Pr(S ≤ x | H_{0})

Right-tailed test: p-value = Pr(S ≥ x | H_{0})

Two-tailed test:
p-value = 2 × min{Pr(S ≤ x | H_{0}), Pr(S ≥ x | H_{0})}
(By min{a,b}, we denote the smaller of the numbers a and b.)
If the distribution of the test statistic under H_{0} is symmetric about 0, then:
p-value = 2 × Pr(S ≥ |x| | H_{0})
or, equivalently:
p-value = 2 × Pr(S ≤ −|x| | H_{0})
As a picture is worth a thousand words, let us illustrate these definitions. Here, we use the fact that the probability can be neatly depicted as the area under the density curve for a given distribution. We give two sets of pictures: one for a symmetric distribution and the other for a skewed (nonsymmetric) distribution.
 Symmetric case: normal distribution:
 Non-symmetric case: chi-squared distribution:
In the last picture (two-tailed p-value for a skewed distribution), the area of the left-hand side is equal to the area of the right-hand side.
How do I calculate p-value from test statistic?
To determine the p-value, you need to know the distribution of your test statistic under the assumption that the null hypothesis is true. Then, with the help of the cumulative distribution function (cdf) of this distribution, we can express the probability of the test statistic being at least as extreme as its value x for the sample:

Left-tailed test:
p-value = cdf(x).

Right-tailed test:
p-value = 1 − cdf(x).

Two-tailed test:
p-value = 2 × min{cdf(x), 1 − cdf(x)}.
If the distribution of the test statistic under H_{0} is symmetric about 0, then a two-sided p-value can be simplified to p-value = 2 × cdf(−|x|), or, equivalently, to p-value = 2 − 2 × cdf(|x|).
The probability distributions that are most widespread in hypothesis testing tend to have complicated cdf formulae, and finding the p-value by hand may not be possible. You'll likely need to resort to a computer or to a statistical table, where people have gathered approximate cdf values.
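The three cases above can be coded up directly. The sketch below (plain Python with the standard library only; the function names are our own, not from any statistics package) uses the standard normal cdf, computed via the error function, as an example cdf:

```python
import math

def standard_normal_cdf(x):
    # cdf of N(0,1), expressed through the error function from the standard library
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value(cdf, x, tail="two-tailed"):
    # p-value of a test statistic value x, given the cdf of its null distribution
    if tail == "left-tailed":
        return cdf(x)
    if tail == "right-tailed":
        return 1.0 - cdf(x)
    # two-tailed: twice the smaller of the two tail probabilities
    return 2.0 * min(cdf(x), 1.0 - cdf(x))

print(round(p_value(standard_normal_cdf, 1.96), 3))  # ≈ 0.05
```

The same `p_value` helper works for any of the distributions discussed below, as long as you can supply the corresponding cdf.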
Well, you now know how to calculate the p-value, but… why do you need to calculate this number in the first place? In hypothesis testing, the p-value approach is an alternative to the critical value approach. Recall that the latter requires researchers to preset the significance level, α, which is the probability of rejecting the null hypothesis when it is true (i.e., of a type I error). Once you have your p-value, you just need to compare it with any given α to quickly decide whether or not to reject the null hypothesis at that significance level. For details, check the next section, where we explain how to interpret p-values.
How to interpret p-value
As we have mentioned above, the p-value is the answer to the following question:
Assuming that I live in a world where the null hypothesis holds, how probable is it that, for another sample, the test I'm performing will generate a value at least as extreme as the one I observed for the sample I already have?
What does that mean for you? Well, you've got two options:
 A high p-value means that your data is highly compatible with the null hypothesis; and
 A small p-value provides evidence against the null hypothesis, as it means that your result would be very improbable if the null hypothesis were true.
However, it may happen that the null hypothesis is true, but your sample is highly unusual! For example, imagine we studied the effect of a new drug and got a p-value of 0.03. This means that in 3% of similar studies, random chance alone would still be able to produce the value of the test statistic that we obtained, or a value even more extreme, even if the drug had no effect at all!
The question "what is p-value" can also be answered as follows: the p-value is the smallest level of significance at which the null hypothesis would be rejected. So, if you now want to make a decision about the null hypothesis at some significance level α, just compare your p-value with α:
 If p-value ≤ α, then you reject the null hypothesis and accept the alternative hypothesis; and
 If p-value > α, then you don't have enough evidence to reject the null hypothesis.
Obviously, the fate of the null hypothesis depends on α. For instance, if the p-value was 0.03, we would reject the null hypothesis at a significance level of 0.05, but not at a level of 0.01. That's why the significance level should be stated in advance and not adapted conveniently after the p-value has been established! A significance level of 0.05 is the most common value, but there's nothing magical about it.
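As a minimal sketch of this decision rule (the function name is ours, purely for illustration):

```python
def decide(p_value, alpha=0.05):
    # compare the p-value with the preset significance level alpha
    if p_value <= alpha:
        return "reject the null hypothesis"
    return "fail to reject the null hypothesis"

print(decide(0.03, alpha=0.05))  # reject the null hypothesis
print(decide(0.03, alpha=0.01))  # fail to reject the null hypothesis
```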
It's always best to report the p-value and allow the reader to draw their own conclusions. Also, bear in mind that subject area expertise (and common sense) is crucial. Otherwise, mindlessly applying statistical principles, you can easily arrive at results that are statistically significant yet practically meaningless.
How to use the p-value calculator to find p-value from test statistic
As our p-value calculator is here at your service, you no longer need to wonder how to find the p-value from all those complicated test statistics! Here are the steps you need to follow:

Pick the alternative hypothesis: two-tailed, right-tailed, or left-tailed.

Tell us the distribution of your test statistic under the null hypothesis: is it N(0,1), t-Student, chi-squared, or Snedecor's F? If you are unsure, check the sections below, as they are devoted to these distributions.

If needed, specify the degrees of freedom of the test statistic's distribution.

Enter the value of the test statistic computed for your data sample.

By default, the calculator uses the significance level of 0.05.

Our calculator determines the p-value from the test statistic and provides the decision to be made about the null hypothesis.
How do I find p-value from z-score?
In terms of the cumulative distribution function (cdf) of the standard normal distribution, which is traditionally denoted by Φ, the p-value is given by the following formulae:

Left-tailed z-test:
p-value = Φ(Z_{score})

Right-tailed z-test:
p-value = 1 − Φ(Z_{score})

Two-tailed z-test:
p-value = 2 × Φ(−|Z_{score}|)
or
p-value = 2 − 2 × Φ(|Z_{score}|)
🙋 To learn more about Z-tests, head to Omni's Z-test calculator.
We use the Z-score if the test statistic approximately follows the standard normal distribution N(0,1). Thanks to the central limit theorem, you can count on the approximation if you have a large sample (say, at least 50 data points) and treat your distribution as normal.
A Z-test most often refers to testing the population mean, or the difference between two population means, in particular between two proportions. You can also find Z-tests in maximum likelihood estimations.
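For instance, a two-tailed one-sample Z-test can be sketched as follows (Python, standard library only; the sample numbers are made up for illustration):

```python
import math

def phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_p_value(sample_mean, mu0, sigma, n):
    # two-tailed p-value for a one-sample z-test with known population sigma
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return 2.0 * phi(-abs(z))

# e.g., a sample of n = 100, hypothesized mean 10, known sigma 2.5
print(round(z_test_p_value(10.49, 10.0, 2.5, 100), 3))  # ≈ 0.05 (z = 1.96)
```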
How do I find p-value from t?
The p-value from the t-score is given by the following formulae, in which cdf_{t,d} stands for the cumulative distribution function of the t-Student distribution with d degrees of freedom:

Left-tailed t-test:
p-value = cdf_{t,d}(t_{score})

Right-tailed t-test:
p-value = 1 − cdf_{t,d}(t_{score})

Two-tailed t-test:
p-value = 2 × cdf_{t,d}(−|t_{score}|)
or
p-value = 2 − 2 × cdf_{t,d}(|t_{score}|)
Use the t-score option if your test statistic follows the t-Student distribution. This distribution has a shape similar to N(0,1) (bell-shaped and symmetric) but has heavier tails – the exact shape depends on the parameter called the degrees of freedom. If the number of degrees of freedom is large (>30), which generally happens for large samples, the t-Student distribution is practically indistinguishable from the normal distribution N(0,1).
The most common t-tests are those for population means with an unknown population standard deviation, or for the difference between means of two populations, with either equal or unequal yet unknown population standard deviations. There's also a t-test for paired (dependent) samples.
🙋 To get more insights into t-statistics, we recommend using our t-test calculator.
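Without a statistical table or library at hand, cdf_{t,d} can be approximated by numerically integrating the t-Student density. This is only a rough stdlib-Python sketch (the function names are ours; a production implementation would use the regularized incomplete beta function instead):

```python
import math

def t_tail(t, d, steps=20000, upper=60.0):
    # right-tail probability Pr(T >= t) for the t-Student distribution with d
    # degrees of freedom, by trapezoidal integration of the density on [t, upper]
    c = math.exp(math.lgamma((d + 1) / 2) - math.lgamma(d / 2)
                 - 0.5 * math.log(d * math.pi))
    pdf = lambda x: c * (1.0 + x * x / d) ** (-(d + 1) / 2)
    h = (upper - t) / steps
    area = 0.5 * (pdf(t) + pdf(upper))
    for i in range(1, steps):
        area += pdf(t + i * h)
    return area * h

def p_from_t(t_score, d, tail="two-tailed"):
    if tail == "left-tailed":
        return 1.0 - t_tail(t_score, d)
    if tail == "right-tailed":
        return t_tail(t_score, d)
    return 2.0 * t_tail(abs(t_score), d)  # the distribution is symmetric about 0

print(round(p_from_t(2.228, 10), 3))  # ≈ 0.05 (2.228 is the tabulated value for d = 10)
```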
p-value from chi-square score (χ² score)
Use the χ²-score option when performing a test in which the test statistic follows the χ²-distribution.
This distribution arises if, for example, you take the sum of squared variables, each following the normal distribution N(0,1). Remember to check the number of degrees of freedom of the χ²-distribution of your test statistic!
How to find the p-value from the chi-square score? You can do it with the help of the following formulae, in which cdf_{χ²,d} denotes the cumulative distribution function of the χ²-distribution with d degrees of freedom:

Left-tailed χ²-test:
p-value = cdf_{χ²,d}(χ²_{score})

Right-tailed χ²-test:
p-value = 1 − cdf_{χ²,d}(χ²_{score})
Remember that χ²-tests for goodness-of-fit and independence are right-tailed tests! (see below)

Two-tailed χ²-test:
p-value = 2 × min{cdf_{χ²,d}(χ²_{score}), 1 − cdf_{χ²,d}(χ²_{score})}
(By min{a,b}, we denote the smaller of the numbers a and b.)
The most popular tests which lead to a χ²-score are the following:

Testing whether the variance of normally distributed data has some predetermined value. In this case, the test statistic has the χ²-distribution with n − 1 degrees of freedom, where n is the sample size. This can be a one-tailed or two-tailed test.

The goodness-of-fit test checks whether the empirical (sample) distribution agrees with some expected probability distribution. In this case, the test statistic follows the χ²-distribution with k − 1 degrees of freedom, where k is the number of classes into which the sample is divided. This is a right-tailed test.

The independence test is used to determine if there is a statistically significant relationship between two variables. In this case, its test statistic is based on the contingency table and follows the χ²-distribution with (r − 1)(c − 1) degrees of freedom, where r is the number of rows and c is the number of columns in this contingency table. This is also a right-tailed test.
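If you need cdf_{χ²,d} without statistical software, it can be computed from the power series for the regularized lower incomplete gamma function. A stdlib-only Python sketch (function name is ours):

```python
import math

def chi2_cdf(x, d):
    # cdf of the chi-squared distribution with d degrees of freedom, via the
    # power series for the regularized lower incomplete gamma function P(d/2, x/2)
    if x <= 0:
        return 0.0
    a, t = d / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    n = 0
    while term > total * 1e-15:
        n += 1
        term *= t / (a + n)
        total += term
    return total * math.exp(a * math.log(t) - t - math.lgamma(a))

# right-tailed p-value for a chi-square score of 3.841 with 1 degree of freedom
print(round(1.0 - chi2_cdf(3.841, 1), 3))  # ≈ 0.05
```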
p-value from F-score
Finally, the F-score option should be used when you perform a test in which the test statistic follows the F-distribution, also known as the Fisher–Snedecor distribution. The exact shape of an F-distribution depends on two degrees of freedom.
To see where those degrees of freedom come from, consider the independent random variables X and Y, which follow the χ²-distributions with d_{1} and d_{2} degrees of freedom, respectively. In that case, the ratio (X/d_{1})/(Y/d_{2}) follows the F-distribution with (d_{1}, d_{2}) degrees of freedom. For this reason, the two parameters d_{1} and d_{2} are also called the numerator and denominator degrees of freedom.
The p-value from the F-score is given by the following formulae, where we let cdf_{F,d1,d2} denote the cumulative distribution function of the F-distribution with (d_{1}, d_{2}) degrees of freedom:

Left-tailed F-test:
p-value = cdf_{F,d1,d2}(F_{score})

Right-tailed F-test:
p-value = 1 − cdf_{F,d1,d2}(F_{score})

Two-tailed F-test:
p-value = 2 × min{cdf_{F,d1,d2}(F_{score}), 1 − cdf_{F,d1,d2}(F_{score})}
(By min{a,b}, we denote the smaller of the numbers a and b.)
Below, we list the most important tests that produce F-scores. All of them are right-tailed tests.

A test for the equality of variances in two normally distributed populations. Its test statistic follows the F-distribution with (n − 1, m − 1) degrees of freedom, where n and m are the respective sample sizes.

ANOVA is used to test the equality of means in three or more groups that come from normally distributed populations with equal variances. We arrive at the F-distribution with (k − 1, n − k) degrees of freedom, where k is the number of groups and n is the total sample size (across all groups).

A test for the overall significance of regression analysis. The test statistic has an F-distribution with (k − 1, n − k) degrees of freedom, where n is the sample size and k is the number of variables (including the intercept).
Once this test has established the presence of a linear relationship in your data sample, you can calculate the coefficient of determination, R^{2}, which indicates the strength of this relationship. You can do it by hand or use our coefficient of determination calculator.

A test to compare two nested regression models. The test statistic follows the F-distribution with (k_{2} − k_{1}, n − k_{2}) degrees of freedom, where k_{1} and k_{2} are the numbers of variables in the smaller and bigger models, respectively, and n is the sample size.
You may notice that the F-test of overall significance is a particular form of the F-test for comparing two nested models: it tests whether our model does significantly better than the model with no predictors (i.e., the intercept-only model).
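As a final sketch, cdf_{F,d1,d2} can be approximated by numerically integrating the F density (stdlib Python only; the midpoint rule below assumes d_{1} ≥ 2 so the density stays bounded near 0 — a real implementation would use the regularized incomplete beta function):

```python
import math

def f_pdf(x, d1, d2):
    # density of the F-distribution with (d1, d2) degrees of freedom
    a, b = d1 / 2.0, d2 / 2.0
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp(a * math.log(d1 / d2) + (a - 1.0) * math.log(x)
                    - (a + b) * math.log(1.0 + d1 * x / d2) - log_beta)

def f_cdf(x, d1, d2, steps=20000):
    # cdf by midpoint-rule integration of the density over (0, x); needs d1 >= 2
    h = x / steps
    return h * sum(f_pdf((i + 0.5) * h, d1, d2) for i in range(steps))

# right-tailed p-value for an F score of 3.0 with (2, 10) degrees of freedom
print(round(1.0 - f_cdf(3.0, 2, 10), 3))  # ≈ 0.095
```

For d_{1} = 2, the result can be checked against the closed form 1 − cdf = (1 + 2x/d_{2})^(−d_{2}/2).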
Can p-value be negative?
No, the p-value cannot be negative. This is because probabilities cannot be negative, and the p-value is the probability of the test statistic satisfying certain conditions.
What does a high p-value mean?
A high p-value means that under the null hypothesis, there's a high probability that, for another sample, the test statistic will generate a value at least as extreme as the one observed in the sample you already have. A high p-value doesn't allow you to reject the null hypothesis.
What does a low p-value mean?
A low p-value means that under the null hypothesis, there's little probability that, for another sample, the test statistic will generate a value at least as extreme as the one observed for the sample you already have. A low p-value is evidence in favor of the alternative hypothesis – it allows you to reject the null hypothesis.