# t-test Calculator

Created by Anna Szczepanek, PhD
Reviewed by Dominik Czernia, PhD and Jack Bowater
Last updated: Sep 23, 2022

Welcome to our t-test calculator!
Here you can easily perform one-sample t-tests, two-sample t-tests, and paired t-tests.
Do you prefer to find the p-value from t-test, or would you rather find the t-test critical values? Well, this t-test calculator can do both! 😊

What does a t-test tell you? Take a look at the text below, where we explain what actually gets tested when various types of t-tests are performed. Also, we explain when to use t-tests (in particular, whether to use the z-test vs. t-test), and what assumptions your data should satisfy for the results of a t-test to be valid. If you've ever wanted to know how to do a t-test by hand, we provide the necessary t-test formulas, as well as the number of degrees of freedom in a t-test.

## When to use a t-test?

A t-test is one of the most popular statistical tests for location, i.e., it deals with the population(s) mean value(s).
There are different types of t-tests that you can perform:

• a one-sample t-test;
• a two-sample t-test; and
• a paired t-test.

In the sections below, we explain when to use which.
Remember that a t-test can only be used for one or two groups. If you need to compare three (or more) means, use the analysis of variance (ANOVA) method.

The t-test is a parametric test, meaning that your data has to fulfill some assumptions:

• the data points are independent; AND
• the data, at least approximately, follow a normal distribution.

If your sample doesn't fit these assumptions, you can resort to nonparametric alternatives, such as the Mann–Whitney U test (also known as the Wilcoxon rank-sum test), the Wilcoxon signed-rank test, or the sign test.

## Which t-test?

Your choice of t-test depends on whether you are studying one group or two groups:

1. One-sample t-test

Choose the one-sample t-test to check if the mean of a population is equal to some pre-set hypothesized value.

Examples:

• The average volume of a drink sold in 0.33 l cans - is it really equal to 330 ml?
• The average weight of people from a specific city - is it different from the national average?
2. Two-sample t-test

Choose the two-sample t-test to check if the difference between the means of two populations is equal to some pre-determined value, when the two samples have been chosen independently of each other.

In particular, you can use this test to check whether the two groups are different from one another.

Examples:

• The average difference in weight gain in two groups of people: one group was on a high-carb diet and the other on a high-fat diet.
• The average difference in the results of a math test from students at two different universities.

This test is sometimes referred to as an independent samples t-test, or an unpaired samples t-test.

3. Paired t-test

A paired t-test is used to investigate the change in the mean of a population before and after some experimental intervention, based on a paired sample, i.e., when each subject has been measured twice: before and after treatment.

In particular, you can use this test to check whether, on average, the treatment has had any effect on the population.

Examples:

• The change in student test performance before and after taking a course.
• The change in blood pressure in patients before and after administering some drug.

## How to do a t-test?

So, you've decided which t-test to perform. These next steps will tell you how to calculate the p-value or critical values of your t-test, and then what decision to make about the null hypothesis.

1. Decide on the alternative hypothesis:

• Use a two-tailed t-test if you only care whether the population's mean (or, in the case of two populations, the difference between the populations' means) agrees or disagrees with the pre-set value.

• Use a one-tailed t-test if you want to test whether this mean (or difference in means) is greater/less than the pre-set value.

2. Compute your t-score (test statistic):

Formulas for the test statistic in t-tests include the sample size, as well as its mean and standard deviation. The exact formula depends on the t-test type - check the sections dedicated to each particular test for more details.

3. Determine the degrees of freedom for the t-test:

The degrees of freedom are the number of observations in a sample that are free to vary as we estimate statistical parameters. In the simplest case, the number of degrees of freedom equals your sample size minus the number of parameters you need to estimate. Again, the exact formula depends on the t-test you want to perform - check the sections below for details.

The degrees of freedom are essential, as they determine the distribution followed by your t-score (under the null hypothesis). If there are d degrees of freedom, then the distribution of the test statistic is the t-Student distribution with d degrees of freedom. This distribution has a shape similar to N(0,1) (bell-shaped and symmetric) but has heavier tails. If the number of degrees of freedom is large (>30), which typically happens for large samples, the t-Student distribution is practically indistinguishable from N(0,1).

💡 The t-Student distribution owes its name to William Sealy Gosset, who, in 1908, published his paper on the t-test under the pseudonym "Student". Gosset worked at the famous Guinness Brewery in Dublin, Ireland, and devised the t-test as an economical way to monitor the quality of beer. Cheers! 🍺🍺🍺

## p-value from t-test

Recall that the p-value is the probability (calculated under the assumption that the null hypothesis is true) that the test statistic will produce values at least as extreme as the t-score produced for your sample. As probabilities correspond to areas under the density function, the p-value from a t-test is the area under the density curve of the t-distribution in the tail (or tails) beyond your t-score.

The following formulae say how to calculate the p-value from a t-test. By $\mathrm{cdf}_{t,d}$ we denote the cumulative distribution function of the t-Student distribution with $d$ degrees of freedom:

1. p-value from left-tailed t-test:

$\text{p-value} = \mathrm{cdf}_{t,d}(t_{\mathrm{score}})$

2. p-value from right-tailed t-test:

$\text{p-value} = 1 - \mathrm{cdf}_{t,d}(t_{\mathrm{score}})$

3. p-value from two-tailed t-test:

$\text{p-value} = 2 \cdot \mathrm{cdf}_{t,d}(-|t_{\mathrm{score}}|)$

or, equivalently: $\text{p-value} = 2 - 2 \cdot \mathrm{cdf}_{t,d}(|t_{\mathrm{score}}|)$

However, the cdf of the t-distribution is given by a somewhat complicated formula. To find the p-value by hand, you would need to resort to statistical tables, where approximate cdf values are collected, or to specialized statistical software. Fortunately, our t-test calculator determines the p-value from t-test for you in the blink of an eye!
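To make the three formulas concrete, here is a minimal Python sketch. As an assumption for simplicity, it uses the standard normal cdf from the standard library as a stand-in for $\mathrm{cdf}_{t,d}$, which, as noted above, is a good approximation when there are more than ~30 degrees of freedom; for small samples you would use the exact t cdf instead (e.g., `scipy.stats.t.cdf`).

```python
from statistics import NormalDist

def p_value(t_score: float, tail: str = "two") -> float:
    """Approximate p-value for a t-score.

    Uses the standard normal cdf, which is close to cdf_{t,d}
    for large degrees of freedom (d > 30).
    """
    cdf = NormalDist().cdf  # stand-in for cdf_{t,d} when d is large
    if tail == "left":
        return cdf(t_score)
    if tail == "right":
        return 1 - cdf(t_score)
    return 2 * cdf(-abs(t_score))  # two-tailed

print(round(p_value(1.96), 4))  # ≈ 0.05
```

Note how the two-tailed p-value is exactly twice the one-tailed p-value of $-|t_{\mathrm{score}}|$, because the t-distribution is symmetric about zero.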

## t-test critical values

Recall that in the critical values approach to hypothesis testing, you need to set a significance level, α, before computing the critical values, which in turn give rise to critical regions (a.k.a. rejection regions).

Formulas for critical values employ the quantile function of the t-distribution, i.e., the inverse of the cdf, $\mathrm{cdf}_{t,d}^{-1}$:

1. Critical value for left-tailed t-test:

$\mathrm{cdf}_{t,d}^{-1}(\alpha)$

critical region:

$(-\infty, \mathrm{cdf}_{t,d}^{-1}(\alpha)]$

2. Critical value for right-tailed t-test:

$\mathrm{cdf}_{t,d}^{-1}(1-\alpha)$

critical region:

$[\mathrm{cdf}_{t,d}^{-1}(1-\alpha), \infty)$

3. Critical values for two-tailed t-test:

$\pm\,\mathrm{cdf}_{t,d}^{-1}(1-\alpha/2)$

critical region:

$(-\infty, -\mathrm{cdf}_{t,d}^{-1}(1-\alpha/2)] \cup [\mathrm{cdf}_{t,d}^{-1}(1-\alpha/2), \infty)$

To decide the fate of the null hypothesis, just check if your t-score lies within the critical region:

• If your t-score belongs to the critical region, reject the null hypothesis and accept the alternative hypothesis.

• If your t-score is outside the critical region, then you don't have enough evidence to reject the null hypothesis.
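The critical-value formulas and the decision rule above can be sketched in a few lines of Python. As with the p-value sketch, this uses the standard normal quantile function in place of $\mathrm{cdf}_{t,d}^{-1}$, which is a reasonable approximation only for large degrees of freedom; for small samples you would use the exact t quantile (e.g., `scipy.stats.t.ppf`).

```python
from statistics import NormalDist

def critical_value(alpha: float, tail: str = "two") -> float:
    """Approximate t-test critical value via the quantile function
    (inverse cdf), using N(0,1) in place of the t-distribution."""
    inv = NormalDist().inv_cdf
    if tail == "left":
        return inv(alpha)        # reject H0 if t-score <= this value
    if tail == "right":
        return inv(1 - alpha)    # reject H0 if t-score >= this value
    return inv(1 - alpha / 2)    # two-tailed: reject H0 if |t-score| >= this

c = critical_value(0.05)         # significance level α = 0.05
t_score = 2.3
print(round(c, 2), abs(t_score) >= c)  # 1.96 True -> t lies in the critical region
```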

## How to use our t-test calculator?

1. Choose the type of t-test you wish to perform:

• a one-sample t-test (to test the mean of a single group against a hypothesized mean);
• a two-sample t-test (to compare the means for two groups); or
• a paired t-test (to check how the mean from the same group changes after some intervention).
2. Decide on the alternative hypothesis:

• two-tailed;
• left-tailed; or
• right-tailed.
3. Choose the approach to hypothesis testing: this t-test calculator supports both the p-value approach and the critical regions approach.

4. Enter your t-score, and the number of degrees of freedom. If you don't know them, provide some data about your sample(s): sample size, mean, and standard deviation, and our t-test calculator will compute the t-score and degrees of freedom for you.

5. Once all the parameters are present, the p-value, or critical region, will immediately appear underneath the t-test calculator, along with an interpretation!

## One-sample t-test

1. The null hypothesis is that the population mean is equal to some value $\mu_0$.

2. The alternative hypothesis is that the population mean is:

• different from $\mu_0$;
• smaller than $\mu_0$; or
• greater than $\mu_0$.

One-sample t-test formula:

$t = \frac{\bar{x} - \mu_0}{s} \cdot \sqrt{n}$

where:

• $\mu_0$ is the mean postulated in H₀;
• $n$ is sample size;
• $\bar{x}$ is the sample mean; and
• $s$ is sample standard deviation.

Number of degrees of freedom in t-test (one-sample) = $n-1$.
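The one-sample formula and its degrees of freedom can be sketched directly in Python using only the standard library. The can-volume data below is hypothetical, echoing the 330 ml example from earlier in the article:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample: list[float], mu0: float) -> tuple[float, int]:
    """One-sample t-score and degrees of freedom:
    t = (x̄ - μ₀) / s · √n, with df = n - 1."""
    n = len(sample)
    t = (mean(sample) - mu0) / stdev(sample) * sqrt(n)
    return t, n - 1

# Hypothetical can volumes (ml) tested against the nominal 330 ml:
t, df = one_sample_t([328, 331, 327, 329, 330, 326], 330)
print(round(t, 3), df)  # -1.964 5
```

The negative t-score reflects that this sample's mean (328.5 ml) falls below the hypothesized 330 ml.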

## Two-sample t-test

1. The null hypothesis is that the actual difference between these groups' means, $\mu_1$ and $\mu_2$, is equal to some pre-set value, $\Delta$.

2. The alternative hypothesis is that the difference $\mu_1 - \mu_2$ is:

• different from $\Delta$;
• smaller than $\Delta$; or
• greater than $\Delta$.

In particular, if this pre-determined difference is zero ($\Delta = 0$):

1. The null hypothesis is that the population means are equal.

2. The alternative hypothesis is that:

• $\mu_1$ and $\mu_2$ are different from one another;
• $\mu_1$ is smaller than $\mu_2$; or
• $\mu_1$ is greater than $\mu_2$.

Formally, to perform a t-test, we should additionally assume that the variances of the two populations are equal (this assumption is called the homogeneity of variance).

There is a version of the t-test that can be applied without the assumption of homogeneity of variance: Welch's t-test. For your convenience, we describe both versions below.

#### Two-sample t-test if variances are equal

Use this test if you know that the two populations' variances are the same (or very similar).

Two-sample t-test formula (with equal variances):

$t = \frac{\bar{x}_1 - \bar{x}_2 - \Delta}{s_p \cdot \sqrt{\frac{1}{n_1} +\frac{1}{n_2} }}$

where $s_p$ is the so-called pooled standard deviation, which we compute as:

$s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}$

and

• $\Delta$ is the mean difference postulated in H₀;
• $n_1$ is the first sample size;
• $\bar{x}_1$ is the mean for the first sample;
• $s_1$ is the standard deviation in the first sample;
• $n_2$ is the second sample size;
• $\bar{x}_2$ is the mean for the second sample; and
• $s_2$ is the standard deviation in the second sample.

Number of degrees of freedom in t-test (two samples, equal variances) = $n_1 + n_2 - 2$.
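The pooled standard deviation and the equal-variance t-score translate into a short stdlib-only Python sketch. The weight-gain numbers below are hypothetical, echoing the diet example from earlier:

```python
from math import sqrt
from statistics import mean, stdev

def two_sample_t(x1: list[float], x2: list[float], delta: float = 0.0):
    """Equal-variance two-sample t-test: pooled standard deviation s_p,
    then the t-score; degrees of freedom = n1 + n2 - 2."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = stdev(x1), stdev(x2)
    s_p = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t = (mean(x1) - mean(x2) - delta) / (s_p * sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical weight gains (kg) on a high-carb vs. a high-fat diet:
t, df = two_sample_t([2.1, 3.0, 2.6, 1.9], [1.2, 1.8, 1.5, 1.1])
```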

#### Two-sample t-test if variances are unequal (Welch's t-test)

Use this test if the variances of your populations are different.

Two-sample Welch's t-test formula if variances are unequal:

$t = \frac{\bar{x}_1 - \bar{x}_2 - \Delta}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}$

where:

• $\Delta$ is the mean difference postulated in H₀;
• $n_1$ is the first sample size;
• $\bar{x}_1$ is the mean for the first sample;
• $s_1$ is the standard deviation in the first sample;
• $n_2$ is the second sample size;
• $\bar{x}_2$ is the mean for the second sample; and
• $s_2$ is the standard deviation in the second sample.

The number of degrees of freedom in a Welch's t-test (two-sample t-test with unequal variances) is hard to calculate exactly. We can approximate it with the help of the following Satterthwaite formula:

$\frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\frac{(s_1^2/n_1)^2}{n_1-1} + \frac{(s_2^2/n_2)^2}{n_2-1} }$

Alternatively, you can take the smaller of $n_1 - 1$ and $n_2 - 1$ as a conservative estimate for the number of degrees of freedom.

🔎 The Satterthwaite formula for the degrees of freedom can be rewritten as a scaled weighted harmonic mean of the degrees of freedom of the respective samples: $n_1 - 1$ and $n_2 - 1$, and the weights are proportional to the standard deviations of the corresponding samples.
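Welch's t-score and the Satterthwaite approximation fit in a few lines of Python. One sanity check worth noting: when both samples have the same size and variance, the Satterthwaite formula reduces to the familiar $n_1 + n_2 - 2$.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(x1: list[float], x2: list[float], delta: float = 0.0):
    """Welch's t-score plus the Satterthwaite approximation
    for the degrees of freedom."""
    n1, n2 = len(x1), len(x2)
    v1 = stdev(x1) ** 2 / n1                      # s1^2 / n1
    v2 = stdev(x2) ** 2 / n2                      # s2^2 / n2
    t = (mean(x1) - mean(x2) - delta) / sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df
```

Note that the Satterthwaite degrees of freedom are generally not an integer, which is fine: the t-distribution is defined for fractional degrees of freedom as well.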

## Paired t-test

As we commonly perform a paired t-test when we have data about the same subjects measured twice (before and after some treatment), let us adopt the convention of referring to the samples as the pre-group and post-group.

1. The null hypothesis is that the true difference between the means of pre and post populations is equal to some pre-set value, $\Delta$.

2. The alternative hypothesis is that the actual difference between these means is:

• different from $\Delta$;
• smaller than $\Delta$; or
• greater than $\Delta$.

Typically, this pre-determined difference is zero. We can then reformulate the hypotheses as follows:

1. The null hypothesis is that the pre and post means are the same, i.e., the treatment has no impact on the population.

2. The alternative hypothesis:

• The pre and post means are different from one another (treatment has some effect);
• The pre mean is smaller than post mean (treatment increases the result); or
• The pre mean is greater than post mean (treatment decreases the result).

#### Paired t-test formula

In fact, a paired t-test is technically the same as a one-sample t-test! Let us see why it is so. Let $x_1, ... , x_n$ be the pre observations and $y_1, ... , y_n$ the respective post observations. That is, $x_i, y_i$ are the before and after measurements of the i-th subject.

For each subject, compute the difference, $d_i := x_i - y_i$. All that happens next is just a one-sample t-test performed on the sample of differences $d_1, ... , d_n$. Take a look at the formula for the t-score:

$t = \frac{\bar{x} - \Delta}{s}\cdot \sqrt{n}$

where:

• $\Delta$ is the mean difference postulated in H₀;

• $n$ is the size of the sample of differences, i.e., the number of pairs;

• $\bar{x}$ is the mean of the sample of differences; and

• $s$ is the standard deviation of the sample of differences.

Number of degrees of freedom in t-test (paired): $n - 1$
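The reduction of the paired t-test to a one-sample t-test on the differences is easy to express in code. The blood-pressure readings below are hypothetical, echoing the drug example from earlier:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre: list[float], post: list[float], delta: float = 0.0):
    """Paired t-test = one-sample t-test on the differences d_i = x_i - y_i."""
    d = [x - y for x, y in zip(pre, post)]
    n = len(d)  # number of pairs
    t = (mean(d) - delta) / stdev(d) * sqrt(n)
    return t, n - 1

# Hypothetical systolic blood pressure (mmHg) before/after a drug:
t, df = paired_t([140, 150, 145, 155], [135, 148, 141, 149])
print(round(t, 3), df)  # 4.977 3
```

Only the per-subject differences enter the formula, which is why the sample size here is the number of pairs, not the total number of measurements.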

## t-test vs Z-test

We use a Z-test when we want to test the population mean of a normally distributed dataset, which has a known population variance. If the number of degrees of freedom is large, then the t-Student distribution is very close to N(0,1).

Hence, if there are many data points (at least 30), you may swap a t-test for a Z-test, and the results will be almost identical. However, for small samples with unknown variance, remember to use the t-test, because in such cases the t-Student distribution differs significantly from N(0,1)!

🙋 Have you concluded you need to perform the z-test? Head straight to our z-test calculator!

## FAQ

### What is a t-test?

A t-test is a widely used statistical test that compares the means of two groups of data. For instance, a t-test can be performed on data from two experiments to check whether their results differ, whether the samples are independent or paired.

### What are different types of t-tests?

The different types of t-tests are:

1. One-sample t-test;
2. Two-sample t-test; and
3. Paired t-test.

The type of test is chosen based on whether you study one or two groups and, in the two-group case, whether the samples are independent of each other or paired.

### How to find the t value in a one-sample t-test?

To find the t value:

1. Subtract the null hypothesis mean from the sample mean value.
2. Divide the difference by the standard deviation of the sample.
3. Multiply the result by the square root of the sample size.