**As we talked about previously, usually 'sig.level' = 0.05 and 'power' = 0.8. A power of 0.8 means that you have an 80% chance of detecting a significant effect if one exists.**

## Calculate effect size:

First we must calculate the effect size for a t-test. To get the data for this, you need to do a pilot study, search the literature for similar experiments, or make an educated guess based on the theoretical background to your study.

The effect size for a t-test is the parameter *d*. In order to calculate this you need the following formula:

d = |mean of group 1 − mean of group 2| / pooled SD

In case you are not familiar, if you see |*x*| that means 'absolute value' (i.e., no negatives). 'Pooled SD (standard deviation)' is the square root of the pooled variance, which should be calculated using the equation:

Sp² = [(n₁ − 1)s₁² + (n₂ − 1)s₂² + … + (nₖ − 1)sₖ²] / (n₁ + n₂ + … + nₖ − k)

where *Sp²* is the pooled variance, *nᵢ* is the sample size of the *i*'th sample, *sᵢ²* is the variance of the *i*'th sample, and *k* is the number of samples being combined.

**First, DON'T PANIC!!** Both of these can be calculated very straightforwardly in R. All you need to know is the **means of the two groups**, the **number of subjects in each group** and the **group standard deviations**.

If you are calculating *d* from published data, you may only be provided with the standard error (i.e., not the standard deviation). The standard error is the standard deviation divided by the square root of the sample size, so to work out the standard deviation from the published standard error do this in reverse (SD = standard error × square root of the sample size).

1. Calculate the *degrees of freedom*. If you have the number of subjects in each group, you simply need N − 2 (i.e., total subjects in both groups minus the number of groups). If you have collected some data and have the two groups' data in a spreadsheet, you can calculate this with 'length(group1)' and 'length(group2)', where 'group1' and 'group2' are the two variables you have measured. The command 'length' in R counts the number of data points corresponding to that variable. If you are working from published literature and you know the sample size in each group, you can simply insert the numbers where "length(group_n)" appears.
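A sketch of this step, using hypothetical data for 'group1' and 'group2' (the values here are made up purely for illustration):

```r
# Hypothetical example data: two groups of 10 subjects each
group1 <- c(5.1, 4.8, 6.2, 5.5, 5.9, 4.7, 5.3, 6.0, 5.6, 5.2)
group2 <- c(6.3, 6.8, 7.1, 6.0, 6.9, 7.4, 6.5, 7.0, 6.2, 6.6)

# Degrees of freedom: total subjects in both groups minus the number of groups
df <- length(group1) + length(group2) - 2
df   # 18
```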

2. Calculate the 'pooled variance' (remember, the variance is the square of the standard deviation). I have based this on a sample size of 10 in each group, hence the '* 9' bits (10 − 1 = 9); you can change this to whatever your group sample size is, minus 1:
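A sketch of that calculation, assuming two groups of 10 subjects each with hypothetical standard deviations 'sd1' and 'sd2' (substitute your own values):

```r
# Pooled variance for two groups of 10 subjects each: each group's
# variance (SD squared) is weighted by its sample size minus 1 (= 9),
# and the total is divided by the degrees of freedom, (10 + 10) - 2 = 18
sd1 <- 1.2   # hypothetical SD of group 1
sd2 <- 1.5   # hypothetical SD of group 2
pooled_var <- ((sd1^2 * 9) + (sd2^2 * 9)) / 18
pooled_sd  <- sqrt(pooled_var)   # pooled SD = square root of pooled variance
```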

Finally, you can calculate your *d* value (effect size) by dividing the absolute difference between the group means by the pooled SD, and then typing the name of the variable you stored it in to find out what the effect size is.
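A sketch of this last step, using hypothetical group means and a hypothetical pooled SD:

```r
# Effect size d = |mean of group 1 - mean of group 2| / pooled SD
# (hypothetical values: group means of 5.43 and 6.68, pooled SD of 1.36)
d <- abs(5.43 - 6.68) / 1.36
d   # print the effect size
```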

According to Cohen (1988), 0.2 or lower is a small effect size, ~ 0.5 is medium and 0.8 or greater is a large effect size. Effect sizes can be much larger than 0.8, so don't be surprised if it comes out as >1. Also, effect sizes can be negative numbers, but the same principle applies (i.e., -0.2 = small, -0.5 = medium, etc.).

## Calculate sample size:

If the sample sizes are equal, type:
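The command here is `pwr.t.test()` from the 'pwr' package (this sketch assumes the package is already installed; the d value shown is a placeholder):

```r
library(pwr)   # assumes the 'pwr' package is installed

# General form: pwr.t.test(n = , d = , sig.level = , power = , type = )
# Leave out the one argument you want R to solve for -- here n is omitted,
# so R will calculate the required sample size per group
pwr.t.test(d = 0.8, sig.level = 0.05, power = 0.8, type = "two.sample")
```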

where n is the sample size, *d* is the effect size, and 'type' indicates a two-sample t-test (testing one group's mean against another group's mean), a one-sample t-test (testing one group's mean against a predetermined mean, such as difference from '0' or from 'chance level') or a paired t-test (testing one group's mean at two different points in time).

So, if the effect size (d) = 0.5 (medium effect), and the sig.level and power are set at 0.05 and 0.8 respectively, for a two-sample t-test (i.e., two independent groups are being tested) R calculates the sample size as:
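Assuming the 'pwr' package is loaded, the call looks like this (n is left out, as that is the quantity being calculated):

```r
library(pwr)   # assumes the 'pwr' package is installed

pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.8, type = "two.sample")
# n comes out at roughly 63.8 per group, which rounds up to 64
```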

This means that if you expect to find a medium effect size between your treatments (0.5) and you want to be 80% confident of detecting this effect at the 0.05 significance level, you will need 64 subjects in each group. Note I have left out the 'n = ...' as that is what I am asking R to calculate. If I wanted to run a paired t-test (i.e., the same subjects at point 1 and point 2), I would simply change the 'type = "two.sample"' to 'type = "paired"'.

That is quite a lot! What if you only have 60 subjects at your disposal? Well, let's ask R. This time, the n = ... is back in, and the 'power = ...' is not there (as I know the sample size I have, and I want to know how much statistical power I have in my design).
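A sketch of that query, assuming the 60 subjects are split evenly into two groups of 30 (and the 'pwr' package is loaded):

```r
library(pwr)   # assumes the 'pwr' package is installed

# 60 subjects total = 30 per group; 'power' is omitted, so R calculates it
pwr.t.test(n = 30, d = 0.5, sig.level = 0.05, type = "two.sample")
# power comes out at roughly 0.48
```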

This means that if you run the experiment, you can only be 48% confident that you will detect the effect at p=0.05. Time to re-design your experiment I think! If it is possible to run this as a paired samples test, the power might look a little better. Why don't you have a go and see what you get! (Clue: The formula is the same as above, but 'type = ...' will change from ' ... two.sample' to '...paired').

If the sample sizes are unequal, the same basic principles apply, except the command in R is slightly different:
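That command is `pwr.t2n.test()`, also from the 'pwr' package; a sketch with hypothetical group sizes:

```r
library(pwr)   # assumes the 'pwr' package is installed

# n1 and n2 are the two (unequal) group sizes; 'power' is omitted,
# so R calculates the statistical power of this design
pwr.t2n.test(n1 = 28, n2 = 35, d = 0.5, sig.level = 0.05)
```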

Here, n1 and n2 are the sample sizes of each group.

Finally, R defaults to a two-tailed hypothesis test ('alternative = "two.sided"'). If you wish to calculate the sample size for a one-tailed test, you would change this to 'alternative = "less"' or 'alternative = "greater"', depending on your specific requirements.