**As we talked about previously, 'sig.level' is usually set to 0.05 and 'power' to 0.8. A power of 0.8 means that you have an 80% chance of detecting a significant effect if one exists.**

## Calculate effect size:

There are various effect sizes used in one-way ANOVA designs, all based on the proportion of the total variance that is explained by the relationship between the grouping factor and the response. You may, for example, have come across partial eta^{2} (η^{2}) if you have used SPSS's General Linear Model function. Cohen's *f* is a useful measure of effect size, is calculated from the partial eta^{2} values, and is relatively simple to calculate in R. It is also handy for us because the 'pwr' library in R uses it to calculate required sample size.

To calculate *f* we will use the following equation:

*f* = √( η^{2} / (1 − η^{2}) )

where η^{2} (eta^{2}) is calculated by dividing the between-group variance (the 'sum of squares', SS_{B}) by the total variance of the sample (SS_{T}, i.e. the between-group SS plus the residual SS). You will find these values in your ANOVA table. For example, the ANOVA table below (based on fictitious data) describes the effect of my three-level treatment (called 'group') on some continuous response variable (called 'response'). The summary of the ANOVA shows me the between-groups variance (group Sum Sq) and the residual variance (Residuals Sum Sq). In the ANOVA table, the between-groups variance (group SS) is 2210 and the residual SS is 116980, giving SS_{T} = 2210 + 116980 = 119190. So in R I would type:
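A minimal sketch of that calculation in base R, using the sums of squares quoted above (the variable names are my own):

```r
# Sums of squares read off the (fictitious) ANOVA table
ss_group <- 2210                    # between-groups SS
ss_resid <- 116980                  # residual SS
ss_total <- ss_group + ss_resid     # total SS

eta2 <- ss_group / ss_total         # eta^2: proportion of variance explained
f <- sqrt(eta2 / (1 - eta2))        # Cohen's f
f
# f comes out at roughly 0.14
```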

which gives me an effect size (*f*) of about 0.14. According to Cohen (1988), *f* ≈ 0.1 is a weak effect, ≈ 0.25 is a medium effect and ≈ 0.5 is a strong effect, so in this case we have a fairly weak effect. This should be obvious from the ANOVA table above, as the F ratio is < 1 (0.557) and the associated p-value is 0.576; the effect size further demonstrates that the variable 'group' explains very little of the variance in 'response'.

**If you are working from published data, the researchers should have reported the sums of squares!! In practice, of course, this may well not be the case. If the study was recent, it is worth emailing the authors to ask for the values. I am sure they will be more than happy to provide the information you need.**

**If the values are not published and you can't get hold of them, check whether the paper reports any means and SDs for the treatment groups. Sometimes (for example, if the study is about the effect of a drug at different doses on response x) there may be one dose that you are particularly interested in. Although it will not give an accurate effect size if the number of treatments is > 2, you could calculate a *d* value as if the authors had carried out a t-test. Again, this is not strictly correct, but it is better than nothing!**
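As a sketch of that fallback, here is Cohen's *d* from two groups' means and SDs (the numbers are made up purely for illustration; the pooled-SD form of *d* is used):

```r
# Hypothetical published summary statistics: control vs. the dose of interest
m1 <- 10.2; sd1 <- 3.1; n1 <- 20
m2 <- 12.5; sd2 <- 3.4; n2 <- 20

# Pooled standard deviation
sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))

# Cohen's d, as if this were a two-group t-test
d <- (m2 - m1) / sd_pooled
d
# d comes out at roughly 0.71 with these made-up numbers
```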

## Calculate sample size:

OK, so now we have the effect size, we can carry out the power analysis to calculate the required sample size. The basic command in library(pwr) in R is:
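The command meant here is `pwr.anova.test()`; a sketch of its argument structure (in the 'pwr' package, sig.level defaults to 0.05):

```r
library(pwr)  # install.packages("pwr") if you don't have it

# pwr.anova.test(k = , n = , f = , sig.level = , power = )
# Leave exactly one of the arguments out (NULL) and
# pwr.anova.test() calculates it from the others.
```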

here, *k* is the number of groups and *n* is the sample size in each group. If the groups are not the same size (an unbalanced design), this is not a major problem for a one-way ANOVA (it can be for a two-way ANOVA though, so beware!), but you must use the LOWEST group *n* for the power analysis (i.e., the number you get will be the minimum required if you are calculating sample size).

So, if the effect size (*f*) = 0.25 (a medium effect) and the sig.level and power are set at 0.05 and 0.8 respectively, then for a one-way ANOVA (here, I have three independent groups) R calculates the required sample size for me. Notice that I leave the 'n = ...' argument out, as this is what I want R to calculate. The answer is that in order to be 80% confident that I will detect a significant effect at the 0.05 cut-off point when the effect size (*f*) is medium (0.25), I will need at least 52 subjects in each condition. That is quite a lot!! What happens if you only have 90 subjects at your disposal (30 per group)? Let's ask R:
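A sketch of the two calls described here, using `pwr.anova.test()` from the 'pwr' package (the output values are the ones quoted in the text):

```r
library(pwr)

# Solve for n: 'n' is left out, so R calculates it
pwr.anova.test(k = 3, f = 0.25, sig.level = 0.05, power = 0.8)
# n comes out at roughly 52 per group

# Solve for power: 90 subjects over 3 groups = 30 per group; 'power' is left out
pwr.anova.test(k = 3, n = 30, f = 0.25, sig.level = 0.05)
# power comes out at roughly 0.54
```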

This means that if you run the experiment, you can only be 54% confident that you will detect the effect at p = 0.05. Time to re-design your experiment, I think!