This might be useful for some of you. It’s a really good hands-on power-analysis tutorial for the most common designs in psychology and cognitive science, from simple t-tests up to more complex designs like a 2x2 repeated-measures ANOVA, all covered for Bayesian analyses as well. It provides benchmarks for how large samples need to be for typical effect sizes, depending on the number of factors, the pattern of results you predict (e.g. different interaction patterns), and the strength of the correlations between within-subject measures. It definitely goes beyond the tutorials I have seen so far.
Abstract: Given that an effect size of d = .4 is a good first estimate of the smallest effect size of interest in psychological research, we already need over 50 participants for a simple comparison of two within-participants conditions if we want to run a study with 80% power. This is more than is typical in current practice. In addition, as soon as a between-groups variable or an interaction is involved, numbers of 100, 200, and even more participants are needed. As long as we do not accept these facts, we will keep on running underpowered studies with unclear results. Addressing the issue requires a change in the way research is evaluated by supervisors, examiners, reviewers, and editors. The present paper describes reference numbers needed for the designs most often used by psychologists, including single-variable between-groups and repeated-measures designs with two and three levels, two-factor designs involving two repeated-measures variables or one between-groups variable and one repeated-measures variable (split-plot design). The numbers are given for the traditional, frequentist analysis with p < .05 and Bayesian analysis with BF > 10. These numbers provide researchers with a standard to determine (and justify) the sample size of an upcoming study. The article also describes how researchers can improve the power of their study by including multiple observations per condition per participant.
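As a rough sanity check on the abstract's first benchmark (d = .4, 80% power, two within-participants conditions), here is a minimal normal-approximation sketch in Python using only the standard library. Note this is my own illustration, not code from the paper, and the normal approximation underestimates the exact answer (based on the noncentral t distribution) by a participant or two:

```python
from math import ceil
from statistics import NormalDist

def approx_n_paired(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a paired (within-participants) t-test.

    n ≈ ((z_{1-alpha/2} + z_{power}) / d)^2, where d is the standardized
    effect size of the condition difference (Cohen's d_z).
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided test
    z_beta = z(power)
    return ceil(((z_alpha + z_beta) / d) ** 2)

# Benchmark from the abstract: d = .4, 80% power, alpha = .05
print(approx_n_paired(0.4))  # → 50; the exact t-based calculation lands slightly higher, matching "over 50"
```

Raising the desired power (e.g. to 90%) or lowering the smallest effect size of interest pushes the required n up quickly, which is the abstract's core point.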