Well, another approach is to fix power at a certain level, say 80%, and solve for the detectable effect given the available sample size and a desired significance level. But there's a problem, and it's the significance level. If only one hypothesis is going to be tested, then you could use the traditional alpha = 0.05. But with many hypotheses and many looks at the data, which is what often happens in secondary data analyses, a marginal alpha of 0.05 (i.e., 'p < .05') doesn't work well. What to do? One way would be to make the significance level more stringent, based on the number of tests that will be conducted, perhaps using an adjustment like Bonferroni or an FDR-based one (which would require some assumptions).
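Just to illustrate (a sketch only, assuming a two-sample t-test with equal group sizes; the n = 150 per group and 10 planned tests below are made-up numbers), the detectable effect size at 80% power with a Bonferroni-adjusted alpha could be computed in Python with statsmodels along these lines:

    # Sketch: detectable standardized effect size (Cohen's d) at 80% power,
    # with a hypothetical n = 150 per group and alpha Bonferroni-adjusted
    # for 10 planned tests.
    from statsmodels.stats.power import TTestIndPower

    n_tests = 10                      # hypothetical number of planned tests
    alpha_adj = 0.05 / n_tests        # Bonferroni-adjusted significance level

    detectable_d = TTestIndPower().solve_power(
        effect_size=None,             # leave None to solve for the effect size
        nobs1=150,                    # hypothetical per-group sample size
        alpha=alpha_adj,
        power=0.80,
        ratio=1.0,
        alternative='two-sided',
    )
    print(round(detectable_d, 3))     # detectable effect size in d units

With the more stringent alpha, the detectable effect size will of course come out larger than it would at the marginal 0.05 level.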
As Bob Gerzoff mentioned, you would probably be better served by looking at confidence intervals and measures of effect size. However, with many hypotheses, traditional marginal 95% confidence intervals have the same problem as 'p < 0.05': they fail to account for the uncertainty due to multiplicity. A relatively simple way to deal with this is to estimate simultaneous confidence intervals using an FDR approach. See the 2005 JASA paper by Benjamini and Yekutieli, 'False Discovery Rate-Adjusted Multiple Confidence Intervals for Selected Parameters':
http://www.math.tau.ac.il/~yekutiel/papers/JASA%20FCR%20prints.pdf
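To give a rough idea of the FCR procedure in that paper (a sketch only, assuming approximately normal estimates; the estimates and standard errors below are made up), in Python:

    # Sketch of FCR-adjusted intervals (Benjamini & Yekutieli, 2005 JASA):
    # select parameters with BH at level q, then build intervals for the
    # selected ones at the adjusted confidence level 1 - R*q/m.
    import numpy as np
    from scipy.stats import norm
    from statsmodels.stats.multitest import multipletests

    q = 0.05                                   # desired false coverage rate
    est = np.array([0.42, 0.10, -0.35, 0.05])  # hypothetical estimates
    se  = np.array([0.15, 0.12,  0.14, 0.11])  # hypothetical standard errors

    pvals = 2 * norm.sf(np.abs(est / se))      # two-sided p-values
    selected, _, _, _ = multipletests(pvals, alpha=q, method='fdr_bh')

    m, R = len(est), selected.sum()            # total and selected counts
    if R > 0:
        z = norm.ppf(1 - R * q / (2 * m))      # FCR-adjusted critical value
        lower = est[selected] - z * se[selected]
        upper = est[selected] + z * se[selected]
        for e, lo, hi in zip(est[selected], lower, upper):
            print(f"{e:6.2f}  [{lo:6.2f}, {hi:6.2f}]")

The intervals for the selected parameters are simply widened from the marginal 95% level to 1 - Rq/m, which keeps the expected proportion of selected parameters whose interval misses the true value below q.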
------------------------------
Andres Azuero
UAB
Original Message:
Sent: 06-24-2016 11:15
From: Erick Suarez Perez
Subject: Power calculation
Dear All:
When a secondary data analysis is performed, is it recommended to compute the statistical power based on the observed data?
Best regards,
Erick