Hello Brandy. First, of course, Pearson's correlation measures association, not agreement; the ICC and kappa measure agreement. The ICC, obtained via ANOVA, measures agreement for continuous measures (for example, laboratory serum values) through the within- and between-subject variance components. Kappa, obtained from contingency tables, measures rater agreement on a discrete scale.
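To make the two approaches concrete, here is a minimal sketch in Python of both computations: ICC(1) from a one-way ANOVA decomposition, and Cohen's kappa from a square contingency table. The function names and the toy data are my own illustration, not from any particular package.

```python
def icc_oneway(ratings):
    """ICC(1): one-way random effects, single rater.

    ratings: list of rows, one per subject, each with k ratings."""
    n = len(ratings)                    # number of subjects
    k = len(ratings[0])                 # ratings per subject
    grand = sum(sum(row) for row in ratings) / (n * k)
    # Between-subjects and within-subjects sums of squares
    ss_between = k * sum((sum(row) / k - grand) ** 2 for row in ratings)
    ss_within = sum((x - sum(row) / k) ** 2 for row in ratings for x in row)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cohens_kappa(table):
    """Cohen's kappa from a square contingency table
    (rows = rater A's categories, columns = rater B's)."""
    m = len(table)
    total = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(m)) / total       # observed agreement
    row_marg = [sum(row) / total for row in table]
    col_marg = [sum(table[i][j] for i in range(m)) / total for j in range(m)]
    p_exp = sum(r * c for r, c in zip(row_marg, col_marg))   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

For instance, two raters agreeing perfectly on three subjects gives an ICC of 1.0, and a 2x2 table of [[20, 5], [10, 15]] gives kappa = 0.4.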
See Bartko, On the Methods and Theory of Reliability, The Journal of Nervous and Mental Disease, Vol. 163, No. 5, 1976, pages 307-317.
There are algebraic mappings between the ICC and Kappa.
For ICC effect sizes see, for example, Cicchetti, Domenic (1994), Psychological Assessment, 6(4), 284-290.
For kappa effect sizes see Landis and Koch (1977), The measurement of observer agreement for categorical data, Biometrics, 33, 159-174.
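The two benchmark scales just cited are often quoted with the cutoffs below; this small sketch (my own wording of the commonly quoted thresholds, not code from either paper) maps a kappa or ICC value to its label.

```python
def landis_koch(kappa):
    """Landis & Koch (1977) descriptive labels for kappa."""
    if kappa < 0:
        return "poor"
    for cutoff, label in [(0.20, "slight"), (0.40, "fair"),
                          (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= cutoff:
            return label
    return "almost perfect"

def cicchetti(icc):
    """Cicchetti (1994) descriptive labels for the ICC."""
    if icc < 0.40:
        return "poor"
    if icc < 0.60:
        return "fair"
    if icc < 0.75:
        return "good"
    return "excellent"
```

So, for example, a kappa of 0.70 is "substantial" agreement on the Landis-Koch scale, while an ICC of 0.80 is "excellent" by Cicchetti's guidelines.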
See also Fleiss's book Statistical Methods for Rates and Proportions (Wiley).
The above just skims the surface, but you may find something of use here.
All the best. John Bartko
------------------------------
John Bartko
Consulting Biostatistician
------------------------------
Original Message:
Sent: 04-09-2018 08:39
From: Brandy Sinco
Subject: Effect Size for Intraclass Correlation Coefficient
Hi ASA Community,
Any recommendations for articles about effect size guidelines for intra-class correlation coefficients?
According to Cohen (1992), a Pearson correlation coefficient of .1 has a small effect size, .3 is a medium effect size, and .5 is a large effect size. I would appreciate any articles or other references about effect sizes for intraclass correlation coefficients.
Reference mentioned above:
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159.
--
Regards,
Brandy R. Sinco
Statistician and Programmer/Analyst, UM School of Social Work
Current Projects:
Mon, Wed, Fri CHW Integration/REACH;
Tues, Thurs RISE/WCBT