Rebecca,
If you would like a contrarian perspective on experimental design courses and statistical issues in general, I'm happy to oblige! Though trained as an ecologist, I early on became dismayed by the poor quality of statistical analyses and poor advice put out by so many scientists (and statisticians) in manuscript reviews, stat texts, stat encyclopedias, etc., that I eventually developed and offered, for about 25 years, a course on experimental design (for students with only 1 or 2 prior semesters of statistics). Many of my lecture notes eventually were transformed, in conjunction with a few colleagues, into published articles on statistical malpractice and into reviews of experimental design texts.
Here is some of that output:
Hurlbert, S.H. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54: 187-211.
Hurlbert, S. H. 1990. Pastor binocularis: Now we have no excuse [review of Design of experiments by R. Mead]. Ecology 71: 1222-1228.
Hurlbert, S. H. & White, M. D. 1993. Experiments with freshwater invertebrate zooplanktivores: Quality of statistical analyses. Bulletin of Marine Science 53:128-153.
Hurlbert, S. H. 1997. Experiments in ecology [Review of book by same title by A.J. Underwood]. Endeavour 21: 172-173.
Hurlbert, S.H. & Lombardi, C. M. 2003. Design and analysis: Uncertain intent, uncertain result [Review of Experimental design and data analysis for biologists, by G.P. Quinn & M.J. Keough]. Ecology 83: 810-812.
Hurlbert, S.H. & Meikle, W.G. 2003. Pseudoreplication, fungi, and locusts. Journal of Economic Entomology 96: 533-535.
Kozlov, M. V. 2003. Pseudoreplication in Russian ecological publications. Bulletin of the Ecological Society of America 84: 45-47. [Condensation of original article published in Russian in Zhurnal Obstchei Biologii [Journal of Fundamental Biology] 64: 292-397.]
Hurlbert, S. H. 2004. On misinterpretations of pseudoreplication and related matters: A reply to Oksanen. Oikos 104: 591-597.
Hurlbert, S. H. 2009. The ancient black art and transdisciplinary extent of pseudoreplication. Journal of Comparative Psychology 123: 434-443.
Hurlbert, S.H. 2010. Pseudoreplication capstone: Correction of 12 errors in Koehnle & Schank (2009). Department of Biology, San Diego State University, San Diego, California. 5 pp.
Hurlbert, S.H. 2012. Pseudofactorialism, response structures, and collective responsibility. Austral Ecology 38: 646-663 + suppl. inform.
Hurlbert, S.H. 2013a. Affirmation of the classical terminology for experimental design via a critique of Casella's Statistical Design. Agronomy Journal 105: 412-418 + suppl. inform.
Hurlbert, S.H. 2013b. [Review of Biometry, 4th edn, by R.R. Sokal & F.J. Rohlf]. Limnology and Oceanography Bulletin 22(2): 62-65.
Hurlbert, S.H. & Lombardi, C.M. 2016. Pseudoreplication, one-tailed tests, neoFisherianism, multiple comparisons, and pseudofactorialism. Integrated Environmental Assessment and Management 12:195-197.
I'd be happy to send you PDFs of any or all of these. Now, as to your specific questions:
- I'd say the best book out there by far is R. Mead's Design of Experiments (see my review).
- I regard Montgomery as a poor text because, like the majority of design textbooks, it pretty much ignores the conceptual and terminological frameworks for experimental design that had been developed, mostly by folks like Fisher, Finney, D.R. Cox, Yates, Kempthorne, etc., by the 1950s. Montgomery is a terminological mess (see my critique of Casella and my article on pseudofactorialism). As I recall, Montgomery seems completely unfamiliar with the concept of the experimental unit.
- It would seem to me that rather than having the students conduct an experiment, it would be much more useful and empowering for them to critically evaluate a set (10-20) of experimental papers. For my course, consisting mostly of biology grad students, I had them evaluate 25 papers as a major independent project that took up their lab time for the last half of the semester. I taught them beforehand how to easily spot about a half dozen of the commonest errors, these in many cases consisting of a conflict between the design employed and the analysis conducted. The students picked a particular topic, or journal, or author, etc., and got their set of papers on their own. For each experiment they had to tabulate the information on the three aspects of the design (treatment structure, design (or error control) structure, and response structure) and tabulate all statistical errors found.
It varies among topics or subdisciplines, but typically the students were able to find major errors in 20 to 50 percent of the papers in their set. This gave them a somewhat jaundiced view of science and a healthy distrust of "authority" and glossy publications, but also a sense of empowerment, a sense that they could do better than many of their elders.
I'd be glad to send you my instructions to the students for that project if you'd like. Best regards, Stuart
------------------------------
Stuart Hurlbert
Emeritus Professor of Biology
San Diego State University
------------------------------
Original Message:
Sent: 09-08-2017 10:58
From: Rebecca Pierce
Subject: Seeking your thoughts about a Design of Experiments course
I'll be teaching a design of experiments course next semester, which I have taught several times previously. It is an undergraduate senior-level course taught jointly with a first-year graduate course. Thus, the intent is for the course to be at an introductory level. We use Montgomery's Design and Analysis of Experiments as the textbook, and I supplement the class with content from G. Cobb's Design and Analysis of Experiments and Paul Mathews's Design of Experiments with MINITAB, as well as several other books. In addition, I have my students complete a project. However, I'd like to revitalize the course and seek your input about what other books you consider appropriate as well as how you have students conduct an actual experiment. Any other ideas/comments/suggestions are also welcomed. Thanks!
------------------------------
Rebecca Pierce
Ball State University
------------------------------