ASA Connect

  • 1.  Seeking your thoughts about a Design of Experiments course

    Posted 09-08-2017 10:59
    I'll be teaching a design of experiments course next semester which I have taught several times previously.  It is an undergraduate senior level taught with first year graduate course. Thus, the intent is for the course to be at an introductory level.  We use Montgomery's Design and Analysis of Experiments as the textbook and I supplement the class with content from G. Cobb's Design and Analysis of Experiments and Paul Mathew's Design of Experiments with MINITAB as well as several other books.  In addition, I have my students complete a project.  However, I'd like to revitalize the course and seek your input about what other books you consider appropriate as well as how you have students conduct an actual experiment.  Any other ideas/comments/suggestions are also welcomed. Thanks!

    ------------------------------
    Rebecca Pierce
    Ball State University
    ------------------------------


  • 2.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-12-2017 10:38
    Rebecca,
    If you would like a contrarian perspective on experimental design courses and statistical issues in general, I'm happy to oblige! Though trained as an ecologist, I became dismayed early on by the poor quality of statistical analyses and the poor advice put out by so many scientists (and statisticians) in manuscript reviews, statistics texts, statistics encyclopedias, etc. Eventually I developed, and offered for about 25 years, a course on experimental design for students with only one or two prior semesters of statistics. Many of my lecture notes were eventually transformed, in conjunction with a few colleagues, into published articles on statistical malpractice and into reviews of experimental design texts.

    Here is some of that output:

    Hurlbert, S. H. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54: 187-211.

    Hurlbert, S. H. 1990. Pastor binocularis: Now we have no excuse [review of Design of experiments by R. Mead]. Ecology 71: 1222-1228.

    Hurlbert, S. H. & White, M. D. 1993. Experiments with freshwater invertebrate zooplanktivores: Quality of statistical analyses. Bulletin of Marine Science 53:128-153.

    Hurlbert, S. H. 1997. Experiments in ecology [Review of book by same title by A.J. Underwood]. Endeavour 21: 172-173.

    Hurlbert, S.H. & Lombardi, C. M. 2003. Design and analysis: Uncertain intent, uncertain result [Review of Experimental design and data analysis for biologists, by G.P. Quinn & M.J. Keough]. Ecology 83: 810-812.

    Hurlbert, S.H. & Meikle, W.G. 2003. Pseudoreplication, fungi, and locusts. Journal of Economic Entomology 96: 533-535.

    Kozlov, M. V. 2003. Pseudoreplication in Russian ecological publications. Bulletin of the Ecological Society of America 84: 45-47. [Condensation of original article published in Russian in Zhurnal Obstchei Biologii [Journal of Fundamental Biology], 64, 292-397].

    Hurlbert, S. H. 2004. On misinterpretations of pseudoreplication and related matters: A reply to Oksanen. Oikos 104: 591-597.

    Hurlbert, S. H. 2009. The ancient black art and transdisciplinary extent of pseudoreplication. Journal of Comparative Psychology 123: 434-443.

    Hurlbert, S.H. 2010. Pseudoreplication capstone: Correction of 12 errors in Koehnle & Schank (2009). Department of Biology, San Diego State University, San Diego, California. 5 pp.

    Hurlbert, S.H. 2012. Pseudofactorialism, response structures, and collective responsibility. Austral Ecology 38: 646-663 + suppl. inform.

    Hurlbert, S.H. 2013a. Affirmation of the classical terminology for experimental design via a critique of Casella's Statistical Design. Agronomy Journal 105: 412-418 + suppl. inform. 

    Hurlbert, S.H. 2013b. [Review of Biometry, 4th edn, by R.R. Sokal & F.J. Rohlf]. Limnology and Oceanography Bulletin 22(2): 62-65. 

    Hurlbert, S.H. & Lombardi, C.M. 2016. Pseudoreplication, one-tailed tests, neoFisherianism, multiple comparisons, and pseudofactorialism. Integrated Environmental Assessment and Management 12:195-197.

    I'd be happy to send you PDFs of any or all of these. Now as to your specific questions:

    - I'd say the best book out there by far is R. Mead's Design of Experiments (see my review).

    - I regard Montgomery as a poor text: like the majority of design textbooks, it largely ignores the conceptual and terminological frameworks for experimental design that had been developed, mostly by people like Fisher, Finney, D. R. Cox, Yates, and Kempthorne, by the 1950s.  Montgomery is a terminological mess (see my critique of Casella and my article on pseudofactorialism).  As I recall, Montgomery is completely unfamiliar with the concept of the experimental unit.

    - It would seem to me that rather than have the students conduct an experiment, it would be much more useful and empowering for them to critically evaluate a set (10-20) of experimental papers. For my course, consisting mostly of biology grad students, I had them evaluate 25 papers as a major independent project that took up their lab time for the last half of the semester. Beforehand, I taught them how to easily spot about a half dozen of the commonest errors, which in many cases consist of a conflict between the design employed and the analysis conducted. The students picked a particular topic, journal, author, etc. and got their set of papers on their own. For each experiment they had to tabulate the information on the three aspects of the design (treatment structure, design (or error control) structure, and response structure) and tabulate all statistical errors found.

    It varies among topics and subdisciplines, but typically the students were able to find major errors in 20 to 50 percent of the papers in their set. This gave them a somewhat jaundiced view of science and a healthy distrust of "authority" and glossy publications, but also a sense of empowerment, a sense that they could do better than many of their elders.

    I'd be glad to send you my instructions to the students for that project if you'd like.  Best regards, Stuart



    ------------------------------
    Stuart Hurlbert
    Emeritus Professor of Biology
    San Diego State University
    ------------------------------



  • 3.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-13-2017 05:09

    Dear Rebecca,

    One useful book might be "Optimal design of experiments: a case-study approach" by Goos and Jones. The book is full of case studies and it has been written in a special style that students like a lot. The book also gives students a flavor of what consulting may be like.

    If you want students to conduct an experiment, you can do the usual stuff with the Six Sigma catapult, or perform an 8-factor virtual garden sprinkler experiment via http://twilights.be/sprinkler/ for example.

    Kind regards,

    Peter



    ------------------------------
    Peter Goos
    Full Professor
    KULeuven
    ------------------------------



  • 4.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-13-2017 12:04
    Rebecca,

    I teach DOE for Green Belts and Black Belts. I use the books from Douglas Montgomery as background information but rely heavily on the creation of the design and the analysis of the results using Minitab.  I believe that performing a classroom experiment drives the point home.  I use the catapult exercise, the paperclip bending exercise (destructive DOE), the helicopter exercise, and the water bubble exercise (for mixtures).

    In addition to the Montgomery classic on DOE, I use "Response Surface Methodology" (Myers, Montgomery, and Anderson-Cook), "The Analysis of Messy Data" (Milliken and Johnson), "Experiments with Mixtures" (Cornell), "Statistics for Experimenters" (Box, Hunter, and Hunter), "Design and Analysis of Experiments" (Hinkelmann and Kempthorne), and "Experiments" (Wu and Hamada).

    Regards,
    Filiep

    ------------------------------
    Filiep Samyn
    ------------------------------



  • 5.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-13-2017 13:29
    I'd use Montgomery's book as the main text and supplement it with Peter and Brad's book, Optimal Design of Experiments. There shouldn't be a need for a Stats with Minitab book; Montgomery's book covers a lot of that material.

    As far as topics go, perhaps have students run Plackett-Burman designs and definitive screening designs on the same system. It should be easy enough to get students to look at making the best cup of coffee on the department coffee pot, or the best cup of tea. I'd skip Taguchi designs; they are obsolete, and far better designs exist, like optimal response surface designs.
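    For readers unfamiliar with the construction, a Plackett-Burman screening design for up to seven two-level factors in eight runs can be obtained from a Hadamard matrix. A minimal Python sketch (my own illustration, not something from this thread; the factor assignment to columns, e.g. coffee-brewing settings, would be the experimenter's choice):

```python
import numpy as np
from scipy.linalg import hadamard

# An 8x8 Hadamard matrix has mutually orthogonal +/-1 columns; its first
# column is all ones.  Dropping that column leaves an 8-run design for
# up to 7 two-level factors, coded -1/+1.
H = hadamard(8)
design = H[:, 1:]  # 8 runs x 7 factors

# Each factor column is balanced (sums to zero) and orthogonal to every
# other column, which is what makes main effects estimable in so few runs.
print(design)
```

    Each column would then be mapped to a real factor (grind size, water temperature, etc.) at its low and high settings before running the experiment.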

    I also like the idea of using research articles as discussion points in a DOE class. Science is full of poorly designed experiments. It could be fun to have students review 5-6 papers and then design an experiment based on them. I'd also look for journal articles with factorial and response surface designs and data. Use that data as homework problems and see if your students come to the same conclusions as the authors. (I doubt they will.)

    ------------------------------
    Andrew Ekstrom

    Statistician, Chemist, HPC Abuser;-)
    ------------------------------



  • 6.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-13-2017 13:45
    I don't currently teach Design of Experiments (actually I am only teaching part time as an adjunct right now; my day job is in medical research), so I don't have suggestions for textbooks, etc.  But I do have an idea that I have been working on in my spare time (not a lot of it, since I am not currently teaching the class).

    The idea is for a more interactive data-generating experience than using existing data already in the book (or elsewhere), but quicker than doing a full experiment (even quicker than lab experiments like statapults or paper helicopters).  Basically, the instructor sets up some examples where there is a mathematical model behind the scenes that represents the "truth".  This model could be fully fictional, based on an actual analysis, resampled from a larger dataset, or built some other way.  The important part is that the model will generate realistic data given a file with an experimental design.  The instructor would also have a "Data Story" to go along with the model, explaining a realistic situation (possibly a real experiment that has been done), along with the possible variables that can be used in the design and instructions on creating the file for the design.  The model sits behind a webpage.  I have created a couple of examples using R and Shiny, but R knowledge is not needed to actually use them once everything is set up; any stats program can be used to work with the examples, because students just upload a csv file with the design and download a new csv file that has the outcome data attached.

    I think this is more natural than pointing to a dataset and explaining, after seeing the data, what the experimental design was.

    This can be used as part of the teaching: the instructor would talk about a particular experimental design, create the input file showing the design, then generate the outcomes and show the analysis, graphs, etc.  This also allows for comparing different designs with the same "Data Story": show how to design and analyze a randomized block design, then, by contrast, design an incomplete block design, generate new data, and analyze it for comparison.

    This can also be used directly by the students for homework, take-home exams, projects, practice, fun, etc.  You would just point them to the website with the details; they would create the design file, upload it, download the file that has the outcomes, and analyze it with their tool of choice.  We can even put in a question to seed the random number generator (e.g. with a name or student ID) so that each student gets a unique dataset, but you could regenerate the exact same data for checking.
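    The seeded, per-student data generation described above can be sketched as follows. This is a minimal illustration in Python rather than R/Shiny; the factor names (`temp`, `time`), the coefficients of the hidden "truth" model, and the function name are all made up for the example:

```python
import hashlib

import numpy as np
import pandas as pd

def generate_responses(design: pd.DataFrame, student_id: str) -> pd.DataFrame:
    """Attach simulated responses to an uploaded design.

    The hidden 'truth' model below is hypothetical; an instructor would
    choose coefficients to match the Data Story.
    """
    # Derive a reproducible seed from the student ID: the same student
    # always regenerates identical data, but different students differ.
    seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16) % 2**32
    rng = np.random.default_rng(seed)

    out = design.copy()
    # Two active main effects, one interaction, plus noise.
    out["y"] = (
        50.0
        + 4.0 * out["temp"]
        - 2.5 * out["time"]
        + 1.5 * out["temp"] * out["time"]
        + rng.normal(0.0, 2.0, size=len(out))
    )
    return out
```

    In a Shiny (or similar) front end, `design` would be read from the uploaded csv and the returned frame offered back as the download.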

    I have even looked at having the program generate a cost for the design, so that you can show the difference in cost (money, time, other) between, for example, a full factorial and a fractional factorial design.

    Let me know if you would be interested in trying this out and I can send you some of the sample code.

    ------------------------------
    Greg Snow
    ------------------------------



  • 7.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-15-2017 12:29
    For my course on experimental design I use Maxwell & Delaney, Designing Experiments and Analyzing Data (2nd ed.).  The focus is on psychology experiments, but that should not matter.  I would have preferred Winer (2nd ed.), but it's older and out of print.  In addition to the Fisherian-variant designs, I present models for equivalence and non-inferiority designs, and group sequential designs not typically covered in the standard experimental design course.  For these topics, I assign articles by Makuch & Simon, Pocock, O'Brien, Lan, DeMets, Ware, et al.  No tests, but two projects based on databases available on the web.

    ------------------------------
    Gene S. Fisch
    CUNY/Baruch College
    ------------------------------



  • 8.  RE: Seeking your thoughts about a Design of Experiments course

    Posted 09-16-2017 16:33
    I agree with Stuart and Andrew that critiquing papers in the literature is important. I'm not so sure about doing experiments such as paper helicopters (unless the course is mainly for engineering students), because actual experiments in biology, medicine, and the social sciences typically run up against messier situations than something cleaner like paper helicopters.

    ------------------------------
    Martha Smith
    University of Texas
    ------------------------------