Are the questions that the faculty and students answer even relevant?
If you look at the data I posted in the data science education section (at least I think it was there), you'll find student-level data that lets you track a student from their first math class to their last and see the effect of the professor on whether or not the student passes. A metric like the percentage of students who passed a class, out of the number who started it, gives you far more insight into a faculty member's performance than a potentially biased reviewer.
The first time I was reviewed, the prof who sat in didn't like anything I did. At the end of the term, 32 of my 35 students took the final, and my class average was 2 points lower than the department average. The reviewer also started with 35 students; 14 took the final. From what I heard, half of those usually don't pass.
So when someone who sends the vast majority of their students out of the classroom (through drops and other attrition) reviews me and tells me I'm not good, should I really care? Why is their opinion worth more than used toilet paper?
I think the metrics we as faculty need to be evaluated on are:
1) How well did the students do in YOUR class?
2) How well did the students do in follow-up classes that use yours as a pre-req?
3) What percentage of the students you started with pass your class?
Review that data after the faculty member has taught a few sections of a course.
If you failed 7 of the 14 students who made it to the final, your students are only average in the next class, and just 25% of the students you started with passed, you have issues.
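Metric 3 above is simple to compute from student-level records. Here is a minimal sketch; the row layout (instructor name paired with a pass/fail flag for every student who *started* the section, not just those who finished) is a hypothetical assumption, and the numbers are toy values echoing the anecdote:

```python
from collections import defaultdict

def pass_rates(rows):
    """rows: iterable of (instructor, passed) pairs, one per student
    who STARTED the section, whether or not they took the final."""
    started = defaultdict(int)
    passed = defaultdict(int)
    for instructor, did_pass in rows:
        started[instructor] += 1
        passed[instructor] += int(did_pass)
    # Pass rate is measured against the starting roster, so drops
    # count against the instructor instead of vanishing.
    return {i: passed[i] / started[i] for i in started}

# Toy rosters: both instructors start with 35 students.
rows = ([("A", True)] * 25 + [("A", False)] * 10 +
        [("B", True)] * 7 + [("B", False)] * 28)
print(pass_rates(rows))  # A ≈ 0.71, B = 0.20
```

The key design choice is the denominator: dividing by the starting roster rather than by the students who sat the final is exactly what separates the two instructors in the story above.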
If you want to use a common final as a benchmark for teaching effectiveness for metric 1 above, fine. Just make sure the grades on that exam actually correlate with grades in follow-up classes. Since no one checks that, using the common final as a benchmark or guide is meaningless.
------------------------------
Andrew Ekstrom
Statistician, Chemist, HPC Abuser ;-)
------------------------------
Original Message:
Sent: 09-28-2020 13:21
From: Lauren Cappiello
Subject: Teaching Evaluation Rubric
Hi John,
I'm in a combined math/stat department, but we do peer teaching evaluations. The eval form is short and to the point - there are open response questions on preparation/organization, presentation of subject matter, and student involvement. There's also space for general comments and to comment on "overall effectiveness".
Our student eval form is similarly straightforward, but most of the questions are less open ended. I appreciate that the student eval form and peer eval form get at similar concepts! If you're looking to build your own peer eval rubric, you might start with the student eval questions.
------------------------------
Lauren Cappiello
Original Message:
Sent: 09-10-2020 08:37
From: John Kolassa
Subject: Teaching Evaluation Rubric
Dear Colleagues, does anyone have a teaching evaluation rubric, specific to statistics classes, that they can share? Thanks, John Kolassa
------------------------------
John Kolassa
Rutgers, the State University of NJ
------------------------------