Final week and a comment on evaluation

So, it is final week, and final week brings with it the intense focus on reading and grading that goes along with being a prof. I need a break.

I have been reading several books critical of the U.S. educational system and its teachers, books arguing that we are failing the business community and failing to prepare the younger generation to compete with the Chinese, the Norwegians, or whomever, and I was starting to get depressed. I would like to think of myself as a serious scholar committed to preparing the next generation for their opportunity to make a living, have children, and think meaningful thoughts, but perhaps I have been deluding myself. By the way, I have started to dilute the depressing stuff with the book by Daniel Willingham, and I am now less concerned about my world view.

Did you ever notice how Andy Rooney begins many of his 60 Minutes segments by asking “did you ever notice”?

Did you ever notice how multiple choice tests are often cast as the culprit in the sad state of education? I don’t mean MC tests as in high-stakes testing and NCLB, but the claim that multiple choice tests are why we focus far too much on memorization (actually, read Willingham and note that the importance of knowing stuff is not necessarily an endorsement of rote memorization). Just so this post does not get out of control and I can get back to grading, I will make two points.

1) MC test items are not equivalent to a focus on facts and knowing stuff (as if this were proven to be a bad thing).

One of my favorite things is to get into a discussion regarding cognitive skill hierarchies with someone (OK, so this may not be one of your favorite things). I love when someone attempts to educate me regarding such hierarchies ’cause a) I can usually demonstrate that alternative evaluation devices such as essay exams are typically focused on factual recall, and b) yes, I have heard of Bloom’s taxonomy.

Regarding Bloom’s hierarchy – I happen to have one of his books (with colleagues Hastings and Madaus) on my shelf, and because I am old I have actually read what he wrote in the original. In fact, if you are concerned with cognitive skill hierarchies, you might explore this book and consider the examples provided. It appears Bloom felt multiple choice items were an acceptable way to exemplify the assessment of cognitive skills above the levels of knowledge and comprehension. Many MC items are offered to help the reader understand the differences among levels of the taxonomy. Hmm… who knew.

Point noted – what is possible is not necessarily what educators do. I agree. However, see my earlier comment regarding the types of skills tapped by the majority of essay items. Asking students to list and explain the same three points you made in class is not higher-level thinking just because the information has to be written in a blue book. I apologize for resorting to stereotypes – I know of no one who uses blue books.

2) Student reaction to different types of assessment and how instructors may adjust

I have been an administrator for more than a dozen years, and I have been trying to think of any case in which a student came to me complaining about items on an instructor’s MC test. Honestly, I can think of no example. What I tend to field are complaints about the way papers, essay exams, and presentations have been evaluated. Despite rubrics, concerns about fairness and consistency persist. Just what is it that I should do in such situations? Should I attempt to score the various products myself to see if there is some pattern of bias? I love the complaints about the evaluation of presentations. All I have are the marked rubric and the student’s claim. It is an impossible situation to evaluate.

My point? I am not sure. I am just noting that students are more likely to complain when instructors make the effort to do what some argue is the only acceptable way to assess.

BTW – I have a similar position on developing “higher level” MC items (which I do when I work with hundreds of students at a time). Pretty much by definition, when you attempt to evaluate something other than retention, you have to use items that involve information you did not provide. I am puzzled by the complaint that sometimes results from this: “you did not tell us that ….” I try to respond consistently: “you are correct, I did not tell you that ….” I DO NOT explain that “if I told you that, this would be a recall question, and I am supposed to be assessing your ability to apply, think critically, etc.” I actually do explain that this is my intent at the beginning of the course, but when students are complaining after the fact about how my approach is unfair, I tend to just listen.

I am of the opinion that students would not feel cheated if you were to ask them some obscure fact. Why? You could show them where the obscure fact was in the book, and what could they say? That would be fair. Not particularly meaningful, but fair. There is some unfairness in asking questions that involve assumed background knowledge, awareness of the world, or awareness of what commonly happens in the work environment students are preparing for. This is not typically information provided in a course. I think of this as a sampling problem. I assume the general awareness necessary to respond to such “scenario” or application items evens out across items. I cannot deny that for any given item some students will be at a disadvantage. There are all kinds of sampling problems in education. What topics/skills did those three essay items that made up the final hit or miss? What skills did that oral presentation hit/miss?

So, assessment and the relationship of assessment to learning are complex issues and nothing at the surface level, in my opinion, guarantees success or failure. Much depends on the specifics of whatever assessment approach is taken.

Back to grading (essay exams)
