Let me begin with a simple question. Would anyone notice if students learned more? It is an important question because, while so many exert pressure for improvement in student (and instructor) performance, I am not certain we would notice if change occurred.
I will allow you to define what “learn more” means and I realize that the “more of what” question is presently the source of much controversy within the educational community. Still, whether more learning is operationalized in terms of knowledge, procedural skills, or some combination, it would seem logical that the dependent variable in education should be a measure of learning. I intend to make the point that at the level of the class, the department, or the institution, there is little recognition for improving learning.
Here is why I think “awareness” is an issue. My approach to this question is as an educational researcher. I think I have some feel for what performance changes are associated with various interventions evaluated in research studies. As an instructor, I also think I have some insight into the extent of change that is practical and how the classroom environment may determine the awareness of possible benefits of such change.
As a hypothetical example, assume an instructor modifies instruction/learning experiences in a way that increases student performance by 10%. This would be a significant improvement, and if such data represented the dependent variable in a research study evaluating an innovative intervention, such results would likely appear in some journal. Would students notice? Would administrators notice? I wonder.
Assume in this example that students experience the benefit in the form of an increase in examination scores. The average score for a given examination might improve from 40 to 44. If grading standards remain the same, a few students might receive a higher grade as a consequence. Many would likely receive the same grade. If the average grade were raised, this might be regarded as grade inflation. If several instructors were teaching similar courses and some used the beneficial methods, resulting in improved student performance, would students in the different sections appreciate the benefits? How would administrators evaluating the instructors responsible for these different courses recognize that improvement had occurred?
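The point about fixed grading standards can be made concrete with a toy simulation. All of the scores and cutoffs below are invented for illustration; the idea is simply that a 10% rise in raw performance lifts the class average from 40 to 44 yet moves only a few students across a grade boundary.

```python
# Hypothetical class of eight students, mean score 40.
scores = [25, 31, 35, 38, 41, 44, 48, 58]
improved = [s * 1.10 for s in scores]  # same students after a 10% gain

def grade(score):
    """Map a raw score to a letter grade using fixed (invented) cutoffs."""
    for cutoff, letter in ((60, "A"), (50, "B"), (40, "C"), (30, "D")):
        if score >= cutoff:
            return letter
    return "F"

before = [grade(s) for s in scores]
after = [grade(s) for s in improved]
changed = sum(b != a for b, a in zip(before, after))
print(f"{changed} of {len(scores)} grades changed")  # → 3 of 8 grades changed
```

Under these assumptions, most students keep the same letter grade even though every student's underlying performance improved.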
Consider the same questions at different levels – say at the level of an academic department or even an institution. If a university were able to improve student achievement by 10% would anyone notice? How would this accomplishment be documented?
These are important questions. In my experience as a researcher, the small and incremental improvement I describe is the type of change that seems most likely. What concerns me as an instructor is that such improvements likely go unnoticed and unappreciated. We continue to attend to other data sources that are suspect and tangential. Impressions are noticed, but changes in capabilities tend to be ignored.
I wish I knew what to suggest. Standardized testing as presently implemented has known limitations. Teaching to the test and limiting exposure to topics that are not going to be tested generate data of questionable validity. The “improvement” in this case would involve learning more about less which is not the same as learning more.
Here is a technique I am exploring. Each year I teach “Introduction to Psychology”. Part of the assessment strategy involves multiple choice examinations. I have begun tracking class performance on specific items. I repeat a small number of items (not always the same ones) across semesters and compare the present class score for these items with the scores from past classes. This discrepancy score gives me a feel for the relative performance of the present class. I am certain you can identify limitations in this method, and so can I, but until someone offers a reasonable alternative at least I have some numbers to consider. At least I might notice.
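One way the discrepancy score might be computed is sketched below. The item identifiers and percent-correct values are invented; the calculation simply averages, over the repeated items, the difference between the current class's score and the historical mean for that item.

```python
# Percent of students answering each repeated item correctly.
# All item IDs and numbers are hypothetical.
past_classes = {                    # item_id -> scores from past semesters
    "item_07": [62, 58, 65],
    "item_19": [74, 71, 70],
    "item_33": [48, 52, 50],
}
current_class = {"item_07": 68, "item_19": 73, "item_33": 55}

def discrepancy_score(past, current):
    """Average (current score - historical mean) across the repeated items."""
    diffs = []
    for item, history in past.items():
        historical_mean = sum(history) / len(history)
        diffs.append(current[item] - historical_mean)
    return sum(diffs) / len(diffs)

print(round(discrepancy_score(past_classes, current_class), 2))  # → 4.22
```

A positive score suggests the present class is outperforming earlier classes on the repeated items; a negative score suggests the reverse. Nothing here controls for item exposure or cohort differences, which are among the limitations the method admits.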