Notetaking in the lab and the wild

Human behavior can be scientifically studied both in the laboratory and in the wild. This is the case with notetaking and other study behaviors. When politicians use the phrase “the science of learning,” it can be misleading to the public because science in laboratory settings and science in the wild can seemingly lead to different conclusions and related recommendations. I believe the controversy over the “science of reading” is related to this issue, but I have greater experience with notetaking and study behavior, so I will stick to explaining how this works in this more familiar area.

I have been referencing Daniel Willingham’s work a lot lately, and the following quote offers a good introduction to my point. Commenting on textbook companies building proven study aids into their textbooks for students, Willingham writes:

… if the readings include learning aids such as chapter outlines, chapter previews and summaries, boldface or italicized terms, or practice test questions, don’t try to use these learning aids as a replacement for reading the text. The funny thing about these features is that there’s very good research evidence that they work. Publishing companies paid to have high-quality research conducted; researchers had people read textbook chapters (with or without the learning aids), and they found that people who used the learning aids understood and remembered more than those who did not.

But the psychologists Regan Gurung and David Daniel pointed out that students “in the wild” will not necessarily use such materials the same way they were used by students in the laboratory. Gurung and Daniel suggested that some students use learning aids not to supplement the reading but to avoid it. They read the summary, look at the boldface terms, and then try to answer the practice test questions to see whether they understand enough to skip the reading.

Willingham and other researchers (e.g., Gurung) note that educational research conducted under carefully controlled conditions may not predict what happens in applied situations. Applied situations often involve interactions, as individuals make personal decisions about how learning strategies are applied. Learners may have different goals, abilities, or life situations that lead them to use strategies in ways not intended, or perhaps not at all. In addition, tactics intended for classroom situations may not encourage the development of the personal skills most likely to be used in life situations.

When I was still teaching, I sometimes contrasted attempting to do science with humans against what are often described as the “hard sciences” by noting that the chemicals in a chemical reaction don’t decide whether they feel like interacting.

In looking back on my own research, which was conducted in applied settings, I was continually frustrated by this type of issue. Much of my work focused on trying to create adaptive computer-supported study environments. The idea was that a computer could offer questions related to learning goals and use student accuracy and answer confidence to identify areas of weakness and to provide direct connections to the related textbook material. The system was designed to build a heat map of the more difficult material for each learner, to present questions related to those areas of difficulty more frequently during a study session, and even to provide on-screen access to the content a question targeted if the student wanted it. Built into the online delivery system were ways to record the amount of use, question performance, awareness of understanding, use of the online content, and the delay following wrong answers.

My frustration arose from the finding that the system was really designed to assist less capable students (lower reading ability, poorer metacognitive awareness of strengths and weaknesses), and these were the students who, as it turned out, were far less likely to use the system at all or to use it in ways the research suggested were helpful (e.g., taking advantage of the feedback following wrong answers, and especially wrong answers they thought they understood). This failure to use the system’s opportunities to recognize a lack of understanding is a good example of what Willingham, Gurung, and others have described. Even when investing time, these learners answered question after question without taking advantage of the opportunity to process feedback.
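For readers who like to see mechanisms spelled out, here is a minimal sketch of the kind of selection logic I am describing. It is not the code from my studies; the TopicRecord class, the topic names, the weighting formula, and the confidence threshold are illustrative assumptions. The idea is simply that topics with more errors, and especially confident errors, surface more often during a study session, which is the “heat map” notion mentioned above.

```python
import random
from dataclasses import dataclass


@dataclass
class TopicRecord:
    """Running record of a learner's answers for one topic or textbook section."""
    attempts: int = 0
    correct: int = 0
    confident_errors: int = 0  # wrong answers the learner was sure about

    def record(self, was_correct: bool, confidence: float) -> None:
        self.attempts += 1
        if was_correct:
            self.correct += 1
        elif confidence >= 0.75:  # illustrative threshold for "sure but wrong"
            self.confident_errors += 1

    def weakness(self) -> float:
        """Higher scores mean the topic needs more practice.
        Unseen topics get a moderate default so they still appear."""
        if self.attempts == 0:
            return 0.5
        error_rate = 1 - self.correct / self.attempts
        # Confident errors get extra weight: the learner does not
        # realize the material is not understood.
        return error_rate + 0.5 * (self.confident_errors / self.attempts)


def next_topic(records: dict[str, TopicRecord]) -> str:
    """Sample the next question's topic, biased toward weak areas."""
    topics = list(records)
    # Small floor so no topic disappears entirely from the session.
    weights = [records[t].weakness() + 0.05 for t in topics]
    return random.choices(topics, weights=weights, k=1)[0]


# Hypothetical learner who is confidently wrong about one section
records = {"encoding": TopicRecord(), "storage": TopicRecord(), "retrieval": TopicRecord()}
records["encoding"].record(was_correct=False, confidence=0.9)  # confident but wrong
records["storage"].record(was_correct=True, confidence=0.8)
print(next_topic(records))  # most often "encoding"
```

The detail worth noticing is the interaction of accuracy and confidence: a confident wrong answer is exactly the signal the weakest students in my studies were least likely to stop and act on.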

Understanding Why Tactics Work

The situations in which learners invest time but do so inefficiently are what I find most fascinating. Motivation makes a huge difference in learning, but it would seem less of an issue with these individuals. Perhaps motivation is reflected in how hard a learner works rather than how long. This way of thinking seems similar to the suggestion in Willingham’s “Outsmart Your Brain” that the brain interprets easier as better. It could follow that a possible remedy is a better understanding of how a given tactic works, not just learning how to perform it. Answering questions is harder than rereading but works better because it requires greater effort in actively engaging memory and thinking. Taking notes is better than highlighting because paraphrasing requires deeper cognitive processing. And so on.

I can’t help contrasting the fascination with tools and tactics, and the process-oriented debates, among those interested in Personal Knowledge Management with the attitudes of most students in formal learning settings. Perhaps this is just an impression on my part, but it seems generally to be the case. If I am correct, I think the difference lies in the opportunity self-directed learners have to set personal goals and, as a consequence, to invest time in trying to understand why differences in process matter. The only alternative I can imagine would involve more direct instruction, and how-to-study instruction is seldom emphasized and is often cut when resources are in short supply.

References

Daniel, D. B., & Poole, D. A. (2009). Learning for life: An ecological approach to pedagogical research. Perspectives on Psychological Science, 4(1), 91–96.

Grabe, M., & Flannery, K. (2009/2010). A preliminary exploration of on-line study question performance and response certitude as predictors of future examination performance. Journal of Educational Technology Systems, 38(4), 457–472.

Grabe, M., Flannery, K., & Christopherson, K. (2008). Voluntary use of online study questions as a function of previous minimal use requirements and learner aptitude. Internet and Higher Education, 11, 145–151.

Grabe, M., & Holfeld, B. (2014). Estimating the degree of failed understanding: A possible role for online technology. Journal of Computer Assisted Learning, 30, 173–186.

Gurung, R. A. R., & Daniel, D. B. (2005). Evidence-based pedagogy: Do pedagogical features enhance student learning? In D. S. Dunn & S. L. Chew (Eds.), Best practices for teaching introduction to psychology (pp. 41–55). Mahwah, NJ: Erlbaum.
