The Medium is the Message

Marshall McLuhan’s famous declaration “The medium is the message” never made sense to me. It sounded cool, but on the surface there was not enough there to offer much of an explanation. It seemed one of those things other people understood and used, but I did not. Perhaps I had missed the class or not read the book in which the famous phrase was explained.

The expression came up again in the book club I joined while we were reading a book by Johns (The Science of Reading). A sizeable proportion of one chapter considers McLuhan’s famous proposal and provides a reference to his first use of the phrase. The phrase originated as a comment he made at a conference, an idea he then continued to develop.

The page is not a conveyor belt for pots of message; it is not a consumer item so much as a producer of unique habits of mind and highly specialized attitudes to person and country, and to the nature of thought itself (…) Let us grant for the moment that the medium is the message. It follows that if we study any medium carefully we shall discover its total dynamics and its unreleased powers.

Print, by permitting people to read at high speed and, above all, to read alone and silently, developed a totally new set of mental operations.

Johns’ book is about the history of the study of reading as a science, with attention to how reading and the methods by which reading skill is developed became a political issue. My effort to create a personal understanding of what any of this has to do with McLuhan is based on my consideration of different media and what McLuhan had to say specifically about reading. I have come to think about reading as a generative activity, a topic I write about frequently. From this perspective, reading is an external task that gives priority to certain internal behaviors. In contrast to some other media, reading allows personal control of speed. A reader can take in information quickly or pause to reflect. A reader can reread. Text sometimes requires the reader to generate imagery, in contrast to having imagery offered to them as would be the case with video. Reading cannot transfer a complete experience from author to reader, and much is constructed by the reader based on existing knowledge. Reading also has a social component. In most cases reading involves an implied interaction with an author, but also with others who have interpreted the same input and who often interact to share personal interpretations.

What McLuhan had to say about media now reminds me of the notion of affordances. Affordance refers to the potential actions or uses that an object or environment offers to an individual, based on its design and the individual’s perception of it. The term was originally coined by psychologist James J. Gibson in the context of ecological psychology to describe the possibilities for action that the environment provides. Affordances can be obvious (like a door handle that affords pulling) or less obvious, depending on how the individual perceives and interacts with the object or environment. It is this less obvious type of affordance that applies to our expectations for texts and how we anticipate texts will be used. Factors such as the allowances a medium offers for controlling speed and pausing to reflect, with content that is essentially static when we are not interacting with it, are more like the obvious affordances Gibson proposes.

Those who reject a media effect

Having reached what I hope is an appropriate understanding of McLuhan’s famous insight, I realized that I have encountered a contradictory argument commonly taught within one of my fields of practice (educational technology). This controversy concerns what tends to be called the media effect.

The “media effect” refers to the idea that the medium or technology used to deliver instruction (such as television, computers, or textbooks) has a significant impact on learning outcomes. This concept suggests that different media can produce different levels of learning or change the way people learn.

This perspective was challenged by Richard Clark in his influential 1983 article, “Reconsidering Research on Learning from Media.” Clark argued that the medium itself does not influence learning; rather, it is the instructional methods and content delivered through the medium that determine learning outcomes. Clark famously stated, “media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.”

Clark’s challenge to the media effect emphasized that it’s the instructional design, the way content is presented, and the interaction between learners and content that are crucial for learning, not the medium through which the instruction is delivered.

I always struggled when teaching this position. Instructional designers are expected to consider this argument, but my interpretation never allowed me to understand why it would be true. If I wanted to teach someone the crossover dribble, wouldn’t it make more sense to begin by showing the move rather than describing it with text? I understand that each of us learns through our own cognitive actions, but how we access inputs (external representations) would seem to matter in what our cognitive behaviors have to work with. When you ask advanced students to deal with arguments such as Clark’s that challenge actions they might be prone to take, it is common to match the challenging position with a source that offers a counterargument. I paired Clark’s paper with a paper written by Robert Kozma. If you are inclined to pursue this controversy, I recommend this combination.

Does it matter?

Possibly. I think we are experiencing changes in how we experience information. Most of us encounter more and more video, both for entertainment and for learning, and it is worth considering how we might be influenced by the medium of input. If we are learning more frequently from video, how do we process the video experience with the kind of control we can exercise over text?

References:

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.

Johns, A. (2023). The science of reading: Information, media, and mind in modern America. University of Chicago Press.

Kozma, R. B. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.


Desirable Difficulty

Despite a heavy focus on cognitive psychology in the way I researched and explained classroom study tactics, I had not encountered the phrase desirable difficulty until I became interested in the handwritten vs. keyboard note-taking research. I discovered the idea when reviewing studies by Luo and colleagues and Mueller and Oppenheimer. Several studies have claimed students are better off taking notes by hand than on a laptop despite being able to record information significantly faster when using a keyboard.

Having a more complete set of notes would seem an advantage, so the combination of more notes and poorer performance is counterintuitive. Researchers speculated that learners who understood they had to make decisions about what they had time to record selected information more carefully and possibly summarized rather than recorded verbatim what they heard. This focus on what could be described as deeper processing seemed like an example of desirable difficulty. The researchers also proposed that the faster keyboard recording involved shallow cognitive processing.

Note: I am still a fan of more complete notes, and the methodology used when demonstrating better performance from recording notes by hand needs to be carefully considered. I will say more about my argument at the end of this post.

Desirable difficulty, an idea attributed to Robert Bjork, has been used to explain a wide variety of retention phenomena. Bjork suggested that retrieval strength and storage strength are distinct phenomena and that learners can be misled when an approach to learning is evaluated based on retrieval strength. I find these phrases a bit confusing as applied, but I understand the logic. Students cramming for an exam make a reasonable example. Cramming results in what may seem to be successful learning (retrieval strength) but results in poorer retention over an extended period of time (storage strength). Students may understand and accept the disadvantages of cramming, so the distinction is not necessarily unrecognized by learners. In a more recent book on learning for the general public, Daniel Willingham suggests that the brain is really designed to avoid rather than embrace thinking because thinking is effortful. The human tendency is to rely on memory rather than thinking. Desirable difficulty may be a way to explain why some situations that require thinking prevent a more rote approach.

Increasing difficulty to improve retention

There are multiple tactics for productively increasing difficulty that I tend to group under the heading of generative learning. I describe generative activities as external tasks intended to increase the probability of productive cognitive (mental) behaviors. I suppose desirable difficulty is even more specific, differentiating external tasks along a difficulty dimension. So in the following list of tasks, it is useful to imagine more and less difficult versions of each task. Often the less difficult version is the option learners choose to apply. In connecting these tactics with personal experience, I would recommend you consider the use of flashcards to conceptualize what would be the easier and the more challenging application (a short code sketch following the list illustrates the contrast). Then, move beyond flashcards to other study tactics and consider if you can identify similar contrasts.

Retrieval Practice: Testing oneself on the material rather than passively reviewing notes is considered retrieval practice. The classic empirical demonstration of the retrieval practice or the testing effect compared reviewing content versus responding to questions. Even when controlling for study time, spending some time on questions was superior. With the flashcard applications I recommended you consider, answering multiple-choice questions would be less challenging than answering short-answer questions (recognition vs recall).

Spacing (Distributed Practice): Instead of cramming, spreading out study sessions over time is more productive. This method helps improve long-term retention and understanding. Spacing allows some retrieval challenges to develop and the learner must work harder to locate the desired information in memory. See my earlier description of Bjork’s distinction between retrieval strength and storage strength. 

Interleaving: Mixing different types of problems or subjects in one study session. For example, alternating between math problems and reading passages rather than focusing on one at a time. A simple flashcard version of this recommendation might be shuffling the deck between review cycles. Breaking up the pattern of the review task increases the difficulty and requires greater cognitive effort.
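To make the easier/harder contrast concrete, here is a minimal sketch of a flashcard drill. The cards, function names, and matching logic are my own illustrations, not material from any of the studies cited: recall (typing an answer) is the harder option, recognition (picking from choices) the easier one, and shuffling between cycles is a simple form of interleaving.

```python
import random

deck = [
    {"front": "Who is credited with the idea of desirable difficulty?", "back": "Robert Bjork"},
    {"front": "Retrieval practice is also known as the ___ effect.", "back": "testing"},
]

def recall_drill(cards):
    """Harder: free recall with no cues."""
    for card in cards:
        answer = input(card["front"] + " ").strip().lower()
        print("Correct!" if answer == card["back"].lower() else f"Answer: {card['back']}")

def recognition_drill(cards):
    """Easier: recognize the answer among options drawn from the deck."""
    answers = [c["back"] for c in cards]
    for card in cards:
        options = random.sample(answers, len(answers))
        print(card["front"])
        for i, option in enumerate(options, 1):
            print(f"  {i}. {option}")
        choice = int(input("Choice: ")) - 1
        print("Correct!" if options[choice] == card["back"] else f"Answer: {card['back']}")

random.shuffle(deck)  # interleaving: break up the review order between cycles
recall_drill(deck)    # swap in recognition_drill(deck) for the easier version
```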

Other thoughts

First, the concept of committing to more challenging tasks is broader than the well-researched examples I provide here. Writing and teaching could be considered examples in that both tasks require an externalization of knowledge that is both generative and evaluative. It is too easy to fake it and make assumptions when the actual creation of a product is not required.

Second, desirable difficulty seems to me to be a guiding principle that does not explain all of the actual cognitive mechanisms involved. The specific mechanisms may vary with the activity – some might be motivational, some evaluative (metacomprehension), and some at the level of basic cognitive activities. Creating retrieval challenges, for example, probably prompts an attempt to find alternate or new connections among stored elements of information. In trying to put a name with a face, one might attempt to remember the circumstances in which one may have met or worked with this person, and this may activate a connection that is not typically used and is not automatic. After being retired for 10 years and trying to remember the names of coworkers, I sometimes picture the arrangement of our offices, working my way down the appropriate hallway, and this sometimes helps me recall names.

I did say I was going to return to the use of desirable difficulty as a justification for the advantage of taking notes by hand. If keyboarding allows faster data entry than handwriting, in theory keyboarding would allow more time for thinking, paraphrasing, and whatever other advantages are attributed to the slower recording method. Awareness and commitment would seem to be the issues here. However, I would think complete notes would have greater long-term value than sparse notes. One always has the opportunity to think while studying, and a more complete set of notes would seem to provide more external content to work with.

References:

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185-205). Cambridge, MA: MIT Press.

Luo, L., Kiewra, K. A., Flanigan, A. E., & Peteranetz, M. S. (2018). Laptop versus longhand note taking: Effects on lecture notes and achievement. Instructional Science, 46(6), 947-971.

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159-1168.

Willingham, D. T. (2021). Why don’t students like school?: A cognitive scientist answers questions about how the mind works and what it means for the classroom. John Wiley & Sons.


Potential conflicting benefits of your note-taking tool and approach

As I have explored and used several digital note-taking tools and examined the arguments that have been made regarding how such tools result in productivity benefits, I have identified a potential conflict in what produces more positive outcomes. The recognition of this conflict allows more purposeful execution on the part of the tool user and may better align activities with goals.

One way to identify note-taking goals is to use a long-standing approach differentiating generative and external storage benefits. This distinction was proposed long before PKM and was applied in the analysis of notes taken in classroom settings. The generative benefit proposes that the process of taking notes, or sometimes of taking notes in a particular way, engages our cognitive (mental) processes in ways that improve retention and understanding. External storage implies that our memory becomes less effective over time and having access to an external record (the notes) benefits our productivity. In practice (e.g., a student in a classroom) both benefits may apply, but the external storage benefit depends on the note-taking activity: taking notes may or may not be beneficial in itself, but to review notes one must have something to review. This is not always true, as notes in one form or another can be provided or perhaps generated (for example, AI identification of key ideas), but taking your own notes is by far the most common experience. In a PKM way of thinking, these two processes may function in different ways, but the classroom example should be familiar as a way to identify the theoretical benefits of note-taking.

I have written about the generative function of note-taking at length, but it is important to point out some unique specifics that apply to some digital note-taking tools. A source such as Ahrens’ How to Take Smart Notes might provide the right mindset. I think of generative activities as external actions intended to produce a beneficial mental (cognitive) outcome. The idea is that external activities can encourage or change the likelihood of beneficial thinking behaviors. One way of operationalizing this perspective is to consider some of the specific activities Ahrens identified as external work resulting in such cognitive benefits. What are some of these activities? Isolating specific ideas and summarizing each as a note. Assigning tags that characterize a note. Making the effort to link notes. Periodically reviewing notes to generate retrieval practice, to reword existing notes, and to add new associations (links). The sketch below shows one way these activities might be represented in code.
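Here is a minimal sketch of those note operations (isolate, tag, link), assuming a simple in-memory structure; the Note class, vault dictionary, and helper names are my own illustration, not Ahrens’ method or any specific tool’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str                                # one idea, summarized in your own words
    tags: set = field(default_factory=set)   # tags that characterize the note
    links: set = field(default_factory=set)  # titles of related notes

vault = {}  # title -> Note

def add_note(title, body, tags=()):
    vault[title] = Note(title, body, set(tags))

def link(a, b):
    # Deciding that two notes belong together is the generative work.
    vault[a].links.add(b)
    vault[b].links.add(a)

add_note("Desirable difficulty", "Harder retrieval can strengthen storage.", {"memory"})
add_note("Retrieval practice", "Testing beats rereading for retention.", {"memory"})
link("Desirable difficulty", "Retrieval practice")
```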

Retrieval is easier to explain. Note-taking apps with highly effective search capabilities make it easy to search and surface stored information when it might be useful. Links and tags may also be useful in this role, but search alone will often be sufficient.

What about the potential conflict?

The conflict I see arises when some tools or approaches rely more heavily on search, arguing in effect that generative processes are unnecessary.

I started thinking about this assumption when contrasting the two note-taking systems I rely on – Mem.ai and Obsidian. While Mem.ai and Obsidian could be used in exactly the same way, Mem.ai developers argued that the built-in AI capabilities could eliminate the need to designate connections (with tags and links) because the AI capabilities would identify these connections for you. Thus, when retrieving information via search, a user could use AI to also consider the notes with overlapping foci. If a user relied on this capability, it would eliminate the work required to generate the connections manually created in Obsidian, but this approach would then also forgo the generative benefits of this work.

AI capabilities fascinate me, so I found a way to add a decent AI capability to Obsidian. Smart Connections is an Obsidian plugin that finds connections among notes and allows a user to chat with their notes. In other words, I was able to mimic Mem.ai functionality within Obsidian.

These capabilities have led me to alter my more general PKM approach. Rather than taking individual notes while reading, I can annotate and highlight PDFs, books, and videos, export the entire collection for each source, and then bring this content into both Mem.ai and Obsidian as a very large note. Far easier than taking individual notes, but at what generative cost?

Smart Connections has added a new feature that further facilitates the use of the large-note approach. Smart Connections finds connections based on AI embeddings. An embedding is a mathematical representation of content (which I would describe as weights, based on what I remember of statistics). The more similar the weights of two notes’ embeddings, the more likely the notes consider similar ideas. Smart Connections uses embeddings to propose related notes. Originally, embeddings were generated at the note level; now they are generated at the “block” level. What this means is that Smart Connections can find the segments of a long document that have a focus similar to a selected note.
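Here is a minimal sketch of how embedding-based similarity works, using toy vectors; this is my illustration of the general technique, not Smart Connections’ actual implementation. Cosine similarity compares the direction of two vectors, so content with a similar focus scores close to 1.

```python
import numpy as np

def cosine_similarity(a, b):
    # Near 1.0 means very similar content; near 0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_blocks(note_vec, block_vecs):
    """Rank blocks of a long document by similarity to a selected note."""
    scores = [(i, cosine_similarity(note_vec, v)) for i, v in enumerate(block_vecs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Toy vectors standing in for real embeddings of one note and three blocks.
note = np.array([0.9, 0.1, 0.3])
blocks = [np.array([0.8, 0.2, 0.4]),   # similar focus
          np.array([0.1, 0.9, 0.0]),   # different focus
          np.array([0.5, 0.5, 0.5])]
print(rank_blocks(note, blocks))  # the first block should rank highest
```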

Why is this helpful? When I read long documents (PDFs of journal articles or books in Kindle), I can export a long document containing the highlights and notes generated from these sources. With Smart Connections, I can then simply import this exported material into Obsidian and use Smart Connections to connect a specific note to blocks of all such documents. I can skip breaking up the long document into individual notes, assigning tags, and creating links.

Why is this a disadvantage? Taking advantage of this capability can be a powerful disincentive to engaging in the generative activities involved in creating and connecting individual notes that the basic version of Obsidian requires.

Summary

As note-taking tools mature and add AI capabilities, it is important for users to consider how the way they use such tools can impact their learning and understanding. The tools themselves are quite flexible but can be used in ways that avoid generative tasks that impact learning and understanding. If the focus is on the retrieval of content for writing and other tasks, the generative activities may be less important. However, if you started using a tool such as Obsidian because a book such as How to Take Smart Notes influenced you, you might want to think about what might be happening if you rely on the type of AI capabilities I have described here.

References
Ahrens, S. (2022). How to take smart notes: One simple technique to boost writing, learning and thinking. Sönke Ahrens.


Does flipping the classroom improve learning?

The instructional strategy of “flipping the classroom” is one of those recommendations that seems on first consideration to make a lot of sense. The core idea hinges on the truth that classroom time with students is limited and efficient use must be made of this time. Instead of taking up a substantial amount of this time with teacher presentations, why not move the exposure to content outside of class time and use class time for more active tasks such as helping students who have problems and allowing students to engage in active tasks with other students? With easy access to tools for recording presentations and sharing recordings online, why not simply have educators share presentations with students and have students review this material before class? So, presentations were flipped from class time to settings that might have been more frequently used for homework.

This all seemed very rational. I cannot remember where I first encountered the idea, but I did purchase Flip Your Classroom (Bergmann and Sams, 2012), written by the high school teachers who I believe created the concept. While I did use my blog and textbook to promote this approach, I must have always wondered. I wrote a blog post in 2012 commenting that flipping the classroom sounded very similar to my large-lecture experience of presenting to hundreds of students and expecting that these students would have read the textbook before class. Again, the logic of following up an initial exposure with an anecdote-rich and expanded focus on key concepts seemed sound. However, I knew this was not the way many students used their textbooks, and some probably did not even make the purchase, but I was controlling what I could control.

There have been hundreds of studies evaluating the flipping strategy and many meta-analyses of these studies. These meta-analyses tend to conclude that asking students to watch video lectures before coming to class is generally beneficial. I think many have slightly modified the suggested in-class component, expanding the notion of greater teacher-student interaction to include a focus on active learning. Kapur et al. (2022), authors of the meta-analysis I will focus on eventually, list the following experiences as examples of active learning – problem-solving, class discussions, dialog and debates, student presentations, collaboration, labs, games, and interactive and simulation-based learning activities.

The institution where I taught had a group very much interested in active learning and several special active learning “labs” were created to focus on these techniques. The labs contained tables instead of rows of chairs, whiteboards, and other adaptations. To teach a large class in this setting you had to submit a description of the active techniques you intended to implement. The largest classes (200+) I taught could not be accommodated in these rooms and I am not certain if I would have ever submitted a proposal anyway. 

Kapur et al. (2022)

Kapur and colleagues found reason to add another meta-analysis to those already completed. While their integrated analysis of the meta-analytic papers concluded that flipped classrooms have an advantage, Kapur and colleagues were puzzled by the great variability present among the studies. Some studies demonstrated a great advantage in student achievement for the flipped approach, and some found that traditional instruction was superior. It did not seem reasonable that a basic underlying advantage would be associated with this much variability, and the researchers proposed that a focus on the average effect size without consideration of the source or sources of this variability made little sense. They conducted their own meta-analysis and coded each study according to a variety of methodological and situational variables.

The most surprising finding from this approach was that the inclusion of active learning components was relatively inconsequential. Remember that the use of such strategies in the face-to-face setting was emphasized in many applications. Surprisingly, segments of lecture within the face-to-face setting were a better predictor of an achievement advantage. Despite the break from the general understanding of how flipped classrooms are expected to work, educators seemed to use these presentations to review or supplement independent student content consumption, and this provided an achievement bump.

The active learning component found to make a difference involved a problem-based strategy, particularly when the entire process began with a problem-based experience. This finding reminds me of the problem-based learning research conducted by Deanna Kuhn, who also proposed that a problem-based experience start the learning sequence. Kapur used the phrase productive failure to describe the way struggling with a problem before encountering relevant background information was helpful. Kuhn emphasized a similar process without the catchy label and proposed the advantage was more a matter of the activation of relevant knowledge and the guiding of interpretation of the content presentation that followed.

Regarding the general perspective on the flipped model identified by Kapur and colleagues, their findings were less an indictment of the concept than a demonstration of the lack of fidelity in implementations to the proposed advantage of using face-to-face time to interact and adjust to student needs. Increasing responsiveness to the needs of individual students would seem beneficial and may be ignored in favor of activities that are less impactful.

References:

Kapur, M., Hattie, J., Grossman, I., & Sinha, T. (2022). Fail, flip, fix, and feed – Rethinking flipped learning: A review of meta-analyses and a subsequent meta-analysis. Frontiers in Education, 7, 956416.

Pease, M. A., & Kuhn, D. (2011). Experimental analysis of the effective components of problem-based learning. Science Education, 95(1), 57-86.

Wirkala, C., & Kuhn, D. (2011). Problem-based learning in K–12 education: Is it effective and how does it achieve its effects? American Educational Research Journal, 48, 1157-1186.


Flashcard Effectiveness

This post is a follow-up to my earlier post promoting digital flashcards as an effective study strategy for learners of all ages. In that post, I suggested that educators were at times opposed to rote learning, assuming that strategies such as flashcards promoted a shallow form of learning that limited understanding and transfer. While this might appear to be the case because flashcards seem to involve a simple activity, the cognitive mechanisms involved in trying to recall and reflecting on the success of such efforts provide a wide variety of benefits.

The benefits of using flashcards in learning and memory can be explained through several cognitive mechanisms:

1. Active Recall: Flashcards engage the brain in active recall, which involves retrieving information from memory without cues (unless the questions are multiple-choice). This process strengthens the memory trace and increases the likelihood of recalling the information later. Active recall is now more frequently described as retrieval practice and its benefits as the testing effect. Hypothesized explanations for why efforts to recall, even unsuccessful ones, are associated not only with increased future recall but also with broader benefits such as understanding and transfer offer a counter to the concern that improving memory is necessarily a focus on rote. More on this at a later point.

2. Spaced Repetition: When used systematically, flashcards can facilitate spaced repetition, a technique where information is reviewed at increasing intervals (see the scheduler sketch following this list). This strengthens memory retention by exploiting the psychological spacing effect, which suggests that information is more easily recalled if learning sessions are spaced out over time rather than crammed into a short period.

3. Metacognition: Flashcards help learners assess their understanding and knowledge gaps. Learners often have a flawed perspective of what they understand. As learners test themselves with flashcards, they become more aware of what they know and what they need to focus on, leading to better self-regulation in learning.

4. Interleaving: Flashcards can be used to mix different topics or types of problems in a single study session (interleaving), as opposed to studying one type of problem at a time (blocking). Interleaving has been shown to improve discrimination between concepts and enhance problem-solving skills.

5. Generative Processing: Generative learning can be described as external activities that encourage helpful cognitive behaviors. Responding to questions and even creating questions have been extensively studied and demonstrate achievement benefits.
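To make the spaced repetition mechanism concrete, here is a minimal scheduler sketch based on the Leitner box system; the box count and interval values are illustrative assumptions, not the algorithm any particular flashcard app uses.

```python
from datetime import date, timedelta

INTERVALS = [1, 3, 7, 14, 30]  # days between reviews for boxes 0-4 (illustrative)

class Card:
    def __init__(self, front, back):
        self.front, self.back = front, back
        self.box = 0               # new cards start in the most frequent box
        self.due = date.today()

    def review(self, correct):
        if correct:
            self.box = min(self.box + 1, len(INTERVALS) - 1)  # promote: longer interval
        else:
            self.box = 0                                      # miss: back to frequent review
        self.due = date.today() + timedelta(days=INTERVALS[self.box])

def due_cards(deck):
    """Only cards whose interval has elapsed are reviewed today."""
    return [card for card in deck if card.due <= date.today()]
```

The point of the increasing intervals is exactly the desirable difficulty described earlier: a card returns just as retrieval has become effortful again.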

Several of these techniques may contribute to the same cognitive advantage. These methods (interleaving, spaced repetition, recall rather than recognition) increase the demands of memory retrieval, and greater demands force a learner to move beyond rote. Learners must search for the ideas they want, and effortful search activates related information that may provide a link to what they are looking for. An increasing number of possibly related ideas become available within the same time frame, allowing new connections to be made. Connections can be thought of as understanding and, in some cases, creativity.

This idea of the contribution of challenge to learning can be identified in several different theoretical perspectives. For example, Vygotsky proposed the concept of a Zone of Proximal Development that positions ideal instruction as challenging learners a bit above their present level of functioning, but within the level of what a learner could take on with a reasonable chance of understanding. A more recent but similar concept proposing the benefits of desirable difficulty came to my attention as the explanation given for why taking notes on paper was superior to taking notes using a keyboard. The proposal was that keyboarding is too efficient, while learners who record notes by hand are forced to think more carefully about what they want to store. Deeper thought was required when the task was more challenging.

Finally, I have been exploring the work of researchers studying the biological mechanisms responsible for learning. As someone with practical limits on my time, I don’t spend a lot of time reviewing the work done in this area. I understand that memory is a biological phenomenon and cognitive psychologists do not focus on this more fundamental level, but I have also yet to find insights from biological research that required me to think differently about how memory happens. Anyway, a recent book (Ranganath, 2024) proposes something called error-driven learning. The researcher eventually backs away a bit from this phrase, suggesting that it does not require you to make a mistake but happens whenever you struggle to recall.

The researcher proposes that the hippocampus enables us to “index” memories for different events according to when and where they happened, not according to what happened. The hippocampus generates episodic memories by associating a memory with a specific place and time. As to why changes in contexts over time matter, memories stored in this fashion become more difficult to retrieve. Activating memories with spaced practice creates an effortful and more error-prone retrieval but, if successful, offers a different context connection. So, spacing potentially offers different context links because different information tends to be active in different locations and times (information other than what is being studied would be active), and it involves retrieval practice, as greater difficulty involves more active processing and exploration of additional associations. I am adding concepts such as spacing and retrieval practice from my cognitive perspective, but I think these concepts fit very well with Ranganath’s description of “struggling.”

I have used the term episodic memory in a slightly different way. However, the way Ranganath describes changing contexts over time seems useful as an explanation for what has long been appreciated as the benefit of spaced repetition in the development of long-term retention and understanding.

When I taught memory issues in educational psychology, I described the difference between episodic and declarative memories as similar to the difference between students’ memory for a story and their memory for facts or concepts. I proposed that studying, especially trying to convert the language and examples of the input (what they read or heard in class) into their own way of understanding with personal examples that were not part of the original content, was something like converting episodic representations (stories) into declarative representations linked to relevant personal episodic elements (students’ own stories). This is not an exact representation of human cognition in several ways. For example, even our stories are not exact; they are biased by past and future experiences and can change with retelling. However, it is useful as a way to develop what might be described as understanding.

So, to summarize, memory tasks, even seemingly simple ones such as basic factual flashcards, can introduce a variety of factors conducive to a wide variety of cognitive outcomes. The assumption that flashcards are useful only for rote memory is flawed.

Flashcard Research 

There is considerably more research on the impact of flashcards than I realized, including some recent studies that are specific to digital flashcards.

Self-constructed or provided flashcards – When I was still teaching, the college students I saw using flashcards were obviously using paper flashcards they had created. My previous post focused on flashcard tools for digital devices. As part of that post, I referenced sources for flashcards that were prepared by textbook companies and topical sets prepared by other educators and offered for use. I was reading a study comparing premade versus learner-created flashcards (description to follow) and learned that college students are now more likely to use flashcards created by others. I guess this makes some sense considering how easy digital flashcard collections are to share. The question, then, is whether questions you create yourself are better than a provided collection that covers the material you are expected to learn.

Pan and colleagues (2023) asked this question and sought to answer it in several studies with college students. One of the issues they raised was the time required to create flashcards. They controlled the time available for the treatment conditions, with some participants having to create flashcards during the fixed amount of time allocated for study. Note – this focus on time is similar to retrieval practice studies in which some participants spend part of the study phase responding to test items while others are allowed to study as they like. The researchers also conducted studies in which the flashcard group created flashcards in different ways – transcription (typing the exact content from the study material), summarization, and copying and pasting. The situation investigated here seems similar to note-taking studies comparing learner-generated notes and expert notes (quality notes provided to learners). With both types of research, one might imagine a generative benefit to learners in creating the study material and a completeness/quality issue. The researchers did not frame their research in this way, but these would be alternative factors that might matter.

The results indicated that self-generated flashcards were superior. The researchers also found that copy-and-paste flashcards were effective, which surprised me, and I wonder if the short time allowed may have been a factor. At least one can imagine using copy and paste as a quick way to create flashcards using the tool I described in my previous flashcard post.

Three-answer technique – Senzaki and colleagues (2017) evaluated a flashcard technique focused on expanding the types of associations used in flashcards. They proposed their types of flashcard associations based on the types of questions they argued college students in information-intensive courses are asked to answer on exams. The first category of test items involves verbatim definitions for retention questions, the second accurate paraphrases for comprehension questions, and the third realistic examples for application questions. Their research also investigated the value of teaching students to use the three response types in comparison to simply requesting they include these three response types.

The issue of whether students who use a study technique (e.g., Cornell notes, highlighting) are ever taught how to use the strategy (and why it might be important to apply it in a specific way) has always been something I have thought was important.

The Senzaki and colleagues research found their templated flashcard approach to be beneficial, and I could not help seeing how the Flashcard Deluxe tool I described in my first flashcard post was designed to allow three possible “back sides” for a digital flashcard. This tool would be a great way to implement this approach; the sketch below shows one way such a card might be structured.
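Here is a minimal sketch of a three-answer card in the spirit of the Senzaki and colleagues template; the field names and sample content are my own illustration, not the researchers’ materials or the Flashcard Deluxe format.

```python
from dataclasses import dataclass

@dataclass
class ThreeAnswerCard:
    term: str
    definition: str   # verbatim, for retention questions
    paraphrase: str   # in your own words, for comprehension questions
    example: str      # realistic example, for application questions

card = ThreeAnswerCard(
    term="Negative reinforcement",
    definition="Removing an aversive stimulus to increase a behavior.",
    paraphrase="Doing something more often because it makes an unpleasant thing stop.",
    example="Buckling a seatbelt to silence the warning chime.",
)
```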

AI and Flashcards

So, while learner-generated flashcards offer an advantage, I started to wonder about AI and was not surprised to find that AI-generation capabilities are already touted by companies providing flashcard tools. This led me to wonder what would happen if I asked the AI tools I use (ChatGPT and NotebookLM) to generate flashcards. One difference I was interested in was asking ChatGPT to create flashcards over topics while asking NotebookLM to generate flashcards focused on a source I provided. I got both approaches to work. Both systems would generate front and back card text I could easily transfer to a flashcard tool. I decided some of the content would not be particularly useful, but there were plenty of front/back examples I thought would be useful.

The following image shows a ChatGPT response to a request to generate flashcards about mitosis.

This use of AI relied on NotebookLM to generate flashcards based on a chapter I asked it to use as a source.

This type of output could be used to augment learner-generated cards or to generate individual cards a learner might extend using the Senzaki and colleagues design.
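For anyone who would rather script this kind of generation than work through a chat interface, here is a minimal sketch using the OpenAI Python library; the model name, prompt, and output format are my assumptions, and the returned text would still need review before importing it into a flashcard tool.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

def generate_flashcards(topic, n=5):
    """Ask a chat model for front/back pairs on a topic."""
    prompt = (
        f"Create {n} study flashcards about {topic}. "
        "Format each card as 'Front: ...' and 'Back: ...' on separate lines."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model should work
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(generate_flashcards("mitosis"))
```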

References

Pan, S. C., Zung, I., Imundo, M. N., Zhang, X., & Qiu, Y. (2023). User-generated digital flashcards yield better learning than premade flashcards. Journal of Applied Research in Memory and Cognition, 12(4), 574-588. https://doi.org/10.1037/mac0000083

Ranganath, C. (2024). Why We Remember: Unlocking Memory’s Power to Hold on to What Matters. Doubleday Canada.

Senzaki, S., Hackathorn, J., Appleby, D. C., & Gurung, R. A. (2017). Reinventing flashcards to increase student learning. Psychology Learning & Teaching, 16(3), 353-368.


Writing to Learn Research – Messy

Writing to learn is one of those topics that keeps drawing my attention. I have an interest in what can be done to encourage learning and approach this interest by focusing on external tasks that have the potential to influence the internal cognitive (thinking) behavior of learners. My background in taking this approach is that of an educational psychologist with a cognitive perspective. I have a specific interest in areas such as study behavior, trying to understand what an educator or instructional designer can do to promote experiences that will help learners be more successful. The challenge seems obvious – you cannot learn for someone else, but you may be able to create tasks that, when added to exposure to sources of information, encourage productive “processing” of those experiences. We can ask questions to encourage thinking. We can engage students in discussions that generate thinking through interaction. We can assign tasks that require the use of information. Writing would be an example of such an assigned task.

Writing to Learn

Writing to learn fits with this position of an external task that would seem to encourage certain internal behaviors. To be clear, external tasks cannot control internal behavior. Only the individual learner can control what they think about and how they think about something, but for learners willing to engage with an external activity, that activity may change the likelihood that productive mental behaviors are activated.

I found the summary of the cognitive benefits of writing to learn useful and consistent with my own way of thinking about other learning strategies – external tasks that encourage productive internal behaviors. Writing based on content to be learned requires that the writer generate a personalized concrete representation at the “point of utterance”. I like this expression. To me, it is a clever way of saying that when you stare at the screen or the empty sheet of paper and must fill the void, you can no longer fool yourself – you either generate something or you don’t. You must use what you know, and how you interpret the experiences that supposedly have changed what you know, to produce an external representation.

To produce an external product, you must think about what you already know in a way that brings existing ideas into consciousness (working memory) by following the connections activated by the writing task and newly acquired information. This forces processing that may not have occurred without the external task. Connections between existing knowledge and new information are not necessarily made just because both exist in storage. Using knowledge to write or to perform other acts of application encourages making connections.

Such attempts at integration may or may not be successful. Having something external to consider offers the secondary benefit of forced metacognition. Does what I wrote really make sense? Do the ideas hang together or do I need to rethink what I have said? Does what I have proposed fit with the life experiences (episodic memory) I have had? 

Writing ends up as a generative process that potentially creates understanding and feeds the product of this understanding back into storage.

Graham, Kiuhara, and MacKay (2020)

In carefully evaluating and combining the results of many studies of writing to learn, these researchers intended not only to determine if writing to learn had the intended general benefit but also to use the variability of writing tasks and outcomes across studies to deepen our understanding of how writing to learn encourages learning. Surely, some activities would be more beneficial than others because of the skills and existing knowledge of learners or the specifics of the assigned writing tasks. So, the meta-analysis asks if there is a general effect (Is writing to learn effective?) and, secondarily, whether there are significant moderator variables that may help potential practitioners decide when, with whom, and how to structure writing to learn activities.

The Graham and colleagues’ research focused only on K12 learners. Potential moderator variables included grade level, content area (science, social studies, mathematics), type of writing task (argumentation, informational writing, narrative), and some others. I have a specific interest in argumentation, which is relevant here as a variable differentiating the studies because it requires a deeper level of analysis than, say, a more basic summary of what has been learned.

Overall, the meta-analysis demonstrated a general benefit for writing to learn (effect size = .30). This level of impact is considered on the low end of a moderate effect. Graham and colleagues point out that the various individual studies included in the analysis generated great variability. A number of the studies demonstrated negative outcomes, meaning in those studies the control condition performed better than the group spending time on writing to learn. The authors propose that this variability is informative, as it cannot be assumed that any approach with this label will be productive. The variability also suggests that the moderator variables may reveal important insights.
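For readers unfamiliar with the metric, the effect size in a meta-analysis of this kind is typically a standardized mean difference; assuming the common Cohen’s-d-style formula, it would be computed as:

$$ ES = \frac{\bar{X}_{\text{writing}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}} $$

So an effect size of .30 indicates that, on average, students in the writing-to-learn conditions outperformed controls by about a third of a pooled standard deviation.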

Unfortunately, the moderator variables did not achieve the level of impact necessary to argue for useful insights as to how writing to learn works or who is most likely to be a priority group for this type of activity. Grade level was not significant. The topic area was not significant. The type of writing task was not significant. 

Part of the challenge here is having enough studies focused on a given approach, with enough consistency of outcomes, to allow statistical certainty in arguing for a clear conclusion. Studies that involved taking a position and supporting that position (e.g., argumentation) produced a much larger effect size, but the statistical tests used in the meta-analysis did not reach the level at which a certain outcome could be claimed.

One interesting observation from the study caught my attention. While writing to learn is used more frequently in social studies classrooms, the number of research studies associated with each content area was the smallest for social studies. Think about this. Why? I wonder if the preoccupation of researchers and funding organizations with STEM is responsible.

More research is needed. I know practitioners and the general public get tired of being told this, but what else can you recommend when confronted with the messiness of much educational research? When you take ideas out of carefully controlled laboratories and try to test them in applied settings the results here are fairly typical. Humans left to their own devices as implementers of procedures and reactors to interventions are all over the place. Certainly, the basic carefully controlled research and the general outcome of meta-analysis focused on writing to learn implementation are encouraging, but as the authors suggest the variability in effectiveness means something, and further exploration is warranted.

Reference

Graham, S., Kiuhara, S. A., & MacKay, M. (2020). The effects of writing on learning in science, social studies, and mathematics: A meta-analysis. Review of Educational Research, 90(2), 179-226.


Use EdPuzzle AI to generate study questions

This post allows me to integrate my interests in studying, layering, questions, and using AI as a tutor. I propose that EdPuzzle, a tool for adding (layering) questions and notes to videos, be used as a study tool. EdPuzzle has a new AI feature that allows for the generation and insertion of open-ended and multiple-choice questions. So an educator interested in preparing videos students might watch to prepare for class could prepare a 15-minute mini-lecture, use EdPuzzle to layer questions on this video, and assign the combination of video and questions to students to be viewed before class. Great idea.

The AI capability was added to make the development and inclusion of questions less effortful. Or, the capability could be used to add some questions that educators could embellish with questions of their own. I propose a related, but different approach I think has unique value.

How about, instead of preparing questions for students, allowing students to use the AI generation tool to add questions and answer them themselves or with peers?

Here is where some of my other interests come into play. When you can interact with AI that can be focused on assigned content you are to learn, you are using AI as a tutor. Questions are a part of the tutoring process.

What about studying? Questions have multiple benefits in encouraging productive cognitive behaviors. There is such a thing as a prequestioning effect. Attempting to answer questions before you encounter related material is a way to activate existing knowledge. What do you already know? Maybe you cannot answer many of the questions, but just trying makes you think of what you already know, and this activated knowledge improves understanding as you then process assigned material. Postquestions are a great check on understanding (improving metacognition and directing additional study), and attempting to answer questions involves retrieval practice, sometimes called the testing effect. For most learners, searching memory for information has been shown to improve memory and understanding beyond what just studying external information (e.g., your notes) accomplishes.

I have described EdPuzzle previously; here are some additional comments about the use of the AI question generation tool.

After you have uploaded a video to EdPuzzle, you should encounter the opportunity to edit. You use edit to crop the video and to add notes and questions. The spots to initiate editing and adding questions are shown in the following images. When using AI to add questions, you use Teacher Assist – Add Questions.

After selecting Add Questions, you will be given the option of adding Open ended or Multiple Choice questions. My experience has been that unless your video includes a good deal of narration, the AI will generate more Open Ended than Multiple Choice questions. If you want to emphasize MC questions, you always have the option of adding questions manually.

Responding to a question will look like what you see in the following image. Playing the video will take the student to the point in the video where a question has been inserted and then stop to wait for a response. 


When an incorrect response is given to an MC question, the error will be identified.

EdPuzzle allows layered videos to be assigned to classes/students. 

Anyone can explore EdPuzzle and create a few video lessons at no cost. The pricing structure for other categories of use can be found at the EdPuzzle site. 

One side note: I used a video I created fitting the potential scenario I described of an educator preparing content for student use. However, I had uploaded this video to YouTube. I found it difficult to download this video and finally resorted to the use of ClipGrab. I am unclear why I had this problem, and I understand that “taking” video from some sources can be regarded as a violation of copyright. I know this does not apply in this case, but I did want to mention the issue.

References

Pan, S. C., & Sana, F. (2021). Pretesting versus posttesting: Comparing the pedagogical benefits of errorful generation and retrieval practice. Journal of Experimental Psychology: Applied, 27(2), 237–257.

Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399-435.
