Flashcard Effectiveness

This post is a follow-up to my earlier post promoting digital flashcards as an effective study strategy for learners of all ages. In that post, I suggested that educators have at times opposed rote learning, assuming that strategies such as flashcards promote a shallow form of learning that limits understanding and transfer. While this might appear to be the case because flashcards seem to involve a simple activity, the cognitive mechanisms involved in trying to recall information and in reflecting on the success of such efforts provide a wide variety of benefits.

The benefits of using flashcards in learning and memory can be explained through several cognitive mechanisms:

1. Active Recall: Flashcards engage the brain in active recall, which involves retrieving information from memory without cues (unless the questions are multiple-choice). This process strengthens the memory trace and increases the likelihood of recalling the information later. Active recall is now more frequently described as retrieval practice, and its benefits as the testing effect. Hypothesized explanations for why efforts to recall, even unsuccessful ones, are associated not only with greater future recall but also with broader benefits such as understanding and transfer offer a counter to the concern that improving memory is necessarily a focus on rote. More on this at a later point.

2. Spaced Repetition: When used systematically, flashcards can facilitate spaced repetition, a technique where information is reviewed at increasing intervals (see the first sketch after this list). This strengthens memory retention by exploiting the psychological spacing effect, which suggests that information is more easily recalled if learning sessions are spaced out over time rather than crammed into a short period.

3. Metacognition: Flashcards help learners assess their understanding and knowledge gaps. Learners often have a flawed sense of what they actually understand. As learners test themselves with flashcards, they become more aware of what they know and what they need to focus on, leading to better self-regulation in learning.

4. Interleaving: Flashcards can be used to mix different topics or types of problems in a single study session (interleaving), as opposed to studying one type of problem at a time (blocking); see the second sketch after this list. Interleaving has been shown to improve discrimination between concepts and enhance problem-solving skills.

5. Generative Processing: One way of describing generative learning is as external activities that encourage helpful cognitive behaviors. Responding to questions, and even creating questions, have been extensively studied and demonstrate achievement benefits.
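For readers who want to see the mechanics, here is a minimal sketch of a scheduler in the spirit of the classic Leitner box system. The intervals, class, and function names are my own illustration, not taken from any particular flashcard product.

```python
from datetime import date, timedelta

# Review intervals (in days) for each Leitner box: correct answers
# promote a card toward longer intervals, misses demote it to box 0.
INTERVALS = [1, 2, 4, 8, 16]

class Card:
    def __init__(self, front, back):
        self.front, self.back = front, back
        self.box = 0
        self.due = date.today()

    def review(self, correct: bool):
        # Promote on success (capped at the last box), restart on failure,
        # then schedule the next review at the box's interval.
        self.box = min(self.box + 1, len(INTERVALS) - 1) if correct else 0
        self.due = date.today() + timedelta(days=INTERVALS[self.box])

def due_today(deck):
    # Only cards whose interval has elapsed are reviewed in a session.
    return [card for card in deck if card.due <= date.today()]
```

The scheduling difference behind interleaving is just as easy to express. In this second sketch (again my own illustration), a blocked session exhausts one topic before the next, while an interleaved session alternates across topics:

```python
import random
from itertools import chain, zip_longest

def blocked(decks):
    # Study each topic's cards to completion before moving on.
    return list(chain.from_iterable(decks))

def interleaved(decks):
    # Alternate one card at a time across topics, shuffling within
    # each topic first so the order is not predictable.
    shuffled = [random.sample(deck, len(deck)) for deck in decks]
    mixed = chain.from_iterable(zip_longest(*shuffled))
    return [card for card in mixed if card is not None]

biology = ["mitosis", "meiosis", "cytokinesis"]
history = ["Versailles", "Reconstruction"]
print(interleaved([biology, history]))
```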

Several of these techniques may contribute to the same cognitive advantage. These methods (interleaving, spaced repetition, recall rather than recognition) increase the demands of memory retrieval, and greater demands force a learner to move beyond rote. Learners must search for the ideas they want, and effortful search activates related information that may provide a link to what they are looking for. An increasing number of possibly related ideas becomes available within the same time frame, allowing new connections to be made. Connections can be thought of as understanding and, in some cases, creativity.

This idea of the contribution of challenge to learning can be identified in several different theoretical perspectives. For example, Vygotsky proposed the concept of a Zone of Proximal Development, which positions ideal instruction as challenging learners a bit above their present level of functioning, but within what a learner could take on with a reasonable chance of understanding. A more recent but similar concept proposing the benefits of desirable difficulty came to my attention as the explanation given for why taking notes on paper was superior to taking notes using a keyboard. The proposal was that keyboarding is too efficient; the slower pace of handwriting forces learners to think more carefully about what they want to record. Deeper thought was required when the task was more challenging.

Finally, I have been exploring the work of researchers studying the biological mechanisms responsible for learning. Like anyone with practical limits on their time, I don’t spend a lot of time reviewing the work done in this area. I understand that memory is a biological phenomenon and that cognitive psychologists do not focus on this more fundamental level, but I have also yet to find insights from biological research that required me to think differently about how memory happens. Anyway, a recent book (Ranganath, 2024) proposes something called error-driven learning. The researcher eventually backs away a bit from this phrase, suggesting that it does not require you to make a mistake but happens whenever you struggle to recall.

The researcher proposes that the hippocampus enables us to “index” memories for different events according to when and where they happened, not according to what happened. The hippocampus generates episodic memories by associating a memory with a specific place and time. As to why changes in context over time matter, memories stored in this fashion become more difficult to retrieve. Activating memories with spaced practice creates an effortful and more error-prone retrieval, but when successful it offers a different context connection. So, spacing potentially offers different context links because different information tends to be active in different locations and times (note that information other than what is being studied would be active), and it involves retrieval practice, as greater difficulty involves more active processing and exploration of additional associations. I am adding concepts such as spacing and retrieval practice from my cognitive perspective, but I think these concepts fit very well with Ranganath’s description of “struggling”.

I have used the term episodic memory in a slightly different way. However, the way Ranganath describes changing contexts over time seems useful as an explanation for what has long been appreciated as the benefit of spaced repetition in the development of long-term retention and understanding.

When I taught memory issues in educational psychology, I described the difference between episodic and declarative memories as similar to the difference between students’ memory for a story and their memory for facts or concepts. I proposed that studying, especially trying to convert the language and examples of the input (what they read or heard in class) into their own way of understanding with personal examples that were not part of the original content, was something like converting episodic representations (stories) into declarative representations linked to relevant personal episodic elements (students’ own stories). This is not an exact representation of human cognition in several ways. For example, even our stories are not exact; they are biased by past and future experiences and can change with retelling. However, it is useful as a way to develop what might be described as understanding.

So, to summarize, memory tasks, even seemingly simple ones such as basic factual flashcards, can introduce a variety of factors conducive to a wide range of cognitive outcomes. The assumption that flashcards are useful only for rote memory is flawed.

Flashcard Research 

There is considerably more research on the impact of flashcards than I realized, including some recent studies that are specific to digital flashcards.

Self-constructed or provided flashcards – When I was still teaching, the college students I saw using flashcards were obviously using paper flashcards they had created. My previous post focused on flashcard tools for digital devices. As part of that post, I referenced sources for flashcards prepared by textbook companies and topical sets prepared by other educators and offered for use. I was reading a study comparing premade versus learner-created flashcards (description to follow) and learned that college students are now more likely to use flashcards created by others. I suppose this makes sense considering how easy digital flashcard collections are to share. The question, then, is whether questions you create yourself are better than a provided collection that covers the material you are expected to learn.

Pan and colleagues (2023) asked this question and sought to answer it in several studies with college students. One of the issues they raised was the time required to create flashcards. They controlled the time available for the treatment conditions, with some participants having to create flashcards during the fixed amount of time allocated for study. Note that this focus on time is similar to retrieval practice studies in which some participants spent part of the study phase responding to test items while others were allowed to study as they liked. The researchers also conducted studies in which the flashcard group created flashcards in different ways: transcription (typing the exact content from the study material), summarization, and copying and pasting. The situation investigated here seems similar to note-taking studies comparing learner-generated notes and expert notes (quality notes provided to learners). With both types of research, one might imagine a generative benefit to learners in creating the study material as well as a completeness/quality issue. The researchers did not frame their research in this way, but these would be alternative factors that might matter.

The results indicated that self-generated flashcards were superior. The researchers also found that copy-and-paste flashcards were effective, which surprised me; I wonder if the short time allowed may have been a factor. At least one can imagine using copy and paste as a quick way to create flashcards with the tool I described in my previous flashcard post.

Three-answer technique – Senzaki and colleagues (2017) evaluated a flashcard technique focused on expanding the types of associations used in flashcards. They proposed their types of flashcard associations based on the types of questions they argued college students in information-intensive courses are asked to answer on exams. The first category is verbatim definitions for retention questions, the second accurate paraphrases for comprehension questions, and the third realistic examples for application questions. Their research also investigated the value of teaching students to use the three response types in comparison to simply requesting that they include them.
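A card built this way carries three distinct “backs.” Here is a minimal data-structure sketch; the field names and the sample content are my own, as the Senzaki paper describes the technique in prose rather than prescribing an implementation.

```python
from dataclasses import dataclass

@dataclass
class ThreeAnswerCard:
    term: str          # front of the card
    definition: str    # verbatim definition -> retention questions
    paraphrase: str    # accurate restatement -> comprehension questions
    example: str       # realistic example -> application questions

card = ThreeAnswerCard(
    term="Negative reinforcement",
    definition="Removing an aversive stimulus to increase a behavior.",
    paraphrase="A behavior becomes more likely because it stops something unpleasant.",
    example="Buckling a seatbelt to silence the warning chime.",
)
```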

The issue of whether students who use a study technique (e.g., Cornell notes, highlighting) are ever taught how to use the strategy (and why it might be important to apply it in a specific way) has always been something I have thought was important.

The Senzaki and colleagues research found their templated flashcard approach to be beneficial, and I could not help noticing that the Flashcard Deluxe tool I described in my first flashcard post was designed to allow three possible “back sides” for a digital flashcard. This tool would be a great way to implement the approach.

AI and Flashcards

So, while learner-generated flashcards offer an advantage, I started to wonder about AI and was not surprised to find that AI-generation capabilities are already touted by companies providing flashcard tools. This led me to wonder what would happen if I asked the AI tools I use (ChatGPT and NotebookLM) to generate flashcards. One difference I was interested in was asking ChatGPT to create flashcards about topics and NotebookLM to generate flashcards focused on a source I provided. I got both approaches to work. Both systems would generate front and back card text I could easily transfer to a flashcard tool. I decided some of the content would not be particularly useful, but there were plenty of front/back examples I thought would be.

The following image shows a ChatGPT response to a request to generate flashcards about mitosis.

The next example used NotebookLM to generate flashcards based on a chapter I asked it to use as a source.
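For those who would rather script this step than work in a chat interface, the same request can be made through the OpenAI Python library. This is a minimal sketch: the prompt wording and model choice are my own, and the output still needs the human screening I described above.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Create 5 study flashcards about mitosis. "
    "Format each card as 'FRONT: ...' and 'BACK: ...' on separate lines."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever you use
    messages=[{"role": "user", "content": prompt}],
)

# The reply is plain text that can be pasted into a flashcard tool
# or parsed into front/back pairs for import.
print(response.choices[0].message.content)
```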

This type of output could also be used to augment learner-generated cards or could be used to generate individual cards a learner might extend using the Senzaki and colleagues design.

References

Pan, S. C., Zung, I., Imundo, M. N., Zhang, X., & Qiu, Y. (2023). User-generated digital flashcards yield better learning than premade flashcards. Journal of Applied Research in Memory and Cognition, 12(4), 574–588. https://doi.org/10.1037/mac0000083

Ranganath, C. (2024). Why We Remember: Unlocking Memory’s Power to Hold on to What Matters. Doubleday Canada.

Senzaki, S., Hackathorn, J., Appleby, D. C., & Gurung, R. A. (2017). Reinventing flashcards to increase student learning. Psychology Learning & Teaching, 16(3), 353-368.


Writing to Learn Research – Messy

Writing to learn is one of those topics that keeps drawing my attention. I have an interest in what can be done to encourage learning, and I approach this interest by focusing on external tasks that have the potential to influence the internal cognitive (thinking) behavior of learners. I take this perspective as an educational psychologist with a cognitive orientation. I have a specific interest in areas such as study behavior, trying to understand what an educator or instructional designer can do to promote experiences that will help learners be more successful. The challenge seems obvious – you cannot learn for someone else, but you may be able to create tasks that, when added to exposure to sources of information, encourage productive “processing” of those experiences. We can ask questions to encourage thinking. We can engage students in discussions that generate thinking through interaction. We can assign tasks that require the use of information. Writing would be an example of such an assigned task.

Writing to Learn

Writing to learn fits with this position of an external task that would seem to encourage certain internal behaviors. To be clear, external tasks cannot control internal behavior. Only the individual learner can control what they think about and how they think about something, but for learners willing to engage with an external activity, that activity may change the likelihood that productive mental behaviors are activated.

I found the summary of the cognitive benefits of writing to learn useful and consistent with my own way of thinking about other learning strategies – external tasks that encourage productive internal behaviors. Writing based on content to be learned requires that the writer generate a personalized concrete representation at the “point of utterance”. I like this expression. To me, it is a clever way of saying that when you stare at the screen or the empty sheet of paper and must fill the void, you can no longer fool yourself – you either generate something or you don’t. You must use what you know, and how you interpret the experiences that supposedly have changed what you know, to produce an external representation.

To produce an external product, you must think about what you already know in a way that brings existing ideas into consciousness (working memory) by following the connections activated by the writing task and newly acquired information. This forces processing that may not have occurred without the external task. Connections between existing knowledge and new information are not necessarily made just because both exist in storage. Using knowledge to write or to perform other acts of application encourages making connections.

Such attempts at integration may or may not be successful. Having something external to consider offers the secondary benefit of forced metacognition. Does what I wrote really make sense? Do the ideas hang together or do I need to rethink what I have said? Does what I have proposed fit with the life experiences (episodic memory) I have had? 

Writing ends up as a generative process that potentially creates understanding and feeds the product of this understanding back into storage.

Graham, Kiuhara, & MacKay (2020)

In carefully evaluating and combining the results of many studies of writing to learn, these researchers intended not only to determine whether writing to learn had the intended general benefit but also to use the variability of writing tasks and outcomes across studies to deepen our understanding of how writing to learn encourages learning. Surely, some activities would be more beneficial than others because of the skills and existing knowledge of learners or the specifics of the assigned writing tasks. So, the meta-analysis asks whether there is a general effect (is writing to learn effective?) and, secondarily, whether there are significant moderator variables that may help practitioners decide when, with whom, and how to structure writing to learn activities.

The Graham and colleagues’ research focused only on K12 learners. Potential moderator variables included grade level, content area (science, social studies, mathematics), type of writing task (argumentation, informational writing, narrative), and some others. I have a specific interest in argumentation, which is relevant here as a variable differentiating the studies because it requires a deeper level of analysis than, say, a more basic summary of what has been learned.

Overall, the meta-analysis demonstrated a general benefit for writing to learn (effect size = .30). This level of impact is considered on the low end of a moderate effect. Graham and colleagues point out that the individual studies included in the analysis showed great variability. A number of the studies demonstrated negative outcomes, meaning that in those studies the control condition performed better than the group spending time on writing to learn. The authors propose that this variability is informative, as it cannot be assumed that any approach with this label will be productive. The variability also suggests that the moderator variables may reveal important insights.
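For readers unfamiliar with the metric, effect sizes of this kind are standardized mean differences (Cohen’s d and its relatives): the treatment-control difference divided by a pooled standard deviation. A quick illustration with hypothetical numbers of my own, not values from the meta-analysis:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    # Standardized mean difference: group difference over the pooled SD.
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# A 3-point advantage when both groups have SD = 10 yields d = 0.3
print(round(cohens_d(75, 72, 10, 10, 50, 50), 2))
```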

Unfortunately, the moderator variables did not achieve the level of impact necessary to argue for useful insights as to how writing to learn works or who is most likely to be a priority group for this type of activity. Grade level was not significant. The topic area was not significant. The type of writing task was not significant. 

Part of the challenge here is having enough studies focused on a given approach, with enough consistency of outcomes, to allow statistical certainty in arguing for a clear conclusion. Studies that involved taking a position and supporting that position (e.g., argumentation) produced a much larger effect size, but the statistical tests used in the meta-analysis did not reach the level at which a reliable conclusion could be claimed.

One interesting observation from the study caught my attention. While writing to learn is used more frequently in social studies classrooms, the number of research studies associated with each content area was smallest for social studies. Think about this. Why? I wonder if the preoccupation of researchers and funding organizations with STEM is responsible.

More research is needed. I know practitioners and the general public get tired of being told this, but what else can you recommend when confronted with the messiness of much educational research? When you take ideas out of carefully controlled laboratories and try to test them in applied settings, results like these are fairly typical. Humans left to their own devices as implementers of procedures and reactors to interventions are all over the place. Certainly, the basic carefully controlled research and the general outcome of the meta-analysis focused on writing to learn implementation are encouraging, but as the authors suggest, the variability in effectiveness means something, and further exploration is warranted.

Reference

Graham, S., Kiuhara, S. A., & MacKay, M. (2020). The effects of writing on learning in science, social studies, and mathematics: A meta-analysis. Review of Educational Research, 90(2), 179-226.


Use EdPuzzle AI to generate study questions

This post allows me to integrate my interests in studying, layering, questions, and using AI as a tutor. I propose that EdPuzzle, a tool for adding (layering) questions and notes to videos, be used as a study tool. EdPuzzle has a new AI feature that allows for the generation and insertion of open-ended and multiple-choice questions. So an educator interested in preparing videos students might watch to prepare for class could prepare a 15-minute mini-lecture, use EdPuzzle to layer questions on this video, and assign the combination of video and questions to students to be viewed before class. Great idea.

The AI capability was added to make the development and inclusion of questions less effortful. Or, the capability could be used to add some questions that educators could embellish with questions of their own. I propose a related but different approach I think has unique value.

How about, instead of preparing questions for students, allowing students to use the AI generation tool to add questions and answer them themselves or with peers?

Here is where some of my other interests come into play. When you can interact with AI that can be focused on assigned content you are to learn, you are using AI as a tutor. Questions are a part of the tutoring process.

What about studying? Questions have multiple benefits in encouraging productive cognitive behaviors. There is such a thing as a prequestioning effect. Attempting to answer questions before you encounter related material is a way to activate existing knowledge. What do you already know? Maybe you cannot answer many of the questions, but just trying makes you think of what you already know, and this activated knowledge improves understanding as you then process assigned material. Postquestions are a great check on understanding (improving metacognition and directing additional study), and attempting to answer questions involves retrieval practice, sometimes called the testing effect. For most learners, searching memory for information has been proven to improve memory and understanding beyond what just studying external information (e.g., your notes) accomplishes.

I have described EdPuzzle previously; here are some additional comments about the use of the generative question tool.

After you have uploaded a video to EdPuzzle, you should encounter the opportunity to edit. You use edit to crop the video and to add notes and questions. The spots to initiate editing and adding questions are shown in the following images. When using AI to add questions, you use Teacher Assist – Add Questions.

After selecting Add Questions, you will be given the option of adding open-ended or multiple-choice questions. My experience has been that unless your video includes a good deal of narration, the AI will generate more open-ended than multiple-choice questions. If you want to emphasize multiple-choice questions, you always have the option of adding questions manually.

Responding to a question will look like what you see in the following image. Playing the video will take the student to the point in the video where a question has been inserted and then stop to wait for a response. 


When an incorrect response is given to a multiple-choice question, the error will be identified.

EdPuzzle allows layered videos to be assigned to classes/students. 

Anyone can explore EdPuzzle and create a few video lessons at no cost. The pricing structure for other categories of use can be found at the EdPuzzle site. 

One side note: I used a video I created fitting the potential scenario I described of an educator preparing content for student use. However, I had uploaded this video to YouTube. I found it difficult to download this video and finally resorted to the use of ClipGrab. I am unclear why I had this problem, and I understand that “taking” video from some sources can be regarded as a violation of copyright. I know this does not apply in this case, but I did want to mention the issue.

References

Pan, S. C., & Sana, F. (2021). Pretesting versus posttesting: Comparing the pedagogical benefits of errorful generation and retrieval practice. Journal of Experimental Psychology: Applied, 27(2), 237–257.

Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399-435.


Content focused AI for tutoring

My explorations of AI use to this point have resulted in a focus on two applications – AI as tutor and AI as tool for note exploration. Both uses are based on the ability to focus on information sources I designate rather than allowing the AI service to rely on its own body of information. I see the use of AI to interact with the body of notes I have created as a way to inform my writing. My interest in AI tutoring is more related to imagining how AI could be useful to individual students as they study assigned content.

I have found that I must use different AI services for these different interests. The reason is that two of the most popular services (NotebookLM and OpenAI’s Custom GPTs) limit the number of inputs that can be accessed. I had hoped that I could point these services at a folder of notes (e.g., Obsidian files) and then interact with this body of content. However, both services presently allow only a small number of individual files (10, and perhaps 20) to be designated as source material. This is not about the amount of content, as the focus of this post involves using these two services to interact with a single file of 27,000 words. I assume that in a year the number of files will be less of an issue.

So, this post will explore the use of AI as a tutor applied to assigned content, as a secondary or higher ed student might want to do. In practice, what I describe here would require that a student have access to a digital version of assigned content not protected in some way. For my explorations, I am using the manuscript of a Kindle book I wrote, before the material was converted to a Kindle book. I wanted to work with a multi-chapter source of a length students might be assigned.

NotebookLM

NotebookLM is a newly released AI service from Google. The AI prompts can be focused on content that is available in Google Drive or uploaded to the service. This service is available at no cost, but it should be understood that this is likely to change when Google is ready to offer a more mature service. Investing time in this service rather than others allows the development of skills and the exploration of potential, but in the long run some costs will be involved.

Once a user opens NotebookLM and creates a notebook (see the red box surrounding New Notebook), external content to be the focus of user prompts can be added (second image). I linked the notebook to the file I used in preparation for creating a Kindle book. Educators could create a notebook on unprotected content they wanted students to study.

The following image summarizes many essential features used when working with NotebookLM. Starting with the right-hand column, the textbox near the bottom (enclosed in a red box) is where prompts are entered. The area above (another red box) provides access to the content used by the service in generating the response to a prompt. The large area on the left-hand side displays the source associated with one of the referenced areas, with the specific content used highlighted.

Access to a notebook can be shared and this would be the way an educator would provide students access to a notebook prepared for their use. In the image below, you will note the icon (at the top) used to share content, and when this icon is selected, a textbox for entering emails for individuals (or for a class if already prepared) appears.

Custom GPTs (OpenAI)

Once you have subscribed to the monthly payment plan for ChatGPT-4, accessing the service will bring up a page with the display shown below. The page allows access to ChatGPT and to any custom GPTs you have created. To create a custom GPT, you select Explore and then select Create a GPT. Describing the process of creating a GPT would require more space than I want to use in this post, but the process might best be described as conversational. You basically interact by describing what you are trying to create, and you upload external resources if you want prompts to be focused on specific content. Book Mentor is the custom GPT I created for this demonstration.

Once created, a GPT is used very much in the same way a NotebookLM notebook is used. You use the prompt box to interact with the content associated with that GPT.

What follows are some samples of my interactions with the content. You should be able to see the prompt (Why is the word layering used to describe what the designer does to add value to an information source?)

Prompts can generate all kinds of ways of interacting (see the section below that describes what some of these interactions might be). One type I think has value in using AI as a tutor is to have the service ask you a question. An example of this approach is displayed in the following two images. The first image shows a request for the service to generate a multiple-choice question about generative activity, to which I then respond (correctly) and receive feedback. The second image shows the flexibility of the AI. When responding to the question, I thought a couple of the responses could be correct. After I answered the question and received feedback, I asked about an answer I did not select, wondering why this option could not also be considered correct. As you see in the AI reply, the system understands my issue and acknowledges how the option might be correct. This seems very impressive to me and demonstrates that interaction with the AI system allows opportunities that go beyond self-questioning.

Using AI as tutor

I have written previously about the potential of AI services to interact with learners to mimic some of the ways a tutor might work with a learner. I make no claims of equivalence here. I am proposing only that tutors are often not available and an AI system can challenge a learner in many ways that are similar to what a human tutor would do. 

Here are some specific suggestions for how AI can be used in the role of tutor

Summary

This post describes two systems now available that allow learners to work with assigned content in ways that mimic how a tutor might work with a student. Both systems would allow a designer to create a tool focused on specific content that can be shared. ChatGPT custom GPTs require that those using a shared GPT have an active $20 per month account, which probably means this approach would not presently be feasible for common application. Google’s notebooks can be created at no cost to the designer or user, but this will likely change when Google decides the service is beyond the experimental stage. Perhaps the capability will be included in present services designed for educational situations.

While I recognize that cost is a significant issue, my intent here is to propose services that can be explored as proof of concept, so that educators interested in AI opportunities might explore future productive classroom applications.


Evaluating tech tools for adult learning

I feel comfortable writing about learning in educational environments. I have reviewed many instructional and learning strategies, read applied studies intended to evaluate the efficacy of these strategies, and read a substantial amount of the basic cognitive research potentially explaining the why of the applied investigations. In a small way, I have contributed to some of this research. 

As my life circumstances have changed, I have begun exploring related, but unfamiliar topics. In retirement, I am by definition no longer playing an active role as a salaried educator or researcher. I retain the opportunity to access the scholarly literature as an emeritus faculty member, but I can no longer engage as a researcher. These changes led to a different perspective. I have become more interested in other folks like me who are still interested in learning and how they go about responding to such interests. 

As I have contemplated this situation, it has become clear that this situation is not a matter of age. While it was very important for me to constantly learn while I was working, I don’t think I spent much time considering how I should best go about it. There was work to be done and despite my own focus on education, I did little to consider the strategies of my own learning.

I began to think more deeply about self-directed learning, adult learning, or whatever else might be the current way to describe this situation when I began participating in a book club that has as one interest Personal Knowledge Management (PKM) and the technology tools that can be applied when committed to implementing this concept. For those who are unfamiliar with PKM, one way to gain insight would be to read a couple of the self-help books explaining views on this topic and describing techniques argued to be useful in achieving goals consistent with the general idea of Personal Knowledge Management.

Sonke Ahrens How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking 

Tiago Forte Building a Second Brain: A Proven Method to Organize Your Digital Life and Unlock Your Creative Potential 

There is plenty of specific information available from such books and other online resources regarding how to study topics for understanding and retention. It is easy to locate tutorials for online services and apps to implement these strategies. There seem to be hundreds of posts on Medium, Substack, and YouTube with titles like “My Obsidian Workflow”, “I Switched From OneNote to Notion and Can’t Believe My New Productivity”, and “All of the Notetaking Apps in One Post”. There must be something people want to understand and evaluate here. When I dig deeper, there are some logical arguments proposed to justify techniques digital tools enable, such as the creation of permanent and atomic notes, linking notes, and progressive summarization, and I can sometimes associate these techniques with cognitive concepts I know, such as generative learning, spaced repetition, and retrieval practice.

What I finally decided I was missing was the type of applied research I found readily available when specific study techniques are proposed for classroom use. Learning and studying over time is not really what is studied in K12 and postsecondary education. What students know is measured over time, but how different methods of study influence the development of skills and knowledge is not frequently examined. Differences in what students can do on the next exam or at the end of a course are typically the focus. This seems different from the goal of evaluating learner-guided activities to develop knowledge and skills over many years.

The time frame is not the only difference. Some of the strategies for school and adult independent note-taking are similar on the surface but different enough to warrant additional research. Note-taking, sometimes described as note-making to differentiate the processes by advocates of some PKM methods, is a good example. In the Smart Note approach, isolating specific concepts as individual notes written with enough context to be interpretable over time, and then connecting these notes to other notes by way of multiple links, is quite different from how students take and make use of notes (a minimal sketch of this structure follows). The note-taking tools are different, the goals are different, and the mechanisms of creating and then acting on the written record are different. I want to know if the mechanics of these differences are actually useful. Controlled comparisons would be interesting, but so would studies examining how adults familiar with these approaches make use of what the tools allow over time, if they actually do. Do learners working for their own purposes stick with the logic proposed for the use of a learning tool, or do they modify the ideal approach into something simpler and less cognitively demanding? Formal research methods have proven useful in understanding study strategies proposed for classroom-associated use and should be applied to evaluating self-directed adult learning as well.
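To make the mechanical difference concrete, here is a minimal sketch of the atomic-note-plus-links structure; the identifiers and fields are my own illustration of the general idea, which tools such as Obsidian implement with wiki-style links between files.

```python
class Note:
    def __init__(self, note_id, text):
        self.id = note_id     # stable identifier, e.g., a timestamp
        self.text = text      # one self-contained idea, with context
        self.links = set()    # ids of related notes

class Zettelkasten:
    def __init__(self):
        self.notes = {}

    def add(self, note_id, text, links=()):
        note = Note(note_id, text)
        note.links.update(links)
        self.notes[note_id] = note
        return note

    def neighbors(self, note_id):
        # Following links from note to note over time is what is
        # supposed to surface unexpected combinations of ideas.
        return [self.notes[i] for i in self.notes[note_id].links
                if i in self.notes]

zk = Zettelkasten()
zk.add("202401011200", "Retrieval practice strengthens memory traces.")
zk.add("202401011205", "Spacing makes retrieval effortful and context-varied.",
       links={"202401011200"})
```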

I don’t think much if any of the type of formal research I propose exists. At least, I have not been able to locate this work. Maybe the payoff for such effort just is not there. Maybe there is a lack of grant support to fund academic research, but we academics are still interested in topics that seldom bring funding. There is a payoff available to those who develop tools and services in the form of subscriptions and for those writing self-help books that attract attention in the form of sales. 

As I consider what it would take to work on these topics, I can imagine the challenges researchers would face. How would you collect data and how would you assure privacy when the tools used are often associated with work? How would you get individuals to participate in studies? What would individuals be willing to provide if you wanted to evaluate the effectiveness of the technique employed? I at least would hope individuals might be willing to provide information about the tools they used, how long they have used these tools, and how they have used the tools and perhaps changed their patterns of use over time. 

Adults continually have learning tasks to keep up with vocational demands and for personal growth. Given rapid advancements in so many areas and so many information sources, learning and learning to learn using technology would seem of increasing value. Perhaps by explaining my observations I can interest those still involved as active researchers. It is also possible I am missing a body of research that would address my interests. If this is the case, I would welcome suggestions.


Design learning experiences using generative activities – Layering

I have written multiple posts explaining generative activities and how such external activities encourage productive cognitive behaviors. Some of these posts describe specific classroom applications of individual generative tasks. In this post, I intend to describe how educators can apply some of these generative activities when they assign web content (pages or videos).

In many cases, online content assigned in K12 classrooms was not prepared as instructional content. For example, an article from Scientific American might offer information relevant to a specific standard addressed in sophomore biology. What activities might an instructor add to help learners understand, remember, and possibly apply concepts within this article? A textbook, for example, would likely have activities inserted at the end of a chapter, added as boxes within content, or recommended in a teacher’s manual. Instructors often make additions as class assignments. What I am supporting here is similar to what educational researchers have described as adjunct questions. These were originally questions added within instructional texts or attached at the end of such texts. Embedded activities play different roles than even the same activities might play when delayed and isolated from the informative content. My argument is that there is a difference between information and instructional content, and attaching generative learning activities is a way to make this transition.

A couple of years ago I became interested in a group of online services that were developed to improve the educational value of online content (web pages and videos). I developed my own way of describing what these services were designed to accomplish. Layering seemed a reasonable description because, for ethical and legal reasons, these services could not actually modify the content originally shared by content creators. What a layering service could do was take the feed from the creator’s service and add elements on top of that content. Elements were additions that could encourage important cognitive behaviors in a learner.

With a layering service, the content a learner encounters is a combination of the content from the content creator and additions layered on this content. Two sources and servers are involved. From the perspective of a designer, a layering service works by accepting the URL for a web page or video from the designer and then allowing the designer to add elements that appear within or on top of the content from the designated source. The layering service sends this combination to the learner; the original document is unchanged and is still downloaded from the creator’s server each time the combination of original and layered content is requested. Ads still appear, and the content server still records the download to give the creator credit. The layering service generates a link provided to learners and recreates the composite of content and designer additions each time a learner uses that link.
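In data terms, a layered lesson can be nothing more than a pointer to the original URL plus a list of positioned elements, with the original fetched fresh on every request. A minimal sketch of that record follows; the field names are mine, not any particular service’s schema.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    position: float  # seconds into a video, or an offset within a page
    kind: str        # "note", "question", "discussion", "highlight"
    payload: dict    # question text, answer choices, note body, etc.

@dataclass
class LayeredLesson:
    source_url: str  # always fetched from the creator's server
    elements: list = field(default_factory=list)

    def render(self, fetch):
        # The original is downloaded on every request, so the creator's
        # server still logs the view and serves its ads; the layering
        # service merges in the overlay elements at display time.
        original = fetch(self.source_url)
        return original, sorted(self.elements, key=lambda e: e.position)
```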

Questions are my favorite example of an external activity that can be added to encourage a variety of important thinking (internal) behaviors. For example, if you want a learner to link a new concept to everyday experiences the concept is useful in understanding, you might ask the learner to provide examples that show the application of the concept. Many learners may do this without the question, but the question increases the likelihood more learners will work to identify such connections with their existing experiences. Those who think about instruction in this way may describe what they are doing as designing instruction. I offer an extended description of generative activity in a previous post. 

Depending on the specific service, the elements offered by the layering services I am aware of include annotations, highlighting, questions, and discussion prompts. Annotations could include additional material such as examples, translations, or instructions. Questions could be open-ended or multiple-choice. A few of these elements (highlights and annotations) could also be added by the learner, so elements available to the designer could be used to encourage specific use of the elements available to students.

My personal interest in promoting layering services is to encourage the use of services that allow educators, educational content designers, and learners to work with this content to provide more effective learning resources and more generative learning experiences. In addition, content creators have a right to assume that their server will be contacted each time content is requested and that inclusions such as ads are displayed. The expectations of the content creator are not ignored when using a layering service.

I have identified several services that meet my definition of a layering service. Here, I will describe one service focused on web pages and one focused on video. Other examples can be explored from the page linked above, and I assume others exist that I have not identified. Services are constantly being updated, but I have just worked with the two examples I describe here, and this information should be current as of the uploading of this post.

Insert Learning

Insert Learning is my best example of the services promoted here. I say this because it offers the most generative options, and those options are part of an environment allowing an educator to create multiple lessons, assign these lessons to members of multiple classes, and record data on student completion of some of the types of activity involved in individual lessons.

The following image should give you some idea of how this works. Down the left border of the image, you see a menu of icons allowing the designer to select highlight, note, question, and discussion. Highlight and note work as one probably expects. When the icon is selected, text can be highlighted by the designer or learner. The note icon adds what appear as Post-it notes allowing the inclusion of text, links, images, video, and whatever else works as an embed. The question icon adds questions, either multiple-choice as appears in the image or open-ended. The discussion icon appears very much like an open-ended question but accumulates and displays responses from multiple learners to a prompt.

As I said, Insert Learning differentiates itself from many of the other services because the layering component is part of a system that allows the assignment of lessons to individual students organized as classes and also collects responses to questions by lesson and student. The following image shows a couple of responses to an open-ended question. I used Insert Learning in a graduate course I taught in Instructional Design. I made use of several of the tools I presented to students even when the most common use would be in K-12. This image shows how responses to questions would appear in the Grade Book. I could assign a score to a response and this score would then be visible to the student submitting a given response. 

It has been a few years since I used Insert Learning. When I did, I paid $8 a month. I see the price has now increased to $20 a month or $100 for the year. 

EdPuzzle 

EdPuzzle is a service for adding questions and notes to videos. It includes a system for adding these elements, assigning the videos to students, and saving student responses to questions. The following images are small to allow them to be inserted in this post. In the first image, the red box on the right allows the selection of the element to be added – multiple-choice question, open-ended question, or note. The timeline underneath the video (middle) is also enclosed in a red box. As the designer watches the video, clicking one of these buttons stops the video and allows the selected addition to be included. A dot appears below the timeline to indicate where an element has been added. A learner can either play the video, which will stop for a response when one of these inclusions is reached, or select one of the dots to respond. The second image shows the dialog box used to add an open-ended question.

In the video I used in this example, I created a demonstration using Python to run LOGO commands and saved the video to YouTube. Again, this was a demonstration used in a graduate edtech course. Early in the video, I showed and explained the LOGO code. The video then showed the result of running this program.

When using EdPuzzle with this video, I inserted a note asking students to take a pencil and sheet of paper and draw what the LOGO program would create. Near the end of the video, I inserted an open-ended question asking that students explain how Papert’s notion of computational understanding would provide a different way of thinking about the traditional definition of a circle (i.e., a closed plane figure with points equidistant from a center point).

I used the free version of EdPuzzle because I only assigned students a few examples to experience what the service provided. You can do a lot with this service at no cost. The pro-level price is $13.50 per month. EdPuzzle Pricing

Summary – These two examples demonstrate the use of layering services to add generative activities to a web page and a web video. There are similar services available from other companies that generate similar student experiences. The value in such services is the opportunity to design learning experiences containing activities likely to improve understanding and retention.


Cornell Notes and Beyond

While a research assistant at Cornell, Walter Pauk was credited with the development of the Cornell note-taking system. Cornell notes became widely known through Pauk’s popular book “How to Study in College,” first published in 1962 and available through multiple editions. I checked, and Amazon still carries the text.

Pauk’s approach, which can be applied within a traditional notebook, involves dividing a page into two columns, with the right-hand column about twice as wide as the left-hand column, and leaving a space across the bottom of each page for writing a summary. The idea is to take notes during a presentation in the right-hand column and later follow up in the left-hand column (often called the cue column) with questions and other related comments. This second pass is supposed to follow soon after class so that other memories of the presentation are still fresh. The summary section provides a space to add just what it says – a summary of the main ideas.

Pauk explained the proper way to use his system as the five Rs of note-taking. In my experience, the 5 Rs are far less well-known and yet important because they explain how the basic system is to be used. I would organize and explain the 5 Rs as follows.

During class – Record

After class – Reduce

Over time 

Recite (cover notes and see what you can recall based on cues)

Reflect (add your own ideas, extensions)

Review (review all notes each week)

While the Cornell system was designed during a different time and was suited to the technology of the day (paper and pencil), those who promote digital note-taking tools offer suggestions for applying the Cornell structure within the digital environment of the tool they promote. 

Cornell notes within Obsidian 

Cornell notes within Notion 
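The structure itself is easy to reproduce in any Markdown-based tool. Here is a small sketch that emits a blank Cornell-style page; the layout is my own rendering of Pauk’s two columns and summary block, not an official template.

```python
def cornell_template(topic: str, date: str) -> str:
    # Two-column table for cues and notes, plus a summary block,
    # in plain Markdown usable in Obsidian, Notion, or similar tools.
    return "\n".join([
        f"# {topic} ({date})",
        "",
        "| Cues / Questions | Notes |",
        "| --- | --- |",
        "|  |  |",
        "",
        "## Summary",
        "",
    ])

print(cornell_template("Operant conditioning", "2024-03-01"))
```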

When I used to lecture about study skills and study behavior, I explained the Cornell system, but I would preface my presentation with the following questions: How many of you have heard of Cornell notes? The SQ3R system? More had heard of Cornell notes, and a few of SQ3R. I would then ask whether any of them were now using either of these systems to study my presentations or their textbook. Of the thousands of students I asked, I don’t remember anyone ever raising a hand. To test my approach, I also asked whether any students made and studied note cards in their classes. The positive responses here were much more frequent. I tried to get a sense of why, without much luck. I think my data are accurate, and I raise this experience to get you to consider the same question. Students take notes, but don’t have a system.

I think Cornell notes are frequently proposed and taught to younger learners because the design of the note collection environment is simple and easy to describe. I wonder how the process is communicated and, perhaps more importantly, implemented. The structure makes less sense if students are only intending to cram rather than frequently review. Do learners have to “buy in” to the logic, or do they understand the logic but simply lack the motivation to put in the effort? How any method is taught and understood likely has at least some impact on whether suggestions are implemented.

Understanding Cornell Notes at a deeper level

Note-taking has always been a personal interest and my posts have frequently commented on note-taking. I may have mentioned Cornell notes in a few of these posts, but my focus tends to be on a more basic level. If I am describing a system, what about specific components of that system have known cognitive benefits to learners? 

I come to the interpretations of those advocating specific study strategies from a cognitive perspective, trying to analyze those strategies from this perspective. I ask what about a given study strategy seems to make sense given what those who study human cognition have found benefits learning, retention, and transfer (application). What in a given study strategy could be augmented or given additional emphasis based on principles proposed by cognitive researchers? I will now try to apply this strategy to Cornell notes. I don’t know enough about Pauk’s work to know his theoretical perspective when creating this approach. For the most part, the perspective I take in my analysis has followed Pauk’s work, which occurred during the 1950s. Timelines in this regard do not require that research precede practice, but there is a possibility that new research may offer new suggestions.

Topics

My comments will be organized as three topics.

  1. Stages of study behavior – how should the activities intended to benefit learning occur over time? What should be done when?
  2. Generative experiences and a hierarchy of such experiences – My explanation of a generative activity is an external activity intended to encourage a productive cognitive behavior. By hierarchy, I am pointing to research that has attempted to identify more and less effective generative activities and explain what factors are responsible for this ranking.
  3. Retrieval practice / testing effect – Research demonstrates that activities requiring the recall of stored information increase the probability of future recall and also increase understanding. Tests – free recall, cued recall, and recognition tasks – are common, but not the only or necessarily the most effective ways to engage retrieval effort.

Stages of study behavior

My personal interest in note-taking can be traced to the insights of Di Vesta and Gray. These researchers differentiated two functions – encoding and external storage – but these processes were really centered within the stages of taking notes and then reviewing them. Encoding, interpreted more broadly, can occur at multiple points in time, and this is my point in recognizing stages.

Pauk clearly recognized stages of study in proposing that learners function according to the 5Rs. The original notes were to be interpreted, augmented, and reviewed several times between the original recording and the immediate preparation for use. 

Luo and colleagues proposed that notetaking should be imagined as a three-stage process, with a revision or update stage recognized after notetaking and before final preparation for use. In addition to recognizing the importance of following up to improve the original record, these researchers advocated for collaboration with a partner. Students do not take complete notes, and the opportunity to compare notes with others allows for improvements. Research included in the paper points to the percentage of important ideas missed in the notes most students record. The authors propose that lecturers pause during presentations to provide an opportunity for comparison.

This source describes studies with college students using this pause-and-update method. Students were given two colored pens so additions could be identified. The pause-and-improve condition generated a significant achievement advantage (second study). However, the study found no benefit when comparing taking notes with a partner versus alone. The researchers looked at the notes added and found few elaborations.

In an even more recent focus on multiple stages as part of a model for building a second brain, Forte described a process called distillation or progressive summarization. In this process, focused on taking notes from written sources, original content is read using an app that allows the export of highlighted material. This content is first bolded and then highlighted to identify key information (progressive distillation). A summary can then be added. The unique advantage of this approach is keeping all of the layers available. One can function at different levels from the same immediate source and backtrack to a more complete level should it become necessary to recall a broader context or to take what was originally created in a different direction.
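Because every layer is kept, the structure is essentially a passage with annotations at increasing levels of compression. A minimal sketch of that idea follows; this is my own representation, as Forte describes the method in prose rather than code.

```python
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str                                        # layer 0: exported highlight
    bolded: list = field(default_factory=list)       # layer 1: bolded passages
    highlighted: list = field(default_factory=list)  # layer 2: highlights of the bold
    summary: str = ""                                # layer 3: executive summary

    def view(self, layer: int) -> str:
        # Read at the most compressed layer available, while the fuller
        # layers remain one step away if broader context is needed.
        if layer >= 3 and self.summary:
            return self.summary
        if layer >= 2 and self.highlighted:
            return " … ".join(self.highlighted)
        if layer >= 1 and self.bolded:
            return " … ".join(self.bolded)
        return self.text
```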

It is possible to draw parallels here between what the Cornell system allows and what Forte proposes. The capability of reinstating context and addressing information missing from the original notes is also an advantage of the digital recording of an audio input keyed to specific notes as they are taken (see SoundNote). 

Di Vesta, F., & Gray, S. G. (1972). Listening and note taking. Journal of Educational Psychology, 63(1), 8-14.

Forte, T. (2022). Building a second brain: A proven method to organize your digital life and unlock your creative potential. Atria Books.

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: How revision, pauses, and partners affect note taking and achievement. Instructional Science, 44(1), 45-67.

Hierarchy of generative tasks

Again, a generative experience is an external activity intended to encourage productive cognitive activities. These productive activities may occur without any external tasks, and this would be the best situation because there is overhead in implementing external tasks. However, for many learners, and for most learners in some situations, the external tasks prompt cognitive activities that would otherwise be avoided or remain unrecognized as a function of poor metacognition or lack of motivation.

Many tasks initiated by a learner or educator can serve a generative function. Fiorella and Mayer (2016) identified eight general categories that most educators can probably turn into specific tasks. These categories include:

  • Summarizing
  • Mapping
  • Drawing
  • Imagining
  • Self-Testing
  • Self-Explaining
  • Teaching
  • Enacting

Summarization can immediately be identified from this list as being included in the Cornell system. Self-testing would also be involved in the way Pauk described recitation.

What I mean by a hierarchy as applied to generative activities is that some activities are typically more effective than others. 

Chi offers a framework – active-constructive-interactive – to differentiate learning activities in terms of observable overt activities and underlying learning processes. Each stage in the framework incorporates the earlier stage and is assumed to be more productive than that stage.

Active – doing something physical that can be observed. Highlighting would be an example.

Constructive – creating a **product** that extends the input based on **what is already known**. For example, summarization.

Interactive – involves interaction with another person – expert/learner, peers – to produce a product.

One insight from this scheme is that there is a stage beyond what might seem to be the upper limit of the Cornell structure (i.e., summarization). I am tempted to describe this additional level as application or perhaps elaboration. Both terms to me imply using information.  

Chi, M. T. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science, 1(1), 73-105.

Fiorella, L., & Mayer, R. (2016). Eight Ways to Promote Generative Learning. Educational Psychology Review, 28(4), 717-741.

Retrieval Practice

Retrieval practice is a learning technique that involves trying to recall information from memory (see also Roediger & Karpicke). There are several reasons why retrieval practice improves future retrieval and also understanding. First, it forces learners to actively engage with the material, which helps to create stronger connections between the information and existing knowledge. I think of retrieval as searching memory for something connected to what I am trying to find. This makes sense if you understand memory as a web of connections among ideas. The effort to find specific information results in the activation and awareness of other information in order to find a connection to what is desired. Exploratory retrieval not only increases the strength of the connection to the desired information but also involves an exploration of potentially related information, sometimes resulting in new insights.

Second, retrieval practice provides feedback on what has been learned and what needs more attention. This helps learners to identify areas where they need to improve. 
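The web-of-connections idea can be illustrated with a toy sketch: treat memory as a graph and retrieval as a search that activates related nodes along the way. This is my own illustration, not a validated cognitive model.

```python
# Memory as a web of associations; retrieval as spreading activation
# outward from a cue until the target is reached.
MEMORY = {
    "mitosis": {"cell division", "chromosomes"},
    "cell division": {"mitosis", "meiosis", "growth"},
    "chromosomes": {"mitosis", "DNA"},
    "meiosis": {"cell division", "gametes"},
}

def retrieve(cue, target, memory=MEMORY):
    # Breadth-first search: every node visited on the way to the
    # target is "activated" related information.
    frontier, activated = [cue], []
    while frontier:
        node = frontier.pop(0)
        if node in activated:
            continue
        activated.append(node)
        if node == target:
            return activated  # ideas touched during the search
        frontier.extend(memory.get(node, ()))
    return activated

print(retrieve("chromosomes", "meiosis"))
```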

Retrieval practice is sometimes called the testing effect, and asking questions or being asked questions is one way to trigger the search process (e.g., Yang and colleagues). Self-testing is an activity embedded in the way Pauk imagines the use of Cornell notes. I am guessing it is also a reason that making and using flashcards is such a common study strategy.

There are, however, other ways to practice retrieval. Yang and colleagues speculate that retrieval practice plays a role in the proven benefits of a learner teaching and preparing to teach. Teaching represents an important link here to the more productive levels of generative learning (see the previous section). The previously mentioned work of Luo and colleagues recognized the value of collaboration in reviewing notes, and again, the addition of sharing and discussion would represent an important extension of the personal use of any note-taking system.

Koh, A. W. L., Lee, S. C., & Lim, S. W. H. (2018). The learning benefits of teaching: A retrieval practice hypothesis. Applied Cognitive Psychology, 32(3), 401-410.

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: How revision, pauses, and partners affect note taking and achievement. Instructional Science, 44(1), 45-67.

Roediger III, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1(3), 181-210.

Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399-435.

Summary – My effort here was an attempt to cross-reference what might be described as a learning system (Cornell notes) with mechanisms that might explain why the system has proven value and possibly allow the recognition of similar components present in other study systems. In addition, I have tried to emphasize that the components of a system may not be understood and applied in practice. Collaboration was suggested as a way to extend the Cornell system.
