As I have explored several digital note-taking tools and examined the arguments for how such tools produce productivity benefits, I have identified a potential conflict in what produces more positive outcomes. Recognizing this conflict allows tool users to act more purposefully and may better align their activities with their goals.
One way to identify note-taking goals is to use a long-standing approach differentiating generative and external storage benefits. This distinction was proposed long before PKM and was applied in the analysis of notes taken in classroom settings. The generative benefit proposes that the process of taking notes, or sometimes of taking notes in a particular way, engages our cognitive (mental) processes in ways that improve retention and understanding. External storage recognizes that our memory becomes less effective over time and that having access to an external record (the notes) benefits our productivity. In practice (e.g., a student in a classroom) both benefits may apply, but the storage benefit depends on the note-taking activity: even if taking notes produced no generative benefit, to review notes one must first have something to review. This is not always true, as notes in one form or another can be provided or perhaps generated (for example, AI identification of key ideas), but taking your own notes is by far the most common experience. In a PKM way of thinking, these two processes may function in different ways, but the classroom example should be familiar as a way to identify the theoretical benefits of note-taking.
I have written about the generative function of note-taking at length, but it is important to point out some unique specifics that apply to some digital note-taking tools. A source such as Ahrens’ How to Take Smart Notes might provide the right mindset. I think of generative activities as external actions intended to produce a beneficial mental (cognitive) outcome. The idea is that external activities can encourage or change the likelihood of beneficial thinking behaviors. One way of operationalizing this perspective is to consider some of the specific activities Ahrens identified as external work that produces such cognitive benefits. What are some of these activities? Isolating specific ideas and summarizing each as a note. Assigning tags that characterize a note. Making the effort to link notes. Periodically reviewing notes to generate retrieval practice, to reword existing notes, and to add new associations (links).
Retrieval is easier to explain. Note-taking apps with highly effective search capabilities make it easy to surface stored information when it might be useful. Links and tags may also be useful in this role, but search alone will often be sufficient.
What about the potential conflict?
The conflict I see is that some tools or approaches rely so heavily on search that they argue, in effect, that generative processes are unnecessary.
I started thinking about this assumption when contrasting the two note-taking systems I rely on – Mem.ai and Obsidian. While Mem.ai and Obsidian could be used in exactly the same way, Mem.ai’s developers argued that the built-in AI capabilities could eliminate the need to designate connections (with tags and links) because the AI would identify these connections for you. Thus, when retrieving information via search, a user could also use AI to consider the notes with overlapping foci. Relying on this capability would eliminate the work required to generate the connections created manually in Obsidian, but it would also forgo the generative benefits of this work.
AI capabilities fascinate me so I found a way to add a decent AI capability to Obsidian. Smart Connections is an Obsidian plugin that finds connections among notes and allows a user to chat with their notes. So, I found a way to mimic Mem.ai functionality with Obsidian.
These capabilities have led me to alter my more general PKM approach. Rather than taking individual notes while reading, I can annotate and highlight PDFs, books, and videos, export the entire collection for each source, and then bring this content into both Mem.ai and Obsidian as a very large note. Far easier than taking individual notes, but at what generative cost?
Smart Connections has added a new feature that further facilitates the use of the large-note approach. Smart Connections finds connections based on AI embeddings. An embedding is a mathematical representation of content (a long vector of numbers encoding what the content is about). The more similar two notes’ embeddings, the more the notes address similar ideas. Smart Connections uses embeddings to propose related notes. Originally, embeddings were generated at the note level; now they are generated at the “block” level. What this means is that Smart Connections can find the segments of a long document that have a focus similar to a selected note.
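To make the idea concrete, here is a minimal sketch of embedding similarity. The note names and three-number vectors are made up for illustration; real tools such as Smart Connections obtain much longer vectors from a language model, but the comparison works the same way.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

selected_note = [0.20, 0.70, 0.10]  # hypothetical embedding of one note

blocks = {
    "block on note-taking": [0.21, 0.68, 0.12],  # similar focus
    "block on cooking":     [0.90, 0.05, 0.40],  # different focus
}

for name, vec in blocks.items():
    print(name, round(cosine_similarity(selected_note, vec), 3))
```

Blocks whose vectors point in nearly the same direction as the selected note score close to 1.0 and would be proposed as related.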
Why is this helpful? When I read long documents (pdfs of journal articles or books in Kindle), I can export a long document containing my highlights and notes generated from these documents. With Smart Connections I can then just import this exported material into Obsidian and use Smart Connections to connect a specific note to blocks of all such documents. I can skip breaking up the long document into individual notes and assigning tags and creating links.
Why is this a disadvantage? Taking advantage of this capability can be a powerful disincentive to engaging in the generative activities involved in creating and connecting the individual notes that the basic version of Obsidian requires.
Summary
As note-taking tools mature and add AI capabilities, it is important for users to consider how the way they use such tools can impact their learning and understanding. The tools themselves are quite flexible but can be used in ways that avoid generative tasks that support learning and understanding. If the focus is on the retrieval of content for writing and other tasks, the generative activities may be less important. However, if you started using a tool such as Obsidian because a book such as How to Take Smart Notes influenced you, you might want to think about what might be happening if you rely on the type of AI capabilities I have described here.
References:
Ahrens, S. (2022). How to take smart notes: One simple technique to boost writing, learning and thinking. Sönke Ahrens.
Johns’ “The Science of Reading” explores the historical and scientific journey of reading as a science and a practice. Much of my professional life as a researcher focused on reading and reading skills, and as a consequence, I was aware of some of the history of the research and theory. What my perspective lacked was the broader view of what was expected of reading as a determinant of culture and as the basis for citizenship and commercial and scientific advancement. The political perspective associated with assumptions about what specific skills were necessary for the general advancement of nations was an angle I had not considered.
The closest I can come to explaining some of the insights I encountered is a comparison to present political arguments over why “educated” citizens can believe the things they believe, and even over what should be excluded from classroom consideration to prevent what some see as undesirable outcomes. Those of us involved in the nitty-gritty of learning and improving the skills of reading are often oblivious to broader questions of what the general population may expect the skill to accomplish or the problems the acquisition of a skill may create.
A historical perspective provides not only a way to see transitions in a skill and how that skill is developed, but also, in this case, a way to consider that a skill exists in a reciprocal relationship with knowledge and culture. For example, political values, arguably a part of culture, have varied in demanding that a specific form of communication be prioritized and thus justified support as a means for accomplishing prioritized goals. Who needs to develop a specific communication skill, what information should this skill target, and how will the use of this skill be controlled? More to the point of this post, are we in an era in which reading is coming to the end of its reign in this broader capacity, and are we seeing the early stages of a transition to a different means for recording and transmitting knowledge and culture? Are we in the midst of this transition without acknowledging it, and perhaps more importantly, without supporting and shaping its direction?
Perhaps asking whether we are moving on from reading seems radical, but these thoughts came to me as I have watched my grandchildren, and truthfully most of my relatives, spend hours exploring videos on their phones. The time children and adolescents spend on YouTube and other video content exceeds by a considerable margin the time they spend reading. It seems this reality has to be acknowledged. I tried to locate some specific data and found that a recent Gallup poll indicates adolescents report spending an average of 1.9 hours daily on YouTube alone. Adults may be different, but I would wager that when they encounter a skill they must execute, they are far more likely to see if YouTube has something to offer rather than search for and read the manual that provides related information. I understand that a seemingly similar concern was raised about television because everyone spent, and spends, so much time watching it, but how we make use of televised content seems different and less responsive to transitory personal interests than online video.
A modest proposal
OK. I have not abandoned reading and I rely on reading professionally. I must read journal articles and books to perform my occupational role. Scientific research demands the sharing and reading of text documents in a specific format and with a required approach to citing related sources so that any arguments made can be evaluated based on existing research findings and theory. At this point, I am bound by this approach. However, the process by which the findings of this formal research process reach potential practitioners is not so rigid. Classroom educators can read articles and blog posts in which proposed instructional activities based on the findings of the research community are offered, but they can also listen to and watch podcasts and YouTube presentations. They can take courses (e.g., Coursera) and interactive classes (e.g., Zoom) that rely on video. We all have been taught to read (and write), but what about the development of skills that optimize learning from video?
For several years now, I have been interested in the role of Personal Knowledge Management (PKM) in self-directed learning. Part of this interest has involved the exploration of specific digital tools that support the processing of information within the context of PKM. The PKM perspective can be applied to traditional educational settings, but it also encourages a long-term perspective, which is the environment all of us face once we are no longer involved in courses that require us to learn to pass examinations and produce projects that demonstrate our learning. Our challenge is remembering, when potentially useful, the specifics that earlier exposure to information sources provided, and finding personally useful connections within this great volume of information.
PKM is about tools and tactics. What processes (tactics) allow us to store (internally and externally) a residue from our reflection on the information we have experienced? What external activities (tools) can facilitate storage and processing?
There are plenty of tools and plenty of related suggestions for tactics proposed by the PKM community. My focus here is on video, which has received less attention, and in particular on digital tools that are used during the initial video experience. How does a video viewer capture ideas for later use? How can skills unique to this approach be learned?
Why an integrated digital note-taking tool?
While watching an informative video, why not just take notes in a notebook next to your laptop or tablet? Why not just open a simple word-processing app in a second window on your laptop? My answer would be that you use an integrated digital tool to link the context between the original video and individual notes in ways that anticipate future issues and uses. Note-taking is a far from perfect process, and recovering a missing piece of information necessary to fix a confusing note requires being able to reexamine a specific segment of the original video. I first wrote about the importance of the preservation of context when describing apps that allowed the sound from lectures to be recorded within note-taking apps. These apps automatically establish a link between any note taken and a time stamp connecting the note to a specific point in the audio recording. I even suggested that when note-takers realize they have missed something they know they should have written down, they simply enter something like ??? in their notes as a signal to later check the recorded audio for something not mentioned in the notes that may have been important.
I have a different reason for proposing the importance of digital notes. I use digital note-taking systems that allow me to quickly search and find notes I may have taken years ago. Students are not in this situation, but even in a course with only a midterm and final exam, the delays are long enough to involve a sizable amount of content to review and a time frame likely to increase memory retrieval challenges. Digital notes make searching simple and allow integration and cross-referencing of content over time to be relatively easy. For those of us now managing large amounts of information outside of a formal and short-term academic setting, such challenges are now often described and addressed as Personal Knowledge Management (PKM).
Reclipped
There are several tools available to annotate videos. My favorite is ReClipped. This tool is an extension that is added to the Chrome browser and is activated when a video source the tool can be used with appears in the browser. When the extension has been added, an icon will appear in the icon bar at the top of your browser and the appearance of this icon will change when it has been activated by the presence of video content within the browser. When active with YouTube, additional icons will appear in YouTube below and to the right of the window displaying the video (see the following image with ReClipped icons identified by a red box). (Note: the video used in this example was created by Dr. Dan Alosso and associated with an online book club he runs.)
I have written about ReClipped before in my series about layering tools. I define a layering tool as a tool that allows additions to be overlaid on existing online content without actually modifying that content as sent from the host server. I wrote previously about ReClipped as a way an instructor could add content (questions, comments) to a video so that the composite of the original video and the additions could be presented to students and supplement their learning. The difference here is that a learner is adding the additions for personal use.
To keep this as simple as possible, I will focus on one tool — the pencil. The pencil represents the note tool (see the icons with the pencil tool enclosed in a red box below the video window). Clicking on the pencil creates a time stamp in the panel to the right of the video window allowing the user to enter a note associated with that time stamp (see examples in the image). I tend to click the pencil, pause the video, and then enter my notes. Pausing the presentation is obviously an option not available when listening to a live lecture and solves all kinds of issues that learners face in the live lecture setting.
The save and export buttons are also important. ReClipped will archive your annotations for you when you save, but I am more interested in exporting my annotations so I can use them within my broader Personal Knowledge Management strategy. I use a tool called Obsidian to collect all of my notes and to work with this large collection in other ways (reworking, linking, tagging). I also make use of an AI tool (Smart Connections) to “chat” with my collection of notes.
ReClipped allows the notes associated with a given video to be exported in several formats (e.g., pdf). I export notes in markdown because this is the format Obsidian prefers for import. Markdown is a formatting style something like the HTML used in creating web pages; such additions allow the incorporation of other information with text (e.g., links). For example, one of the entries included in the example I have displayed is exported as the text string that appears below.
– [08:43](https://www.youtube.com/watch?v=ukJtbtb8Tb4&t=523s) levels of notes — fleeting, literature, permanent — literature vs permanent is a matter of connecting to what you already know vs summarization. Permanent note has been “filtered by our interest”
When stored in Obsidian it appears as the following image (this is an image and not active).
Within Obsidian, the link is active and will cause the browser to return to the video stored in YouTube at the location identified by the time stamp. So, if necessary, I can review the video I saw when first creating the note at the point associated with that note. This link will simulate that experience. One issue with time stamps — the creation of a time stamp follows the content the stamp references. You listen and then decide to create a note. To reestablish the context for a note it thus requires that you use the link to a time stamp to activate the video and then scrub backward a bit to view the relevant material.
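For those curious about what the exported line contains, here is a minimal sketch (my own illustration, not ReClipped code) that pulls such a line apart into its time stamp, link, and note text:

```python
import re

# An exported ReClipped note line (shortened from the example above)
line = "- [08:43](https://www.youtube.com/watch?v=ukJtbtb8Tb4&t=523s) levels of notes"

# pattern: "- [MM:SS](url) note text"
match = re.match(r"[-–]\s*\[(\d+:\d{2})\]\((\S+)\)\s*(.*)", line)
if match:
    stamp, url, text = match.groups()
    print(stamp)  # 08:43 -- where in the video the note was taken
    print(url)    # reopens the video at that point (t=523s, i.e., 8:43 in seconds)
    print(text)   # the note itself
```

Nothing about the format is exotic, which is exactly why such exports travel so easily into Obsidian and similar tools.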
ReClipped allows other content (e.g., screen captures) from a video to be collected while viewing. Taking and exporting notes is straightforward and easy for me to explain in a reasonable amount of time.
There is a free version of ReClipped and the paid unlimited version is $2 a month. Note that ReClipped is presently free to teachers and students.
Research
I try to ground my speculation concerning the application of digital tools and techniques in unique learning situations with links to relevant research. In this case, my preference would be for studies comparing traditional note-taking from video with taking notes using integrated digital note-taking tools similar to ReClipped. I have been unable to locate the type of studies I had hoped to find. I did locate some studies evaluating the effectiveness of scratch-built tools typically incorporating some type of guided study tactic (see the Fang and colleagues reference as an example). Though important work, learner application of more flexible and accessible tools seems a different matter and needs to be evaluated separately.
Putting this all together
If you agree with the argument that we will increasingly rely on video content for the skills and information we want to learn, my basic suggestion is that we think more carefully about how to optimize learning from such content and teach/learn skills appropriate to this content and context. Digital tools such as ReClipped allow notes to be taken while viewing videos. These notes can be exported and stored within a Personal Knowledge Management system for reflection and connection with information from other sources. This post suggests that experience with such tools under educator supervision would provide learners the skills needed to take a more active approach to learning from videos they encounter.
References:
Fang, J., Wang, Y., Yang, C. L., Liu, C., & Wang, H. C. (2022). Understanding the effects of structured note-taking systems for video-based learners in individual and social learning contexts. Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–21.
Johns, A. (2023). The Science of Reading: Information, Media, and Mind in Modern America. University of Chicago Press.
AI, tutoring, and mastery learning are topics that have dominated my professional interests for years. Obviously, AI has been added recently, but the others have been active areas of my scholarship since the late 1960s. I have mostly treated these topics in isolation, but they can be interrelated, and recent efforts have drawn attention to potential interconnections. I will end this post by providing my own take on how these topics can now be considered in combination.
Aptitude and mastery learning
I think the history of the interrelationship of these two concepts is important and not appreciated by current researchers and educational innovators. At least I do not see an effort to connect with what I think are important insights.
Aptitude, and how educational experiences accommodate differences in aptitude, doesn’t seem to receive a lot of attention. I see a lot of references to individual interests and perhaps existing knowledge under the heading of personalization, but less to aptitude. A common way of defining aptitude is as the natural ability to do something. When applied to learning, this definition becomes controversial. It may be the word “natural”. The idea of “natural” as biologically based is probably what causes the problems. You can see what I mean about this being a messy idea if the word intelligence is equated with “natural”. Immediately, those who disagree with the basic idea begin complaining about the limitations of intelligence tests and the dangers of attaching labels to individuals. I can understand the concerns and potential abuses, but I have never thought the solution was to ignore the variability any educator faces in the students they work with. What way of thinking about this variability would be helpful?
As I was taught about intelligence and intelligence testing and learned about the correlates with academic achievement, I encountered a way of thinking I found helpful and useful. Perhaps aptitude could be thought of as speed of learning. This was the proposal of John Carroll. Instead of aptitude predicting differences in how much would be learned, Carroll proposed that differences in aptitude predicted how long most learners would take to grasp the same learning objectives. The implications of this perspective seem extremely important. Carroll argued that traditional educational settings, with their fixed timeframes for learning, often disadvantaged students with lower aptitude. In these settings, students with higher aptitude tend to learn more within the limited time available, while those who require more time to process information might fall behind. This disparity has an important secondary implication. Learning is cumulative, with existing knowledge influencing future understanding and learning efficiency. Put in a way teachers will likely recognize, missing prerequisites make related information difficult to understand. So, it is not just differences in aptitude that matter, but differences in aptitude within a fixed learning environment (time and methods), where learning speed and existing knowledge compound each other.
The connection with a traditional way of defining aptitude such as intelligence may not jump out at you, but consider the classic IQ = (MA/CA) × 100 so many learned in Intro to Psych courses. Think of it this way. CA is chronological age, or within my way of explaining aptitude, the time available for learning. MA is mental age, or how much an individual of a given age knows. The quotient ends up as a measure of learning efficiency, or amount learned per unit of time.
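As a worked example of this way of reading the formula (the numbers are mine, chosen only for illustration):

```latex
\mathrm{IQ} = \frac{\mathrm{MA}}{\mathrm{CA}} \times 100
% An 8-year-old (CA = 8, eight years of available learning time) who performs
% like a typical 10-year-old (MA = 10, ten years' worth of knowledge):
\mathrm{IQ} = \frac{10}{8} \times 100 = 125
% i.e., roughly 1.25 "years of knowledge" gained per year of available time.
```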
Anyway, assuming this theoretical notion offers a reasonable way of understanding reality, what does this mean for educators and what actions seem reasonable responses?
Dealing with differences in rate of learning
When I present this to future teachers, I propose that educational settings do make some accommodations to this perspective. At the extreme, a few students might be required to repeat a grade. Schools provide extra help and time in the form of pull-out programs and other types of individual help. Schools used to and to some extent still group students based on performance/ability to match the rate of progress to instruction (e.g., tracking, ability grouping). While helpful, these programs do not stem the increasing variability in performance across elementary school grades. Perhaps once students get to high school variability is accommodated by the selection of different courses and the pursuit of different learning goals, but even if this is the case there are long-term consequences from the early learning experiences. How is motivation impacted by the increasing frequency of failure and related frustration? Are there practical ways to claw your way back from early failures once you fall behind?
Mastery learning
In the early 1970s, I became interested in two instructional strategies labeled mastery learning. These approaches proposed ways to respond to variability in the rate of learning. I will summarize these as Bloom’s Group-Based Method and Keller’s Personalized System of Instruction. Bloom was and continues to be a big name in education and gets a lot of attention. I see Keller as developing a system more attuned to the application of technology and AI. Both offered concrete proposals and encouraged a lot of research. The volume of research and related meta-analyses offers much to present-day efforts that lack the same detailed analyses (see references to the work of Kulik and colleagues).
Bloom’s Group-Based Mastery – In the late 1960s, Benjamin Bloom, an educational psychologist, considered the optimal approach to individualized education. Bloom concluded that individual tutoring yielded the best results for learners. Bloom’s research indicated that tutoring could produce significant improvements in student achievement, with 80% of tutored students achieving a level of mastery only attained by 20% of students in traditional classroom settings. Bloom recognized the impracticality of providing one-on-one tutoring for every student. Instead, he challenged researchers to explore alternative instructional strategies capable of replicating the effectiveness of individual tutoring. This has been described as the 2-sigma challenge based on the statistical advantage Bloom claimed for tutoring.
Bloom’s (1968) approach to mastery learning was group-based. A group of learners would focus on content (e.g., a chapter) to be learned for approximately a week and would then be administered a formative evaluation. Those who passed this evaluation (often at what was considered a B level) would continue on to supplemental learning activities, and those who did not pass would receive remediation appropriate to their needs. At the end of this second period of instruction (at about the two-week mark), students would receive the summative examination to determine their grades. Those who were struggling were provided more time on the core goals. Yes, this is a practical more than a perfect approach, as there is no guarantee that all students will have mastered the core objectives necessary for future learning by the end of the second week. A similar and more recent approach called the Modern Classroom Project categorizes goals as “need to know”, “good to know” and “aim to know”. The idea is that not all possible goals can practically be achieved.
Keller’s Personalized System of Instruction – Fred Keller, drawing inspiration from Carroll’s work, developed the Personalized System of Instruction (PSI) in 1968. Keller proposed that presenting educational content in written format, rather than through traditional lectures, could provide students with the flexibility to learn at their own pace. PSI utilizes written materials, tutors, and unit mastery to facilitate learning. Students progress through units of instruction at their own pace, revisiting concepts and seeking clarification as needed when initial evaluations show difficulties. This self-paced approach enables students to dedicate additional time to challenging concepts while progressing more quickly through familiar material. The focus on written materials that could be used by individuals allowed Keller’s approach to emphasize individual progress; it was not necessary that a group be kept to a common pace.
PSI utilizes frequent assessments to gauge student understanding and identify areas requiring further instruction. These assessments are non-punitive, meaning they do not negatively impact a student’s grade. Instead, assessments provide feedback that guides students toward mastery of the material. If a student does not demonstrate mastery on an assessment, they receive additional support and instruction tailored to their specific needs, before retaking the assessment.
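The unit-mastery cycle just described is essentially a loop: study, assess, remediate, retake, and advance only on mastery. Here is a toy sketch of that logic; the 90% threshold, the knowledge numbers, and the function names are my own illustrative assumptions, not part of Keller’s writing.

```python
import random

MASTERY_THRESHOLD = 0.9  # assumed criterion, e.g., 90% on a unit quiz

def take_assessment(knowledge):
    """Toy model: the score tracks accumulated study, plus a little noise."""
    return min(1.0, knowledge + random.uniform(-0.05, 0.05))

def complete_unit(starting_knowledge):
    """Work one unit at the learner's own pace until mastery; return attempts used."""
    knowledge = starting_knowledge
    attempts = 0
    while True:
        attempts += 1
        if take_assessment(knowledge) >= MASTERY_THRESHOLD:
            return attempts        # advance only after demonstrating mastery
        # a low score is non-punitive: it triggers feedback and more study time
        knowledge += 0.15          # remediation before the retake

# Learners differ in how long a unit takes, not in what is eventually mastered.
for start in (0.5, 0.8):
    print(f"starting knowledge {start}: {complete_unit(start)} attempt(s)")
```

The point of the sketch is Carroll’s reframing: the outcome is held constant while the time (number of attempts) varies across learners.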
In Keller’s model, tutors play a crucial role in evaluating student progress, offering personalized feedback, and providing clarification or additional instruction when needed. The role of the tutor could be fulfilled by various individuals, including the teacher, teaching assistants, or even fellow students who have already achieved mastery of the subject matter. The teacher’s role in PSI shifts from delivering lectures to designing the curriculum, selecting and organizing study materials, and providing individualized support to students.
Adapting old models to modern technology
While mastery learning predates the widespread adoption of technology in education, technology has significantly enhanced its implementation. Meta-analyses generally found that mastery approaches offered achievement benefits when compared with traditional instruction. My interpretation of why interest in the original approaches declined was that interest waned not because of effectiveness, but because of practicality. Mastery approaches were simply difficult to implement. Online platforms and educational technologies can facilitate personalized learning experiences by delivering content, tracking student progress, and providing individualized feedback and support. Technology can also automate many of the administrative tasks associated with mastery learning, such as grading assessments and tracking student progress, freeing up educators to focus on providing individualized support. Both the Bloom and Keller approaches could be implemented making use of technology, but the greatest benefit would seem to be to the Keller approach.
AI, tutoring, and mastery learning
Recent mentions of mastery learning (Khan, Archambault and colleagues) treat it in combination with tutoring. Bloom originally offered his group-based mastery approach as his own response to the two-sigma challenge. However, group-based mastery was compared to, not integrated with, tutoring. The number of professionals working in schools is not increasing, and if anything class sizes are increasing. Greater individualization only increases the importance of individual monitoring and attention, and AI as tutor can reduce some of the demands on the limited time of professional educators.
Archambault and colleagues summarize the complication posed by the seemingly conflicting education goals of individualized learning and the needs for interaction and socioemotional learning. I have included the following quote from their work.
For example, cultivating classroom community through building relationships online and having students work together to develop social interaction at a distance may have competing interests with personalizing instruction such that each student can work at their own pace and through their own path to master course content.
Summary
Mastery learning is gaining increasing attention among educators seeing the value of applications of technology to individualize learning. This post summarizes the history of mastery instructional methods and offers other insights into how old ideas may be practically implemented with technology.
I have written multiple posts about mastery learning and current efforts to apply mastery principles. Reviewing some of these posts may be valuable if this summary sparks your interest.
References:
Archambault, L., Leary, H., & Rice, K. (2022). Pillars of online pedagogy: A framework for teaching in online learning environments. Educational Psychologist, 57(3), 178–191. https://doi.org/10.1080/00461520.2022.2051513
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment, 1(2), 1–12.
Khan, S. (2024). Brave new words: How AI will revolutionize education (and why that’s a good thing). Penguin Random House.
Keller, F. S. (1968). “Good-bye teacher”. Journal of Applied Behavior Analysis, 1, 79–89
Kulik, C., Kulik, J. & Bangert-Drowns, R.L. (1990). Effectiveness of mastery learning programs: A meta-analysis. Review of Educational Research, 60, 265–299.
Kulik, C., Kulik, J. & Bangert-Drowns, R.L. (1990). Is there better evidence on mastery learning? A response to Slavin. Review of Educational Research, 60, 303–307.
Kulik, J. A., Kulik, C. L. C., & Cohen, P. A. (1979). A meta-analysis of outcome studies of Keller’s personalized system of instruction. American Psychologist, 34(4), 307–318.
Thinking is not visible to self and others, and this reality limits both personal analysis and assistance from others. I have always associated the request to “show your work” with learning math, so that the subprocesses of mathematical solutions can be examined, but the advantage can be applied, when possible, to other processes. I have a personal interest in the ways in which technology can be used to externalize thinking processes and the unique opportunities technology offers when compared with other methods of externalization such as paper and pen.
Ideas from different areas of interest sometimes come together in unexpected ways. This has been a recent experience for me involving a long-term interest in argumentation and digital tools applied to learning. Argumentation may not spark an immediate understanding for educators. It sometimes helps if I connect it with the activity of debate, but it relates to many other topics such as critical thinking and the processes of science as well. It relates directly to issues such as the distribution of misinformation online and what might be done to protect us all from this type of influence.
For a time, I was fascinated by the research of Deanna Kuhn and wrote several posts about her findings and educational applications. Kuhn studied what I would describe as the development of argumentation skills and what educational interventions might be applied to address the limitations she observed. It is easy to see many of the limitations of online social behavior in the immaturity of middle school students engaged in a structured argument (debate). Immature interactions involving a topic with multiple sides might be described as egocentric. Even though there is an interaction with a common topic, participants mostly state the positions they take, sometimes with and frequently without supporting evidence. As they go back and forth, they seldom identify the positions taken by an “opponent” or offer evidence to weaken such positions. Too often, personal attacks follow in the “adult” online version, and little actual examination of the supposed issues of interest is involved.
Consideration of the process of clearly stating positions and evidence for and against maps easily to what we mean by critical thinking and the processes of science. In the political sphere what Kuhn and similar researchers investigate relates directly to whether or not policy matters are the focus of differences of opinion.
Externalization and learning to argue effectively
Kuhn proposed that to improve (develop) critical thinking skills, learners would benefit from experiences encouraging reflection. An approach that proved productive across multiple studies was based on two techniques for encouraging reflection. Across multiple age groups (middle school, high school, college students), she had pairs of participants argue using online chat. A pair had to agree on a given “move” or statement before submission (externalizing rationales for consideration), and submitting statements in chat both allowed an opportunity to focus on the message without interference from the face-to-face issues that are present in formal debate and created a record that could be critiqued. In some studies, the participants were asked to complete forms asking for a statement of the positions taken by opponents and the evidence offered in support of these positions. The effectiveness of the treatments was examined following training without such scaffolds.
AI arguments result in an external record
I and others have been exploring the experience of arguing with an AI opponent. One insight I had while exploring this activity was that it resulted in an external product that could be examined much in the way Kuhn’s chat transcripts could be examined. Classroom applications seem straightforward. For example, the educator could provide the same prompt to all of the students in the class and ask the students to submit the resulting transcript after an allotted amount of time. Students could be asked to comment on their experiences, and selected “arguments” could be displayed for consideration by the group. A more direct approach would use Kuhn’s pairs approach, asking that the pairs decide on a chat entry before it was submitted. The interesting thing about AI large language models is that the experience is different for each individual submitting the same prompt, or for the same individual submitting the prompt a second time.
I have described what an AI argument (debate) looks like and provided an example of a prompt that would initiate the argument and offer evaluation in a previous post. I have included the example I used in that post below. In this example, I am debating the AI service regarding the effectiveness of reading from paper or screen as I thought readers are likely familiar with this controversy.
…
Summary
Critical thinking, the process of science, and effective discussion of controversial topics depend on the skills of argumentation. Without development, the skills of argumentation are self-focused, lacking the careful identification and evaluation of opposing ideas. These limitations can be addressed through instructional strategies encouraging reflection, and the physical transcript resulting from an argument with an AI-based opponent provides the opportunity for reflection.
References:
Iordanou, K. (2013). Developing Face-to-Face Argumentation Skills: Does Arguing on the Computer Help. Journal of Cognition & Development, 14(2), 292–320.
Kuhn, D., Goh, W., Iordanou, K., & Shaenfield, D. (2008). Arguing on the Computer: A Microgenetic Study of Developing Argument Skills in a Computer-Supported Environment. Child Development, 79(5), 1310-1328
Mayweg-Paus, E., Macagno, F., & Kuhn, D. (2016). Developing Argumentation Strategies in Electronic Dialogs: Is Modeling Effective. Discourse Processes, 53(4), 280–297. https://doi.org/10.1080/0163853X.2015.1040323
The instructional strategy of “flipping the classroom” is one of those recommendations that seems on first consideration to make a lot of sense. The core idea hinges on the truth that classroom time with students is limited and efficient use must be made of this time. Instead of taking up a substantial amount of this time with teacher presentations, why not move the exposure to content outside of class time and use class time for more active tasks such as helping students who have problems and allowing students to engage in active tasks with other students? With easy access to tools for recording presentations and sharing recordings online, why not simply have educators share presentations with students and have students review this material before class? So, presentations were flipped from class time to settings that might have been more frequently used for homework.
This all seemed very rational. I cannot remember where I first encountered the idea, but I did purchase Flip Your Classroom (Bergman and Sams, 2012) written by the high school teachers who I believe created the concept. While I did use my blog and textbook to promote this approach, I must have always wondered. I wrote a blog post in 2012 commenting that flipping the classroom sounded very similar to my large lecture experience of presenting to hundreds of students and expecting that these students would have read the textbook before class. Again, the logic of following up an initial exposure with an anecdote-rich and expanded focus on key concepts seemed sound. However, I knew this was not the way many students used their textbooks and some probably did not even make the purchase, but I was controlling what I could control.
There have been hundreds of studies evaluating the flipping strategy and many meta-analyses of these studies. These meta-analyses tend to conclude that asking students to watch video lectures before coming to class is generally beneficial. I think many have slightly modified the suggested in-class component to expand the notion of greater teacher-student interaction to include a focus on active learning. Kapur et al. (2022), authors of the meta-analysis I will focus on eventually, list the following experiences as examples of active learning – problem-solving, class discussions, dialog and debates, student presentations, collaboration, labs, games, and interactive and simulation-based learning activities.
The institution where I taught had a group very much interested in active learning and several special active learning “labs” were created to focus on these techniques. The labs contained tables instead of rows of chairs, whiteboards, and other adaptations. To teach a large class in this setting you had to submit a description of the active techniques you intended to implement. The largest classes (200+) I taught could not be accommodated in these rooms and I am not certain if I would have ever submitted a proposal anyway.
Kapur et al. (2022)
Kapur and colleagues found reason to add another meta-analysis to those already completed. While their integrated analysis of the meta-analytic papers concluded that flipped classrooms have an advantage, Kapur and colleagues were puzzled by the great variability present among the studies. Some studies demonstrated a great advantage in student achievement for the flipped approach, and some found that traditional instruction was superior. It did not seem reasonable that a basic underlying advantage would be associated with this much variability, and the researchers proposed that a focus on the average effect size without consideration of the source or sources for this variability made little sense. They conducted their own meta-analysis and coded each study according to a variety of methodological and situational variables.
The most surprising finding from this approach was that the inclusion of active learning components was relatively inconsequential. Remember that the use of such strategies in the face-to-face setting was emphasized in many applications. Surprisingly, segments of lecture within the face-to-face setting were a better predictor of an achievement advantage. Despite the break from the general understanding of how flipped classrooms are expected to work, educators seemed to use these presentations to review or supplement independent student content consumption and this provided an achievement bump.
The active learning component found to make a difference was a problem-based strategy, particularly when the entire process began with a problem-based experience. This finding reminds me of the problem-based learning research conducted by Deanna Kuhn, who also proposed that the problem-based experience start the learning sequence. Kapur used the phrase productive failure to describe the way struggling with a problem before encountering relevant background information was helpful. Kuhn emphasized a similar process without the catchy label and proposed the advantage was more a matter of the activation of relevant knowledge and guiding the interpretation of information within the presentation of content that followed.
Regarding the general perspective on the flipped model identified by Kapur and colleagues, their findings were less an indictment of the concept than a demonstration of the lack of fidelity in implementations to the proposed advantage of using face-to-face time to interact and adjust to student needs. Increasing responsiveness to individual needs would seem beneficial and may be ignored in favor of activities that are less impactful.
References:
Kapur, M., Hattie, J., Grossman, I., & Sinha, T. (2022, September). Fail, flip, fix, and feed–Rethinking flipped learning: A review of meta-analyses and a subsequent meta-analysis. In Frontiers in Education (Vol. 7, p. 956416). Frontiers.
Pease, M. A., & Kuhn, D. (2011). Experimental analysis of the effective components of problem-based learning. Science Education, 95(1), 57–86.
Wirkala, C. & Kuhn, D. (2011). Problem-Based Learning in K–12 Education: Is it Effective and How Does it Achieve its Effects? American Educational Research Journal, 48, 1157–1186.
In my world, talk of AI is everywhere. I doubt most have a different opinion because nearly any news program has a story every other day or so commenting on AI capabilities, dangers, and the wealth and power being accumulated by developers. We all have experienced the history, beginning with ChatGPT in Nov. 2022, of the large language models.
I tried to find some specifics about the popularity of AI, and this is challenging. Multiple companies quickly became involved, and you can use free versions of AI programs with just a browser, making an accurate count of “users” difficult. We do know that a million users signed up for ChatGPT within days of its release.
So, where are we at a year and a half later? Again, you and I may use an AI large language model on a daily or at least weekly basis, but how much use is actually going on “out there”?
Studies have started to appear attempting to determine how common frequent use is. Frequent use can be very different from “yeah, I tried that” use. My interpretation is that folks in many countries have heard of AI, and quite a few have given at least one service a try, but most now appear puzzled by what should come next. One of these studies, with the broadest scope, surveyed respondents in six countries, including Argentina, Denmark, France, Japan, and the US. Among those surveyed, awareness was high, but frequent actual use was low. On a daily basis, frequent users ranged from 7% in the U.S. to 1% in Japan. 56% of 18-24 year olds had tried an AI service, as had 16% of those over 55.
My personal interest concerns AI in schools, so I tried to locate studies that attempted to establish typical patterns of use by secondary students. Here is a 2024 study from Common Sense Media on this topic, available to all online. A very short summary: about half of 14-22-year-olds have used an AI service, but only 4% report being daily users. Beyond these basic statistics, I found it startling that minority youth (Black and Latinx) reported a higher frequency of use, with 20% and 10%, respectively, claiming to be weekly users. I cross-checked this result several times to make certain I understood it correctly. When asked to categorize their use, young people reported searching for information, generating ideas, and school work, in that order. Another large category of use was generating pictures. The authors reported some concern when finding that searching for information was the most frequent category of use.
Participants were asked about concerns that limited their use of AI and potential accusations of cheating were high among these young people.
I admit I need to review this study more carefully because it is not clear to me if the participants were including any classroom use in contrast to what I would call personal use.
The “what can I do with this” question
Mollick and the 10-hour investment. I have read several efforts by Ethan Mollick (NYTimes, Kindle book) and find his perspective useful. He claims using AI is different from learning other technology applications in that there are no exact instructions you can follow to find productive uses. Instead, he proposes that you invest 10 hours and try the tool you select on various tasks that you face daily. If you write a lot of emails, chat with the AI tool about what you want to say and see what it generates. Request modifications to improve what is generated to suit your needs. Ask it to create an image you might have a way to use. Ask it to generate ideas for a task you want to accomplish. Some may tell you that AI is not a person, and this is obviously the case, but forget this for a while and treat the AI service like an intern working with you. Converse in a natural way and give a clear description of what your tasks require. Ask the AI service to take on a persona and then explain your task. If you are trying to create something for a classroom situation, ask the service to act as an experienced teacher of XXX preparing for a lesson on YYY. Expect problems, but if you involve the tool in areas you understand, you should be able to identify what is incorrect and request improvements.
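For readers who reach an AI service through code rather than a chat window, here is a minimal sketch of the persona-plus-task pattern just described. It uses the OpenAI Python client; the model name, persona wording, and task are my own illustrative assumptions, not recommendations from Mollick.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever service you use
    messages=[
        # The persona: an experienced teacher preparing a lesson.
        {"role": "system",
         "content": "Act as an experienced middle school science teacher "
                    "preparing a lesson on photosynthesis."},
        # The task, described as you would describe it to an intern.
        {"role": "user",
         "content": "Draft three discussion questions and one short activity "
                    "for a 45-minute class. Explain your choices."},
    ],
)
print(response.choices[0].message.content)
```

The same two-part structure (persona first, then a clear task) works equally well typed directly into any chat interface.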
I watched the recent Apple announcement regarding the company’s soon-to-be-released AI capabilities. Thinking about Apple’s approach, I could not help thinking that experiences with Apple products in the ways Apple plans could be a great gateway to finding personal, practical applications of AI (Apple wants you to think of their approach as Apple Intelligence). Apple intends to roll out a two-tiered model – AI capabilities available in a self-contained way on Apple devices and AI capabilities available off device. The on-device AI capabilities are designed to accomplish common tasks. Think of the on-device capabilities as similar to what Mollick proposes – ways to accomplish daily tasks (e.g., summarization, image creation, text evaluation and improvement, finding something I know I read recently). AI capabilities are available within most Apple products and also within other services. I could not help wondering how Grammarly will survive with AI tools available to Apple users who own recent Apple equipment.
Obviously, I have yet to try the new Apple Intelligence tools and I doubt I will close out my AI subscriptions, but I do think Apple tools as a transition will increase day-to-day usage.
I happened across this news story from NBC concerning the accuracy of election information. The story reported data from a research organization involving the submission of requests to multiple AI services and then having experts evaluate the quality of the responses. I also then read the description provided by the research organization and located the data used by this organization (the questions and methodology).
The results showed that a significant portion of the AI models’ answers were inaccurate, misleading, and potentially harmful. The experts found that the AI models often provided information that could discourage voter participation, misinterpret the actions of election workers, or mislead people about politicized aspects of the voting process. The focus in the research was on general information and did not address concerns with misinformation from candidates.
I have been exploring how I might address this same issue and perhaps offer an example educators might try in their classrooms. Educators exploring AI topics over the summer may also find my approach something they can try. AI issues seem important in most classrooms.
As I thought about my own explorations and this one specifically, a significant challenge is having confidence in the evaluations I make about the quality of AI responses. For earlier posts, I have written about topics such as tutoring. I have had the AI service engage with me using content from a textbook I have written. This approach made sense for evaluating AI as a tutor, but would not work with the topic of explaining political procedures. For this evaluation, I decided to focus on issues in my state (Minnesota) that were recently established and would be applied in the 2024 election.
The topic of absentee ballots and early voting has been contentious. Minnesota has a liberal policy allowing anyone to secure a mail ballot without answering questions about qualifying conditions, and voters can now request that a ballot be sent by default in future elections without repeated applications. The second policy just went into effect in June, and I thought it would represent a good test of an AI system, just to see whether AI responses are based on general information about elections, mixing the situation in some states with the situation in others, or are specific to individual states and recent changes in election laws.
Here is the prompt I used:
I know I will not be in my home state of Minnesota during future Novembers, but I will be in Hawaii. Can I ask for an absentee ballot to be automatically sent to me before each election?
I used this prompt with ChatGPT (4) and Claude and found all responses to be appropriate (see below). When you chat with an AI tool using the same prompt, one interesting observation is that each response is unique because it is constructed anew each time the prompt is submitted.
I decided to try one more request which I thought would be even more basic. As I already noted, Minnesota does not require a citizen to provide an explanation when asking for a mail-in ballot. Some states do, so I asked about this requirement.
Prompt: Do you need an explanation for why you want an absentee ballot in Minnesota
As you can see in the following two responses to this same prompt, I received contradictory responses. This would seem the type of misinformation that the AI Democracy Project was reporting.
Here is a related observation that seems relevant. If you use Google searches and you have the AI lab tool turned on, you have likely encountered an AI response to your search before you see the traditional list of links related to your request. I know that efforts are being made to address misinformation in regard to certain topics. Here is an example in response to such concerns. If you use the prompt I have listed here, you should receive a list of links even if Google sends you a summary for other prompts (note – this is different from submitting the prompt directly to ChatGPT or Claude). For a comparison, try this nonpolitical prompt and you should see a difference: “Are there disadvantages from reading from a tablet?” With questions related to election information, no AI summary should appear and you should see only links associated with your prompt.
Summary
AI can generate misinformation, which can be critical when voters request information related to election procedures. This example demonstrates this problem and suggests a way others can explore this problem.