As I have explored several digital note-taking tools and examined the arguments for how such tools produce productivity benefits, I have identified a potential conflict in what produces positive outcomes. Recognizing this conflict allows tool users to act more purposefully and may better align their activities with their goals.
One way to identify note-taking goals is to use a long-standing distinction between generative and external storage benefits. This distinction was proposed long before PKM and was applied in the analysis of notes taken in classroom settings. The generative benefit proposes that the process of taking notes, or sometimes of taking notes in a particular way, engages our cognitive (mental) processes in ways that improve retention and understanding. External storage implies that our memory becomes less effective over time and that having access to an external record (the notes) benefits our productivity. In practice (e.g., a student in a classroom), both benefits may apply, but one depends on the other: taking notes may or may not be beneficial in itself, but to review notes one must first have something to review. This is not always true, as notes in one form or another can be provided or perhaps generated (for example, AI identification of key ideas), but taking your own notes is by far the most common experience. In a PKM way of thinking, these two processes may function in different ways, but the classroom example should be familiar as a way to identify the theoretical benefits of note-taking.
I have written about the generative function of note-taking at length, but it is important to point out some unique specifics that apply to some digital note-taking tools. A source such as Ahrens’ How to Take Smart Notes might provide the right mindset. I think of generative activities as external actions intended to produce a beneficial mental (cognitive) outcome. The idea is that external activities can encourage or change the likelihood of beneficial thinking behaviors. One way of operationalizing this perspective is to consider some of the specific activities Ahrens identified as external work resulting in such cognitive benefits. What are some of these activities? Isolating specific ideas and summarizing each as a note. Assigning tags that characterize a note. Making the effort to link notes. Periodically reviewing notes to generate retrieval practice, to reword existing notes, and to add new associations (links).
The external storage (retrieval) benefit is easier to explain. Note-taking apps with highly effective search capabilities make it easy to surface stored information when it might be useful. Links and tags may also be useful in this role, but search alone will often be sufficient.
What about the potential conflict?
The conflict I see arises because some tools or approaches rely heavily on search, arguing in effect that generative processes are unnecessary.
I started thinking about this assumption when contrasting the two note-taking systems I rely on – Mem.ai and Obsidian. While the two could be used in exactly the same way, Mem.ai developers argued that the built-in AI capabilities could eliminate the need to designate connections (with tags and links) because the AI would identify these connections for you. Thus, when retrieving information via search, a user could use AI to also consider the notes with overlapping foci. Relying on this capability would eliminate the work required to generate the connections manually created in Obsidian, but it would also forgo the generative benefits of this work.
AI capabilities fascinate me, so I found a way to add a decent AI capability to Obsidian. Smart Connections is an Obsidian plugin that finds connections among notes and allows a user to chat with their notes. In other words, I found a way to mimic Mem.ai functionality within Obsidian.
I have found a way to alter my more general PKM approach because of these capabilities. Rather than taking individual notes while reading, I can annotate and highlight pdfs, books, and videos, export the entire collection for each source, and then bring this content into both Mem.ai and Obsidian as a very large note. Far easier than taking individual notes, but at what generative cost?
Smart Connections has added a new feature that further facilitates the use of the large-note approach. The plugin finds connections based on AI embeddings. An embedding is a mathematical representation of content (I would describe it as a set of weights, based on what I remember of statistics). The more similar the weights of two notes’ embeddings, the more the notes consider similar ideas. Smart Connections uses embeddings to propose related notes. Originally, embeddings were generated at the note level; now they are generated at the “block” level. What this means is that Smart Connections can find the segments of a long document that have a focus similar to a selected note.
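To make the idea concrete, here is a minimal sketch of the kind of comparison involved, assuming the OpenAI Python client (this is my illustration, not the plugin’s actual code; the model name and note text are placeholders):

```python
# A minimal sketch of embedding similarity, assuming the OpenAI Python client.
# Smart Connections manages this internally; the notes below are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

notes = [
    "Taking notes by hand may improve retention of lecture content.",
    "Longhand note-taking forces summarization, which aids memory.",
    "Minnesota allows any voter to request a mail-in ballot.",
]

result = client.embeddings.create(model="text-embedding-3-small", input=notes)
vectors = [np.array(item.embedding) for item in result.data]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: values near 1 indicate notes covering similar ideas.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # relatively high: overlapping ideas
print(cosine(vectors[0], vectors[2]))  # relatively low: unrelated topics
```

Generate an embedding for each note (or block), compare the vectors, and the closest matches become the proposed connections.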
Why is this helpful? When I read long documents (pdfs of journal articles or books in Kindle), I can export a long document containing the highlights and notes generated from these sources. I can then import this exported material into Obsidian and use Smart Connections to connect a specific note to blocks of all such documents. I can skip breaking up the long document into individual notes, assigning tags, and creating links.
Why is this a disadvantage? Taking advantage of this capability can be a powerful disincentive to engaging in the generative activities involved in creating and connecting individual notes that the basic version of Obsidian requires.
Summary
As note-taking tools mature and add AI capabilities, it is important for users to consider how the way they use such tools can impact their learning and understanding. The tools themselves are quite flexible but can be used in ways that avoid generative tasks that support learning and understanding. If the focus is on the retrieval of content for writing and other tasks, the generative activities may be less important. However, if you started using a tool such as Obsidian because a book such as How to Take Smart Notes influenced you, you might want to think about what might be happening if you rely on the type of AI capabilities I have described here.
References:
Ahrens, S. (2022). How to take smart notes: One simple technique to boost writing, learning and thinking. Sönke Ahrens.
AI, tutoring, and mastery learning are topics that have dominated my professional interests for years. Obviously, AI has been added recently, but the other topics have been active areas of my scholarship since the late 1960s. I have mostly treated these topics in isolation, but they can be interrelated, and recent efforts have drawn attention to potential interconnections. I will end this post by providing my own take on how these topics can now be considered in combination.
Aptitude and mastery learning
I think the history of the interrelationship of these two concepts is important and not appreciated by current researchers and educational innovators. At least I do not see an effort to connect with what I think are important insights.
Aptitude and how educational experiences accommodate differences in aptitude don’t seem to receive a lot of attention. I see a lot of references to individual interests and perhaps existing knowledge under the heading of personalization, but less to aptitude. A common way of defining aptitude is as the natural ability to do something. When applied to learning, this definition becomes controversial. It may be the word “natural”. The idea of “natural” as biologically based is probably what causes the problems. You can see what I mean about this being a messy idea if the word intelligence is equated with “natural”. Immediately, those who disagree with the basic idea begin complaining about the limitations of intelligence tests and the dangers of attaching labels to individuals. I can understand the concerns and potential abuses, but I have never thought the solution was to ignore what any educator faces in the variability of the students they work with. What way of thinking about this variability would be helpful?
As I was taught about intelligence and intelligence testing and learned about the correlates with academic achievement, I encountered a way of thinking I found helpful and useful. Perhaps aptitude could be thought of as speed of learning. This was the proposal of John Carroll. Instead of aptitude predicting differences in how much would be learned, Carroll proposed that differences in aptitude predicted how long most learners would take to grasp the same learning objectives. The implications of this perspective seem extremely important. Carroll argued that traditional educational settings, with their fixed timeframes for learning, often disadvantaged students with lower aptitude. In these settings, students with higher aptitude tend to learn more within the limited time available, while those who require more time to process information might fall behind. This disparity has an important secondary implication. Learning is cumulative, with existing knowledge influencing future understanding and learning efficiency. Put another way, as teachers will likely recognize, missing prerequisites make related information difficult to understand. So, it is not just differences in aptitude that matter, but differences in aptitude within a fixed learning environment (time and methods) that compound learning speed and existing knowledge.
The connection with a traditional way of defining aptitude such as intelligence may not jump out at you, but consider the classic IQ = MA/CA ratio so many learned in Intro to Psych courses. Think of it this way. CA is chronological age or, within my way of explaining aptitude, the time available for learning. MA is mental age or how much an individual of a given age knows. The quotient ends up as a measure of learning efficiency, or amount learned per unit of time.
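To make the arithmetic concrete: the classic ratio formula is IQ = (MA / CA) × 100. A 10-year-old who performs like a typical 12-year-old has an IQ of (12 / 10) × 100 = 120. On the learning-efficiency reading, this child has accumulated roughly 1.2 “years” of learning per year of time available.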
Anyway, assuming this theoretical notion offers a reasonable way of understanding reality, what does this mean for educators and what actions seem reasonable responses?
Dealing with differences in rate of learning
When I present this to future teachers, I propose that educational settings do make some accommodations to this perspective. At the extreme, a few students might be required to repeat a grade. Schools provide extra help and time in the form of pull-out programs and other types of individual help. Schools used to and to some extent still group students based on performance/ability to match the rate of progress to instruction (e.g., tracking, ability grouping). While helpful, these programs do not stem the increasing variability in performance across elementary school grades. Perhaps once students get to high school variability is accommodated by the selection of different courses and the pursuit of different learning goals, but even if this is the case there are long-term consequences from the early learning experiences. How is motivation impacted by the increasing frequency of failure and related frustration? Are there practical ways to claw your way back from early failures once you fall behind?
Mastery learning
In the early 1970s, I became interested in two instructional strategies labeled mastery learning. These approaches proposed ways to respond to variability in the rate of learning. I will summarize them as Bloom’s Group-Based Method and Keller’s Personalized System of Instruction. Bloom was and continues to be a big name in education and gets a lot of attention. I see Keller as developing a system more attuned to the application of technology and AI. Both offered concrete proposals and encouraged a lot of research. The volume of research and related meta-analyses offers much to present-day efforts that lack the same detailed analyses (see the references to the work of Kulik and colleagues).
Bloom’s Group-Based Mastery: In the late 1960s, Benjamin Bloom, an educational psychologist, considered the optimal approach to individualized education. Bloom concluded that individual tutoring yielded the best results for learners. Bloom’s research indicated that tutoring could produce significant improvements in student achievement, with 80% of tutored students achieving a level of mastery only attained by 20% of students in traditional classroom settings. Bloom recognized the impracticality of providing one-on-one tutoring for every student. Instead, he challenged researchers to explore alternative instructional strategies capable of replicating the effectiveness of individual tutoring. This has been described as the 2-sigma challenge based on the statistical advantage Bloom claimed for tutoring.
Bloom’s (1968) approach to mastery learning was group-based. A group of learners would focus on content (e.g., a chapter) to be learned for approximately a week and would then be administered a formative evaluation. Those who passed this evaluation (often at what was considered a B level) would move on to supplemental learning activities, and those who did not pass would receive remediation appropriate to their needs. At the end of this second period of instruction (at about the two-week mark), students would receive the summative examination to determine their grades. Those who were struggling were provided more time on the core goals. Yes, this is a practical more than a perfect approach, as there is no guarantee that all students will have mastered the core objectives necessary for future learning by the end of the second week. A similar and more recent approach called the Modern Classroom Project categorizes goals as “need to know”, “good to know”, and “aim to know”. The idea is that not all possible goals can practically be achieved.
Keller’s Personalized System of Instruction: Fred Keller, drawing inspiration from Carroll’s work, developed the Personalized System of Instruction (PSI) in 1968. Keller proposed that presenting educational content in written format, rather than through traditional lectures, could provide students with the flexibility to learn at their own pace. PSI utilizes written materials, tutors, and unit mastery to facilitate learning. Students progress through units of instruction at their own pace, revisiting concepts and seeking clarification as needed when initial evaluations show difficulties. This self-paced approach enables students to dedicate additional time to challenging concepts while progressing more quickly through familiar material. The reliance on written materials that could be used by individuals allowed Keller’s approach to emphasize individual progress; it was not necessary to keep a group at a common pace.
PSI utilizes frequent assessments to gauge student understanding and identify areas requiring further instruction. These assessments are non-punitive, meaning they do not negatively impact a student’s grade. Instead, assessments provide feedback that guides students toward mastery of the material. If a student does not demonstrate mastery on an assessment, they receive additional support and instruction tailored to their specific needs, before retaking the assessment.
In Keller’s model, tutors play a crucial role in evaluating student progress, offering personalized feedback, and providing clarification or additional instruction when needed. The role of the tutor could be fulfilled by various individuals, including the teacher, teaching assistants, or even fellow students who have already achieved mastery of the subject matter. The teacher’s role in PSI shifts from delivering lectures to designing the curriculum, selecting and organizing study materials, and providing individualized support to students.
Adapting old models to modern technology
While mastery learning predates the widespread adoption of technology in education, technology has significantly enhanced its implementation. Meta-analyses generally found that mastery approaches offered achievement benefits when compared with traditional instruction. My interpretation is that interest in the original approaches waned not because of effectiveness, but because of practicality. Mastery approaches were simply difficult to implement. Online platforms and educational technologies can facilitate personalized learning experiences by delivering content, tracking student progress, and providing individualized feedback and support. Technology can also automate many of the administrative tasks associated with mastery learning, such as grading assessments and tracking student progress, freeing up educators to focus on providing individualized support. Both the Bloom and Keller approaches could be implemented making use of technology, but the greatest benefit would seem to accrue to the Keller approach.
AI, tutoring, and mastery learning
Recent mentions of mastery learning (Khan, Archambault, and colleagues) do so in combination with tutoring. Bloom originally offered his mastery approach as his own response to the two-sigma challenge. However, group-based mastery was compared to, not integrated with, tutoring. The number of professionals working in schools is not increasing, and if anything, class sizes are increasing. Greater individualization only increases the importance of individual monitoring and attention, and AI as tutor can reduce some of the demands on the limited time of professional educators.
Archambault and colleagues summarize the complication posed by the seemingly conflicting educational goals of individualized learning and the need for interaction and socioemotional learning. I have included the following quote from their work.
For example, cultivating classroom community through building relationships online and having students work together to develop social interaction at a distance may have competing interests with personalizing instruction such that each student can work at their own pace and through their own path to master course content.
Summary
Mastery learning is gaining increasing attention among educators who see the value of applying technology to individualize learning. This post summarizes the history of mastery instructional methods and offers other insights into how old ideas may be practically implemented with technology.
I have written multiple posts about mastery learning and current efforts to apply mastery principles. Reviewing some of these posts may be valuable if this summary sparks your interest.
References:
Archambault, L., Leary, H., & Rice, K. (2022). Pillars of online pedagogy: A framework for teaching in online learning environments. Educational Psychologist, 57(3), 178–191. https://doi.org/10.1080/00461520.2022.2051513
Bloom, B. S. (1968). Learning for mastery. Evaluation Comment, 1(2), 1–12.
Khan, S. (2024). Brave new words: How AI will revolutionize education (and why that’s a good thing). Viking.
Keller, F. S. (1968). “Good-bye, teacher…”. Journal of Applied Behavior Analysis, 1(1), 79–89.
Kulik, C. C., Kulik, J. A., & Bangert-Drowns, R. L. (1990). Effectiveness of mastery learning programs: A meta-analysis. Review of Educational Research, 60(2), 265–299.
Kulik, C. C., Kulik, J. A., & Bangert-Drowns, R. L. (1990). Is there better evidence on mastery learning? A response to Slavin. Review of Educational Research, 60(2), 303–307.
Kulik, J. A., Kulik, C. C., & Cohen, P. A. (1979). A meta-analysis of outcome studies of Keller’s personalized system of instruction. American Psychologist, 34(4), 307–318.
Thinking is not visible to ourselves or others, and this reality limits both personal analysis and assistance from others. I have always associated the request to show your work with learning math, so that the subprocesses of mathematical solutions can be examined, but the advantage can be applied, when possible, to other processes. I have a personal interest in the ways in which technology can be used to externalize thinking processes and the unique opportunities technology offers when compared with other methods of externalization such as paper and pen.
Ideas from different areas of interest sometimes come together in unexpected ways. This has been a recent experience for me, combining a long-term interest in argumentation with digital tools applied to learning. Argumentation may not spark an immediate understanding for educators. It sometimes helps if I connect it with the activity of debate, but it relates to many other topics such as critical thinking and the processes of science as well. It relates directly to issues such as the distribution of misinformation online and what might be done to protect us all from this type of influence.
For a time, I was fascinated by the research of Deanna Kuhn and wrote several posts about her findings and educational applications. Kuhn studied what I would describe as the development of argumentation skills and what educational interventions might be applied to address the limitations she observed. It is easy to see many of the limitations of online social behavior in the immaturity of middle school students engaged in a structured argument (debate). Immature interactions involving a topic with multiple sides might be described as egocentric. Even though there is an interaction with a common topic, participants mostly state their own positions, frequently without supporting evidence. As they go back and forth, they seldom identify the positions taken by an “opponent” or offer evidence to weaken such positions. Too often, personal attacks follow in the “adult” online version, and little actual examination of the supposed issues of interest is involved.
Consideration of the process of clearly stating positions and evidence for and against maps easily to what we mean by critical thinking and the processes of science. In the political sphere what Kuhn and similar researchers investigate relates directly to whether or not policy matters are the focus of differences of opinion.
Externalization and learning to argue effectively
Kuhn proposed that to improve (develop) critical thinking skills, learners would benefit from experiences encouraging reflection. An approach that proved productive across multiple studies was based on two techniques for encouraging reflection. Across multiple age groups (middle school, high school, college students), she had pairs of participants argue using online chat. A pair had to agree on a given “move” or statement before submission (externalizing rationales for consideration), and submitting statements in chat both allowed an opportunity to focus on the message without interference from the face-to-face pressures present in formal debate and created a record that could be critiqued. In some studies, the participants were asked to complete forms asking for a statement of the positions taken by opponents and the evidence offered in support of these positions. The effectiveness of the treatments was examined following training without such scaffolds.
AI arguments result in an external record
I and others have been exploring the experience of arguing with an AI opponent. One insight I had while exploring this activity was that it resulted in an external product that could be examined much in the way Kuhn’s chat transcripts could be examined. Classroom applications seem straightforward. For example, the educator could provide the same prompt to all of the students in the class and ask the students to submit the resulting transcript after an allotted amount of time. Students could be asked to comment on their experiences, and selected “arguments” could be displayed for consideration by the group. A more direct approach would use Kuhn’s pairs approach, asking that the pairs decide on a chat entry before it was submitted. The interesting thing about AI large language models is that the experience is different for each individual submitting the same prompt, or for the same individual submitting the prompt a second time.
I have described what an AI argument (debate) looks like and provided an example of a prompt that would initiate the argument and offer evaluation in a previous post. I have included the example I used in that post below. In this example, I am debating the AI service regarding the effectiveness of reading from paper or screen as I thought readers are likely familiar with this controversy.
…
Summary
Critical thinking, the process of science, and effective discussion of controversial topics depend on the skills of argumentation. Without development, the skills of argumentation are self-focused, lacking the careful identification and evaluation of opposing ideas. These limitations can be addressed through instructional strategies encouraging reflection, and the physical transcript resulting from an argument with an AI-based opponent provides the opportunity for reflection.
References:
Iordanou, K. (2013). Developing face-to-face argumentation skills: Does arguing on the computer help? Journal of Cognition & Development, 14(2), 292–320.
Kuhn, D., Goh, W., Iordanou, K., & Shaenfield, D. (2008). Arguing on the computer: A microgenetic study of developing argument skills in a computer-supported environment. Child Development, 79(5), 1310–1328.
Mayweg-Paus, E., Macagno, F., & Kuhn, D. (2016). Developing argumentation strategies in electronic dialogs: Is modeling effective? Discourse Processes, 53(4), 280–297. https://doi.org/10.1080/0163853X.2015.1040323
In my world, talk of AI is everywhere. I doubt most have a different impression, because nearly any news program has a story every other day or so commenting on AI capabilities, dangers, and the wealth and power being accumulated by developers. We have all experienced the history of the large language models, beginning with ChatGPT in November 2022.
I tried to find some specifics about the popularity of AI, and this is challenging. Multiple companies quickly became involved, and you can use free versions of AI programs with just a browser, making an accurate count of “users” difficult. We do know that a million users signed up for ChatGPT within three months.
So, where are we at a year and a half later? Again, you and I may use an AI large language model on a daily or at least weekly basis, but how much use is actually going on “out there”?
Studies have started to appear attempting to determine the frequency of use. Frequent use can be very different from “yeah, I tried that” use. My interpretation is that folks in many countries have heard of AI and quite a few have given at least one service a try, but most now appear puzzled by what should come next. One of these studies, with the broadest approach, surveyed respondents in six countries – Argentina, Denmark, France, Japan, the UK, and the US. Among those surveyed, awareness was high, but frequent actual use was low. Daily users ranged from 7% in the U.S. to 1% in Japan. Fifty-six percent of 18-24-year-olds had tried an AI service, compared with 16% of those over 55.
My personal interest concerns AI in schools, so I tried to locate studies that attempted to establish typical patterns of use by secondary students. Here is a 2024 study from Common Sense Media on this topic, available to all online. A very short summary concluded that about half of 14-22-year-olds have used an AI service, but only 4% report being daily users. Beyond these basic statistics, I found it startling that minority youth reported a higher frequency of use: 20% of Black and 10% of Latinx respondents claimed to be weekly users. I cross-checked this result several times to make certain I understood it correctly. When asked to categorize their use, young people reported searching for information, generating ideas, and school work, in that order. Another large category of use was generating pictures. The authors reported some concern when finding that searching for information was the most frequent category of use.
Participants were asked about concerns that limited their use of AI, and potential accusations of cheating ranked high among these young people.
I admit I need to review this study more carefully because it is not clear to me if the participants were including any classroom use in contrast to what I would call personal use.
The “what can I do with this” question
Mollick and the 10-hour investment. I have read several efforts by Ethan Mollick (NYTimes, Kindle book) and find his perspective useful. He claims using AI is different from learning other technology applications in that there are no exact instructions you can follow to find productive uses. Instead, he proposes that you invest 10 hours and try the tool you select on various tasks that you face daily. If you write a lot of emails, chat with the AI tool about what you want to say and see what it generates. Request modifications to improve what is generated to suit your needs. Ask it to create an image you might have a way to use. Ask it to generate ideas for a task you want to accomplish. Some may tell you that AI is not a person, and this is obviously the case, but forget this for a while and treat the AI service like an intern working with you. Converse in a natural way and give a clear description of what your tasks require. Ask the AI service to take on a persona and then explain your task. If you are trying to create something for a classroom situation, ask the service to act as an experienced teacher of XXX preparing for a lesson on YYY. Expect problems, but if you involve the tool in areas you understand, you should be able to identify what is incorrect and request improvements.
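As a concrete illustration of the persona suggestion, here is a minimal sketch using the OpenAI Python client (the model, persona, and task wording are my own illustrative choices, not Mollick’s):

```python
# A minimal sketch of persona-based prompting, assuming the OpenAI Python client.
# The persona and lesson topic are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would work
    messages=[
        {"role": "system",
         "content": "Act as an experienced middle school science teacher "
                    "preparing a lesson on photosynthesis."},
        {"role": "user",
         "content": "Suggest three classroom activities and note the "
                    "misconceptions each activity is likely to surface."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works in the chat interface itself; the API version simply makes the persona (the system message) explicit.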
I watched the recent Apple announcement regarding the company’s soon-to-be-released AI capabilities. Thinking about Apple’s approach, I could not help proposing that experiences with Apple products in the ways Apple plans could be a great gateway to finding personal, practical applications of AI (Apple wants you to think of their approach as Apple Intelligence). Apple intends to roll out a two-tiered model: AI capabilities available in a self-contained way on Apple devices and AI capabilities available off device. The on-device AI capabilities are designed to accomplish common tasks. Think of them as similar to what Mollick proposes – ways to accomplish daily tasks (e.g., summarization, image creation, text evaluation and improvement, finding something I know I read recently). AI capabilities are available within most Apple products and also within other services. I could not help wondering how Grammarly will survive with AI tools available to Apple users who own recent Apple equipment.
Obviously, I have yet to try the new Apple Intelligence tools and I doubt I will close out my AI subscriptions, but I do think Apple tools as a transition will increase day-to-day usage.
I happened across this news story from NBC concerning the accuracy of election information. The story reported data from a research organization involving the submission of requests to multiple AI services and then having experts evaluate the quality of the responses. I also then read the description provided by the research organization and located the data used by this organization (the questions and methodology).
The results showed that a significant portion of the AI models’ answers were inaccurate, misleading, and potentially harmful. The experts found that the AI models often provided information that could discourage voter participation, misinterpret the actions of election workers, or mislead people about politicized aspects of the voting process. The focus in the research was on general information and did not address concerns with misinformation from candidates.
I have been exploring how I might address this same issue and perhaps offer an example educators might try in their classrooms. Educators exploring AI topics over the summer may also find my approach something they can try. AI issues seem important in most classrooms.
As I thought about my own explorations and this one specifically, a significant challenge is having confidence in the evaluations I make about the quality of AI responses. For earlier posts, I have written about topics such as tutoring. I have had the AI service engage with me using content from a textbook I have written. This approach made sense for evaluating AI as a tutor, but would not work with the topic of explaining political procedures. For this evaluation, I decided to focus on issues in my state (Minnesota) that were recently established and would be applied in the 2024 election.
The topic of absentee ballots and early voting has been contentious. Minnesota has a liberal policy allowing anyone to secure a mail ballot without answering questions about qualifying conditions, and voters can now request that a ballot be sent by default in future elections without repeated requests. The second policy just went into effect in June, and I thought it would represent a good test of an AI system: are AI responses based on general information about elections, mixing the situation in some states with the situation in others, or are they specific to individual states and recent changes in election laws?
Here is the prompt I used:
I know I will not be in my home state of Minnesota during future Novembers, but I will be in Hawaii. Can I ask for an absentee ballot to be automatically sent to me before each election?
I used this prompt with ChatGPT (4) and Claude and found all responses to be appropriate (see below). When you chat with an AI tool using the same prompt, one interesting observation is that each experience is unique because the response is constructed each time the prompt is submitted.
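For readers who want to verify this variability themselves, here is a minimal sketch, assuming the OpenAI Python client, that submits the identical prompt twice (the model name is an illustrative choice):

```python
# A minimal sketch showing that identical prompts yield newly generated,
# potentially different responses on each submission.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "I know I will not be in my home state of Minnesota during future "
    "Novembers, but I will be in Hawaii. Can I ask for an absentee ballot "
    "to be automatically sent to me before each election?"
)

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)
```

Comparing the two runs side by side is a simple way to check whether factual claims, such as state-specific election rules, stay consistent.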
I decided to try one more request which I thought would be even more basic. As I already noted, Minnesota does not require a citizen to provide an explanation when asking for a mail-in ballot. Some states do, so I asked about this requirement.
Prompt: Do you need an explanation for why you want an absentee ballot in Minnesota?
As you can see in the following two responses to this same prompt, I received contradictory responses. This would seem the type of misinformation that the AI Democracy Project was reporting.
Here is a related observation that seems relevant. If you use Google search and have the AI lab tool turned on, you have likely encountered an AI response to your search before you see the traditional list of links related to your request. I know that efforts are being made to address misinformation in regard to certain topics, and here is an example in response to such concerns. If you use the prompt I have listed here, you should receive a list of links even if Google sends you a summary for other prompts (note: this is different from submitting the prompt directly to ChatGPT or Claude). For a comparison, try this nonpolitical prompt and you should see a difference: “Are there disadvantages from reading from a tablet?” With questions related to election information, no AI summary should appear, and you should see only links associated with your prompt.
Summary
AI can generate misinformation, which can be critical when voters request information related to election procedures. This example demonstrates the problem and suggests a way others can explore it.
The issue I have with streaming television services is the same as the issue I have with services that support my personal knowledge management: many have a feature or two that I find helpful, but when should I stop paying to add another feature I might use? Exploring the pro version of AI tools so I can write based on experience is one thing, but what is a reasonable commitment to multiple subscriptions for the long term in my circumstances?
My present commitments are as follows:
ChatGPT – $20 a month
Perplexity – $20 a month
Scispace – $12 a month
Smart Connections – $3-5 a month for ChatGPT API
Those who follow me on a regular basis probably have figured out my circumstances. I am a retired academic who wants to continue writing for what can most accurately be described as a hobby. There are ads on my blogs and I post to Medium, but any revenue I receive is more than offset by my server costs and the Medium subscription fee. So, let’s just call it a hobby.
The type of writing I do varies. Some of my blog posts are focused on a wide variety of topics mostly based on personal opinions buttressed by a few links. My more serious posts are intended for practicing educators and are often based on my review of the research literature. Offering citations that back my analyses is important to me even if readers seldom follow up by reading the cited literature themselves. I want readers to know my comments can be substantiated.
I don’t make use of AI in my writing. The exception would be that I use Smart Connections to summarize the research notes I have accumulated in Obsidian, and I sometimes include these summaries. I rely on two of these AI tools (SciSpace and Perplexity) to find research articles relevant to topics I write about. With the proliferation of so many specialized journals, this has become a challenge for any researcher. There is an ever-expanding battery of tools one can use to address this challenge, and this post is not intended to offer a general review of this tech space. What I offer here is an analysis of a smaller set of services that I hope identifies issues others may not have considered.
Here are some issues that add to the complexity of making a decision about the relative value of AI tools. There are often free and pro versions of these tools, and the differences vary. Sometimes you have access to more powerful or more recent versions of the AI. Sometimes the pro version is the same as the free version, but you have no restrictions on the frequency of use. Occasionally, other features such as exporting options or online storage of past activities become available in the pro version. Some differences deal with convenience, and there are workarounds (e.g., copying from the screen with copy and paste vs. exporting).
Services differ in the diversity of tools included, and this can be important when selecting several services from a collection in comparison to committing to one service. Do you want to generate images to accompany content you might write based on your background work? Do you want to use AI to write for you, or perhaps to suggest a structure and topics for something you might write yourself?
There can also be variability in how well a service does a specific job. For example, I am interested in a thorough investigation of the research literature. What insights related to the individual articles identified are available that can be helpful in determining which articles I should spend time reading?
Perplexity vs. SciSpace
I have decided that Perplexity is the most expendable of my present subscriptions. What follows is the logic for this personal decision and an explanation of how it fits my circumstances.
I am using a common prompt for this sample comparison:
What does the research conclude related to the value of studying lecture notes taken by hand versus notes taken on a laptop or tablet?
Perplexity
I can see how Perplexity provides a great service for many individuals who have broad interests. I was originally impressed when I discovered that Perplexity allowed me to focus its search process on academic papers. When I first generated a prompt, I received mostly sources from Internet-based authors on the topics that were of interest to me, and as I have indicated, I was more interested in the research published in journals.
I mentioned that there is a certain redundancy of functions across my subscriptions, and the option of writing summaries or structuring approaches for my own writing using different LLMs was enticing.
The characteristic I value in both Perplexity and SciSpace is that summary statements are linked to sources. A sample of the output from Perplexity appears below (the red box encloses links to sources).
When the content is exported, the sources appear as shown below.
I went through these sources, and the results are what I found disappointing. I have read most of the research studies on this topic and have specific sources I expected to see. The sources produced were from what I would consider low-value outlets when I know better content is available. These are not top-tier educational research journals.
Schoen, I. (2012). Effects of Method and Context of Note-taking on Memory: Handwriting versus Typing in Lecture and Textbook-Reading Contexts. [Senior thesis]
Emory J, Teal T, Holloway G. Electronic note taking technology and academic performance in nursing students. Contemp Nurse. 2021 Apr-Jun;57(3-4):235-244. doi: 10.1080/10376178.2021.1997148. Epub 2021 Nov 8. PMID: 34674607.
Wiechmann W, Edwards R, Low C, Wray A, Boysen-Osborn M, Toohey S. No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study. J Educ Eval Health Prof. 2022;19:8. doi: 10.3352/jeehp.2022.19.8. Epub 2022 Apr 26. PMID: 35468666; PMCID: PMC9247713.
Crumb, R.M., Hildebrandt, R., & Sutton, T.M. (2020). The Value of Handwritten Notes: A Failure to Find State-Dependent Effects When Using a Laptop to Take Notes and Complete a Quiz. Teaching of Psychology, 49, 7 – 13.
Mitchell, A., & Zheng, L. (2019). Examining Longhand vs. Laptop Debate: A Replication Study. AIS Trans. Replication Res., 5, 9.
SciSpace
SciSpace was developed with more of a focus on the research literature.
The output from the same prompt generated a summary and a list of 90 citations (see below). Each citation appears with characteristics drawn from a list available to the user. These supplemental comments are useful in determining which citations I may wish to read in full. Various filters can be applied to the original collection to help narrow the output. Also included are ways to designate the recency of the publications to be displayed and to limit the output to journal articles.
Here is the finding I found most important. SciSpace does provide many citations from open-access journals. These are great, but I was most interested in whether the main sources I knew should be there would be generated. These citations were included.
Luo, L., Kiewra, K. A., Flanigan, A. E., & Peteranetz, M. S. (2018). Laptop versus longhand note taking: Effects on lecture notes and achievement. Instructional Science, 46(6), 947–971. doi: 10.1007/S11251-018-9458-0
Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159–1168. doi: 10.1177/0956797614524581
Bui, D. C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299–309. doi: 10.1037/A0030367
Summary
This post summarizes my thoughts on which of multiple existing AI-enabled services I should retain to meet my personal search and writing interests. I found SciSpace superior to Perplexity when it came to identifying prompt-relevant journal articles. Again, I have attempted to be specific about what I use AI search to accomplish and your interests may differ.
I have been working my way through a couple of new and popular books that consider the importance and perils of AI and that contain at least significant commentary on AI in education. There is not a lot in these books that is based on the research literature I tend to find most influential, but the authors have sufficient experience and opportunities to offer some very credible insights. This is not a book report, but I want to credit a few ideas that encouraged my own exploration.
This time of the year, I often suggest some topics educators might explore over the summer while they have a little more time. With the attention AI has received in the past year and a half, I likely made a related recommendation last year at about this time. Reading these two books (citations at the end of this post) would be very useful if you spend time reading related to your profession. Perhaps you read in a different area. Hopefully, I can offer a few insights that will be sufficient to encourage your own exploration of AI tools.
Ethan Mollick’s book, Co-Intelligence, is different in that it focuses on applications and ways to think about AI capabilities. Mollick offers interesting ideas that sometimes run in opposition to traditional advice. For example, it is OK to interact with AI tools as if they were a person even though you know they are not. Asking questions and making requests as you would with another person is just a practical way to explore AI tools. Mollick also suggests that we stop looking for how to do it techniques for AI. Instead, he suggests we explore. If you have the time, try to use AI whenever there seems some possible value and see what happens. In other words, once you get past the basics of how to use a given tool, explore. Value and issues will be different for all of us so the only way to make decisions is to spend time. Again, for educators, the summer seems a great time to explore. Finally, understand that your present experiences will be with AI tools that are the least powerful they will ever be. If you find something interesting, but flawed in some way, just wait until you see what will come next.
There were some other suggestions about prompts I found useful. Perhaps the most concrete example is what was described as chain-of-thought prompting. AI tools will try to provide what you ask for, but it may be helpful to provide the sequence you want the tool to follow if a given process seems useful.
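For example (my own illustrative wording, not an example from the book), a chain-of-thought prompt might spell the sequence out explicitly:

Prompt: First, summarize the attached chapter in two sentences. Second, list the three most important concepts and define each. Third, write one quiz question for each concept. Complete each step before moving to the next.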
Sal Khan, the creator of Khan Academy, offers thoughts on how AI will be helpful in education in his new book “Brave New Words”. Khanmigo, the adaptation of AI as a tutor within the context of the other opportunities Khan and colleagues provide for learners and educators, received a good deal of attention. An interesting theme seemed to be how this AI tool was prepared to assist, but not do for you (my interpretation).
One example, which Khan uses to start his book, I found particularly interesting and have attempted to use as the basis for the implementation I will describe in the comments that follow. It describes a collaborative writing experience in which the AI tool and a student were assigned the personas of two individuals writing collaboratively. The two personas took turns, one introducing a portion of a story and the other finishing the section the first had initiated: two collaborative writers, one controlled by AI and the other by a student.
My version
Several of my posts have considered AI as a tutor, and I have tried to demonstrate how existing AI tools can be used to implement various functions provided by a human tutor. This post makes some similar arguments. Here, I describe an effort to create something similar to what Khan described in his account of collaborative writing, but I am proposing an approach more focused on the strategy of writing to learn. The core idea of writing to learn is that the cognitive processes involved in explaining something by writing require beneficial cognitive activities that would be less likely to occur with more passive study strategies. So, I have been attempting to write a prompt that creates a version of the Khanmigo collaborative story writing example applied as a writing-to-learn task.
It was just announced that Khanmigo will be a free AI tool for teachers, subsidized by Microsoft. Perhaps it would have made sense to use one of the Khanmigo tools to implement what I have been working on. I will leave it to others to make such efforts. I wanted to try to create a similar approach using the AI services I have already been working with and paying for.
What follows was developed using the paid version of ChatGPT with the most recently available model, 4o. I think the following prompt illustrates the chain-of-thought approach (Mollick) and is similar in approach to the Khan collaborative writing example.
The prompt: Obviously, this is longer and more detailed than how many would request action from an AI tool. The prompt was entered into ChatGPT accompanied by the textbook chapter as an uploaded file.
Create an activity in which two students work together to write a document. You will represent a graduate student studying educational technology and I will represent an undergraduate student taking a course preparing teachers to use technology in K12 classrooms. This interaction will be based on information from the attached document. The graduate student will first identify and list significant topics from this document. Then the grad student will identify a topic for the collaborative writing project from this list and suggest an outline. The grad student will then write the introduction to this topic. The graduate student will next write the first couple of sentences about the first item from the outline and then ask the undergraduate to complete writing comments to complete the comments about this section of the outline. Do not include examples or applications in the introductory sentences and allow the undergraduate to provide comments on these topics. Wait for the undergraduate to add their additional content before continuing. After the undergrad submits their additions, offer feedback that evaluates what the undergraduate student has added. What the undergraduate has said should be identified and then how well it summarizes what was included in the material to be read should be presented. Limit feedback to what the undergraduate writes for this section without including information the outline suggested should be included in other sections. The graduate student should then ask the undergraduate student if they want to continue. Do not continue until a response of yes is submitted. If the undergraduate student responds “yes”, the graduate student should move to the next item from the outline and continue the same sequence of interactions until all items from the outline have been completed.
AI response:
Significant Topics from the Document
1. Cognitive System and Learning
2. Memory Stores (Short-Term and Long-Term Memory)
3. Working Memory Constraints
4. Cognitive Processes and Mental Tools
5. Metacognition and Executive Control
6. Higher Order Thinking Skills
7. Meaningful Learning and Constructivism
8. Reception vs. Discovery Learning
9. Modeling Cognitive Behaviors
10. Authentic Activities
11. Project-Based Learning
12. Role of Technology in Learning
Selected Topic for Collaborative Writing Project
Higher Order Thinking Skills
Suggested Outline
1. Introduction
– Definition of higher-order thinking skills
– Importance in the 21st-century education
2. Problem-Solving
– Description and components
– Importance in education
3. Critical Thinking
– Definition and key elements
– Role in evaluating information
4. Creative Thinking
– Explanation and examples
– Relevance in innovative problem-solving
5. Integration and Application
– How these skills interrelate
– Examples of application in real-world scenarios
Introduction
Higher-order thinking skills (HOTS) encompass a range of cognitive processes that are essential for success in the 21st-century educational landscape. These skills go beyond basic memorization or recall of facts, engaging students in complex tasks such as analysis, evaluation, synthesis, and creation of new ideas. HOTS are critical in preparing students to navigate the vast amounts of information available today and to solve novel problems effectively.
Problem-Solving
Problem-solving is a foundational aspect of higher-order thinking skills. It involves identifying a problem, generating possible solutions, evaluating these solutions, and implementing the most effective one. In education, problem-solving skills enable students to tackle complex challenges and develop strategies that can be applied across various domains.
Your turn: Please add your comments about the importance of problem-solving in education and provide examples of how it can be incorporated into classroom activities.
Me – entered as a prompt:
The issue of whether or not problem-solving is domain specific has received a great deal of attention. Some content areas might be more directly focused on problem solving – e.g., coding, mathematics. If problem-solving is domain specific, then the problem-solving skills developed in one area would not transfer to other areas in a positive way. This would suggest teachers in all areas should offer opportunities for problem-solving challenges related to what is studied in these areas.
AI: Feedback
Your addition about the domain specificity of problem-solving is excellent. It addresses a critical aspect of the debate on how problem-solving skills are best developed and applied. Your point about certain content areas, like coding and mathematics, being more directly focused on problem-solving is particularly insightful. Highlighting the importance of teachers in all subject areas providing problem-solving opportunities helps underline the necessity for a holistic approach in education. This aligns well with the document’s emphasis on the importance of developing these skills across various domains.
Would you like to continue to the next section on Critical Thinking?
Me: entered as prompt
Yes
AI:
Critical Thinking
Critical thinking is another crucial component of higher-order thinking skills. It involves the ability to analyze information objectively and make reasoned judgments. Critical thinking requires students to engage in reflective and independent thinking, assess the validity and relevance of information, and form well-supported conclusions. This skill is vital in today’s information-rich world, where individuals must navigate a vast array of sources and discern credible information from misinformation.
Your turn: Please add your comments about the key elements of critical thinking and its role in evaluating information, particularly in an educational setting.
ETC:
This interaction continues according to the instructions in the prompt. ChatGPT has a method for sharing a chat with the public. Here is the link that should display the additional content AI and I generated (public access to chat).
Related comments: Here are some observations related to my experience.
This interaction is developed on the fly. Redo the entire process and the elements of the interaction may all be different. The topics, the outline, the prompts, and even the way the content is formatted on the screen will differ. The prompt is similar to a program in that it specifies a sequence of operations you want the AI to follow. However, unlike a coded program, you are asking for a sequence to be generated rather than executed in a fixed manner. The process of generation (it is generative AI) will work a little differently each time it is implemented.
I found the execution of the prompt varied with the AI tool I used. ChatGPT worked the best. I thought that NotebookLM would be an ideal way to implement this writing-to-learn task because of the way you can upload content to be the target of interactions with the AI. It is my opinion that the way NotebookLM works is easier to understand and explain. However, after an extended series of attempts to rework the prompt, I was unable to get the required sequence to work. NotebookLM would provide feedback in response to the first item from the outline and then stop.
This post is not an attempt to promote the specific prompt I wrote. I certainly don’t care if others try it with information sources of their choosing. This was an exploration for me and it is my hope others may continue in a similar way using my initial effort as a guide.
One final point I think is important. The approach I am describing here is using the interactive capabilities of AI to focus on an information source I trust. I am not asking AI to use its generic information base to provide the content to be learned. The nature of the interaction may not be perfect, but it primarily focuses on a vetted source and assumes learners have read this source.
Resources:
Khan, S. (2024). Brave new words: How AI will revolutionize education (and why that’s a good thing). Viking.
Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.