Is there such a thing as aptitude?

The concept of aptitude, and how differences in aptitude that influence learning might be reduced through mastery strategies, has interested me throughout my academic career. I understood aptitude as roughly what I thought of as intelligence. Intelligence is an abstraction that researchers attempt to measure with intelligence tests and investigate in practice through correlations with academic progress. Intelligence tests are not a direct measure of aptitude, but really an estimate based on differences in what individuals have learned and can do. Even the simple representation of intelligence as IQ (intelligence quotient) imagines intelligence as how much has been learned (mental age, or MA) divided by age (chronological age, or CA).
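In its classic ratio form, the computation is simple enough to state directly (the multiplier of 100 is just conventional scaling):

```latex
\mathrm{IQ} = \frac{\mathrm{MA}}{\mathrm{CA}} \times 100
```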

Intelligence tests have come under a great deal of criticism based on potential racial/SES biases. These criticisms are certainly fair, but the tests do predict academic achievement, and I was never convinced that the development and use of such tests should be abandoned. The correlations measure something, and whatever that something is does not disappear when the tests are not given. If both tests and educational practice are biased, why not recognize that this is the case?

The theoretical basis for mastery learning (see the Arlin and Bloom references) proposes that educators consider the rate of learning and accept that this rate differs greatly among individuals. To me, this sounded very much like intelligence, and the concept of IQ is obviously related to learning rate (how much was learned per unit of time). However, what these researchers and educational theorists proposed was that other factors were involved in traditional educational practice, and these other factors had a significant impact on achievement. While the time required for learning was determined by aptitude, it was also influenced by whether the method of instruction met individual needs and by differences in existing knowledge. Think of it this way. If aptitude-based differences create a range of learning speeds, and a class moves through learning experiences faster than some students can master important skills and concepts, those students will in the future be burdened not only by learning at a slower rate but also by missing knowledge prerequisite to the new skills and concepts they are trying to learn. Over time, these missing elements (Sal Khan calls this Swiss cheese learning) will accumulate, increasing failure and frustration in some learners. Mastery learning strategies focus on limiting the accumulation of missing prerequisites by individualizing the pace of instruction to the rate of mastery. In completely individualized approaches, some students do move more slowly (and some faster), but the theory proposes that the rate of actual mastery would be faster for all learners than without mastery strategies, because deficits would not accumulate in learners needing more time and more capable students could move more quickly. The work of Arlin attempted to estimate what these differences in the rate of learning might be. When ratios between the slowest and fastest learners such as 5:1 or 7:1 are proposed, it is easy to see why some students would fall hopelessly behind.

Individualization is challenging. Tutoring has always been a personal interest, but it is not economically feasible at scale. With access to personal computers in the 1990s, I saw the first method that might provide individualization, and this continues as an interest. Many criticize present attempts to use technology for direct instruction as boring and depersonalizing. I think these folks have the wrong idea, but this is a topic I address elsewhere. Here, I want to recognize recent research claiming that individualized instruction with technology (Koedinger et al.) may not only deal with individual differences in background knowledge but also challenge the notion that there are meaningful differences in the rate of learning.

How variable is the impact of aptitude?

Koedinger and colleagues studied the work of thousands of students from all grade levels working on different types of content using the kind of technology-enabled methods I described above. Their focus was different in being based on the mastery of very specific capabilities rather than courses or even weeks of work. The learning experiences consisted of initial exposure to information (video or written) followed by a sequence of worked activities. I suppose a worksheet would be an example of a worked activity, but the activities varied considerably in type. The goal was to reach 80% mastery on a worked activity. The authors found that on the first attempt following the acquisition phase, the top half of students scored 75% and the bottom half scored 50%. The top half then required 3.7 practice trials to reach mastery (80%) and the bottom half 13 trials. What startled the researchers was that the gain per practice trial was very similar, leading them to conclude that learning rate was nearly the same once differences in existing knowledge were taken into account. Aptitude (if I can be allowed to switch terms here) accounts for little difference in speed.
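To see how a shared per-trial gain can coexist with very different trial counts, here is a toy Python computation. The paper models accuracy in log-odds; the 0.1 logits-per-opportunity gain is my own assumption, loosely inspired by the regularity the authors report, and the starting accuracies are the 75% and 50% figures above. The resulting trial counts land near the reported 3.7 and 13.

```python
# Toy illustration (my numbers, not the paper's fitted model): equal
# learning rates in log-odds, different starting points, same 80% bar.
import math

def logit(p: float) -> float:
    return math.log(p / (1 - p))

def trials_to_mastery(start: float, gain_per_trial: float = 0.1, mastery: float = 0.80) -> float:
    """Trials needed for accuracy to climb from start to mastery at a fixed log-odds gain."""
    return max(0.0, (logit(mastery) - logit(start)) / gain_per_trial)

for label, start in [("top half", 0.75), ("bottom half", 0.50)]:
    print(f"{label}: ~{trials_to_mastery(start):.1f} practice trials")
# top half: ~2.9 practice trials
# bottom half: ~13.9 practice trials
```

The point of the sketch is only that identical rates plus unequal starting knowledge are enough to reproduce the pattern; nothing in the arithmetic requires an aptitude difference.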

I am not convinced I would interpret these results in the same way given the method, but I do like the demonstration that allowing additional learning trials lets students reach the same level of achievement. I encourage interested parties to review the study themselves and see if they agree with my assessment. The statistical method is quite sophisticated, and I wonder what interpretations the method allows. I would be more convinced had the researchers carried their research over an extended period of time and actually determined what happens when individual differences in existing knowledge are eliminated. The difference in understanding after the initial phase of exposure to new content was substantial, and while likely partly a product of existing differences in background, it seems to me the difference could also partly reflect aptitude. Since learners equated on existing background knowledge were not compared, there is no demonstration that aptitude plays no role in determining the number of practice trials that are required.

I am pleased to see that this type of research continues and assume this study will generate replications and hopefully extensions.

Additional comments on mastery learning and learning speed

Arlin, M. (1984). Time variability in mastery learning. American Educational Research Journal, 21(1), 103–120.

Arlin, M. (1984b). Time, equality, and mastery learning. Review of Educational Research, 54(1), 65–86.

Bloom, B. S. (1974). Time and learning. American Psychologist, 29(9), 682–688.

Khan, S. (2012). The one world schoolhouse: Education reimagined. Twelve.

Koedinger, K. R., Carvalho, P. F., Liu, R., & McLaughlin, E. A. (2023). An astonishing regularity in student learning rate. Proceedings of the National Academy of Sciences, 120(13), e2221311120.


Content-focused AI for tutoring

My explorations of AI use to this point have resulted in a focus on two applications – AI as tutor and AI as tool for note exploration. Both uses are based on the ability to focus on information sources I designate rather than allowing the AI service to rely on its own body of information. I see the use of AI to interact with the body of notes I have created as a way to inform my writing. My interest in AI tutoring is more related to imagining how AI could be useful to individual students as they study assigned content.

I have found that I must use different AI services for these different interests. The reason is that two of the most popular services (NotebookLM and OpenAI’s Custom GPTs) limit the number of inputs that can be accessed. I had hoped that I could point these services at a folder of notes (e.g., Obsidian files) and then interact with this body of content. However, both services presently allow only a small number of individual files (10, perhaps 20) to be designated as source material. This is not about the amount of content; the focus of this post involves using these two services to interact with a single file of 27,000 words. I assume in a year the number of files will be less of an issue.

So, this post will explore the use of AI as a tutor applied to assigned content, as a secondary or higher ed student might want to do. In practice, what I describe here would require that a student have access to a digital version of assigned content not protected in some way. For my explorations, I am using the manuscript of a book I wrote before the material was converted to a Kindle book. I wanted to work with a multi-chapter source of a length students might be assigned.

NotebookLM

NotebookLM is a newly released AI service from Google. AI prompts can be focused on content that is available in Google Drive or uploaded to the service. The service is available at no cost, but it should be understood that this is likely to change when Google is ready to offer a more mature product. Investing time in this service rather than others allows the development of skills and the exploration of potential, but in the long run some costs will be involved.

Once a user opens NotebookLM and creates a notebook (see the red box surrounding New Notebook), external content to be the focus of user prompts can be added (second image). I linked the notebook to the file I used in preparation for creating a Kindle book. Educators could create a notebook on unprotected content they wanted students to study.

The following image summarizes many of the essential features of NotebookLM. Starting with the right-hand column, the textbox near the bottom (enclosed in a red box) is where prompts are entered. The area above (another red box) provides access to the content used by the service in generating the response to a prompt. The large area on the left-hand side displays the context associated with one of the referenced sources, with the specific content used highlighted.

Access to a notebook can be shared and this would be the way an educator would provide students access to a notebook prepared for their use. In the image below, you will note the icon (at the top) used to share content, and when this icon is selected, a textbox for entering emails for individuals (or for a class if already prepared) appears.

Custom GPTs (OpenAI)

Once you have subscribed to the monthly payment plan for ChatGPT-4, accessing the service will bring up a page with the display shown below. The page allows access to ChatGPT and to any custom GPTs you have created. To create a custom GPT, you select Explore and then Create a GPT. Describing the process of creating a GPT would require more space than I want to use in this post, but the process might best be described as conversational. You basically interact by describing what you are trying to create, and you upload external resources if you want prompts to be focused on specific content. Book Mentor is the custom GPT I created for this demonstration.

Once created, a GPT is used very much in the same way a NotebookLM notebook is used. You use the prompt box to interact with the content associated with that GPT.

What follows are some samples of my interactions with the content. You should be able to see the prompt (Why is the word layering used to describe what the designer does to add value to an information source?)

Prompts can generate many kinds of interaction (see the section below that describes what some of these interactions might be). One type I think has value in using AI as a tutor is to have the service ask you a question. An example of this approach is displayed in the following two images. The first image shows a request for the service to generate a multiple-choice question about generative activity, to which I then respond (correctly) and receive feedback. The second image shows the flexibility of the AI. When responding to the question, I thought a couple of the responses could be correct. After I answered the question and received feedback, I asked about an answer I did not select, wondering why this option could not also be considered correct. As you see in the AI reply, the system understands my issue and acknowledges how that option might be correct. This seems very impressive to me and demonstrates that interaction with the AI system allows opportunities that go beyond self-questioning.

Using AI as tutor

I have written previously about the potential of AI services to interact with learners to mimic some of the ways a tutor might work with a learner. I make no claims of equivalence here. I am proposing only that tutors are often not available and an AI system can challenge a learner in many ways that are similar to what a human tutor would do. 

Here are some specific suggestions for how AI can be used in the role of tutor

Summary

This post describes two systems now available that allow learners to work with assigned content in ways that mimic how a tutor might work with a student. Both systems would allow a designer to create a tool focused on specific content that can be shared. ChatGPT custom GPTs require that those using a shared GPT have an active $20 per month account, which probably means this approach would not presently be feasible for common application. Google’s notebooks can be created at no cost to the designer or user, but this will likely change when Google decides the service is beyond the experimental stage. Perhaps the capability will be included in present services designed for educational situations.

While I recognize that cost is a significant issue, my intent here is to propose services that can be explored as proof of concept so that educators interested in AI opportunities might identify future productive classroom applications.


Evaluating tech tools for adult learning

I feel comfortable writing about learning in educational environments. I have reviewed many instructional and learning strategies, read applied studies intended to evaluate the efficacy of these strategies, and read a substantial amount of the basic cognitive research potentially explaining the why of the applied investigations. In a small way, I have contributed to some of this research. 

As my life circumstances have changed, I have begun exploring related, but unfamiliar topics. In retirement, I am by definition no longer playing an active role as a salaried educator or researcher. I retain the opportunity to access the scholarly literature as an emeritus faculty member, but I can no longer engage as a researcher. These changes led to a different perspective. I have become more interested in other folks like me who are still interested in learning and how they go about responding to such interests. 

As I have contemplated this situation, it has become clear that it is not simply a matter of age. While it was very important for me to constantly learn while I was working, I don’t think I spent much time considering how I should best go about it. There was work to be done, and despite my own focus on education, I did little to consider the strategies of my own learning.

I began to think more deeply about self-directed learning, adult learning, or whatever else might be the current way to describe this situation when I began participating in a book club that has as one interest Personal Knowledge Management (PKM) and the technology tools that can be applied when committed to implementing this concept. For those who are unfamiliar with PKM, one way to gain insight would be to read a couple of the self-help books explaining views on this topic and describing techniques argued to be useful in achieving goals consistent with the general idea of Personal Knowledge Management.

Sönke Ahrens, How to Take Smart Notes: One Simple Technique to Boost Writing, Learning and Thinking

Tiago Forte, Building a Second Brain: A Proven Method to Organize Your Digital Life and Unlock Your Creative Potential

There is plenty of specific information available from such books and other online resources regarding how to study topics for understanding and retention. It is easy to locate tutorials for online services and apps to implement these strategies. There seem to be hundreds of posts on Medium, Substack, and YouTube with titles like “My Obsidian Workflow”, “I Switched From OneNote to Notion and Can’t Believe My New Productivity”, and “All of the Notetaking Apps in One Post”. There must be something people want to understand and evaluate here. When I dig deeper, there are some logical arguments proposed to justify the techniques digital tools enable, such as the creation of permanent and atomic notes, linking notes, and progressive summarization, and I can sometimes associate these techniques with cognitive concepts I know, such as generative learning, spaced repetition, and retrieval practice.

What I finally decided I was missing was the type of applied research I found readily available when specific study techniques are proposed for classroom use. Learning and studying over long periods is not really what is studied in K12 and postsecondary education. What students know is measured over time, but not frequently how different methods of study influence the development of skills and knowledge. Differences in what students can do on the next exam or at the end of a course are typically the focus. This seems different from the goal of evaluating learner-guided activities intended to develop knowledge and skills over many years.

The time frame is not the only difference. Some of the strategies for school and adult independent note-taking are similar on the surface but different enough to warrant additional research. Note-taking, sometimes described as note-making by advocates of some PKM methods to differentiate the processes, is a good example. In the Smart Notes approach, isolating specific concepts as individual notes written with enough context to be interpretable over time, and then linking these individual notes to other notes by way of multiple links, is quite different from how students take and make use of notes. The note-taking tools are different, the goals are different, and the mechanisms of creating and then acting on the written record are different. I want to know if the mechanics of these differences are actually useful. Controlled comparisons would be interesting, but so would studies examining how adults familiar with these approaches make use of what the tools allow over time, if they actually do. Do learners working for their own purposes stick with the logic proposed for the use of a learning tool, or do they modify the ideal approach into something simpler and less cognitively demanding? Formal research methods have proven useful in understanding study strategies proposed for classroom use and should be applied in evaluating self-directed adult learning.

I don’t think much if any of the type of formal research I propose exists. At least, I have not been able to locate this work. Maybe the payoff for such effort just is not there. Maybe there is a lack of grant support to fund academic research, but we academics are still interested in topics that seldom bring funding. There is a payoff available to those who develop tools and services in the form of subscriptions and for those writing self-help books that attract attention in the form of sales. 

As I consider what it would take to work on these topics, I can imagine the challenges researchers would face. How would you collect data and how would you assure privacy when the tools used are often associated with work? How would you get individuals to participate in studies? What would individuals be willing to provide if you wanted to evaluate the effectiveness of the technique employed? I at least would hope individuals might be willing to provide information about the tools they used, how long they have used these tools, and how they have used the tools and perhaps changed their patterns of use over time. 

Adults continually face learning tasks, both to keep up with vocational demands and for personal growth. Given rapid advancements in so many areas and so many information sources, learning and learning to learn using technology would seem of increasing value. Perhaps by explaining my observations I can interest those still involved as active researchers. It is also possible I am missing a body of research that would address my interests. If this is the case, I would welcome suggestions.


Design learning experiences using generative activities – Layering

I have written multiple posts explaining generative activities and how such external activities encourage productive cognitive behaviors. Some of these posts describe specific classroom applications of individual generative tasks. In this post, I intend to describe how educators can apply some of these generative activities when they assign web content (pages or videos).

In many cases, online content assigned in K12 classrooms was not prepared as instructional content. For example, an article from Scientific American might offer information relevant to a specific standard addressed in sophomore biology. What activities might an instructor add to help learners understand, remember, and possibly apply concepts within this article? A textbook, by comparison, would likely have activities inserted at the end of a chapter, added as boxes within content, or recommended in a teacher’s manual. Instructors often make such additions as class assignments. What I am supporting here is similar to what educational researchers have described as adjunct questions. These were originally questions added within instructional texts or attached at the end of such texts. Embedded activities play different roles than even the same activities might play when delayed and isolated from the informative content. My argument is that there is a difference between information and instructional content, and attaching generative learning activities at the time of initial exposure is a way to make this transition.

A couple of years ago I became interested in a group of online services that were developed to improve the educational value of online content (web pages and videos). I developed my own way of describing what these services were developed to accomplish. Layering seemed a reasonable description because, for ethical and legal reasons, these services could not actually modify the content originally shared by content creators. What a layering service could do was take the feed from the creator’s server and add elements on top of that content. Elements are additions that can encourage important cognitive behaviors in a learner.

With a layering service, the content a learner encounters is a combination of the content from the content creator and additions layered on this content. Two sources and servers are involved. From the perspective of a designer, a layering service works by accepting the URL for a web page or video and then allowing the designer to add elements that appear within or on top of the content from the designated source. The layering service sends this combination to the learner; the original document is unchanged, and the original is still downloaded from the creator’s server each time the combination of original and layered content is requested. Ads still appear, and the content server still records the download to give the creator credit. The layering service generates a link provided to learners and recreates the composite of content and designer additions each time a learner uses that link.
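A minimal sketch may make the pattern concrete. This is my own illustration of the general architecture, not the implementation of any named service; the function name and element format are hypothetical. The key property is that the wrapper copies nothing: it points back to the creator’s URL, so that server is contacted on every request, and the designer’s additions render on top.

```python
# Sketch of a layering wrapper: original content loaded from the creator's
# server via an iframe, designer-added elements positioned over it.
from html import escape

def compose_lesson(source_url: str, elements: list[dict]) -> str:
    """Build a wrapper page combining the original page with layered additions."""
    overlays = []
    for e in elements:
        text = escape(e["text"])
        style = f"position:absolute;top:{e['y']}px;left:{e['x']}px;background:#fffbcc;padding:6px"
        overlays.append(f'<div style="{style}">{text}</div>')
    layered = "\n  ".join(overlays)
    return (
        "<!DOCTYPE html><html><body style='margin:0'>\n"
        f"  <iframe src='{escape(source_url)}' style='width:100%;height:100vh;border:0'></iframe>\n"
        f"  {layered}\n"
        "</body></html>"
    )

# Hypothetical lesson: one embedded question layered over an assigned article.
page = compose_lesson(
    "https://example.com/assigned-article",
    [{"x": 40, "y": 300, "text": "How does this concept apply to your own experience?"}],
)
```

Real services are more sophisticated (they track responses, handle pages that refuse to be framed, and anchor elements to the text itself), but the division of labor between the creator’s server and the layering server is the point.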

Questions are my favorite example of an external activity that can be added to encourage a variety of important thinking (internal) behaviors. For example, if you want a learner to link a new concept to the everyday experiences the concept is useful in understanding, you might ask the learner to provide examples that show the application of the concept. Many learners may do this without the question, but the question increases the likelihood that more learners will work to identify such connections with their existing experiences. Those who think about instruction in this way may describe what they are doing as designing instruction. I offer an extended description of generative activity in a previous post.

Depending on the specific service, the elements offered by the layering services I am aware of include annotations, highlighting, questions, and discussion prompts. Annotations can include additional material such as examples, translations, or instructions. Questions can be open-ended or multiple-choice. A few of these elements (highlights and annotations) can also be added by the learner, so elements available to the designer can be used to encourage specific use of the elements available to students.

My promotion of layering services is intended to encourage educators, educational content designers, and learners to work with existing content to provide more effective learning resources and more generative learning experiences. In addition, content creators have a right to assume their server will be contacted each time content is requested and that inclusions such as ads still appear. The expectations of the content creator are not ignored when a layering service is used.

I have identified several services that meet my definition of a layering service. Here, I will describe one service focused on web pages and one focused on video. Other examples can be explored from the page linked above, and I assume others exist that I have not identified. Services are constantly being updated, but I have recently worked with the two examples described here, and this information should be current as of the uploading of this post.

Insert Learning

Insert Learning is my best example of the services promoted here. I say this because it offers the most generative options, and these options are part of an environment allowing an educator to create multiple lessons, assign these lessons to members of multiple classes, and record data on student completion of some of the types of activity involved in individual lessons.

The following image should give you some idea of how this works. Down the left border of the image, you see a menu of icons allowing the designer to select highlight, note, question, and discussion. Highlight and note work as one probably expects. When the icon is selected, text can be highlighted by the designer or learner. The note icon adds what appear as Post-it notes allowing the inclusion of text, links, images, video, and whatever else works as an embed. The question icon adds questions, either multiple-choice (as appears in the image) or open-ended. The discussion icon appears very much like an open-ended question but accumulates and displays responses from multiple learners to a prompt.

As I said, Insert Learning differentiates itself from many of the other services because the layering component is part of a system that allows the assignment of lessons to individual students organized as classes and also collects responses to questions by lesson and student. The following image shows a couple of responses to an open-ended question. I used Insert Learning in a graduate course I taught in instructional design. I made use of several of the tools I presented to students even though the most common use would be in K-12. This image shows how responses to questions would appear in the grade book. I could assign a score to a response, and this score would then be visible to the student submitting a given response.

It has been a few years since I used Insert Learning. When I did, I paid $8 a month. I see the price has now increased to $20 a month or $100 for the year. 

EdPuzzle 

EdPuzzle is a service for adding questions and notes to videos. It includes a system for adding these elements, assigning the videos to students, and saving student responses to questions. The following images are small to allow them to be inserted in this post. In the first image, the red box on the right allows the selection of the element to be added: multiple-choice question, open-ended question, or note. The timeline underneath the video (middle) is also enclosed in a red box. As the designer watches the video, clicking one of these buttons stops the video and allows the selected addition to be included. A dot appears below the timeline to indicate where an element has been added. A learner can either play the video, which will stop for a response when one of these inclusions is reached, or select one of the dots to respond. The second image shows the dialog box used to add an open-ended question.

In the video I used in this example, I created a demonstration using Python to run LOGO commands and saved the video to YouTube. Again, this was a demonstration used in a graduate edtech course. Early in the video, I showed and explained the LOGO code. The video then showed the result of running this program.

When using EdPuzzle with this video, I inserted a note asking students to take a pencil and sheet of paper and draw what the LOGO program would create. Near the end of the video, I inserted an open-ended question asking students to explain how Papert’s notion of computational understanding would provide a different way of thinking about the traditional definition of a circle (i.e., a closed plane figure with all points equidistant from a center point).
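For readers unfamiliar with the idea, here is a minimal sketch of the computational circle; this is not the code from my demonstration, just the classic LOGO pattern expressed with Python’s turtle module. The circle emerges from a local rule (move a little, turn a little) rather than from the global equidistance definition.

```python
# Papert's computational circle: REPEAT 360 [FORWARD 1 RIGHT 1] in LOGO.
import turtle

t = turtle.Turtle()
for _ in range(360):
    t.forward(1)  # move a small step
    t.right(1)    # turn one degree
turtle.done()
```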

I used the free version of EdPuzzle because I only assigned students a few examples to experience what the service provided. You can do a lot with this service at no cost. The pro-level price is $13.50 per month. EdPuzzle Pricing

Summary

These two examples demonstrate the use of layering services to add generative activities to a web page and a web video. There are similar services available from other companies that generate similar student experiences. The value in such services is the opportunity to design learning experiences containing activities likely to improve understanding and retention.


Turning AI on my content

In reviewing the various ways I might use AI, I am starting to see a pattern. There are uses others are excited about that are not relevant to my life. There are possible uses that are relevant, but I prefer to continue doing these things myself because I either enjoy the activity or feel there is some personal benefit beyond the completion of a given project. Finally, there are some tasks for which AI serves a role that augments my capabilities and improves the quality or quantity of projects I am working on. 

At this time, the most beneficial way I use AI is to engage an AI tool in discussing a body of content I have curated or created as notes and highlights in service of a writing project I have taken on. There are two capabilities here that are important. First, I value the language skills of an AI service, but I want the service to use this capability only as a way to communicate with me about the content I designate. I am not certain I know exactly what this means, as it would be similar to saying to an expert: tell me about these specific sources without adding ideas from sources I have not asked you to explore; use your general background, but only as a way to explain what these specific sources are proposing. What I mean is: don’t add in material to address my prompt that does not exist within the sources I gave you.

Second, if I ask an AI service about the content I have provided, I want the service to be able to identify the source, and possibly the specific material within a source, that was the basis for a given position taken. Think of this expectation as similar to the expectation one might have in reading a scientific article in which the author provides citations for specific claims. My desire here is to be able to evaluate such claims myself. I am concerned about basing a claim simply on the language of sources without knowing the methodology responsible for producing the data behind the claim. For serious work, you need to read more than the abstract. Requiring a precise methodology section in research papers is important because the methodology establishes the context responsible for the generation of the data and ultimately the conclusions that are reached. Especially in situations in which I disagree with such conclusions, I often wonder if the methodology applied may explain the differences between my expectations and the conclusions reached by the author. Human behavior is complex, and variables that influence behavior are hardly ever completely accounted for in research. Researchers do not really lie with statistics, but they can mislead with broad conclusions based on a less-than-perfect research method. There are no perfect research methods, hence the constant suggestion that more research is needed.
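For readers who want a concrete picture of these two expectations, here is a minimal sketch of the kind of instruction involved. This is my own illustration, not the internals of any of the services discussed; the function and the [Source N] convention are hypothetical.

```python
# Sketch: wrap designated sources into a prompt that restricts the answer
# to those sources and asks for a citation after each claim.
def grounded_prompt(question: str, sources: list[tuple[str, str]]) -> str:
    """Build a prompt limited to numbered sources, with per-claim citations."""
    numbered = "\n\n".join(
        f"[Source {i}: {title}]\n{text}" for i, (title, text) in enumerate(sources, 1)
    )
    return (
        "Answer using ONLY the sources below. Use your general background "
        "only to explain what these sources propose, not to add material. "
        "After each claim, cite the supporting source as [Source N]. If the "
        "sources do not address the question, say so.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

# Hypothetical use with two exported notes.
prompt = grounded_prompt(
    "Summarize generative learning activities.",
    [("Fiorellaetal2016", "Eight strategies promote generative learning ..."),
     ("Brod2021", "Age differences moderate strategy effectiveness ...")],
)
```

Whether a given service actually honors such constraints is exactly what my comparisons below are meant to probe.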

Several services approximate the characteristics I am looking for, and I will identify three. I had hoped to add a fourth by subscribing to the new OpenAI applications recently announced, but new signups for the $20 a month subscription necessary to use these functions were recently suspended, so I will have to wait to explore them until OpenAI decides to expand the user base.

The three services I have worked with are NotebookLM, Mem.ai, and Smart Connections within Obsidian. I have written about Mem.ai and Smart Connections in previous posts, so I will use NotebookLM for extended comments and then offer quick examples produced by the other two services.

NotebookLM

NotebookLM was recently made available to me, so I have had less experience using this tool than the other two I have included.

At this time you can create a notebook based on 5 PDFs or 10 text segments. There is a limit to how much content these units of information can total, but the PDFs I used were research articles, so this is a lot of content. In the following image, icons for the 5 PDFs appear in the column on the left of the screen. All of the PDFs have been selected for the interaction I intend with the AI tool, and a check beside each PDF indicates this is the case. The window for submitting my prompts appears at the bottom of the image.

The second image shows the results from the prompt I submitted (I will explain my interest in interacting with these PDFs following this description). The response to this query is provided above the prompt window (I will share the text), and below this window you will see that the summary was based on 10 “citations”. Selecting one of these citations brings up the content that then appears in the left-hand column. This window shows the content of the PDF and some summary comments about that document generated by the AI. The AI also proposes additional prompts a user might want to apply to that content.

The task I asked all three services to address was related to a recent post I wrote describing generative cognitive tasks and comparisons among the impacts of these generative activities. The PDFs were research studies I had reviewed and were the basis for that post. I had detailed familiarity with this content from reviewing and annotating the material and writing the previous post; I use this approach of first engaging a process without AI to provide a basis for commenting on what the AI provides. The prompt that follows is what I submitted to NotebookLM to see what would be generated. I used a similar prompt with the other two AI services.

Prompt: Use only my notes, generate a 400 word summary of generative learning activities and include what is known about the relative effectiveness of different activities.

NoteBook response

Generative learning is a type of learning in which students are actively engaged in creating new knowledge by interacting with the material. This is in contrast to passive learning, in which students simply listen to a lecture or read a textbook.

Generative learning activities are designed to encourage students to think critically about the material and to make connections between new information and their existing knowledge.

There is a significant body of research evidence that supports the effectiveness of generative learning activities. A meta-analysis of 500 studies found that generative learning activities resulted in better learning outcomes than traditional, passive learning methods (Coffey et al., 2003).

Some of the most effective generative learning activities include:

  • Summarizing: Students are asked to summarize a text in their own words. This activity helps students to identify the main ideas of the text and to understand the relationships between them.
  • Questioning: Students are asked to generate questions about a text. This activity helps students to think critically about the text and to engage with the material in a deeper way.
  • Discussion: Students are asked to discuss a text with their classmates. This activity helps students to share their understanding of the text and to learn from each other.

The relative effectiveness of different generative learning activities depends on the specific learning goals. For example, summarizing is a good activity for helping students to remember information, while questioning is a good activity for helping students to understand the material in a deeper way.

Generative learning activities can be used in a variety of settings, including K-12 classrooms, higher education, and corporate training. They are a valuable tool for helping students to learn and remember new information in a more effective way.

***

You will note that NotebookLM included a reference in this material. This was not one of the PDFs I provided for review, so I thought it might have been included in the reference section of one of the PDFs. I asked NotebookLM to provide a full citation, and it generated the following.

Coffey, J. E., & McDonald, J. (2003). The effects of generative learning activities on student achievement: A meta-analysis. Contemporary Educational Psychology, 28(4), 240-263. doi:10.1016/j.cedpsych.2003.09.001

While this citation looks authentic and has the proper APA format, it is made up. I searched Google Scholar and used my library’s online access to check the cited location in Contemporary Educational Psychology, and no such article exists. Google asks for user feedback, and I reported the fake citation in my evaluation.

***

Mem.AI

My use of Mem.ai and Smart Connections requires a slightly different approach. The content stored in these services results from my exporting notes and highlights from the research articles I read. Both services are directed at my entire collection of notes, so this is different from what NotebookLM could access. However, the citations indicated that the output was based on very similar source material.

Prompt: Use only my notes, generate a 400 word summary of generative learning activities and include what is known about the relative effectiveness of different activities.

Mem.ai response

Generative learning is a process where learners actively engage with material, creating new knowledge and connections. This approach involves strategies such as summarization, teaching, self-testing, and self-explaining, which require learners to select and organize relevant information, elaborate on the material based on personal knowledge, and integrate new information with existing knowledge 1^.

Fiorella and Mayer (2016) identified eight strategies that promote generative learning, including summarization, mapping, drawing, imagining, self-testing, self-explaining, teaching, and enacting. These strategies help learners to create a product based on existing knowledge 1^.

Brod (2021) focused on age differences in generative learning strategies and emphasized the importance of producing a meaningful product that goes beyond simple paraphrasing or retelling. For instance, highlighting does not result in new content, but summarization does. Brod’s review suggests that summarizing becomes useful from high school onwards, while testing is beneficial for all ages 2^.

Chi’s (2009) framework differentiates learning activities into active, constructive, and interactive categories. Active engagement with the material can activate existing knowledge and store new input, while constructive activities involve creating a product based on existing knowledge. Interactive activities involve engaging with others to produce a product that integrates external input with personal knowledge 1^.

Fiorella (2023) proposed a new framework of sense-making that distinguishes among three sense-making modes—explaining, visualizing, and enacting—that serve unique and complementary cognitive functions. The explaining mode generalizes knowledge, the visualizing mode organizes knowledge, and the enacting mode simulates knowledge 3^.

In conclusion, generative learning involves actively engaging with the material to create new connections and deeper understanding. The effectiveness of different generative learning activities can vary based on factors such as age and the specific learning context. However, strategies such as summarization, teaching, self-testing, and notetaking with revision and elaboration can promote generative learning and enhance understanding of the material.

Summarize my notes on generative learning Tue 4/18

Brod’s generative Fri 4/14

Fiorellaet2023 Fri 10/13

1^: Summarize my notes on generative learning

2^: Brod’s generative

3^: [Fiorellaet2023](https://mem.ai/m/cqo3CLWb5tkb8syut

At the end of each paragraph in this output, you see a number. The number corresponds to a citation at the bottom. Beside the three links that take a user to a note, you see the names of the notes. So, this output is carefully linked to sources. I will provide an example of what my notes look like in association with the output from Smart Connections, but each note contains a citation of the article from which I created that note.

***

Smart Connections (within Obsidian)

Obsidian contains my notes from research articles I have read and also highlights from other sources provided to Obsidian through Readwise. I tell you this again because I do not have a comparison among AI services based on exactly the same base of resources. However, again, you will note similar sources are found. Note also that the prompt is a little different to get Smart Connections to include citations. 

Prompt: Use only my notes, generate a 400 word summary of generative learning activities and include what is known about the relative effectiveness of different activities. Include links to the individual source notes within your summary

Response:

Based on your notes, generative learning involves actively making sense of new information by mentally reorganizing and integrating it with one’s prior knowledge (Fiorella and Mayer, 2016). It is a constructive process that requires learners to select important information, reorganize it, and relate it to what they already know (Fiorellaetal2016). 

Fiorella and Mayer (2016) identified eight strategies that promote generative learning: summarizing, mapping, drawing, imagining, self-testing, self-explaining, teaching, and enacting. Summarizing involves selecting the main ideas, organizing them into a logical structure, and integrating new information with prior knowledge (Fiorellaetal2016). It is more effective for relatively simple concepts and requires training to use effectively (Fiorellaetal2016). 

Mapping and drawing also involve transforming information into different forms of representation (Fiorellaetal2016). These strategies help learners build connections and organize the material (Fiorellaetal2016). 

Self-testing, or retrieval-based learning, requires learners to actively retrieve information from memory without access to the provided materials (Fiorellaetal2016). It strengthens memory and makes it more accessible in the future (EndresHigh). 

Self-explaining involves explaining the content of a lesson to oneself during learning (Fiorellaetal2016). It promotes meaningful learning outcomes when learners engage in reflective knowledge building (Fiorellaetal2016). 

Teaching, on the other hand, involves selecting relevant information, organizing it into a coherent structure, and elaborating on it by incorporating existing knowledge (Fiorellaetal2016). It sets itself apart from other generative strategies by focusing on helping others learn (Fiorellaetal2016). 

Based on the research by Fiorella and Mayer (2016), it is more appropriate to consider each generative strategy as suitable for certain types of learning and learning contexts rather than regarding one strategy as the most effective (Fiorellaetal2016). 

In a study by Koh, Lee, and Lim (2018), the learning benefits of teaching were investigated. The results showed that teaching improved learning outcomes when it involved retrieval practice, but not when learners used a prepared script (Generative > TeachingAsRetrieval). This supports the idea that retrieval practice is an effective generative learning activity. 

Overall, generative learning activities have been shown to be effective in promoting meaningful learning outcomes. However, the effectiveness of each activity may vary depending on the complexity of the material and the training provided to learners. It is important to consider the specific learning goals and contexts when selecting and implementing generative learning strategies (Fiorellaetal2016). 

Based on your notes, here are the individual notes that provide more information on each generative learning activity:

– [[Generate1]]: Provides an overview of generative learning theory and the eight strategies identified by Fiorella and Mayer (2016).

– [[Summarizing]]: Describes the summarizing strategy and its effectiveness in selecting main ideas and creating quality summaries.

– [[Teaching]]: Discusses the goal of teaching and its distinction from other generative strategies.

– [[Self-Testing]]: Explains the self-testing strategy and its role in retrieval-based learning.

– [[Promote generative learning]]: Provides a link to the article by Fiorella and Mayer (2016) that identifies the eight strategies and discusses their effectiveness.

– [[Fiorellaetal2016]]: Summarizes the key points from Fiorella and Mayer’s (2016) article on generative learning strategies.

– [[EndresHigh]]: Discusses the importance of task differences in retrieval practice and the effects of different types of recall tasks.

– [[Yang2021]]: Highlights a meta-analysis on the effectiveness of retrieval practice in classroom learning and raises questions about the mechanism and student perception of its benefits.

***

Note: The output of Smart Connections is in markdown, and the terms included in double brackets are links that connect to the source note. So, if the summary above were opened in an app that interprets markdown, the text within the double brackets would appear as a link, and the link would take me to a file stored on my computer. The file is named Generate1.

Here is an example of one of the original notes that was identified as source material. 

Generative learning makes sense of new information by reorganizing it and relating it to existing knowledge. This position comes from Wittrock, but is similar to other theorists (Mayer, Piaget). This specific article identified eight learning strategies that promote generative learning and provides a review of research relevant to each strategy.

[[Summarizing]]

Mapping

Drawing

Imagining

[[Self-Testing]]

Self-Explaining

[[Teaching]]

Enacting

The first four strategies (summarizing, mapping, drawing, and imagining) involve changing the input into a different form of representation.

The final four strategies (self-testing, self-explaining, teaching, and answering practice questions) require additional elaboration. 

Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28(4), 717-741.

***

Summary

Keeping in mind that the three AI services were applied to slightly different content, I would argue that Smart Connections and Mem.ai are presently more advanced than NotebookLM. Eventually, I assume a user will be able to direct NotebookLM at a folder of files so the volume of content would be identical. Google does acknowledge that NotebookLM is still in the early stages, and access is limited to a small number of individuals willing to test and provide feedback. The content generated by all of the services was reasonable, but NotebookLM did hallucinate a reference.

My experience in comparing services indicates it is worth trying several in the completion of a given task. I have found it productive to keep both Smart Connections and Mem.ai around as the one I find most useful seems to vary. I do pay to use both services.


Cornell Notes and Beyond

Walter Pauk is credited with the development of the Cornell note-taking system while at Cornell University. Cornell notes became widely known through Pauk’s popular book How to Study in College, first published in 1962 and available through multiple editions. I checked, and Amazon still carries the text.

Pauk’s approach, which can be applied within a traditional notebook, involves dividing a page into two columns, with the right-hand column about twice as wide as the left-hand column, and leaving a space across the bottom of each page for writing a summary. The idea is to take notes during a presentation in the right-hand column and later follow up in the left-hand column (often called the cue column) with questions and other related comments. This second pass is supposed to follow soon after class so that other memories of the presentation are still fresh. The summary section provides a space to add just what it says: a summary of the main ideas.

Pauk explained the proper way to use his system as the five Rs of note-taking. In my experience, the 5 Rs are far less well known and yet important because they explain how the basic system is to be used. I would organize and explain the 5 Rs as follows.

During class – Record

After class – Reduce

Over time 

Recite (cover notes and see what you can recall based on cues)

Reflect (add your own ideas, extensions)

Review (review all notes each week)

While the Cornell system was designed during a different time and was suited to the technology of the day (paper and pencil), those who promote digital note-taking tools offer suggestions for applying the Cornell structure within the digital environment of the tool they promote. 

Cornell notes within Obsidian 

Cornell notes within Notion 

When I used to lecture about study skills and study behavior, I explained the Cornell system, but I would preface my presentation with the following questions. How many of you have heard of Cornell notes? The SQ3R system? More had heard of Cornell notes and a few of SQ3R. I would then ask whether any of them were now using either of these systems to study my presentations or their textbooks. Among the thousands of students I asked, I don’t remember anyone ever raising her or his hand. To test my approach, I also asked if any students made and studied note cards in their classes. The positive responses here were much more frequent. I tried to get a sense of why, without much luck. I think my data are accurate, and I raise this experience to get you to consider the same question. Students take notes, but don’t have a system.

I think Cornell notes are frequently proposed and taught to younger learners because the design of the note collection environment is simple and easy to describe. I wonder about how the process is communicated and perhaps more importantly implemented. The structure makes less sense if students are only intending to cram rather than frequently review. Does the learner have to “buy in” to the logic or do learners understand the logic, but just are not motivated to put in the effort? How any method is taught and understood likely has at least some impact on whether suggestions are implemented.

Understanding Cornell Notes at a deeper level

Note-taking has always been a personal interest and my posts have frequently commented on note-taking. I may have mentioned Cornell notes in a few of these posts, but my focus tends to be on a more basic level. If I am describing a system, what about specific components of that system have known cognitive benefits to learners? 

I approach the claims of those advocating specific study strategies from a cognitive perspective. I ask what about a given study strategy seems to make sense given what those who study human cognition have found to benefit learning, retention, and transfer (application). What in a given study strategy could be augmented or given additional emphasis based on principles proposed by cognitive researchers? I will now try to apply this approach to Cornell notes. I don’t know enough about Pauk’s work to know his theoretical perspective when creating the system. For the most part, the perspective I take in my analysis has followed Pauk’s work, which occurred during the 1950s. Timelines in this regard do not require that research precede practice, but there is a possibility that new research may offer new suggestions.

Topics

My comments will be organized as three topics.

  1. Stages of study behavior – how the activities intended to benefit learning should occur over time. What should be done when?
  2. Generative experiences and a hierarchy of such experiences – My explanation of a generative activity is an external activity intended to encourage a productive cognitive behavior. By hierarchy, I am pointing to research that has attempted to identify more and less effective generative activities and explain what factors are responsible for this ranking.
  3. Retrieval practice / testing effect – Research demonstrates that activities requiring the recall of stored information increase the probability of future recall and also increase understanding. Tests – free recall, cued recall, and recognition tasks – are common, but not the only or necessarily the most effective ways to engage retrieval effort.

Stages of study behavior

My personal interest in note-taking can be traced to the insights of Di Vesta and Gray. These researchers differentiated two functions – encoding and external storage – but these processes were really centered within the stages of taking notes and then reviewing them. Encoding, interpreted more broadly, can occur at multiple points in time, and this is my point in recognizing stages.

Pauk clearly recognized stages of study in proposing that learners function according to the 5Rs. The original notes were to be interpreted, augmented, and reviewed several times between the original recording and the immediate preparation for use. 

Luo and colleagues proposed that notetaking should be imagined as a three-stage process, with a revision or update stage recognized after notetaking and before final preparation for use. In addition to recognizing the importance of following up to improve the original record, these researchers advocated collaboration with a partner. Students do not take complete notes, and the opportunity to compare notes with others allows for improvements. Research included in the paper points to the percentage of important ideas missing from the notes most students record. The authors propose that lecturers pause during presentations to provide an opportunity for comparison.

This source describes studies with college students using this pause-and-update method. Students were given two colored pens so additions could be identified. The pause-and-revise condition generated a significant achievement advantage (second study). However, the study found no benefit when comparing taking notes with a partner vs. working alone. The researchers examined the notes added and found few elaborations.

In an even more recent focus on multiple stages, as part of a model for building a second brain, Forte described a process called distillation or progressive summarization. In this process, focused on taking notes from written sources, original content is read using an app that allows the export of highlighted material. This content is first bolded and then highlighted to identify key information (progressive distillation). A summary can then be added. The unique advantage of this approach is keeping all of the layers available. One can function at different levels from the same source and backtrack to a more complete level should it become necessary to recall a broader context or to take what was originally created in a different direction.

It is possible to draw parallels here between what the Cornell system allows and what Forte proposes. The capability of reinstating context and addressing information missing from the original notes is also an advantage of digitally recording audio keyed to specific notes as they are taken (see SoundNote).

Di Vesta, F. & Gray, S. G. (1972). Listening and note taking. _Journal of Educational Psychology, 63_(1), 8-14.

Forte, T. (2022). _Building a second brain: A proven method to organize your digital life and unlock your creative potential_. Atria Books.

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: How revision, pauses, and partners affect note taking and achievement. _Instructional Science, 44_(1), 45-67.

Hierarchy of generative tasks

Again, a generative experience is an external activity intended to encourage productive cognitive behavior. These productive behaviors may occur without any external tasks, and this would be the best situation because there is overhead in implementing the external tasks. However, for many learners, and for most learners in some situations, the external tasks require cognitive activities that would otherwise be avoided or remain unrecognized as a function of poor metacognition or lack of motivation.

Many tasks initiated by a learner or educator can serve a generative function. Fiorella and Mayer (2016) have identified a list of eight general categories most educators can probably turn into specific tasks. These categories include:

  • Summarizing
  • Mapping
  • Drawing
  • Imagining
  • Self-Testing
  • Self-Explaining
  • Teaching
  • Enacting

Summarization can immediately be identified from this list as being included in the Cornell system. Self-testing would also be involved in the way Pauk described recitation.

What I mean by a hierarchy as applied to generative activities is that some activities are typically more effective than others. 

Chi offers a framework – active-constructive-interactive – to differentiate learning activities in terms of observable overt activities and underlying learning processes. Each stage in the framework assumes the integration of the earlier stage and is proposed to be more productive than that earlier stage.

Active – doing something physical that can be observed. Highlighting would be an example.

Constructive – creating a **product** that extends the input based on **what is already known**. For example, summarization.

Interactive – involves interaction with another person – expert/learner, peers – to produce a product.

One insight from this scheme is that there is a stage beyond what might seem to be the upper limit of the Cornell structure (i.e., summarization). I am tempted to describe this additional level as application or perhaps elaboration. Both terms to me imply using information.  
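
To make the ranking concrete, here is a small sketch that treats Chi’s three levels as an ordered scale and sorts some familiar study tasks accordingly. The task-to-level mapping reflects my own reading of the framework and is offered as illustration, not as Chi’s classification.

```python
# A rough sketch of Chi's active-constructive-interactive hierarchy as an
# ordered enum. The task-to-level mapping below is my own assumption.

from enum import IntEnum

class ChiLevel(IntEnum):
    ACTIVE = 1         # observable physical engagement (e.g., highlighting)
    CONSTRUCTIVE = 2   # producing output beyond the input (e.g., summarizing)
    INTERACTIVE = 3    # constructing a product with a partner

# Hypothetical mapping of common study tasks to levels.
tasks = {
    "highlighting": ChiLevel.ACTIVE,
    "writing a Cornell summary": ChiLevel.CONSTRUCTIVE,
    "comparing and discussing notes with a partner": ChiLevel.INTERACTIVE,
}

# Because each level subsumes the one below it, tasks can be ranked directly.
for task, level in sorted(tasks.items(), key=lambda kv: kv[1]):
    print(f"{level.name:<13} {task}")
```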

Chi, M. T. (2009). Active-constructive-interactive: A conceptual framework for differentiating learning activities. _Topics in Cognitive Science, 1_(1), 73-105.

Fiorella, L., & Mayer, R. (2016). Eight ways to promote generative learning. _Educational Psychology Review, 28_(4), 717-741.

Retrieval Practice

Retrieval practice is a learning technique that involves trying to recall information from memory (see also Roediger & Karpicke). There are several reasons why retrieval practice improves not only future retrieval but also understanding. First, it forces learners to actively engage with the material. This helps to create stronger connections between the information and existing knowledge. I think of retrieval as searching through memory to find something connected to what I am trying to locate. This makes sense if you understand memory as a web of connections among ideas. The effort to find specific information results in the activation and awareness of other information along the way. Retrieval not only increases the strength of the connection to the desired information, but also prompts an exploration of potentially related information that can result in new insights.
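
A toy model may help picture this web-of-connections account. The sketch below spreads activation outward from a starting cue through a small hand-built graph. Both the graph and the activation rule are illustrative assumptions on my part, not a validated model of memory.

```python
# A toy spreading-activation sketch: searching for one idea activates its
# neighbors, one way to picture why retrieval surfaces related information.

from collections import deque

# A hand-built, hypothetical "web of connections" among ideas.
memory_web = {
    "Cornell notes": ["cue column", "summary", "Pauk"],
    "cue column": ["retrieval practice"],
    "summary": ["generative learning"],
    "retrieval practice": ["testing effect"],
    "Pauk": [],
    "generative learning": [],
    "testing effect": [],
}

def retrieve(start: str, hops: int = 2) -> list:
    """Breadth-first spread of activation out to a fixed number of hops."""
    activated, frontier = {start: 0}, deque([start])
    while frontier:
        node = frontier.popleft()
        if activated[node] >= hops:
            continue
        for neighbor in memory_web.get(node, []):
            if neighbor not in activated:
                activated[neighbor] = activated[node] + 1
                frontier.append(neighbor)
    return sorted(activated, key=activated.get)

# Searching from one cue activates connected ideas along the way.
print(retrieve("Cornell notes"))
```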

Second, retrieval practice provides feedback on what has been learned and what needs more attention, helping learners identify areas where they need to improve.

Retrieval practice is sometimes called the testing effect, and asking questions or being asked questions is one way to trigger the search process (e.g., Yang and colleagues). Self-testing is an activity embedded in the way Pauk imagines the use of Cornell notes. I am guessing it is also a reason making and using flashcards is such a common study strategy.
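
Since flashcards came up, here is a minimal sketch of a Leitner-style flashcard loop, one common way the flashcard habit operationalizes retrieval practice. The three-box design and promotion rule are illustrative choices of mine, not a prescription from the research cited here.

```python
# A minimal Leitner-box sketch: successful recall promotes a card to a box
# reviewed less often; failed recall demotes it back to daily review.

import random

boxes = {1: ["card A", "card B", "card C"], 2: [], 3: []}  # box 1 = daily

def review(box_number: int, knows_answer) -> None:
    """Attempt recall for each card; promote successes, demote failures."""
    for card in boxes[box_number][:]:
        boxes[box_number].remove(card)
        if knows_answer(card):
            boxes[min(box_number + 1, 3)].append(card)   # reviewed less often
        else:
            boxes[1].append(card)                        # back to daily review

# Simulated learner who recalls correctly about half the time.
review(1, knows_answer=lambda card: random.random() < 0.5)
print(boxes)
```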

There are, however, other ways to practice retrieval. Yang and colleagues speculate that retrieval practice plays a role in the proven benefits of a learner teaching and preparing to teach (see also Koh and colleagues). Teaching represents an important link here to the more productive levels of generative learning (see previous section). The previously mentioned work of Luo and colleagues recognized the value of collaboration in reviewing notes, and again the addition of sharing and discussion would represent an important extension of a personal use of any note-taking system.

Koh, A. W. L., Lee, S. C., & Lim, S. W. H. (2018). The learning benefits of teaching: A retrieval practice hypothesis. _Applied Cognitive Psychology, 32_(3), 401-410.

Luo, L., Kiewra, K. A., & Samuelson, L. (2016). Revising lecture notes: How revision, pauses, and partners affect note taking and achievement. _Instructional Science, 44_(1), 45-67.

Roediger III, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. _Perspectives on Psychological Science, 1_(3), 181-210.

Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. _Psychological Bulletin, 147_(4), 399-435.

Summary – My effort here was an attempt to cross-reference what might be described as a learning system (Cornell notes) with mechanisms that might explain why the system has proven value and possibly allow the recognition of similar components present in other study systems. In addition, I have tried to emphasize that the components of a system may not be fully understood and applied in practice. Collaboration was suggested as a way to extend the Cornell system.


Trust AI?

I have found that I cannot trust AI for a core role in the type of tasks I do. I am beginning to think about why this is the case because such insights may have value for others. When do I find value in AI and when do I think it would not be prudent to trust AI?

I would describe the goal of my present work or hobby, depending on your perspective, as generating a certain type of blog post. I try to explain certain types of educational practices in ways that might help educators make decisions about their own actions and the actions they encourage in their students. In general, these actions involve learning and the external activities that influence learning. Since I am no longer involved in doing research and collecting data, I attempt to provide these suggestions based on my reading of the professional literature. This literature is messy and nuanced and so are learning situations so there is no end to topics and issues to which this approach can be applied. I do not fear that I or the others who write about instruction and learning will run out of topics. 

A simple request for AI to generate a position on an issue I want to write about is typically not enough. Often the general summary AI generates only tells me the common position on that topic. In many cases, I agree with this position and want to generate a post to explain why. I think I understand why this difference exists. AI works by creating a kind of mush out of the content it has been fed. This mush is created from multiple sources differing in quality, which makes such generated content useful for those not wanting to write their own account of a topic. I write a lot, and sometimes I wish the process were easier. If the only goal was to explain something straightforward and not controversial, relying on AI might be a reasonable approach or at least a way to generate a draft.

As I said earlier, educational research, and probably applied research in many areas, is messy. What I mean by that is that studies of what seems to be the same phenomenon do not produce consistent results. I know this common situation leads some in the “hard” sciences to belittle fields like psychology as not real science. My response is that chemists don’t have to worry that the chemicals they mix may not feel like responding in a given way on a given day. The actual issue is that so many phenomena I am interested in are impacted by many variables, and a given study can only take so many of these variables into account. Those looking to make summary conclusions often rely on meta-analyses that combine the results of many similar studies to reach a type of conclusion, and this approach seems somewhat similar to what AI accomplishes. Finding a general position glosses over specifics.

Meta-analysis does include some mechanisms that go beyond the basic math involved in combining the statistical results of studies. Researchers can look for subcategories of studies within the general list that share a specific variable and then determine, quantitatively or logically, whether this variable modified the overall result in some way.
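
For those curious about the “basic math,” here is a minimal sketch of the inverse-variance weighting at the core of a fixed-effect meta-analysis, with a hypothetical moderator split to show how subcategories can be compared. All numbers are made up for illustration.

```python
# A minimal fixed-effect meta-analysis sketch: each study's effect size is
# weighted by the inverse of its variance, and the same pooling is repeated
# within subgroups to ask whether a moderator changes the overall result.

def pooled_effect(studies):
    """Inverse-variance weighted mean effect size."""
    weights = [1 / var for _, var in studies]
    total = sum(w * es for (es, _), w in zip(studies, weights))
    return total / sum(weights)

# (effect size, variance) pairs for hypothetical note-revision studies.
all_studies = [(0.45, 0.04), (0.30, 0.02), (0.10, 0.05), (0.05, 0.03)]

# Hypothetical moderator: the first two studies used partner revision,
# the last two had students revise alone.
with_partner = all_studies[:2]
alone = all_studies[2:]

print(f"Overall:      {pooled_effect(all_studies):.2f}")
print(f"With partner: {pooled_effect(with_partner):.2f}")
print(f"Alone:        {pooled_effect(alone):.2f}")
```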

The approach of examining subcategories is getting closer to what I am trying to do. I think it essential when considering an applied issue to review the methodology of the studies that differ and see what variables have been included or ignored. There is no easy way to do this. It is not what AI does, and it is not something humans can do when simply reviewing the abstracts of research. Did the researchers control a variable you as a consumer/practitioner think may matter? I encounter this issue frequently, and I admit this experience often occurs because I have a bias that trends in a different direction than what the data and conclusion of a given study may propose. Biases are always important to recognize, but science relies heavily on doubt, and testing alternatives is an important part of the process.

For example, I don’t happen to believe that taking notes by hand and reading from paper are necessarily better than their digital equivalents. I have read most of the studies that seem to make this case, but I find little in the studies that would explain why. Without credible answers to the “why” question, I continue to doubt, and since I cannot generate data myself, I continue to examine the data and methodologies of specific studies looking for explanations.

Long intro, but I thought it necessary to support the following point. AI is not particularly helpful to me because conclusions reached from a mess or amalgam of content, without specific links to sources I can examine, seem a digital representation of the problem I have just outlined. AI searches for a common position even when the specifics of situations matter, and it may create a common position that is misleading.

If you use AI and haven’t tried this, I would ask that you try working backward. By this, I mean ask the AI system to offer the sources it used in responding to your request. Exploring this approach works best when you know the field and have a good handle on what the AI should produce.

Here is an example I generated recently when trying to understand what factors were responsible for the proposed value of the Cornell note-taking system. Many who read my posts probably recognize this approach to taking notes. What about this system explains why it seems to help students? I was using the AI system Leo, which is now available as a tool within the Brave browser.

The following image shows Leo as a window that opens in a Brave side panel (the panel is opened using the icon at the top, and Leo is activated by the icon enclosed by the red box on the right). At that point, you can begin “chatting” with Leo.

Leo gave me a nice analysis of what the Cornell note-taking system is and offered several possibilities for why the system is effective. I then asked Leo for references in support of the reasons offered for effectiveness, which resulted in names and dates of studies, and I then asked for full citations. Leo responded with the following list and even indicated that it hoped this information would be helpful.

Here are the full citations for the references I provided earlier:

1. Fisher, R. P. (1993). The Cornell note-taking system: A study of its effectiveness in improving lecture note-taking and recall. Journal of Educational Psychology, 85(2), 278-286.
2. Kiewra, K. A. (1994). The Cornell note-taking system: A review of research on its effectiveness. Journal of Learning Disabilities, 27(3), 171-180.
3. Koech, J. (2013). The effect of the Cornell note-taking system on the academic performance of students with learning disabilities. Journal of Learning Disabilities, 46(2), 148-158.
4. Kornell, N., Son, L. K., & Tai, K. (2007). The importance of retrieval for learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(3), 576-585.
5. Bjork, E. L., & Bjork, R. A. (1992). A new theory of disuse and an old theory of stimulus decay. In A. Healy & R. Brown (Eds.), The psychology of learning and motivation: Vol. 8. Cognition and practice (pp. 115-145). New York: Academic Press.

I hope these references are helpful for you! Let me know if you have any other questions.

Here is the thing about this list. None of these citations is real. The names are appropriate as researchers who work in this area, the titles make sense, and the journals exist. However, try inserting any title from this list into Google Scholar and you will find there are no matches. I tried the more direct approach of actually going to the journals and looking for the papers. There was nothing to find. AI mushes things together and loses track of specifics. The names and titles that emerge from this mush make sense, but the origins of the information cannot be retraced and reviewed.

If I were to offer the summary of my request as a blog post, it would be informative and accurate. If I were to append the citations on which this summary was supposedly based, I would find myself embarrassed as soon as someone decided they wanted to use a citation to learn more. Is there value here? I think so, as long as a user understands what they are getting. AI seems to do a reasonable job of presenting a summary of what others have written. However, at least within the scenario I have described, it is important to understand the limitations. When I challenged Leo on a specific citation, Leo was willing to explain in its own words that it had just made the citation up.

I have come to my own strategy for using AI. I use a tool such as Elicit to identify citations, which I then read while creating my own notes. I then use AI tools to offer analysis or summaries of my content and to identify the notes that were used in generating responses. If the AI references one of my notes, I am more confident that I agree with the associated statement.
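
As a rough illustration of this grounding step, the sketch below checks a generated statement against a small set of personal notes and reports which notes could support it. A real setup would rely on embeddings over the notes; simple word overlap stands in here, and all names, data, and the threshold are my own assumptions.

```python
# A toy grounding check: which of my notes could support a generated claim?
# Word overlap is a stand-in for real semantic matching (e.g., embeddings).

def supporting_notes(statement: str, notes: dict, threshold: float = 0.2):
    """Return (note id, score) pairs whose overlap with the claim is high enough."""
    claim = set(statement.lower().split())
    matches = []
    for note_id, text in notes.items():
        overlap = len(claim & set(text.lower().split())) / len(claim)
        if overlap >= threshold:
            matches.append((note_id, round(overlap, 2)))
    return sorted(matches, key=lambda m: -m[1])

# Hypothetical personal notes keyed by citation.
my_notes = {
    "luo2016": "revision pauses and partners improve note taking achievement",
    "roediger2006": "testing memory improves retention more than restudy",
}

statement = "pauses for revision with partners improve note taking"
print(supporting_notes(statement, my_notes))   # [('luo2016', 0.75)]
```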

This post is already far too long, so here is a link to an earlier post describing my use of Obsidian and AI tools that allow me to focus on my own notes.
