Applying AI to Discuss Your Own Content

I have moved past the initial stage of awe at having access to large language models such as ChatGPT and, after considerable exploration, have begun to focus on how I might find value in what these systems can provide. I presently use AI tools to support the research I do to inform my writing – blog posts such as this one. I have found that I am uncomfortable trusting a tool like ChatGPT when I simply prompt it to provide information. There are too many situations in which it generates replies that sound good but turn out to be fabrications when checked.

The approach I trust most requires that I focus the AI on a source of content I control and can use to check when something seems off. In this post, I will identify three such tools and explain a little about how you might also find them helpful.

ChatPDF

As the name implies, ChatPDF allows a user to interact with the content of a designated PDF. Much of the content I personally review consists of scientific journal articles available to me as PDFs from my university library. This has been the case for many years now, and I have a collection of hundreds of such files I have read, highlighted, and annotated. The link I provide above explains how ChatPDF allows me to explore the content of such files. Because I read and annotate such files anyway, I actually don’t interact with journal articles in this way very often. The linked post describes the use of ChatPDF as a tutor applied to a textbook chapter, and it was intended to show multiple ways in which ChatPDF could benefit a learner trying to understand and retain important ideas from a document.

The other two examples describe AI tools that let a user interact with collections of notes. One tool works with notes saved in Obsidian and the second with notes in Mem.AI. These are digital tools for storing and organizing personal notes and digital content. The tools are designed for the organization and exploration of such notes, but as AI has become available, new ways to make use of what can become large collections of information have emerged.

Smart Chat Obsidian Plugin

I have prepared a video to offer some idea of how Smart Chat prompts can be applied to the content stored in Obsidian. If you are unfamiliar with Obsidian, the video also offers a glimpse of Obsidian itself. One point I think is important about Obsidian, and one that differentiates it from Mem.AI, is the way it stores content. Obsidian stores content as individual plain-text files that combine your text with what is called markdown. Markdown is a simple set of tags that allow a user to establish links, tags, and text embellishments such as bolding; for example, [[Note title]] creates a link to another note, #research applies a tag, and **bold** produces bold text. The benefit is the independence of the files from any specific tool. So, if Obsidian were to go away, you would still have a collection of documents containing your information that you could salvage. In contrast, Mem.AI stores content in what I think of as a database in the cloud. There are ways to export your content, but I find value in knowing I have my content on my computer in a format I can access if necessary.
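To make the portability point concrete, here is a minimal sketch, assuming a vault stored at ~/Obsidian/MyVault (the path is a made-up example), showing that an Obsidian vault is just a folder of markdown files any script can read without Obsidian itself:

    # List every note in an Obsidian vault along with its word count.
    # The vault location below is a hypothetical example.
    from pathlib import Path

    vault = Path.home() / "Obsidian" / "MyVault"
    for note in sorted(vault.rglob("*.md")):
        text = note.read_text(encoding="utf-8")
        print(f"{note.name}: {len(text.split())} words")

Any backup tool, search utility, or script can work with these files, which is the salvageability I describe above.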

The Smart Chat plugin requires that you have an OpenAI account and add some money to cover the cost of your AI activity. I don’t consider the investment that large; I have been contributing $5 a month, which has covered the cost of my activity.
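For readers curious about what such a plugin does with that OpenAI account, here is a rough sketch of the general retrieval-plus-chat pattern: your notes are converted to embeddings, the notes most similar to your question are retrieved, and only those are handed to the chat model. This assumes the pre-1.0 openai Python package; Smart Chat’s actual internals may differ, and the note names, note texts, and question below are placeholders.

    # Sketch of the retrieval-plus-chat pattern tools like Smart Chat use.
    # Assumes the pre-1.0 openai package; notes and question are placeholders.
    import math
    import openai

    openai.api_key = "sk-..."  # calls against this key are what you fund

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return [d["embedding"] for d in resp["data"]]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    notes = {"horbury2021.md": "...", "writing-a-paper.md": "..."}  # name -> text
    vectors = dict(zip(notes, embed(list(notes.values()))))

    question = "What do my notes say about taking notes by hand versus keyboard?"
    q_vec = embed([question])[0]

    # Keep the three notes most similar to the question as context.
    top = sorted(notes, key=lambda n: cosine(q_vec, vectors[n]), reverse=True)[:3]
    context = "\n\n".join(notes[n] for n in top)

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Answer using only these notes:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    print(reply["choices"][0]["message"]["content"])

The plugin automates all of this; the sketch is only meant to demystify where the monthly $5 goes: embedding calls when notes change and chat calls when you ask questions.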

Mem.AI

This image shows the home screen for Mem.AI. At the top of the screen, you can see the AI prompt window. I pay $10 a month for Mem.AI, and the AI feature is included. The prompt below (the same one I used with the Smart Chat plugin in Obsidian) generated the response that follows. The content of my two note systems is similar, but not completely identical, so you will see some differences in the content that was generated.

Prompt

List my notes that involve a comparison of taking notes by hand and by using a keyboard. Write a 400 word summary of these notes.

Response

Based on your knowledge base, there are two notes that involve a comparison of taking notes by hand and using a keyboard:

  1. Horbury2021: This study compared the performance of 10 and 11-year-olds taking notes by hand or using a computer. The study found that conceptual understanding was superior when taking notes by hand. The authors speculated that keyboarding may have produced greater cognitive load, especially for children who may have less experience in keyboarding.
  2. Writing a paper step by step: This note mentions a study by Mueller and Oppenheimer (2014) that found no difference in terms of the number of facts students were able to remember whether they took notes by hand or by typing them into their laptops. However, in terms of understanding the content of the lecture, the students who took their notes by hand performed much better.

Here’s a 400-word summary of these notes:

The method of note-taking, whether by hand or keyboard, can significantly impact a student’s understanding and recall of the material. A study by Horbury (2021) compared the performance of 10 and 11-year-olds taking notes by hand or using a computer. The study found that students had a superior conceptual understanding when taking notes by hand. The authors speculated that keyboarding may have produced greater cognitive load, especially for children who may have less experience in keyboarding. This suggests that the physical act of writing might help students process and understand the information better.

On the other hand, a study by Mueller and Oppenheimer (2014) found no difference in terms of the number of facts students were able to remember whether they took notes by hand or by typing them into their laptops. However, in terms of understanding the content of the lecture, the students who took their notes by hand performed much better. This suggests that while the amount of content stored may not significantly differ between handwriting and keyboarding, the process of handwriting might lead to a deeper understanding of the content.

These findings have several implications for educational practices. Given that handwriting seems to lead to a deeper understanding of the content, educators might want to encourage students to take notes by hand, especially during complex lectures or discussions. However, it’s also important to remember that what works best might vary from student to student. Personalization of learning strategies, taking into account individual strengths and preferences, could also be a valuable approach.

In conclusion, while technology has its place in the classroom, these findings suggest it should not completely replace traditional methods. A balance between digital and analog methods could be the key to effective learning. Furthermore, students could benefit from explicit instruction on effective note-taking strategies, such as focusing on understanding over verbatim transcription. The cognitive load of keyboarding for younger students or those less familiar with typing should also be considered when deciding on note-taking methods.


Summer – time for relaxed exploration

Summer is a great time for educators to do a little reading and explore a bit in preparation for next fall. Our grandkids are either out of school or will be within a week, so the educators in our family will soon have more time. I usually offer some suggestions for technology-related and classroom-relevant summer explorations at about this time of year.

I seem to be spending much of my time lately exploring and writing about AI. It is hard to get away from the topic and the uncertainty surrounding its applications and challenges. Everything about AI seems mysterious and, as a consequence, unsettling. As I have written previously, I have been unable to find a book that provided the insights I felt I needed, so my recommendation has been that a combination of personal experimentation and online blog posts and resources is most productive. What follows are recommendations based on this perspective.

I have divided my recommendations based on two goals. First, I want to understand a bit about how AI works and to develop general “how to do it” skills. I don’t like the feeling of not understanding why things work the way they do; without some sense of the mechanics, I have trust issues. At the other extreme, I want specific recommendations I can implement: examples, and variations on those examples, that I can apply to content and topics of my choosing.

Second, I want specifics related to applications in education.

Here are some recommendations related to the first goal. The content is free, with the exception of the Udemy course, which I have found useful. I tend to differentiate Google Bard applications from OpenAI applications in my explorations. It is worth spending some time with each, but because I have decided to pay for several applications built on the OpenAI API (applications built on the same model approach used in ChatGPT), I am more experienced with and have spent more time on OpenAI-related resources. Hence, I am more confident in these recommendations.

The AI Canon (Andreessen Horowitz)

Generative AI learning path (Google)

ChatGPT complete guide (Udemy – $15?)

As an educator, you may or may not feel the need I feel to invest time in developing a sense of how and why. The following are sources specific to education. The resource from the Office of Educational Technology focuses on AI in education, but lacks the specifics I want; it is a reasonable overview of the potential of AI in education. I am also somewhat put off by the constant emphasis on the message that AI will not replace teachers and that humans must remain in the loop, which I find obvious and unnecessary if there is a successful focus on useful applications. It seems there is a concern that those who would read the document in the first place still need to be convinced.

I have included one blog post I wrote a couple of months ago. I added it because it is the type of effort I want to read: one focused on how AI might be used for a specific educational goal. I cannot evaluate the quality of my own offering, but I think efforts describing concrete uses educators can try or assign now are the type of thing educators are looking for. I don’t believe in recipes, but my effort was intended to focus on opportunities to address a need and to encourage exploration. I think we are at this stage with AI use in classrooms, and the summer is a good time to explore.

Artificial intelligence and the future of teaching and learning (Office of Educational Technology)

Google Bard for educators (Control Alt Achieve)

AI tutoring now (me) 


ChatPDF as tutor

Educators concerned about AI and unable to think of productive ways their students could use AI tools need to check this out. The tool is called ChatPDF and is available in a browser or on an iPad. At this point, it is free and available without an account.

Once connected, you upload a PDF. I wanted to give it a significant challenge, and something I could easily evaluate for accuracy, so I took a chapter I had written (the chapter on learning as applied to technology from the textbook I wrote with my wife, Integrating Technology for Meaningful Learning) and uploaded it as a PDF file. I then began to ask for explanations, examples, and questions relevant to that chapter. I responded to the questions the AI tool generated and had my answers evaluated. What I have long thought potentially valuable about AI is the role it might play as a tutor. How can learners get flexible assistance when studying that they can shape to their needs? How can students discover what their needs are and then have their challenges addressed?
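ChatPDF’s internals are not public, but for readers who want to experiment, here is a minimal sketch of the same general idea, assuming the pypdf and pre-1.0 openai Python packages; the file name and question are placeholders. A chapter’s text is extracted and handed to a chat model with instructions to act as a tutor:

    # Extract a chapter from a PDF and ask a chat model to quiz you on it.
    # The file name is a hypothetical example.
    import openai
    from pypdf import PdfReader

    openai.api_key = "sk-..."

    reader = PdfReader("learning_chapter.pdf")
    chapter = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Crude truncation keeps the prompt within the model's context limit.
    context = chapter[:12000]

    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a tutor. Ask the student questions about this "
                        "chapter and evaluate their answers:\n" + context},
            {"role": "user", "content": "Ask me a question about cognitive load."},
        ],
    )
    print(reply["choices"][0]["message"]["content"])

A real tool would presumably split the document into chunks and retrieve the relevant ones rather than truncating, which is likely how ChatPDF supplies the page references I mention below.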

While the system did require that I restart a couple of times, perhaps because I was working from a coffee shop with a sketchy connection, I was very impressed with its quality. By quality, I primarily mean the accuracy of the content. Were the explanations accurate and different enough from the wording in the chapter to offer a reasonable opportunity for a learner to achieve a better understanding? Were the questions posed more than simplistic keyword vocabulary checks? Was the system flexible enough to understand me even when I got a little sloppy?

Any educator should evaluate similar issues for themselves using material they might assign. I understand that the content they would like to evaluate may not be available in PDF format, but, according to the developers, there is already a Google Docs version and a Word version is coming soon.

I observed a few differences between the browser and app versions. The app version references short text segments following its replies, while the browser version gives a page number. My preference would be the page number, as I see value in a learner being able to go back and forth between the book (PDF) and the AI tool. In what I have read about this tool, there was a warning about the difficulty the system has making connections across different sections of a document, and this limitation must apply to transfer and applications external to the document as well. I make no claim that using this AI tool as a tutor is the equivalent of working with a knowledgeable human tutor, but I would argue few students have the advantage of a knowledgeable human tutor at all times.

Take a look. Some example questions and responses the system generated are included in the following images.

The following image may be a little difficult to read, but I was trying to show the text segments the system tells you it primarily used to generate the content it displayed.


Evaluating AI Tools

Like so many other educators, I am trying to decide what I think of AI tools. Granted, the tools produce some reasonable prose. However, how accurate is the content generated? Here is an example.

I recently generated a post focused on the AI article summarization capability added to MEMEX Garden. The summarization I offered as my example was based on this article describing the new AI capabilities being added to Khan Academy resources. I understand there are several references to AI here, but the focus should be on the summary of the article. The summary appears below.

Khan Academy has launched a limited pilot of a new learning guide called Khanmigo for educators, powered by GPT-4, the next generation of OpenAI’s large language model technology. GPT-4 is the successor to OpenAI’s GPT-3.5, which ChatGPT is built upon. It can accept images as input, handle up to 25,000 words of text at once, and is designed with more safety and alignment features, among other changes.

Khanmigo is designed to help teachers create personalized learning plans for their students. It uses GPT-4 to analyze student data and provide recommendations for activities and resources that will help them learn more effectively. The tool also includes features like progress tracking and assessment tools to help teachers monitor student progress.

The pilot program is currently limited to a small group of educators, but Khan Academy plans to expand it in the future. The company believes that Khanmigo has the potential to revolutionize education by providing teachers with powerful tools to help them personalize learning for each student.

The summary sounds great, but after reading the article I realized I had a very different interpretation of its focus. By my interpretation, the key capability of Khan Academy’s new Khanmigo was to engage interactively with students as a tutor: asking about their understanding and then attempting to guide their thinking with questions that encourage them to approach what they are trying to understand in a different way. My interpretation suggests that the most important capability is to serve as a tutor to students rather than to provide guidance to teachers.

So, compare my interpretation with the second paragraph of the summary provided by the AI. I would encourage you to read the original article to see if you agree with my assessment.

I copied a few of the statements from the original article to show where I got my interpretation.

In a blog post, Khan Academy Founder Sal Khan wrote: “When GPT-4 is carefully adapted to a learning environment like Khan Academy, it has enormous potential. It can guide students as they progress through courses and ask them questions like a tutor would. AI can assist teachers with administrative tasks, which saves them valuable time so they can focus on what’s most important — their students.”

I think there is a big difference between arguing that a product helps the student and arguing that it helps the teacher, because these positions mean very different things to me as someone interested in the history of mastery learning and the role of tutors in that instructional approach. Is this quibbling? If my interpretation is correct, I don’t think so; the difference has real consequences.
