Humata

Humata AI is another of those AI tools for exploring designated content. It is promoted as a tool for researchers, but its use is not limited to any specific category of content explorer. An easy comparison would be ChatPDF, as the service allows a user to upload and then interact with a PDF. However, the “pro” version also allows a user to interact with a collection of documents (see my description of other services with this capability). 

You can presently explore the capabilities of this service at no cost for individual documents. The Pro version is $15 a month for 250 pages, with an additional penny a page after that allocation is exhausted. 
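Back of the envelope, that pricing works out as follows. This is just a quick sketch; the fee, page allowance, and per-page rate are the figures quoted above, and the function name is my own:

```python
# Hypothetical helper reproducing the Pro-plan pricing described above:
# $15 a month covers 250 pages; each page beyond that costs one cent.
def monthly_cost(pages_used: int, base_fee: float = 15.00,
                 included_pages: int = 250, per_extra_page: float = 0.01) -> float:
    """Return the monthly charge in dollars for a given page count."""
    extra_pages = max(0, pages_used - included_pages)
    return round(base_fee + extra_pages * per_extra_page, 2)

print(monthly_cost(200))  # within the allocation: 15.0
print(monthly_cost(400))  # 150 extra pages at a penny each: 16.5
```

So even doubling the allocation adds only a couple of dollars to the bill.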

Humata automatically generates a short summary of the document uploaded and proposes some questions. It is not clear to me which large language model is being used to power this service. The product description proposes that a user can ask for descriptions (summaries), ask questions, and generate responses and write material based on the content that is uploaded. If you have used other AI tools, you can use this tool in a similar way and just see what it will do in response to requests.

The one feature I found uniquely useful in comparison to most of my experiences with other tools is that it assumes you may want to connect the generated content with the source material. It will both highlight and link to this material in an adjacent window (see image). 

Here is another description of this product.


Applying AI to Discuss Your Own Content

I have moved past the initial stage of awe in connection with access to large language models such as ChatGPT and, after considerable exploration, have begun to focus on how I might find value in what these systems can provide. I presently use AI tools to support the research I do to inform my writing, including blog posts such as this one. I have found that I feel uncomfortable trusting a tool like ChatGPT when I simply prompt it to provide information. There are simply too many situations in which it generates replies that sound good but are fabrications when checked. 

The task I find most trustworthy requires that I focus the AI on a source of content I control and can use to check when something seems off. In this post, I will identify three such tools and explain a little about how you might also find them helpful.

ChatPDF

As the name implies, ChatPDF allows a user to interact with the content of a designated PDF. Much of the content I personally review consists of scientific journal articles available to me as PDFs from my university library. This has been the case now for many years, and I have a collection of hundreds of such files I have read, highlighted, and annotated. The link I provide above explains how ChatPDF allows me to explore the content of such files. Because I read and annotate such files anyway, I actually don’t interact with journal articles in this way very often. The link I have provided describes the use of ChatPDF as a tutor applied to a textbook chapter. The intent of the description was to describe multiple ways in which ChatPDF could benefit a learner trying to understand and retain important ideas from a document.

The other two examples here describe AI tools that allow a user to interact with collections of notes. One tool works with notes saved in Obsidian and the second with notes in Mem.AI. These are digital tools for storing and organizing personal notes and digital content. The tools are designed for the organization and exploration of such notes, but as AI has become available, new ways to make use of what can become large collections of information have emerged. 

Smart Chat Obsidian Plugin

I have prepared a video to offer some idea of how Smart Chat prompts can be applied to the content stored in Obsidian. If you are unfamiliar with Obsidian, the video also offers a glimpse of Obsidian itself. One point I think is important about Obsidian, and one that differentiates it from Mem.AI, is the way it stores content. Obsidian stores content as individual text files which include the content as plain text plus what is called markdown. Markdown is a simple set of tags that allow a user to establish links, tags, and text embellishments such as bolding, size, etc. The benefit is the independence of the files from any specific tool. So, if Obsidian were to go away, you would still have a collection of documents with your information you could salvage. In contrast, Mem.AI stores content in what I think of as a database in the cloud. There are ways to export your content, but I find value in knowing I have my content on my computer in a format I can access if necessary.
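To make the markdown point concrete, here is a minimal sketch of what the text inside one Obsidian note file might look like (the note names, tag, and link are invented for illustration):

```markdown
# Note-taking research

Some **bold** text and a #notetaking tag for later retrieval.

A link to another note in the vault: [[Horbury2021]]
An external link: [Obsidian](https://obsidian.md)
```

Because this is an ordinary text file, any editor can open it even without Obsidian installed.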

The Smart Chat plugin requires that you have an OpenAI account and add some money to cover the cost of your AI activity. I don’t consider the investment that large and have been contributing $5 a month, which has covered the cost of my activity.

Mem.AI

This image shows the home screen for Mem.AI. At the top of the screen, you can see the AI prompt window. I pay $10 a month for Mem.AI, but the AI feature is included. The following prompt (the same one I used for the Smart Chat plugin in Obsidian) generated the response below. The content of my two note systems is similar, but not completely identical, so you will see some differences in the content that was generated.

Prompt

List my notes that involve a comparison of taking notes by hand and by using a keyboard. Write a 400 word summary of these notes.

Response

Based on your knowledge base, there are two notes that involve a comparison of taking notes by hand and using a keyboard:

  1. Horbury2021: This study compared the performance of 10 and 11-year-olds taking notes by hand or using a computer. The study found that conceptual understanding was superior when taking notes by hand. The authors speculated that keyboarding may have produced greater cognitive load, especially for children who may have less experience in keyboarding.
  2. Writing a paper step by step: This note mentions a study by Mueller and Oppenheimer (2014) that found no difference in terms of the number of facts students were able to remember whether they took notes by hand or by typing them into their laptops. However, in terms of understanding the content of the lecture, the students who took their notes by hand performed much better.

Here’s a 400-word summary of these notes:

The method of note-taking, whether by hand or keyboard, can significantly impact a student’s understanding and recall of the material. A study by Horbury (2021) compared the performance of 10 and 11-year-olds taking notes by hand or using a computer. The study found that students had a superior conceptual understanding when taking notes by hand. The authors speculated that keyboarding may have produced greater cognitive load, especially for children who may have less experience in keyboarding. This suggests that the physical act of writing might help students process and understand the information better.

On the other hand, a study by Mueller and Oppenheimer (2014) found no difference in terms of the number of facts students were able to remember whether they took notes by hand or by typing them into their laptops. However, in terms of understanding the content of the lecture, the students who took their notes by hand performed much better. This suggests that while the amount of content stored may not significantly differ between handwriting and keyboarding, the process of handwriting might lead to a deeper understanding of the content.

These findings have several implications for educational practices. Given that handwriting seems to lead to a deeper understanding of the content, educators might want to encourage students to take notes by hand, especially during complex lectures or discussions. However, it’s also important to remember that what works best might vary from student to student. Personalization of learning strategies, taking into account individual strengths and preferences, could also be a valuable approach.

In conclusion, while technology has its place in the classroom, these findings suggest it should not completely replace traditional methods. A balance between digital and analog methods could be the key to effective learning. Furthermore, students could benefit from explicit instruction on effective note-taking strategies, such as focusing on understanding over verbatim transcription. The cognitive load of keyboarding for younger students or those less familiar with typing should also be considered when deciding on note-taking methods.


How will AI tutoring change

With the recent attention paid to AI because of large language tools such as ChatGPT, my title’s promise to describe change may surprise most educators. Tools for AI tutoring have been deployed and evaluated for some time. I provide a citation in support of this claim at the conclusion of this post. This paper is worth exploring because it offers insights into what has been meant by AI when applied to tutoring and even some insights into human tutoring. For example, I have often quoted the work of Bloom, who argued that human tutors probably represented the ideal in teaching and suggested that dedicated, intensive tutoring provided a two-standard-deviation advantage to those tutored, an advantage representing the best educational interventions could accomplish. I have read the papers in which Bloom made this claim. Kulik and Fletcher offer a different interpretation, explaining that Bloom’s data actually involved a combination of tutoring and a mastery approach, and the mastery approach may have accounted for at least half of the benefit in this research. That aside, tutoring still offers learners a significant advantage.

Before AI was based on large language models, the AI involved in tutoring relied on a technology-supported system built around a model of what was to be learned, a model of the individual learner, a model of effective instructional strategies, and an interface allowing communication with the system. I had AI generate a description of what the researchers explained these models involved.

  1. Learner Model: This model represents the student’s knowledge, skills, and learning preferences. It helps the ITS to adapt its teaching strategies to the individual needs of the student.
  2. Teacher Model: This model represents the teaching strategies and pedagogical knowledge used by the ITS to guide the student’s learning process. It helps the ITS to provide appropriate feedback, hints, and explanations.
  3. Content Model: This model represents the subject matter being taught by the ITS. It includes the concepts, relationships, and problem-solving procedures relevant to the domain.
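As a rough illustration only (the class names, fields, and selection logic below are my own invention, not anything taken from the review), the three models might be sketched as simple interacting data structures:

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    # What the system currently believes about this student.
    mastered_concepts: set = field(default_factory=set)
    preferences: dict = field(default_factory=dict)

@dataclass
class ContentModel:
    # The domain being taught, listed in prerequisite order.
    concepts: list = field(default_factory=list)

@dataclass
class TeacherModel:
    # Pedagogical strategy: pick the first concept not yet mastered.
    def next_action(self, learner: LearnerModel, content: ContentModel) -> str:
        for concept in content.concepts:
            if concept not in learner.mastered_concepts:
                return f"present hint for {concept}"
        return "advance to next topic"

content = ContentModel(concepts=["mitosis-phases", "cytokinesis"])
learner = LearnerModel(mastered_concepts={"mitosis-phases"})
print(TeacherModel().next_action(learner, content))  # present hint for cytokinesis
```

The point of the sketch is the division of labor: the teacher component consults both the learner and content components before choosing what to do next.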

The Kulik review found generally positive benefits for the AI studies, but indicated impact was smaller when the dependent measure was a standardized test rather than a local test, when the sample size was small, when learners were from the lower grades, when the subject was math, when multiple-choice tests were used as the dependent variable, and when the tool studied was Cognitive Tutor. For those interested in this type of approach, the review identifies a number of the systems available for use.

My interest in the potential application of the AI tools now available takes a somewhat different approach and suggests that educators and researchers begin with an analysis of the techniques used in successful studying and tutoring and attempt to translate these techniques into tasks that students or educators can apply using AI. I purposefully focus on the research on studying as a general way to think about the cognitive activities of learners following initial experiences, which could involve lectures, readings, or any observation of what happens in the world. Simply put, learning requires the processing of external experiences for understanding, retention, and application, whether entirely internally and unaided or encouraged by additional external activities (e.g., taking notes, answering questions, discussions with a partner). The natural language capabilities of large language AI allow approximation of these external activities. I have attempted to demonstrate what some of these activities might look like in an earlier post.

My prediction is that companies serving the education market will quickly combine the types of AI approaches I have described here (the multi-model approach with the more flexible capabilities of large language models) because they have the resources required to do so. You may already see the direction in which this is going by taking note of the efforts of the Khan Academy (Khanmigo). 

I do think there are immediate opportunities to take advantage of the tools now available. One distinction that I think educators should consider involves whether activities are applied to the knowledge base used to train the models or applied to designated content. For example, you can have ChatGPT ask you multiple-choice questions about mitosis, or you can ask some AI tools to ask you questions about a PDF or other designated content about mitosis. My prediction is that commercial ventures will more likely take this second option, as a textbook company would want to focus an AI tutoring approach on individual textbooks. There is nothing inherently wrong with either system. I do believe there are more errors when using the more general knowledge base. I also see advantages in controlling the scope of experiences to a designated body of content as a way to control the depth of the experiences. What I mean is that the discussion of mitosis in a sophomore biology class will be different than in a graduate class because the knowledge base would be different.

I have explored the use of ChatPDF (not ChatGPT) to engage with the content of a PDF in a previous post. I have been exploring Mem.AI as a way to engage in similar interactions with notes I have taken within this service (you do need the $8 a month version). What I like about this system is that it allows a user to focus the AI prompts on specific material. For example, I could use the service to take class notes on a given day, tag these individual notes or a larger page of notes with the date, and then write a prompt to engage with the content identified by this tag. I could also tag notes with a class identifier (PSY101) and then write prompts focused on this tag to review for exams covering more material. I have included several different types of interaction in the material that follows. This material includes my prompts and the AI responses. 

***********************

Me: Using only my knowledge base with tag generative, write a summary of the activities that are classified as generative. Include a list of notes providing this information.

Generative learning activities involve actively engaging with the material to create new knowledge and connections. Some key strategies that promote generative learning include summarization, teaching, self-testing, self-explaining, and notetaking with revision and elaboration. These strategies require learners to select and organize relevant information, elaborate on the material based on personal knowledge, and integrate new information with existing knowledge.

Here are the notes from your knowledge base with the tag “generative”:

  1. Brod’s generative
  2. Summarize my notes on generative learning
  3. Cognitive Activities

These notes provide valuable insights into generative learning strategies and their effectiveness across different age groups, as well as the cognitive processes involved in these activities.

Me: Using only my knowledge base with tag june9, write a summary of the models the author described

Based on your knowledge base with the “june9” tag, here’s a summary of the models described by the author:

In the review by AlShaikh and Hewahi (2021), the authors discuss the development of Intelligent Tutoring Systems (ITS) and their use of multiple models. These models include:

  1. Learner Model: This model represents the student’s knowledge, skills, and learning preferences. It helps the ITS to adapt its teaching strategies to the individual needs of the student.
  2. Teacher Model: This model represents the teaching strategies and pedagogical knowledge used by the ITS to guide the student’s learning process. It helps the ITS to provide appropriate feedback, hints, and explanations.
  3. Content Model: This model represents the subject matter being taught by the ITS. It includes the concepts, relationships, and problem-solving procedures relevant to the domain.

These models work together to create a personalized learning experience for the student. However, the authors note that recent large language models and conversational approaches offer new possibilities for influencing learning without built-in strategies, which could be an important difference in the application of newer general tools.

Me: Using only my knowledge base with tag june9, write a multiple choice questions for me to answer.

Based on your knowledge base with the “june9” tag, here’s a multiple-choice question for you to answer:

In the context of Intelligent Tutoring Systems (ITS) described by AlShaikh and Hewahi (2021), which of the following models is responsible for representing the student’s knowledge, skills, and learning preferences?

A. Learner Model
B. Teacher Model
C. Content Model
D. Language Model

Me: A

That’s correct! The Learner Model in Intelligent Tutoring Systems is responsible for representing the student’s knowledge, skills, and learning preferences. Great job! If you have any more questions or need further clarification, feel free to ask.

References

Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42-78.


Summer – time for relaxed exploration

Summer is a great time for educators to do a little reading and explore a bit in preparation for next Fall. Our grandkids are either out of school or will be within a week. The educators in our family are soon to have more time. I usually offer some suggestions for technology-related and classroom-relevant summer explorations at about this time of year.

I seem to be spending so much of my time lately exploring and writing about AI. It is hard to get away from this topic and the uncertainty related to applications and challenges. Everything about AI seems mysterious and, as a consequence, unsettling. As I have written previously, I have been unable to find a book that provided the insights I felt I needed, and my related recommendation was that a combination of personal experimentation and online blog posts and resources is most productive. What follows are recommendations based on this perspective.

I have divided my recommendations based on two goals. First, I want to understand a bit about how AI works and to develop general “how to do it” skills. I don’t like the feeling of not understanding why things work the way they do. Without some sense of understanding, I have trust issues. At the other extreme, I want specific recommendations I can implement. I want examples, and variations on these examples, I can apply to content and topics of my choosing.

Second, I want specifics related to applications in education.

Here are some recommendations related to the first goal. The content is free with the exception of the Udemy course, which I have found useful. I tend to differentiate Google Bard applications from OpenAI applications in my explorations. It is worth spending some time with each, but because I have decided to pay to use several OpenAI API applications (applications built on the same model that powers ChatGPT), I am more experienced and have spent more time with OpenAI-related resources. Hence, I am more confident in these recommendations.

The AI Canon (Andreessen Horowitz)

Generative AI learning path (Google)

ChatGPT complete guide (Udemy – $15?)

As an educator, you may or may not feel the need I feel to invest time in developing a sense of how and why. The following are sources specific to education. The resource from the Office of Educational Technology focuses on AI in education, but lacks the specifics I want. It is a reasonable overview of the potential of AI in education. I am also somewhat put off by the constant emphasis on the message that AI will not replace teachers and that humans must remain in the loop, which I find obvious and unnecessary if there is a successful focus on useful applications. It seems there is a concern that those who would read the document in the first place need to be convinced.

I have included one blog post I wrote a couple of months ago. I added it because it is the type of effort I want to read: one focused on how AI might be used for a specific educational goal. I cannot evaluate the quality of this offering, but I think efforts concerning concrete uses educators can try and/or assign now are the type of thing educators are looking for. I don’t believe in recipes, but my effort was intended to focus on opportunities to address a need and to encourage exploration. I think we are at this stage with AI use in classrooms, and the summer is a good time to explore.

Artificial intelligence and the future of teaching and learning (Office of Educational Technology)

Google Bard for educators (Control Alt Achieve)

AI tutoring now (me) 


$20 a month is too much

I am intrigued by large language AI applications, but I admit I am confused. It is not so much trying to understand at a deep level how these services do what they do; I gave up on investigating the “how” question long ago. I am confused by pricing. What am I likely to have to pay, say, six months from now when we get past the free experimental stage? I understand that ChatGPT Plus is $20 a month. This seems like a lot. What makes it seem like a lot is how little I pay for the various services I access through an API.

It has taken me some time even to figure out which of the AI activities I use through services other than ChatGPT or Bard I am actually paying for. I know I have an OpenAI API account (so Bard is not included), and I know I have had to generate and attach licenses to other services. I should have labeled the licenses from the start; that would have made identifying where my charges come from easier. However, whichever services generate a cost, I can’t seem to run up much of a bill.

I am not a heavy user of AI services, but I do spend time each day exploring. So far, my bill for the month is a nickel. If the rate for ChatGPT Plus is the same as for the API use, I can’t imagine what it would take to run up a $20 bill.

For the curious, the following is the OpenAI Usage page allowing users to set a limit. I originally thought this limit was just a sum I would allocate and then utilize, but it turns out it is the limit I set for myself for a month. Obviously, this is far too high, but you pay only for what you use (sounds like a television ad). I have no idea how usage fees will be set in the future, but for now you get a lot of activity for little money.
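To illustrate why light use stays so cheap, here is the arithmetic. I am assuming the $0.002 per 1,000 tokens rate that OpenAI listed for gpt-3.5-turbo around this time; check current pricing before relying on this figure:

```python
# Hedged example: per-token API billing at an assumed rate.
RATE_PER_1K_TOKENS = 0.002  # dollars; gpt-3.5-turbo's listed rate at the time

def usage_cost(tokens: int) -> float:
    """Dollars charged for a given number of tokens at the assumed rate."""
    return tokens / 1000 * RATE_PER_1K_TOKENS

# A nickel of usage corresponds to roughly 25,000 tokens:
print(usage_cost(25_000))  # 0.05
# Running up a $20 bill would take about 10 million tokens:
print(round(20 / RATE_PER_1K_TOKENS * 1000))  # 10000000
```

Ten million tokens is on the order of seven million words of generated and processed text, which explains why casual exploration barely registers on the bill.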

I have tried to find clear descriptions of fees online, but I cannot find a useful summary of pricing and options. I finally decided just to ask ChatGPT.

What is the difference between OpenAI API prices and ChatGPT Plus?

OpenAI API prices and ChatGPT Plus are two different products, with different features and pricing models.

OpenAI API is an artificial intelligence platform that provides developers with access to a wide range of state-of-the-art models, including natural language processing, machine learning, and computer vision. OpenAI API pricing is based on usage, with different rates for each API endpoint. Developers are charged based on the number of requests made to the API, with pricing starting at $0.001 per request.

ChatGPT Plus, on the other hand, is a specific natural language processing tool that uses the GPT-3 language model. It is designed for chatbot development, and includes features such as sentiment analysis, entity recognition, and conversational response generation. ChatGPT Plus is priced per month, with different pricing tiers based on the number of API calls and the level of support provided.

In short, OpenAI API provides access to a wide range of AI models with pricing based on usage, while ChatGPT Plus is a specific chatbot development tool based on the GPT-3 model with monthly pricing.


Thoughts about the value of online content

I was listening to a recent podcast from This Week in Tech (TWIT), and the program began, as so many of the podcasts I listen to recently do, with a lengthy discussion of AI. Part of the discussion concerned the role AI tools will likely play in the election of 2024. Besides the concern that AI will likely be used to produce credible “deep fakes” before the election, the discussants raised an issue I had not considered. They predicted that AI will make political messages of all types so easy to produce that most people will simply be overwhelmed by messages and find it difficult to sort through this barrage to make meaningful personal decisions. I do have advice to offer on this issue (return to RSS and follow those you trust), but I will save the longer advocacy for a future post.

The issue of being overwhelmed by content is not new, but the quantity is ramping up, and AI content generation will likely make this trend even more significant. This post addresses this issue from the perspective of a content creator. I author three blogs, two focused on technology in education and one on more general content. While I believe I have a credible background for commenting on technology in education, making recommendations, and creating tutorials, are my expertise and experience really that valuable? For example, why should a learner follow bloggers or search for relevant content when AI sources will generate credible tutorials and discuss this content with you when requested? I recognize that AI can drift off course and that a tool such as ChatGPT is not a good source for recent topics because it was trained a year or so ago, but for established, factual content, AI does a credible job.

I do not intend to quit writing, but thinking about the issue of relevance is probably important. I do think there is still a role for experience and expertise. I also think there is an important role for analysis. Factual content should be easy for AI (yes, I can point to specific examples of errors; last night I asked Bard to provide a description of Target Field while I was at a Twins game, and the service did a great job but claimed that Target Field has a retractable roof, which I can guarantee it does not have), but when there is a substantial body of content for an AI service to access, factual errors will become rare for topics that generate consistent descriptions. 

Here is one observation that may be relevant. I find the data generated by my blogs interesting to consider. Which posts generate the most interest? Again, I think I have the credentials to write authoritatively about issues related to technology use in teaching and learning. However, the most popular post I have written by far was, and continues to be, about a visit my wife and I made to the Amish greenhouses of southern Minnesota. I wrote this post in 2019 and have commented on related issues since (what about photographing the Amish?), and even this year this post remains the most popular thing I have written.

Why? I can only guess, but I think this post filled a unique niche, providing information on a topic that continues to be difficult to find elsewhere. The Amish, as a function of their personal beliefs, are unlikely to use online sources to advertise their business ventures. You can find simple listings, but very little of any depth, including images of just what these greenhouses are like. Gardeners in the spring are interested in inexpensive places to purchase quality plants, and the lure of visiting a location allowing interaction with people with a very different worldview is an added benefit. As the weekends this time of year approach, the hit count of this post (and follow-ups) jumps dramatically. One of the more recent posts included a photo of the hand-drawn map showing the farms with greenhouses, and this evidently also was of significant value.

So? I guess I write because I like to write and the process teaches me interesting things about myself and the world. Sometimes, I hit on a topic that evidently provides a similar experience for others. 

We just returned from our annual trip to the Amish Greenhouses and I can’t resist including a few pictures. I always ask permission when I take photos that could potentially include people. The farmer who arrived to restock one 

I am a serious gardener and I start the seeds for my vegetable plants inside before transplanting, so I am interested in how greenhouses go about their business. I asked Google Bard about a specific topic that interested me. Because Bard is sometimes regarded as factually suspect, let me offer my personal observation on the question I asked. The vegetable plants sold by the Amish appear to be started at the greenhouses from seed. The flowers are either started from plugs or purchased as potted plants from greenhouse wholesalers.

Do the Amish raise their plants from seed?

The Amish do raise plants in their greenhouses from seeds. They also sell plants that they have propagated from other plants. In some cases, they may also buy plants from other growers.

The Amish are known for their commitment to quality and sustainability. They use only natural methods to grow their plants, and they do not use any pesticides or herbicides. They also take great care in selecting the seeds that they use.

The Amish greenhouses in Minnesota are a great place to find high-quality plants at an affordable price. The plants are grown with care and attention, and they are sure to thrive in your home or garden.

Here are some more details about how the Amish raise plants in their greenhouses:
They use only natural methods to grow their plants.
They do not use any pesticides or herbicides.
They take great care in selecting the seeds that they use.
The plants are grown with care and attention.
They are sure to thrive in your home or garden.

If you are looking for high-quality plants at an affordable price, I encourage you to visit an Amish greenhouse in Minnesota. You won’t be disappointed.


Sharing my MEMEX content

Social bookmarking is a method of saving, organizing, and sharing web pages or online resources that you find useful or interesting. It allows users to store bookmarks or links to web pages on a social bookmarking website, rather than saving them in their web browser’s bookmarking tool.

The process typically involves creating an account on a social bookmarking site and then adding bookmarks by submitting the URL of a webpage along with a title, description, and relevant tags or keywords. These bookmarks are usually public and can be viewed and accessed by other users of the social bookmarking site.
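The record behind each bookmark is simple. Here is a sketch of the fields just described (the values and the class itself are invented for illustration, not any service's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Bookmark:
    # The fields a social bookmarking service typically stores per entry.
    url: str
    title: str
    description: str = ""
    tags: list = field(default_factory=list)

bm = Bookmark(
    url="https://example.com/article",
    title="Example article",
    description="A page worth revisiting.",
    tags=["notetaking", "ai"],
)
print(bm.tags)  # ['notetaking', 'ai']
```

The tags are what make the collection searchable and shareable; everything else is just metadata attached to the URL.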

One of the key features of social bookmarking is the ability to discover and explore content based on the recommendations and interests of other users. Users can browse through popular bookmarks, search for specific topics or tags, and subscribe to the bookmarks of other users whose interests align with their own.

Social bookmarking can be a valuable tool for organizing and managing online resources, as it provides a centralized platform for storing and accessing bookmarks from any device with an internet connection. It also enables collaboration and knowledge sharing, as users can follow each other, comment on bookmarks, and engage in discussions around specific topics.

I use several social bookmarking services, and this post concerns MEMEX. This service offers some unique capabilities not available in what I would describe as first-generation services (e.g., Diigo). Memex (Memex Garden for desktop and Memex Go for tablet) is really designed for collaboration. What I mean here is that individuals share back and forth with other members of a team. Among the features shared when used in this way is discussion/commenting. I don’t really want things to work that way when sharing with others I don’t know, because I don’t want to have to moderate comments. I simply want to make the resources I have generated available.

Memex provides a way for me to create what the service refers to as Spaces, which are categories of resources I have created. I can share access that is read-only. Others can explore the resources within a Space, use AI to generate summaries of the resources to see if further investigation might be useful, and use the link to move to the original source. Memex could allow my personal highlights and annotations to be shared as well, but I would have had to indicate that I wanted each individual highlight or annotation to be public as it was generated. The default is private, and my collection is too large to make the thousands of such existing additions public at this point. New additions to this collection will include public annotations.

Here is a page that contains links to the Spaces I have generated to this point. As already stated, Memex is really designed for a different model of sharing, and sharing read-only content is not the way the developers imagined the service being used.

The content within a Space appears as shown below. I have applied the AI summarization tool to generate a summary for the first bookmark (Critical thinking stems ….) filed as part of the Space Instructional Materials. The idea is that a user would generate summaries of articles that spark interest based on the article titles and then follow up by accessing the articles that seem useful.

The highlights I generated for a different article are accessed with a different icon and offer a different way to make the decision to access the full article.

In summary, what I hope to provide here is a way to learn something about this social bookmarking tool and to take advantage of the time I have invested in using this system to accumulate online resources I have found useful. Here is the page that links to the individual Spaces.
