Citizen Science Month and Bird Buddy

It is Citizen Science Month. For science educators, this is an opportunity to recognize the role citizens in your community play in contributing to science and perhaps to start a project that allows students to contribute as well.

The Citizen Science website offers explanations and examples, but I have several of my own (e.g., a personal weather station). Recently, I have been exploring the potential of Bird Buddy. My wife became interested in this product as a Kickstarter project and invested. Bird Buddy is essentially a bird feeder that contains a motion-activated camera. The feeder uses Wi-Fi to connect to your phone.

Images that are shared with you are also sent to Bird Buddy, and the company uses AI to attempt to identify the birds in the images you capture.

In addition, Bird Buddy has initiated a project called HeartBeat that collects information about the birds photographed in different areas of the world (the device is now being sold worldwide). As a participant, you can access these data and view some of the photos being captured around the world.

An interesting feature of Bird Buddy is that you can share images you capture with others you designate. To my knowledge, educators have not taken advantage of this opportunity, but it would be possible to collect data and share images with other classrooms.

Complexity of the textbook cost issue

Capitalism is not a perfect system. No method of encouraging productivity and fair compensation is. Rather than making the effort to sort through the complexities involved in the creation and manufacturing of a product or the delivery of a service, it is too easy to seize on a single issue and feel justified in some conclusion you have reached. I want to argue that this is the case in the public perception of the cost of textbooks.

Why am I writing about this topic? My motive comes from comments made in reaction to the legal decision against the Internet Archive and by advocates for Open Educational Resources. My intent is to broaden the discussion a bit to address parts of the complexity that are often ignored.

I am the author of a couple of college textbooks and I was a college faculty member. This combination places me in a position to observe not only the actions of publishers but also those of students, bookstores, and those who comment on various aspects of the book business and the use of commercial instructional materials. A little about me as a textbook author. While I wrote for a small, specific market, I would argue I was successful. The textbook my wife and I wrote had a run of five editions with major textbook publishers. Our book was imagined as a book that undergraduates in education programs would use in a course with a title something like “Technology for teachers”, but it was also used in graduate courses and by individuals interested in the topic. We now have the rights to our book because we were unable to work out an arrangement that would offer a $29 Primer in combination with related web content. The idea was that we would update the web content continuously between editions and organize this content to expand the Primer. In fairness and full disclosure, we wanted to be paid for the continual process of writing rather than a furious revision effort when a new edition was authorized. We now sell the Primer as a $9 Kindle book in an effort to offer a version of our vision.

The issue of textbook cost and what the cost provides:

I have written about aspects of the textbook issue for years. One of my favorite posts was headlined “The beer money ploy”. While the title may seem unrelated, the post explored an aspect of the perceived cost of textbooks that is often ignored. I see parents and politicians talking about the cost of textbooks all of the time. Kids are paying $600+ a semester for textbooks in some fields of study. This seems possible. Our textbook was once sold for between $100 and $140 depending on the supplemental materials that were bundled, and books of this type cost little compared to books in math and the sciences. Here is the thing about this cost, as explained in “The beer money ploy”: this is not the actual cost to the student. Nearly all bookstores and many online outlets purchase used textbooks for 50% of the sales price. So a $600 bill at the beginning of the semester allows a resale of $300 to the campus bookstore or an online outlet at the end of the semester. If you don’t explain this to your parents, you have $300 to spend as you wish. It is true that bookstores keep an eye on what books have been ordered for next semester and use this when purchasing used books, but the proliferating online services don’t worry about this for a given institution. Used books may be resold several times, allowing bookstores and online services easy money for putting books back on the shelf or on the online market.

So, an author or the company paying the author a percentage of the wholesale price to the bookstores has to make the money on the original sale only. This means the company must raise the price compared to what it would charge for a consumable item. Textbook companies have begun participating in the used market and now may lease textbooks. This solves their problem to some extent.

Textbook companies are not without blame. They spend some of the money from sales on selling. Unlike a bookstore, textbook companies pay sales reps who visit campuses and individual instructors. Of course, this contributes to the cost of textbooks but seems necessary to get instructors to take a look at the books they might consider. The effort instructors devote to exploring the multiple options they have for assignment is a related issue. Bookstores want instructors to continue with existing assigned books. Textbook reps promote their most recent offerings in a given space (large courses almost always encourage multiple books from a single publisher because of the amount of money involved), arguing for the value of current information. While this is true, and there may be other good reasons for considering a different book, new book adoptions are also how the company, author, and sales reps make their money. See the above description of the used book market. See the previous comment on capitalism.

Publishers

What you pay for with a typical commercial publisher in comparison to a roll-your-own approach:

A commercial publisher spends money on people who perform functions that may be diminished in self-publishing or absent in the online material an educator might identify and patch together as an information source. When you develop a commercial textbook, you work with an editor who comments on everything from writing style to the importance of content included or perhaps omitted, to embellishments such as when an example, feature box, or chart might be helpful. Sometimes they work with you to cut down the amount of material to meet some cost-to-page-count target. They liked that I put content on the web to support my book because that became an option they could recommend for material I had spent hours to create and they wanted to drop.

Commercial publishers have specialists who check that every reference you include is accurately cited at the end of the book, and people who specialize in creating an index. They pay photographers to provide images appropriate to the content and people who transform charts and graphs from sketches authors might provide. Pages are carefully laid out, translating page after page of generic text into something that has a professional appearance. Is this worth it? I have different feelings about different things. I am constantly annoyed when I cannot find a reference that an author of a self-published book forgot to add. An appendix of key terms is helpful. The layout I like, but I read a lot of material in ebooks that lacks this feature. I read theses and dissertations that are hundreds of pages long, and I clearly would rather read something in a more pleasing format. However, I can still read this material, and I would not want students to spend their time polishing the appearance of the content they create.

I have already mentioned sales reps who bring free copies of textbooks (another expense) to the offices of instructors, perhaps after looking up what you teach and quickly reading the promotional materials about books they have available on that topic. Necessary? Not in an ideal world, but I also know that few instructors spend time looking through even a few books to select the one they will assign. Some do; many just make a selection based on what they found acceptable in a previous edition, the scholarly reputation of the author(s), or a quick examination of a few topics of personal interest. I admit I tended to switch back and forth between a couple of books I liked, so I fall somewhere in the middle of this laziness continuum. Being forced to read the book I used at least every other year was my motivation.

One caveat to what I have said. My opinions probably best apply to lower division service courses and less to upper division courses for majors or graduate students. This has to do with the background of the instructor (see following comments), the uniqueness and depth of the content, and what type of literature best suits the purpose of a course. For example, advanced courses are more likely to require exposure to multiple authors who have specific expertise and primary rather than secondary source material.

Authors

Expertise

I would argue that writing a quality lower-level survey textbook in many ways requires more preparation than a specialized upper-division textbook. I have found that the survey course requires that I address topics I know should be included but am not prepared to address. I may be able to write on topics generically because previous teaching experience means I should know more about most topics than students, but quality instruction, and even more so writing instructional content, requires a depth of knowledge beyond what ends up conveyed in what is actually written. When you write in an actual area of expertise, the range of topics is much more restricted and you are likely an active researcher/scholar in that area. When I try to explain this to people, I use the example of copyright, specifically fair use and what classroom teachers can do to present content online, whether created by themselves or by their students. I started reading about this topic and encountered something called the TEACH Act (Technology, Education, and Copyright Harmonization). I learned that the purpose of this legislation was to place online and in-person instruction on an equal footing when it came to fair use for instruction. Sounds important for K12 teachers to me. To qualify for this equal footing, there were certain expectations, such as protected access limited to students in actual classes. By my understanding of what this means, unrestricted access, as would be allowed when using popular tools such as web environments (Google Sites), should not be used to share content allowed for classroom instruction because access is not limited, whether intended or not, to enrolled students. I started asking lawyers with educational responsibility (my university attorney) and experts talking about educational fair use at conferences, and they all were baffled by the question and even by the existence of the TEACH Act. I have not found reference to this act in other technology textbooks in their discussions of fair use.

This is just an example, but I offer it to make the point that there is some unique work required when creating the background for writing about topics beyond what I would describe as the typical expertise of most authors writing in a broad academic area. There may be far more work required than you realize if you have not written a textbook yourself. Part of what authors are paid for is to do this work. Again, I have a specific kind of product in mind, and I assume that support for the statements made and the actions recommended has a basis not in personal opinion but in a careful review of expert opinions and research.

Summary

Understanding what is a fair price for work and the cost of a product depends on a realistic understanding of what it takes to generate that product. These comments were intended to communicate some of the factors consumers may not recognize.

AI in Social Bookmarking Tools

I presently make use of two subscription social bookmarking services (Diigo and Memex) and one service that is presently free (Glasp). If I were forced to select one for its capabilities, it would be Glasp. While I am annoyed when a service I have used for years adds new features I never use and then expects a bump in the subscription price, I have the opposite complaint about Diigo. It does what it has always done; there just have been no new developments in years.

To be fair, most of the features I find most interesting about Memex and Glasp have little to do with the social components. I have written about both services in previous posts so I will not review most features, but concentrate on how Memex and Glasp use AI.

When it comes to AI, my primary interest is how AI can be used to process specific content I have already identified. At present, these two systems differ in the scope of stored content they apply AI to. Glasp assumes I want to ask questions or request the processing of content I have accumulated over time (all bookmarked sources). Memex assumes I want to ask questions of a specific source. There are cases in which each is what I want.

Memex Garden (the computer-based version, not the tablet/phone Memex Go) allows the user to apply AI to a selected source (Ask, then select summarize selection, summarize the entire source, or ask a general question). In general, web pages are short enough, and my highlights and annotations easy enough to scan, that these options are not particularly useful. However, Memex allows the annotation of PDFs, which are long enough, and, because I mostly read journal articles as PDFs, complex enough that summarization may be helpful.

In contrast, Glasp applies AI across all of the content that has been collected. Note that this applies only to web content and not to Kindle book highlights and annotations, which can also be accessed through the service. Glasp does not presently allow the highlighting and annotation of PDFs. The nice thing about the AI application in Glasp is that it identifies the specific sources used in responding to inquiries.
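This behavior of citing the specific sources behind an answer resembles what is usually called retrieval-augmented generation: rank your saved sources by relevance to the question, then hand only the best matches to the language model along with instructions to cite them. The sketch below is my own illustration, not Glasp's actual implementation; the `BOOKMARKS` data, the word-count similarity measure, and the function names are all assumptions. A real service would use neural embeddings rather than word counts, and would finish by sending the retrieved passages to an LLM.

```python
from collections import Counter
import math

# Hypothetical stand-in for a user's bookmarked web pages (title -> text).
BOOKMARKS = {
    "Note-taking study": "laptop versus longhand note taking and retention",
    "Khanmigo announcement": "Khan Academy pilots a GPT-4 tutor for students",
    "Bird feeder review": "a smart bird feeder with a motion activated camera",
}

def _vec(text):
    """Crude bag-of-words vector; a real system would use embeddings."""
    return Counter(text.lower().split())

def _cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, k=2):
    """Return the k bookmarked sources most similar to the question.

    These titles are what the tool could report as its citations; the
    matching passages would be passed to the LLM to ground its answer.
    """
    q = _vec(question)
    ranked = sorted(BOOKMARKS, key=lambda t: _cosine(q, _vec(BOOKMARKS[t])), reverse=True)
    return ranked[:k]
```

Because the answer is assembled only from the retrieved passages, reporting which sources were used falls out of the design almost for free, which may be why Glasp can show them so readily.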

While I said I would not focus on social capabilities, Glasp does have an interesting AI social capability. Glasp allows a user to designate the account of another user and ask questions of that user’s public annotations. This may seem a little creepy, but the mission of Glasp clearly emphasizes the collective intelligence or efforts of users.

The following is an example in which I used the AI to query the public content of one of the developers.

Almost there

I have been able to exchange texts with both the Memex and Glasp developers expressing my interests. I am most interested in having an efficient way to explore the highlights and annotations I have applied to Kindle books and PDFs because these are the sources I mostly focus on in my professional work. I can understand why copyright issues may be a factor to consider in sharing highlights from commercial products (books, journal articles), but I would hope there would be an easy way for access to be limited to the sources I own. Perhaps notes could be treated as shareable, but not highlights. Goodreads allows the sharing of some highlights, so some sharing must be allowed.

ChatPDF as tutor

Educators concerned about AI and unable to generate productive ways their students could use AI tools need to check this out. The tool is called ChatPDF and is available using a browser or an iPad. At this point, it is free and available without an account.

Once connected, you upload a PDF. I wanted to give it a significant challenge and something I could evaluate easily for accuracy, so I took a chapter I had written (the chapter on learning as applied to technology from the textbook, Integrating Technology for Meaningful Learning, that I wrote with my wife) and uploaded it as a PDF file. I then began to ask for explanations, examples, and questions relevant to that chapter. I responded to the questions the AI tool generated and had my answers evaluated. What I have long thought potentially valuable about AI is the role it might play in functioning as a tutor. How can learners get flexible assistance when studying that they can shape to their needs? How can students discover what their needs are and then have their challenges addressed?

While the system did require that I restart a couple of times, perhaps because I was working from a coffee shop with a sketchy connection, I was very impressed with the quality of the system. By quality, I was primarily interested in the accuracy of the content. Were the explanations accurate and different enough from the wording in the chapter to offer a reasonable opportunity for a learner to achieve a better understanding? Were the questions posed more than simplistic keyword vocabulary checks? Was the system flexible enough to understand me even when I got a little sloppy? 

Any educator should evaluate similar issues for themselves using material they might assign. I understand that content they might like to evaluate may not be available in PDF format, but as I understand the developers, there is already a Google Docs version and soon will be a Word version.

There are a few differences I observed between the browser and app versions. The app version references short segments following its replies, and the browser version gives a page number. I guess my preference would be the page number, as I see value in a learner being able to go back and forth between the book (PDF) and the AI tool. In what I have read about this tool, there was a warning about the difficulty the system has making connections across different sections of a document, and this must apply to transfer/application external to the document as well. I make no claim that using this AI tool as a tutor is the equivalent of working with a knowledgeable human tutor, but I would argue few students have the advantage of a knowledgeable human tutor at all times.
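The warning about cross-section connections is consistent with how document Q&A tools commonly work: the PDF's text is split into overlapping chunks, and only the chunks most similar to your question are sent to the model, so relationships between distant sections can be missed. The following is a minimal sketch of that chunking step under my own assumptions, not ChatPDF's actual code; the function name and parameters are illustrative.

```python
def chunk_text(text, size=500, overlap=100):
    """Split text into overlapping character chunks.

    The overlap keeps a sentence that straddles a boundary visible in two
    adjacent chunks. But content relating passages far apart in the
    document still lands in separate chunks that the model may never see
    together, which would explain the cross-section limitation.
    """
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

A tool built this way can also report which chunk (and thus which page) an answer came from, which matches the segment and page-number citations the two versions display.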

Take a look. Some example questions and responses the system generated are included in the following images.

The following image may be a little difficult to read, but I was trying to show the text segments the system tells you it primarily used to generate the content it displayed.

Evaluating AI Tools

Like so many other educators, I am trying to decide what I think of AI tools. Granted, the tools produce some reasonable prose. However, how accurate is the content generated? Here is an example.

I recently generated a post focused on the AI article summarization capability added to Memex Garden. The summarization I offered as my example was based on this article describing the new AI capabilities being added to Khan Academy resources. I understand there are several references here to AI, but the focus should be on the summary of the article. The summary appears below.

Khan Academy has launched a limited pilot of a new learning guide called Khanmigo for educators, powered by GPT-4, the next generation of OpenAI’s large language model technology. GPT-4 is the successor to OpenAI’s GPT-3.5, which ChatGPT is built upon. It can accept images as input, handle up to 25,000 words of text at once, and is designed with more safety and alignment features, among other changes.

Khanmigo is designed to help teachers create personalized learning plans for their students. It uses GPT-4 to analyze student data and provide recommendations for activities and resources that will help them learn more effectively. The tool also includes features like progress tracking and assessment tools to help teachers monitor student progress.

The pilot program is currently limited to a small group of educators, but Khan Academy plans to expand it in the future. The company believes that Khanmigo has the potential to revolutionize education by providing teachers with powerful tools to help them personalize learning for each student.

The summary sounds great, but after reading the article I realized I had a very different interpretation of its focus. By my interpretation, the key capability of Khan Academy’s Khanmigo was to engage interactively with students as a tutor, asking about their understanding and then attempting to guide their thinking by asking questions that would encourage them to think in a different way about what they were trying to understand. My interpretation would suggest that the most important capability was to serve as a tutor to students more than to provide guidance to teachers.

So, compare my interpretation with the second paragraph of the summary provided by the AI. I would encourage your attention to the original article to see if you agree with my assessment.

I copied a few of the statements from the original article to show where I got my interpretation.

In a blog post, Khan Academy Founder Sal Khan wrote: “When GPT-4 is carefully adapted to a learning environment like Khan Academy, it has enormous potential. It can guide students as they progress through courses and ask them questions like a tutor would. AI can assist teachers with administrative tasks, which saves them valuable time so they can focus on what’s most important — their students.”

I think there is a big difference between arguing that a product helps the student versus helps the teacher simply because these positions mean very different things to me as someone interested in the history of mastery learning and the role of tutors in this instructional approach. Is this quibbling? If my interpretation is correct, I don’t think this is a difference of no consequence.

John Henry vs AI

I have been having a discussion on a Mastodon instance about the knowledge base supporting AI apps. It was initiated by someone who proposed that if too many content creators develop content using AI, and that content feeds back into what AI is trained on, wouldn’t the improvement of the knowledge base bog down rather than continue? This made some sense to me.

Imagine the following comparison. I am a retired academic, and in retirement I have transitioned from doing research to writing based on my reading of research. This is where the John Henry reference comes from. Instead of John Henry the “steel driving man” against the steam engine, it is Mark the blog-writing man against ChatGPT. I read stuff and then write stuff. ChatGPT scans stuff and then, with prompts, writes stuff. We both can now only build from the knowledge base that exists. Neither ChatGPT nor I can contribute new insights back to that knowledge base. We both can summarize and interpret, but not hypothesize and test.

Granted ChatGPT has the potential to access more content than I can possibly read. I think I can speculate and propose in ways that ChatGPT can not, but I admit I cannot test my speculation.

Without new inputs, we may get to a point similar to that of those Republicans who suggest that Fauci lies. I would argue that a scientist being wrong, as viewed in hindsight, is probably not proof that the scientist has lied. It may be that the scientist was relying on the existing data and made an adjustment when new information revealed that an old idea was wrong. Likewise, ChatGPT needs new and independent inputs to the knowledge base to advance.

I have been exploring a related idea by using ChatGPT to comment on topics I know to be based on research reaching conflicting conclusions. An issue within my own domain of study that fits this scenario is the question of whether college students should take class notes using laptops or paper and pencil. Perhaps this is not an issue many see as earth-shattering, but the point is that here is an issue that is unresolved, yet practitioners are relying on positions taken to make decisions. I land on the laptop side because of the long-term benefits I see in collecting notes in a way that allows the learner to make use of them in more powerful ways over time.

The more common position at present is that taking notes by hand is more productive. Some studies do indicate this is the case, but I keep asking for a rationale that would explain the advantage. The best existing explanation I know of goes something like this. Most students can enter text on a keyboard at a significantly faster rate than they can write by hand. On the surface, this might seem to be an advantage for laptops. However, the argument speculates that because of this limitation, pen-and-paper users think more carefully about what they will record and benefit from this more active cognitive behavior. Transcription, or something close to it, involves less thinking. OK, perhaps, but this position ignores that keyboarding is under the control of the laptop user and that they could use the more efficient input to think more and record the same amount as the paper users. Does this seem reasonable?

So, I presented this “issue” to ChatGPT to see what it would say. Here is what I got.

Is it better to take notes on a laptop or by hand on paper?

Both taking notes on a laptop and by hand on paper have their advantages and disadvantages, and the better option depends on personal preferences and the situation at hand. Here are some factors to consider:

Advantages of taking notes on a laptop:

– Faster typing speed: If you are a fast typist, you may be able to take notes more quickly on a laptop than by hand, which can be helpful if the speaker is speaking quickly or if you need to take a lot of notes in a short amount of time.

– Searchable notes: With digital notes, you can use the search function to find specific information quickly.

– Easy editing: You can easily revise, reorganize, and format your notes without having to cross out or erase anything.

– Convenience: You can have all your notes in one place and access them from anywhere if you save them to the cloud or a digital note-taking app.

Advantages of taking notes by hand:

– Better retention: Some research suggests that taking notes by hand may lead to better retention and understanding of the material compared to typing. This is because handwriting engages different parts of the brain and can help with processing and remembering information.

– Less distracting: When you take notes by hand, you are less likely to get distracted by notifications or other apps on your device.

– More flexibility: You can draw diagrams, arrows, or other symbols easily by hand, which can be helpful for visual learners or when taking notes in math or science classes.

Ultimately, the better option for you depends on your personal preferences and the situation at hand. If you are more comfortable with a laptop and need to take a lot of notes quickly, then typing may be the better choice. If you prefer handwriting and want to retain the information better, then taking notes by hand may be the way to go.

Can you provide citations I should read on this topic?

Mueller, P. A., & Oppenheimer, D. M. (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science, 25(6), 1159–1168. https://doi.org/10.1177/0956797614524581

Carter, M. J., & Van Matre, N. H. (2016). Note-taking and Handouts in the Digital Age. College Teaching, 64(2), 67–77. https://doi.org/10.1080/87567555.2015.1124408

Kiewra, K. J., Benton, S. L., Kim, S., Risch, N., & Christensen, M. (2019). Investigating the Benefits and Drawbacks of Digital Versus Traditional Lecture Capture. Journal of Educational Psychology, 111(5), 725–741. https://doi.org/10.1037/edu0000315

Kontra, C., Lyons, M. J., Fischer, S. M., & Beilock, S. L. (2015). Physical Experience Enhances Science Learning. Psychological Science, 26(6), 737–749. https://doi.org/10.1177/0956797615572659

I hope you find these resources helpful!

Mark the blog-writing man cannot compete with this. I have read several, but not all, of these references. ChatGPT hedges, even though in practice students use one system or the other. The specific comment about retention claims it is best with handwritten notes. The “Pen is mightier than the keyboard” study is frequently cited to support requiring students to use pen and notebook instead of a laptop. I described this study in a previous post. I believe I have an advantage in being able to add analysis to this issue.

The issue is whether the knowledge base ChatGPT or I use will improve as new evidence is translated into content. This is a complex challenge that depends on continued incentives for researchers and those who work to translate and share what these researchers contribute. Any change in the incentives supporting these individuals could reduce the improvement of the knowledge base that guides human behavior. ChatGPT and other AI systems ultimately rely on a knowledge base humans built. Making summarization and communication easier or less costly does not change the need for constant upgrading of this knowledge base.

The AI in Mem AI

I have been using Mem AI as a smart notetaking system and as an alternative to Obsidian. This post demonstrates the present AI capabilities, which focus on the content I have added. Mem AI proposes that it has differentiated itself from competing products such as Obsidian because its AI capabilities are intended as an alternative to personally generated links among notes.

The summary generated made sense at the beginning and then drifted into other content that did not seem related to the directions I gave.

Here is the summary generated by the app and shared.
