Show Your Work – Improve Argumentation

Thinking is not visible to ourselves or to others, and this reality limits both personal analysis and assistance from others. I have always associated the request to show your work with learning math, where the subprocesses of a solution can be examined, but the same advantage applies to other processes as well. I have a personal interest in the ways technology can be used to externalize thinking and in the unique opportunities technology offers compared with other methods of externalization such as paper and pen.

Ideas from different areas of interest sometimes come together in unexpected ways. This has been a recent experience for me involving a long-term interest in argumentation and digital tools applied to learning. Argumentation may not spark immediate recognition among educators. It sometimes helps if I connect it with the activity of debate, but it relates to many other topics such as critical thinking and the processes of science as well. It also relates directly to issues such as the spread of misinformation online and what might be done to protect us all from this type of influence.

For a time, I was fascinated by the research of Deanna Kuhn and wrote several posts about her findings and their educational applications. Kuhn studied what I would describe as the development of argumentation skills and the educational interventions that might address the limitations she observed. It is easy to see many of the limitations of online social behavior in the immaturity of middle school students engaged in a structured argument (debate). Immature interactions involving a topic with multiple sides might be described as egocentric. Even though the participants are interacting around a common topic, they mostly state the positions they take, frequently without supporting evidence. As they go back and forth, they seldom identify the positions taken by an “opponent” or offer evidence to weaken those positions. Too often, in the “adult” online version, personal attacks follow and little actual examination of the supposed issues of interest occurs.

The process of clearly stating positions and the evidence for and against them maps easily onto what we mean by critical thinking and the processes of science. In the political sphere, what Kuhn and similar researchers investigate relates directly to whether policy matters remain the focus when opinions differ.

Externalization and learning to argue effectively

Kuhn proposed that to improve (develop) critical thinking skills, learners would benefit from experiences encouraging reflection. An approach that proved productive across multiple studies relied on two techniques for encouraging reflection. Across multiple age groups (middle school, high school, and college students), she had pairs of participants argue using online chat. A pair had to agree on a given “move” or statement before submission (externalizing rationales for consideration), and submitting statements in chat both allowed an opportunity to focus on the message without interference from the face-to-face pressures present in formal debate and created a record that could be critiqued. In some studies, the participants were also asked to complete forms stating the positions taken by opponents and the evidence offered in support of those positions. The effectiveness of the treatments was then examined following training without such scaffolds.

AI arguments result in an external record 

I and others have been exploring the experience of arguing with an AI opponent. One insight I had while exploring this activity was that it results in an external product that can be examined much in the way Kuhn’s chat transcripts could be examined. Classroom applications seem straightforward. For example, an educator could provide the same prompt to all of the students in a class and ask the students to submit the resulting transcripts after an allotted amount of time. Students could be asked to comment on their experiences, and selected “arguments” could be displayed for consideration by the group. A more direct adaptation would use Kuhn’s pairs approach, asking that pairs agree on each chat entry before it is submitted. An interesting property of AI large language models is that the experience differs across submissions of the same prompt, whether by different individuals or by the same individual submitting the prompt a second time. A rough sketch of how such a debate session might be scripted and saved appears below.
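For readers comfortable with a little scripting, here is a minimal sketch of how such a session could be automated so that each student (or pair) ends up with a transcript file to submit. It assumes the OpenAI Python client and an API key in the environment; the model name, the debate topic, and the fixed number of turns are my placeholders rather than features of Kuhn’s procedure or of any particular classroom tool.

```python
# Minimal sketch: run a short debate with an AI opponent and save the
# transcript so it can be examined later (assumptions noted above).
from openai import OpenAI

client = OpenAI()

DEBATE_PROMPT = (
    "You are my debate opponent. The topic is whether reading from paper "
    "is more effective than reading from a screen. Take the side opposite "
    "whatever position I state, ask for my evidence, and offer your own."
)

messages = [{"role": "system", "content": DEBATE_PROMPT}]
transcript = []

for turn in range(5):  # a fixed number of exchanges for the allotted time
    student_move = input("Your argument: ")
    messages.append({"role": "user", "content": student_move})

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=messages,
    ).choices[0].message.content

    messages.append({"role": "assistant", "content": reply})
    transcript.append(f"STUDENT: {student_move}\nAI: {reply}\n\n")

# The saved file is the external record students could submit for discussion.
with open("debate_transcript.txt", "w") as f:
    f.writelines(transcript)
```

The same record could, of course, be produced simply by copying from a chat interface; the point is that a transcript exists to be reflected on and critiqued.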

In a previous post, I described what an AI argument (debate) looks like and provided an example of a prompt that would initiate the argument and offer an evaluation. I have included the example I used in that post below. In this example, I am debating the AI service regarding the effectiveness of reading from paper versus screen, as I thought readers are likely familiar with this controversy.

Summary

Critical thinking, the processes of science, and effective discussion of controversial topics depend on the skills of argumentation. Without development, these skills remain self-focused, lacking the careful identification and evaluation of opposing ideas. Such limitations can be addressed through instructional strategies encouraging reflection, and the physical transcript resulting from an argument with an AI-based opponent provides one such opportunity for reflection.

References:

Iordanou, K. (2013). Developing Face-to-Face Argumentation Skills: Does Arguing on the Computer Help? Journal of Cognition and Development, 14(2), 292–320.

Kuhn, D., Goh, W., Iordanou, K., & Shaenfield, D. (2008). Arguing on the Computer: A Microgenetic Study of Developing Argument Skills in a Computer-Supported Environment. Child Development, 79(5), 1310–1328.

Mayweg-Paus, E., Macagno, F., & Kuhn, D. (2016). Developing Argumentation Strategies in Electronic Dialogs: Is Modeling Effective? Discourse Processes, 53(4), 280–297. https://doi.org/10.1080/0163853X.2015.1040323


Does flipping the classroom improve learning?

The instructional strategy of “flipping the classroom” is one of those recommendations that seems on first consideration to make a lot of sense. The core idea hinges on the truth that classroom time with students is limited and efficient use must be made of this time. Instead of taking up a substantial amount of this time with teacher presentations, why not move the exposure to content outside of class time and use class time for more active tasks such as helping students who have problems and allowing students to engage in active tasks with other students? With easy access to tools for recording presentations and sharing recordings online, why not simply have educators share presentations with students and have students review this material before class? So, presentations were flipped from class time to settings that might have been more frequently used for homework.

This all seemed very rational. I cannot remember where I first encountered the idea, but I did purchase Flip Your Classroom (Bergmann and Sams, 2012), written by the high school teachers who I believe created the concept. While I did use my blog and textbook to promote this approach, I must admit I always wondered. I wrote a blog post in 2012 commenting that flipping the classroom sounded very similar to my large lecture experience of presenting to hundreds of students and expecting that these students would have read the textbook before class. Again, the logic of following up an initial exposure with an anecdote-rich and expanded focus on key concepts seemed sound. However, I knew this was not the way many students used their textbooks, and some probably did not even make the purchase, but I was controlling what I could control.

There have been hundreds of studies evaluating the flipping strategy and many meta-analyses of these studies. These meta-analyses tend to conclude that asking students to watch video lectures before coming to class is generally beneficial. I think many have slightly modified the suggested in-class component, expanding the notion of greater teacher-student interaction to include a focus on active learning. Kapur et al. (2022), authors of the meta-analysis I will focus on eventually, list the following experiences as examples of active learning – problem-solving, class discussions, dialog and debates, student presentations, collaboration, labs, games, and interactive and simulation-based learning activities.

The institution where I taught had a group very much interested in active learning and several special active learning “labs” were created to focus on these techniques. The labs contained tables instead of rows of chairs, whiteboards, and other adaptations. To teach a large class in this setting you had to submit a description of the active techniques you intended to implement. The largest classes (200+) I taught could not be accommodated in these rooms and I am not certain if I would have ever submitted a proposal anyway. 

Kapur et al. (2022)

Kapur and colleagues found reason to add another meta-analysis to those already completed. While their integrated analysis of the meta-analytic papers concluded that flipped classrooms have an advantage, Kapur and colleagues were puzzled by the great variability present among the studies. Some studies demonstrated a great advantage in student achievement for the flipped approach, and some found that traditional instruction was superior. It did not seem reasonable that a basic underlying advantage would be associated with this much variability, and the researchers proposed that a focus on the average effect size without consideration of the source or sources of this variability made little sense. They conducted their own meta-analysis and coded each study according to a variety of methodological and situational variables.

The most surprising finding from this approach was that the inclusion of active learning components was relatively inconsequential. Remember that the use of such strategies in the face-to-face setting was emphasized in many applications. Surprisingly, segments of lecture within the face-to-face setting were a better predictor of an achievement advantage. Although this breaks from the general understanding of how flipped classrooms are expected to work, educators seemed to use these in-class presentations to review or supplement independent student content consumption, and this provided an achievement bump.

The active learning component found to make a difference involved a problem-based strategy, particularly when the entire process began with a problem-based experience. This finding reminds me of the problem-based learning research conducted by Deanna Kuhn, who also proposed that the problem-based experience start the learning sequence. Kapur used the phrase productive failure to describe the way struggling with a problem before encountering relevant background information was helpful. Kuhn emphasized a similar process without the catchy label and proposed that the advantage was more a matter of activating relevant knowledge and guiding the interpretation of the content presentation that followed.

Regarding the general perspective on the flipped model identified by Kapur and colleagues, their findings were less an indictment of the concept than a demonstration of the lack of fidelity in implementations to the proposed advantage of using face-to-face time to interact with and adjust to student needs. Increasing responsiveness to the needs of individual students would seem beneficial and may be ignored in favor of activities that are less impactful.

References:

Kapur, M., Hattie, J., Grossman, I., & Sinha, T. (2022). Fail, flip, fix, and feed – Rethinking flipped learning: A review of meta-analyses and a subsequent meta-analysis. Frontiers in Education, 7, 956416.

Pease, M. A., & Kuhn, D. (2011). Experimental analysis of the effective components of problem-based learning. Science Education, 95(1), 57–86.

Wirkala, C., & Kuhn, D. (2011). Problem-Based Learning in K–12 Education: Is it Effective and How Does it Achieve its Effects? American Educational Research Journal, 48, 1157–1186.


Is AI overhyped? Maybe Apple has the right idea.

In my world, talk of AI is everywhere. I doubt most have a different impression, because nearly any news program has a story every other day or so commenting on AI capabilities, dangers, and the wealth and power being accumulated by developers. We have all experienced the history of the large language models, beginning with ChatGPT in November 2022.

I tried to find some specifics about the popularity of AI, and this proved challenging. Multiple companies quickly became involved, and you can use free versions of AI programs with just a browser, making an accurate count of “users” difficult. We do know that a million users signed up for ChatGPT within three months.

So, where are we at a year and a half later? Again, you and I may use an AI large language model on a daily or at least weekly basis, but how much use is actually going on “out there”?

Studies have started to appear attempting to determine the frequency of actual use. Frequent use can be very different from “yeah, I tried that” use. My interpretation is that folks in many countries have heard of AI and quite a few have given at least one service a try, but most now appear puzzled by what should come next. One of the studies with the broadest approach surveyed respondents in six countries, including Argentina, Denmark, France, Japan, and the US. Among those surveyed, awareness was high, but frequent actual use was low. Daily users ranged from 7% in the U.S. to 1% in Japan. Among 18-24 year olds, 56% had tried an AI service, compared with 16% of those over 55.

My personal interest concerns AI in schools, so I tried to locate studies that attempted to establish typical patterns of use by secondary students. Here is a 2024 study from Common Sense Media on this topic, available to all online. A very short summary concluded that about half of 14-22-year-olds have used an AI service, but only 4% report being daily users. Beyond these basic statistics, I found it startling that minority youth (Black and Latinx respondents) reported a higher frequency of use, with 20% and 10% claiming to be weekly users. I cross-checked this result several times to make certain I understood it correctly. When asked to categorize their use, young people reported searching for information, generating ideas, and school work, in that order. Another large category of use was generating pictures. The authors reported some concern when finding that searching for information was the most frequent category of use.

Participants were asked about concerns that limited their use of AI, and potential accusations of cheating ranked high among these young people.

I admit I need to review this study more carefully because it is not clear to me if the participants were including any classroom use in contrast to what I would call personal use. 

The “what can I do with this” question

Mollick and the 10-hour investment. I have read several efforts by Ethan Mollick (NYTimes, Kindle book) and find his perspective useful. He claims using AI is different from learning other technology applications in that there are not exact instructions you can follow to find productive uses. Instead, he proposes that you invest 10 hours and try the tool you select to accomplish various tasks that you face daily. If you write a lot of emails, chat with the AI tool about what you want to say and see what it generates. Request modifications to improve what is generated to suit your needs. Ask it to create an image you might have a way to use. Ask it to generate ideas for a task you want to accomplish. Some may tell you that AI is not a person and this is obviously the case, but forget this for a while and treat the AI service like an intern working with you. Converse in a natural way and give a clear description of what your tasks require. Ask the AI service to take on a persona and then explain your task. If you are trying to create something for a classroom situation, ask the service to act as an experienced teacher of XXX preparing for a lesson on YYY. Expect problems, but if you involve the tool in areas you understand, you should be able to identify what is incorrect and request improvements.
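To make the persona suggestion concrete for those who reach an AI model through the ChatGPT API rather than a chat interface, here is a minimal sketch of the same framing expressed as a system message. The lesson topic simply stands in for the XXX and YYY placeholders above, and the model name is an assumption; this is my illustration, not something Mollick prescribes as code.

```python
# Minimal sketch: the "take on a persona" advice expressed as an API system
# message (hypothetical lesson topic, assumed model name).
from openai import OpenAI

client = OpenAI()

persona = (
    "Act as an experienced middle school science teacher preparing a lesson "
    "on photosynthesis. Suggest an outline, two in-class activities, and a "
    "short formative quiz."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Please draft the lesson plan."},
    ],
)

print(response.choices[0].message.content)
```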

I watched the recent Apple announcement regarding the company’s soon-to-be-released AI capabilities. Thinking about Apple’s approach, I could not help proposing that experiences with Apple products in the ways Apple plans could be a great gateway to finding personal, practical applications of AI (Apple wants you to think of their approach as Apple Intelligence). Apple intends to roll out a two-tiered model – AI capabilities available in a self-contained way on Apple devices and AI capabilities available off device. The on-device AI capabilities are designed to accomplish common tasks. Think of them as similar to what Mollick proposes – ways to accomplish daily tasks (e.g., summarization, image creation, text evaluation and improvement, finding something I know I read recently). These capabilities are available within most Apple products and also within other services. I could not help wondering how Grammarly will survive with AI tools available to Apple users who own recent Apple equipment.

Obviously, I have yet to try the new Apple Intelligence tools, and I doubt I will close out my AI subscriptions, but I do think Apple’s tools, as a transition, will increase day-to-day usage.


Can AI be trusted for election information?

I happened across this news story from NBC concerning the accuracy of election information. The story reported data from a research organization involving the submission of requests to multiple AI services and then having experts evaluate the quality of the responses. I also then read the description provided by the research organization and located the data used by this organization (the questions and methodology). 

The results showed that a significant portion of the AI models’ answers were inaccurate, misleading, and potentially harmful. The experts found that the AI models often provided information that could discourage voter participation, misinterpret the actions of election workers, or mislead people about politicized aspects of the voting process. The focus in the research was on general information and did not address concerns with misinformation from candidates.

I have been exploring how I might address this same issue and perhaps offer an example educators might try in their classrooms. Educators exploring AI topics over the summer may also find my approach something they can try. AI issues seem important in most classrooms.

As I thought about my own explorations and this one specifically, a significant challenge is having confidence in the evaluations I make about the quality of AI responses. For earlier posts on topics such as tutoring, I have had the AI service engage with me using content from a textbook I have written. That approach made sense for evaluating AI as a tutor, but it would not work for the topic of explaining political procedures. For this evaluation, I decided to focus on issues in my state (Minnesota) that were recently established and would be applied in the 2024 election.

The topic of absentee ballots and early voting has been contentious. Minnesota has a liberal policy allowing anyone to secure a mail ballot without answering questions about their reasons, and a recent change allows voters to request that such a ballot be sent by default in future elections without repeated requests. The second policy just went into effect in June, and I thought it would represent a good test of an AI system: to see whether AI responses are based on general information about elections, mixing the situation in some states with the situation in others, or are specific to individual states and recent changes in election laws.

Here is the prompt I used:

I know I will not be in my home state of Minnesota during future Novembers, but I will be in Hawaii. Can I ask for an absentee ballot to be automatically sent to me before each election?

I used this prompt with ChatGPT (GPT-4) and Claude and found all responses to be appropriate (see below). One interesting observation when you chat with an AI tool using the same prompt is that each response is unique because it is constructed anew each time the prompt is submitted.
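For those curious about the mechanics, here is a minimal sketch of how this variability can be demonstrated: submit the identical prompt twice and compare the outputs. The services construct each response by sampling rather than retrieving a fixed answer, which is why repeated submissions differ. The OpenAI Python client, model name, and temperature value are my assumptions.

```python
# Minimal sketch: the same prompt submitted twice usually yields different
# responses because the model samples its output (assumptions noted above).
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "I know I will not be in my home state of Minnesota during future "
    "Novembers, but I will be in Hawaii. Can I ask for an absentee ballot "
    "to be automatically sent to me before each election?"
)

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",   # assumed model name
        temperature=1.0,  # default-like sampling; higher values vary more
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- Response {attempt + 1} ---")
    print(response.choices[0].message.content)
```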

I decided to try one more request which I thought would be even more basic. As I already noted, Minnesota does not require a citizen to provide an explanation when asking for a mail-in ballot. Some states do, so I asked about this requirement. 

Prompt: Do you need an explanation for why you want an absentee ballot in Minnesota?

As you can see in the following two responses to this same prompt, I received contradictory answers. This would seem to be the type of misinformation that the AI Democracy Projects was reporting.

Here is a related observation that seems relevant. If you use Google search and have the AI Labs feature turned on, you have likely encountered an AI-generated response to your search before you see the traditional list of links related to your request. I know that efforts are being made to address misinformation with regard to certain topics, and here is an example in response to such concerns. If you use the prompt I have listed here, you should receive only a list of links even though Google sends you a summary for other prompts (note that this is different from submitting the prompt directly to ChatGPT or Claude). For a comparison, try this nonpolitical prompt and you should see the difference: “Are there disadvantages from reading from a tablet?” With questions related to election information, no AI summary should appear, and you should see only links associated with your prompt.

Summary

AI can generate misinformation, which becomes critical when voters request information related to election procedures. This example demonstrates the problem and suggests a way others can explore it.


Prioritizing AI Tools

The issue I have with streaming television services is the same as the issue I have with services that support my personal knowledge management – many have a feature or two that I find helpful, but when should I stop paying to add another feature I might use? Exploring the pro versions of AI tools so I can write based on experience is one thing, but what is a reasonable long-term commitment to multiple subscriptions in my circumstances?

My present commitments are as follows:

  • ChatGPT – $20 a month
  • Perplexity – $20 a month
  • SciSpace – $12 a month
  • Smart Connections – $3-5 a month for ChatGPT API

Those who follow me on a regular basis probably have figured out my circumstances. I am a retired academic who wants to continue writing for what can most accurately be described as a hobby. There are ads on my blogs and I post to Medium, but any revenue I receive is more than offset by my server costs and the Medium subscription fee. So, let’s just call it a hobby.

The type of writing I do varies. Some of my blog posts are focused on a wide variety of topics mostly based on personal opinions buttressed by a few links. My more serious posts are intended for practicing educators and are often based on my review of the research literature. Offering citations that back my analyses is important to me even if readers seldom follow up by reading the cited literature themselves. I want readers to know my comments can be substantiated.

I don’t make use of AI in my writing. The exception would be that I use Smart Connections to summarize the research notes I have accumulated in Obsidian, and I sometimes include these summaries (a rough sketch of what this kind of API call involves appears below). I rely on two of these AI tools (SciSpace and Perplexity) to find research articles relevant to topics I write about. With the proliferation of so many specialized journals, this has become a challenge for any researcher. There is an ever-expanding battery of tools one can use to address this challenge, and this post is not intended to offer a general review of this tech space. What I offer here is an analysis of a smaller set of services that I hope identifies issues others may not have considered.
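For those wondering how a per-use API subscription differs from the flat monthly services listed above, here is a minimal sketch of the kind of call a plugin such as Smart Connections makes on my behalf when it summarizes a note. This is my illustration under stated assumptions, not the plugin’s actual code; the model name and file name are placeholders. Because the API is billed by the amount of text sent and received, light use of this sort is what keeps the cost in the $3-5 a month range.

```python
# Minimal sketch: send one exported Obsidian note to the ChatGPT API for a
# short summary (hypothetical file name, assumed model name).
from pathlib import Path
from openai import OpenAI

client = OpenAI()

note_text = Path("research_note.md").read_text()  # an exported Obsidian note

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Summarize this research note in three or four sentences, "
                    "keeping any citations that appear in the text."},
        {"role": "user", "content": note_text},
    ],
)

print(response.choices[0].message.content)
```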

Here are some issues that add to the complexity of deciding on the relative value of AI tools. There are often free and pro versions of these tools, and the differences vary. Sometimes you have access to more powerful or more recent versions of the AI. Sometimes the pro version is the same as the free version, but without restrictions on the frequency of use. Occasionally other features such as exporting options or online storage of past activities become available in the pro version. Some differences deal with convenience, and there are workarounds (e.g., copying from the screen with copy and paste vs. exporting).

Services differ in the diversity of tools included, and this can be important when selecting several services from a collection in comparison to committing to a single service. Do you want to generate images to accompany content you might write based on your background work? Do you want to use AI to write for you, or perhaps to suggest a structure and topics for something you might write yourself?

There can also be variability in how well a service does a specific job. For example, I am interested in a thorough investigation of the research literature. What insights related to the individual articles identified are available that can be helpful in determining which articles I should spend time reading? 

Perplexity vs. SciSpace

I have decided that Perplexity is the most expendable of my present subscriptions. What follows is the logic for this personal decision and an explanation of how it fits my circumstances.

I used a common prompt for this sample comparison:

What does the research conclude related to the value of studying lecture notes taken by hand versus notes taken on a laptop or tablet?

Perplexity

I can see how Perplexity provides a great service for many individuals with broad interests. I was originally impressed when I discovered that Perplexity allowed me to focus its search process on academic papers. When I first generated prompts, I received mostly sources from Internet-based authors on the topics that interested me, and as I have indicated, I was more interested in the research published in journals.

I mentioned that there is a certain redundancy of functions across my subscriptions, and the option of generating summaries or structuring approaches for my own writing using different LLMs was enticing.

The characteristic I value in both Perplexity and SciSpace is that summary statements are linked to sources. A sample of the output from Perplexity appears below (the red box encloses links to sources).

When the content is exported, the sources appear as shown below. 

Citations:

[1] https://www.semanticscholar.org/paper/d6f6a415f0ff6e6f315c512deb211c0eaad66c56

[2] https://www.semanticscholar.org/paper/ab6406121b10093122c1266a04f24e6d07b64048

[3] https://www.semanticscholar.org/paper/0e4c6a96121dccce0d891fa561fcc5ed99a09b23

[4] https://www.semanticscholar.org/paper/3fb216940828e27abf993e090685ad82adb5cfc5

[5] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10267295/

[6] https://www.semanticscholar.org/paper/dac5f3d19f0a57f93758b5b4d4b972cfec51383a

[7] https://pubmed.ncbi.nlm.nih.gov/34674607/

[8] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8941936/

[9] https://www.semanticscholar.org/paper/43429888d73ba44b197baaa62d2da56eb837eabd

[10] https://www.semanticscholar.org/paper/cb14d684e9619a1c769c72a9f915d42ffd019281

[11] https://www.semanticscholar.org/paper/17af71ddcdd91c7bafe484e09fb31dc54623da22

[12] https://www.semanticscholar.org/paper/8386049efedfa2c4657e2affcad28c89b3466f0b

[13] https://www.semanticscholar.org/paper/1897d8c7d013a4b4f715508135e2b8dbce0efdd0

[14] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9247713/

[15] https://www.semanticscholar.org/paper/d060c30f9e2323986da2e325d6a295e9e93955aa

[16] https://www.semanticscholar.org/paper/b41dc7282dab0a12ff183a873cfecc7d8712b9db

[17] https://www.semanticscholar.org/paper/0fe98b8d759f548b241f85e35379723a6d4f63bc

[18] https://www.semanticscholar.org/paper/9ca88f22c23b4fc4dc70c506e40c92c5f72d35e0

[19] https://www.semanticscholar.org/paper/47ab9ec90155c6d52eb54a1bb07152d5a6b81f0a

[20] https://pubmed.ncbi.nlm.nih.gov/34390366/

I went through these sources, and the results are what I found disappointing. I have read most of the research studies on this topic and had specific sources I expected to see. What was produced came from what I would consider low-value outlets when I know better content is available. These are not top-tier educational research journals.

  • Schoen, I. (2012). Effects of Method and Context of Note-taking on Memory: Handwriting versus Typing in Lecture and Textbook-Reading Contexts. [Senior thesis]
  • Emory J, Teal T, Holloway G. Electronic note taking technology and academic performance in nursing students. Contemp Nurse. 2021 Apr-Jun;57(3-4):235-244. doi: 10.1080/10376178.2021.1997148. Epub 2021 Nov 8. PMID: 34674607.
  • Wiechmann W, Edwards R, Low C, Wray A, Boysen-Osborn M, Toohey S. No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study. J Educ Eval Health Prof. 2022;19:8. doi: 10.3352/jeehp.2022.19.8. Epub 2022 Apr 26. PMID: 35468666; PMCID: PMC9247713.
  • Crumb, R.M., Hildebrandt, R., & Sutton, T.M. (2020). The Value of Handwritten Notes: A Failure to Find State-Dependent Effects When Using a Laptop to Take Notes and Complete a Quiz. Teaching of Psychology, 49, 7 – 13.
  • Mitchell, A., & Zheng, L. (2019). Examining Longhand vs. Laptop Debate: A Replication Study. AIS Trans. Replication Res., 5, 9.

SciSpace

SciSpace was developed with more of a focus on the research literature.

The output from the same prompt generated a summary and a list of 90 citations (see below). Each citation appears with characteristics selected from a list available to the user. These supplemental comments are useful in determining which citations I may wish to read in full. Various filters can be applied to the original collection to help narrow the output. Also included are ways to designate the recency of the publications to be displayed and to limit the output to journal articles.

The journals that SciSpace accesses can be reviewed and I was pleased to see that what I consider the core educational research journals are covered.

Here is the finding I consider most important. SciSpace does provide many citations from open-access journals. These are great, but I was most interested in whether the main sources I knew should be there would be generated. These citations were included.

Luo, L., Kiewra, K. A., Flanigan, A. E., & Peteranetz, M. S. (2018). Laptop versus longhand note taking: Effects on lecture notes and achievement. Instructional Science, 46(6), 947–971. doi: 10.1007/S11251-018-9458-0

Mueller, P. A., & Oppenheimer, D. M. (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science, 25(6), 1159–1168. doi: 10.1177/0956797614524581

Bui, D. C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299–309. doi: 10.1037/A0030367

Summary

This post summarizes my thoughts on which of multiple existing AI-enabled services I should retain to meet my personal search and writing interests. I found SciSpace superior to Perplexity when it came to identifying prompt-relevant journal articles. Again, I have attempted to be specific about what I use AI search to accomplish and your interests may differ. 
