Does flipping the classroom improve learning?

The instructional strategy of “flipping the classroom” is one of those recommendations that seems on first consideration to make a lot of sense. The core idea hinges on the fact that classroom time with students is limited, so efficient use must be made of this time. Instead of taking up a substantial amount of this time with teacher presentations, why not move the exposure to content outside of class time and use class time for more active tasks, such as helping students who have problems and allowing students to engage in active tasks with other students? With easy access to tools for recording presentations and sharing recordings online, why not simply have educators share presentations with students and have students review this material before class? Presentations are thus flipped from class time into the time traditionally used for homework.

This all seemed very rational. I cannot remember where I first encountered the idea, but I did purchase Flip Your Classroom (Bergmann & Sams, 2012), written by the high school teachers who I believe created the concept. While I did use my blog and textbook to promote this approach, I must have always had my doubts. I wrote a blog post in 2012 commenting that flipping the classroom sounded very similar to my large lecture experience of presenting to hundreds of students and expecting that these students would have read the textbook before class. Again, the logic of following up an initial exposure with an anecdote-rich and expanded focus on key concepts seemed sound. However, I knew this was not the way many students used their textbooks, and some probably did not even make the purchase, but I was controlling what I could control.

There have been hundreds of studies evaluating the flipping strategy and many meta-analyses of these studies. These meta-analyses tend to conclude that asking students to watch video lectures before coming to class is generally beneficial. Many researchers have slightly modified the suggested in-class component, expanding the notion of greater teacher-student interaction to include a focus on active learning. Kapur et al. (2022), authors of the meta-analysis I will focus on eventually, list the following experiences as examples of active learning – problem-solving, class discussions, dialog and debates, student presentations, collaboration, labs, games, and interactive and simulation-based learning activities.

The institution where I taught had a group very much interested in active learning and several special active learning “labs” were created to focus on these techniques. The labs contained tables instead of rows of chairs, whiteboards, and other adaptations. To teach a large class in this setting you had to submit a description of the active techniques you intended to implement. The largest classes (200+) I taught could not be accommodated in these rooms and I am not certain if I would have ever submitted a proposal anyway. 

Kapur et al. (2022)

Kapur and colleagues found reason to add another meta-analysis to those already completed. While their integrated analysis of the meta-analytic papers concluded that flipped classrooms have an advantage, Kapur and colleagues were puzzled by the great variability present among the studies. Some studies demonstrated a great advantage in student achievement for the flipped approach and some found that traditional instruction was superior. It did not seem reasonable that a basic underlying advantage would be associated with this much variability, and the researchers proposed that a focus on the average effect size without consideration of the source or sources of this variability made little sense. They conducted their own meta-analysis and coded each study according to a variety of methodological and situational variables.

The most surprising finding from this approach was that the inclusion of active learning components was relatively inconsequential. Remember that the use of such strategies in the face-to-face setting was emphasized in many applications. Surprisingly, segments of lecture within the face-to-face setting were a better predictor of an achievement advantage. Despite the break from the general understanding of how flipped classrooms are expected to work, educators seemed to use these in-class presentations to review or supplement independent student content consumption, and this provided an achievement bump.

The one active learning component found to make a difference was a problem-based strategy, particularly when the entire process began with a problem-based experience. This finding reminds me of the problem-based learning research conducted by Deanna Kuhn, who also proposed that a problem-based experience start the learning sequence. Kapur used the phrase productive failure to describe the way struggling with a problem before encountering relevant background information was helpful. Kuhn emphasized a similar process without the catchy label and proposed the advantage was more a matter of activating relevant knowledge and guiding the interpretation of the presentation of content that followed.

Regarding the general perspective on the flipped model identified by Kapur and colleagues, their findings were less an indictment of the concept than a demonstration of how rarely implementations were faithful to the proposed advantage of using face-to-face time to interact and adjust to student needs. Increasing responsiveness to the needs of individual students would seem beneficial and may be ignored in favor of activities that are less impactful.

References:

Kapur, M., Hattie, J., Grossman, I., & Sinha, T. (2022). Fail, flip, fix, and feed – Rethinking flipped learning: A review of meta-analyses and a subsequent meta-analysis. Frontiers in Education, 7, 956416.

Pease, M. A., & Kuhn, D. (2011). Experimental analysis of the effective components of problem-based learning. Science Education, 95(1), 57–86.

Wirkala, C., & Kuhn, D. (2011). Problem-based learning in K–12 education: Is it effective and how does it achieve its effects? American Educational Research Journal, 48, 1157–1186.


Is AI overhyped? Maybe Apple has the right idea.

In my world, talk of AI is everywhere. I doubt most people have a different impression, because nearly any news program runs a story every other day or so commenting on AI capabilities, dangers, and the wealth and power being accumulated by developers. We have all experienced the short history of the large language models, beginning with ChatGPT in November 2022.

I tried to find some specifics about the popularity of AI, and this is challenging. Multiple companies quickly became involved, and free versions of AI programs can be used with just a browser, making an accurate count of “users” difficult. We do know that a million users signed up for ChatGPT within days of its release.

So, where are we at a year and a half later? Again, you and I may use an AI large language model on a daily or at least weekly basis, but how much use is actually going on “out there”?

Studies have started to appear attempting to determine how often people actually use these tools. Frequent use can be very different from “yeah, I tried that” use. My interpretation is that folks in many countries have heard of AI and quite a few have given at least one service a try, but most now appear puzzled by what should come next. One of these studies, with the broadest approach, surveyed respondents in six countries – Argentina, Denmark, France, Japan, the UK, and the US. Among those surveyed, awareness was high, but frequent actual use was low. Daily users ranged from 7% in the U.S. to 1% in Japan. 56% of 18-24 year olds had tried an AI service, compared with 16% of those over 55.

My personal interest concerns AI in schools, so I tried to locate studies that attempted to establish typical patterns of use by secondary students. Here is a 2024 study from Common Sense Media on this topic, available to all online. A very short summary concluded that about half of 14-22-year-olds have used an AI service, but only 4% report being daily users. Beyond these basic statistics, I found it startling that minority youth (Black and Latinx respondents) reported a higher frequency of use – roughly 20% versus 10% claiming to be weekly users. I cross-checked this result several times to make certain I understood it correctly. When asked to categorize their use, young people reported searching for information, generating ideas, and school work, in that order. Another large category of use was generating pictures. The authors reported some concern when finding that searching for information was the most frequent category of use.

Participants were asked about concerns that limited their use of AI, and the potential to be accused of cheating ranked high among these young people.

I admit I need to review this study more carefully because it is not clear to me if the participants were including any classroom use in contrast to what I would call personal use. 

The “what can I do with this” question

Mollick and the 10-hour investment. I have read several efforts by Ethan Mollick (NYTimes, Kindle book) and find his perspective useful. He claims using AI is different from learning other technology applications in that there are no exact instructions you can follow to find productive uses. Instead, he proposes that you invest 10 hours and try the tool you select on various tasks that you face daily. If you write a lot of emails, chat with the AI tool about what you want to say and see what it generates. Request modifications to improve what is generated to suit your needs. Ask it to create an image you might have a way to use. Ask it to generate ideas for a task you want to accomplish. Some may tell you that AI is not a person, and this is obviously the case, but forget this for a while and treat the AI service like an intern working with you. Converse in a natural way and give a clear description of what your tasks require. Ask the AI service to take on a persona and then explain your task. If you are trying to create something for a classroom situation, ask the service to act as an experienced teacher of XXX preparing for a lesson on YYY. Expect problems, but if you involve the tool in areas you understand, you should be able to identify what is incorrect and request improvements.
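For readers who would rather script this kind of exploration than type into a chat window, here is a minimal sketch using OpenAI's Python library. The persona, lesson topic, and model name are my own illustrative choices, not anything Mollick prescribes, and any current chat model could be substituted.

```python
# A minimal sketch of persona-style prompting in the spirit of Mollick's
# advice. Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; substitute any available chat model
    messages=[
        # Assign a persona, as you might brief an intern.
        {"role": "system",
         "content": ("Act as an experienced middle school science teacher "
                     "preparing a lesson on the water cycle.")},
        # Describe the task clearly and naturally.
        {"role": "user",
         "content": ("Draft a five-minute lesson introduction that opens "
                     "with a question students can answer from everyday "
                     "experience.")},
    ],
)
print(response.choices[0].message.content)
```

Treat whatever comes back as a first draft and keep requesting modifications, just as the 10-hour approach suggests.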

I watched the recent Apple announcement regarding the company’s soon-to-be-released AI capabilities. Thinking about Apple’s approach, I could not help proposing that experiences with Apple products in the ways Apple plans could be a great gateway to finding personal practical applications of AI (Apple wants you to think of their approach as Apple Intelligence). Apple intends to roll out a two-tiered model – AI capabilities available in a self-contained way on Apple devices and AI capabilities available off device. The on-device AI capabilities are designed to accomplish common tasks. Think of them as similar to what Mollick proposes – ways to accomplish daily tasks (e.g., summarization, image creation, text evaluation and improvement, finding something I know I read recently). AI capabilities are available within most Apple products and also within other services. I could not help wondering how Grammarly will survive with AI tools available to users who own recent Apple equipment.

Obviously, I have yet to try the new Apple Intelligence tools and I doubt I will close out my AI subscriptions, but I do think Apple tools as a transition will increase day-to-day usage. 


Can AI be trusted for election information?

I happened across this news story from NBC concerning the accuracy of election information. The story reported data from a research organization that submitted requests to multiple AI services and then had experts evaluate the quality of the responses. I then read the description provided by the research organization and located the data this organization used (the questions and methodology).

The results showed that a significant portion of the AI models’ answers were inaccurate, misleading, and potentially harmful. The experts found that the AI models often provided information that could discourage voter participation, misinterpret the actions of election workers, or mislead people about politicized aspects of the voting process. The focus in the research was on general information and did not address concerns with misinformation from candidates.

I have been exploring how I might address this same issue and perhaps offer an example educators might try in their classrooms. Educators exploring AI topics over the summer may also find my approach something they can try. AI issues seem important in most classrooms.

As I thought about my own explorations and this one specifically, a significant challenge is having confidence in the evaluations I make about the quality of AI responses. For earlier posts on topics such as tutoring, I have had the AI service engage with me using content from a textbook I have written. This approach made sense for evaluating AI as a tutor, but would not work with the topic of explaining political procedures. For this evaluation, I decided to focus on policies in my state (Minnesota) that were recently established and would be applied in the 2024 election.

The topic of absentee ballots and early voting has been contentious. Minnesota has a liberal policy allowing anyone to secure a mail ballot without answering questions about qualifying conditions, and a recent change allows voters to request that a mail ballot be sent by default before future elections without repeated requests. The second policy just went into effect in June, and I thought it would represent a good test of an AI system – a way to see whether AI responses are based on general information about elections, mixing the situation in some states with the situation in others, or are specific to individual states and recent changes in election laws.

Here is the prompt I used:

I know I will not be in my home state of Minnesota during future Novembers, but I will be in Hawaii. Can I ask for an absentee ballot to be automatically sent to me before each election?

I used this prompt with ChatGPT (4) and Claude and found all responses to be appropriate (see below). One interesting observation: each response is unique because it is constructed anew each time the prompt is submitted.

I decided to try one more request which I thought would be even more basic. As I already noted, Minnesota does not require a citizen to provide an explanation when asking for a mail-in ballot. Some states do, so I asked about this requirement. 

Prompt: Do you need an explanation for why you want an absentee ballot in Minnesota

As you can see in the following two responses to this same prompt, I received contradictory answers. This would seem to be the type of misinformation that the AI Democracy Project was reporting.

Here is a related observation that seems relevant. If you use Google search and have the AI Labs tool turned on, you have likely encountered an AI response to your search before you see the traditional list of links related to your request. I know that efforts are being made to address misinformation in regard to certain topics, and here is an example in response to such concerns. If you use the prompt I have listed here, you should receive only a list of links, even though Google sends you an AI summary for other prompts (note – this is different from submitting the prompt directly to ChatGPT or Claude). For a comparison, try this nonpolitical prompt and you should see a difference – “Are there disadvantages from reading from a tablet?” With questions related to election information, no AI summary should appear, and you should see only links associated with your prompt.

Summary

AI can generate misinformation, which can be critical when voters request information related to election procedures. This example demonstrates the problem and suggests a way others can explore it.


Prioritizing AI Tools

The issue I have with streaming television services is the same as the issue I have with services that support my personal knowledge management – many have a feature or two that I find helpful, but when should I stop paying to add another feature I might use? Exploring the pro version of AI tools so I can write based on experience is one thing, but what is a reasonable long-term commitment to multiple subscriptions in my circumstances?

My present commitments are as follows:

  • ChatGPT – $20 a month
  • Perplexity – $20 a month
  • Scispace – $12 a month
  • Smart Connections – $3-5 a month for ChatGPT API

Those who follow me on a regular basis probably have figured out my circumstances. I am a retired academic who wants to continue writing for what can most accurately be described as a hobby. There are ads on my blogs and I post to Medium, but any revenue I receive is more than offset by my server costs and the Medium subscription fee. So, let’s just call it a hobby.

The type of writing I do varies. Some of my blog posts are focused on a wide variety of topics mostly based on personal opinions buttressed by a few links. My more serious posts are intended for practicing educators and are often based on my review of the research literature. Offering citations that back my analyses is important to me even if readers seldom follow up by reading the cited literature themselves. I want readers to know my comments can be substantiated.

I don’t make use of AI in my writing. The exception would be that I use Smart Connections to summarize the research notes I have accumulated in Obsidian, and I sometimes include these summaries. I rely on two of these AI tools (SciSpace and Perplexity) to find research articles relevant to topics I write about. With the proliferation of so many specialized journals, this has become a challenge for any researcher. There is an ever-expanding battery of tools one can use to address this challenge, and this post is not intended to offer a general review of this tech space. What I offer here is an analysis of a smaller set of services that I hope identifies issues others may not have considered.

Here are some issues that add to the complexity of making a decision about the relative value of AI tools. There are often free and pro versions of these tools, and the differences vary. Sometimes you have access to more powerful/recent versions of the AI. Sometimes the pro version is the same as the free version, but without restrictions on the frequency of use. Occasionally other features, such as exporting options or online storage of past activities, become available in the pro version. Some differences deal with convenience, and there are workarounds (e.g., copying from the screen with copy and paste vs. exporting).

Services differ in the diversity of tools included, and this can be important when selecting several services from a collection in comparison to committing to one service. Do you want to generate images to accompany content you might write based on your background work? Do you want to use AI to write for you, or perhaps to suggest a structure and topics for something you might write yourself?

There can also be variability in how well a service does a specific job. For example, I am interested in a thorough investigation of the research literature. What insights related to the individual articles identified are available that can be helpful in determining which articles I should spend time reading? 

Perplexity vs. SciSpace

I have decided that Perplexity is the most expendable of my present subscriptions. What follows is the logic for this personal decision and an explanation of how it fits my circumstances.

I am using a common prompt for this sample comparison:

What does the research conclude related to the value of studying lecture notes taken by hand versus notes taken on a laptop or tablet?

Perplexity

I can see how Perplexity provides a great service for many individuals who have broad interests. I was originally impressed when I discovered that Perplexity allowed me to focus its search process on academic papers. When I first generated a prompt, I received mostly sources from Internet-based authors on the topics that were of interest to me, and as I have indicated, I was more interested in the research published in journals.

I mentioned that there is a certain redundancy of functions across my subscriptions, and the option of writing summaries or structuring approaches for my own writing using different LLMs was enticing.

The characteristic I value in both Perplexity and SciSpace is that summary statements are linked to sources. A sample of the output from Perplexity appears below (the red box encloses links to sources).

When the content is exported, the sources appear as shown below. 

Citations:

[1] https://www.semanticscholar.org/paper/d6f6a415f0ff6e6f315c512deb211c0eaad66c56

[2] https://www.semanticscholar.org/paper/ab6406121b10093122c1266a04f24e6d07b64048

[3] https://www.semanticscholar.org/paper/0e4c6a96121dccce0d891fa561fcc5ed99a09b23

[4] https://www.semanticscholar.org/paper/3fb216940828e27abf993e090685ad82adb5cfc5

[5] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10267295/

[6] https://www.semanticscholar.org/paper/dac5f3d19f0a57f93758b5b4d4b972cfec51383a

[7] https://pubmed.ncbi.nlm.nih.gov/34674607/

[8] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8941936/

[9] https://www.semanticscholar.org/paper/43429888d73ba44b197baaa62d2da56eb837eabd

[10] https://www.semanticscholar.org/paper/cb14d684e9619a1c769c72a9f915d42ffd019281

[11] https://www.semanticscholar.org/paper/17af71ddcdd91c7bafe484e09fb31dc54623da22

[12] https://www.semanticscholar.org/paper/8386049efedfa2c4657e2affcad28c89b3466f0b

[13] https://www.semanticscholar.org/paper/1897d8c7d013a4b4f715508135e2b8dbce0efdd0

[14] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9247713/

[15] https://www.semanticscholar.org/paper/d060c30f9e2323986da2e325d6a295e9e93955aa

[16] https://www.semanticscholar.org/paper/b41dc7282dab0a12ff183a873cfecc7d8712b9db

[17] https://www.semanticscholar.org/paper/0fe98b8d759f548b241f85e35379723a6d4f63bc

[18] https://www.semanticscholar.org/paper/9ca88f22c23b4fc4dc70c506e40c92c5f72d35e0

[19] https://www.semanticscholar.org/paper/47ab9ec90155c6d52eb54a1bb07152d5a6b81f0a

[20] https://pubmed.ncbi.nlm.nih.gov/34390366/

I went through these sources, and the results are what I found disappointing. I have read most of the research studies on this topic and had specific sources I expected to see. What was produced came from what I would consider low-value sources when I know better content is available. These are not top-tier educational research journals.

  • Schoen, I. (2012). Effects of Method and Context of Note-taking on Memory: Handwriting versus Typing in Lecture and Textbook-Reading Contexts. [Senior thesis]
  • Emory J, Teal T, Holloway G. Electronic note taking technology and academic performance in nursing students. Contemp Nurse. 2021 Apr-Jun;57(3-4):235-244. doi: 10.1080/10376178.2021.1997148. Epub 2021 Nov 8. PMID: 34674607.
  • Wiechmann W, Edwards R, Low C, Wray A, Boysen-Osborn M, Toohey S. No difference in factual or conceptual recall comprehension for tablet, laptop, and handwritten note-taking by medical students in the United States: a survey-based observational study. J Educ Eval Health Prof. 2022;19:8. doi: 10.3352/jeehp.2022.19.8. Epub 2022 Apr 26. PMID: 35468666; PMCID: PMC9247713.
  • Crumb, R.M., Hildebrandt, R., & Sutton, T.M. (2020). The Value of Handwritten Notes: A Failure to Find State-Dependent Effects When Using a Laptop to Take Notes and Complete a Quiz. Teaching of Psychology, 49, 7 – 13.
  • Mitchell, A., & Zheng, L. (2019). Examining Longhand vs. Laptop Debate: A Replication Study. AIS Trans. Replication Res., 5, 9.

SciSpace

SciSpace was developed with more of a focus on the research literature.

The output from the same prompt generated a summary and a list of 90 citations (see below). Each citation appears with characteristics chosen from a list available to the user. These supplemental comments are useful in determining which citations I may wish to read in full. Various filters can be applied to the original collection to help narrow the output. Also included are ways to designate the recency of the publications to be displayed and to limit the output to journal articles.

The journals that SciSpace accesses can be reviewed and I was pleased to see that what I consider the core educational research journals are covered.

Here is the finding I found most important. SciSpace does provide many citations from open-access journals. These are great, but I was most interested in what was generated from the main sources I knew should be there. These citations were included. 

Luo, L., Kiewra, K. A., Flanigan, A. E., & Peteranetz, M. S. (2018). Laptop versus longhand note taking: Effects on lecture notes and achievement. Instructional Science, 46(6), 947–971. doi: 10.1007/S11251-018-9458-0

Mueller, P. A., & Oppenheimer, D. M. (2014). The pen is mightier than the keyboard: Advantages of longhand over laptop note taking. Psychological Science, 25(6), 1159–1168. doi: 10.1177/0956797614524581

Bui, D. C., Myerson, J., & Hale, S. (2013). Note-taking with computers: Exploring alternative strategies for improved recall. Journal of Educational Psychology, 105(2), 299–309. doi: 10.1037/A0030367

Summary

This post summarizes my thoughts on which of multiple existing AI-enabled services I should retain to meet my personal search and writing interests. I found SciSpace superior to Perplexity when it came to identifying prompt-relevant journal articles. Again, I have attempted to be specific about what I use AI search to accomplish, and your interests may differ.


Writing to learn in collaboration with an AI tutor

I have been working my way through a couple of new and popular books that consider the importance and perils of AI and that contain significant commentary on AI in education. There is not a lot in these books that is based on the research literature I tend to find most influential, but the authors have sufficient experience and opportunities to offer some very credible insights. This is not a book report, but I want to credit a few ideas that encouraged my own exploration.

This time of the year, I often suggest some topics educators might explore over the summer while they have a little more time. With the attention AI has received in the past year and a half, I likely made a related recommendation last year at about this time. Reading these two books (citations at the end of this post) would be very useful if you spend time reading related to your profession. Perhaps you read in a different area. Hopefully, I can offer a few insights that will be sufficient to encourage your own exploration of AI tools.

Ethan Mollick’s book, Co-Intelligence, is different in that it focuses on applications and ways to think about AI capabilities. Mollick offers interesting ideas that sometimes run in opposition to traditional advice. For example, it is OK to interact with AI tools as if they were a person even though you know they are not. Asking questions and making requests as you would with another person is just a practical way to explore AI tools. Mollick also suggests that we stop looking for how-to techniques for AI. Instead, he suggests we explore. If you have the time, try to use AI whenever there seems some possible value and see what happens. In other words, once you get past the basics of how to use a given tool, explore. Value and issues will be different for all of us, so the only way to make decisions is to spend time. Again, for educators, the summer seems a great time to explore. Finally, understand that your present experiences will be with AI tools that are the least powerful they will ever be. If you find something interesting but flawed in some way, just wait until you see what comes next.

There were some other suggestions about prompts I found useful. Perhaps the most concrete example is what was described as chain-of-thought prompting. AI tools will try to provide what you ask for, but it may be helpful to provide the sequence of steps you want the tool to follow when a given process seems useful.
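To make the idea concrete, here is a hypothetical prompt of my own in that style, with the desired sequence spelled out: “First, list the major concepts in the attached chapter. Second, select the concept most likely to confuse a novice. Third, explain that concept using an everyday analogy. Finally, write three review questions based on your explanation.”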

Sal Khan, the creator of Khan Academy, offers thoughts on how AI will be helpful in education in his new book “Brave New Words”. Khanmigo, the adaptation of AI as a tutor within the context of the other opportunities for learners and educators provided by Khan and colleagues, received a good deal of attention. An interesting theme was how this AI tool was prepared to assist, but not to do the work for you (my interpretation).

One example, which Khan uses to start his book, I found particularly interesting, and I have attempted to use it as the basis for the implementation I describe in the comments that follow. It describes a collaborative writing experience in which the AI tool and a student were assigned personas of two individuals writing collaboratively. The two personas took turns introducing portions of a story, with the other writer finishing the section the first persona had initiated. Two collaborative writers, one controlled by AI and the other by a student.

My version

Several of my posts have considered AI as a tutor, and I have tried to demonstrate how existing AI tools can be used to implement various functions provided by a human tutor. This post makes some similar arguments. Here, I describe an effort to create something similar to what Khan described in his account of collaborative writing, but I am proposing an approach more focused on the strategy of writing to learn. The core idea of writing to learn is that the cognitive processes involved in explaining something by writing require beneficial cognitive activities that would be less likely to occur with more passive study strategies. So, I have been attempting to write a prompt that creates a version of the Khanmigo collaborative story writing example applied as a writing-to-learn task.

It was just announced that Khanmigo will be a free AI tool for teachers, subsidized by Microsoft. Perhaps it would have made sense to use one of the Khanmigo tools to implement what I have been working on. I will leave it to others to make such efforts. I wanted to try to create a similar approach using the AI services I have already been working with and paying for.

What follows was developed using the paid version of ChatGPT with the most recently available model – 4o. I think the following prompt illustrates the chain-of-thought approach (Mollick) and is similar in approach to the Khan collaborative writing example.

Prompt: Obviously, this is longer and more detailed than the way many would request action from an AI tool. This prompt was entered into ChatGPT accompanied by the textbook chapter as an uploaded file.

Create an activity in which two students work together to write a document. You will represent a graduate student studying educational technology and I will represent an undergraduate student taking a course preparing teachers to use technology in K12 classrooms. This interaction will be based on information from the attached document. The graduate student will first identify and list significant topics from this document. Then the grad student will identify a topic for the collaborative writing project from this list and suggest an outline. The grad student will then write the introduction to this topic. The graduate student will next write the first couple of sentences about the first item from the outline and then ask the undergraduate to complete writing comments to complete the comments about this section of the outline. Do not include examples or applications in the introductory sentences and allow the undergraduate to provide comments on these topics. Wait for the undergraduate to add their additional content before continuing. After the undergrad submits their additions, offer feedback that evaluates what the undergraduate student has added. What the undergraduate has said should be identified and then how well it summarizes what was included in the material to be read should be presented. Limit feedback to what the undergraduate writes for this section without including information the outline suggested should be included in other sections. The graduate student should then ask the undergraduate student if they want to continue. Do not continue until a response of yes is submitted.  If the undergraduate student responds “yes”, the graduate student should move to the next item from the outline and continue the same sequence of interactions until all items from the outline have been completed.

AI response

Significant Topics from the Document

1. Cognitive System and Learning

2. Memory Stores (Short-Term and Long-Term Memory)

3. Working Memory Constraints

4. Cognitive Processes and Mental Tools

5. Metacognition and Executive Control

6. Higher Order Thinking Skills

7. Meaningful Learning and Constructivism

8. Reception vs. Discovery Learning

9. Modeling Cognitive Behaviors

10. Authentic Activities

11. Project-Based Learning

12. Role of Technology in Learning

Selected Topic for Collaborative Writing Project

Higher Order Thinking Skills

Suggested Outline

1. **Introduction**

   – Definition of higher-order thinking skills

   – Importance in the 21st-century education

2. **Problem-Solving**

   – Description and components

   – Importance in education

3. **Critical Thinking**

   – Definition and key elements

   – Role in evaluating information

4. **Creative Thinking**

   – Explanation and examples

   – Relevance in innovative problem-solving

5. **Integration and Application**

   – How these skills interrelate

   – Examples of application in real-world scenarios

 Introduction

Higher-order thinking skills (HOTS) encompass a range of cognitive processes that are essential for success in the 21st-century educational landscape. These skills go beyond basic memorization or recall of facts, engaging students in complex tasks such as analysis, evaluation, synthesis, and creation of new ideas. HOTS are critical in preparing students to navigate the vast amounts of information available today and to solve novel problems effectively.

Problem-Solving

Problem-solving is a foundational aspect of higher-order thinking skills. It involves identifying a problem, generating possible solutions, evaluating these solutions, and implementing the most effective one. In education, problem-solving skills enable students to tackle complex challenges and develop strategies that can be applied across various domains. 

Your turn: Please add your comments about the importance of problem-solving in education and provide examples of how it can be incorporated into classroom activities.

Me – entered as a prompt:

The issue of whether or not problem-solving is domain specific has received a great deal of attention. Some content areas might be more directly focused on problem solving – e.g., coding, mathematics. If problem-solving is domain specific, then the problem-solving skills developed in one area would not transfer to other areas in a positive way. This would suggest teachers in all areas should offer opportunities for problem-solving challenges related to what is studied in these areas.

AI: Feedback

Your addition about the domain specificity of problem-solving is excellent. It addresses a critical aspect of the debate on how problem-solving skills are best developed and applied. Your point about certain content areas, like coding and mathematics, being more directly focused on problem-solving is particularly insightful. Highlighting the importance of teachers in all subject areas providing problem-solving opportunities helps underline the necessity for a holistic approach in education. This aligns well with the document’s emphasis on the importance of developing these skills across various domains.

Would you like to continue to the next section on Critical Thinking?

Me: entered as prompt

Yes

AI

Critical Thinking

Critical thinking is another crucial component of higher-order thinking skills. It involves the ability to analyze information objectively and make reasoned judgments. Critical thinking requires students to engage in reflective and independent thinking, assess the validity and relevance of information, and form well-supported conclusions. This skill is vital in today’s information-rich world, where individuals must navigate a vast array of sources and discern credible information from misinformation.

Your turn: Please add your comments about the key elements of critical thinking and its role in evaluating information, particularly in an educational setting.

ETC: 

This interaction continues according to the instructions in the prompt. ChatGPT has a method for sharing a chat with the public. Here is the link that should display the additional content AI and I generated (public access to chat).  

Related comments: Here are some observations related to my experience.

  1. This interaction is developed on the fly. Redo the entire process and the elements of the interaction may all be different. The topics, the outline, the prompts, and even the way the content is formatted on the screen will be different. The prompt is similar to a program in that it specifies a sequence of operations you want the AI to follow. However, unlike a coded program, you are asking for a sequence to be created rather than a fixed series of actions to be executed in a specific manner. The process of generation (it is generative AI) will work a little differently each time it is implemented (see the sketch following this list).
  2. I found the execution of the prompt varied with the AI tool I used. ChatGPT worked the best. I thought that NotebookLM would be an ideal way to implement this writing-to-learn task because of the way you can upload content to be the target of interactions with the AI. It is my opinion that the way NotebookLM works is easier to understand and explain. However, after an extended series of attempts to rework the prompt, I was unable to get the required sequence to work. NotebookLM would provide feedback in response to the first item from the outline and then stop.
  3. This post is not an attempt to promote the specific prompt I wrote. I certainly don’t care if others try it with information sources of their choosing. This was an exploration for me and it is my hope others may continue in a similar way using my initial effort as a guide. 
  4. One final point I think is important. The approach I am describing here is using the interactive capabilities of AI to focus on an information source I trust. I am not asking AI to use its generic information base to provide the content to be learned. The nature of the interaction may not be perfect, but it primarily focuses on a vetted source and assumes learners have read this source.
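A small sketch of the point made in the first comment above, again using OpenAI's Python library under the same assumptions as before (model name illustrative): submitting an identical prompt twice will normally produce two different responses whenever the sampling temperature is above zero.

```python
# Sketch: the same prompt, submitted twice, is generated anew each time,
# so the wording (and sometimes the substance) of the responses differs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
prompt = ("List three significant topics an introductory chapter "
          "on human cognition should cover.")

for run in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model choice
        temperature=0.7,  # nonzero temperature permits varied output
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Run {run} ---")
    print(response.choices[0].message.content)
```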

Resources:

Khan, S. (2024). Brave new words: How AI will revolutionize education (and why that’s a good thing). Viking.

Mollick, E. (2024). Co-intelligence: Living and working with AI. Penguin.


Tapping into the crowd mind with chat

I recently read Paul’s “The Extended Mind”, which deals with how we can harness resources external to our own minds and our own cognitive activities to deal with the abundance and complexity of information we must process daily. One value of a book such as this is that it greatly expands the variety of external resources that can influence our thinking. This variety aside, my main focus continues to emphasize technology (sometimes what some now call a second brain) and collaborations of various types (e.g., peers, experts, those with similar interests we have never met). I offer this brief overview as a recommendation for those who might find this book a useful investment of reading time.

I will focus this post on several social experiences that often are denigrated – meetings and lectures (especially online presentations). Without defending these social experiences, I will offer some insights that may increase the productivity of these social gatherings. In both cases, the key idea concerns how to take better advantage of the collective knowledge available in the real-time group setting.

Meetings

The suggestion I discovered here is straightforward, so I can describe it quickly. Why are meetings the butt of so many jokes targeting the time wasted and the inefficiency of such gatherings? I spent a good part of my years as a college faculty member and department chair in meetings. The advantage of meetings is not the distribution of information, as this could be accomplished by a memo or email. The advantage is the opportunity to tap into uniquely held knowledge and collectively problem-solve. However, patterns of communication often work against these potential advantages. Too often, only a few or even one individual speaks. Paul suggests that this is due to the way issues are presented. When a group leader begins by spending too much time presenting his or her perspective on an issue, few participants tend to respond to add their insights. Paul proposes that the leader begin by asking “what do you think about XXX” and wait to offer a personal opinion. He even recommends that individuals write their positions on note cards to distribute input more broadly. There must be a digital version of this suggestion.

The challenge of eliciting what individuals know is key to many of the strategies Paul proposes for benefiting from collaboration. Perhaps a better way to describe the message might be that collaborative efforts are often not actually that collaborative.

The lecture experience

It is common to diss lectures, and it is not that I don’t understand some of the issues. But why are educator/teacher presentations singled out for criticism? Why is the reading of a book different from participating in a lecture? Both approaches focus on the communication of information that is to be processed by individuals. Lectures probably offer greater opportunity for collaboration because individuals are in the same location and, perhaps more importantly, they are present at the same time. Yes, there is such a thing as collaborative reading and that also interests me, but “social presence” is more real in a physical or virtual group experience. Sure, tutoring is superior, but we must recognize practical realities in information exposure and do what is reasonable to take advantage of the social setting. While not a part of most critical reactions, exposure to information is not the end of the learning process, but it seems obvious exposure is a necessary component.

I emphasize thinking opportunities in my general approach to formal and informal learning experiences. How might thinking opportunities be encouraged at times and in ways that improve the efficiency and success of learning? The effort I have invested in evaluating and promoting layering activities is based on this perspective. Layering provides a way to embed learning opportunities in what I call the exposure phase of learning. Adding questions on top of reading content or within video presentations makes a good example. When used in this way, questions are a way to add thinking opportunities during a stage of learning that is often passive unless a learner initiates such behaviors without external prompts.

Paul takes a similar perspective with an emphasis on collaboration. Many educators may recognize an example of this emphasis in the active learning method in which a lecturer pauses during a presentation to throw out a discussion challenge to small groups of students typically sitting around tables rather than the lines of chairs in a typical lecture hall. The groups deal with this challenge and may be asked to report a summary or conclusion. Think-pair-share activities are a variant of this approach. With many lecture settings, a group is there, but there is no collaboration.

The pandemic brought an educational emphasis on virtual learning typified by Zoom sessions. I think it fair to say the typical experience has received a great deal of criticism. I had retired from teaching, so aside from using Zoom in other ways, I was not teaching online during that time. My experience with Zoom has involved a book club I join each week and graduate classes in instructional design. The Zoom classes involved students who found it impractical to come to campus, so I was teaching both face-to-face and online simultaneously, which comes with its own set of challenges. My personal experience using chat while online has been limited to the book club.

There are plenty of suggestions for the use of chat to create active learning experiences while working with students in a synchronous setting. These strategies involve an assortment of proposed benefits. My focus is narrower and addresses the importance of tapping the individual experiences of group members to embellish the presentations of the instructor. The intent of this recommendation is to encourage students to connect concepts being presented with personal experiences. Such connections are essential in encouraging transfer – making what is learned useful in a more general way. Students can make such connections without external assistance, but the thinking required to make such connections requires that relevant experiences come to mind and that learners have the motivation to commit to reflection. Questions and prompts are a way to encourage this effort, and a chat system provides the opportunity.

It is easy to imagine how such prompts could work with Zoom. The chat function extends and expands the impact of asking a question of the group. So, imagine I am presenting the concept of working memory overload. I may think this concept offers important applied insights, but understanding an explanation of the concept does not by itself lead to learner associations. Asking the group to suggest examples typically generates useful responses from the usual participants, but it is often the nonparticipants who are most in need of external supports. Here is the value of chat. All can participate somewhat anonymously, with less pressure and no need to be the first to respond. Chat has a very important added benefit: you get to tap into the knowledge of peers. An immediate response might identify the problem of distracted driving, which you might not have thought of, but which you recognize and see the connection. Someone else mentions trying to use Instant Messenger in class reducing the attention students pay to what else is happening in class. As the responses roll in, each learner has the opportunity to consider connections both familiar and unfamiliar. Some may not be appropriate examples, but even considering such offerings allows extended thinking about the concept in question.

I do recognize that Zoom chat offers private messages, which obviously provides the potential for nonproductive distraction. So do the side conversations students in a face-to-face classroom might be having with the students sitting next to them.

Zoom does not have a provision for blocking the responses of others until a personal response has been submitted. Some instructors, recognizing this issue, have proposed a “Ready, Set, Go” strategy. If the type of activity I have just described is a frequent technique and a pattern of participation has been established, students understand that a question or request will be made; students are then to consider a response, but not enter it until given the signal. Time for thinking is essential if thinking is expected. Time to submit a response and time to contemplate the responses generated by peers must be allowed. The instructor might also use the post-response delay to highlight some of the examples for the group.

Using chat as part of online instruction may sound like the online equivalent of the classroom response systems that seemed popular a decade or so ago but no longer receive as much attention. This is true. Such systems have a variety of hypothetical benefits. This post has focused on one benefit – how to tap into uniquely held knowledge in an online group setting.

Resource

Paul, A. M. (2021). The extended mind: The power of thinking outside the brain. Eamon Dolan Books.


Adolescent mental health, phones, and schools

I recently listened to Kara Swisher interviewing Jonathan Haidt, the author of The Anxious Generation. I checked on this book, and it is #1 on the New York Times nonfiction list this week. I mention the ranking because it indicates the book is receiving a great deal of attention from the general public and the message must have substantial appeal.

Briefly stated, the book argues that the rising anxiety and mental health issues of adolescents are significantly determined by overparenting and the amount of time spent, especially by adolescent females, on social media. I have personal comments related to this issue, particularly Haidt’s recommendation for phone use in schools, but I would encourage you to listen to the interview because Swisher pushes back, and the interchange offers some useful insights into whether cellphone activity is responsible for an increase in mental health problems.

Certain facts are well established: a) adolescents have spent a great amount of time on their cellphones since the introduction and wide purchase of iPhones (this specific event figures heavily in some arguments), and b) beginning before the COVID shutdown and continuing to the present, adolescents, particularly females, have shown an increase in mental health issues. The big question, the focus of Haidt’s book and lots of research (citations will be included), is whether there is a causal relationship such that a significant proportion of the increase in mental health problems can be accounted for by the great amount of online activity, mostly using personal phones.

The amount of time adolescents and many of the rest of us spend online is staggering. Pew has done regular surveys of teens to quantify their online activity and provides the following data points:

  1. YouTube (95%), TikTok (67%), Instagram (61%), and Snapchat (60%) are among the most popular social media platforms used by teens.
  2. On average, teens spend 1.9 hours daily on YouTube and 1.5 hours on TikTok, with males spending more time on YouTube and females on TikTok.
  3. Around 35% of teens say they use at least one of the top social media platforms “almost constantly”.

As an aside, one of the “quality studies” Haidt mentioned to support his claim in the Swisher interview involved adult use of Facebook (Allcott et al., 2020). Swisher asked for examples of manipulative research showing that phone use and depression were related, and Haidt provided this study. Pew doesn’t have much to say about Facebook activity among adolescents because the level of use has dropped so low. Haidt acknowledges that few quality studies exist with adolescents because doing manipulative research with participants under the age of 18 is very difficult. This is why so many researchers use college students – they are available and they can participate.

Adolescent mental health issues have increased year by year with a big jump during the COVID years. Data from the Centers for Disease Control and Prevention tracking depression provide a good example of this trend. The following chart was taken from this report.

So, the facts demonstrating adolescents’ heavy use of phones and increasing mental health issues are solid. Now, are these two variables related, and how? All I can say is that the research, or perhaps more accurately the interpretation of the research, is messy. I understand that parents, educators, and policymakers just want the researchers to provide a clear summary, but this just isn’t happening.

You have books such as Haidt’s (I would also recommend Jean Twenge) and medical experts (this summary of research from the National Library of Medicine) offering analyses of the research that come across as placing a heavy burden of blame on cell phone use. I can also recommend scholarly meta-analyses of the phone-mental health studies that reach just the opposite conclusion (e.g., Ferguson et al., 2022). I admit summaries of many of the same research publications that come to very different conclusions are challenging. I read and comment about this type of controversy in other areas (should notes be taken on paper or laptop, are books better understood on paper or from the screen) in which I have read most of the relevant studies and can offer a personal opinion. I am not a clinical psychologist, and in this case, I do not want to go on record telling parents or teachers what they should do about kids and cell phones. If you are interested, I hope I have offered some of the resources you can use to get started.

What I do want to talk about

Getting back to Haidt’s book: Haidt makes several specific recommendations based on his conclusion that cellphone use is damaging.

  1. Schools should be phone-free zones.
  2. Children should first be provided a phone specialized for communication and not Internet use (e.g., a flip phone).
  3. Adolescents should not have access to a smartphone until high school (but see #1).
  4. The minimum age for social media access should be raised from 13 to 16.

Some additional Haidt comments on schools as phone free zones are:

  1. The agreement of schools to ban smartphones is important because a total ban applies to all students and avoids the problem of some students having access and some students not.
  2. School policies such as having access only during class or having phones in backpacks or lockers don’t work.

I do not support classroom bans on smartphones. In part, my logic is based on research experience I do have, and this work involved cyberbullying. A couple of things I remember from the research are that cyberbullying very rarely originated using school equipment (cell phones were less of an issue at the time), but the targets and originators of cyberbullying typically were students from the same schools. For this second reason, most assumed schools were in the best position to address the problem. The key point here is that bullying actions originated outside of schools (homes and homes of friends), but schools were in the best position to provide “educational remedies”. Some aspects of the cellphone and mental health issues are similar.

I see classroom use of cellphones as a convenience not that different from the use of laptops, chromebooks, or tablets. All devices could be used to access damaging or useful experiences. However, students in classrooms are in a supervised environment, unlike the situation in the home or other locations outside of the classroom. In allowing the use of phones (or other digital devices), teachers not only monitor use but also have opportunities to focus on productive uses AND explore the damaging issues in a group setting.

You may not agree with my position, but I think the logic is sound and recognizes that phone use is far more frequently unsupervised outside of classrooms than within classrooms. It is easy to target schools and ignore the reality that parents are more likely to ignore what their children are doing on their phones for much longer periods of time. Experiences within schools are not the core source of problems that may exist.

The importance of a whole-group experience also has two sides. Yes, if no student can use a phone in school, then there are no haves and have-nots. But nothing about this solves what happens outside of classrooms. Addressing the issues of what students experience online is going to be more consistent and probably more effective when all students examine these issues as part of a formal curriculum.

So, I don’t think banning phones in classrooms solves a mental health problem. I think the science on mental health and adolescence is still unclear. I disagree with what Haidt said in response to Swisher’s probing. His position eventually came down to “if it isn’t cellphones, then what is it?” When there are positives and negatives associated with an activity (perhaps causally and perhaps not), simple solutions rarely produce a substantial advantage.

References:

Links are provided when possible. Other sources are cited below. 

Allcott, H., Braghieri, L., Eichmeyer, S., & Gentzkow, M. (2020). The welfare effects of social media. American Economic Review, 110(3), 629-676.

Ferguson, C. J., Kaye, L. K., Branley-Bell, D., Markey, P., Ivory, J. D., Klisanin, D., Elson, M., Smyth, M., Hogg, J. L., McDonnell, D., Nichols, D., Siddiqui, S., Gregerson, M., & Wilson, J. (2022). Like this meta-analysis: Screen media and mental health. Professional Psychology: Research and Practice, 53(2), 205–214. https://doi.org/10.1037/pro0000426
