I want to make what follows as practical as I can. I understand that standards, curriculum, and instructional approaches (e.g., mastery) can drift into the realm of what some dismiss as merely theoretical, but I think I can frame what I have to say here in terms of questions teachers must answer all of the time.
Let me begin with the question of what you regard as the essential knowledge and skills your students should acquire. I would suggest that you address this question, in one way or another, by answering a series of more specific questions.
What content and experiences will I use with the students in my class?
What will I do to evaluate student mastery of these experiences and content?
What will I do with the students who perform poorly on the methods of evaluation I have applied?
My proposal is that this sequence of questions provides a way to examine the label of essential. Some experiences and content were essential enough to provide. Some experiences and content were essential enough to evaluate. It is the third question I am most interested in because all educators face this challenge, and it really gets at the core of the designation of “essential”: what happens when skills or knowledge are not developed? Is the answer “I move ahead to new content and new skills”? Do I refer some students for outside help? Do I take students or small groups aside and work with them in an effort at remediation?
Old school mastery proponents (e.g., Bloom, Keller) addressed what must be mastered in a fuzzy way. Rather than identify specific things that must be known, they hedged. Bloom proposed a group-based mastery system. Imagine a textbook chapter and related classroom activities to be mastered over a two-week period. Bloom proposed that teachers first focus on essential skills and knowledge (it is not really clear to me how this material was identified). At the end of perhaps a week, students completed an evaluation of this material that Bloom labeled a formative evaluation. Learners who “passed” this evaluation went on to supplemental goals, and those who failed to achieve mastery received further help with the essential goals. At the end of the time set aside for the unit, students completed a summative evaluation over the essential goals, and everyone moved on, essential goals met or not.
Keller’s PSI (Personalized System of Instruction) focused heavily on written content as a way to allow personalized progress – think textbook again. Reading is an individual way to confront new information. When students felt they were ready to be evaluated on their mastery of a unit, they asked a tutor to provide an assessment. Pass/not pass was based on an overall score, so what was mastered was not really determined at the level of specific elements of understanding. Those who passed went on to the next unit, and those who did not pass continued to study the chapter yet to be mastered with some assistance from a tutor.
Modern mastery advocates (Khan Academy, Modern Classroom Project) confront the question of what is essential more directly. Before I try to address how, I will try to answer my original question – “Mastery of what?” I would suggest essential means a) knowledge or skill that is necessary for learning some other essential knowledge or skill, or b) knowledge or skill the system has a responsibility to develop, with that development expected of the course or grade level I teach.
For example, double-digit subtraction is essential to being able to master long division. The “North Dakota Studies” course is likely the one time you would learn why the Red River Valley has some of the richest farmland in the world. Okay, maybe this is not essential, but it matters to those who live in this area and depend on agriculture. Essential is a squishy thing, and one could argue that a cellphone would allow anyone to perform long division or explain the soil quality of the Red River Valley without knowing how to subtract or knowing basic geological facts. However, I assume there are essential things we teach that are a subset of all things we teach.
The Khan Academy uses a complex model of the content with multiple strands identifying which skills and knowledge are prerequisites to which other skills. Students make progress across strands and must show mastery of prerequisites when identified within a given strand. Khan complained about the “Swiss cheese knowledge” that can be generated when students advance without prerequisite knowledge, leaving gaps in skills and understanding that make future learning more difficult.
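To make the strand idea concrete, here is a minimal sketch of how a prerequisite map and a mastery check might be represented. This is my own illustration, not Khan Academy's actual data model; the skill names and the tiny graph are hypothetical.

```python
# Hypothetical prerequisite map: each skill lists the skills it depends on.
prerequisites = {
    "single_digit_subtraction": [],
    "double_digit_subtraction": ["single_digit_subtraction"],
    "long_division": ["double_digit_subtraction"],
}

def ready_to_learn(skill, mastered):
    """A student is ready for a skill when every prerequisite has been mastered."""
    return all(p in mastered for p in prerequisites[skill])

def gaps(skill, mastered):
    """Return the missing prerequisites -- the 'Swiss cheese' holes."""
    return [p for p in prerequisites[skill] if p not in mastered]

# A student who was moved ahead without mastering double-digit subtraction
mastered = {"single_digit_subtraction"}
print(ready_to_learn("long_division", mastered))  # False
print(gaps("long_division", mastered))            # ['double_digit_subtraction']
```

Tracking something like this for every student is exactly the kind of record keeping that is tedious for a teacher and trivial for software.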
The Modern Classroom Project suggests educators identify differences in the importance of specific knowledge or skills using a triage of sorts – must do, should do, and aspire to do or need to know, good to know, and aim to know. This approach allows classroom educators to differentiate objectives in a way that allows more uniform progress within a group and still requires an extended focus on some prerequisites.
My long-time interest in mastery learning has more recently combined with my interest in the classroom benefits of technology, and together they allow what I consider improvements in both the value and the practicality of mastery approaches. The value concerns a way to address the difficulty of new learning when past learning has not provided important existing knowledge. The efficiency associated with technology comes from tracking what has been learned and what should be learned next at a far more specific, individual-student level. As I hope my analysis has made clear, the specificity of what should be learned next sometimes matters and sometimes does not. “Just-in-time learning” is always possible, but this concept still requires a method of identification and application that group-based approaches to teaching and learning do not make practical. Using teacher skills in a different way (tutor, coach) in combination with the value of technology in tracking individuals and delivering learning experiences seems a productive alternative to group-based approaches.
As a final comment, I wonder if big data will provide a way to address the issue of necessary prerequisites in a more specific way. Would there be a computational way of creating the strands of knowledge/skill units Khan has identified based on intuition?
References
Bloom, B. S. (1968). Learning for Mastery. Instruction and Curriculum. Regional Education Laboratory for the Carolinas and Virginia, Topical Papers and Reprints, Number 1. Evaluation comment, 1(2), n2.
Khan, S. (2012). The one world schoolhouse: Education reimagined. Twelve.
Keller, F. S. (1968). “Good-bye teacher”. Journal of Applied Behavior Analysis, 1, 79–89
Modern Classroom Project – https://intercom.help/modern-classrooms/en/articles/5261634-must-do-should-do-and-aspire-to-do
I am a participant in a book study group that covers a variety of topics. The group has an interest in writing and the writing process that is sometimes the focus of what we read and discuss. The subtopic of the preparation of content for college classes comes up frequently and sometimes the role of Open Educational Resources (OER) as classroom resources. The discussion of OER and ChatGPT (another issue that keeps coming up) got me thinking about whether ChatGPT could be wielded to produce college textbooks.
I have written a couple of college textbooks and I still dabble (see left-hand column) so I understand the process. I would not use ChatGPT for the type of course I have focused on.
I have probably taught the Introduction to Psychology course 40-50 times in my career and used maybe 10 different textbooks. Some who write for this course may disagree with my assessment that the treatment of the topics in this type of course is generic and predictable, but this would be my description. I think most efforts for this audience take a predictable approach so that the instructors making book selections are comfortable with the book they assign. I would find it easy to predict chapter themes, chapter topics, the classic experiments that are described, and their implications. I wanted to see how much I could copy and paste from the content ChatGPT generated based on my prompts and whether a reasonable body of content could be produced in this fashion.
I think the best way to go about this type of writing project is to approach it as developing and then expanding an outline. You ask ChatGPT to identify topics and subtopics and then ask it to describe or explain the individual elements of the outline created in this manner. You could then ask for classic experiments, examples of supporting research, and examples of the concepts described. ChatGPT should do most of the work with this approach.
I found that, because I knew pretty much what I was looking for based on my existing knowledge of this type of course, I could create an expanding document in which I embedded this material within the outline, built in a second tool. So think of copying and pasting material from ChatGPT into Google Docs, Word, or Scrivener: first the outline of topics and subtopics, then the explanations and descriptions, the studies, examples, and applications, and perhaps an occasional story about a famous psychologist. I don’t see creating a final product purely by copying and pasting; I would probably write certain elements myself (e.g., chapter introductions and conclusions) and generate transitions between the chunks of ChatGPT-generated content so the work does not seem so choppy when read. Just having the lists ChatGPT generates would save me a great amount of time in putting together such a product. I would describe the process as spending far less time thinking and more time writing, and the combination of my experience and the generative capabilities of ChatGPT should allow content to be produced much more quickly and with less effort.
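For readers who would rather script this expanding-outline process than copy and paste by hand, the following is a rough sketch of what the same workflow might look like against the OpenAI API. I did my own work interactively in the ChatGPT web interface, so treat this as an assumption-laden illustration: it presumes the pre-1.0 openai Python package (the ChatCompletion interface), an API key in the OPENAI_API_KEY environment variable, and placeholder prompts and model name.

```python
import openai  # pre-1.0 openai package; reads OPENAI_API_KEY automatically

def ask(prompt):
    """Send one prompt to the chat model and return the text of the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate the chapter outline of topics and subtopics.
outline = ask(
    "I want to have my students read about behavioral models of learning - "
    "operant conditioning, classical conditioning, and modeling. "
    "What topics are important for each model?"
)

# Step 2: expand each element of the outline -- explanations, classic studies,
# and everyday examples -- and collect the chunks for pasting into a draft.
chunks = [outline]
for topic in ["operant conditioning", "classical conditioning", "modeling"]:
    chunks.append(ask(f"What should students in an introductory class know about {topic}?"))
    chunks.append(ask(f"Give everyday examples of {topic} that college students would recognize."))

print("\n\n".join(chunks))
```

The script mirrors the two stages I describe: an outline first, then expansions of each outline element that still need a human author to select, order, and stitch together.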
Again, this approach works because of my familiarity with the topic and the type of course I am writing to support. I could not create a credible product if writing a similar introduction to a different field of study.
What follows are some examples of the process I have just described. The idea is to offer something concrete as generated by ChatGPT. It would be far too lengthy to include all of the pieces of content for an entire chapter so I have tried to generate examples of the stages of development I propose and some of the elements to be embedded within the expandable outline I have just described.
I have selected a chapter on behaviorism as my example simply because the structure of the chapter is easy for me to conceptualize. I can begin by asking about the three main subtopics (operant conditioning, classical conditioning, and modeling) and let ChatGPT create the structure of the chapter from this initial prompt. I found it to be more difficult to approach other chapters I have experimented with, but this should be enough for a demonstration.
I will interject my own comments in the material that follows to explain what I am having ChatGPT do. The ChatGPT material appears as quotes and my prompts as bolded text.
The first set of materials provides an example of what I mean by an outline of topics and subtopics. You can see how I attempted to prompt the AI service to generate output of the type I wanted.
Prompt: I want to have my students read about behavioral models of learning – operant conditioning, classical conditioning, and modeling. What topics are important for each model?
Sure, here are some important topics for each of the three behavioral models of learning you mentioned:
Social cognitive theory: self-efficacy, outcome expectancies, personal goals
Mirror neurons and brain regions involved in imitation and observation
The following prompt was used to expand a portion of the higher-level outline, in this case focused on operant conditioning.
Prompt: What should students in an introductory class know about the topics you have listed for operant conditioning?
Great question! Here are some key concepts that students in an introductory class should know about operant conditioning:
Reinforcement:
Positive reinforcement occurs when a behavior is followed by the presentation of a pleasant stimulus, which increases the likelihood of that behavior occurring again.
Negative reinforcement occurs when a behavior is followed by the removal of an unpleasant stimulus, which also increases the likelihood of that behavior occurring again.
Punishment:
Positive punishment occurs when a behavior is followed by the presentation of an unpleasant stimulus, which decreases the likelihood of that behavior occurring again.
Negative punishment occurs when a behavior is followed by the removal of a pleasant stimulus, which also decreases the likelihood of that behavior occurring again.
Shaping:
Shaping involves reinforcing successive approximations of a desired behavior until the desired behavior is achieved.
Shaping schedules can be continuous (every instance of the desired behavior is reinforced) or intermittent (only some instances of the desired behavior are reinforced).
Extinction:
Extinction occurs when a behavior that was previously reinforced is no longer reinforced, which leads to a decrease in the frequency of that behavior.
Extinction can be accompanied by an extinction burst, which is a temporary increase in the frequency of the behavior before it decreases.
It’s important for students to understand these concepts and how they apply to real-life situations, as operant conditioning plays a role in many aspects of our behavior, including education, parenting, and workplace management.
The following is a prompt intended to identify examples of reinforcement students might connect to their own lives. There would be no need to include the entire list, but I could efficiently select several from this list and expand a bit.
Prompt: What are some examples of positive and negative reinforcement might students recognize from their own experiences?
Great question! Here are some examples of positive and negative reinforcement that students might recognize from their own experiences:
Positive reinforcement
A child receives a candy for completing their homework, which increases the likelihood of them completing their homework in the future.
A student gets praise from their teacher for answering a question correctly, which increases the likelihood of them participating in class in the future.
An employee receives a bonus for meeting their sales quota, which increases the likelihood of them meeting their sales quota in the future.
Negative reinforcement:
A driver puts on their seatbelt to stop the beeping sound, which increases the likelihood of them putting on their seatbelt in the future.
A student finishes their work early to avoid having to do it during recess, which increases the likelihood of them finishing their work early in the future.
An employee works overtime to avoid getting a reprimand from their boss, which increases the likelihood of them working overtime in the future.
It’s important to note that reinforcement can be either positive or negative depending on whether a desirable consequence is presented or an undesirable consequence is removed. Additionally, it’s important to recognize that reinforcement doesn’t always have to involve tangible rewards or punishment; sometimes, social praise or criticism can also serve as a form of reinforcement.
The following prompt illustrates how I would identify prominent individuals associated with this topic. I would probably not include an entire list for all chapters, but it would be common to include everyone from this list in a chapter on this topic.
Prompt: There are several important scientists who developed our understanding of operant conditioning. Who are a few of them and some of the classic studies associated with their work
B.F. Skinner: Skinner is perhaps the most well-known and influential scientist associated with operant conditioning. He developed the “Skinner box,” a laboratory apparatus used to study operant conditioning in animals. Some of his classic studies include “The Behavior of Organisms” (1938) and “Science and Human Behavior” (1953), both of which outlined the principles of operant conditioning and their application to human behavior.
Edward Thorndike: Thorndike was one of the first scientists to study the principles of operant conditioning. He developed the “law of effect,” which states that behaviors followed by positive consequences are more likely to be repeated, while behaviors followed by negative consequences are less likely to be repeated. Some of his classic studies include “Animal Intelligence” (1911) and “The Fundamentals of Learning” (1932).
Ivan Pavlov: Pavlov is perhaps best known for his work on classical conditioning, but he also made important contributions to the study of operant conditioning. He studied the effects of rewards and punishments on dogs’ behaviors, and his work provided insights into the role of conditioning in the formation of habits. Some of his classic studies include “The Work of the Digestive Glands” (1897) and “Lectures on Conditioned Reflexes” (1928).
John Watson: Watson was a psychologist who made important contributions to the study of behaviorism, which emphasizes the role of environmental factors in shaping behavior. He studied the principles of operant conditioning and their application to human behavior, and he is perhaps best known for his controversial “Little Albert” study, in which he conditioned a young child to fear a white rat.
These scientists and their classic studies have contributed significantly to our understanding of operant conditioning and its application to human and animal behavior. By studying their work, students can gain a deeper appreciation for the principles of operant conditioning and how they shape the behaviors we exhibit in our daily lives.
Prompt: What principles of learning did Skinner discover and how did he make these discoveries?
B.F. Skinner was a psychologist who made significant contributions to the field of behaviorism and our understanding of operant conditioning. He discovered several principles of learning, which he outlined in his influential work “The Behavior of Organisms” (1938) and later in “Science and Human Behavior” (1953). Here are some of the principles of learning that Skinner discovered:
Reinforcement: Skinner discovered that behaviors that are followed by positive consequences, such as rewards or praise, are more likely to be repeated, while behaviors that are followed by negative consequences, such as punishment or criticism, are less likely to be repeated. He called this principle reinforcement and believed it was a fundamental principle of learning.
Shaping: Skinner also discovered that behaviors can be shaped through reinforcement of successive approximations. This means that a behavior can be broken down into smaller components, and each component can be reinforced until the desired behavior is achieved. This principle is known as shaping, and it is often used in animal training and education.
Extinction: Skinner also discovered that behaviors that are no longer reinforced will eventually become extinct. This means that if a behavior is no longer followed by a positive consequence, the behavior will gradually decrease in frequency until it ceases altogether.
Schedules of reinforcement: Skinner discovered that the frequency and timing of reinforcement can have a significant impact on behavior. He studied different schedules of reinforcement, such as fixed-ratio, variable-ratio, fixed-interval, and variable-interval, and found that each schedule had unique effects on behavior.
Skinner made these discoveries through a combination of laboratory experiments, observations of animal and human behavior, and theoretical analysis. He developed the “Skinner box,” a laboratory apparatus used to study operant conditioning in animals, and he used this device to study the effects of reinforcement and punishment on behavior. He also conducted experiments with humans, such as studying the effects of different reinforcement schedules on study habits. Overall, Skinner’s work had a profound impact on our understanding of learning and behavior, and his discoveries continue to be influential in psychology and related fields.
This is an example of a prompt I used to see how close ChatGPT would get to generating a final segment of text on shaping.
Prompt: Provide an example of how shaping works
Shaping is a process of reinforcing successive approximations of a desired behavior until the final behavior is achieved. This process is often used in animal training, education, and therapy.
Here’s an example of how shaping works in animal training:
Let’s say you want to train a dog to spin in a circle on command. However, the dog has no prior experience with this behavior, so you need to shape the behavior from scratch.
Start by reinforcing any behavior that is even remotely close to spinning in a circle. For example, if the dog turns its head to the side, give it a treat or praise. This will help the dog understand that it is being rewarded for moving in the right direction
Once the dog starts turning its head to the side consistently, raise the criteria by only rewarding the dog when it turns its head a little further. For example, if the dog turns its head 45 degrees to the side, reward it. If it only turns its head 30 degrees, do not reward it.
Keep raising the criteria gradually until the dog is turning in a full circle. This might take several training sessions, as the dog needs to understand what is expected of it at each step of the process.
Finally, add a verbal or visual cue to the behavior, such as saying “spin” or using a hand signal. Once the dog associates this cue with the behavior, it should be able to spin on command.
Through the process of shaping, the dog has learned to perform a complex behavior that it had no prior experience with. This process can be used to teach a wide variety of behaviors, from simple tricks to complex tasks. By breaking down a behavior into small, achievable steps and reinforcing each step along the way, shaping can help animals (and humans) learn and master new skills.
This group of prompts provides examples of what I would use to generate material on modeling.
Prompt: What is important for introductory students to know about observational learning
Observational learning, also known as social learning or modeling, is the process of acquiring new behaviors or modifying existing behaviors by observing others. Here are some important points to consider when introducing students to this topic:
The basic principles:
Observational learning involves several key principles, including attention, retention, reproduction, and motivation. These principles suggest that people learn by paying attention to the behaviors of others, retaining information about these behaviors in memory, reproducing the observed behaviors, and being motivated to imitate the behaviors based on their outcomes.
Examples of observational learning:
There are many examples of observational learning, such as learning to cook a new recipe by watching a cooking show, learning to drive a car by observing others, or learning to solve a math problem by watching a teacher demonstrate the steps. These examples illustrate how observational learning can occur in a variety of contexts and for a range of behaviors.
Observational learning and modeling:
Observational learning is sometimes referred to as modeling because it involves the observation of models, or individuals who demonstrate the behavior being learned. Models can be live or symbolic (such as characters on TV or in books), and they can influence behavior through both positive and negative reinforcement.
Observational learning and cognitive processes:
Observational learning involves more than just watching others; it also involves cognitive processes such as attention, memory, and motivation. Observational learning can be influenced by factors such as the salience of the behavior being observed, the similarity between the observer and the model, and the perceived outcome of the behavior.
Applications of observational learning:
Observational learning has many practical applications, such as in education, parenting, and advertising. By understanding the principles of observational learning, individuals can use modeling to promote positive behaviors and discourage negative ones.
Overall, observational learning is a complex process that involves cognitive and behavioral factors. By understanding the principles of observational learning, students can gain insights into how they learn and how they can use modeling to promote their own growth and development.
The following prompts expand the basic material on modeling to include a classic study (Bandura’s bobo doll experiment) and then extend this classic experiment to consider how aggressive behavior might be encouraged by playing certain types of video games.
Prompt: Describe the Bobo doll experiment
The Bobo doll experiment was a study conducted by psychologist Albert Bandura in 1961 to examine how social learning occurs in children. The experiment involved exposing children to a video of an adult aggressively hitting and verbally abusing an inflatable doll called “Bobo.”
In the experiment, Bandura divided children into three groups. The first group was exposed to the video of the adult hitting and yelling at the doll. The second group was shown a video of an adult playing with the doll in a non-aggressive manner. The third group was not shown any videos.
After watching the videos, the children were brought into a room filled with toys, including the Bobo doll. The researchers observed how the children behaved towards the doll. The results of the study showed that children who were exposed to the aggressive video were more likely to exhibit aggressive behavior towards the doll compared to the other two groups. They imitated the behavior they had witnessed in the video by hitting, kicking, and verbally abusing the Bobo doll.
The Bobo doll experiment demonstrated that children learn through observation and imitation. It also highlighted the importance of modeling appropriate behavior for children and being mindful of the types of behaviors that children are exposed to in their environments. The study had a significant impact on the field of psychology and our understanding of social learning theory.
Prompt: What are the implications of this experiment? What real world behaviors might it explain?
The Bobo doll experiment has important implications for our understanding of how children learn and develop behavior. Some of the key implications of the experiment are:
Children learn through observation and imitation: The experiment showed that children can learn new behaviors by watching others. They may imitate positive or negative behaviors, depending on what they observe.
Aggressive behavior can be learned: The experiment demonstrated that exposure to aggressive behavior can lead to children imitating that behavior. This suggests that violent behavior in the media, such as in video games and movies, may contribute to aggressive behavior in children.
Modeling appropriate behavior is important: The study highlighted the importance of modeling appropriate behavior for children. Adults need to be mindful of the behaviors they exhibit in front of children and ensure they model positive behaviors.
Social learning theory has important implications: The study provided support for social learning theory, which suggests that people learn from observing others in their environment.
In terms of real-world behaviors, the Bobo doll experiment may explain why some children are more likely to engage in aggressive behavior, especially if they have been exposed to violence in their environment. It also highlights the potential negative impact of violent media on children. Additionally, the experiment suggests that positive role models, such as parents and teachers, can have a significant impact on a child’s behavior by modeling appropriate behaviors
Prompt: Cite research that shows some video games increase aggression
There is a significant body of research that suggests that some video games can increase aggression in children and adolescents. Here are a few examples:
Anderson, C. A., & Dill, K. E. (2000). Video games and aggressive thoughts, feelings, and behavior in the laboratory and in life. Journal of Personality and Social Psychology, 78(4), 772-790. This study found that playing violent video games led to increased aggression in participants, both in the laboratory and in real-life situations.
Greitemeyer, T., & Mügge, D. O. (2014). Video games do affect social outcomes: A meta-analytic review of the effects of violent and prosocial video game play. Personality and Social Psychology Bulletin, 40(5), 578-589. This meta-analysis of multiple studies found that playing violent video games was associated with increased aggression and decreased prosocial behavior
Ferguson, C. J., & Kilburn, J. (2010). The public health risks of media violence: A meta-analytic review. Journal of Pediatrics, 154(5), 759-763. This meta-analysis found that exposure to violent video games was associated with increased aggression, as well as other negative outcomes such as decreased empathy and increased desensitization to violence.
Gentile, D. A., Lynch, P. J., Linder, J. R., & Walsh, D. A. (2004). The effects of violent video game habits on adolescent hostility, aggressive behaviors, and school performance. Journal of Adolescence, 27(1), 5-22. This study found that playing violent video games was associated with increased aggression and decreased academic performance in adolescents.
These are just a few examples of the research suggesting that some video games can increase aggression. While the relationship between video games and aggression is complex and not fully understood, these studies provide evidence that parents and caregivers should be mindful of the types of games their children are playing and limit exposure to violent content if necessary.
Prompt: Describe the Anderson and Dill study
The Anderson and Dill study (2000) investigated the relationship between playing violent video games and aggressive thoughts, feelings, and behaviors. The study was conducted in two parts: a laboratory experiment and a field study.
For the laboratory experiment, the researchers randomly assigned 210 college students to play either a violent or non-violent video game for 20 minutes. The violent game was the first-person shooter game “Mortal Kombat,” while the non-violent game was the sports game “Tetris.” After playing the game, the participants completed a series of tasks designed to measure aggressive thoughts, feelings, and behaviors.
The results showed that participants who played the violent game had more aggressive thoughts and feelings compared to those who played the non-violent game. They also had higher levels of physiological arousal, as measured by heart rate and skin conductance. Additionally, when given the opportunity to aggress against another person in a separate task, those who played the violent game were more likely to administer louder and longer-lasting noise blasts to their opponent.
For the field study, the researchers surveyed 227 high school students about their video game habits and levels of aggression. They found that students who played more violent video games were more likely to engage in physical fights, verbal arguments, and other forms of aggressive behavior.
Overall, the Anderson and Dill study provided evidence that playing violent video games can increase aggressive thoughts, feelings, and behaviors. While the study was conducted with college and high school students, the findings suggest that parents and caregivers should be mindful of the types of games their children are playing and limit exposure to violent content if necessary.
Summary:
This post was intended as a proof of concept explaining how I could make use of ChatGPT and the knowledge already in my head from many years of teaching the Introduction to Psychology course to produce a reasonable textbook for this course.
The approach I have outlined here made use of ChatGPT to generate an outline of topics and subtopics appropriate to one chapter of this textbook. ChatGPT was then asked to produce explanations of items from the outline, examples and descriptions of research relevant to these concepts, and possible applications of the core ideas.
My proposal is that an experienced individual working in combination with ChatGPT could rapidly generate the content for a survey course. I am not proposing that this product would be a great product, but it would be a reasonable equivalent of commercial products, generated at no cost beyond my time. I had those interested in OER content in mind in exploring this option.
I can identify a half dozen or fewer themes that have captivated my professional imagination over the 40+ years of my academic career. Many of these themes were at the core of specific research interests and my applied work. Sometimes a theme was something I found interesting when I first encountered it, but I saw no practical way the idea could be implemented. Sometimes this situation has changed. The best example of this “opportunity discovered” comes from my original interest in individual differences in the rate of learning and my later interest in technology and how the affordances of technology could make responding to such differences practical.
The concept of aptitude is a topic educational psychologists teach. We may talk about issues associated with aptitude tests, biases in these tests as measures of aptitude, or problems in the way test results are applied. Intelligence tests are perhaps the best example of an attempt to estimate general aptitude. Aptitude tests are about prediction, and intelligence scores are predictive of achievement. Past achievement may be a better predictor of future achievement, but sometimes there is value in breaking down the components that contribute to achievement differences. Aptitude as an estimate of potential does not guarantee that potential will be realized, and this difference, if real, is worth investigating.
As I said originally, I am interested in individual differences in the rate of learning and the practical consequences of these differences under different classroom circumstances. I can trace my personal interest back to the theoretical work of Carroll (1963, 1989), which proposed what I interpreted as an optimistic model of learning. The model proposed that most individuals could learn most things if provided enough time. Carroll then differentiated the time required from the time provided, and broke time required down according to the variables that influence it. Carroll proposed that aptitude was a way of understanding the time required under ideal conditions of optimal instruction and the presence of relevant existing knowledge.
I saw a connection to the notion of IQ that few seemed to make. The classic representation, IQ = MA/CA, is really about time and rate of learning. CA (chronological age) is the time available for learning, and MA (mental age) is really how much has been learned, estimated as the average knowledge of others of a given age. Hence MA/CA is a rate of learning. The amount of general knowledge that has been acquired relative to what is typical is one way to estimate this rate. It is problematic in practice because it assumes equal opportunity, which is of course idealistic.
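As a quick worked example (the conventional ratio-IQ formula multiplies by 100 to put the ratio on the familiar scale; the ages here are made up):

$$ \text{IQ} = \frac{MA}{CA} \times 100, \qquad \frac{12}{10} \times 100 = 120 $$

Read as a rate, a 10-year-old whose general knowledge matches the average 12-year-old has been acquiring that knowledge at roughly 1.2 times the typical pace.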
A different way to estimate rate of learning might be to measure it directly, and this is possible with various forms of individualized instruction. I remember the time when individualization was called programmed instruction and was accomplished using sequenced paper materials (see Yeager & Lindvall). For example, I remember a reading comprehension implementation based on a box of cards with short reading passages and related questions that reflected different levels of text complexity. I remember this as an SRA reading product. The box of cards was based on a color scheme representing each level (e.g., brown cards, green cards, orange cards), and there were multiple cards at each level. Students would start at a common level, read a card, and attempt the related questions. If they obtained an established score, they advanced to the next level. If not, they would take a different card of the same color and try again. Students would progress at different rates, and the difference in time required to advance from one level to the next could be used as one way to estimate reading aptitude.
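Here is a minimal sketch of that advancement rule, just to make the "rate as time to advance" idea concrete. The color levels, passing probabilities, and numbers are all made up; the point is that the count of cards attempted before a student clears the levels is a direct, observable estimate of learning rate.

```python
import random

LEVELS = ["brown", "green", "orange"]  # hypothetical color-coded difficulty levels

def cards_to_finish(p_pass, rng):
    """Cards attempted before a student clears every level.
    After a failing score the student tries another card at the same level."""
    cards = 0
    for _ in LEVELS:
        while True:
            cards += 1
            if rng.random() < p_pass:  # passed the card's questions
                break                  # advance to the next color
    return cards

rng = random.Random(1)
fast = cards_to_finish(p_pass=0.9, rng=rng)  # usually passes on the first try
slow = cards_to_finish(p_pass=0.3, rng=rng)  # needs several tries per level
print(fast, slow)  # the ratio of these counts is one rough estimate of relative rate
```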
There are now multiple technology-supported systems (e.g., Khan Academy) based on a similar model (I refer to such approaches as mastery learning, after the use of this term by Bloom and Keller in the late 1960s).
Rate of learning could also be affected by the presence or absence of relevant background knowledge. More recently, Khan (Khan Academy) has described this as the problem of Swiss cheese knowledge. Do students have the relevant prerequisites for acquiring a given new skill or concept?
How little variability in the rate of learning would exist given ideal instruction and the mastery of prerequisites has become an interesting question. To me, this seems similar to asking whether there are really differences in the theoretical notion of intelligence, or whether the individual differences we observe are due to differences in motivation, background knowledge, and instructional quality.
Why does it matter? I think it matters because educators, and on a different level our models of education, must deal with individual differences. However conceptualized, every teacher must make decisions about a rate of presentation that slows down the rate at which some students could learn and moves too fast for other students. The reality of aptitude as differences in rate of learning is there whether we choose to ignore it or not. Estimates of this variability range from 3:1 to 10:1 (Arlin). I liked to pick 5:1 and propose to future teachers that some of their students would “get it” during class on Monday, while the teacher would have to work on the same concepts for the rest of the week to get most of the students to the same place. What should they do between Monday and Friday?
I would suggest that techniques have been available to provide a solution since the late 1960s. Mastery learning proposes to create settings that address differences in background knowledge by assuring that students progress when ready, not when the calendar says it is time to begin the next unit. My way of describing the goal would be to say that mastery learning tries to reduce the variability in time required to the bare minimum imposed by differences in aptitude, addressing differences in background knowledge and moving ahead at a rate individual students can handle, which reduces their frustration at not being able to meet learning goals.
I see two practical ways to accomplish an approach of this type – tutoring and technology. Tutoring is very effective in meeting individual student needs, but expensive. Technology provides a more cost-effective approach and offers advantages over early implementations of mastery learning in content presentation, evaluation of understanding, and record keeping. Technology can free teachers from having to take total responsibility for these functions and provide more time to function as an individual or small-group tutor. More on some of these ideas in future posts.
Related references:
Arlin, M. (1984). Time variability in mastery learning. American Educational Research Journal, 21(1), 103-120.
Arlin, M. (1984b). Time, equality, and mastery learning. Review of Educational Research, 54(1), 65-86.
Bloom, B. S. (1968). Learning for Mastery. Instruction and Curriculum. Regional Education Laboratory for the Carolinas and Virginia, Topical Papers and Reprints, Number 1. Evaluation comment, 1(2), n2.
Bloom, B. S. (1974). Time and learning. American Psychologist, 29(9), 682-688.
Carroll, J. B. (1963). A model of school learning. Teachers College Record, 64(8), 1-9.
Carroll, J. B. (1989). The Carroll model: A 25-year retrospective and prospective view. Educational Researcher, 18(1), 26-31.
Keller, F. S. (1968). Good-bye, teacher… Journal of Applied Behavior Analysis, 1, 79-89.
Khan, S. (2012). The one world schoolhouse: Education reimagined. Twelve.
Yeager, J. L., & Lindvall, C. M. (1967). An exploratory investigation of selected measures of rate of learning. The Journal of Experimental Education, 36(2), 78-81.
One of my broad personal interests has long been how what might be described as digital tools can facilitate human cognitive tasks. Many of us might accurately be described as knowledge workers. What this work involves varies with occupation and avocation, but, by definition, our work largely depends on cognitive (thinking) behaviors. The cognitive tasks that are employed vary in type and frequency of application across categories of knowledge workers so it follows that the ways in which technological tools might be of value will vary as well. I happen to be an educator.
This was perhaps a long way of setting up my argument, which involves the application of AI. We are at a point of discovery when it comes to how the recently available AI tools will be of value in the many specific niches in which AI may support us. I am past the point of worrying about whether AI tools are a problem for education and the discussions of how potential problems might be countered. That took about a week. Our new reality seems obvious, as AI tools for research and content production now seem to be arriving daily. It seems obvious we should commit to a period of exploration and evaluation (formal and informal). I think this is what OpenAI was hoping for with the release of ChatGPT. What creative uses will emerge, and how well do they seem to work? In what specific aspects of work do these tools, as they exist in their present form, offer functions that can be identified and shared? Worrying about evaluation and control can wait until we have more experience.
I have an example of what I mean by a niche. The example may or may not be something you relate to in your own situation. If it fits, great. If not, consider it an example of the type of problem identification I think is valuable in searching for applications of a new capability.
One exploration
As a writer for a specific community (educators interested in what research might say about practice), I do and describe some of my own research, but I also spend a great amount of time combing the published literature for studies that might be relevant to what I describe for others. As with so many areas of the sea of information that could be examined, what might be relevant to my interests is immense, growing, and quite challenging to explore. It gets more challenging. Research in areas that involve human behavior is very different from, say, research in chemistry. Human behavior seems more messy and complicated, and you encounter bodies of work that seem on the surface to be very similar and yet produce inconsistent and sometimes contradictory results. Those who don’t go deep can easily make pronouncements based on a given study that someone else can challenge based on a different study. It is frustrating to those of us who do this work and, I am certain, even more frustrating to those who consume our work only to find that what we propose can be questioned.
Here is perhaps an easier way to explain at least part of my challenge. On any given topic (e.g., should students take class notes in a notebook by hand or using a laptop or tablet?), there are both studies asking this straightforward question and studies dealing with the underlying mechanisms. Since straightforward comparisons often generate inconsistent results, a careful knowledge worker must review the Method sections of the studies to try to identify differences that might be responsible for the inconsistencies and then try to locate other studies that may bear on the importance of those methodological differences. You find yourself trying to make a simple decision while facing maybe hundreds of documents, each of which may take an hour or so to review carefully.
So let’s start with the task of identifying the group of studies that may be relevant and deciding which of these documents are worth the time invested in reading them. It may seem a small thing, but we are making decisions that could easily impact days of labor.
Stages in useful content identification
My workflow typically follows a three-stage process.
Identify potentially relevant papers
Evaluate papers to determine potential
Read papers
Stage 1
Most folks who have a content location challenge probably now assume a Google search is how to proceed. I tend to work a little differently. I usually begin with a credible source familiar to me. I examine the reference section of this source, but I also use two online services. The reference section identifies studies cited by the author(s) of the paper I have already found valuable. These papers are older, but may identify studies producing conflicting conclusions or complementary studies using somewhat different methodologies, different participant populations, or other potentially informative variations.
Google Scholar extends the value of the existing list of resources I am now aware of in a forward direction. In other words, it lists other later studies that have cited the study I started with. Using Google Scholar I can review the abstracts of these later studies and perhaps find other relevant studies.
Research Rabbit extends the power of Google Scholar in both directions. It moves both backward and forward for multiple “generations”. These data are returned as individual abstracts and citations, but also as a graph showing additional connections. By “seeding” the service with several related papers, it is possible to identify papers that are commonly cited together, which are then proposed as being of greater value. The following image shows an example of a graph (red box) for the paper I am using here as an example.
There are several other tools that offer similar capabilities, but these two typically provide me with plenty of resources to consider.
Stage 2
Now what? I can review the abstracts of papers that seem relevant to prioritize a bit, but I kept hoping something more would be available to get me started.
I wondered what I could learn with ChatGPT, and I have been experimenting. My exploratory strategy has involved asking ChatGPT for information about papers I have written. The idea is that I know the information in the paper and I am trying to determine if a given tool will provide this information. Would I be able to ask questions about other papers and generate answers that are valuable and that I could trust?
The following is an example of this exploration; it is the text generated by my interaction, with my inputs bolded. I can enter the title of a paper and ChatGPT will generate a credible summary. I would not claim this summary offers anything beyond the abstract (it turns out to be quite similar), but ChatGPT obviously can find information about a specific study based only on the title. This alone is at least interesting.
Summarize the journal article Optional student use of online lecture resources: resource preferences, performance and lecture attendance
The article investigates the optional use of online lecture resources by students, including their preferences for different types of resources and the impact on performance and lecture attendance. The results suggest that students have diverse preferences for online resources and using these resources can have a positive effect on performance, but may not necessarily affect lecture attendance.
Provide additional details
The study aimed to understand students’ preferences for online lecture resources, such as videos, slides, and lecture notes, and how these preferences impact their academic performance and lecture attendance. The study found that students have different preferences for different types of online resources, with some students preferring videos while others prefer slides or lecture notes. The results also showed that the use of online resources can have a positive impact on students’ performance, but there was no significant effect on lecture attendance. This indicates that students may use online resources as a supplement to attending lectures, rather than as a replacement. The results of this study provide valuable insights for educators looking to incorporate online resources into their teaching.
What statistics were used to analyze the data from this study
The information on the statistics used in this study is not specified in the description given. To know more about the specific methods and statistics used in this study, it is best to refer to the original article.
How was class attendance determined
The information on how class attendance was determined in the study is not specified in the description given. To know more about the specific methods used to determine class attendance in this study, it is best to refer to the original article.
What were a couple of the preferences for online notes the study identified
The information on specific preferences for online notes identified in the study is not specified in the description given. To know more about the preferences for online notes identified in this study, it is best to refer to the original article.
I am guessing ChatGPT is limited exactly as it suggests. The full published study requires paid access either by a library or individuals and this AI tool’s knowledge base does not include this information.
Elicit
Elicit uses AI to provide deeper insights than I was able to generate with ChatGPT. I doubt this is an advantage of the AI engine it uses; it is most likely the result of the knowledge base it can access and the presets that look for specific things. Beyond the presets, Elicit provides the opportunity to ask custom questions, and I was able to get answers to some, but not all, of the questions I failed to find answers for using ChatGPT. Sometimes I had to rephrase a question several times to generate the answer I knew was there. What I really wanted but could not get was a summary of the Methods section. I don’t need the Results because the results are just math and can be approximated from the Abstract. The key to answering many important questions is the Methods that were applied, as with this information one should be able to identify the factors necessary to interpret the Results and possibly explain differences between studies.
I recommend others with goals similar to my own take a look at this AI tool. The following images offer a peek. Some of what I describe as presets are shown in the red box in the first image. The opportunity to ask specific questions about the study is shown in the second image.
The summary and other information provided by Elicit was more detailed than the summary provided by ChatGPT.
Stage 3
Now it is time to make a few selections and invest the time required to read papers predicted to be most valuable. This ends up being a never-ending process with new questions emerging and reuse of the same tools over and over again.
My point: it is time to explore. I have made a start on exploring AI tools for my own niche interests, and perhaps I have offered some ideas you might use. Beyond that, I think this is the stage we are at, and it would be useful for more individuals to see what AI tools offer for their own personal needs and share what they discover.
I have been reading “From Reopen to Reinvent” because I know the author promotes mastery learning, and I came across a section that talked about the reluctance to consider changing many components of the curriculum. The discussion used Algebra II as an example. The section of the book argued that the math sequence – algebra, geometry, algebra II, and pre-calc – has become possibly the most static part of the curriculum despite the discouraging experience it provides so many students. On one hand, there is the increasing K-12 emphasis on STEM; on the other, the many graduates who claim they seldom have use for any mathematical knowledge or procedures past what they learned in Algebra I.
The book made reference to a Freakonomics podcast (“America’s Math Curriculum Doesn’t Add Up”), which is probably over the top, but very interesting and well worth a listen. The podcast contains some of the stats on the long-term application of math that Horn uses in his book. For example, the Center on Education and the Workforce contends that only 11% of U.S. jobs require knowledge of concepts taught in Algebra II and only 6% require more advanced operations. I suppose the argument “who actually ever uses this information” can be made for many content areas, but it is the combination of the number of math courses expected, the frustration so many students have with this discipline, and the limited number of courses students can take overall that makes this an important issue. BTW – if you do listen to the podcast, take a look at the comment section, which contains counterarguments from math teachers and math advocates. The argument, which I recognize from one other area with many advocates (computational thinking developed through coding), suggests a value for math beyond direct application in a vocation: such courses encourage the development of higher-order thinking skills.
As I have argued in my comments on the “coding for all” position, there are many ways to develop the components of problem-solving in many other disciplines and the extent to which problem-solving as a higher-order skill transfers from one domain to another cannot be assumed. It is far easier to develop problem-solving skills in the domain to which they are to be applied.
Horn makes a suggestion similar to an argument I have made several times in this blog: offer statistics (I would argue it is more valuable to teach a course focused on both statistics and research methods) as a math-oriented course teaching what I call data literacy. The use of data, and the importance of interpreting and critiquing arguments made with data, are so common that familiarity with this topic would seem to be of greater value to more citizens, and in more areas, than advanced algebra concepts. Horn says the percentage of K-12 students with some exposure to statistics is less than 20%.
So, I increasingly see several of the math courses taught in K-12 in much the same way I think about computer science courses in high school. These should be widely available as electives, as both areas are essential for some students. Data literacy should be a topic widely integrated into multiple areas, and statistics (and research methodology) should be as widely available as Algebra II.
Like so many educators, I have been reading about and exploring ChatGPT. I recently read an article in the NYTimes that, among other arguments, made the point that all educators have a responsibility to prepare students for their future, and AI will be an important factor in that future. While I had been thinking about the role of educators and how they would deal with traditional issues such as student evaluation, I had not considered the broader goal of helping rather than preventing students from using AI.
I have not spent a great deal of time thinking through this new perspective, but it does seem important. Help students make use of ChatGPT rather than worrying about preventing the use of this tool.
For example, in their immediate circumstances, students might use ChatGPT to help themselves study. Note – I was originally trained as a biologist and to teach high school biology so I still often use that content area in my examples.
write 10 multiple choice questions about mitosis and meiosis and add an answer key at the end
1. What is the process of cell division that results in the formation of two genetically identical daughter cells?
a) Mitosis
b) Meiosis
c) Binary fission
d) Asexual reproduction
2. Which stage of mitosis do the chromosomes line up in the middle of the cell?
a) Prophase
b) Metaphase
c) Anaphase
d) Telophase
3. In which stage of mitosis do the sister chromatids separate?
a) Prophase
b) Metaphase
c) Anaphase
d) Telophase
4. How many cells are produced during mitosis?
a) 1
b) 2
c) 4
d) 8
5. In which stage of meiosis do homologous chromosomes pair up?
a) Prophase I
b) Metaphase I
c) Anaphase I
d) Telophase I
6. In which stage of meiosis do the cells divide?
a) Prophase I
b) Metaphase I
c) Anaphase I
d) Telophase I
7. How many cells are produced during meiosis?
a) 1
b) 2
c) 4
d) 8
8. Which of the following is a characteristic of meiosis?
a) The formation of genetically identical cells
b) The formation of genetically diverse cells
c) The formation of cells with half the number of chromosomes as the original cell
d) The formation of cells with twice the number of chromosomes as the original cell
9. What is the process called when homologous chromosomes exchange segments of DNA?
a) Mitosis
b) Meiosis
c) Crossing over
d) Chromosome segregation
10. What is the purpose of meiosis?
a) The formation of new cells
b) The formation of genetically identical cells
c) The formation of genetically diverse cells
d) The formation of haploid cells for reproduction.
Answer key:
1. a) Mitosis
2. b) Metaphase
3. c) Anaphase
4. b) 2
5. a) Prophase I
6. b) Metaphase I
7. c) 4
8. b) The formation of genetically diverse cells
9. c) Crossing over
10. d) The formation of haploid cells for reproduction
Or, maybe I am getting ahead of myself here. Maybe the first step would be to generate a quick review.
Explain mitosis and meiosis to a 9th grade student
Mitosis is the process of cell division that results in the formation of two genetically identical daughter cells. During mitosis, the cell duplicates its chromosomes, which are the structures that carry the cell’s genetic information. The duplicated chromosomes are called sister chromatids, and they are held together by a structure called a centromere. After the chromosomes are duplicated, the cell goes through several stages of mitosis: prophase, metaphase, anaphase, and telophase. In prophase, the chromosomes condense and become visible under a microscope. In metaphase, the chromosomes line up in the middle of the cell. In anaphase, the sister chromatids separate and are pulled to opposite ends of the cell. In telophase, the cell divides and two identical daughter cells are formed.
Meiosis is a type of cell division that occurs in the reproductive cells of organisms, such as sperm and eggs. During meiosis, the cell goes through two rounds of division, resulting in the formation of four genetically diverse daughter cells. In the first round of meiosis, called meiosis I, the cell duplicates its chromosomes, just like in mitosis. However, during meiosis I, the homologous chromosomes, which are pairs of chromosomes that carry the same type of genetic information, pair up and exchange segments of DNA. This process is called crossing over and it results in the formation of genetically diverse chromosomes. After crossing over, the cell goes through the stages of meiosis I: prophase I, metaphase I, anaphase I, and telophase I. In meiosis II, the cell divides again, resulting in the formation of four genetically diverse daughter cells.
So, to sum up Mitosis is the process of cell division that results in the formation of two genetically identical daughter cells, while Meiosis is a type of cell division that occurs in the reproductive cells of organisms, such as sperm and eggs and results in the formation of four genetically diverse daughter cells.
—
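Teachers or students who find prompts like these useful might eventually want to script them rather than retype them for every topic. What follows is a minimal sketch, assuming the openai Python package (v1-style client) and an API key stored in the OPENAI_API_KEY environment variable; the model name and the exact prompt wording are illustrative placeholders, not recommendations.

```python
# Minimal sketch: ask a chat model to draft a review quiz for a topic.
# Assumes the openai Python package (v1-style client) and an API key in
# the OPENAI_API_KEY environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_quiz(topic: str, n_questions: int = 10) -> str:
    """Return AI-drafted multiple choice questions (with answer key) as text."""
    prompt = (
        f"Write {n_questions} multiple choice questions about {topic} "
        "for a 9th grade biology student and add an answer key at the end."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any available chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(draft_quiz("mitosis and meiosis"))
```

The same pattern could be pointed at whatever unit a student wants to review; only the topic string changes.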
My personal interests are in instructional design and student study behavior, but I think creating suggestions for preparing students to use AI is an important perspective for curriculum specialists to take. Here is an Edutopia article with some ideas and resources for introducing AI concepts to students.
Data literacy is one of the categories of posts I continue to develop. My initial exposure to data literacy came from a journal for school librarians: “Data literacy refers to the ability to understand, generate, and use data.” As schools continue to encounter recommendations for the development of skills students need to deal with the realities of the future (e.g., computational thinking), the case can be made that data literacy is as important as any other. The closest I can come to understanding what this means is based on my repeated exposure to statistics. Statistics was an important part of my training as a psychologist, but working with data is common across so many areas, and now, with “big data,” the importance of thinking with data has only grown.
One of the interesting things about data is that it is so common and part of so many of the issues we encounter. The variety of situations in which understanding and using data matters offers both a challenge and an opportunity. The category of posts I have generated on data literacy offers what I hope are opportunities to encounter this variety. Here is another example that recently emerged from an unusual source.
A few months ago my wife finally received the complimentary sample of an IndieGoGo project she had invested in. The product was Bird Buddy. I like to describe it as a bird feeder with an embedded motion-activated digital camera. In other words, it takes photos of birds that visit the feeder and sends them to your phone. Pretty cool.
My initial experiences with the Bird Buddy reminded me of a data literacy project I had imagined several years ago. I do feed birds and I do take pictures. I had a different setup for taking pictures of the birds that came to my feeders, and I proposed that science projects be based on data generated from such setups. The idea was not original; it related to citizen science projects I was aware of that originated from the ornithology program at Cornell University.
The Bird Buddy people seem to have had similar ideas and have taken advantage of their technology for capturing images and sharing those images digitally. My description of their process left out a few details. The company uses AI to identify the birds photographed and returns the proposed identification to users. I say proposed because image identification is tricky; when uncertainty is clearly present, the system proposes an identification and asks what the feeder owner thinks.
The key to the potential for data literacy projects here is that the data from thousands of these feeders get routed through the company to take advantage of AI identification, and the company has begun aggregating these data in a project it calls the Heartbeat Project. Try this link. It shows confirmed identifications of birds on a map as submitted from all over the world. It is fun just to watch.
The company has started to do some simple analyses of these data, and the first venture involved graphing the time of day with the greatest frequency of feeder visits. Simple, but you can begin to see the potential here. The variables are endless – region of the world (down to a few yards), type of bird, time of year, time of day, and so on. The company promises to share data with interested parties every few months.
I am hoping the company takes advantage of their data in the near future to offer opportunities to classrooms. It would be a great authentic data literacy experience to share feeder data with other schools and to propose questions the data might answer.
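To suggest what such a classroom experience might look like, here is a minimal sketch of the visits-by-hour analysis mentioned above, assuming a hypothetical CSV export (feeder_visits.csv) with timestamp and species columns; the actual format of any data Bird Buddy shares would certainly differ.

```python
# Minimal sketch: count feeder visits by hour of day from a hypothetical
# CSV export with "timestamp" and "species" columns. The file name and
# column names are assumptions, not the actual Bird Buddy data format.
import pandas as pd
import matplotlib.pyplot as plt

visits = pd.read_csv("feeder_visits.csv", parse_dates=["timestamp"])
visits["hour"] = visits["timestamp"].dt.hour

# Visits per hour of day, across all species
by_hour = visits.groupby("hour").size()

# Which species show up most often at this feeder (or set of feeders)?
top_species = visits["species"].value_counts().head(5)
print(top_species)

by_hour.plot(kind="bar", xlabel="Hour of day", ylabel="Feeder visits",
             title="When do the birds visit?")
plt.tight_layout()
plt.show()
```

Students comparing plots like this across schools, regions, or seasons would be asking exactly the kinds of questions I have in mind when I talk about authentic data literacy experiences.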
I encourage educators who see the potential here to keep an eye on this company and its product.