Why is tutoring effective?

We know that tutoring is one of the most successful educational interventions, with meta-analyses placing the advantage between .3 and 2.3 standard deviations. In some ways, the explanation of this advantage seems obvious: tutoring provides personal attention that cannot be matched in a classroom. The challenges in applying tutoring more generally are the cost and availability of personnel. One of my immediate interests in the AI tools that are now available is in exploring how students might use these tools as tutors. This is different from the long-term interest of others in intelligent tutoring systems designed to personalize learning. The advantage of the new AI tools is that they are not designed to support specific lessons and can be applied as needed. I assume AI large language chatbots and intelligent tutoring systems will eventually merge, but I am interested in what students and educators can explore now.

My initial proposal for the new AI tools was to take what I knew about effective study behavior and about the capabilities of AI chatbots and suggest some specific things a student might do with AI tools to make studying more productive and efficient. Some of my ideas were demonstrated in an earlier post, and I would still suggest interested students try some of these suggestions. However, I wondered if an effort to understand what good tutors do could offer additional suggestions and move beyond what I had proposed based on what is known about effective study strategies. Tutors seem to function differently from study buddies. I assumed there must be a research literature based on studies of effective tutors and on what these individuals do that less effective tutors do not. Perhaps I could identify some specifics a learner could coax from an AI chatbot.

My exploration turned out to be another example of finding that what seems likely is not always the case. There have been many studies of tutor competence (see Chi et al., 2001), and these studies have not revealed simple recommendations for success. Factors such as tutor training or the age difference between tutor and learner do not seem to matter a great deal; neither the advice typically offered to tutors nor whatever might be assumed to be gained from experience appears to offer much.

Chi and colleagues proposed that efforts to examine what might constitute skilled tutoring begin with a model of tutoring interactions they call a tutoring frame. The steps in a tutoring session were intended to isolate different actions that might make a difference depending on the proficiency with which the actions are implemented.

Steps in the tutoring frame:

(1) Tutor asks an initiating question

(2) Learner provides a preliminary answer 

(3) Tutor gives confirmatory or negative feedback on the answer

(4) Tutor scaffolds to improve or elaborate the learner’s answer in a successive series of exchanges (taking 5–10 turns)  

(5) Tutor gauges the learner’s understanding of the answer
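The five steps above can be sketched as a simple interaction loop. This is only an illustrative sketch; the function names, canned feedback strings, and scripted learner are my own hypothetical placeholders, not anything taken from Chi and colleagues.

```python
# Minimal sketch of the five-step tutoring frame as an interaction loop.
# All names and canned responses are hypothetical, for illustration only.

def tutoring_frame(initiating_question, learner, max_scaffold_turns=5):
    """Run one pass through the frame; `learner` maps a prompt to a reply."""
    transcript = []

    # (1) Tutor asks an initiating question
    transcript.append(("tutor", initiating_question))

    # (2) Learner provides a preliminary answer
    transcript.append(("learner", learner(initiating_question)))

    # (3) Tutor gives confirmatory or negative feedback
    transcript.append(("tutor", "Partly right; let's build on that."))

    # (4) Scaffolding exchanges (real sessions take roughly 5-10 turns)
    for turn in range(max_scaffold_turns):
        hint = f"Hint {turn + 1}: consider a smaller piece of the problem."
        transcript.append(("tutor", hint))
        transcript.append(("learner", learner(hint)))

    # (5) Tutor gauges the learner's understanding
    transcript.append(("tutor", "Can you restate the answer in your own words?"))
    transcript.append(("learner", learner("restate")))
    return transcript

# Example: a trivially scripted learner stands in for a real student
log = tutoring_frame("Why doesn't a heavier object fall faster?",
                     lambda prompt: f"My thought about: {prompt}")
```

The point of the sketch is structural: steps 1-3 are a single exchange, while step 4 is a loop whose contents are where the research interest lies.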

One way to look at this frame is to compare what a tutor provides with what happens in a regular classroom. While steps 1-3 occur in regular classrooms, tutors typically apply these steps with much greater frequency. There are approaches classroom teachers could use to provide these experiences more frequently and effectively (e.g., ask questions and pause before calling on a student, use student response systems that allow all students to respond), but whether classroom teachers bother is a different issue from whether effective tutors differ from less effective tutors in making use of questions. The greatest interest for researchers seems to be in step 4. What variability exists during this step, and do identifiable categories of tutor actions differ significantly in their impact on learning?

Step 4 involves a back-and-forth between the learner and tutor that goes beyond the tutor declaring the learner's initial response correct or incorrect. Either tutor or learner might take the lead during this step. When the tutor controls what unfolds, the sequence might be described as scaffolded or guided. The tutor might break the task into smaller parts, complete some of the parts for the student (demonstrate), direct the student to attempt a related task, remind the student of something they might not have considered, etc. After any of these actions, the student responds in some way.

A common research approach might evaluate student understanding before tutoring, identify strategy frequencies and sequence patterns during a tutoring session, evaluate student understanding after tutoring, and see if relationships can be identified between the strategy variables and the amount learned.
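The analysis described above can be sketched in a few lines: score each session before and after, count strategy events during the session, and correlate the counts with learning gains. The session data below are invented placeholders purely to show the shape of the analysis, not results from any study.

```python
# Sketch of the common research design: relate strategy frequencies
# observed during tutoring to pre/post learning gains.
# The session data are fabricated placeholders for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# One row per tutoring session: (pre-test, post-test, scaffolding moves coded)
sessions = [(40, 55, 12), (50, 58, 6), (35, 60, 15), (45, 50, 4)]
gains = [post - pre for pre, post, _ in sessions]
scaffold_counts = [moves for _, _, moves in sessions]

# Does the frequency of a coded strategy relate to the amount learned?
r = pearson(scaffold_counts, gains)
```

A real study would code many strategy categories and sequence patterns, not one count, but the logic of relating coded frequencies to gains is the same.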

As I looked at research of this type, I happened across a study that applied new AI not to implement tutoring, but to search for patterns within tutor/learner interactions (Lin et al., 2022). The researchers first trained an AI model by feeding it examples of different categories identified within tutoring sessions and then attempted to see what could be discovered about the relationship of categories within new sessions. While potentially a useful methodology, the approach was not adequate to account for differences in student achievement. A one-sentence summary from that study follows:

More importantly, we demonstrated that the actions taken by students and tutors during a tutorial process could not adequately predict student performance and should be considered together with other relevant factors (e.g., the informativeness of the utterances).

Chi and colleagues (2001)

Chi and colleagues offer an interesting observation they sought to investigate. They proposed that researchers might be assuming that the success of tutoring is somehow based on differences in the actions of the tutors and look for explanations in narratives based on this assumption. This would make some sense if the intent was to train or select tutors. 

However, they propose that other perspectives should be examined and suggest the effectiveness of tutoring experiences is largely determined by some combination of the following:

  1. the ability of the tutor to choose ideal strategies for specific situations (tutor-centered),
  2. the degree to which the learner engages in generative cognitive activities during tutoring, in contrast to the more passive, receptive activities of the classroom (learner-centered), and
  3. the joint efforts of the tutor and learner (interactive).

In differentiating these categories, the researchers proposed that, under the learner-centered and interactive labels, the tutor has enabled an effective learning environment to the extent that the learner asks questions, summarizes, explains, and answers questions (learner-centered) or is encouraged to interact by speculating, exploring, and continuing to generate ideas (interactive).

These researchers attempted to test this three-component model in two experiments. In the first, the verbalizations of tutoring sessions were coded for these three categories and related to learning gains. In the second experiment, the researchers asked tutors to minimize tutor-centered activities (giving explanations, providing feedback, adding additional information) and instead to invite more dialog: what is going on here, can you explain this in your own words, do you have any other ideas, can you connect this with anything else you read, etc. The idea was to compare learning gains with the tutoring sessions from the first study, in which the tutor took a more direct role in instruction.

In the first experiment, the researchers found evidence for the impact of all three categories, but the learner-centered and interactive codes were especially beneficial for performance outcomes that relied on deeper learning. The second experiment found equal or greater benefits for learner-centered and interactive events when tutor-centered events were minimized.

The researchers argued that tutoring research focused on what tutors do may have found little regarding what tutors should or should not do because the focus should instead be on what learners do during tutoring sessions. Again, tutoring is portrayed as a follow-up to classroom experiences, so the effectiveness of experiences during tutoring sessions should be interpreted in light of what else is needed in this situation.

A couple of related comments. Other studies have reached similar conclusions. For example, Lepper and Woolverton (2002) concluded that tutors are most successful when they “draw as much as possible from the students” rather than focus on explaining. The advocacy of these researchers for a “Socratic approach” is very similar to what Chi labeled as interactive. 

One of my earlier posts on generative learning offered examples of generative activities and proposed a hierarchy of effectiveness among these activities. At the top of this hierarchy were activities involving interaction.  

Using an AI chatbot as a tutor:

After my effort to read a small portion of the research on effective tutors, I am more enthusiastic about applying readily available AI tools to the content to be learned. My earlier post, which I presented more as a way to study with such tools, could also be read as a way for a learner to take greater control of a learner/AI tutor session. In the examples I provided, I showed how the AI agent could be asked to summarize, explain at a different level, and quiz the learner over the content being studied. Are such inputs possibly more effective when a learner asks for them? There is a danger that a learner does not recognize which topics require attention, but an AI agent can be asked questions with or without designating a focus. In addition, the learner can explain a concept and ask whether his/her understanding is accurate. AI chats focused on designated content offer students a responsive rather than a controlling tutor. Whether or not AI tutors are a reasonable use of learner time, studies such as Chi et al. (2001) and Lepper and Woolverton (2002) suggest that more explanations may not be what students need most. Learners need opportunities that encourage their thinking.
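One way to operationalize a learner-driven session is to keep a small set of prompt templates that put the learner, not the tutor, in control of each exchange. The templates below are my own hypothetical wording, loosely mirroring the learner-centered and interactive activities described above; none of these strings come from the studies cited.

```python
# Hypothetical prompt templates a learner might send to a general-purpose
# AI chatbot to keep the session learner-driven rather than tutor-driven.
# The wording is illustrative only, not drawn from any cited study.

def tutor_prompt(kind, topic, learner_text=""):
    """Return a learner-initiated prompt of the given kind about a topic."""
    templates = {
        # Learner-centered: summarizing, explaining, answering questions
        "summarize": f"Summarize the key ideas of {topic} in plain language.",
        "quiz": (f"Ask me one question at a time about {topic}; "
                 "wait for my answer before continuing."),
        "check": (f"Here is my explanation of {topic}: {learner_text} "
                  "Point out anything inaccurate or missing."),
        # Interactive: speculating, exploring, generating connections
        "connect": f"Ask me how {topic} relates to something else I have studied.",
    }
    return templates[kind]

# Example: the learner explains a concept and asks for a check
prompt = tutor_prompt("check", "scaffolding",
                      "Scaffolding means breaking a task into parts.")
```

Note the design choice: every template either asks the chatbot to elicit something from the learner or to respond to something the learner produced, which is the responsive (rather than controlling) role the research suggests matters.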

References

Chi, M. T., Siler, S. A., Jeong, H., Yamauchi, T., & Hausmann, R. G. (2001). Learning from human tutoring. Cognitive Science, 25(4), 471-533.

Fiorella, L., & Mayer, R. (2016). Eight Ways to Promote Generative Learning. Educational Psychology Review, 28(4), 717-741.

Lepper, M. R., & Woolverton, M. (2002). The wisdom of practice: Lessons learned from the study of highly effective tutors. In Improving academic achievement (pp. 135-158). Academic Press.

Lin, J., Singh, S., Sha, L., Tan, W., Lang, D., Gašević, D., & Chen, G. (2022). Is it a good move? Mining effective tutoring strategies from human-to-human tutorial dialogues. Future Generation Computer Systems, 127, 194-207.
