Direct Instruction vs. Discovery

In June, the Monitor on Psychology carried a summary of a research study contrasting discovery learning with direct instruction. The summary (the original is not freely available online) describes a study by Klahr, Chen and Fey in which 3rd and 4th graders were either directly instructed in how to form meaningful hypotheses about how the steepness and length of a ramp influence how far a ball rolls after leaving the ramp, or were allowed to explore the ramp on their own. The children learned more and were more likely to transfer their understanding when directly instructed.

My concern is that educators will confuse pure “discovery learning” with student-centered learning. Way back when, Ausubel differentiated direct instruction from discovery learning and noted that either could result in meaningful or rote learning. I would guess Ausubel would label a technique in which “teachers did not intervene beyond suggesting a learning objective” as rote discovery.

In the analysis of this study, one critic commented, “I would like to see a replication with guided discovery.” So would I.


Thinking about NCLB – again

I have to admit I have been ignoring NCLB – perhaps hoping it would go away. I guess now I must accept that this will not be the case. We do need to pay attention to how well each student is achieving and be concerned when progress is lacking. It is the “accountability” thing that bothers me. While I am a researcher, I also live in the world of real schools, real teachers, real neighborhoods, real tests and real instructional resources. I also understand that there is money to be made and jobs to be protected. I am not optimistic that the “problem” of poor achievement is simple, or that some of the proposed consequences (allowing students to transfer out of poorly performing schools, which reduces those schools’ resources and probably removes some of their more motivated students) are meaningful solutions. Simple solutions and implied blame work well in the political arena, but such practices are more useful for generating votes than for improving achievement.

So here is the deal. The Department of Ed has commissioned some “quality” research that will evaluate the potential benefits of a set of carefully selected reading and math software products on standard measures of achievement. Companies were encouraged to propose the use of products in these areas and to provide evidence of prior evaluation. The “winners” are listed on the Ed.gov web site. The focus will be on low-income schools, and the studies will be conducted in schools that have not used this software but are interested. The research will be based on a control/treatment design (I would guess this means that classes will be assigned at random to the no-technology and technology conditions). The research will be considered “successful” if the treatment generates an effect size of .35 or greater.
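The summary does not say which effect size statistic will be used; assuming it is a standardized mean difference such as Cohen’s d, here is a minimal sketch, with hypothetical class scores, of how the .35 criterion might be checked:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: (mean_t - mean_c) / pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)  # sample variance, n - 1 denominator
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical achievement scores: a class using the software vs. a class without it
treatment = [72, 80, 77, 85, 74, 79, 83, 76]
control = [70, 75, 73, 78, 69, 74, 77, 71]

d = cohens_d(treatment, control)
print(f"Effect size d = {d:.2f}; meets the .35 criterion: {d >= 0.35}")
```

Note that with whole classes assigned to conditions, a careful analysis would also have to account for the clustering of students within classes; the sketch above ignores that complication.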

Don’t get me wrong – I think this will be interesting. It may establish that some companies have created bodies of instructional materials that are effective. This will at least be a starting place.

Will this type of research tell us what software schools should purchase? I assume any company that has a product on this list and generates an effect size greater than .35 may think so. But it will not be possible to determine why any given product is effective, and hence it will not be possible to determine whether similar products might be useful or how existing products might be improved.
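To put the .35 criterion in perspective: assuming roughly normal score distributions, an effect size of .35 means the average student in a technology class would score above about 64% of the students in a comparable no-technology class. A quick check:

```python
from statistics import NormalDist

# Under a normal model, a standardized effect of size d places the average
# treatment student at the Phi(d) percentile of the control distribution.
d = 0.35
percentile = NormalDist().cdf(d)
print(f"d = {d}: average treatment student beats {percentile:.0%} of controls")
```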
