Allowing teachers to think critically about PBL

This is a follow-up to my last post and a reaction to an Edutopia annotated bibliography of PBL research.

I am aware of the calls for educational reform and the proposals that problem-based or project-based activities are a way to address this need for reform. Depending on the sources you follow, you may get the idea that educational leaders and educators who do not implement PBL methods immediately are resisting new and more productive approaches that help students learn. I am guessing that those who continue to rely on more traditional methods are aware of the advocacy for what have sometimes been described as “learner-centered”, “inquiry-based”, or “discovery” methods and, as a consequence, wonder about their own behavior.

In reaction to this information environment, I have considered what my role should be. My wife and I have written a textbook used in the preparation of future teachers and in the further development of teachers returning for graduate work. Here is my thinking on my role. My job is to identify key issues in the field and offer the best information available related to these issues. I do believe a textbook should be more than a “how to do it” manual. When a situation is complex, the information made available should often include the controversies and the evidence supporting each side of a given difference of opinion.

I believe my job is to encourage reflection on practice alternatives. You cannot encourage critical thinking if you knowingly leave out credible alternative positions; to do so would be akin to propaganda. A partial description leaves the decision maker in the dark. The goal of a textbook, when valid controversies exist, is not to sell one side of the issue but to help the learner come to a reasoned understanding.

My issue with PBL is that reviews of the research completed by some of the most established educational researchers (some examples appear at the end of this post) have found direct instruction to be a more productive method. These conclusions are based on what the researchers regard as quality studies. It should be noted that a summary of the research does not imply that a given method is always found to be superior. What I find objectionable about the Edutopia bibliography is that there is no hint that this is an area of disagreement. None of the research summaries I mention are included.

There are examples of quality research that demonstrate the potential of PBL (see the references for Kuhn appearing below), and there are detailed analyses of the implementation and affective issues that must be considered for PBL methods to be effective (Belland et al., 2013; Hung, 2011). To borrow the title of a book on a completely unrelated topic, “it’s complicated” (apologies to danah boyd), and to imply otherwise is simply misleading.

Belland, B. R., Kim, C., & Hannafin, M. J. (2013). A framework for designing scaffolds that improve motivation and cognition. Educational Psychologist, 48(4), 243–270.

Capon, N., & Kuhn, D. (2004). What’s so good about problem-based learning? Cognition and Instruction, 22(1), 61–79.

Hung, W. (2011). Theory to reality: A few issues in implementing problem-based learning. Educational Technology Research and Development, 59(4), 529.

Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.

http://learningaloud.com/grabe6/Chapter8/ch8_kuhnprojects.html

Lesgold, A. (2001). The nature and methods of learning by doing. American Psychologist, 56(11), 964–971.

Mayer, R. (2004). Should there be a three-strikes rule against pure discovery? The case for guided methods of instruction. American Psychologist, 59(1), 14–19.

Wirkala. C. & Kuhn, D. (2011). Problem-Based Learning in K–12 Education: Is it Effective and How Does it Achieve its Effects?, American Educational Research Journal, 48, 1157–1186

PBL Challenge

It seems we point to the findings of science selectively. Most folks I know profess amazement that some ignore the overwhelming scientific evidence on climate change or evolution. They are concerned when the “scientific perspective” is not the basis for what students experience if these topics are considered.

Why then is the assumption that our best evidence should guide practice not applied when selecting learning activities? You will have to trust me on this (unless someone really wants to review my reference list), but direct instruction consistently results in better academic performance than project-based learning, problem-based learning, discovery learning, etc. How do PBL advocates rationalize this reality? I seriously want to know, because I find the PBL philosophy appealing as well. I just personally struggle with ignoring what research findings suggest.

I try arguing with myself, seeking answers. I know many of the proposals: direct instruction works when the dependent variable is simple, factual knowledge; direct instruction turns learners off because it is boring; direct instruction results in learning that fails when it comes to application or flexibility. However, whatever the counter-argument, the position is only a hypothesis until tested. Show me the data (or the money if you prefer). I am waiting to be convinced.

I am aware of what I consider successful PBL research. Success is possible. Here is what I think until shown otherwise. I am guessing that successful PBL takes far more skill to implement with classroom groups than direct instruction, and that most PBL attempts probably do not meet an acceptable standard. I know this sounds harsh, but what is the goal here? In general, I think many students are simply lost or overwhelmed when self-directed. I do not think a substantial proportion of students are any more motivated by many PBL tasks. The outcome data simply do not support the argument that common implementations of PBL are as productive as more traditional methods.

So, at this point in my career, I no longer have the opportunity to conduct research studies. I do have great interest in this topic and continue to search the journals for interesting studies. Learning experiences should not be promoted on the strength of talk or novelty.

Evaluating Khan

The government is spending nearly $3 million to evaluate the “efficacy” of Khan Academy math tutorials. The research will be conducted by WestEd and will follow the randomized-control methods proposed as the best way to eliminate the confounds present in so much applied research. Careful research methods require this level of funding.

This evaluation effort seemed familiar, and a search of my blog confirmed a similar evaluation of math software that was part of the original NCLB initiative.

Would more tech at home improve the academic performance of low SES students?

If you follow edtech blogs or Twitter feeds, you likely encountered the description of a recently released study concluding that providing computers to (mostly) low-income middle school students did nothing to improve their academic performance. I first encountered the description of this study on TechCrunch. The study has not been published but has been released as a working paper. I located the paper by way of a search for the authors (the TechCrunch link points you to a site that wants to charge you for the paper).

Since the study gave the TechCrunch writer no basis for arguing that placing computers in the homes of low-income families improved academic performance, the author concluded:

This means that the likely culprit is far more insidious: the family and environment. I taught at-risk youth for years and saw first-hand how parents who didn’t prioritize college paralyzed their eager children. In my home, it was expected that I go to graduate school before I even knew what it was

This conclusion (possibly accurate) reminds me of a study conducted by James Coleman back in the 1960s.

Let me provide a little more detail about the study (read it yourself if you want) and encourage you to come to your own conclusions. This is the type of assignment I like to require of graduate students. I want them to make decisions based on research, but I also want them to consider research with a critical eye.

Fairlie and Robinson secured a large number of refurbished Windows computers and made them available to children who had no access to computers at home. They did this in a way that would eliminate concerns often raised with nonmanipulative research: they first identified children without home access and then gave the computers to half of them at random (everyone received comparable equipment by the end of the study). This was pretty much it. There were no instructions for parents. I am unsure regarding Internet access, but the computers had an Ethernet card and a modem.

The study found no treatment-related advantage in grades or standardized test scores. In comparison to the control group, the experimental group spent .8 hours more per week using computers for school work, .8 hours more for games, and .6 hours more for social networking. This confused me for a bit, but time at school and at the homes of friends counted, so all students did have some access. There was no significant difference in time spent on homework.
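For readers less familiar with this kind of design, here is a minimal sketch of the logic of the treatment/control comparison in a randomized study like this one. It is written in Python with simulated placeholder numbers and hypothetical variable names; it is not the authors’ actual analysis or data.

```python
# Sketch of the basic comparison in a randomized home-computer study.
# All values below are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcome (e.g., GPA) for students randomly assigned a home
# computer (treatment) versus those who were not (control).
treatment_gpa = rng.normal(loc=2.8, scale=0.6, size=500)
control_gpa = rng.normal(loc=2.8, scale=0.6, size=500)

# Because assignment was random, the groups should differ only by chance
# at baseline, so a simple two-sample comparison estimates the effect.
t_stat, p_value = stats.ttest_ind(treatment_gpa, control_gpa)
effect = treatment_gpa.mean() - control_gpa.mean()

print(f"Estimated effect on GPA: {effect:.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The appeal of this design is exactly what the authors were after: random assignment removes the usual objection that families who already own computers differ in many other ways from families who do not.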

So, do you reach the same conclusion as the author of the TechCrunch article? I recall a comment (which I attribute to Larry Cuban, but I am not certain) that tech advocates do not admit that what they propose does little good; instead, they assume that the resources were not sufficient, that something was done wrong, and so on.

What arguments can you generate that do not fit with this criticism? Would providing some training or suggestions for parents be a good thing to do or would such a suggestion not be a difference associated with SES?

Here is a criticism that occurred to me. I would predict that teachers did nothing to take advantage of the technology made available to students. How could they? Not all students could be assumed to have computers. Hence, little could change with the type of assignments teachers could offer.

The “Don’t Learn what you can Google” fallacy

I often have a particular frustration when listening to politicians and pundits. The frustration is basically that pronouncements offer no opportunity for give and take. I cannot tell what the person really meant, and I am more concerned with how the remark may be interpreted by others. There is a certain ambiguity in simplicity that can often validate wrong-headed positions. Pundits often oversimplify.

I have been reading a new Kindle book by a popular blogger. This writer believes K-12 education should be reformed and technology should play a major role in the new version.

One of the specific concerns in this short book is the focus on fact learning and, I think (but I am not certain), the focus on fact learning in assessment. The author contends that we should “Stop asking questions on tests that can be answered by a Google search.” This proclamation is followed by a specific example of a question from the New York Regents exam that the author found particularly annoying. The question concerned “Which geographic feature impacted the development of the Gupta Empire?” OK, I did not know either, but I have not studied history for some time.

Here is my concern. What specifically should educators conclude from such arguments?
a) Fact learning no longer serves a meaningful purpose and should not be emphasized in instruction?
b) Fact learning may serve a purpose, but skills that build on fact learning should be the focus of evaluation?

I do not know if the author took a clear position. There are too many issues here. There is the role of fact learning. There is the focus of student evaluation. There is the concern that the outcomes of student evaluations are leading to destructive behaviors regarding how students are taught and how teachers are evaluated. Where in this chain of concerns do we see the problem, and which are legitimate concerns?

One should not leap to conclusions until the issue at stake is made clear. However, I feel the need to say:
1) Fact knowledge is essential. Clearly there is nothing wrong with searching for information when we lack factual or conceptual knowledge. Knowing how to answer our individual questions is extremely important and searching the Internet is a practical way to acquire information we lack. However, the value of search does not eliminate the value of existing knowledge to learning and understanding. I regard this position as “good science”. If you are interested, I would suggest the recent efforts of Daniel Willingham to dispute popularized claims that fact knowledge is not valuable (his book is great, but here is a quick summary).
2) Tests rely on sampling. It is impractical to evaluate every possible thing a learner might know or be able to do. Items should be representative of the skills and knowledge we want students to have. I am not certain that one should read too much into a given item (such as the item regarding the Gupta Empire). We did not focus on India much years ago, but would your reaction to understanding the connection between a geographic feature and the development of an empire have been different if the question had been “What role did the Cumberland Gap play in the expansion beyond the original colonies?”

Arguing that the focus of tests should extend beyond factual knowledge, that test preparation has received too much emphasis and diminished the time spent on instruction and learning, and that the results of examinations have been used to judge rather than inform instruction are positions I strongly endorse. These are policy issues. The relationship between existing knowledge and learning is not about policy; it is about how human cognition works.

If you want to google something, try “existing knowledge and learning”.

 

Why proven ideas are not used

I don’t focus on the research literature on this site, but I want to make an exception. I encourage those of you interested in educational research to make the effort to read the article by Rohrer & Pashler in the June/July issue of Educational Researcher (2010, 39(5), 406–412).

The article argues that researchers have made a number of concrete suggestions that would improve student learning and that these suggestions are often ignored. The three examples offered are learning through testing (increased retrieval practice), spacing of practice, and interleaving. These examples were selected because the basis for each suggestion is quite solid AND because the suggestions are about studying differently rather than studying more.
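To make one of these ideas concrete, here is a minimal sketch, in Python, of the difference between blocked practice and interleaved practice. The topics, problem counts, and code are my own hypothetical illustration, not material from the Rohrer & Pashler article.

```python
# Hypothetical sketch contrasting blocked practice (all problems from one
# topic before moving on) with interleaved practice (topics mixed together).
import random

topics = ["fractions", "decimals", "percentages"]
problems_per_topic = 4

# Blocked schedule: A A A A B B B B C C C C
blocked = [topic for topic in topics for _ in range(problems_per_topic)]

# Interleaved schedule: the same problems, shuffled so consecutive items
# usually come from different topics.
interleaved = blocked[:]
random.seed(0)
random.shuffle(interleaved)

print("Blocked:    ", blocked)
print("Interleaved:", interleaved)
```

The point of the contrast is simply that the same set of problems can be sequenced very differently; interleaving mixes the topics so each item forces the student to re-select the relevant approach rather than repeat the one just used.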

Perhaps the most interesting part of the review was a speculative addition at the end, which asked, “Why are inferior strategies so popular?” It is this question that I think should receive wider consideration, because it may extend to many current topics in education. The authors pose the question in an interesting fashion: since some of these suggestions could have developed as common practice simply as a function of student trial and error, why do students typically spend their time on inferior strategies? Remember, these proposals are about a different way of doing things and not about the expectation that students spend more time. The authors suggest that the recommended strategies tend to produce a higher error rate during study, which may be more discouraging for students. It appears students prefer passive strategies because there is less challenge to their illusion of understanding. I wonder. Perhaps students do understand that a technique is less effective but still persist because it is what they know and it is easier.


Missing data

I read a recent post from The Blue Skunk Blog that is evidently a repost. I actually think I remember the original post and believe I responded to that post as well.

Johnson states – I would find standards in the following areas extremely helpful as I try to evaluate our district’s technology infrastructure and plan for improvement:

1. Connectivity (LAN, WAN, and Internet I & II capacities)
2. Security (firewalls, filters, policies)
3. Tech support (technicians per computer, tech support response time, reliability rates, policies about technology replacement)
4. Administrative applications (student information systems, transportation, personnel systems, payroll systems, data mining systems, home-school communication systems, online testing)
5. Information resources (e-mail, mailing lists, blogging software, online learning software, commercial databases, library automation systems)
etc.

As I understand this portion of Doug Johnson’s post on standards, he would like to know just what resources schools should have available in order to provide effective educational experiences for students.

I develop resources to prepare future teachers, and part of what I attempt to do is to offer descriptions of the situations they may face in the schools they will eventually enter. My interest is in the variability in such situations. Perhaps this is also what Doug Johnson is looking for – what standards should educators be able to expect, and what are the typical means and standard deviations for some of these indicators? In other words – how does my school compare, and what should we expect to be able to provide?

I think locating good data on technology in schools – what is there and how it is used – has become increasingly difficult. I know that good data on what technology exists are out there, but these data are collected by businesses that intend to sell the information to vendors. I can’t afford access, and evidently neither can the libraries I use. I used to rely on the national surveys conducted by Henry Becker and the Technology Counts annual publications to locate such data. Technology Counts abandoned the state-to-state comparisons; perhaps some states were embarrassed and resisted offering up information. The vast differences among the states were pretty clear evidence that students had very different experiences.

I am thinking researchers found it difficult to secure funds to actually conduct quality surveys, and this has cut off ongoing, independently collected descriptions of what is typical in this area. The best resource I can find is offered by the National Center for Education Statistics (Teachers’ Use of Educational Technology in U.S. Public Schools: 2009). Still, even on the descriptive level, I do not think the data are complete enough. Wouldn’t you like answers to some of the descriptors Johnson identifies?