PBL Challenge

It seems we point to the findings of science selectively. Most folks I know profess amazement that some people ignore the overwhelming scientific evidence on climate change or evolution. They are concerned when the “scientific perspective” is not the basis for what students experience when these topics come up in the classroom.

Why, then, do we set aside the assumption that our best evidence should guide practice when selecting learning activities? You will have to trust me on this (unless someone really wants to review my reference list), but direct instruction consistently results in better academic performance than project-based learning, problem-based learning, discovery learning, and the like. How do PBL advocates rationalize this reality? I seriously want to know, because I find the PBL philosophy appealing as well. I just personally struggle with ignoring what the research findings suggest.

I argue with myself seeking answers. I know many of the proposals: direct instruction works only when the dependent variable is simple, factual knowledge; direct instruction turns learners off because it is boring; direct instruction results in learning that fails when it comes to application or flexibility. However, whatever the counterargument, the position is only a hypothesis unless tested. Show me the data (or the money, if you prefer). I am waiting to be convinced.

I am aware of what I consider successful PBL research. Success is possible. Here is what I think until shown otherwise. I am guessing that successful PBL takes far more skill to implement with classroom groups than direct instruction, and most PBL attempts probably do not meet an acceptable standard. I know this sounds harsh, but what is the goal here? In general, I think many students are simply lost or overwhelmed when self-directed. I do not think a substantial proportion of students are any more motivated by many PBL tasks. The outcome data simply do not support the argument that common implementations of PBL are as productive as more traditional methods.

So, at this point in my career, I no longer have the opportunity to conduct research studies. I do have great interest in this topic and continue to search the journals for interesting studies. Learning experiences should not be promoted on the strength of talk or novelty.


Web content evaluation – data for a change

I sometimes complain that pundits and keynoters receive too much blog attention and researchers too little. Since the researchers I follow seldom seem to blog, perhaps I should post in support of their activity.

So much attention has been focused on the quality of online resources, and on the skills necessary to critically evaluate these resources as a literacy component of 21st-century functioning, that one might think this area would have generated considerable research activity. There seem to be plenty of recommendations for practice, but little formal assessment of skill or of the success of interventions.

The recent AERJ article by Wiley and colleagues (citation at the end of this post) describes an interesting study that evaluates commonly suggested practices for evaluating web sites (e.g., identify the page author and a possible motive for offering the information) in terms of both whether students (college students in this case) learn to apply such skills and whether developing such skills influences how students then go about completing an online inquiry task. I thought the procedure used in the study was creative: basically, offer students a fabricated Google results page based on a given search phrase and have participants evaluate the various links. Social psychologists and other researchers often employ this kind of deception in their research. The research demonstrated that more specific guidance and a more active evaluation task resulted in improved performance on a second site evaluation task AND the use of higher quality information in an inquiry task.
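To make the procedure concrete, here is a minimal sketch, in Python, of how a fabricated results page and a simple link-evaluation score might be represented. The link descriptions, reliability tags, and scoring scheme are my own invention for illustration, not details taken from the Wiley et al. article.

    # Hypothetical stimulus: a fabricated results page represented as a
    # list of links, each pre-tagged by the researcher as reliable or not.
    # All titles and tags below are invented for illustration.
    results_page = [
        {"title": "University research center report", "reliable": True},
        {"title": "Anonymous personal opinion page", "reliable": False},
        {"title": "Government agency fact sheet", "reliable": True},
        {"title": "Commercial site selling a related product", "reliable": False},
    ]

    def score_evaluations(student_ratings):
        """Proportion of links the student classified the same way as the key."""
        correct = sum(
            rating == link["reliable"]
            for rating, link in zip(student_ratings, results_page)
        )
        return correct / len(results_page)

    # A student who trusts the anonymous page but flags the commercial site:
    print(score_evaluations([True, True, True, False]))  # 0.75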

This study needs to be replicated with younger learners.

BTW – the methodology (evaluate a set of sites addressing a given topic) is similar to that proposed on the Beck “Good, bad and ugly” site.

Wiley, J., Goldman, S.R., Graesser, A.C., Sanchez, C.A., Ash, I.K. & Hemmerich, J.A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060-1106.



New Data In From Math & Reading Software Evaluation

A major study supported by NCLB has been evaluating the impact of math and reading software in classrooms. I appear to be on a mailing list and receive project summaries (I specifically asked about this research last year). The data from the first year basically showed little benefit. Data from the second year of the study are now available (pdf of executive summary).

A couple of issues were being evaluated: did experience with the software matter, and did the impact on achievement vary from product to product?

Regarding experience:

For sixth grade math, product effects on student test scores were statistically significantly lower (more negative) in the second year than in the first year, and for algebra I, effects on student test scores were statistically significantly higher in the second year than in the first year.

Regarding individual products:

One product had a positive and statistically significant effect. Nine did not have statistically significant effects on test scores.

In 2007, I attended a detailed analysis of the first round of this research at AERA. I anticipate there will be a comparable report this year.


EdTech Research Agenda

The recently authorized National Center for Research in Advanced Information and Digital Technologies comes with a stated set of priorities:

• Research, development and demonstrations of learning technologies that could include simulations, games, virtual worlds, intelligent tutors, performance-based assessments, and innovative approaches to pedagogy that these tools can implement.
• Design and testing of components needed to build prototype systems. This could include tools for answering questions, for building and evaluating the construction of simulations and virtual worlds that could include sophisticated physical and biological systems or reconstructions of ancient cities brought to life with intelligent avatars (models of humans in virtual spaces).
• Research to determine how these new systems can best be used to build interest and expertise in learners of different ages and backgrounds. This will give educators, parents, employers, and learners the information they need to make informed choices.

It will be interesting to see what research funds are eventually made available and what realistic opportunities various institutions have to compete for these resources.

Andy Carvin offers additional information on this initiative.


You Still Must Think

Perhaps too much has been made of generational differences in the way technology is used. And perhaps there is confusion between comfort level and productivity.

A research study just released concludes:

The first ever virtual longitudinal study carried out by the CIBER research team at University College London claims that, although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read and do not possess the critical and analytical skills to assess the information that they find on the web.

The group responsible for this study intends to track individuals longitudinally. The group also seems to contend that bad scholarship is like a disease – the rest of us are catching it from our students (I made up the part about the disease model – the group says no such thing, but they do observe that similar patterns are emerging across generations).

(a pdf explaining the research and the intent of the group is available to those in whom the problem has not progressed past the point of no return – read while you still can) 😉

BTW – the details in the pdf are a little sparse. This is more of an issue piece, but the issues are interesting and linked to some research.


Communicating in Email

I came across a brief Wired post exploring the inability to communicate actual intent in email. The Wired article referenced “recent” research published in the Journal of Personality and Social Psychology by Epley and Kruger.

The Wired article refers to Epley and Kruger in stating:

The researchers took 30 pairs of undergraduate students and gave each one a list of 20 statements about topics like campus food or the weather. Assuming either a serious or sarcastic tone, one member of each pair e-mailed the statements to his or her partner. The partners then guessed the intended tone and indicated how confident they were in their answers.

Evidently, those receiving the messages understood the tone at about chance level.
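For context, “chance level” in a two-alternative task like this one (serious vs. sarcastic) is 50 percent. A quick simulation makes the baseline concrete; the trial count is mine, not the study’s:

    import random

    # Simulated chance baseline: a receiver who guesses the tone at random
    # in a serious-vs-sarcastic task should land near 50 percent accuracy.
    # The 10,000-trial count is illustrative, not the study's design.
    random.seed(1)
    tones = ["serious", "sarcastic"]
    trials = 10000
    hits = sum(random.choice(tones) == random.choice(tones) for _ in range(trials))
    print(f"Random-guess accuracy: {hits / trials:.1%}")  # about 50%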

I think this finding and the researchers’ explanation (we are egocentric: we know what we want to convey and assume that the message says that) are interesting and should be part of the message when talking with teachers about email. Evidently, we often lack the metacognitive ability to differentiate the meaning in our head from the meaning on the screen until we receive the reply indicating we have been misunderstood.

I wanted to read the original work and tried to locate the authors and theme in Google Scholar. The Wired article does not provide a reference.

I found Kruger, J., Epley, N., Parker, J., & Ng, Z. (2005). Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology, 89, 925-935. The article does deal with egocentrism and email, but the Wired piece leads with comments about the experiments and seems to imply newer work.



Here is my response

The Gates Tip Line includes a recent post in which the host asks for responses to a teacher’s negative analysis of Prensky’s comments (I did not see the phrase “engage me or enrage me,” but this is the type of comment that Prensky uses). The host was disappointed with the lack of response to this request. I attempted to add a comment, but the options appear to require that you identify yourself through a commercial blog service or OpenID, so I will add my comment here. By the time I read the comments, some had already made the effort to reply. I have excerpted one comment I would like to address:

The teacher’s statements above fly in the face of what the last two decades of psychological research have found (which (surprise!) support constructivist models of learning rather than a transmission model of education!). ‘Guide on the side,’ not ‘sage on the stage.’ As much as possible, discovery- and inquiry-based learning rather than lecture and regurgitation.

I don’t like phrases like “regurgitation.” These discussions should be about data and sound judgment; we can leave the defamatory phrases to the politicians. If you mean memorization, say so. I do agree that education should attempt to require more than memorization. Lecturing, like books, is an information delivery system. Hopefully, learners are capable of using information, however they encounter it, as the starting point for learning. The constructivist model, as I understand it, suggests we all understand by attempting to interpret experiences (including lectures, I assume) based on our existing personal knowledge.

I would sincerely like to be made aware of the research mentioned here (please provide references). If you have followed my recent and past comments, you know I have not read what I consider quality research supporting the “child-centered” position. I have read many books and articles on the topic, and I have myself added to this material, but these are not research papers. As I have said, I can direct you to reviews of research by Sweller, Chall, Mayer, and Lesgold that are quite critical (references below). You have to consult these reviews for the specific studies that are available. So, there are many studies arguing the negative side of this debate.

Perhaps this is a matter of differences in definition – constructivism and child-centered instruction are difficult to operationalize. I am not attempting to bait anyone here, but blog hosts are appealing to their general readership for help and information, and I am doing the same. If we can switch the discussion to the data, please help by offering references the rest of us can review. I have already read the negative reviews; where are the positive studies?

Chall, J. (2000). The academic achievement challenge: What really works in the classroom. New York: Guilford Press.

Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75-86.

Lesgold, A. (2001). The nature and methods of learning by doing. American Psychologist, 56, 964-973.

Mayer, R. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59, 14-19.
