I sometimes complain that pundits and keynoters receive too much blog attention and researchers too little. Since the researchers I follow seldom seem to blog, perhaps I should post in support of their work.
So much attention has been focused on the quality of online resources, and on the skills needed to critically evaluate those resources as a component of 21st-century literacy, that one might expect this area to have generated considerable research activity. There seem to be plenty of recommendations for practice, but little formal assessment either of the skills themselves or of the success of interventions.
The recent AERJ article by Wiley and colleagues (citation at the end of this post) describes an interesting study that evaluates commonly suggested practices for assessing web sites (e.g., identifying the page author and the possible motive for offering the information). It asks both whether students (college students in this case) learn to apply such skills and whether developing these skills influences how students then go about completing an online inquiry task. I thought the procedure used in the study was creative: the researchers presented students with a fabricated Google results page based on a given search phrase and had participants evaluate the various links. Social psychologists and other researchers often employ this kind of deception in their work. The study demonstrated that more specific guidance and a more active evaluation task resulted in improved performance on a second site-evaluation task AND the use of higher quality information in an inquiry task.
This study needs to be replicated with younger learners.
BTW – the methodology (evaluating a set of sites addressing a given topic) is similar to that proposed on the Beck "Good, bad and ugly" site.
Wiley, J., Goldman, S.R., Graesser, A.C., Sanchez, C.A., Ash, I.K. & Hemmerich, J.A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060-1106.