Web content evaluation – data for a change

I sometimes complain that pundits and keynoters receive too much blog attention and researchers too little. Since the researchers I follow seldom seem to blog, perhaps I should post in support of their activity.

So much attention has been focused on the quality of online resources, and on the skills needed to evaluate these resources critically as a component of 21st-century literacy, that one might expect the area to have generated considerable research activity. There seem to be plenty of recommendations for practice, but little formal assessment of the skills themselves or of the success of interventions.

The recent AERJ article by Wiley and colleagues (citation at the end of this post) describes an interesting study that evaluates commonly suggested practices for evaluating web sites (e.g., identifying the page author and the possible motive for offering the information) on two counts: whether students (college students in this case) learn to apply such skills, and whether developing these skills influences how students then go about completing an online inquiry task. I thought the procedure used in the study was creative – offer students a fabricated Google results page for a given search phrase and have participants evaluate the various links (social psychologists and other researchers often employ this kind of deception). The research demonstrated that more specific guidance and a more active evaluation task resulted in improved performance on a second site-evaluation task AND the use of higher-quality information in an inquiry task.
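
To give a concrete feel for the task format, here is a toy sketch (my own illustration, not the study's actual materials – the titles and URLs below are invented) of presenting a fabricated results page and collecting reliability ratings:

```python
# Toy sketch of the task format: show fabricated search results for a fixed
# query and ask the participant to rate each link's reliability.
# All titles and URLs are invented for illustration.

fabricated_results = [
    {"title": "University research center report", "url": "https://research.example.edu/report"},
    {"title": "Anonymous personal opinion page", "url": "https://myviews.example.com/rant"},
]

def collect_ratings(results):
    """Prompt for a 1 (unreliable) to 5 (reliable) rating for each result."""
    ratings = {}
    for result in results:
        print(f"{result['title']}  <{result['url']}>")
        ratings[result["url"]] = int(input("Reliability (1-5)? "))
    return ratings

if __name__ == "__main__":
    print(collect_ratings(fabricated_results))
```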

This study needs to be replicated with younger learners.

BTW – the methodology (evaluating a set of sites addressing a given topic) is similar to that proposed on the Beck “Good, bad and ugly” site.

Wiley, J., Goldman, S.R., Graesser, A.C., Sanchez, C.A., Ash, I.K., & Hemmerich, J.A. (2009). Source evaluation, comprehension, and learning in Internet science inquiry tasks. American Educational Research Journal, 46, 1060-1106.


New Data In From Math & Reading Software Evaluation

A major study supported under NCLB has been evaluating the impact of math and reading software in classrooms. I appear to be on a mailing list and as a result receive project summaries (I specifically asked about this research last year). The data from the first year showed little benefit. Data from the second year of the study are now available (pdf of the executive summary).

A couple of issues were evaluated: did experience with the software matter, and did the impact on achievement vary from product to product?

Regarding experience:

For sixth grade math, product effects on student test scores were statistically significantly lower (more negative) in the second year than in the first year, and for algebra I, effects on student test scores were statistically significantly higher in the second year than in the first year.

Regarding individual products:

One product had a positive and statistically significant effect.  Nine did not have statistically significant effects on test scores.
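
For readers who want to connect the language of the summary to an actual computation, here is a hedged sketch (my own illustration with made-up numbers; the study itself used far more elaborate statistical models) of what testing a single product's effect for statistical significance can look like:

```python
# Illustration only: the actual study used multilevel models, but a simple
# two-sample t-test conveys what "a statistically significant effect" means.
from scipy import stats

# Made-up classroom mean test scores for illustration.
treatment_scores = [52.1, 48.3, 55.0, 60.2, 49.8, 54.4]
control_scores = [50.0, 47.5, 51.2, 53.9, 48.1, 49.6]

t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value below .05 is the conventional threshold for "statistically significant."
```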

In 2007, I attended a detailed analysis of the first round of this research at AERA. I anticipate there will be a comparable report this year.


EdTech Research Agenda

The recently authorized National Center for Research in Advanced Information and Digital Technologies comes with a stated set of priorities:

• Research, development and demonstrations of learning technologies that could include simulations, games, virtual worlds, intelligent tutors, performance-based assessments, and innovative approaches to pedagogy that these tools can implement.
• Design and testing of components needed to build prototype systems. This could include tools for answering questions, for building and evaluating the construction of simulations and virtual worlds that could include sophisticated physical and biological systems or reconstructions of ancient cities brought to life with intelligent avatars (models of humans in virtual spaces).
• Research to determine how these new systems can best be used to build interest and expertise in learners of different ages and backgrounds. This will give educators, parents, employers, and learners the information they need to make informed choices.

It will be interesting to see what research funds are eventually made available and what realistic opportunities various institutions have to compete for these resources.

Andy Carvin offers additional information on this initiative.


You Still Must Think

Perhaps too much has been made of generational differences in the way technology is used. And perhaps there is confusion between comfort level and productivity.

A research study just released concludes:

The first ever virtual longitudinal study carried out by the CIBER research team at University College London claims that, although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read and do not possess the critical and analytical skills to assess the information that they find on the web.

The group responsible for this study intends to track individuals longitudinally. The group also seems to contend that bad scholarship is like a disease – the rest of us are catching it from our students. (I made up the part about the disease model – the group says no such thing, but they do observe that similar patterns are emerging across generations.)

(A pdf explaining the research and the intent of the group is available to those in whom the problem has not progressed past the point of no return – read while you still can.) 😉

BTW – the details in the pdf are a little sparse. This is more of an issue piece, but the issues are interesting and linked to some research.


Communicating in Email

I came across a brief Wired post exploring our inability to communicate actual intent in email. The Wired article referenced “recent” research published in the Journal of Personality and Social Psychology by Epley and Kruger.

The Wired article describes the Epley and Kruger procedure as follows:

The researchers took 30 pairs of undergraduate students and gave each one a list of 20 statements about topics like campus food or the weather. Assuming either a serious or sarcastic tone, one member of each pair e-mailed the statements to his or her partner. The partners then guessed the intended tone and indicated how confident they were in their answers.

Evidently, those receiving the messages understood the tone at about chance level.
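
“Chance level” here means roughly coin-flip accuracy: with only two tones (serious vs. sarcastic), a reader who guessed randomly would be right about half the time. A quick simulation (my own sketch, not part of the study) makes that baseline concrete:

```python
# Sketch: random guessing between two tones matches the intended tone
# about 50% of the time, which is the "chance level" baseline.
import random

TONES = ["serious", "sarcastic"]
trials = 10_000
hits = sum(random.choice(TONES) == random.choice(TONES) for _ in range(trials))
print(f"Guessing accuracy: {hits / trials:.1%}")  # prints roughly 50%
```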

I think this finding and the researchers' explanation (we are egocentric – we know what we want to convey and assume the message says it) are interesting and should be part of the message when talking with teachers about email. Evidently, we often lack the metacognitive ability to differentiate the meaning in our heads from the meaning on the screen until we receive the reply indicating we have been misunderstood.

I wanted to read the original work and tried to locate the authors and theme in Google Scholar. The Wired article does not provide a reference.

I found Kruger, J., Epley, N., Parker, J., & Ng, A. (2005). Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology, 89, 925-935. The article does deal with egocentrism and email, but the Wired piece leads with comments from the experiments and seems to imply newer work.


Here is my response

The Gates Tip Line includes a recent post in which the host asks for replies to a teacher's negative analysis of Prensky's comments (I did not see the phrase “engage me or enrage me,” but this is the type of comment Prensky makes). The host was disappointed by how little response the request generated. I attempted to add a comment, but the options appear to require that you identify yourself through a commercial blog service or OpenID, so I will add my comment here. By the time I read the comments, some readers had already made the effort to reply. I have excerpted one comment I would like to address:

The teacher’s statements above fly in the face of what the last two decades of psychological research have found (which (surprise!) support constructivist models of learning rather than a transmission model of education!). ‘Guide on the side,’ not ‘sage on the stage.’ As much as possible, discovery- and inquiry-based learning rather than lecture and regurgitation.

I don’t like phrases like “regurgitation.” These discussions should be about data and sound judgment; we can leave the defamatory phrases to the politicians. If you mean memorization, say so. I do agree that education should attempt to require more than memorization. Lecturing, like books, is an information delivery system. Hopefully, learners are capable of using information, however they encounter it, as the starting point for learning. The constructivist model, as I understand it, suggests we all understand by attempting to interpret experiences (including lectures, I assume) based on our existing personal knowledge.

I would sincerely like to be made aware of the research mentioned here (please provide references). If you have followed my recent and past comments, you know I have not read what I consider quality research supporting the “child-centered” position. I have read many books and articles on the topic, and I have myself added to this material, but these are not research papers. As I have said, I can direct you to reviews of research by Sweller, Chall, Mayer, and Lesgold that are quite critical; you will have to consult these reviews for the specific studies. So, there are many studies arguing the negative side of this debate.

Perhaps this is a matter of differences in definition – constructivism and child-centered are difficult to operationalize. I am not attempting to bait anyone here, but blog hosts are appealing to a general readership for help and information. If we can switch the discussion to the data, please help by offering references the rest of us can review. I have already read the negative reviews; where are the positive studies?

Chall, J. (2000). The academic achievement challenge: What really works in the classroom. New York: Guilford Press.

Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41, 75-86.

Lesgold, A. (2001). The nature and methods of learning by doing. American Psychologist, 56, 964-973.

Mayer, R.E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59, 14-19.


Why Johnny Can’t Read

I think I read Flesch's Why Johnny Can't Read back in graduate school. For one reason or another, attention to this problem seemed to fade. Perhaps math and science issues became greater concerns.

Problems with reading proficiency are about to receive more attention. A recent study by the National Endowment for the Arts calls attention to the lack of reading activity once students reach middle school.

“We are doing a better job of teaching kids to read in elementary school. But once they enter adolescence, they fall victim to a general culture which does not encourage or reinforce reading. Because these people then read less, they read less well. Because they read less well, they do more poorly in school, in the job market and in civic life.”

Among the possible causes is the “proliferation of electronic media.” The news article I have linked does include statements contending that the research did not include online reading, and that students may still be reading, just different types of material.
