AERA 2007

We spent the last week in Chicago, including the trip there and back. The event was the 2007 American Educational Research Association (AERA) convention. The one presentation I attended that may be of interest to readers of this blog concerned the evaluation of math and reading software I described last week. A presenter from Mathematica and a discussant from Stanford considered the results of the first phase of the study. Here are a couple of comments I found helpful.

Even though the study involved many schools, teachers, and students, treating the school and the teacher as the units of analysis meant, based on a power analysis, that an effect of at least .15 was required before it could be regarded as statistically significant. Under these conditions, it was not possible to evaluate the impact of the 16 individual instructional packages that were used, and the overall effect of the reading or math software did not achieve statistical significance.
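To make the power issue concrete, here is a minimal sketch of how clustering (analyzing at the classroom level rather than the student level) inflates the minimum detectable effect size. The class size, number of classrooms, and intraclass correlation below are my own illustrative assumptions, not figures from the Mathematica analysis.

    # Illustrative sketch only; the report's actual power analysis is not reproduced here.
    from statsmodels.stats.power import TTestIndPower

    students_per_class = 20    # assumed class size
    n_classrooms = 132         # assumed classrooms per condition
    icc = 0.15                 # assumed intraclass correlation among classmates

    # Design effect: clustered students carry less information than independent ones.
    design_effect = 1 + (students_per_class - 1) * icc
    effective_n = (n_classrooms * students_per_class) / design_effect

    # Solve for the smallest effect size detectable with 80% power at alpha = .05.
    mdes = TTestIndPower().solve_power(effect_size=None, nobs1=effective_n,
                                       alpha=0.05, power=0.8,
                                       alternative='two-sided')
    print(f"Minimum detectable effect size: {mdes:.2f}")  # roughly 0.15 with these assumed inputs

The point of the sketch is simply that even a study with thousands of students can be limited to detecting fairly large effects once schools and teachers, rather than students, are treated as the units of analysis.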

Based on tracking functions built into some of the software programs, the researchers estimated that use of the software replaced approximately 10% of traditional instruction. The discussant noted that, for approximately $25 per seat (the average cost of the software), it might be argued that 10% of teacher time was freed to attend to the needs of individual students. This “spin” on the results was interesting; at first it struck me as a positive and quite optimistic statement, given the very guarded approach taken by the researchers. However, while potentially true, it would also seem that this freed time must not have been used productively, or achievement gains would have been generated.

The data from the second phase of the study are still under review. The second phase will allow comment on the individual software packages.

Representatives from a couple of software companies were in the audience and noted that the software was implemented with minimal technical support and in-service preparation. The presenters accepted this description but countered that this was the level of support schools normally purchase.

The presenters were careful to stay away from speculating about how the results would be interpreted by politicians and policy analysts, arguing that the results could be spun in different ways.

We did have the opportunity for a little recreation. The Sunday before the conference we were able to watch the White Sox and Twins. It was cold, but Santana was pitching for Minnesota and we are becoming big Twins fans. Twins won.

[Image: baseball field]



Research Results Are Now Available

The results of the NCLB-mandated evaluation of reading and math software are now available (see the press release).

  • On average, after one year, products did not increase or decrease test scores by amounts that were statistically different from zero.
  • For reading products, effects on overall test scores were correlated with the student-teacher ratio in first-grade classrooms and with the amount of time that products were used in fourth-grade classrooms.
  • For math products, effects were uncorrelated with classroom and school characteristics.

The PDF of the full report is available for download.

A report based on a second year of data collection will be released at a later date.


Online vs. Print

if:book has an interesting post concerning the future of print media, specifically newspapers. The post references material suggesting that print will be around for a long time for certain applications.

Print reading, he says, tends toward the sustained and immersive, the long-form linear narrative. Computer reading, on the other hand, is multi-tasky — distracted, social, bite-sized, multidirectional. (Cory Doctorow)

The post also references a new eye-movement study that concludes readers process online stories more extensively than they do the traditional newspaper format (see the if:book post for more information and the link). Such data may require careful interpretation. Newspaper stories (if I remember how writers construct them) are written to position the important and summary information early in the story and then fill in the details, so that readers with different motives can process the same story differently. My point – media type and media format may be confounded, and this confound must be considered in interpretation.


Ed Tech Research Under Fire

Back in January, I offered a post concerning the Department of Ed’s attempt to improve the quality of research focused on classroom software. On a lark, I emailed one of the researchers and received assurance that the report (based on research conducted in 2005) would be available in six weeks. OK – Jan., Feb., Mar., Apr. – and counting.

Today I received my copy of eSchool News and noted that an eSN “exclusive” has now targeted the same issue. Among the many disappointing comments in the eSN article is the revelation that findings will not be broken out by software program but reported in the aggregate. I thought the idea of determining which of the targeted programs worked and which did not was the purpose of this $10 million mandated study. So we won’t end up finding out which programs are effective (I guess this removes the concern that EETT money, if it is still available, will have to be spent on effective software). I also wonder whether the software used in 2004-2005 continues to exist in the form that was evaluated.


Technology Counts – 10 Years

EdWeek’s Technology Counts is now available online. A unique aspect of this year’s issue is that it marks the tenth year of this special report, and anyone interested is invited to view the previous nine issues.

We have long used this publication as a source of basic quantitative data on the technology resources available in schools. While always somewhat dated, the data were fairly standard, and it was possible to track change over time.

In recent years, the publication has taken to grading the accomplishments of individual states and even providing a feature allowing comparisons among states. I see that North Dakota was given a D+ in use of technology. This sounds pretty dire and happens to reflect what I would regard as the bottom-line issue. I continue to be dissatisfied with how EdWeek operationalizes Use of Technology: student standards include technology, students are tested on knowledge of technology, the state has a virtual school, the state offers computer-based assessments. Is this what an educator or parent would think of when asked about the educational use of technology? Please call these variables something else. I am interested in whether students make use of technology in learning the subject matter they are expected to master. This goal has little to do with what students are expected to learn about technology, whether technology is used in evaluating their knowledge, or whether they have access to a virtual school. Unfortunately, EdWeek appears focused on data that are easily obtained from state reports but have little to do with what students do in content-area learning.
