I think I read Flesch’s Why Johnny Can’t Read back in graduate school. For one reason or another, this problem seemed to disappear. Perhaps math and science issues became greater concerns.
Problems with reading proficiency are about to receive more attention. A recent study by the National Endowment for the Arts urges attention to the lack of reading activity once students hit middle school.
“We are doing a better job of teaching kids to read in elementary school. But once they enter adolescence, they fall victim to a general culture which does not encourage or reinforce reading. Because these people then read less, they read less well. Because they read less well, they do more poorly in school, in the job market and in civic life.”
Among the possible causes is the “proliferation of electronic media.” The news article I have linked does include statements contending that the research did not account for online reading and that students may still be reading, just different types of material.
Researchers have released a report evaluating student achievement as a consequence of a 1:1 laptop initiative. The researchers claim quite significant improvements in writing proficiency. At this point I have only had an opportunity to read the executive summary, so I may have more to say after I have read the methodology: Maine Laptop Research.
This is kind of interesting – the U.S. Census Bureau has compiled a collection of data relevant to the K12 environment under the heading Back to School 2007-2008 (thanks to Ray Schroeder for this lead).
You have to scroll down quite a ways to find the technology section. In looking at the topics (e.g., use of a computer or the Internet to complete school assignments – 75% and 66%), I continue to be amazed by how dated the sources are on such topics. The government cites data from 2003. I have been trying to remember how I used a computer in 2003. I remember the components were pretty much the same (keyboard, monitor, etc.). I don’t remember Second Life, creating a wiki, creating a wiki within Blackboard, YouTube, free CNN video, Last.FM, Slingbox, the iPhone, listening to college lectures from MIT and U.C. Berkeley as late-night entertainment when I can’t sleep, etc. Now, some of these options may have been available and simply escaped my attention, but most are new. Think the stats have changed?
I do appreciate the effort of whoever put together the list. Not his/her fault the data are old.
We spent the last week in Chicago and on the journey getting from here to there and back. The event was the American Educational Research Association convention (2007). The one presentation I attended that may be of interest to those who read this blog concerned the evaluation of math and reading software I just described last week. A presenter from Mathematica and a discussant from Stanford considered the results of the first phase of the study. Here are a couple of comments that I found helpful.
Even though the study involved many schools, teachers, and students, treating school and teacher (rather than student) as the units of analysis meant, based on a power analysis, that the researchers needed an effect size of at least .15 before an effect could be regarded as significant. Under these constraints it was not possible to evaluate the impact of the 16 individual instructional packages that were used, and the overall effect of the reading or math software did not achieve statistical significance.
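The power-analysis constraint is easier to see with a quick sketch. What follows is my own illustration, not anything from the presentation: it uses the statsmodels library and made-up unit counts to show that when schools or teachers, rather than thousands of individual students, serve as the units of analysis, only fairly large effects can reach significance.

```python
# A minimal sketch (mine, not the Mathematica analysis) of how a power
# analysis links the number of units of analysis to the smallest effect
# size that can be detected. The unit counts below are hypothetical
# placeholders; only the usual alpha/power conventions and the general
# logic are standard. Requires the statsmodels package.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for n_units in (30, 60, 120, 500):
    # Solve for the minimum detectable effect size (Cohen's d) given
    # n_units per condition, a two-tailed alpha of .05, and 80% power.
    mde = analysis.solve_power(effect_size=None, nobs1=n_units,
                               alpha=0.05, power=0.80, ratio=1.0)
    print(f"{n_units:>4} units per condition -> minimum detectable d = {mde:.2f}")

# The point: with teachers or schools (rather than individual students)
# as the units, the minimum detectable effect stays fairly large, so an
# overall effect smaller than roughly .15 cannot register as
# statistically significant.
```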
Based on tracking functions present in some of the software programs, the researchers estimated that use of the software replaced approximately 10% of traditional instruction. The discussant noted that for approximately $25 per seat (the average cost of the software), it might be argued that 10% of teacher time was freed to attend to the needs of individual students. This “spin” on the results was kind of interesting; at first I thought it a positive and quite optimistic statement given the very guarded approach taken by the researchers. However, while potentially true, it would also seem that this flexible time must not have been used productively, or achievement gains would have been generated.
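The discussant’s framing can be made concrete with a back-of-envelope calculation. The teacher cost and class size below are my hypothetical assumptions, not figures from the study; only the $25 per seat and the 10% estimate come from the presentation.

```python
# Back-of-envelope sketch of the discussant's argument. The teacher cost
# and class size are hypothetical assumptions used for illustration only.
software_cost_per_seat = 25           # average cost per student (from the study)
share_of_instruction_replaced = 0.10  # software replaced ~10% of instruction

teacher_cost_per_year = 60_000        # hypothetical fully loaded teacher cost
class_size = 25                       # hypothetical number of students

# Rough value of the teacher time "freed" per student by the software
teacher_cost_per_student = teacher_cost_per_year / class_size
value_of_freed_time = share_of_instruction_replaced * teacher_cost_per_student

print(f"Software cost per seat: ${software_cost_per_seat}")
print(f"Teacher time freed (per student, per year): ${value_of_freed_time:.0f}")

# Under these assumptions, $25 per seat appears to buy back about $240 of
# teacher time per student; the catch, as noted above, is that this freed
# time did not translate into measurable achievement gains.
```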
The data from the second phase of the study are still under review. The second phase will allow comment on individual software packages.
Representatives from a couple of software companies were in the audience and noted that the software was implemented with a minimal amount of technical support and in-service preparation. The presenters accepted this description, but countered that this was the level of support that schools normally purchased.
The presenters were careful to stay away from speculating how the results of the research would be interpreted by politicians and policy analysts. The presenters argued that the results could be spun in different ways.
We did have the opportunity for a little recreation. The Sunday before the conference we were able to watch the White Sox and Twins. It was cold, but Santana was pitching for Minnesota and we are becoming big Twins fans. Twins won.
The results of the NCLB-mandated evaluation of reading and math software are now available (see press release).
On average, after one year, products did not increase or decrease test scores by amounts that were statistically different from zero.
For reading products, effects on overall test scores were correlated with the student-teacher ratio in first-grade classrooms and with the amount of time that products were used in fourth-grade classrooms.
For math products, effects were uncorrelated with classroom and school characteristics.
The pdf of the full report is available for download.
A report based on a second year of data collection will be released at a later date.
Back in January, I offered a post concerning the Department of Ed’s attempt to improve the quality of research focused on classroom software. On a lark, I emailed one of the researchers and received assurance that the report (based on research conducted in 2005) would be available in 6 weeks. OK – Jan, Feb, Mar, Apr – and counting.
Today I received my copy of eSchool News and noticed that an eSN “exclusive” has now targeted the same issue. Among the many disappointing comments in the eSN article is the revelation that findings will not be broken out by software program but reported as an aggregate. I thought the idea of determining which of the targeted programs worked and which did not was the purpose of this $10 million mandated study. So, we won’t end up finding out which programs are effective (I guess this removes the concern that EETT money, if it is still available, will have to be spent on effective software). I also wonder whether the software used in 2004-2005 continues to exist in the format that was evaluated.
I have decided that my formal academic training is somewhat of an obstacle. It functions somewhat like a conscience. It is that little voice in the back of my head reminding me that it is inappropriate to be a promoter without having solid evidence to back claims I may find advantageous to advocate. This is probably the one issue that troubles me the most when I read many blogs related to my personal interests. I don’t mind bloggers promoting ideas associated with personal profit motives (e.g., books, speaking or consultant fees) because I do believe individuals with good ideas should be compensated for their knowledge and skills. However, those in such situations do have a responsibility to seriously consider the evidence for and against the positions they accept fees for promoting. In my world view, this is about science and not promotion.
Many participatory or constructivist positions on education (this covers a great deal of ground) are contrasting themselves with something. This alternative is described in a variety of ways – the traditional approach, the instructivist approach, the lecture approach, instructor centered, etc. – and as a holdover from another time requiring different skills (an assembly-line mentality). Clearly, occupations have changed and require different skills, or at least a change in the skills that are emphasized. Clearly, information can often be conveniently accessed from external sources and need not be stored in personal memory. The issue is not so much that performance in changing times requires more sophistication and a different set of skills, but how best to develop these skills. There are many proposals that essentially suggest that new skills require different learning experiences, and some of these proposals have come from me. What that little voice in my head keeps reminding me is that I should be able to offer sound empirical support when I make such claims.
This concern comes and goes with me. At present the topic is more salient because I am teaching a graduate educational psychology course and engaging students in the consideration of constructivism. I have always felt this unease when presenting/discussing this topic, noting that the “research base” in the reading assignments I use seems weak in contrast to the references available for other topics. My personal preparation for these discussions also engages me with challenging positions taken by very well respected scholars (a partial list appears at the end of this entry). Why is it that the blogs I read seem unaware of these contrary positions and seem unable to respond in kind by offering empirical support for the positions they advocate?
As if by magic, one blog I follow (EdTechDev) has posted an entry that at least seems to acknowledge my concern. D Holton seems to be thinking along similar lines in recognizing this unacknowledged challenge.
I sometimes wish there would be an opportunity to get these folks together in the same room and make them deal with each other. One problem with reading what people have to say is that individuals are free to ignore core issues and continue to harp on personal perspectives. If the blogosphere is to move causes forward by “churning ideas”, part of the process must require a more direct approach to considering the data presented by contrasting positions. So, I guess this is a challenge – how about a little more effort directed toward reading what the position being put down has to offer and a commitment to offer up a strong refutation (e.g., Hake).
– – – –
Chall, J. (2000). The academic achievement challenge: What really works in classrooms. New York: Guilford Press.
Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.
Lesgold, A. (2001). The nature and methods of learning by doing. American Psychologist, 56(11), 964-973.
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), 14-19.