Why We’re Behind

“Why we’re behind: What top nations teach their students but we don’t” is a report offered to the public and policy makers by Common Core. Multiple players with varying motives (business leaders, politicians, academics) are weighing in on the state of education, assessments of what is wrong, and what needs to be done. This report is one more contribution. It is starting to remind me of the story of the blind men and the elephant, except that everyone here has their eyes open and still tends to see only part of the picture. I have commented on other policy statements, recommendations, and concerns in other posts. My goal here is only to summarize this most recent report.

The report is based on a simple logic: identify the countries scoring above the U.S. on the Programme for International Student Assessment (PISA) and attempt to identify the factors that differentiate these countries from the U.S. Nine countries fall into this category. So, the selection of countries is based on quantitative data assumed to reflect a fair comparison of student performance, and the explanation for the difference in performance is based on a logical/qualitative assessment of policies, expectations, materials, etc.

The arguments made appear to focus on examples of standards and of assessment materials (see the report for many examples).

The general conclusion appears to be:

Differences in achievement among countries are a consequence of what is taught; they do not appear to be a consequence of instructional method or of accountability assessments.

Several contributors offer individual insights within the general report, providing perspective and somewhat different takes on the findings.

There seems to be agreement that the successful countries focus on content knowledge and not just skills. The U.S. has moved to a curriculum that is narrower and more basic. The narrowness might be exemplified by the focus on math and reading; NCLB is described as encouraging this focus.

More successful countries encourage a broader focus in terms of content areas and accept the importance of knowledge (what some might describe as factual knowledge) in addition to skills. The performance advantage these countries show even in the skill areas is used to argue that broad knowledge may be necessary for the meaningful development of skills.

The report appears to contest counter-claims I have accepted in the past. According to the report, the U.S. actually spends more money per student, and the U.S. focus on educating all students does not result in a relatively weaker correlation between SES and performance.

There did appear to be some disagreement among the experts offering comments. For example, the significance of shifting time away from other subjects to focus on the areas emphasized by NCLB exams was not explained in the same way, and the role of assessment was evaluated differently. Existing testing practices were regarded as having some negative influence, but one expert notes that some of the successful countries also assess specific knowledge; they simply have explicit expectations for what will be learned across a greater diversity of areas. The focus here, then, was not on less testing. In other words, their standards are more specific than the vague standards in the U.S., and their assessments match those specific expectations more closely. All agree on the diversity-of-knowledge issue.

One expert pushing the position that testing reduces time spent on other areas offers data indicating that time spent on science in grades 1-6 decreased by 20% between 1994 and 2004, while time spent on language arts increased 9% and time spent on math increased 5%.

Some may be disappointed because the position paper:
a) downplays emphasis on what some describe as 21st-century skills; it disputes high-stakes, narrowly focused testing but does not propose that 21st-century skills should be the alternative focus
b) clearly advocates content knowledge (information) as the basis for understanding and effective skill acquisition

One writer acknowledges an obvious limitation of the methodology. The method assumes that the performance differences between the U.S. and the “more successful nations” are caused by the instructional and policy differences argued to reliably differentiate them. An obvious test of the conclusions would be to check the findings against other examples: if countries performing at or below the level of the U.S. do not show the same pattern as the U.S., the explanation would be less credible. In other words, if nations scoring below the U.S. also offer a broader curriculum and focus on knowledge, the argument that such factors explain the poorer performance of the U.S. relative to the “more successful” countries would be far less credible.
