The Invention of Benchmarks for Science Literacy

The terms and circumstances of human existence can be expected to change radically during the next human life span. Science, mathematics, and technology will be at the center of that change—causing it, shaping it, responding to it. Therefore, they will be essential to the education of today’s children for tomorrow’s world. What should the substance and character of such education be?

Answering that question was what Project 2061 set out to do in 1985. Its answer appeared in 1989 in its first publication, Science for All Americans (SFAA). Those words serve equally well to describe the Project’s second publication, namely Benchmarks for Science Literacy, SFAA’s companion report of 1993. Whereas SFAA proposes what essentially constitutes adult science literacy, Benchmarks specifies how K-12 students should progress toward such literacy.

How Benchmarks came into being is, I believe, an unusual story in the annals of science education reform. Chapter 13 of Benchmarks gives an account of its origin. Here I limit myself to only two aspects of its development: the role of teachers and how in the process failure led to success.

About the Central Role of Teacher Teams in Creating Benchmarks

When trying to reach agreement on what students should learn in science by the time they graduate from high school, i.e., to propose what constitutes adult science literacy, Project 2061 decided that the primary authority should rest with scholars—basic and applied scientists (physical, life, and social), engineers, mathematicians, historians, and philosophers. The secondary authority would rest with elementary and secondary teachers of science, math, and technology, school administrators, learning psychologists, and university professors responsible for science teacher education. This was justified on the grounds that the purpose was to decide what science was most worth knowing by everyone, not how that knowledge would be acquired over time.

When it came to Benchmarks, Project 2061 took just the opposite stance—educators would serve as the primary source, scholars as the back-up. The focus was to be on student development of content over time in order to reach the SFAA endpoint by graduation day.

Fair enough. But which kinds of teachers? Elementary, middle school, and high school? Teachers of astronomy, geology, biology, chemistry, physics, general science, math, and technology? Teachers in city, suburban, and country schools? How many teachers would be needed to cover those bases? And then given their typical work schedules, how could teachers possibly find time to carry out such a time-demanding task? How could they get help when they needed it? How could they communicate with each other and with the Project 2061 staff? It was one thing to declare that teachers would have the lead, quite another to figure out how to make that happen.

Here was our answer. We decided to have district teams rather than an assembled national group. Members of each district team would then be close enough to discuss issues at length and often. The various teams would also be very different from one another by virtue of locale, demographics, and available resources, so that they might collectively represent the nation. The Project recruited teams of school teachers and administrators from six sites around the country—in rural Georgia; in suburban McFarland, Wisconsin; and in urban Philadelphia (large African-American population), San Antonio (large Hispanic population), San Francisco (ethnically mixed population), and San Diego (rapidly changing school population).

Each team had five elementary teachers, five middle-school teachers, ten high-school teachers, one principal from each level, and two school-based curriculum specialists. Collectively the teachers had taught the life and physical sciences, social studies, mathematics, technology, and also other disciplines. Each member had to agree to serve with colleagues across grade levels and subjects, and to commit to participate fully over the life of the task at hand.

Each 25-member team received clerical support, computers at home and computer training (remember, this was 1989, when computers were just beginning to show up in schools), office space, reference materials, travel funds, and other needed resources. The school districts involved agreed to release team members an average of four days per month from their classrooms to work on Project 2061 tasks. Faculty from local universities agreed to provide consultation and technical assistance to the teams as requested throughout the year. In addition, the teams met together with Project 2061 staff and consultants from around the country in each of four annual month-long summer workshops (successively at the universities of Colorado, Boulder; Wisconsin, Madison; Washington, Seattle; and Cornell).

Never before or since in science education have so many teachers been engaged in a national project of this magnitude, intensity, and duration. The significance of Benchmarks testifies to their creativity.

About the Failure That Led to Success

All the more so since Benchmarks for Science Literacy—not to mention its offspring, Atlas of Science Literacy—was not the intended outcome of the project. The intended outcome, as described in the proposal funded by the National Science Foundation, was a set of curriculum models based on Science for All Americans. Each of the six sites was to produce a model that would make sense for itself and for other districts like it.

We tried. After two years of effort, it became apparent to me that the six draft models were not satisfactory and were unlikely to become so. But as it turned out, not all was lost, for the setback allowed us to focus intensely and together on what was working—the creation of maps and lists. In order to develop K-12 curriculum models, the team members had to imagine what progress students could make toward each separate SFAA goal, a process that came to be called mapping because it required groups to link more sophisticated ideas in later grades to the more primitive ones suitable in the earlier years.

Based on the maps and charts of the six teams, a common set of benchmarks was drafted by spring 1992, reviewed by the teams and consultants, and then revised accordingly. Format was an issue, with champions for lists, maps, and essays. Decisions leading to the existing benchmarks were based on the staff's desire to maintain the "less is more" focus of the goals while fostering creativity and variety of means. Hence the list form, bolstered by short explanatory essays.

Well, not the promised models, but Benchmarks, which has enjoyed widespread use nationally and internationally in the years since its publication. In addition:

In the months prior to the summer 1992 meeting, “Science Education Standards” got on the nation’s education-reform agenda, and the National Academy of Sciences set out to develop standards for education in science. In spite of different notions as to what standards might mean in the context of science education, all advocates seemed to want, above all, recommendations for “what students should know and be able to do at certain grade levels”—precisely what Project 2061 had been formulating as benchmarks. By providing the Academy working group with the June 1992 draft of Benchmarks, our teams helped shape the learning goals in the National Science Education Standards.

The maps developed by the teams quickly became popular, but they are difficult and time-consuming to create. For that reason they could not be made ready for release with Benchmarks. They were so promising, however, that funding was made available to complete the job. Thus the two volumes of Atlas of Science Literacy, together containing nearly 100 maps, are direct descendants of those created by the Benchmarks teams.

Out of a failure of sorts came success of a different and greater kind.