Scoring the new SAT

The politically correct position on the SAT these days is that it fails the test. According to many progressive educators, the exam unjustifiably claims to predict college success and reinforces socioeconomic inequities, a relic of a bygone era when so-called objective testing was believed to quantify intellectual ability efficiently. The College Board itself acknowledged the test’s inadequacies earlier this month when it announced core format changes. Students will be relieved to know they no longer need to identify the definition of equivocate or write an essay in 25 minutes on whether television disrupts social cohesion. The new test, whose specifics will be unveiled this month, will supposedly ask students to interpret literary and historical texts more actively and answer math problems more in line with an authentic college prep curriculum. But can the new SAT reconfigure itself into a form that actually measures something worth measuring? Or do we need to begin with a whole new evaluation paradigm?

The A in SAT stands for aptitude. Aptitude is different from knowledge; it means capacity. The SAT supposedly measures a student’s capacity for college success, whatever particular curriculum he may have studied. That’s why scores from two totally dissimilar schools in Spokane and Schenectady can ostensibly be compared. This capacity underlies classroom experience and is more intrinsic to the mind of the student. But if the SAT assesses deeper qualities of thinking that transcend classroom experience, that is, if it isn’t about knowledge, then you shouldn’t be able to study for it. But you can. I spend an inordinate amount of my work time as a private academic mentor teaching the SAT trade, usually to high-end students from well-off families anxiously hoping to get into Yale, Cal, Duke, or Oberlin. (It’s rare that I get a student aiming only for a “second tier” school like San Francisco State or Texas A&M.) So what does the SAT look like, which aspects of it can be taught, and will the new SAT reduce the coachability of the test and evaluate ability more authentically?

The SAT now consists of four multiple-choice question types and an essay. One type asks students to fill in a blank in a sentence with a vocabulary word. A second asks reading comprehension questions based on a passage. A third requires identification of grammatical and word-usage errors, and a fourth consists of mathematical problems based on basic algebra and geometry.

The 25-minute essay section, added in 2005, has been unpopular with both students and college admissions officers. Is it a helpful tool for determining writing skills? My perhaps reactionary position is that it is. It captures a dimension of a student’s ability to think on his feet in a structured way. There are admittedly drawbacks to the assignment. Twenty-five minutes is an inappropriately short amount of time and breeds superficiality. The topics are also sometimes inane. And it’s certainly not an authentic task; no college student will ever be asked to write cold on a decontextualized topic. But I can see how the quality of a response might correlate with how well a student parses Faulkner at NYU two years down the road. This is a wildly unpopular view in liberal circles, but it’s based on having worked on writing with scores of students. The kind of essay writing expected in college is often largely formulaic, and a formula is what I give students for writing the SAT essay. The template is basic: a three-sentence introduction with a thesis at the end, then two body paragraphs, each with a topic sentence and an example capable of analysis. I recommend that students keep two generic examples in their pockets that can be applied to a wide range of topics. I suggest Obama as one stock example, so if the topic is something like “Does success usually involve struggle?”, a typical prompt, one could be ready to discuss Obama’s fight with the Republican Party. The new SAT will not much alter the need for this formula. Expanding the time allotted to 50 minutes will allow students to delve a little deeper, and the task will involve analysis of the argument in a text, but I’ll likely be able to coach it in a similar way.

The new SAT appears to drop the least legitimate part of the existing test: a multiple-choice writing section that asks students to correct grammatical and word-usage errors and critique the quality of some horrendously written sham student essays. This section is conspicuously flawed. Whether a student can pick out a dangling modifier or notice that imminent is mistakenly being used for eminent bears little on his ability to write or to do college work in general. More fundamentally, rules of grammar and word usage are teachable knowledge. I tell students this is the section where I can help them raise their scores more than any other. I simply give them a crash English course. I don’t even need to be a particularly skilled teacher. Any one of the SAT study guides from companies like Barron’s and Princeton Review has lists of grammar and usage rules with sample questions. I merely explain how they apply and make sure students take notes. The College Board says that with the new test students “will be asked to do more than correct errors; they’ll edit and revise to improve texts.” But the old SAT also asked students to edit, by doing things like identifying a poor transition in a weakly written student essay. The bottom line is that a multiple-choice approach to a task as complex and ill-structured as writing will inherently fail to assess writing skill. College English professors and their high school counterparts might even find such an approach to writing assessment offensive.

In my experience, the most helpful section of the SAT is the critical reading. Students read passages, often by authentic authors (I’ve encountered Richard Wright and Oliver Sacks), and answer questions testing their understanding. The questions can be subjective, but the logic of the answers is generally sound. I do acknowledge concern about cultural bias in the selection of passages. My students who perform well on this section tend to be strong humanities students in school, and my students who struggle to comprehend these passages are apt to encounter problems working with college-level texts. The new SAT is not likely to improve much on this formula. The College Board emphasizes that there will be a new focus on analyzing an author’s argument, particularly the use of evidence; the Board has clearly joined the crowd of educators saying it’s all about critical thinking. Questions asking for deconstruction of an argument are a welcome addition, but I am fond of the current questions that ask students whether one passage from A Portrait of the Artist as a Young Man is ethereal or whether another portrays Atlantic jellyfish as passive-aggressive.

The most controversial question type on the current SAT, and the one the College Board seems most eager to disown, is the sentence completion that tests vocabulary. What value is there in knowing a student’s vocabulary? In a rare moment of self-critique, the Board slapped itself on the wrist recently when it said “no longer will students use flashcards to memorize obscure words, only to forget them the minute they put their test pencils down.” It’s true that students sometimes spend countless hours filling their vocabulary buckets one grain at a time in the hope that the work will translate into higher scores on this section. It’s also true that the size of a student’s vocabulary reflects socioeconomic status (particularly the education of the parents). But there is a correlation between vocabulary and reading skill, in part because of the valid truism that reading increases vocabulary. Less obviously, my observation is that vocabulary is partly a function of how someone engages with language in the world, both written and spoken. Does a student passively absorb challenging language, or wrestle with its meaning, reformulate it, integrate its contents? A more active relationship with language means a stronger vocabulary, and that active approach is the mark of a successful student. There actually is something to testing students’ understanding of less-than-common words. The new SAT, though, will not test knowledge of such words. The College Board has taken pains to make vocabulary testing authentic by using relevant words in the context of a real piece of writing. When the Board releases sample questions later this month, it will be interesting to see what types of words it considers relevant.

Finally, the math section is certainly the least accurate predictor of college performance, given that only a minority of students use algebra and higher-level math skills in their courses. Supposedly the ability to solve the SAT’s mathematical puzzles indicates a certain generalizable left-brained problem-solving ability, perhaps something like chess acumen, but really it just displays savvy in the math subject area, not useless by any means, but hardly a genuine indicator of college readiness. The math section can be coached much more easily than the reading because a student can be taught to identify specific question types he has practiced. If a question asks one to measure the area of a figure formed by four overlapping circles, it is doubtful the student will have seen such a figure before, but he may have studied areas formed by other geometric shapes. The problem then becomes an exercise in adeptly transposing conceptual understanding. That is a useful capacity to measure, one that gets beyond specific content knowledge. But in some ways the new SAT appears to be moving back toward evaluating traditional content knowledge, similar to what the rival college entrance test, the ACT, does. The College Board claims it intends to measure what it calls “quantitative literacy.” It’s hard to project what this might be. Is there some kind of underlying mathematical comprehension that parallels reading comprehension? It’s more likely that the new math section will measure, directly or indirectly, what students learn in the classroom.

Whatever improvements the new SAT makes over the old, there is no getting around the problematic fact that it is a multiple-choice test. Multiple-choice tests cannot detect complex, creative, and thickly reasoned thought. They also place the test-taker in a passive position, not allowing her to construct answers herself but only having her choose from limited possibilities on a pre-written menu. A multiple-choice exam is seductive to those wishing for an objective standard. By reducing student performance to statistical outcomes, it creates a manageable system for sorting college applicants at a national level. It is the only de facto national assessment we have for high school students. Whatever it may actually measure, the SAT will apparently remain a valued currency for purchasing a ticket to college. Test revisions may be welcome, but the old regime remains.