
Posted at 6:30 AM ET, 12/7/2009

Debating the value of the SAT, ACT

By Valerie Strauss

Last month I posted an interview with a sharp critic of the SAT and the ACT college admissions tests. Today I present a critic of the critic.

In the November post, Bob Schaeffer, public education director of the National Center for Fair and Open Testing, known as FairTest, explained why his non-profit organization is dedicated to ending what it says are misuses and flaws in these and other standardized tests.

Among other criticisms, Schaeffer said the SAT and the ACT are “inaccurate, biased, highly susceptible to coaching, and not necessary for making high-quality admissions decisions.” I had asked the nonprofit College Board, which owns the SAT, to debate Schaeffer but its representatives refused, saying that the organization does not consider FairTest a valid critic.

Still, FairTest is quoted by many education journalists when they write about standardized testing and college admissions exams, and I wanted a debate on the issue. So I found an expert in college admissions--Illinois Wesleyan University Admissions Dean Tony Bankston--who agreed to critique Schaeffer’s critique.

In the interests of offering you all a robust discussion on the issue, here are Bankston’s comments on Schaeffer’s responses to my questions. It is long, as was the original, but equally interesting. Take the time to read it.


Here’s what Illinois Wesleyan University Admissions Dean Tony Bankston wrote to me:

I read with great interest your interview with Bob Schaeffer of FairTest regarding his organization’s attempt to diminish the role of standardized testing in the college admission process through promoting test-optional policies.

In the past, I have shared my thoughts directly with Mr. Schaeffer on the subject, and have agreed with him that test scores should not be abused, they should not carry overwhelming weight in the admission process, and they should not be viewed as the best predictor for college success.

However, I strongly disagree with the belief that test-optional admission policies are required to achieve what FairTest seeks. I believe the overwhelming majority of colleges who still require test scores use those results in a very responsible and equitable manner.

Furthermore, the emergence of test-optional policies has begun to have other questionable effects on the college admission landscape.

For some colleges, going test-optional has provided the alluring benefit of increasing applications and bolstering enrollment while at the same time increasing reported test averages in rankings and other marketing venues.

It is important to have an organization like FairTest monitoring and questioning the misuse of standardized test results, but the thinking behind using test-optional policies as the tool to fix the few problems that do exist is deeply flawed.


Below you will find the original question I asked Schaeffer, the portion of his response with which Bankston takes issue, and then Bankston's comments.

Q) First let’s talk about the SAT. Why do you object so strongly to its use in college admissions?

FairTest has strongly criticized the misuse and overuse of the SAT (and ACT) because independent research demonstrates that the tests are inaccurate, biased, highly susceptible to coaching, and not necessary for making high-quality admissions decisions.

I think the use of the word “overuse” could easily be debated. Every year, the National Association of College Admission Counselors (NACAC) conducts a survey asking colleges what criteria they use to make admission decisions along with the amount of weight they place on different criteria. High school grades are always the number one criterion used by colleges, and this survey includes mostly non-test-optional colleges.

Do some colleges misuse or put too much emphasis on test scores? Perhaps. But it isn’t the widespread problem FairTest makes it out to be. Overwhelmingly, the majority of colleges are using test scores as part and parcel of the review process, and grades carry significantly stronger weight than test scores, or for that matter any other criterion.
In addition, this opening sentence makes it sound as if the SAT and ACT are both completely useless because they are “inaccurate, biased,” etc.

While FairTest can produce studies to support these assertions, and College Board and ACT can produce just as many studies to support the opposing viewpoint, the fact is both sides are talking about these issues for some test takers, not all, and certainly not even a significant majority. This is exactly why most colleges who still require test scores sometimes place a stronger emphasis on test scores for some students versus others, despite any arguments FairTest or the testing industry might make.

Detailed fact sheets on our website, as well as a rich, annotated bibliography, summarize the evidence. We do not, however, try to "ban" the exams. Rather, FairTest has long been the leader in the movement for test-optional admissions, which allow applicants to choose whether to submit SAT/ACT scores.

The flawed logic here is that a test-optional policy puts it in the hands of the student to determine whether or not the test results are an accurate measure of certain abilities, and they can therefore choose to withhold scores that may actually be a very good measure of their current abilities.

As an example, a student scores a 27 on the ACT. If she applies to a test-optional college where the ACT average is 24, she will most definitely submit the 27 ACT because it gives her a leg up on other applicants. If this same student applies to a test-optional college where the average ACT is 29, she will most likely NOT submit her 27 ACT, because in this case it makes her appear to be “below average.”

Obviously, this has nothing to do with the actual validity or accuracy of the exam, but more to do with the student having greater control over what they do or don’t want included as part of their application for admission.

As of today, nearly 850 accredited bachelor-degree granting colleges and universities -- about one-third of all such institutions -- will admit all or many students without regard to test scores.

A large number of the schools on the list of 850 accredited bachelor-degree granting colleges are religious-specific or technical/trade institutions, along with other four-year colleges, that had virtually open admission policies even BEFORE they went to a test-optional policy.

In other words, they were already admitting students with just about any test score. DeVry is listed 22 times. ITT Technical Institute is listed 32 times. Many of these institutions were open admission because it was necessary to meet enrollment needs.

In many ways, the test-optional movement now defines these colleges in a more selective light than “open admission” with the added benefit of raising their reported test averages since they only report the scores of those applicants who willfully submit scores. Certainly not all of the schools on the FairTest list fall into this category, but a significant number do.

The test-optional list now includes 33 of the nation’s top 100 liberal arts colleges, based on the most recent "U.S. News & World Report" rankings. Since "new" versions of the SAT and ACT were introduced in 2005, more than five dozen additional schools have dropped their admissions testing requirements.

What we definitely do seek to eliminate are widespread misuses of the exam, such as requiring minimum scores for admission (a direct violation of the test-makers’ own guidelines for proper use)....

This is already listed as a violation in NACAC’s Statement of Principles and Good Practice, which means that its members, comprising almost every college in the United States, are prohibited from doing so. Any discovery of such a policy can be reported to Admissions Practices at any time.

... [and we seek to eliminate] linking tuition aid solely to performance on a standardized test...

This is also already a violation of NACAC’s Statement of Principles and Good Practice. Almost every college uses a combination of criteria to determine different types of aid, including testing, grades, rank, leadership, etc. On the whole, it’s actually harder not to get a scholarship these days than to get one.

....e.g. the so-called National Merit Scholarship should really be called the "National Test-Taking Scholarship" since 99% of all competitors are barred from receiving awards based solely on their Preliminary SAT scores)...

National Merit recognizes one half of one percent of high school seniors in the country as finalists. While you can argue with their method of selection, it is hardly a “widespread” problem.

.... and requiring admissions test scores for jobs, a purpose for which neither the SAT nor ACT was designed. Unfortunately, the testing industry generally turns a blind eye to improper applications of its products.

Obviously, this last item has nothing to do with what colleges are doing. But with everything listed above, you don’t have to be test-optional to address these problems, as rare as they are. Throwing out all standardized tests just to fix a handful of possible problems is like using a bazooka to swat a fly.

Q) Is the ACT any better?

The ACT is a different test from the SAT, not a better one. Neither exam is as fair or accurate a tool for predicting college performance as is an applicant’s high school record, according to the test-makers themselves.

Actually, both College Board and ACT say the best predictor of college success is the combination of grades AND test results, not one or the other.

There are a couple of important points to consider here.

First, the ACT is indeed different in that it has always been tied closely to high school curriculum (which Mr. Schaeffer mentions later in this interview). The SAT started as primarily an aptitude test used to predict success in college, and only in recent years did it become more like the ACT in tying its content more closely to what students should be learning in high school.

Second, while both exams do attempt to provide correlations between test results and predicted college success, that is by no means their ONLY purpose or value. They also provide some measure of what a student has learned in high school.

For example, you cannot do well on the math section of the ACT unless you have taken (and to a certain degree mastered the concepts of) algebra, geometry, trigonometry, etc. It also provides a good measure of a student’s grasp of things like vocabulary and grammar or reading and comprehension skills.

The main thrust of the FairTest argument says that standardized tests aren’t always the best predictors of college success, and, guess what, almost every college agrees!

That is why standardized test results are used as PART of the process. Most colleges attempt to ascertain the validity of a given test score in relation to all the other academic information provided as part of the application for admission and then assign that score the appropriate weight amongst the other criteria. Again, you don’t need to be test-optional to accomplish this.

The ACT has, however, become a more consumer-friendly test in recent years: its optional writing section makes the exam shorter and less costly than the SAT, with its mandatory writing component; there is no penalty for guessing; and students can control whether a college sees a score from a particular test administration. Partially in response to these initiatives, nearly as many members of the high school class of 2009 took the ACT as the SAT.

Q) Is there any test that would be fair for that purpose?

It is highly unlikely that any mass-administered standardized test could ever be as good a tool for predicting college performance (the sole purpose of an admissions test) as students’ high school records. An applicant’s transcript includes data from dozens of classroom tests and other assessments, such as essays and science experiments, which provide a much, much deeper and richer picture of academic preparation and motivation than any one-time exam, particularly one that is based largely on filling in multiple-choice bubbles.

Once again, this approach is viewing standardized tests as merely “predictors” instead of “measurements.” It seems absurd to think there should not be some type of standardized examination administered in high school, something that is the same for all students, to measure what they are learning during their secondary education.

With the wide disparity in the quality of high schools, the resources available to students, the subjectivity involved in grading, and the increase of grade inflation at the high school level, it seems overly simplistic to base all admission decisions solely on high school grades. You need a standard measuring stick as PART of the process.

You can quibble over whether or not the SAT and ACT should be that exam, but they are the best developed tools we have at this point for that purpose. And, overall, they actually do a pretty good job. No exam will ever be perfect or capture the essence of every student. That is why it remains only PART of the process and not THE process by which students are admitted.

Q) So what is the result of coaching on the ACT and the SAT? The College Board says the SAT is not really coachable. I’ve been told scores can rise a few hundred points for some kids who are coached.

Many test preparation firms "guarantee" gains of 300 points or more on the three-part SAT (score scale 600-2400) and 3 to 4 points on the ACT (scale 1-36). Improvements of this magnitude could have a significant impact on a candidate’s chances for admission at selective colleges and result in thousands of additional dollars in so-called "merit" scholarships, which are often improperly based on test scores.

If these promised score increases are legitimate, they give children from families who can afford $1,000 for a basic course given by Kaplan or Princeton Review, $5,000 for individual SAT tutoring, or even $15,000 or more for an intensive college-prep package another huge leg up in the admissions process.

Of course, the test manufacturers deny that such score changes are possible. That’s no more surprising than tobacco companies stating that cigarette smoking does not cause cancer: it’s in their self-interest.

As long ago as 1955, an annual report by the College Board, the SAT’s sponsor, concluded, "If the Board’s tests can be regularly beaten through coaching, then the Board is itself discredited."

A number of studies summarized in FairTest’s report "The SAT Coaching Cover-up" demonstrate that good test preparation programs raised scores on the old two-part SAT by 100 points or more (scale of 400 - 1600), significantly more than the testing industry says is possible but less than coaching companies claim.

There’s still no definitive answer to the question, "How much does coaching help?" ...

It is a real stretch to use what test prep firms “claim” they can do for students as part of the argument for how “coachable” the exams are. There are three primary points to consider.

First, even with coaching, my experience is that most students improve one or two points on the ACT or 50-60 points on a section of the SAT. You very rarely see huge jumps between a “non-coached” test and a “coached” test. And, if a student is coached before they ever take an exam, it’s impossible to measure how much the coaching actually helped.

Second, most colleges would tell you that going up a point or two on the ACT or 50-60 points on a section of the SAT isn’t that significant a difference when it comes to most admission decisions, unless perhaps you are talking about the MOST selective colleges in the country. And even then, only about 10% of the colleges fall into that category, hardly a widespread occurrence of “coaching” making a huge difference in admission outcomes nationally.

Third, not only are more organizations offering free “coaching” to underrepresented populations, but almost every college already accounts for the fact that, in most cases, wealthier students are more likely to do well on standardized exams than poor students.
For that reason, almost every college, even the most selective, will look at a test score differently for a student coming out of a very affluent suburban high school versus a student coming out of an under-funded public high school in the inner-city. Again, you don’t have to be test-optional to account for these differences. Colleges have been doing it for years, even before test-optional policies became more common.

Q) Of the schools that have dropped the SAT admissions requirement, how many are large schools that get tens of thousands of applications?

Many large public universities, such as the University of Arizona, have never required SAT or ACT scores. Others, such as George Mason [University] and Christopher Newport [University] in Virginia and Salisbury [State University] in Maryland, have recently eliminated test score requirements for applicants with top-level high school records.

The University of Texas system does not consider SAT or ACT results for in-state students who graduated in the top 10% of their high school classes, a group that now makes up more than 75% of all enrollees at the flagship Austin campus.

As a side note, this was mandated by the Texas legislature as a way to ensure diversity representation at state institutions while dealing with changes to affirmative action laws; and it is currently being revisited. But it had nothing to do with testing or the validity of testing.

On the private sector side, Wake Forest [University in North Carolina] recently earned substantial favorable attention for dropping admissions testing requirements.

Colleges gaining “favorable attention” is part of my concern.

Are they going test-optional because they really feel it’s a significant detriment to how they run their admission process, or are they more interested in the public relations benefit, the bump in applications, and the increase in reported average test scores?

It’s probably a mix of both for colleges that go test-optional, with some leaning more toward one side than the other. Of course, colleges can do whatever they want in terms of their own admission process. Some colleges don’t accept test scores from any applicant, which is the rarest stance to take, but one that at least makes more sense.

Test-optional is riding the fence; it’s having your cake and eating it too. And, most alarmingly, it encourages even more admissions “gamesmanship” on both the student and college side, which is never a good thing for the process.

Research has shown only a handful of colleges collect all test scores after enrollment and then report a true test average for the entering class. The vast majority of test-optional colleges are only reporting the average test score of those who submitted scores, typically the students who scored higher on the exam.

Yet there is no mention of this statistical fact in guidebooks, websites, or brochures.

For some colleges, this provides an inaccurate portrayal of the “average student” academic profile. In the attempt to provide students and parents with the most accurate information possible to assist good decision-making, test-optional is going in the opposite direction.

A full list of nearly 850 test-optional institutions, including liberal arts colleges, large universities, and specialty schools, is available free online ....

A major new book, "Crossing the Finish Line: Completing College at America’s Public Universities," co-authored by former Princeton President William Bowen, adds important data to the debate about the value of test-scores at large schools. Among the most relevant findings (p. 226):

- “High school grades are a far better incremental predictor of graduation rates than are standard SAT/ACT test scores”

None of this is news to anybody; it has already been addressed above.

--“Overly heavy reliance on SAT/ACT scores in admitting students can have adverse effects on the diversity of the student bodies enrolled by universities”

This doesn’t say colleges are doing it, it just says doing so CAN have adverse effects, which is why most colleges DON’T do it. Before test-optional was ever a movement, you could have gone to any college campus in the country and found that the average ACT or SAT score for African-American and Latino students was lower than the Caucasian and Asian-American population. Colleges were already accommodating for the differences in testing among diverse populations well before test-optional was even a discussion.

-- “The strong predictive power of high school GPA holds even when we know little or nothing about the quality of the high school attended.”

It isn’t news to state that “A” students in high school tend to be “A” students in college, and so on. However, just like testing, grades aren’t always a perfect predictor either, especially when it comes to students earning good grades at weak high schools and then trying to attend highly challenging academic institutions when they haven’t had the advantage of a strong high school curriculum.

This research is already encouraging several large universities to review their admissions testing requirements.

Q) A question about ACT. I’ve talked to test prep folks who say that the ACT is fairer than the SAT because it supposedly tests what kids have learned, unlike the SAT. Is this true or not true? If true, why wouldn’t it be useful in helping predict college success?

Yes, the content of the ACT is closer to the material a student has covered in high school (in fact, it is based on a national curriculum survey). But the test-makers’ own research shows that the ACT is neither more accurate nor fairer than the SAT in predicting college grades. The most likely reason for the tests’ relative weakness is that no half-day, largely fill-in-the-bubbles exercise can capture key elements of academic success such as motivation, perseverance, work habits and coping skills.

Again, this focuses only on the exam’s use for predicting instead of measuring. And no one at any college believes a student is solely represented by one test score from one particular Saturday exam. To assume so is overly simplistic.

Q) So what are parents and students to make out of all this? Is there something a parent can actually do?

College-bound students and their parents should understand that there are now more options than ever in dealing with the standardized testing hurdle for undergraduate admissions. Nearly one-third of all colleges and universities do not require SAT or ACT scores from all or many applicants.

The test-optional list includes dozens of top ranked liberal arts schools, many large public campuses, and specialty schools focused on the arts, religion, music, design and more. The existence of so many test-optional alternatives can significantly reduce the pressure and anxiety about scoring high on one-shot exams. Students can, in essence, opt out of the test-prep game by focusing on these schools, which recognize "you are more than a score."

The most dangerous message in all of this is that it leads students to believe they can’t get a fair admission assessment at colleges where test results are required. Reminding students about all available options is always a good idea.

But the very end of Mr. Schaeffer’s comments shows his belief that ONLY test-optional colleges can view students as “more than a score,” and that is simply not true, not by a long shot.

I have told Mr. Schaeffer in the past that I agree with him that test scores should not be abused, but that they can still provide some valid and important information if used properly. Bob’s response to me was that while my college may be treating test results the right way, there is no guarantee that other colleges are doing the same. From that perspective, you can see how the mission of FairTest then seems to be more about forcing testing completely out of the picture instead of advocating for colleges to use the tests appropriately.


Categories:  College Admissions, Standardized Tests  | Tags:  ACT, SAT, college admissions  


The best, most cogent response to FairTest and would-be opponents of standardized college admissions tests I've ever seen.

Posted by: patrickmattimore1 | December 7, 2009 8:22 AM | Report abuse

Outstanding, Mr. Bankston.

Posted by: jane100000 | December 7, 2009 4:54 PM | Report abuse

As I recall, the SATs had the same problems as other standardized tests--the problems we talked about in recent columns. A standardized test is a standardized test--most of them are made up by the same few companies, using the same techniques.

Posted by: opinionatedreader | December 8, 2009 7:42 PM | Report abuse



© 2010 The Washington Post Company