The lawsuit against the University of California alleges that it discriminates against students by requiring SAT or ACT test scores (“Students, Community Groups Sue University of California to Drop SAT, ACT,” The Wall Street Journal, Dec. 11). The plaintiffs say the tests act as a proxy for wealth, race and privilege.
I have no brief for either test, but I base my opinion on the way the tests are constructed, rather than on their outcomes. Both tests appeal to colleges and universities because they allow students to be ranked against each other. If the tests were loaded up with items that measured only the most important material taught effectively by teachers, scores would likely be bunched together, making comparisons extremely difficult.
To avoid that possibility, test makers include items that measure what students bring to class rather than what they learn in class. They do so because they've found that such items produce score spread. That is unfair, but it is not illegal. Yet the plaintiffs charge that this disparate impact is illegal. I maintain that they won't be happy unless all students post equal scores.
I thought that at least part of the purpose of the SAT is to test a student's academic ability as opposed to the student's knowledge. There will definitely be substantial overlap, given that the academic-ability questions favor students who have accumulated substantial knowledge.
In any event, I always assumed that the main purpose of the SAT was to help colleges compare students with similar GPAs and/or class ranks who attended different high schools. A student with a 4.0 GPA who is second in the graduating class of 500 at a high-SES suburban high school is more likely to be an academic standout at an academically competitive college than a student with a 4.0 GPA who is second in the graduating class of 100 at a low-SES rural high school or a student with a 4.0 GPA who is second in the graduating class of 300 at a low-SES inner-city high school. “More likely”, not definitely. The SAT scores give the college a shot at identifying the high-achieving/very-academically-talented student at the low-SES rural high school or at the low-SES inner-city high school; conversely, the SAT scores give the college a shot at identifying the only moderately-achieving/moderately-academically-talented student at these two schools whose high GPA and high class rank were the result of weak competition at the school rather than achievement/talent.
In my anecdotal experience, students with high GPAs/class rank at weak high schools generally do not perform in college at the high level that those high GPAs/class rank would predict.
Labor Lawyer: Although they overlap, aptitude and achievement tests are not identical. The change in the name of the SAT over the decades is evidence of that. The College Board is able to sell the SAT to colleges and universities because it has convinced them that the test allows students to be ranked. The trouble is that if the SAT were loaded up with items measuring only the most important content effectively taught by teachers, scores would likely be bunched together, making comparisons difficult. So essentially the SAT presents a false picture of what students have learned in the classroom. Instead, it measures largely what students bring to the classroom in terms of their innate intelligence.
Assuming that the SAT uses non-course-content questions to create a scoring spread, the underlying issue is whether the SAT scoring spread accurately predicts academic performance in college — at least that’s what I’ve always thought the SAT debate was about.
If, as the SAT people argue, the SAT does accurately predict academic performance in college, then it’s largely irrelevant how/why the SAT does this — at least until/unless someone comes up with a better way to predict academic performance in college.
Obviously, high school GPA and class rank predict academic performance in college. It seems equally obvious — to me, at least — that SAT scores predict academic performance in college. None are perfect predictors, of course — there will always be some students with high GPAs and/or class rank and/or SAT scores who bomb in college and vice versa.
The critical issue in the SAT debate is: To what extent does having an applicant's SAT scores improve the college's ability to predict the applicant's academic performance in college beyond what the college could predict using only GPA/class rank? In my opinion, where Applicants A and B attend similar high schools, the SAT scores add relatively little; where they attend very dissimilar high schools, the SAT scores add a lot.
Labor Lawyer: Bates College was the first to make test scores optional for applicants. In 2005, it released the findings of its 20-year study, which found virtually no differences in the four-year performance and on-time graduation rates of 7,000 students. Since then, more than 800 colleges and universities have adopted the same policy, with similar outcomes. They've all reported that grades and courses taken are far better predictors of success. It's hard to argue with the experience of so many schools.
Bates is a pretty expensive private college, I think. If so, there were probably relatively few Bates students during the 20-year study who came from low-SES high schools. Said another way, most of the Bates students during the 20-year study came from relatively high-SES high schools — schools that were roughly comparable in terms of overall academic competitiveness/quality.
In my view, GPAs/class rank (and perhaps courses taken, but I’d give less weight to this factor) are better predictors of college academic success than SAT. However, when a college is comparing applicants from dissimilar high schools (a high-SES/academically strong high school vs. a low-SES/academically weak high school), it is unfair to the applicants from the high-SES/academically strong high school to rely exclusively on GPAs/class rank. The student with the 3.5/85th percentile class rank at the high-SES school might well have had a 4.0/98th percentile class rank at the low-SES school. Conversely, the student with a 4.0/98th percentile class rank at the low-SES school might well have had a 3.5/85th percentile class rank at the high-SES school. Giving some weight to the SAT allows the college to more fairly compare these two applicants.
Cynically, it's possible that many colleges feel pressure to admit more students from lower-SES high schools for PR, political, or social-justice reasons. If so, eliminating SAT scores would be an easy way to increase admission rates for low-SES high schools relative to those for high-SES high schools. That's what I think is actually happening.
Personally, I'm on the fence about whether it is good for society (or appropriate for individual colleges) to give admission preferences to applicants from low-SES high schools. At a minimum, some of the applicants who would benefit from such a preference would be children from higher-SES families who for whatever reason attended a relatively low-SES high school, and whose preference would not be warranted by social-justice considerations. If a college is going to give an admissions preference to low-SES applicants for social-justice purposes, that preference should be given only to applicants who are in fact low-SES (not anyone who attended a low-SES high school), and it should come in the form of adding X points to the bottom-line admissions score rather than eliminating SAT consideration for all applicants.
Labor Lawyer: Yet more than 800 other colleges and universities report results similar to Bates's. That being the case, it's hard to argue that the SAT has much predictive value. Perhaps it does in the first year, when students are adjusting to college, but its value fades rapidly thereafter. Today's Los Angeles Times (Dec. 22) has a front-page story about the issue. Strangely, it fails to mention the Bates study, relying instead largely on anecdotal evidence.