Prepping for a test doesn't give you any practical knowledge or expand how you think about things; it's just about getting familiar with the test's format and the way the question authors think. It'd be a bit like getting "better" at poker by always playing with the same people and learning their tells. It's "gaming" it because you're not better at what the test is supposed to measure; you're just better at what it actually does measure.
Prepping for the test is a valuable signal when you're talking about the entire pool of high school students. And it does not gate opportunities by economics as much as you'd think: https://www.nytimes.com/2017/10/25/magazine/asian-test-prep-.... Immigrant families eligible for reduced price lunch are able to scrounge up the money for these tests.
Give ten kids from the same class a self-study online course and give another ten a private tutor, and they won't see the same score distribution. Where's the signal there?
SAT results don't have a line item that notes the amount of wealth or privilege that went into preparing.
> Immigrant families eligible for reduced price lunch are able to scrounge up the money for these tests.
Some families can't. Other families aren't aware, or aren't interested. But we judge the kids in the family for that.
That said... I don't know how the _new_ system will work at fighting that privilege -- there are still lots of ways for it to disguise itself. But we have to at least acknowledge the issues with the SAT.
But, to me at least, this goes beyond privilege. This is about diversity of skills and diversity of learner profiles and moving away from linear quantification of potential.
The effects of studying "for the test," as you put it, have been measured: improved test-taking skills tend to be worth ~30 points, which is not that significant. This matches my anecdotal experience and that of people I know who run SAT prep courses.
It's far more effective to actually teach students the material, either by teaching them new concepts or by firming up their understanding of ones they've already been exposed to. Particularly in Math, many students in high school have a shaky understanding of fractions or algebra. Firming up these foundations can often lead to a >100-point increase (given sufficient lead time). Those foundations are something the test is actually looking for, since numeracy and strong algebra skills are strong predictors of success in Calculus.
It's true that tutoring grants unfair advantages, but this will be true of any system that uses skills as part of its selection criteria.
> The effects of studying "for the test," as you put it, have been measured: improved test-taking skills tend to be worth ~30 points, which is not that significant. This matches my anecdotal experience and that of people I know who run SAT prep courses.
I see this often, but I suspect that it is lumping "took a prep class for 1 hour on a Saturday" and "spent 6 hours a week for 52 weeks with a tutor" into the same category.
Any tutor who only gets a 30 point increase won't be seeing much business among the folks I know.
However, I do agree with you that firming up skills is a remarkably quick way to get a significant boost. Being able to add 2 + 2 and come up with 4, repeatedly and accurately, is often a big deal on these tests, even with a calculator.
I'm familiar with that 30 point differential because it shows up in research.
You're right that 30 points isn't that much if you're thinking about the whole distribution, but I guarantee it can be significant around the selection threshold. That threshold might be implicit or explicit, but it's there, and if it's enough to nudge applicants past it, it's significant.
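A quick back-of-the-envelope sketch of that threshold effect. All the numbers here are hypothetical (SAT-like scores drawn from a normal distribution with mean 1050 and standard deviation 200, and an illustrative cutoff of 1400); the point is only that a modest bump produces a disproportionate change in who clears a cutoff sitting in the tail:

```python
import random

random.seed(0)

# Hypothetical, illustrative figures -- not real SAT data.
MEAN, SD = 1050, 200   # assumed score distribution
CUTOFF = 1400          # assumed selection threshold
BUMP = 30              # test-prep effect from the research cited above
N = 100_000            # simulated applicants

scores = [random.gauss(MEAN, SD) for _ in range(N)]

# How many applicants clear the cutoff with and without the bump.
admitted = sum(s >= CUTOFF for s in scores)
admitted_with_bump = sum(s + BUMP >= CUTOFF for s in scores)

print(f"clear the cutoff without prep:  {admitted}")
print(f"clear the cutoff with +{BUMP}:     {admitted_with_bump}")
print(f"relative increase:              {admitted_with_bump / admitted - 1:.0%}")
```

With these made-up parameters, the +30 bump raises the number of applicants above the cutoff by roughly a third, even though 30 points is small relative to the whole distribution. That's the sense in which a "small" prep effect matters at the margin.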
Sure, but having read a large chunk of educational literature I'm not aware of any alternatives with fewer distortions from parental aid. Grades correlate more highly with a good home life than test scores do, for example.
As long as we have "prestige" universities there's going to be some form of skills testing, and no one has ever designed an un-gameable test that can be administered nationally. The question we have to ask ourselves, then, is how we can reduce gameability, and I doubt we can make improvements that are more than incremental.
The JEE solved this problem by changing the format of the test each year. The format is not disclosed before the exam, so it is really hard to form a meaningful strategy that consistently helps you.
> That said... I don't know how the _new_ system will work at fighting that privilege -- there are still lots of ways for it to disguise itself. But we have to at least acknowledge the issues with the SAT.
I'm in favor of using tests like the SAT as cheaper diagnostic tests, to help with student placement and accommodations, not for admissions. It's too bad this is being lost with the removal of the testing requirements, but I guess it doesn't matter much, as the tests were never used this way in the first place, despite providing this information. https://cepa.stanford.edu/sites/default/files/ACT%20Paper%20... (note that I don't agree with the conclusions of this paper, merely the identified diagnostic criteria)
> This is about diversity of skills and diversity of learner profiles
I might believe that if I didn't believe that the diversity would mostly be token, with the majority of students in selective schools fitting a handful of templates.