This will be epically bad if the Australian “robodebt” saga is any indicator. Robodebt was a government software solution that would automate previously manual assessments for overpayment and issuance of debt notices to welfare recipients. It’s exactly the kind of scheme that the new US government would be keen to implement. Claw back welfare overpayments and do it while reducing the size of the public service. What’s not to love?
Well, it didn’t work out so well for the government that implemented it, and some of the lessons that came out of it should serve as a warning to any government seeking to go down the same path.
Note: given Australia has less than a tenth the population of the US, add a zero to all the numbers in the article to get a proportionate sense of scale.
And don’t forget the US’s own automated foreclosure schemes where banks automatically and fraudulently foreclosed on homeowners in good standing, literally stealing their houses and ruining their lives.
Bold. Do we have sufficient confidence in AI led anything at scale, for extended durations and across crises of different kinds, to feel comfortable with how this goes?
Personally, I doubt it. PRK laser eye surgery (purely as an example) had a four-year lag between FDA approval (1995) and acceptance by the USAF (1999). So a single functional change, one which itself underwent massive testing at scale and across cohorts to gain FDA approval, still faced a delay to adoption inside a government body with a strategic interest in it.
We haven't even begun to do anything remotely like FDA pre-acceptance trials for AI-assisted government. This isn't a space for "move fast and break things", and I would think there is no mandate, nor even a legal basis, to abrogate functional responsibility this way.
I scent many fine lawsuits. I suggest that even the current SCOTUS would be wary of machine-generated outcomes applied to the state: after all, their own existence depends on the discretion they exercise in interpreting laws and the Constitution. I would be surprised if they really felt that AI interpretations of law exceeded theirs, purely on selfish grounds.
One might as well suggest the senate and house could be replaced by AI. Or, Musk himself.
What an incredible opportunity for regulatory capture and graft. I can see why the "Let's just delete every regulation, and rediscover why we have them all via a lot of blood" people would be eager to push it into government.
Who owns the models, what are they trained on, how are they vetted, how do we confirm that they don't have backdoors? If we're relying on "Trust me bro, I would tell you if I was doing a conflict of interest" then we've already lost.
https://en.wikipedia.org/wiki/Robodebt_scheme