I'd also argue that "hallucination" is, at least in some form, pretty commonplace in courtrooms. Neither lawyers' nor judges' memories are foolproof, and eyewitness studies show that people don't even realise how much their brains make up on the spot to fill in blanks. If nothing else, I expect AI to raise awareness of the human flaws already present in the current system.
That the legal system has flaws isn't a good argument for letting those flaws become automated. If we're going to automate a task, we should expect the automation to do better, not worse or merely as bad (and at this stage it would definitely be worse).