Goldman noted that Ranson relying on Copilot for "what was essentially a numerical computation was especially puzzling because of generative AI's known hallucinatory tendencies, which makes numerical computations untrustworthy."
Because Ranson could not adequately explain how Copilot works, Schopf took the extra step of trying to use Copilot himself to reproduce the estimates Ranson had submitted, and he could not.
The court repeatedly entered the same query into Copilot, "Can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004 through January 31, 2021?", and each time Copilot generated a slightly different answer.
This "calls into question the reliability and accuracy of Copilot to generate evidence to be relied upon in a court proceeding," Schopf wrote. //
Until a bright-line rule exists telling courts when to accept AI-generated testimony, Schopf suggested that courts should require disclosures from lawyers to stop chatbot-spouted inadmissible testimony from disrupting the legal system.
Goldman suggested that Ranson saved little effort by employing Copilot, and did so in a way that damaged his credibility in court.
"It would not have been difficult for the expert to pull the necessary data directly from primary sources, so the process didn't even save much time—but that shortcut came at the cost of the expert's credibility," Goldman told Ars.