Can Artificial Intelligence Be Trusted to Get the Work Done?

Professor Daniel Schwarcz is Leading Research on the Use of AI in Legal Practice and Financial Services

By Alexander Gelfand

Professor Daniel Schwarcz
Photo: Tony Nelson

Can AI be trusted? Two new articles by Professor Daniel Schwarcz, Fredrikson & Byron Professor of Law, and his colleagues try to answer that question. Both articles focus on generative AI, tools such as ChatGPT that produce new content using large language models.

Proponents of generative AI say that it will revolutionize most industries. But skeptics emphasize its limitations and risks: In addition to hallucinating — fabricating inaccurate output — generative AI models are “black boxes.”

“Because we don’t understand how it works, the ways it can be relied upon are not always transparent, and how it can result in harm is not always easy to regulate and monitor,” Schwarcz says.

In the first study, “AI-Powered Lawyering: AI Reasoning Models, Retrieval Augmented Generation, and the Future of Legal Practice,” Schwarcz and colleagues at Minnesota Law, the University of Michigan Law School, and the firm Ogletree Deakins ran a randomized controlled trial with a group of 2L, 3L, and LLM students at both universities. They sought to evaluate generative AI’s ability to assist lawyers in everyday tasks such as drafting a client email and analyzing a complaint.

While the AI tools’ accuracy was mixed, they boosted participants’ productivity by 34 to 140 percent. They also improved the clarity, organization, and professionalism of students’ work. A reasoning model called o1-preview enhanced the quality of legal analysis, while Vincent AI drove down the number of hallucinations.

“The results substantiate a lot of the hype,” says Schwarcz, who thinks such tools will soon become fundamental to the practice of law.

Still, Schwarcz would not want his own 1L students to use generative AI. “These tools excel at producing output that looks great,” he says. “And if you don’t already have expertise and an understanding of the underlying problem, it’s almost impossible to know whether it’s good or not.”

This facility for making questionable content seem convincing led Schwarcz to consider how best to regulate generative AI in another realm where the technology’s persuasiveness could lead to trouble: financial services.

As Schwarcz and his co-authors at the University of Michigan and the University of Pennsylvania explain in “Regulating Robo-advisors in an Age of Generative Artificial Intelligence,” financial institutions seek to develop generative AI robo-advisors. But while these tools could expand access to financial advice, their propensity to hallucinate and their capacity to sell people inappropriate financial products could also cause significant harm.

“There’s so much gain to be had, but also so much risk,” Schwarcz says, and current regulation governing financial robo-advisors is limited.

Schwarcz proposes requiring a license, granted by existing sectoral regulators, to use generative AI to match people with financial products or services. The AI systems could be audited to ensure that they make appropriate recommendations and that consumers know they are dealing with chatbots rather than real people.

Schwarcz further argues that if generative robo-advisors offer fully automated financial advice, the firms that deploy them, and the companies that develop them, should adhere to the highest standard of conduct, just as flesh-and-blood financial advisors do.

“The highest standard of care that would apply to a human should apply to the machine,” Schwarcz says.

Because while generative AI can do a lot of good, it can’t be entirely trusted — no matter what you’re asking it to do.

Minnesota Law Magazine

Spring 2025