The debate over openness in artificial intelligence (AI) is the topic of a forthcoming article in the Wisconsin Law Review, co-authored by Minnesota Law Professor Alan Rozenshtein, with colleagues Parth Nobel (Ph.D. candidate, Stanford University) and Chinmayi Sharma (professor, Fordham School of Law).
“Unbundling AI Openness” explores the complexity of making components of an AI system available for public inspection and modification, an issue of increasing importance as policymakers seek to balance innovation, access, safety, and national security with the rapid advancement of AI.
“AI is an extremely transformative technology, so it matters a lot how it’s developed and who controls it,” Rozenshtein says. “The decisions we make today around AI openness will have profound implications for the future.”
The terms “open” and “closed” refer to how easy it is for others to understand and modify a technology independent of the people who originally created it. “The easier it is to adapt or inspect it, the more open it is,” he says. “The harder, the more closed.” But he cautions that it is a dangerous oversimplification to categorize AI as either open or closed.
“Open does not mean just one thing when it comes to AI,” he says. “AI is a composite technology, created and controlled by different stakeholders, often with competing interests. We have to have a broader understanding of openness to effectively govern in this arena.”
While openness can invite new entrants, foster innovation, and democratize access, Rozenshtein notes that openness in AI can also lead to a consolidation of power by technology giants or allow actors to shortcut the years of work and significant investment it takes to build a system.
“Many people say open is good, closed is bad,” he says. “But that’s simply not accurate. It depends on which components are open or closed and what goals you are trying to achieve. Is it safety or national security? Or are you trying to drive economic growth? Do we secure a system by hiding its details or by broadcasting them and challenging people to find flaws in that system? Who controls the data used to train these models? It’s a tremendously complex topic.”
Rozenshtein and his co-authors explore how each component of AI has its own spectrum of openness and how that informs trade-offs within and between policy goals. They also make recommendations for the specific legal and regulatory levers that policymakers can use.
“Our aim is to equip policymakers with the toolkit to consider trade-offs,” he says. “There is no easy takeaway. Or, as we say, there are no solutions, only trade-offs.”
Rozenshtein points to a particularly thorny example. “What if the same open system that allows people to create art can also be used to create nonconsensual deepfake pornography? If you, say, close that system to prevent harmful content, you might end up with one or two companies controlling how digital art is made. This is the paradox of open AI. It’s both a wellspring of innovation and a source of digital malice.”
Despite the complexity AI presents, Rozenshtein is an ardent supporter of the technology. “I truly believe that AI is great for humanity and one day will be equated with the invention of electricity or the printing press in terms of importance. But it’s a tool that we must handle carefully if we are to shape a future that is in the public’s best interest.”