A recently released U.K. government report on the safety of artificial intelligence (AI) is based on contributions from 100 experts from 33 nations and intergovernmental organizations, including faculty and students from the Princeton School of Public and International Affairs.
Billed as “the world’s first comprehensive synthesis of current literature on the risks and capabilities of advanced AI systems,” the report outlines how AI could exacerbate risks such as the manipulation of public opinion, biological and chemical attacks, environmental impact, copyright infringement, and loss of privacy.
Although it does not make policy recommendations, the report addresses these challenges and offers a global perspective on them. Participants from Princeton SPIA include Arvind Narayanan, professor of computer science and director of the Center for Information Technology Policy (CITP); Edward Felten, Robert E. Kahn Professor of Computer Science and Public Affairs; and Sayash Kapoor, a computer science Ph.D. candidate at CITP. Jonathan Barry, MPA ’25, served as a project manager for the report.
A few weeks after the report’s release, U.S. Vice President J.D. Vance spoke at the AI Action Summit in Paris, signaling a shift in U.S. policy from focusing on AI safety to seizing AI opportunity.
“The Trump administration believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, health care, free expression, and beyond. And to restrict its development now would not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations,” Vance said.
According to Kapoor, the report provides an important, evidence-based review of AI risks that nations should take into account as they embrace AI’s potential.
Over the past few years, there has been a significant shift away from focusing on evidence-based safety toward arguments based on the speculative and existential risks of AI. The vice president was pushing back against the myth that we must lock down AI in order to make it safe, Kapoor said. “But the danger in completely disavowing safety is that companies will adopt even riskier ways of productizing the technology.”
Kapoor led the part of the report on open AI foundation models, which built on a CITP workshop in September 2023 that addressed the risks and benefits of such models. He also co-authored an essay on this topic, as well as a book with Narayanan, “AI Snake Oil,” which examines the risks and hype surrounding AI.
In his portion of the U.K. report, Kapoor makes the case that the level of risk posed by these models should be assessed relative to what is already available through existing systems.
For example, in 2023, a class at the Massachusetts Institute of Technology showed that malicious actors could use a large language model to help create a bioweapon. But according to Kapoor, all of the information involved in that exercise was also accessible on Wikipedia.
“If we are trying to analyze the real risks of this technology, we should focus on the marginal risk,” Kapoor said. “AI is like other technologies we use. It has both good and bad uses.”
Still, Kapoor said, the significance of the report is that it represents a consensus among experts who hold “widely divergent opinions.”
Yet laying out areas where experts disagree is valuable, he said, because those tend to be the areas where more work is needed to deepen our understanding.