EFF to NSF: AI Action Plan Must Put People First

This past January the new administration issued an executive order on Artificial Intelligence (AI), taking the place of the now-rescinded prior executive order and calling for a new AI Action Plan tasked with “unburdening” the current AI industry to stoke innovation and remove “engineered social agendas” from the industry. This new action plan for the president is currently being developed and open to public comment through the National Science Foundation (NSF).

EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no algorithm should decide who keeps their job or who can stay in the United States. Next, generative AI policy must be narrowly focused and proportionate to actual harms, with an eye on protecting the various public interests at stake. And finally, we shouldn’t entrench the biggest firms and gatekeepers with AI registration schemes.

Government Automated Decision Making

US purchasing of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI firms, but risks infringing the civil rights of all people subject to these automated decisions.

These harms aren’t hypothetical; we have already seen a shift to embrace experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and to use the results to inform decisions about their continued employment.

Automating crucial judgments about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more work to correct inaccuracies; at worst, they produce arbitrary and discriminatory outcomes obscured by the black-box nature of these systems.

Instead, the adoption of such tools must be done through a robust public notice-and-comment process, as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil, and identifies when the use of such AI tools is inappropriate or dangerous.

The AI Action Plan should also favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.

Don’t Enable Powerful Gatekeepers

Spurred by general anxiety about generative AI, lawmakers have drafted sweeping regulations based on speculation, with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we cautioned against back in 2023.

For example, recent bills would expand copyright law to favor corporate giants over everyone else’s expression. One even includes a notice-and-takedown scheme of the kind long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements, such as watermarking, that are riddled with practical points of failure.

Among these dubious solutions is the growing prominence of licensing schemes, which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to hand over to the bully.

This is the wrong approach. Looking for easy solutions like expanded copyright ends up hurting everyone, particularly smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary uses doesn’t remedy the underlying imbalance of power between labor and oligopolies.

People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.

You can read our full comments here.
