Opinions on regulations
March 15, 2025
Office of Science and Technology Policy
2415 Eisenhower Ave.
Alexandria, VA 22314
Re: Federal Register No. 2025-02305 Request for Information on the Development of an Artificial Intelligence ( AI ) Action Plan
Submitted Online
Opinions made by the R Street Institute’s Cybersecurity and Emerging Threats Team in September.
Request for Information on the Development of an Artificial Intelligence ( AI ) Action Plan
I. Overview of Comments:  ,  ,  ,  ,  ,  ,  ,  ,  ,
We appreciate the opportunity to respond to the White House’s Request for Information ( RFI ) on the Development of an Artificial Intelligence ( AI ) Action Plan.
The R Street Institute ( RS I), a nonpartisan, non-profit public policy research institute with a headquarters in Washington, D.C., is committed to promoting free markets and a limited, effective government. It agrees with the Trump administration that “artificial intelligence ( AI ) will have countless revolutionary applications in economic innovation, job creation, national security, healthcare, free expression, and beyond. ]1 ]
We recognize the critical steps this administration has taken within the first few months of its name to establish America’s leadership in AI and technical development. Notably, President Joe Biden’s rescission of 2021 Executive Order ( EO ) 14110, which imposed stringent rules on the development of private-sector AI and had numerous implications for cybersecurity, signals a welcome return to an open regulatory environment and pro-innovation policies. ]2 ] President Donald J. Trump’s EO 14179 ensures that unnecessary compliance burdens do not unduly hinder private investment in AI. EO 13859, which committed federal resources to AI research and development ( R&, D), established AI research institutes, and provided regulatory guidance to ensure AI continued to be an engine of U.S. economic and national security growth, builds on President Trump’s first-term AI legacy, particularly EO 13859, which provided regulatory guidance. ]4 ]
Furthermore, President Trump’s recent announcement of the Stargate joint venture—a private-sector-led initiative with government support expected to drive up to$ 500 billion in private investment toward AI infrastructure across America—represents a landmark effort. [5 ] By accelerating the development of local AI system through policy support and regulatory support, Stargate is poised to strengthen America’s technical foundation and advantage in the ongoing global AI arms culture. This devotion, largely driven by private companies with the Trump administration’s assistance, ensures the United States continues to lead in advanced AI capabilities, semiconductor production, and next-generation computing, thereby securing our long-term scientific leadership and financial resilience.
At the Paris AI Summit in February 2025, Vice President JD Vance reaffirmed this idea, highlighting the urgent need to” see to this new frontier with enthusiasm rather than apprehension.” ]6 ] He contended that” …restrict]ing ]]AI’s ] development now, when it is just beginning to take off, would not just unfairly benefit incumbents in the space but would mean paralyzing one of the most promising technologies we have seen in generations”. ]7 ] His remarks underscore President Trump’s commitment to viewing AI as an opportunity rather than a risk—an approach that fosters technological innovation while ensuring national security and economic growth.
Importantly, Vice President Vance also raised the issue of the rising trend for people to stop developing AI in the name of security, arguing that” the Artificial potential will not be won by hand wringing about security, it will be won by building—from dependable power plants to the production facilities that can make the cards of the future.” ]8 ] While AI” safety” concerns have often been used to justify overly cautious or restrictive policies, Vance’s remarks distinguish between excessive precaution and the imperative of AI security. Strong AI security is vital to America’s success and leadership, making sure that AI-driven advancements are trustworthy, trusted worldwide, flexible, and resistant to exploitation rather than stifling innovation. As the Trump administration rightly acknowledges, maximizing AI’s prospective requires both technical passion and a security-first approach that strengthens our resilience while fostering financial prosperity.
In alignment with this vision, the RS I’s Cybersecurity and Emerging Threats ( CSET ) team—which focuses on the national security implications of individual, business, and government cyber risk—urges the development of an AI Action Plan that prioritizes three key areas:
- Through AI-driven protection capabilities, strengthening both our government’s cybersecurity and AI protection.
- Establishing a healthy data-privacy platform that protects customers without stifling innovation
- keeping America’s position as the world’s leader in AI and industrial development. ]9 ]
Although this post focuses strictly on the privacy and security issues related to AI, our RSI partner Adam Thierer has submitted a separate opinion addressing broader AI innovation and governance considerations. ]10] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ] ]
II. Cybersecurity  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,
Ongoing advances in AI are already transforming the cybersecurity landscape for both defenders and adversaries. On one hand, AI-driven tools can compress incident analysis from minutes to milliseconds and even identify novel threats through predictive intelligence, on the other hand, malicious actors can easily exploit the same tools. For instance, cybercriminal groups and advanced persistent threats coming from China, Iran, Russia, and North Korea have already used generative AI services to” translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.” ]11]
Foreign-developed AI models raise additional cybersecurity concerns in addition to these immediate threats, particularly as more adversarial countries invest in open-weight AI systems with weak security guardrails. For instance, the release of DeepSeek’s R1 model in January 2025 exposed grave cybersecurity failures, including jailbreaking vulnerabilities and leaked chat histories, raising alarms about how unchecked foreign AI deployments could be exploited to collect data, spread disinformation, or facilitate cyberattacks. ]12] To mitigate these risks, the United States must maintain leadership in AI development while exploring necessary restrictions on foreign AI models like DeepSeek and ensuring that critical AI infrastructure components do not fall into the hands of adversaries. Strengthening domestic AI capabilities is a national security imperative as opposed to just a matter of economic competitiveness.
Over the past two years, RS I’s CSET team has brought together experts from academia, industry, civil society, and government to examine the intersection of AI and cybersecurity. Our findings underscore AI’s growing role in cyberattacks and defenses, as well as its significant potential for national security applications. ]14] However, to harness these AI benefits fully, the Trump administration’s AI Action Plan must pursue a balanced and risk-based approach that mitigates legitimate and emerging cybersecurity threats without imposing restrictive regulations that undermine AI’s role and opportunity in cybersecurity. The following recommendations outline the unique role that federal policy can and should play in strengthening our national cyber resilience and protecting our AI innovation, which is especially important as adversarial countries like China seek to become the world’s AI leader.
- Address “gray areas” in the development of AI.
Given the rapid evolution of AI in cybersecurity, the AI Action Plan would be well positioned to clarify significant policy and compliance ambiguities. From acceptable methods of AI-driven security research to risk-management expectations for AI deployments, the plan should provide targeted guidance to provide clarity on gray areas in AI development and deployment. For example, the National Institute of Standards and Technology ( NIST ) and the Department of Homeland Security’s Cybersecurity and Infrastructure Agency ( CISA ) should define permissible actions for security researchers using AI. ]15 ] Clearer guidelines can both define and support the implementation of AI-driven vulnerability testing, red teaming, and threat hunting across public and private sectors, particularly for smaller and less-resourced entities that may lack the expertise to conduct such assessments effectively. For instance, standardized frameworks and toolkits could provide systematic guidance on AI red teaming, outlining best practices and permitted activities, and ensuring that even organizations without dedicated cybersecurity teams could identify and mitigate AI-related threats. By removing legal uncertainty and establishing clear guardrails, these guidelines would empower researchers to strengthen AI security without fear of liability.
Additionally, the Trump White House should direct NIST to establish best practices and risk tolerance standards for AI in security. ]17] This guidance would help federal agencies and the private sector gauge how much risk is acceptable when deploying AI into mission-critical systems, guiding decisions on where human oversight or traditional controls might remain necessary. Moreover, the Department of Defense, in coordination with the National Security Agency, could also provide direction by expanding offensive cyber capabilities that leverage AI, thereby improving deterrence and clarifying the U. S. government’s role in preempting AI-enabled threats.
- Prioritize industry-specific frameworks
AI’s cybersecurity risks vary significantly between crucial infrastructure sectors and industries, making a one-size-fits-all approach ineffective. The AI Action Plan should direct CISA, in partnership with sector-specific agencies like the U. S. Food and Drug Administration, to develop tailored AI security frameworks for key sectors including energy, finance, and transportation. ]19] Each sector’s unique risk profile, from supply chain vulnerabilities to operational safety requirements, demands tailored guidance. For instance, an AI system that manages a power grid substation is threatened by grid disruption, which is different from what an AI system might encounter when it analyzes medical records or identifies financial fraud. ]20]
These frameworks, developed in collaboration with industry stakeholders, should not serve as compliance exercises; rather, they should provide a practical roadmap and best practices for sector-specific risk assessment and mitigation. They must also prioritize proactive defenses against AI-related vulnerabilities, such as adversarial attacks, data poisoning, and weaponization. ]21] By ensuring these frameworks remain flexible and adaptive, America can safeguard critical systems while continuing to lead in AI innovation. Additionally, the United States should actively promote these security-based AI governance principles on a global scale in order to stop foreign actors from imposing restrictive” AI safety” measures that could devalue innovation and sway the competitive landscape in their favor. Establishing American-led security standards would reinforce both domestic resilience and American leadership in shaping the future of AI governance.  ,
- Promote responsible AI use
Promoting responsible AI use means leveraging AI’s full potential for cybersecurity and other applications while ensuring that risks are managed pragmatically. Digital twins, emerging AI applications that simulate cyber threats and responses, are effective tools for evaluating system resilience, and they should be actively used, with all benefits being weighed against potential risks and limitations. ]22] Rather than leaning into panic over exaggerated fears, the AI Action Plan should prioritize evidence-based threats, such as adversarial attacks on AI models or data breaches. To support this, CISA should publish voluntary, use-case-specific guidelines that help end users distinguish between actual and imagined security risks and promote the use of fair AI security practices. Furthermore, continued investment in AI-driven cybersecurity R&, D is essential. Many of the most novel security solutions have emerged from small AI companies, and the AI Action Plan should explore ways to support industry leaders in advancing AI-driven cyber defense technologies. This innovative and risk-aware approach is in line with the Trump administration’s strategy to reduce costs while maximizing the potential for AI-driven solutions.
As the United States strengthens its AI security policies, we must recognize that adversaries will continue to misuse AI regardless of any restrictions imposed domestically. While guardrails are necessary, the United States cannot afford to hamstring itself with overly cautious laws that restrict innovation while foreign actors work to quickly advance their own AI capabilities. A balanced approach that mitigates real security threats while preserving AI’s role as a strategic asset for national defense and economic competitiveness is critical.
The AI Action Plan must take a risk-based approach that addresses probable security challenges while ensuring that the United States remains at the forefront of AI-driven cybersecurity. Policymakers should be aware that excessive security restrictions could harm it rather than improve it. While concerns persist about AI deployments in critical infrastructure, restrictions should not unintentionally hinder well-established AI applications that have long bolstered cybersecurity, such as anomaly-detection products that leverage machine learning. The United States must strike a balance between proactive security measures and the ability to use AI as a strategic asset in order to maintain its position as a leader in AI, protecting against emerging threats without placing unnecessary restraints on innovation.  ,
III. Data Privacy and Security:  ,  ,  ,  ,  ,  ,  ,  ,  ,
Data is the lifeblood of AI innovation and development, and AI requires both more and better data. Without effective privacy and security measures, however, this information can turn into a target for adversaries to harm and elude Americans. ]23] To protect individuals and promote trust in AI products and services, the AI Action Plan should prioritize the following actionable recommendations.
- Enact comprehensive federal privacy and data security provisions
Due to the absence of a federal privacy law in the United States, it is an outsider among developed countries. Instead, we rely on inconsistent state requirements that leave Americans either unprotected or under-protected. The development of AI-specific state and local laws is only worsening the situation, with the current patchwork of about 20 state privacy laws forcing industry to adhere to varying requirements. The AI Action Plan should articulate support for a clear national data privacy standard that would ensure all Americans have baseline protections and provide businesses with one set of rules to follow. ]25 ] Any guidance or rules should recognize that data uses, available protections, and privacy implications may vary between the development/training phase and the products and application phase.
Strong preemption and balanced enforcement mechanisms should be a part of such a law, which can never be abused. However, it is critical for any privacy action to remain focused on privacy without adding AI-specific provisions and to ensure that broader privacy provisions do not inadvertently or unnecessarily limit AI or data requirements, such as rigid data-minimization rules. After all, there is only one type of technology, and privacy regulations should apply to all kinds of technology. It would help to look at Texas and other states with existing privacy laws for inspiration rather than the European Union and its overreaching efforts.
Additionally, a privacy law should include data-security requirements. Some users of sensitive data are likely to safeguard it improperly because the only current requirements are sector-specific. This is amplified by the fact that countries like China have an interest in stealing Americans ‘ data for nefarious purposes and can leverage AI to quickly make sensitive inferences, such as who might be an intelligence asset.
- Leverage AI for data security
While AI can introduce new privacy challenges, it can also be a powerful tool for strengthening data security and compliance. AI-driven systems can automatically search, categorize, and safeguard large data stores, making sure sensitive information is mapped, protected, or deleted in accordance with privacy laws. ]28 ] These capabilities enhance compliance monitoring by detecting unauthorized data sharing or improper retention far more efficiently than manual reviews. Additionally, new methods like AI-powered anomaly detection can aid organizations in proactively identifying security threats and preventing data breaches from occurring before they occur. ]29 ] Given AI’s ability to enhance both defensive security measures and regulatory compliance, the AI Action Plan should encourage its responsible use in securing sensitive data.
- Promote privacy-enhancing technologies in AI development
AI can actively enhance privacy protections when it improves security, in addition to security improvements.
implemented responsibly. Differential privacy, federated learning, and homomorphic encryption are privacy-enhancing technologies ( PETs ), which enable organizations to extract insights from data while maintaining privacy. This ensures that personal information is kept private even as AI models learn from it. ]30] To drive broader adoption of PETs, the AI Action Plan should support public–private partnerships, issue guidance, and provide targeted incentives—including federal grant programs, research funding, and workforce training incentives—to advance their development. ]31] The AI Action Plan should also consider incorporating safe-harbor provisions or liability protections for organizations that adopt PETs in good faith, ensuring that companies are encouraged to implement privacy-first AI solutions without excessive legal exposure. Researchers and developers can continue to innovate by incorporating PETs into AI systems, such as automated data anonymization or encrypted computation, while ensuring that privacy is still a fundamental tenet of AI development and deployment. ]32 ]
Each of these recommendations is in line with the Trump administration’s strategy to” seize the opportunity to make the most of AI and its transformative potential” while “ensuring that all Americans benefit from it.” ]33] Rather than impose rigid, AI-specific rules that might inadvertently hinder technological progress, a comprehensive privacy provision with risk-based safeguards would provide clear guardrails for data use and consumer protections.
IV. Open-Source AI includes  ,  ,  ,  ,  ,  ,  .
Open-source AI has quickly emerged as a driving force of innovation, fostering collaboration and expanding access to advanced AI tools. Open-source AI enables researchers, startups, and large firms to build on shared advances by making model codes and weights publicly accessible, as opposed to creating systems and features from scratch. ]34] This approach has fueled a competitive AI ecosystem in which breakthroughs accelerate through collective contributions.
The line between open-source and proprietary AI is blurring, with major tech companies integrating open models into their development pipelines. As community-driven improvements rapidly improve AI’s benchmark performance, open-source AI is poised to overtake or even surpass proprietary models. ]36] Given its strategic importance to America’s technological leadership, the AI Action Plan should embrace open-source AI development and address potential cybersecurity and governance challenges. We suggest the following policy provisions to achieve this balance.
- Encourage secure deployment over blanket bans
Recent incidents, such as the DeepSeek-R1 model’s leaked data and jailbreaking vulnerabilities, highlight the need for basic cybersecurity hygiene in open-source projects. They also demonstrate that open models can be safely used when used in controlled environments. ]38] For example, Microsoft quickly sandboxed DeepSeek-R1 on isolated servers with strict access controls through its Azure AI Foundry platform, allowing researchers to experiment with it without exposing sensitive data. The AI Action Plan should promote similar approaches to scale up America’s AI innovation and advancement while reducing risks, such as using open-source AI models on air-gapped systems, using sandboxed environments, and monitoring for anomalies. ]40]
- Establish clear guidelines for open-source AI development and deployment
The development and deployment of open-source AI models should also be governed by the AI Action Plan’s voluntary, risk-based best-practice guidelines. These guidelines could include measures like rigorous pre-release testing, transparency in model provenance, and additional safeguards for high-risk applications, such as deploying AI systems in critical infrastructure. Instead of pursuing licensing, certification requirements, or other heavy-handed regulation, this approach would improve open-source AI’s resilience and accountability without compromising its advantages.
- Incorporate tiered liability protection provisions for open-source AI
The AI Action Plan should consider incorporating liability protections that correspond with the risk levels associated with different types of open-source AI projects and applications. In accordance with this rule, creators of lower-risk models, such as educational tools, could benefit from more expansive liability protections that promote innovation while limiting their legal exposure in the event of third-party misuse. ]43] This approach would protect developers by offering clearer legal boundaries and reducing uncertainty.
V.  ,  ,  ,  ,  ,  ,  ,  , Conclusion
The AI Action Plan must prioritize policies that strengthen AI-driven cybersecurity, establish a balanced privacy framework, and ensure that the United States maintains its leadership in AI development and technological innovation for generations to come. We are happy to have a resource and are prepared to work with policymakers to develop AI and emerging technology policies that promote innovation, cybersecurity, and economic growth.
Respectfully submitted,
Brandon Pugh
Policy Director, Cybersecurity and Emerging Threats
R Street Institute
Haiman Wong
Fellow, Cybersecurity and Emerging Threats
R Street Institute
This document has received a public-response approval. The document contains no business-proprietary or confidential information. Document contents may be reused by the government in developing the AI Action Plan and associated documents without attribution.
The American Presidency Project, Feb. 11, 2025, https ://www.presidency .ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-Paris-france.  ,
[2 ]” Executive Order on Removing Barriers to American Leadership in Artificial Intelligence,” Brandon Pugh and Amy Chang, “Cybersecurity Implications of the White House’s AI Executive Order,” R Street Institute, October 31, 2023. https ://www.rstreet .org/commentary/cybersecurity-implications-of-the-white-houses-ai-executive-order.  ,
]3 ] Ibid.
[4 ]” Executive Order on Maintaining American Leadership in Artificial Intelligence,” Trump White House Archives, February 11, 2019.
]5 ] Steve Holland,” Trump announces private-sector$ 500 billion investment in AI infrastructure”, Reuters, Jan. 21, 2025. https ://www.reuters.com/technology/artificial-intelligence/trump-announce-private-sector-ai-infrastructure-investment-cbs-reports-2025-01-21.
Vance, 6]. https ://www.presidency .ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.
]7 ] Ibid.
Vance, 8]. https ://www.presidency .ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.  ,
]9 ] “Cybersecurity and Emerging Threats”, R Street Institute, last accessed March 5, 2025. https ://www.rstreet .org/home/our-issues/cybersecurity-and-emerging-threats.  ,
Adam Thierer, “Comments of the R Street Institute in Request for Information on the Development of an Artificial Intelligence ( AI ) Action Plan,” R Street Institute, March 15, 2025. .
]11]” Disrupting malicious uses of AI by state-affiliated threat actors”, OpenAI, Feb. 14, 2024. https ://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors.  ,
]12] Haiman Wong,” DeepSeek’s cybersecurity failures expose a bigger risk. Here’s what we really should be watching”, R Street Institute, Feb. 4, 2025. https ://www.rstreet .org/commentary/deepseeks-cybersecurity-failures-expose-a-bigger-risk-heres-what-we-really-should-be-watching.  ,
]13]” R Street Cybersecurity-Artificial Intelligence Working Group”, R Street Institute, last accessed March 5, 2025. https ://www.rstreet .org/home/our-issues/cybersecurity-and-emerging-threats/cyber-ai-working-group.
]14] Ibid.
]15 ] Haiman Wong and Brandon Pugh,” Key Cybersecurity and AI Policy Priorities for Trump’s Second Administration and the 119th Congress”, R Street Institute, January 2025. https ://www.rstreet .org/research/key-cybersecurity-and-ai-policy-priorities-for-trumps-second-administration-and-the-119th-congress.  ,
]16] Ibid.
Ibid. .
]18] Ibid.
]19] Ibid.
Ibid.
]21] Ibid.
Ibid.
]23] Brandon Pugh and Steven Ward,” What does AI need? A comprehensive federal data privacy and security law”, IAPP, July 12, 2023. https ://iapp.org/news/a/what-does-ai-need-a-comprehensive-federal-data-privacy-and-security-law.  ,
]24] Brandon Pugh and Steven Ward,” Key Data Privacy and Security Priorities for 2025″, R Street Institute, January 2025. https ://www.rstreet .org/research/key-data-privacy-and-security-priorities-for-2025.  ,
]25 ] Ibid.
]26] Testimony of Brandon J. Pugh, Esq., House Committee on Energy and Commerce,” Hearing on Economic Danger Zone: How America Competes to Win the Future Versus China”, 118th Congress, February 2023. https ://d1dth6e84htgma.cloudfront .net/Brandon_Pugh_Testimony_020123_ Hearing_36ecfd8b92.pdf ?updated_at=2023-02-01T14:31:57.744Z.  ,
]27] Testimony of Brandon J. Pugh, Esq., Bipartisan Task Force on Artificial Intelligence United States House of Representatives,” Hearing on Privacy, Transparency, and Identity”, 118th Congress, June 28, 2024. https ://www.rstreet .org/outreach/brandon-pugh-testimony-hearing-on-privacy-transparency-and-identity.  ,
]28 ] Pugh and Ward. https ://www.rstreet .org/research/key-data-privacy-and-security-priorities-for-2025.  ,
]29 ] Steven Ward,” Leveraging AI and Emerging Technology to Enhance Data Privacy and Security”, R Street Policy Study No. 317, March 2025, p. 2. https ://www.rstreet .org/research/leveraging-ai-and-emerging-technology-to-enhance-data-privacy-and-security  ,
]30] Ibid.
Ibid. Ibid.
]32 ] Ibid.
Vance https ://www.presidency .ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.  ,
Ben Brooks,” Open-Source AI Is Good for Us,” IEEE Spectrum, Feb. 8, 2024, https ://spectrum .ieee .org/open-source-ai-good.  ,  ,  ,
Ibid., p. 35.
]36] Ibid.
]37] Wong. http ://www.rstreet .org/commentary/deepseeks-cybersecurity-fails-expose-a-bigger-risk-heres-what-we-really-should-be-watching.
]38] Ibid.
Ibid.
]40] Ibid.
]41] Ibid.
Ibid.
]43] Ibid.
Ibid.