As businesses push ahead with GenAI implementation, AI data security risks are rising.

Even as cybersecurity risks are rising, businesses are adopting generative AI (GenAI). According to Fortanix’s 2025 State of Data Security in GenAI Report, which surveyed 1,000 professionals, 87% of security officials reported a data breach in the past year. At the same time, 97% of companies plan to integrate GenAI into their operations, either by purchasing existing solutions or developing in-house systems, to streamline processes and drive revenue growth. As AI implementation grows, organizations must tackle new security issues to protect sensitive data across a variety of platforms and environments.

“The data clearly indicates that nothing will stand in the way of organizations moving forward with the deployment of GenAI this year,” said Anuj Jaiswal, Fortanix’s chief product officer. “Many organizations do not fully understand the complex data security issues surrounding the technology,” Jaiswal added.

The report highlights that 97% of enterprises restrict GenAI usage, and 89% of executives believe these controls are effective. Despite these policies, 95% of professionals continue using AI tools, revealing a gap between policy and actual use. Among them, 66% use GenAI for work-related tasks, while 64% access AI tools through personal email accounts, bypassing corporate security controls.

This pattern raises concerns about uncontrolled data exposure, because sensitive business information may be accessed or shared in unsecured environments. The report points out that some businesses lack oversight of how employees engage with GenAI, increasing the risk of unauthorized access and compliance violations. More than half of IT executives (53%) expressed concern about unauthorized GenAI access by employees.

Additionally, 41% of security leaders reported that their organizations had discovered unauthorized AI applications in use on their networks, making it harder to maintain data integrity and security compliance.

Encryption strategies lag behind AI adoption

Even though 88% of businesses have already allocated budgets for GenAI deployment, security is a top priority in only some cases. Line of business (LOB), IT, and security executives all rank the accuracy of AI models among their most important concerns, but only IT executives place data security and privacy at the top of their priority list.

Preventing unauthorized data access remains a crucial security measure, but Fortanix’s report suggests that some existing controls are obsolete and inadequate for AI-driven environments. 62% of security leaders said that their current encryption methods are not sufficiently optimized for protecting AI-generated data. Additionally, 58% of companies struggle to maintain consistent encryption policies across cloud, on-premises, and hybrid environments.

End-to-end encryption across all touchpoints is a growing concern as AI tools generate and process increasing amounts of corporate and customer data. The report makes it clear that conventional encryption approaches designed for structured data may not be enough for the unstructured and constantly changing data involved in GenAI.

The report highlights that 74% of executives feel pressure to implement GenAI, driven by board directives, competitive market demands, and leadership expectations. 82% of LOB executives and 81% of IT executives feel the most urgency, while 56% of security executives are more cautious because of potential cybersecurity threats.

Among those investing in GenAI, 47% cite competitive advantages as their primary reason for adoption, while 39% see AI-driven efficiencies as the most significant driver. Only 21% of security managers, however, think their organizations are properly prepared to deal with AI-specific security risks.
