Microsoft Launches New AI Security Agents Amid Surge in Cyberthreats

Microsoft is rolling out new artificial intelligence-powered tools to address a sharp rise in digital threats and the challenges of securing generative AI.

The company said on Monday that it will release 11 new Security Copilot agents in April. The agents are designed to automate routine security work such as triaging phishing attempts, preventing data loss, managing vulnerabilities, and enforcing identity and access controls.

Microsoft says attacks have surged. Its systems now analyze 84 trillion security signals every day and detected more than 30 billion phishing emails in 2024, volumes the company says are far too large for security teams relying on manual review and triage.

Six of the agents were built by Microsoft. They include tools for phishing triage, insider risk management, conditional access policy enforcement, vulnerability remediation, threat intelligence gathering, and more. The agents are designed to work across Microsoft's security products, including Microsoft Defender, Microsoft Purview, and Microsoft Entra, helping security teams respond to threats in real time and scale their operations while retaining control.

Microsoft is also delivering five more agents built with partners OneTrust, Aviatrix, BlueVoyant, Tanium, and Fletch. These will help businesses handle data breaches, monitor network issues, review security operations, put threat alerts in context, and decide which critical cyber risks need the most attention.

Microsoft also said it will roll out controls to detect and block unauthorized use of shadow AI apps, meaning unsanctioned generative AI tools used without IT oversight, as concerns about such programs grow. These include a new AI web category filter and data-loss-prevention policies built into Microsoft Edge for Business, meant to keep sensitive information from being entered into apps such as Google Gemini, ChatGPT, Copilot Chat, and DeepSeek. The company is also extending its AI security posture management to cover more cloud services and AI models.
Starting in May 2025, Microsoft Defender will be able to protect Google Vertex AI, Meta Llama, Mistral, and other models in the Azure AI Foundry. New detections will target threats catalogued by the Open Worldwide Application Security Project, such as direct prompt injection attacks, sensitive data leaks, and wallet abuse. Microsoft also said that starting in April 2025, phishing protection will arrive in Microsoft Teams, and Defender for Office 365 will add real-time URL detonation and improved alert visibility.

The improvements are part of the company's broader effort to help secure AI adoption as more businesses embrace generative tools. Microsoft pointed to its own research showing that 57% of organizations have seen more security incidents since adopting AI, while 60% have yet to put controls in place.

