Op-Ed: Staying ahead of AI-driven cyberattacks

Artificial intelligence (AI) is revolutionizing cyber security, and the field is at a fascinating crossroads: AI serves both as a potent ally and as a formidable adversary.

Organizations may benefit from tools like Copilot and DeepSeek AI, but the same technology gives cybercriminals the ability to launch more complex and evasive attacks.


In 2025, Australian organizations are projected to spend almost $6.2 billion on security and risk management products and services, up 14% from 2024 (Gartner), a clear indication of the crucial role cyber security plays in protecting digital assets and maintaining business continuity in an increasingly digital world.

According to Richard Addiscott of Gartner, "AI will significantly affect security strategies, requiring leaders to adapt to evolving threats, skill gaps, and regulatory challenges." Security leaders must therefore understand AI's role in the threat landscape if they are to build smart, adaptable strategies and stay ahead of advanced attacks.

Understanding identity attacks and the role of the cloud

AI tools are enabling attackers to generate malicious code as well as accelerating the entire attack process. Attackers can now use automated reconnaissance to gather information far more quickly than ever before and bypass protective measures. Identity attacks and cross-border AI risks are two examples of this.

Cross-IDP impersonation is an emerging technique for evading identity authentication controls, according to recent research. The technique enables attackers to use SaaS applications and identity providers (IDPs) to impersonate users without triggering security alerts.

This problem is most common in cloud environments, where attackers can impersonate domains, bypass MFA, and access sensitive systems without being discovered. Traditional and even contemporary CNAPP security tools frequently fail to pick these attacks up, leaving organizations unaware that a breach has occurred. These attacks are very difficult to detect.

This underscores the need for strong visibility and identity controls, particularly for cloud-reliant organizations. As defenders, we must remain vigilant and constantly review our security measures to respond to new threats. The more cloud-based a company is, the greater the risk, so security controls must be continually adapted and improved.
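To make that concrete, here is a minimal, illustrative sketch (not a production detector) of one signal a defender might correlate: the same account authenticating through more than one identity provider within a short window. The event format, provider names, and 15-minute threshold are assumptions for illustration, not any vendor's schema.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative sign-in events; in practice these would come from IdP and SaaS audit logs.
events = [
    {"user": "alice@example.com", "idp": "okta",     "time": "2025-03-01T09:00:00"},
    {"user": "alice@example.com", "idp": "entra-id", "time": "2025-03-01T09:03:00"},
    {"user": "bob@example.com",   "idp": "okta",     "time": "2025-03-01T10:00:00"},
]

WINDOW = timedelta(minutes=15)  # arbitrary window for this example

def flag_cross_idp_logins(events, window=WINDOW):
    """Flag users who sign in via more than one IdP inside a short window."""
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user"]].append((datetime.fromisoformat(e["time"]), e["idp"]))
    alerts = []
    for user, logins in by_user.items():
        logins.sort()
        for (t1, idp1), (t2, idp2) in zip(logins, logins[1:]):
            if idp1 != idp2 and (t2 - t1) <= window:
                alerts.append((user, idp1, idp2, t2 - t1))
    return alerts

for user, first, second, gap in flag_cross_idp_logins(events):
    print(f"Review: {user} used {first} then {second} within {gap}")
```

On its own a rule like this would be noisy; the point is that identity telemetry from multiple providers has to be pulled together and correlated before this class of impersonation becomes visible at all.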

The risk of cross-border AI is becoming more and more of a priority

The threats posed by cross-border data flows and the use of AI tools are another area leaders need to be aware of. As Generative AI (GenAI) is integrated into business operations, concerns grow about data breaches and about balancing regulatory demands with ethical use.

Governments are currently drafting AI regulations, most notably in the EU, with jurisdictions like the US, Australia, and New Zealand likely to follow. The difficulty lies in coordinating data governance and sovereignty, as AI tools frequently operate across borders and without boundaries.

Organizations need controls such as data masking and processes that keep data in the appropriate jurisdictions. Regulation alone, however, won't solve this issue. Education and awareness are essential to ensuring that organizations and their employees stay protected when handling these tools.
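As a simple, hypothetical illustration of data masking, the sketch below redacts obvious email addresses and phone-number-like strings from text before it is handed to any external GenAI tool. The two patterns are assumptions chosen for brevity; real deployments would rely on a proper PII detection service rather than a pair of regexes.

```python
import re

# Illustrative patterns only; real PII detection needs far more than two regexes.
EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def mask(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Summarise this ticket: customer jane.doe@example.com, phone +61 2 9999 1234, reports an outage."
print(mask(prompt))
# -> "Summarise this ticket: customer [EMAIL], phone [PHONE], reports an outage."
```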

The rise of shadow AI and what it means for businesses

The rise of GenAI tools like DeepSeek AI also presents the intriguing problem of "shadow AI." Given the risks these tools present, many businesses are still working out how to manage them effectively. Banning them outright may seem like a solution, but it frequently drives their use underground, as "shadow AI" adopted without the organization's knowledge.

We have seen this occur in Australia, where the adoption of GenAI tools has outpaced guidance on their safe and ethical use. The issue lies not only in whether people use the technology, but in how they use it. Much as awareness training on phishing was approached in the past, security and business leaders must make staff aware of the benefits and risks of GenAI rather than simply imposing a blanket ban.

Beyond raising awareness, businesses must take the initiative in creating safe AI frameworks that govern data security, retention, and sharing.

What CISOs need to know about the threat landscape

Chief Information Security Officers (CISOs) need to make their strategies adaptable in light of the rise of AI-powered attacks. Understanding how attackers operate is the key to withstanding these risks. Attackers simply use AI as another tool to accelerate their work. If your mean time to remediation (MTTR) is still measured in days, you have a problem.
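For readers who don't already track it, MTTR here simply means the average elapsed time between detecting an incident and remediating it. A minimal sketch of the calculation, using made-up incident timestamps, is below.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: (detected, remediated) timestamps.
incidents = [
    ("2025-02-03T09:15", "2025-02-05T17:40"),
    ("2025-02-10T22:05", "2025-02-11T06:30"),
    ("2025-02-18T14:00", "2025-02-21T09:10"),
]

durations = [
    datetime.fromisoformat(end) - datetime.fromisoformat(start)
    for start, end in incidents
]
mttr = sum(durations, timedelta()) / len(durations)
print(f"MTTR: {mttr}")  # roughly 1 day 20 hours here -- still "sitting in days"
```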

Ultimately, attackers are still after the same things: disruption or data. AI simply moves the ball faster; it doesn't change the game. By identifying threats early and stopping them before they cause harm, you put yourself in the strongest possible position to defend your goal line.

One of the best ways to counter these threats is to concentrate on early detection. Red teaming, vulnerability testing, and simulated attacks are a strong starting point for continuous testing of a security environment. It's about identifying and removing attack vectors as quickly as possible: the more attack paths you can shut down, the stronger your security posture becomes.

The future of AI in security: automation and ethical considerations

AI-driven technology will significantly improve threat detection and response, but many businesses are reluctant to embrace it fully due to concerns about mistakes and disruption. Automated responses can be effective, but they run the risk of generating false positives or halting business-critical processes.
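One common way to manage that risk is to keep a human approval step in front of the most disruptive response actions while letting low-impact ones run automatically. The sketch below is a hypothetical illustration of that pattern; the action names and risk tiers are assumptions, not any particular product's workflow.

```python
# Hypothetical tiering of automated response actions by blast radius.
LOW_IMPACT = {"quarantine_file", "block_ip"}
HIGH_IMPACT = {"disable_account", "isolate_host", "shut_down_service"}

def respond(action: str, target: str, approved_by: str | None = None) -> str:
    """Run low-impact actions automatically; require sign-off for disruptive ones."""
    if action in LOW_IMPACT:
        return f"executed {action} on {target} automatically"
    if action in HIGH_IMPACT:
        if approved_by:
            return f"executed {action} on {target} (approved by {approved_by})"
        return f"queued {action} on {target} for human approval"
    return f"unknown action {action}; escalating to analyst"

print(respond("block_ip", "203.0.113.7"))
print(respond("isolate_host", "payments-db-01"))
print(respond("isolate_host", "payments-db-01", approved_by="on-call SOC lead"))
```

Keeping the high-impact tier behind an approval gate is one way to earn trust in automation gradually, without risking an outage in a business-critical system.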

GenAI has enormous potential to improve decision-making, and automated threat responses will become more trustworthy as AI grows more sophisticated. Building trust in AI systems, and making sure they don't interfere with essential business operations, will take time, but it must be done.

Undoubtedly, the intersection of AI and cyber security is reshaping security strategies, presenting both opportunities and risks. By focusing on education, collaboration, continuous testing, and early detection, organizations can build defenses against the changing threat landscape.
