4 Important Factors For Business Leaders to Take into Account When Addressing AI-Driven Risks


by Chuck Brooks and John Lindsay

For better and for worse, AI is revolutionizing the security environment

Artificial intelligence (AI) is transforming the cybersecurity industry, not just for businesses but for the world. On the positive side, AI is improving automated, real-time identification and analysis of digital risks. Businesses now have better and faster digital security capabilities, enabling them to monitor program activity, detect anomalies, and flag users who are behaving unusually.
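Anomaly detection of the kind described above often starts from a simple statistical baseline. The sketch below is an illustrative example, not any vendor's actual product: it flags users whose event counts deviate sharply from the group median, using the median absolute deviation so that the outliers themselves do not skew the baseline.

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Flag entries whose count deviates strongly from the group baseline.

    `counts` is a list of (label, count) pairs, e.g. failed logins per user.
    Uses the median absolute deviation (MAD), which is robust to outliers;
    a modified z-score above `threshold` marks the entry as anomalous.
    """
    values = [c for _, c in counts]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1.0  # avoid /0
    return [label for label, c in counts
            if 0.6745 * abs(c - med) / mad > threshold]

# Example: one account generates far more failed logins than its peers.
events = [("alice", 4), ("bob", 6), ("carol", 5), ("mallory", 90), ("dave", 3)]
print(flag_anomalies(events))  # -> ['mallory']
```

Real AI-driven monitoring layers far richer models on top of this idea, but the principle is the same: learn what normal looks like, then surface what deviates from it.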

Admittedly, there are two sides to consider when adopting AI to enhance efficiency. Cybercriminals are also using AI and machine learning (ML) tools to attack and spoof victims' networks more effectively in the current digital landscape, and there are two main areas where these tools are having the greatest impact.

First, AI can be used to simplify and speed up attacks. AI tools reduce the time an attacker spends on reconnaissance, data gathering, and analysis ahead of an assault. Once an attacker is inside a victim's IT systems, AI can also guide the exploration and exploitation of those systems, ultimately shrinking the window between initial compromise and the theft or destruction of information.

Second, AI is being used to create highly realistic but fake text, audio, or even video. Moreover, publicly available data, such as footage of interviews with C-suite executives, can be used to create these deepfakes ever more quickly. Deepfakes are now being used to support disinformation campaigns against businesses, for instance by fabricating an employee statement that could seriously harm a business. The most obvious use of deepfakes is to make phishing attempts more convincing. And the problem is getting worse: in 2024, almost half of global firms reported an instance of deepfake-related fraud [1].

If not properly deployed, even legitimate AI tools can cause problems

Unintentional misuse of AI tools can also affect data protection. Since the release of generative AI applications like ChatGPT, there has been a risk that employees will use generative AI tools to simplify daily work tasks. However, placing sensitive business information into the public environments of generative AI tools significantly increases the risk of unintended disclosure. This danger has been compounded by the emergence of China-based AI services like DeepSeek. In theory, the Chinese government could demand access to the data and queries submitted to DeepSeek, given DeepSeek's permissive privacy policy and China's national security laws.
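One technical control against this kind of disclosure is to scrub prompts before they leave the company. The following is a minimal, illustrative sketch; the patterns are assumptions for demonstration, and a real deployment would use a proper data loss prevention (DLP) product with organization-specific rules.

```python
import re

# Illustrative patterns only -- not an exhaustive or production rule set.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before a prompt leaves the company."""
    for name, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{name} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, key sk-abcdef1234567890XY"))
# -> Contact [EMAIL REDACTED], key [API_KEY REDACTED]
```

A gateway like this does not remove the need for policy and training, but it turns "don't paste secrets into chatbots" from a request into a guardrail.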

Business leaders should take into account four important factors when responding to AI-driven challenges:

1. Ensure the support of business leaders

Business leaders must confront the risks posed by AI and close any defensive gaps if their organizations are to remain commercially viable against rising threats and challenges. They must invest the necessary resources to understand how AI could harm their businesses, including how it could be used to create sophisticated phishing or disinformation campaigns. They must also be aware of the ways in which sensitive corporate data can be accidentally or deliberately leaked into the public domain through the use of AI tools like ChatGPT.

Additionally, senior leadership teams must make cybersecurity a priority and take action to combat the growing threat of AI-enabled attacks. They should understand the specific threats they face given the business areas and locations in which they operate. They should remain alert to the dangers of targeted attacks by state actors, non-targeted criminal attacks, and threats from activists motivated by the organization's activities. They should also make use of AI-powered cybersecurity tools where possible to keep pace with the increasing sophistication of adversaries' capabilities.

2. Understand that you could be a victim of AI-driven cyber threats

Any commitment to security begins with an assessment of cyber risk, and AI-driven security tools should be integrated into those assessments.

Software security testing should be the first step in that assessment process. AI can be used to help identify code defects, design errors, or even malware that may already be present in programs and applications.
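The kind of automated defect hunting described above builds on rule-based checks like the following minimal Python sketch. It is an illustrative example (a real scanner, AI-assisted or otherwise, goes much further): it walks a program's syntax tree and reports calls that security reviewers commonly flag.

```python
import ast

# Calls commonly flagged by security scanners; illustrative, not exhaustive.
RISKY_CALLS = {"eval", "exec", "pickle.loads", "os.system"}

def find_risky_calls(source: str):
    """Return (line, call) pairs for risky call sites in Python source code."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name):
                name = func.id
            elif isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
                name = f"{func.value.id}.{func.attr}"
            else:
                continue
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return sorted(findings)

sample = "import os\nos.system(user_input)\nresult = eval(expr)\n"
print(find_risky_calls(sample))  # -> [(2, 'os.system'), (3, 'eval')]
```

Checks like this run cheaply on every commit, which is exactly why such testing can, and should, be continuous rather than a one-off audit.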

The testing process's main goal is to identify problems before they affect devices, networks, and production. It must also be continuous, as threats evolve. In particular, AI-driven threat detection tools may be required to maintain adequate security.

Technical security settings and practices are a crucial factor in protecting businesses and organizations. IT and security departments need to be empowered to ensure that network monitoring, data security, endpoint defenses, and incident response are all in place.

Additionally, when using AI, it is crucial to put the appropriate technical safeguards in place to protect any corporate information you enter into commercial tools.

3. Create a top-down cybersecurity culture with knowledge of cutting-edge technology trends

AI is already having a significant influence on security and business operations. To fully utilize emerging technologies, the C-suite needs a thorough understanding of their capabilities and must allocate sufficient resources to security teams for the acquisition and integration of new security tools.

However, technology tools can only take you so far, and they are continually evolving. As of now, methods for identifying deepfakes are not always trustworthy. Furthermore, if employees believe that AI tools will make their jobs easier, they will try to use them, regardless of any restrictions. Therefore, you should either provide staff with secure AI solutions to use or ensure that they understand how to protect information (for example, by not putting it into insecure AI tools).

To facilitate the assimilation of technology, a proper culture must be established, starting at the top. Business leaders should ensure that their staff learn to identify AI-driven risks, foster a healthy skepticism, and verify information using reliable, alternative sources. The potential dangers of using AI tools must be communicated, along with their benefits. Employees should be encouraged to take responsibility for unintentional AI misuse, and those who self-report or report issues should not be penalized.

The C-suite also needs to develop and demonstrate an understanding of best practices, legislation, and challenges around cybersecurity and AI, as otherwise businesses will remain largely unprepared. There should be more emphasis on attracting cybersecurity and AI experts to board-level positions in order to improve cybersecurity culture. Given that the risks and costs of breaches continue to rise, it is wise to seek outside assistance to improve the C-suite's cybersecurity and AI readiness.

4. Be prepared to respond to AI-driven threat scenarios

Additionally, the C-suite needs to regularly evaluate its ability to deal with AI-driven threat scenarios. Key elements to test include how to handle internal and external narratives, which could be manipulated by disinformation, how to deal with potential disruption to the chain of command caused by deepfakes, and how to respond to relevant regulators.

Operating securely in a rapidly evolving digital world, driven by emerging technologies, presents numerous challenges. Plans to identify, stop, and mitigate evolving cyber threats must be regularly revisited, and industry awareness must be maintained.

Authors:

John Lindsay is a Director in the Washington, D.C. office of global strategic advisory firm Hakluyt. He advises businesses and investors on the opportunities and risks their businesses face, with a particular emphasis on geopolitics and technology.

Before joining Hakluyt, John held various public affairs and diplomatic roles in the UK government, including as a cyber security adviser to the UK Ministry of Defence and, most recently, for the UK Foreign, Commonwealth & Development Office, where he focused on Afghan politics.

John has a particular knack for fostering dialogue between technical and non-technical audiences. He studied politics and international relations at the University of Cambridge, where he received both undergraduate and graduate degrees. He also holds several advanced cybersecurity qualifications.


Chuck Brooks currently serves as an Adjunct Professor at Georgetown University in the Cyber Risk Management Program, where he teaches graduate courses on risk management, homeland security, and cybersecurity. He also has his own consulting firm, Brooks Consulting International.

Chuck has received numerous international awards for his efforts in promoting cybersecurity. He was recently named the most popular cybersecurity expert on social media and a top cybersecurity leader for 2024. He has also been named "Cybersecurity Person of the Year" by Cyber Express, Cybersecurity Marketer of the Year, and a "Top 5 Tech Person to Follow" by LinkedIn. Chuck has 122,000 followers on his LinkedIn profile. Chuck is also a contributor to Forbes, The Washington Post, Dark Reading, Homeland Security Today, Skytop Media, GovCon, Barrons, Reader's Digest, and The Hill on cybersecurity and emerging technology topics. He has authored a book, "Inside Cyber," which is now available on Amazon.

Chuck has served as the first Director of Legislative Affairs at the DHS Science & Technology Directorate and has received executive appointments from two U.S. presidents throughout his career. He worked on tech and security issues for the late Senator Arlen Specter on Capitol Hill for ten years. Chuck has also served in executive roles for companies such as General Dynamics, Rapiscan, and Xerox.

Chuck graduated with degrees from The Hague Academy of International Law, DePauw University, and the University of Chicago, where he received his MA.

Follow Chuck on LinkedIn: https://www.linkedin.com/in/chuckbrooks/

[1] https://www.globenewswire.com/news-release/2024/09/30/2955054/0/en/Deepfake-Fraud-Doubles-Down-49-of-Businesses-Now-Hit-by-Audio-and-Video-Scams-Regula-s-Survey-Reveals.html
