Cyberthreat actors are exploiting a ChatGPT vulnerability to attack health care and other sectors.

According to a March 12 report from Veriti, a cybersecurity company, a ChatGPT vulnerability discovered last year is being used by cyberthreat actors to exploit security flaws in artificial intelligence systems. The vulnerability is classified as medium risk by the National Institute of Standards and Technology, but Veriti reports that cyberthreat actors have used it in more than 10,000 attack attempts worldwide. According to the company, financial institutions, health care organizations and government agencies have been the main targets of the attacks, which could result in data breaches, unauthorized transactions, regulatory penalties and reputational damage.
 
"This vulnerability may allow an attacker to steal sensitive data or impact the availability of the AI tool," said Scott Gee, AHA deputy national advisor for cybersecurity and risk. "This underscores the importance of incorporating patch management of AI tools into a comprehensive risk management program when AI is deployed in a hospital environment. The fact that the vulnerability is a year old and that a proof of concept for exploitation has been available for some time also serves as a good reminder of the importance of timely patching of software."
 
For more information on this or other cyber and risk issues, contact Gee. For the latest cyber and risk resources and threat intelligence, visit aha.org/cybersecurity.