According to a March 12 statement from Veriti, a cybersecurity company, a ChatGPT vulnerability discovered last year is being used by threat actors to exploit security flaws in artificial intelligence systems. The vulnerability is rated medium severity by the National Institute of Standards and Technology, but Veriti reports that threat actors have used it in more than 10,000 attack attempts worldwide. According to the company, the attacks have primarily targeted financial institutions, along with government and health care organizations. The attacks could result in data breaches, unauthorized transactions, regulatory penalties, and reputational damage.
According to Scott Gee, AHA deputy national advisor for cybersecurity and risk, "This vulnerability could allow an attacker to steal sensitive data or impact the availability of the AI tool." In a hospital setting, it is important to include AI tools in a comprehensive patch management program. The fact that the vulnerability is a year old and a proof of concept for exploiting it has been publicly available for some time is also a reminder of how critical it is to patch software in a timely manner.
Contact Gee for more information on this or other cyber and risk issues. For the latest cyber and risk resources and threat intelligence, visit aha.org/cybersecurity.
Threat actors are using the ChatGPT exploit to attack health care and other sectors.
