Researchers at Cisco highlight emerging threats to AI models.

This week, Cisco security researchers released a list of threats they are seeing from bad actors attempting to poison or attack large language models (LLMs), the most prevalent component of AI.

Security experts are well-versed in some of the methods used to conceal messages or attacks from anti-spam systems. Hiding the true nature of content displayed to the recipient from anti-spam techniques is not a new approach. "Spammers have used encoding techniques to conceal their true message from anti-spam analysis for decades," wrote Martin Lee, a security architect with Cisco Talos, in a blog post about current and emerging AI threats. "However, we have seen an increase in the use of such techniques during the second quarter of 2024."
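To illustrate the kind of obfuscation described above, here is a minimal sketch (a hypothetical example, not code from Cisco's research) of one common trick: replacing Latin letters with look-alike Unicode homoglyphs and inserting zero-width characters, so a naive keyword filter no longer matches the text while a human reader sees the same message.

```python
# Hypothetical illustration of homoglyph + zero-width obfuscation,
# one technique spammers use to hide content from keyword-based filters.

# Latin letters mapped to visually identical Cyrillic characters.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}
ZERO_WIDTH = "\u200b"  # zero-width space, invisible when rendered

def obfuscate(text: str) -> str:
    """Swap selected letters for homoglyphs and pepper in zero-width chars."""
    out = []
    for ch in text:
        out.append(HOMOGLYPHS.get(ch, ch))
        out.append(ZERO_WIDTH)
    return "".join(out)

def normalize(text: str) -> str:
    """What a detection engine might do: strip zero-width characters
    and map homoglyphs back to their Latin equivalents."""
    reverse = {v: k for k, v in HOMOGLYPHS.items()}
    return "".join(reverse.get(ch, ch) for ch in text if ch != ZERO_WIDTH)

msg = obfuscate("free money")
# A naive substring filter no longer matches the spam phrase...
assert "free money" not in msg
# ...but normalization recovers the original payload for analysis.
assert normalize(msg) == "free money"
```

As the article notes, the presence of such obfuscation is itself a strong signal: legitimate mail has no reason to disguise its own words, so normalization plus a check for these characters can flag the message as suspicious.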

According to Lee, the ability to hide and obscure content from machine analysis or human oversight is likely to become an increasingly significant vector of attack against AI systems. "Fortunately, spam detection engines such as Cisco Email Threat Defense have already implemented methods to detect this kind of obfuscation. If anything, attempting to obscure content in this way makes it obvious that a message is malicious and can be classified as spam," Lee wrote.
