Not all Artificial Intelligence ( AI ) apps are safe and users signing up for them should consider using an anonymous account not linked to their personal or professional identity, India’s federal cyber security agency has advised. In its advisory, the Computer Emergency Response Team of India ( CERT-In ), the national technology arm to guard the Indian Internet space and combat cyberattacks, has underlined “vulnerabilities” in AI design, training and interaction mechanism.
The “vulnerabilities” the latest expert talks about include technical issues such as data poison, hostile attacks, type rotation, quick shot and hallucination exploitation.
” No all AI software out there are safe”, says the expert, accessed by PTI.
Artificial Intelligence has become a cornerstone of innovation, revolutionising industries ranging from medical to communications and it is increasingly used to control activities typically undertaken by humans, it says.
The advice says AI has accelerated automation of daily tasks, fostering imagination and supporting business functions such as customer services, logistics, medical diagnosis, and cybersecurity.
” As AI becomes extremely advanced and more prevalent, the associated risks even improve. Many problems target AI programs, by taking advantage of defects in information processing and machine learning models.
” These assaults pose major threats to AI applications ‘ safety, reliability, and trustworthiness across a variety of areas,” says the expert.
Hazard actors can take advantage of the rising need for AI applications to create false applications designed to trap users into downloading them, it says.
If someone downloads these false AI apps on their devices, it maximises the opportunity to deploy malware designed to steal all their data, the advisory says, asking users to exercise due diligence before clicking the’ download’ button in order to minimise AI cybersecurity risks.
The agency advised AI users to avoid sharing personal and sensitive information as the data is collected and used by the service provider to improve their models.
” It is advised to avoid utilising generative AI tools available online for professional work involving sensitive information, “it said.
The advisory said when singing up for AI services, users should consider using an anonymous account that is not linked to their personal or professional identity.
This helps protect privacy and prevents data breaches from being traced back to the user, the Cert-In said.
It emphasised AI tools should be used for their intended purpose of answering questions and generating content.
They cannot be relied upon to make” critical “decisions, especially in legal or medical contexts, it added.
The advisory cautioned that AI should not be trusted when it comes to accuracy as” bad data “or malicious hackers could” fool” AI tools to churn out inaccurate content, called’ hallucinations” in tech terms.
” The AI tool you are using is only as accurate as the data it uses. If the data it uses is old or incomplete, the content it churns out will be biased, inaccurate or outright wrong”, it said.
Talking about potential risks linked to AI usage, it said the technology can suffer’ data poisoning ‘ which involves manipulating the training data so that the model learns incorrect patterns and potentially misclassify data or generate inaccurate, biased or malicious outputs.
Explaining other AI fallibilities, the advisory said’ adversarial attacks ‘ change inputs to AI models to make them dish out wrong predictions while’ model inversion’ attacks extract sensitive information about a machine learning model’s training data.
A’ prompt injection’ is a manipulation attack that enables malicious actors to ‘ hijack’ the AI model’s output and ‘ jailbreak’ its system to bypass its safeguards.
As part of ‘ backdoor attack ‘ malicious actors implant hidden triggers within an AI model during its training process.
These attacks pose significant threats to AI applications ‘ security, reliability, and trustworthiness across a variety of fields, according to the advisory. >,