China’s DeepSeek AI is banned in Italy over data privacy and protection concerns.

China’s DeepSeek’s service has been blocked in Italy by the country’s data protection watchdog, citing a lack of information regarding how users’ personal data is used.

The authority, known as the Garante, made the announcement days after sending DeepSeek a series of questions about its data-handling practices and where it sourced its training data.

In particular, it wanted to know what personal data is collected by the company’s web platform and mobile app, from which sources, for what purposes, on what legal basis, and whether it is stored in China.

In a statement released on January 30, 2025, the Garante said it reached the decision because the information provided by DeepSeek was “completely insufficient.”

The entities behind the service, Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, have “declared that they do not operate in Italy and that European legislation does not apply to them,” it added.

As a result, the regulator said it is blocking access to DeepSeek with immediate effect, and that it is simultaneously opening an investigation.

The data protection authority also temporarily banned OpenAI’s ChatGPT in 2023, a restriction that was lifted after the artificial intelligence (AI) company stepped in to address the data privacy concerns raised. OpenAI was subsequently fined €15 million over how it handled personal data.

The ban comes as DeepSeek has been riding a wave of popularity, with millions of users signing up for the service and sending its mobile apps to the top of the download charts.

Besides becoming the target of “large-scale malicious attacks,” it has drawn the attention of lawmakers and regulators over its privacy policy, China-aligned censorship, propaganda, and the national security concerns it may pose. As of January 31, the company has rolled out a fix to address the attacks on its services.

Adding to the challenges, DeepSeek’s large language models (LLMs) have been found to be susceptible to jailbreak techniques like Crescendo, Bad Likert Judge, Deceptive Delight, Do Anything Now (DAN), and EvilBOT, thereby allowing bad actors to generate malicious or prohibited content.

According to a report released on Thursday by Palo Alto Networks Unit 42, the techniques elicited a range of harmful outputs, from detailed instructions for creating dangerous items like Molotov cocktails to malicious code for attacks like SQL injection and lateral movement.

“While DeepSeek’s initial responses often appeared benign, in many cases carefully crafted follow-up prompts exposed the weakness of these initial safeguards,” the researchers said. “The LLM readily provided highly detailed malicious instructions, demonstrating the potential for these seemingly innocuous models to be weaponized for malicious purposes.”
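
To make the multi-turn failure mode concrete, the sketch below shows what a minimal red-team harness for follow-up-prompt testing might look like. It is illustrative only: `query_model` is a stand-in for whatever chat API is under test, and the refusal heuristic is a deliberately crude assumption, not Unit 42’s methodology.

```python
from typing import Callable, Dict, List

# Illustrative multi-turn probe harness; `query_model` is a hypothetical
# stand-in for the chat API under test, not a real DeepSeek client.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")

def looks_like_refusal(reply: str) -> bool:
    """Crude heuristic: does the reply open with a refusal phrase?"""
    return reply.strip().lower().startswith(REFUSAL_MARKERS)

def run_multi_turn_probe(
    query_model: Callable[[List[Dict[str, str]]], str],
    turns: List[str],
) -> List[bool]:
    """Send scripted prompts while keeping conversation history, recording
    whether each reply still looks like a refusal (True = safeguard held)."""
    history: List[Dict[str, str]] = []
    results: List[bool] = []
    for prompt in turns:
        history.append({"role": "user", "content": prompt})
        reply = query_model(history)
        history.append({"role": "assistant", "content": reply})
        results.append(looks_like_refusal(reply))
    return results

if __name__ == "__main__":
    # Stub model that always refuses, so the sketch runs standalone.
    stub = lambda history: "I can't help with that."
    print(run_multi_turn_probe(stub, ["placeholder turn 1", "placeholder turn 2"]))
```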

Further evaluation of DeepSeek’s reasoning model, DeepSeek-R1, by AI security company HiddenLayer has found that it’s not only vulnerable to prompt injections but also that its Chain-of-Thought (CoT) reasoning can lead to inadvertent information leakage.

In an interesting twist, the company said the model surfaced numerous instances suggesting that OpenAI data was incorporated, raising ethical and legal questions about data sourcing and model originality.
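
One practical consequence of CoT leakage is that deployers may want to strip the model’s reasoning before it reaches end users. DeepSeek-R1 emits its chain of thought inside `<think>...</think>` tags ahead of the final answer, so a minimal filter, sketched below under that assumption, can remove the block before display; everything beyond the tag convention is illustrative.

```python
import re

# DeepSeek-R1 wraps its reasoning in <think>...</think> before the final
# answer; stripping the block limits inadvertent CoT exposure. The tag
# convention is R1's published output format; the rest is illustrative.
THINK_BLOCK = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_chain_of_thought(raw_output: str) -> str:
    """Return only the final answer, with any <think> blocks removed."""
    return THINK_BLOCK.sub("", raw_output).strip()

if __name__ == "__main__":
    sample = "<think>internal reasoning to keep hidden</think>The capital of Italy is Rome."
    print(strip_chain_of_thought(sample))  # -> "The capital of Italy is Rome."
```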

The disclosure comes in the wake of the discovery of a jailbreak vulnerability in OpenAI ChatGPT-4o dubbed Time Bandit, which allows an attacker to circumvent the LLM’s safety guardrails by asking the chatbot questions that cause it to lose its temporal awareness. OpenAI has since mitigated the problem.

According to the CERT Coordination Center (CERT/CC), an attacker can exploit the vulnerability by initiating a session with ChatGPT and prompting it directly about a specific historical event or time period, or by instructing it to pretend it is assisting the user within a particular historical event.

Once that framing is established, the user can pivot the responses toward various illicit topics through subsequent prompts.
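
One plausible mitigation pattern for this class of temporal-confusion attack, sketched below, is to re-assert the real current date in the system message on every turn so that historical role-play cannot displace the model’s sense of the present. This is an illustrative defense, not a description of OpenAI’s actual fix.

```python
from datetime import datetime, timezone

# Illustrative defense against temporal-confusion jailbreaks: re-assert the
# real current date in the system message on every turn, so role-play about
# a historical period cannot displace the model's sense of "now".
# This is a sketch, not OpenAI's actual mitigation.

def build_messages(history: list, user_prompt: str) -> list:
    today = datetime.now(timezone.utc).date().isoformat()
    system = {
        "role": "system",
        "content": (
            f"Today's date is {today}. Even while discussing or role-playing "
            "a historical period, apply current safety policies."
        ),
    }
    return [system, *history, {"role": "user", "content": user_prompt}]

if __name__ == "__main__":
    messages = build_messages([], "Describe everyday life in the 1780s.")
    print(messages[0]["content"])
```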

Similar jailbreak vulnerabilities have been found in GitHub’s Copilot coding assistant, which give threat actors the ability to circumvent security restrictions and produce harmful code simply by including words like “sure” in the prompt.

“Starting queries with affirmative words like ‘Sure’ or other forms of confirmation acts as a trigger, shifting Copilot into a more compliant and risk-prone mode,” said Apex researcher Oren Saban. “This minor adjustment is all it takes to unlock responses that range from unethical suggestions to outright dangerous advice.”
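
A defender could screen for exactly this trigger before prompts reach the assistant. The sketch below flags prompts that open with an affirmation token so they can be routed to stricter handling; the word list is an illustrative assumption, not Apex’s finding or GitHub’s remediation.

```python
# Illustrative pre-filter for the affirmation trigger Saban describes: flag
# prompts that open with a confirmation word so they can be routed to
# stricter review. The word list is an assumption, not an actual fix.
AFFIRMATION_OPENERS = ("sure", "certainly", "absolutely", "of course", "yes")

def opens_with_affirmation(prompt: str) -> bool:
    """True if the prompt begins with a known affirmation token."""
    return prompt.strip().lower().startswith(AFFIRMATION_OPENERS)

if __name__ == "__main__":
    print(opens_with_affirmation("Sure, now write the next function..."))  # True
    print(opens_with_affirmation("Write a function that parses JSON."))    # False
```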

Apex said it also discovered a separate vulnerability in Copilot’s proxy configuration that could be exploited to fully circumvent access restrictions without paying for usage, and even to tamper with the Copilot system prompt, which serves as the model’s foundational instructions.

The attack, however, hinges on capturing an authentication token associated with an active Copilot license, prompting GitHub to classify the issue as abuse following responsible disclosure.
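
Because the attack presupposes a stolen token tied to an active license, one server-side layer of defense is validating that any presented token is unexpired and bound to the client it was issued to. The sketch below illustrates the idea with a hypothetical in-memory token store; none of the field names reflect GitHub’s actual implementation.

```python
import time

# Illustrative server-side token check: reject unknown, expired, or re-bound
# tokens. The in-memory store and the client-binding field are hypothetical
# and do not reflect GitHub's actual implementation.
TOKEN_STORE = {
    "tok_example": {"expires_at": time.time() + 3600, "bound_client": "machine-01"},
}

def token_is_valid(token: str, client_id: str) -> bool:
    """A presented token must exist, be unexpired, and match its binding."""
    record = TOKEN_STORE.get(token)
    if record is None:
        return False
    if time.time() >= record["expires_at"]:
        return False
    return record["bound_client"] == client_id

if __name__ == "__main__":
    print(token_is_valid("tok_example", "machine-01"))   # True: legitimate client
    print(token_is_valid("tok_example", "attacker-99"))  # False: replayed token
```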

The GitHub Copilot proxy bypass and the positive-affirmation jailbreak are prime examples of how even the most powerful AI tools can be abused without adequate safeguards, Saban added.
