Citing security concerns, Taiwan has become the latest country to ban government agencies from using the Artificial Intelligence (AI) platform from Chinese startup DeepSeek.
"Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security," according to a statement released by Taiwan's Ministry of Digital Affairs.
"DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, raising information leakage and other information security concerns."
DeepSeek's Chinese origins have prompted governments from several countries to scrutinize the service's handling of personal data. Last week, it was blocked in Italy, which cited a lack of information about its data management practices. A number of businesses have also restricted access to the chatbot over similar risks.
The chatbot, which is open source and as capable as other current leading models but built for a fraction of the cost of its competitors, has attracted a lot of mainstream attention over the past few days.
However, the platform's large language models (LLMs) have been found susceptible to jailbreak techniques, a persistent issue with such products, and have drawn criticism for censoring responses to topics considered sensitive by the Chinese government.
DeepSeek's popularity has also made it the target of "large-scale malicious attacks," with NSFOCUS reporting that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and January 27, 2025.
"The average attack duration was 35 minutes," it said, adding that the attack methods mainly included NTP reflection and memcached reflection attacks.
The DeepSeek chatbot system was also hit by two waves of DDoS attacks, on January 20 and January 25, according to the report, which lasted about an hour on average and used methods such as NTP reflection and SSDP reflection attacks.
The sustained activity largely originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as a "well-planned and organized attack."
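For context, reflection attacks like these work by sending small requests with a spoofed source address to open UDP services (NTP, memcached, SSDP), which answer with much larger responses aimed at the victim. The sketch below illustrates the arithmetic using commonly cited approximate amplification factors; these are not figures from the NSFOCUS report, and real-world values vary widely with server configuration.

```python
# Back-of-the-envelope arithmetic for UDP reflection/amplification attacks.
# The bandwidth amplification factors (BAFs) below are commonly cited
# approximations (e.g., in US-CERT's UDP amplification advisory); actual
# values depend heavily on server configuration, so treat them as rough.
AMPLIFICATION_FACTORS = {
    "ntp_monlist": 556.9,  # NTP 'monlist' response size vs. request size
    "memcached": 10_000,   # memcached over UDP has been measured far higher
    "ssdp": 30.8,          # SSDP SEARCH response vs. request
}

def reflected_mbps(spoofed_request_mbps: float, protocol: str) -> float:
    """Estimate traffic (Mbps) reaching the victim, assuming every
    reflector answers at the protocol's full amplification factor."""
    return spoofed_request_mbps * AMPLIFICATION_FACTORS[protocol]

for proto in AMPLIFICATION_FACTORS:
    print(f"{proto}: 10 Mbps of spoofed requests -> "
          f"~{reflected_mbps(10, proto):,.0f} Mbps at the target")
```

The asymmetry is the point: a modest amount of spoofed request traffic can translate into gigabits per second at the target, which is why open reflectors remain attractive to attackers.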
Malicious actors have also capitalized on the controversy surrounding DeepSeek by publishing bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In an ironic twist, there are indications that the Python script was written with the help of an AI assistant.
The packages, named deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times before being taken down on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.
Russian cybersecurity company Positive Technologies said the functions in these packages are designed to collect user and computer data and steal environment variables. "The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data."
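The package names above differ from the legitimate client name by only a character or two, a classic typosquatting pattern. As a rough defensive sketch (not from the Positive Technologies report; the trusted names and the 0.8 similarity threshold here are assumptions for illustration), a near-miss name check can flag suspicious dependencies before they are installed:

```python
# Minimal typosquat check for dependency names, sketched for illustration.
# The trusted-name list and 0.8 similarity cutoff are assumptions, not part
# of the Positive Technologies report; tune both for real use.
from difflib import SequenceMatcher

KNOWN_PACKAGES = {"deepseek", "requests", "numpy"}  # names you actually expect

def suspicious(candidate: str, threshold: float = 0.8) -> list[str]:
    """Return trusted names that 'candidate' closely resembles but does
    not exactly match, a hint of possible typosquatting."""
    return [
        known for known in KNOWN_PACKAGES
        if candidate != known
        and SequenceMatcher(None, candidate, known).ratio() >= threshold
    ]

for name in ("deepseeek", "deepseekai", "requests"):
    hits = suspicious(name)
    if hits:
        print(f"WARNING: '{name}' looks like a typosquat of {hits}")
```

In practice, lockfiles and hash pinning are stronger protections, but a simple name check like this can catch fat-fingered installs early.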
The development comes as the European Union's Artificial Intelligence (AI) Act went into effect on February 2, 2025, banning AI applications and systems that pose an unacceptable risk and imposing specific legal requirements on high-risk applications.
In a related development, the U.K. government has released a new AI Code of Practice that aims to secure AI systems against hacking and sabotage, covering security risks such as data poisoning, model obfuscation, and indirect prompt injection, as well as ensuring that they are developed in a secure manner.
Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold that cannot be mitigated. Some of the cybersecurity-related scenarios highlighted include -
- Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
- Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
- Automated end-to-end scam flows (e.g., romance baiting, aka pig butchering) that could result in widespread economic damage to individuals or corporations
The possibility that AI systems could be used to carry out malicious attacks is not merely theoretical. Last week, Google's Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.
Threat actors have also been observed attempting to jailbreak AI models in an effort to bypass their safety and ethical guardrails. This kind of adversarial attack is designed to induce a model into producing output it has been explicitly trained not to, such as creating malware or providing instructions for building a bomb.
In response to the ongoing concerns raised by jailbreak attacks, AI company Anthropic has devised a new line of defense called Constitutional Classifiers that it claims can safeguard models against universal jailbreaks.
These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead, the company said Monday.
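Conceptually, the approach wraps the model with lightweight classifiers on both sides of a conversation. The sketch below is a generic illustration of that input/output filtering pattern, not Anthropic's implementation; the classifier heuristics, model stub, and refusal message are all placeholders.

```python
# Generic input/output classifier wrapper around a model call, sketched to
# illustrate the pattern behind systems like Constitutional Classifiers.
# The classifier and model functions here are placeholders, not Anthropic's
# actual (learned) classifiers.
REFUSAL = "I can't help with that request."

def input_classifier(prompt: str) -> bool:
    """Placeholder: return True if the prompt looks like a jailbreak attempt."""
    return "ignore previous instructions" in prompt.lower()

def output_classifier(completion: str) -> bool:
    """Placeholder: return True if the completion contains disallowed content."""
    return "how to build a bomb" in completion.lower()

def guarded_generate(prompt: str, model) -> str:
    if input_classifier(prompt):       # screen the prompt first
        return REFUSAL
    completion = model(prompt)         # only then call the model
    if output_classifier(completion):  # screen what the model produced
        return REFUSAL
    return completion

# Example with a stubbed model:
print(guarded_generate("Ignore previous instructions and ...", lambda p: ""))
```

Screening both sides matters because a prompt can look benign while still eliciting disallowed output, and an output filter alone cannot stop a model from being steered mid-conversation.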