When DeepSeek first appeared in app stores a few weeks ago, promising the same kind of high-performing artificial intelligence as well-known players like OpenAI and Google at a much lower price, it rocked the tech industry and the financial markets.
Some in government and data security circles worry that DeepSeek, a newly popular open-source AI assistant, has connections to China that could put American data in danger, much as they worried about the social media platform TikTok, which members of Congress overwhelmingly voted to ban last year.
Those concerns aren’t limited to DeepSeek. Beyond the national security debates playing out in legislative chambers, anyone downloading AI chatbot apps onto their phone should keep a few things in mind. We describe some useful tips below.
The Chinese Communist Party’s ability to access information collected by DeepSeek and other Chinese-owned apps, along with the potential for DeepSeek to be used to spread Chinese disinformation, prompted two US House members on Thursday to call for a ban on the app on all government devices.
“This is a five-alarm national security fire,” US Rep. Josh Gottheimer, a New Jersey Democrat, said in a statement, adding that the country can’t risk China being able to “infiltrate” the devices of government workers and potentially put national security at risk.
“We’ve seen China’s playbook before with TikTok, and we cannot allow it to happen again,” Gottheimer said.
Australia banned the app from all federal government devices last week, and US states are beginning to follow suit: on Monday, the governor of New York announced that DeepSeek is prohibited on state-run devices and networks.
DeepSeek’s ties to China, along with its wild popularity in the US and the media hype surrounding it, make for an easy comparison to TikTok. But security experts say that while DeepSeek’s data security risks are real, they’re different from those of the social media platform.
Even though DeepSeek may be the hottest new AI assistant right now, there are numerous new AI models and versions coming out, making it important to be cautious when using any kind of AI software.
In the meantime, it’s going to be a tough sell to get the average person to avoid downloading and using DeepSeek, said Dimitri Sirota, CEO of BigID, a cybersecurity company that specializes in AI security compliance.
“I think it’s tempting, especially for something that’s been in the news so much,” he said. “I think to some degree, people just need to make sure they operate within a certain set of parameters.”
Why are people worried about DeepSeek?
Like TikTok, DeepSeek has ties to China, and user data is sent back to cloud servers there. And like TikTok, which is owned by China-based ByteDance, DeepSeek is required by Chinese law to hand user data over to the government if the government requests it.
With TikTok, lawmakers on both sides of the aisle expressed concern that the Chinese Communist Party might use US user data for intelligence purposes, or that the app itself might be modified to flood American users with Chinese propaganda. In the end, those concerns led Congress to pass a law last year that would ban TikTok unless it was sold to a buyer deemed acceptable by US officials.
But getting a handle on DeepSeek, or any other AI, isn’t as simple as banning an app. Unlike TikTok, which companies, governments and individuals can choose to avoid, DeepSeek is something people might end up encountering, and handing information to, without even knowing it.
The average consumer probably won’t even know what AI model they’re interacting with, Sirota said. Depending on the tasks at hand, many companies already run more than one kind of AI model, and the “brain,” or specific AI model powering a given chatbot, could even be swapped for another in the company’s collection while the consumer interacts with it.
Meanwhile, there is no stopping the buzz about AI in general. More models from other companies, including some that will be open-source like DeepSeek, are on the way and will certainly grab the attention of companies and consumers.
As a result, focusing on DeepSeek alone addresses only some of the data security risks, said Kelcey Morgan, Rapid7’s senior manager of product management.
Instead of focusing on the model that is currently in the spotlight, businesses and consumers must determine the level of risk they want to take with all forms of AI, as well as implement policies to safeguard data.
“That’s regardless of whatever hot thing comes out next week,” Morgan said.
Could the Chinese Communist Party use DeepSeek data to gather intelligence?
According to cybersecurity experts, China has the resources to mine the sizable amount of data DeepSeek has gathered and combine it with data from other sources to create profiles of American users.
“I do believe we’ve entered a new era where compute is no longer the constraint,” Sirota said, citing the capabilities of companies like Palantir Technologies, which makes software that lets US agencies gather large amounts of data for intelligence purposes. China has the same kinds of capabilities.
As with TikTok, the people playing around with DeepSeek now may be young and largely unimportant. But China is happy to play the long game, waiting to see whether any of them become influential and worth targeting, Sirota said.
Andrew Borene, executive director at Flashpoint, the world’s largest private provider of threat data and intelligence, said that’s something people in Washington, regardless of political leanings, have become increasingly aware of in recent years.
“We know that the policymakers are aware, we know the technology community is aware,” he said, adding that he isn’t sure the average American consumer is aware of those risks, of where that data goes, or of why that might be a concern.
Borene urged all users to keep in mind that their data could end up in the hands of Chinese officials, and said anyone working in government should exercise the “highest levels of caution” if they use the app at all.
“That’s an important factor to consider,” he said. “You didn’t need to read the privacy policy to know that.”
How to protect yourself when using DeepSeek or other AI models
Because much of the time it can be hard to know which AI model you’re actually interacting with, experts advise using any of them with care.
Here are some helpful hints.
Be smart with AI, just as you are with everything else online. The traditional security best practices still apply here: set long, complicated and unique passwords, enable multifactor authentication whenever you can, and keep all your devices and software updated.
Keep personal info personal. Think twice before entering personal information into an AI chatbot. Yes, this covers obvious no-nos like Social Security numbers and banking information, but also the kinds of details that might not automatically set off alarm bells, like your address, place of employment, and the names of friends or coworkers.
Be skeptical. Just as you’d be wary of requests for information that arrive in emails, texts or social media posts, be wary of what an AI chatbot asks of you. Think of it like a first date, Sirota said: if a model you’re using for the first time asks strangely personal questions, walk away.
Don’t rush to be an early adopter. Morgan said that just because an AI or app is popular doesn’t mean you have to have it right away. Decide for yourself how much of a risk you want to take with brand-new software.
Read the terms and conditions. Yes, it’s a lot of reading, but before handing your data over to any app or piece of software, look through these statements to get an idea of where your data is going, what it will be used for, and who it might be shared with. According to Borene, those statements can also reveal whether an AI or app is collecting data from, and sharing it with, other devices. If that’s the case, turn those permissions off.
Be aware of America’s adversaries. According to Borene, apps made in China should be viewed with suspicion, as should those made in other adversarial nations such as Russia, Iran, or North Korea. Regardless of what the terms and conditions say, the privacy rights you might enjoy in places like the US or the European Union don’t extend to those apps.