Signal president warns that hyped agentic AI bots threaten user privacy
- Meredith Whittaker, the president of Signal, said agentic AI poses major security risks to users.
- Agentic AI refers to AI systems that can plan and complete tasks for people without their input.
- But having an AI agent complete tasks for users means giving it access to scads of information, Whittaker said.
Signal President Meredith Whittaker is skeptical about agentic AI — that is, AI agents that can complete tasks or make decisions without human input.
While some tech titans have touted how useful agentic AI can be, Whittaker warned of the privacy risks posed by the intelligent agents while speaking at the SXSW 2025 Conference and Festivals in Austin on Friday.
“I think there’s a real threat that we’re facing,” Whittaker said, “in part because what we’re doing is giving so much power to these systems that are going to have access to data.”
Whittaker is the chair of the nonprofit Signal Technology Foundation, which runs the end-to-end encrypted Signal messaging app known for its security.
An AI agent is marketed as a “magic fairy bot” that can think many steps ahead and complete tasks for users so that “your brain can sit in a jar, and you’re not doing any of that yourself,” Whittaker said.
As an example, she said agentic AI could perform tasks like finding a concert, booking reservations, and opening an app like Signal to text companions with concert ticket information. But at every step in that process, the AI agent would access data that the user may want to keep private, she said.
“It would need access to our browser, an ability to drive that. It would need our credit card information to pay for the tickets. It would need access to our calendar, everything we’re doing, everyone we’re meeting. It would need access to Signal to open and send that message to our friends,” she said. “It would need to be able to drive that across our entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear because there’s no model to do that encrypted.”
Whittaker added that an AI agent powerful enough to do that would “almost certainly” process data off-device by sending it to a cloud server and back.
“So there’s a profound issue with security and privacy that is haunting this sort of hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services, muddying their data, and doing things like undermining the privacy of your Signal messages,” she said.
Whittaker isn’t the only one worried about the risks posed by agentic AI.
Yoshua Bengio, the Canadian research scientist regarded as one of the “godfathers of AI,” issued a similar warning while speaking to Business Insider at the World Economic Forum in Davos in January.
“All of the catastrophic scenarios with AGI or superintelligence happen if we have agents,” Bengio said, referring to artificial general intelligence, the threshold at which machines can reason as well as humans can.
“We could advance our science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where it’s coming from, and then do the technological investment to make it happen before it’s too late, and we build things that can destroy us,” Bengio said.