
Google is once again leaning into its Gemini model, this time with a focus on security. You see, the search giant has announced Sec-Gemini v1, an experimental new AI model. It is designed to help cybersecurity professionals fight back against digital threats using real-time data and advanced reasoning. Because AI makes everything better, right?
Look, folks, attackers only need to get lucky once, while defenders have to be right every time. That unfair imbalance has made security a struggle for many organizations. Google is hoping AI can shift the odds, giving defenders a slight edge.
Sec-Gemini v1 is built on top of Gemini, but it’s not just some rebranded chatbot. Actually, it has been tailored with security in mind, pulling in fresh data from sources like Google Threat Intelligence, the OSV vulnerability database, and Mandiant’s threat reports. This gives it the ability to assist with root cause analysis, threat identification, and vulnerability triage.
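For a sense of what one of those sources actually contains, OSV is a public database with a documented query API. Here is a minimal sketch of pulling vulnerability records from it using Python's standard library; the package and version queried are arbitrary examples and have nothing to do with how Sec-Gemini ingests the data internally.

```python
# Minimal sketch: query the public OSV API (https://api.osv.dev) for known
# vulnerabilities affecting a specific package version. The package chosen
# below is just an illustrative example.
import json
import urllib.request

def query_osv(name: str, ecosystem: str, version: str) -> list:
    """Return OSV vulnerability records for a given package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The API returns {"vulns": [...]} or an empty object if none match.
        return json.load(resp).get("vulns", [])

# Print advisory IDs and summaries for an old PyPI release.
for vuln in query_osv("jinja2", "PyPI", "2.4.1"):
    print(vuln["id"], "-", vuln.get("summary", ""))
```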
Google says the model performs better than others on two well-known benchmarks. On CTI-MCQ, which measures how well models understand threat intelligence, it scores at least 11 percent higher than the competition. On CTI-Root Cause Mapping, it edges out rivals by at least 10.5 percent. Benchmarks only tell part of the story, but those numbers suggest it’s doing something right.
The obvious question is whether it’s truly useful or just another shiny AI tool with limited real-world value. The security market has seen plenty of promises that didn’t quite deliver. Google obviously hopes this one will be different.
Right now, Sec-Gemini v1 isn’t being made widely available. Google is offering early access to select organizations, institutions, researchers, and security professionals. It’s meant for research and testing, at least for now. If you meet those criteria, you can request access here.