New Delhi: OpenAI has laid out a new cyber defence plan at a time when AI is making online attacks faster, cheaper, and frankly, a bit scarier for everyone from big banks to small schools. The company’s April 2026 paper, titled Cybersecurity in the Intelligence Age, says AI is now helping both sides of the cyber fight, with defenders using it to find weak spots and attackers using it for phishing, malware work, and faster online scouting.
The report, credited to Sasha Baker, Head of National Security Policy at OpenAI, says the company wants to put advanced AI tools into the hands of “trusted defenders” rather than keep them locked away for a small circle. Its central warning is blunt: “Attackers will not wait.” That pretty much sums up the mood of the paper.
> we’re starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days.
>
> we will work with the entire ecosystem and the government to figure out trusted access for cyber; we want to rapidly help secure companies/infrastructure.
>
> — Sam Altman (@sama) April 30, 2026
OpenAI’s cyber defence plan: What is changing?
OpenAI’s plan is built around five broad pillars. In simple words, it wants AI tools to help more cyber defenders, not just top labs or elite agencies.
| Pillar | What it means in simple words |
|---|---|
| Democratizing cyber defense | Give trusted defenders access to stronger AI cyber tools |
| Government and industry coordination | Share threat data faster between agencies, companies, and AI labs |
| Securing frontier cyber capabilities | Protect powerful models, systems, and sensitive knowledge |
| Visibility and control | Watch for misuse and restrict access when risk grows |
| Helping users protect themselves | Help regular people spot scams, phishing, and account risks |
The phrase “controlled acceleration” appears in the paper, and that is the basic idea here. Move fast, but keep locks on the doors. OpenAI says stronger access should come with stronger vetting, monitoring, security promises, and clear use cases.
Trusted Access for Cyber is the big piece
The main programme mentioned in the report is Trusted Access for Cyber, or TAC. OpenAI says it will give vetted cyber defence professionals a path to use more capable models for defensive work.
That could include government teams, security companies, cloud platforms, banks, critical infrastructure operators, and software supply-chain defenders. The paper also names smaller hospitals, school districts, water utilities, municipalities, and local infrastructure providers as groups that may need help through trusted intermediaries.
This matters in India too. Many smaller firms, colleges, hospitals, and local bodies do not have huge cyber teams. If AI tools can help them check suspicious emails, patch code, or respond to breaches faster, that could make a real difference.
Regular users are part of the plan, too
One of the more relatable parts of the report is about normal users. OpenAI says ChatGPT users already send over 15 million messages per month asking whether something is a scam. That sounds very believable. Most of us have seen those fake bank texts, delivery links, and “urgent KYC” messages by now.
OpenAI says AI can help people understand suspicious messages, secure accounts, use stronger passwords, turn on multifactor authentication, and recover after fraud or account compromise.