An Urgent Briefing on Existential Risk

AI is making biological weapons accessible to anyone.

For decades, catastrophic bioterrorism was blocked by a single barrier: you needed to be an expert. Artificial intelligence is eliminating that barrier — quietly, rapidly, and right now.

Understand the Risk
94% of virology experts outperformed by AI on lab protocol questions
ASL-3 safety classification triggered by Claude Opus 4's CBRN capabilities — Anthropic's highest risk tier
80% accuracy of LLMs on bioweapon-release questions (up from 15% in 2023)
3M potential casualties from a single aerosolized anthrax attack on a US city

The Core Problem

The Expert Barrier is Collapsing

Every major bioterror attack in history was either carried out by trained experts or failed due to technical incompetence. AI removes the incompetence. It democratizes expertise — including the kind that kills millions.

🔬
Before AI: High Barriers
  • Needed PhD-level virology expertise
  • Required access to specialized labs
  • Technical failures were common and fatal to plots
  • Equipment was expensive and rare
  • Secrecy conflicted with needing expert collaborators
  • Dispersal methods required engineering expertise
🤖
After AI: Barriers Eliminated
  • AI outperforms 94% of expert virologists on protocols
  • Expert troubleshooting available instantly, anonymously
  • Technical mistakes correctable in real time by AI
  • Lab equipment now commercially available and affordable
  • Secrecy preserved — no human expert accomplices needed
  • Drone dispersal technology widely available commercially
How AI Changes the Barrier to Catastrophic Harm — Required Expertise Level
Stage                   | Before AI               | After AI
Pathogen Selection      | High expertise required | AI-guided, accessible
Acquisition / Synthesis | High expertise required | AI-guided, accessible
Weaponization           | High expertise required | AI-guided, accessible
Dispersal Method        | High expertise required | AI-guided, accessible
Evading Detection       | High expertise required | AI-guided, accessible

Real History

These Attacks Actually Happened

These aren't hypotheticals. They are documented history. The only reason many failed was technical incompetence — the exact gap AI now fills.

2001 — USA
The Anthrax Letters
In the weeks following 9/11, letters containing weaponized anthrax spores were mailed to media outlets and two US senators. The perpetrator, Bruce Ivins, was a veteran Army biological-weapons researcher — one of America's top anthrax experts.

The spores had been processed with specialized additives to increase their inhalability, requiring highly sophisticated technical knowledge. This was not amateur work.
● Attack Succeeded
Casualties
5 dead · 17 injured[source]
What AI Changes About This
🧪
The expert requirement disappears. Ivins needed decades of government training to weaponize anthrax effectively. AI can provide equivalent technical guidance to anyone with a basic science background.
🔒
Insider threats multiply. If Ivins — a trusted government scientist — could do this, AI empowers that same threat from far more actors with far less traceable access to expertise.
📈
Scale could be dramatically larger. Congressional analysis estimates aerosolizing anthrax over Washington D.C. could cause 130,000–3 million casualties.[source] The 2001 attack was targeted. A mass-casualty attempt is the next step.
1984 — Oregon, USA
The Rajneeshee Bioterror Attack
Members of the Rajneeshee cult deliberately contaminated salad bars at ten restaurants with Salmonella to incapacitate voters before a local election. They cultivated the bacteria in what authorities described as a "fairly sophisticated medical research laboratory" at their commune.

The bacteria had been purchased over the counter from a scientific supply house. This remains the largest bioterrorist attack in U.S. history.[source]
● Attack Succeeded
Casualties
751 poisoned · 45 hospitalized[source]
What AI Changes About This
🦠
More lethal agents, lower barrier. The cult explored using HIV and other lethal pathogens but lacked the expertise. AI fills that knowledge gap, enabling more dangerous pathogens to be used with minimal biological training.
🌐
Supply acquisition goes underground. The cult was caught partly because large orders flagged suppliers. Dark web markets and AI-assisted synthesis routes reduce or eliminate that detection point.
⚙️
Dispersal optimization. Modern drone technology and AI-optimized delivery could turn a food-contamination attack into a large-scale aerosol event with far greater reach.
1993 — Tokyo, Japan
Aum Shinrikyo Anthrax Attack
In July 1993, the Aum Shinrikyo doomsday cult aerosolized anthrax from the roof of an 8-story building in Tokyo, attempting to kill thousands. The attack killed no one — due to a cascade of technical failures.

They used the wrong strain (a vaccine-grade strain that couldn't cause disease), achieved insufficient spore concentrations, and their aerosolization equipment failed. The cult had full labs, resources, and scientifically trained members — and still couldn't pull it off.
● Attack Failed — Technical Errors
Intent vs. Outcome
Thousands targeted · 0 killed
What AI Would Have Changed
🤖
Every failure point AI eliminates. The wrong strain, wrong concentration, wrong dispersal technique — AI would have corrected all three in real time. OpenAI's o3 model outperforms 94% of expert virologists on exactly these types of questions.
🔐
No need to involve outside experts. Aum failed partly because they kept weapons development secret within leadership, limiting their expertise. With AI, secrecy and technical competence are no longer in conflict.
🚁
Modern dispersal technology. Drone-based aerosol dispersal has advanced enormously since 1993. Commercial crop-duster drones are now cheap, precise, and widely available — the equipment failures that doomed Aum are obsolete problems.
2018 — Cologne, Germany
The Cologne Ricin Bomb Plot
A 29-year-old ISIS sympathizer successfully produced ricin — a biological toxin with no known antidote — and assembled a bomb intended for use in a crowded venue. He ordered 3,300 castor seeds along with explosive components on the internet.[source]

This was the first time a jihadi terrorist in the West had successfully produced this toxin. He was only caught because the CIA flagged his large castor seed order to German intelligence.
● Plot Stopped — Surveillance
Potential at Time of Arrest
1,000 toxic doses ready[source]
What AI Changes About This
👤
Acquisition goes invisible. He was caught because of a large internet order. VPNs, Tor, dark web markets, and AI-assisted synthesis guidance using locally available materials would make this surveillance-proof.
⚗️
Production quality improves dramatically. He produced ricin, but not at professional potency. AI could guide him to optimize for maximum lethality using freely available chemistry knowledge.
📊
Scale is the chilling part. He built 1,000 doses with basic internet skills. With AI guidance and optimized delivery, a motivated individual could cause mass casualties at a concert, stadium, or transit hub.
2014 — Syria/Iraq
ISIS "Laptop of Doom"
A laptop seized from a Tunisian ISIS operative in 2014 contained a comprehensive biological weapons research program. The operative had university-level scientific training.

The laptop contained a 19-page document in Arabic[source] detailing how to develop and weaponize biological agents for mass-casualty attacks in metro systems, stadiums, and other crowded venues. ISIS was actively working to develop bioweapons with real scientific personnel.
● Program Disrupted
If Successful
Mass casualty event in Western city
What AI Changes About This
🌍
No need for a trained scientist. ISIS had to recruit someone with university-level training. With AI, any motivated individual can access equivalent expertise without institutional education or institutional traceability.
📱
Distributed, cell-based operations. Centralized programs get seized. AI enables fully distributed operations where each cell operates independently with AI as their silent expert — no single point of failure.
🔬
Pathogen engineering becomes accessible. Synthetic biology techniques increasingly allow pathogen modification without traditional expertise. AI models can guide that process step by step, from design to execution.
1937–1945 — Manchuria, China
Unit 731 — Imperial Japan's Bioweapon Program
The Imperial Japanese Army's top-secret Unit 731 conducted lethal biological weapons experiments on prisoners of war and deployed biological weapons against Chinese civilians. This was state-sponsored biowarfare at industrial scale — not a fringe group or lone actor, but a government program with full military resources.

Unit 731 developed and weaponized anthrax, plague, cholera, and typhoid for deployment against civilian populations. Prisoners were subjected to forced infection, vivisection, and weapons testing. The program operated with the full knowledge and resources of the Imperial Japanese military establishment.
● State-Sponsored Deployment
Death Toll
Tens of thousands killed
What AI Changes About State Bioweapons Programs
🏛️
Governments already have the will. The barrier for state actors has never been motivation — it's been technical capability and blowback risk. AI dramatically accelerates the technical side while potentially enabling more targeted, attribution-resistant weapons that reduce blowback risk.
🤖
Private model access. Governments like China and the US can negotiate direct access to AI models via national security frameworks and defense contracts — including variants without the same safety constraints as public models.
🌍
The threat is active today. In 2024, the Washington Post reported satellite imagery showing expansion of a notorious Cold War-era Russian bioweapons laboratory.[source] China has already been linked to AI-aided bioweapon development and an alleged attempt to smuggle an agroterrorism weapon into the US.

The AI Factor

How AI Multiplies the Threat

This isn't speculation. Leading AI labs have tested their own models and were alarmed enough to classify them as the highest-risk category before public release.

🧬
Expert-Level Virology Knowledge
OpenAI's o3 model outperforms 94% of professional virologists on laboratory protocol questions — including questions directly relevant to their specific specialties. This expertise is available to anyone with an internet connection.[source]
94% of experts outperformed (Virology Capabilities Test, 2025)
⬆️
Anthropic's Own Red Team Alarm
Anthropic's own evaluations found that Claude Opus 4 showed "clearly superior performance" on CBRN-related tasks compared to previous models — enough to trigger its highest internal safety classification, ASL-3.[source][source] ASL-3 is reserved for models that could meaningfully assist non-experts in acquiring or deploying CBRN weapons. External red-teamers reported Claude Opus 4 "performed qualitatively differently from any model they previously tested."
ASL-3 classification activated — Anthropic's highest CBRN risk tier
📈
Rapid Capability Acceleration
In 2024 alone, LLM accuracy on questions about the release of bioweapons jumped from 15% to 80%.[source] Biological AI models' ability to predict how proteins interact with small molecules — relevant to both medicine and weapons — rose from 42% to 90%.
15% → 80% accuracy on bioweapon questions in a single year
🚁
New Delivery Technology
Commercial drones and modern aerosol technologies have fundamentally solved the dispersal problems that caused historical attacks to fail. What required military-grade equipment in the 1990s now costs a few hundred dollars on Amazon.
Commercial crop-duster drones: civilian-accessible, AI-guidable
🔬
Synthetic Biology Access
Synthetic biology techniques allow pathogen creation and modification without traditional microbiology expertise. Gene synthesis costs have dropped 10,000x since 2000. AI can guide this process with specialized knowledge unavailable in textbooks.
Gene synthesis costs: 99.99% cheaper since 2000
🕵️
Surveillance Evasion
Modern privacy tools — encrypted communications, Tor, VPNs, dark web procurement, cryptocurrency — mean the online footprints that historically enabled intelligence services to intercept plots are increasingly invisible.
Surveillance detection that stopped Cologne plot: increasingly obsolete

AI Bioweapon Capability — Growth in 2024

LLM accuracy on bioweapon release questions: 80%
Protein interaction prediction (relevant to weapon design): 90%
Virologists outperformed by AI on specialized questions: 94%

Beyond Chatbots

The Real Danger: Biological Design Tools

LLMs like ChatGPT are concerning. But AI-powered biological design tools are in a different threat category entirely — they don't just answer questions about pathogens. They design them.

🧬
AlphaFold & Protein Design
DeepMind's AlphaFold revolutionized protein structure prediction. Tools like RFdiffusion and LigandMPNN enable design of novel proteins from scratch — including pathogen components and toxins. 61.5% of high-risk biological design tools are open-source and freely available.[source]
61.5% of high-risk design tools: open-source, no access control
☠️
MegaSyn: 40,000 Toxic Molecules in 6 Hours
In 2022, researchers at Collaborations Pharmaceuticals inverted a drug-safety AI model (designed to filter out toxic molecules) and generated over 40,000 novel toxic compounds — including nerve-agent analogs — in 6 hours on a consumer laptop.[source] This was a proof-of-concept. The capability exists.
40,000 toxic compounds generated in 6 hours — consumer hardware
🔄
Fine-Tuning Bypasses Safety Filters
Research on the Evo 2 biological foundation model shows that fine-tuning on related viral sequences can recover restricted biological capabilities within approximately 50 training steps — less than one hour on a single GPU.[source] Safety filters designed into base models can be bypassed by anyone with modest computational resources.
Evo 2: restricted capabilities recovered in <1 GPU-hour
🧪
The Tacit Knowledge Barrier — Narrowing Fast
Academic research (Collins) shows that lab competence — pipetting technique, visual culture assessment, contamination detection — transfers only through direct human contact, not text. This gap currently limits non-expert bioterrorists. But robotic lab automation and AI-guided protocols are systematically closing it.
Key remaining barrier: hands-on lab skill — being automated away
🧬
DNA Synthesis: Voluntary Screening Only
The International Gene Synthesis Consortium (IGSC) operates voluntary DNA synthesis screening — meaning companies outside the consortium are under no obligation to check what genes they're producing. A 2024 US framework mandated screening only for federally funded research, leaving commercial and foreign providers unregulated.
Synthesis screening: voluntary, not mandatory — gaps remain
📊
The Differential Uplift Paradox
AI provides greater relative uplift to novices (attack planning, conceptual knowledge) but greater absolute capability uplift to sophisticated actors (novel pathogen design, optimization). Both ends of the threat spectrum are amplified — for very different reasons.
Both novice planners and expert designers are uplifted — differently
AI functions as a biosecurity risk amplifier — not creator. It changes the accessibility, speed, and potential severity of existing threats. The question is not whether this changes the threat landscape. It already has.
— Bryan Tegomoh, MD, MPH · UC Berkeley School of Public Health · biosecurityhandbook.com

The Threat Landscape

What We're Actually Talking About

These are not movie-plot threats. They are classified by the CDC as highest-priority bioterrorism agents. Each entry below shows the scale of harm possible.

🧫
Anthrax
Bacillus anthracis
Tier 1
Max casualties (urban): 130,000–3,000,000
Untreated mortality: inhalational anthrax is nearly always fatal
Survival window: days
Non-transmissible but environmentally persistent. Tiny spores can be silently released. Congressional analysis: 100kg over Washington D.C. = 130,000–3,000,000 casualties.[source]

Why AI changes this: Weaponizing anthrax requires highly specialized knowledge about spore processing. The 2001 attacker was a career government bioweapons researcher. AI provides equivalent guidance to anyone.

Detection window: Symptoms appear 1–5 days post-exposure, often resembling flu. By the time diagnosis is confirmed, exposure may be complete and mass casualty response overwhelmed.

⚗️
Botulinum Toxin
Clostridium botulinum
Tier 1
Deaths from 1g inhaled: over 1,000,000
Lethal inhaled dose: 0.9μg[source]
Ranking: #1 most lethal substance known
The most lethal substance known to science. A single gram evenly dispersed and inhaled can kill over one million people.[source] Can contaminate food or water supplies or be deployed as an aerosol.

Why AI changes this: Aum Shinrikyo attempted botulinum attacks and failed due to technical production challenges. AI eliminates those barriers by providing expert-level troubleshooting on production.

Scale context: Aum aerosolized botulinum toxin at multiple locations in Tokyo, including US Navy bases, between 1990–1995. All attempts failed — but with AI, the technical knowledge to succeed is now democratized.

💀
Smallpox
Variola virus
Critical
Fatality rate: ~30%
Global immunity: ~0%
Potential victims: billions
Eradicated in 1980 — meaning almost no one alive has immunity. ~30% fatality rate (variola major).[source] Only two officially acknowledged stockpiles exist, but security concerns are significant.

Why AI changes this: "Using its genetic sequence, someone could make smallpox from scratch today."[source] DNA synthesis technology and AI-assisted design make recreating extinct pathogens an increasingly realistic threat.

The stockpile problem: In 2014, six unlabeled freeze-dried vials of smallpox were found in a cardboard box in a storage room at the NIH.[source] They were not in a high-security lab. Unknown origin. Unknown how they got there.

🦠
Engineered Pandemic
AI-designed pathogen
Emerging
Upper bound: unknown
Risk tier activated: ASL-3
Timeline: now
Anthropic's chief scientist stated that current AI could help users "synthesize something like COVID or a more dangerous version of the flu."[source] This is not a future concern. It is present-day capability.

This is why ASL-3 was activated: Anthropic's evaluations found Claude Opus 4 showed "clearly superior performance" on CBRN-related tasks — enough that external red-teamers said it "performed qualitatively differently from any model they previously tested." ASL-3 is specifically for AI that could help non-experts deploy CBRN weapons.

Dario Amodei's words: "[AI has the potential to] greatly widen the range of actors with the technical capability to conduct a large-scale biological attack."[source] — CEO, Anthropic, the company that builds Claude.

Expert Consensus

The People Who Know Are Alarmed

This isn't coming from activists or alarmists. It's coming from the CEOs of the companies building the AI, from former government officials, and from the world's most authoritative safety bodies.

The biggest issue with AI is actually going to be… its use in biological conflict.[source]
Eric Schmidt
Former CEO, Google
[AI has the potential to] greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.[source]
Dario Amodei
CEO, Anthropic (makers of Claude AI)
Notable advances are LLMs' accuracy in answering questions about the release of bioweapons, which increased from 15% to 80%, and biological AIs' ability to predict how proteins interact with small molecules, which increased from 42% to 90% during 2024.[source]
International AI Safety Report 2024
UK Government / 30-nation scientific consensus
Claude Opus 4 could help users synthesize something like COVID or a more dangerous version of the flu — and basically, our modeling suggests that this might be possible.[source]
Jared Kaplan
Chief Science Officer, Anthropic

The Critical Question

If It's So Dangerous, Why Hasn't It Happened?

It almost has. Repeatedly. We've been lucky — but luck is not a biosecurity strategy, and the conditions that made us lucky are disappearing.

State-Level Threats

Governments Are Already Doing This

The biggest near-term threat isn't a lone actor with a laptop — it's state programs with classified compute, defense contracts, and no public accountability.

🇨🇳
China
China has already been linked to AI-aided bioweapon development efforts and was implicated in an alleged attempt to smuggle an agroterrorism weapon into the United States. Chinese researchers have direct access to frontier AI with fewer constraints than Western public models.
Active AI bioweapon development reported [Washington Post, 2024]
🇷🇺
Russia
In 2024, the Washington Post reported satellite imagery showing significant expansion of a notorious Cold War-era bioweapons laboratory. Russia is widely believed to be developing bioweapons for use in the Ukraine conflict. Russia also retains the world's largest declared smallpox stockpile, with historically inconsistent safety protocols.
Cold War bioweapons lab expansion confirmed via satellite imagery, 2024 [source]
⚠️
The AI Advantage for State Actors
Compared to terrorist organizations, governments have: (1) massive compute budgets; (2) access to classified infrastructure; (3) the ability to negotiate direct AI company collaboration via national security frameworks; and (4) access to private model variants without public safety constraints. Publicly available models already outperform 94% of expert virologists — state actors access far more capable systems.
State actors can circumvent public AI safety constraints entirely

What Comes Next

This Is Solvable. But Not Without Pressure.

Biosecurity experts, AI safety researchers, and policy advocates are working on this — but public awareness is the missing piece. The people making decisions about AI regulation need to know that voters understand what's at stake.

📢
Share This
The fastest way to create political will is public awareness. Share this page with people who haven't heard of AI biosecurity risks. Tag your representatives. Make this issue visible.
📋
Learn from the Experts
Organizations like the Nuclear Threat Initiative's biosecurity program (NTI | bio) and the Johns Hopkins Center for Health Security publish policy frameworks and research on this exact issue. Read them. Cite them.
NTI Biosecurity Program
🏛️
Contact Your Representatives
AI biosecurity policy is being made right now. Congress and parliaments worldwide are drafting AI regulation. Demand that biosecurity red-teaming, capability evaluations, and mandatory safeguards are included.
🔬
Support Biosecurity Research
Defensive biosecurity — better detection, faster vaccines, improved attribution — is underfunded relative to the risk. Organizations working on pandemic preparedness and biodefense need support.
Johns Hopkins Center for Health Security
🧬
Demand Mandatory DNA Synthesis Screening
There is currently no universal legal requirement for gene synthesis providers to screen DNA orders for dangerous sequences. In 2006, a Guardian journalist mail-ordered a modified sequence of smallpox DNA — and it wasn't screened because it was under 100 letters long.[source] Existing voluntary screening misses short sequences and non-participating providers entirely.
Read the policy case
🔬
Support Mandatory AI Red-Teaming Laws
Without legal mandates, AI companies are incentivized to rush deployment to stay competitive. xAI and DeepSeek both scored 'F' in the risk assessment category of the AI Safety Index (Summer 2025).[source] California's SB 53 requires catastrophic risk assessments — but there is no federal equivalent, and companies are lobbying to pre-empt state laws.
AI Safety Index
We are at a crossroads. The same technology that could help cure diseases could help create them. The difference is governance, oversight, and the choices we make in the next few years — before the capability gap closes permanently.
— Based on the research document "Mapping Emergent AI-Aided Biosecurity Risks"