[Applications Open] AI Security Bootcamp Singapore—Apr 20–26, 2026

Link post

tl;dr

We’re running a 7-day intensive AI security program in Singapore for experienced security professionals who want to upskill on securing frontier AI systems. This is the second iteration of AISB—the first ran in London in August 2025. Bootcamp and accommodation costs are covered.

Apply by March 15, 2026.

Why this program exists

As AI systems become more capable and integrated into critical infrastructure, new attack surfaces and failure modes are emerging that traditional security training doesn’t cover. Current AI security resources tend to focus on application-layer vulnerabilities (prompt injection, jailbreaks) but the threat landscape for frontier AI systems is broader and more complex.

AISB is designed to fill that gap. We focus on risks that emerge as advanced AI systems become more common and are targeted by increasingly motivated attackers. The curriculum will cover scenarios like misuse by sophisticated adversaries, loss of control risks from advanced AI systems, governance interventions, and more.

Curriculum

NOTE: The most updated curriculum will be on the website https://​​aisb.dev/​​

The program runs 7 days with pre-work to establish ML fundamentals. Each day combines lectures, demos, guest speakers, and hands-on red/​blue exercises.

DAY 1: Introduction & Threat Modeling

  • Current threat landscape: frameworks, misuse (e.g., to assist cyberattacks), application security, infrastructure security

  • Future threat models: misalignment, model theft and tampering, integrity attacks (backdoors, trojans), governance guarantees

  • Mapping threat models to attacks, defenses, and follow-up pathways

  • Threat modeling exercise against an AI deployment, which we will attack and defend in future days

DAY 2: Adversarial Attacks, Watermarking & Data Security

  • Adversarial examples and attacks on image models

  • Trojans, backdoors, and fine-tuning attacks on open-source models

  • Model weight extraction attacks

  • Watermarking techniques and detection

  • Data security: weight security, training data protection, inference-time data handling

DAY 3: LLM Security

  • Jailbreaks, prompt injection, and RAG injection

  • Guardrails: Constitutional classifiers and linear probes for input and output monitoring

  • Abliteration and model editing techniques

  • Tokenization vulnerabilities

  • MCP (Model Context Protocol) security

DAY 4: Infrastructure Security

  • NVIDIA Container Toolkit exploits and case studies

  • GPU isolation and confidential computing

  • Sandbox design: containment, escape vectors, and design considerations

DAY 5: Weight security, Verification & Formal Methods

  • RAND report analysis and policy implications

  • Output verification using formal methods

  • Detecting and defending against rogue deployments

DAY 6: Data Center Security & ML Stack Threat Modeling

  • Data center infrastructure: power, networking, physical security

  • ML stack threat modeling end-to-end

  • Personnel security considerations for AI deployments

  • (Potential site visit—TBD) to a local data center for a behind-the-scenes look at real-world deployments

DAY 7: AI Control & Hardware Governance

  • AI control mechanisms and policy

  • Hardware supply chains and governance frameworks

  • Securing against treaty violations and governance guarantees

What you’ll learn

  1. How to develop threat models for frontier AI systems—including risks that scale with AI capability

  2. Hands-on skills across the full attack surface: adversarial techniques, infrastructure exploitation, supply chain attacks, and model-level vulnerabilities

  3. Security challenges that frontier AI organizations are actively working on, not yet covered in standard training curricula

  4. How to position for high-impact roles at AI labs, government programs, and research institutions

Who should apply

Security professionals with 5+ years of hands-on experience. We want a cohort spanning offensive security, incident response, threat intelligence, infrastructure, and application security backgrounds. No prior AI/​ML experience required. We provide comprehensive pre-work to cover the fundamentals.

Selection prioritizes candidates interested in frontier AI risk, high-consequence failure modes, or work involving sophisticated threat actors.

Logistics

  • When: April 20–26, 2026

  • Where: Central Singapore

  • Cohort size: 10–12 participants

  • Cost: Bootcamp and accommodation costs covered. Participants cover their own travel (limited support available on a case-by-case basis).

  • Application deadline: March 15, 2026

  • Decisions by: March 28, 2026

We’re also overlapping with Black Hat Asia (April 21–24) and running right before DEF CON Singapore (April 28–30), so if you’re already attending the latter, this fits naturally into the same trip.

Application process

  1. Fill out the application form at aisb.dev. We review applications on a rolling basis—early applications encouraged

  2. 3 levels of evaluation:

    1. Application evaluation

    2. Short work exercise

    3. 30 minute interview

  3. If you need a faster decision, note your deadlines in the form

Team

  • Pranav Gade (Program Lead) - Research engineer at Conjecture; created AISB to bridge AI safety and security

  • Nitzan Shulman (Security Lead) - Head of Cyber Security at Heron AI Security Initiative; 6+ years security research specializing in IoT, robotics, malware and AI security

  • Singapore AI Safety Hub (SASH) - Local execution and institutional support

Questions?

Reach out to pranav@aisb.dev or ask in the comments.

Apply here

No comments.