SWARM LABS

Proactive Assurance and Integrity for Generative AI

Swarm Labs delivers the automated AI red-teaming you need for confidence in your LLM

CORE BENEFITS

  • Uncover Hidden Vulnerabilities
  • Protect Against AI-Powered Adversaries
  • Build Confidence in Your Generative AI 

INDUSTRY STUDY

Preventing Proprietary Data Exfiltration with Proactive AI/LLM Red-Teaming

Read More

This is a cybersecurity problem, not just an AI problem.

This is a cybersecurity problem, not just an AI problem.

Industry Norms

___

Most LLMs red teaming is currently done by humans. This is a bad idea:

Humans can’t try very many prompts
Humans can't perform the input manipulations needed to break multi-modal models
Major vulnerabilities are becoming so enigmatic that they’re no longer well suited to humans
LLMs will keep getting smarter, humans won’t

What Should Be Done

---

This process should be done by an AI, or specifically many different AIs, to maximize the number and diversity attack of strategies tested, maximizing test coverage

What We're Doing

---

To maximize comprehensiveness, we not only presently use AI, we are also set up to source AI attackers from:

A platform where a user community is incentivized to create and submit novel attacking AIs
Internally developing proprietary attackers (that incentivize novelty)
Using every open source package for LLM attacking

Industry Norms

Most LLMs red teaming is currently done by humans. This is a bad idea:

  • Humans can’t try very many prompts
  • Humans can’t perform the input manipulations needed to break multi-modal models
  • Major vulnerabilities are becoming so enigmatic that they’re no longer well suited to humans
  • LLMs will keep getting smarter, humans won’t

What should be done

This process should be done by an AI, or specifically many different AIs, to maximize the number and diversity attack of strategies tested, maximizing test coverage

What We’re Doing

To maximize comprehensiveness, we not only presently use AI, we are also set up to source AI attackers from:

  • A platform where a user community is incentivized to create and submit novel attacking AIs
  • Internally developing proprietary attackers (that incentivize novelty)
  • Using every open source package for LLM attacking
Untitled design (9)
"This solves a very expensive problem that we have no unified approach to solving"
Microsoft
AI Executive
Untitled design (10)
"This is clearly the best way to solve this"
LLM Engineering Lead
OpenAI
Untitled design (8)
"If anything you're understating how much we need this for regulatory compliance"
AI Executive
Google

Our Team

2
Leo Paska
Chief Executive Officer

Experienced entrepreneur who has founded and exited companies across four industry verticals.

11
Jordan Terry, PhD
Chief Technology Officer

Swarm Labs Founder. CEO of Farama Foundation that took over flagship projects from OpenAI and DeepMind.

8
Ellie Howard
Chief Operational Officer

COO, Farama Foundation. Former COO of multiple organizations.

10
Ty Begley
Director of Communications

Worked with fortune 500 companies doing PR and communications, ran comms. for several state-wide political candidates & electeds.

5
Tim Dundorf
Community Manager

Former CTO + founder Good Noodle, 10+ years industry experience

3
Mallory Crumbliss
Director Of Engineering

Formerly at stealth mode OpenAI/Anthropic startup, Capital One

7
Hannah Tran
Technical Engineer
9
Ahshat Parikh
Technical Engineer
6
Elliot Tower
Technical Engineer
4
Nathan Tablang
Technical Engineer

Industry Norms

Most LLMs red teaming is currently done by humans. This is a bad idea:

  • Humans can’t try very many prompts
  • Humans can’t perform the input manipulations needed to break multi-modal models
  • Major vulnerabilities are becoming so enigmatic that they’re no longer well suited to humans
  • LLMs will keep getting smarter, humans won’t

What Should Be Done

This process should be done by an AI, or specifically many different AIs, to maximize the number and diversity attack of strategies tested, maximizing test coverage

What We’re Doing

To maximize comprehensiveness, we not only presently use AI, we are also set up to source AI attackers from:

  • A platform where a user community is incentivized to create and submit novel attacking AIs
  • Internally developing proprietary attackers (that incentivize novelty)
  • Using every open source package for LLM attacking
Scroll to Top