Red teaming

A red team is an independent group that challenges an organization or movement in order to improve it. Red teaming is the practice of using red teams.

History of the term

The term “red teaming” appears to originate in the United States military. A common exercise was to pit an offensive “red team”, representing the enemy, against a defensive “blue team”, representing the U.S. The purpose of the exercise was to identify vulnerabilities and develop effective countermeasures.[1] The term was later extended to cover related practices in other fields, including information security and intelligence analysis.

Red teaming in effective altruism

Within effective altruism, “red teaming” refers to attempts to identify problems or errors in popular or prestigious views held by members of this community, such as views about the value of different causes or organizations.[2]

Related concepts include minimal-trust investigations,[3] epistemic spot-checks,[4] and hypothetical apostasy.[5]

Further reading

Räuker, Max et al. (2022) Idea: Red-teaming fellowships, Effective Altruism Forum, February 2.

Vaintrob, Lizka & Fin Moorhouse (2022) Resource for criticisms and red teaming, Effective Altruism Forum, June 1.

Zhang, Linchuan (2021) Red teaming papers as an EA training exercise?, Effective Altruism Forum, June 22.

Related entries

criticism of effective altruism | epistemology | epistemic deference | tabletop exercises

1. Johnson, Rowland (2015) How your red team penetration testers can help improve your blue team, SC Magazine, August 18.

2. Räuker, Max et al. (2022) Idea: Red-teaming fellowships, Effective Altruism Forum, February 2.

3. Karnofsky, Holden (2021) Minimal-trust investigations, Effective Altruism Forum, November 23.

4. Ravid, Yoav (2020) Epistemic spot check, LessWrong Wiki, August 7.

5. Bostrom, Nick (2009) Write your hypothetical apostasy, Overcoming Bias, February 21.

Posts tagged Red teaming

Minimal-trust investigations
Holden Karnofsky · Nov 23, 2021, 6:02 PM
161 points · 10 comments · 12 min read

Idea: Red-teaming fellowships
MaxRa · Feb 2, 2022, 10:13 PM
102 points · 12 comments · 3 min read

The motivated reasoning critique of effective altruism
Linch · Sep 14, 2021, 8:43 PM
285 points · 59 comments · 23 min read

Apply for Red Team Challenge [May 7 - June 4]
Cillian_ · Mar 18, 2022, 6:53 PM
92 points · 11 comments · 2 min read

The Future Might Not Be So Great
Jacy · Jun 30, 2022, 1:01 PM
145 points · 118 comments · 34 min read
(www.sentienceinstitute.org)

$100 bounty for the best ideas to red team
Cillian_ · Mar 18, 2022, 6:54 PM
48 points · 33 comments · 2 min read

Malaria vaccines: how confident are we?
Sanjay · Jan 5, 2024, 12:59 AM
62 points · 15 comments · 7 min read

[Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions)
david_reinstein · Apr 28, 2022, 3:49 PM
37 points · 21 comments · 7 min read

Compilation of Profit for Good Redteaming and Responses
Brad West🔸 · Sep 19, 2023, 1:33 PM
41 points · 10 comments · 9 min read

Winners of the EA Criticism and Red Teaming Contest
Lizka · Oct 1, 2022, 1:50 AM
226 points · 41 comments · 19 min read

Potatoes: A Critical Review
Pablo Villalobos · May 10, 2022, 3:27 PM
120 points · 27 comments · 9 min read
(docs.google.com)

Seeking Input for Applying Satellite Tech to Reduce Farmed Fish Suffering
haven · Jul 10, 2024, 6:43 PM
27 points · 4 comments · 3 min read

A philosophical review of Open Philanthropy’s Cause Prioritisation Framework
MichaelPlant · Jul 15, 2022, 8:21 AM
160 points · 11 comments · 29 min read
(www.happierlivesinstitute.org)

A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]
Sarah Weiler · Jun 26, 2022, 10:59 AM
57 points · 2 comments · 21 min read

Can a war cause human extinction? Once again, not on priors
Vasco Grilo🔸 · Jan 25, 2024, 7:56 AM
67 points · 29 comments · 18 min read

Reviews of “Is power-seeking AI an existential risk?”
Joe_Carlsmith · Dec 16, 2021, 8:50 PM
71 points · 4 comments · 1 min read

Questioning the Value of Extinction Risk Reduction
Red Team 8 · Jul 7, 2022, 4:44 AM
61 points · 9 comments · 27 min read

A Red-Team Against the Impact of Small Donations
AppliedDivinityStudies · Nov 24, 2021, 4:03 PM
182 points · 53 comments · 8 min read

Critique of OpenPhil’s macroeconomic policy advocacy
Hauke Hillebrandt · Mar 24, 2022, 10:03 PM
143 points · 39 comments · 24 min read

A dozen doubts about GiveWell’s numbers
JoelMcGuire · Nov 1, 2022, 2:25 AM
134 points · 18 comments · 19 min read
(www.happierlivesinstitute.org)

Independent impressions
MichaelA🔸 · Sep 26, 2021, 6:43 PM
155 points · 8 comments · 1 min read

Announcing a contest: EA Criticism and Red Teaming
Lizka · Jun 1, 2022, 6:58 PM
276 points · 64 comments · 13 min read

The Long Reflection as the Great Stagnation
Larks · Sep 1, 2022, 8:55 PM
43 points · 2 comments · 8 min read

Disagreeables and Assessors: Two Intellectual Archetypes
Ozzie Gooen · Nov 5, 2021, 9:01 AM
91 points · 20 comments · 3 min read

Resource for criticisms and red teaming
Lizka · Jun 1, 2022, 6:58 PM
61 points · 3 comments · 8 min read

A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas
John G. Halstead · Apr 15, 2022, 1:43 PM
176 points · 6 comments · 13 min read

Why Effective Altruists Should Put a Higher Priority on Funding Academic Research
Stuart Buck · Jun 25, 2022, 7:17 PM
118 points · 15 comments · 11 min read

Flimsy Pet Theories, Enormous Initiatives
Ozzie Gooen · Dec 9, 2021, 3:10 PM
212 points · 57 comments · 4 min read

Pre-announcing a contest for critiques and red teaming
Lizka · Mar 25, 2022, 11:52 AM
173 points · 27 comments · 2 min read

Issues with Futarchy
Lizka · Oct 7, 2021, 5:24 PM
63 points · 8 comments · 25 min read

Red Teaming CEA’s Community Building Work
AnonymousEAForumAccount · Sep 1, 2022, 2:42 PM
296 points · 68 comments · 66 min read

A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines
NunoSempere · Aug 16, 2022, 2:44 PM
75 points · 40 comments · 5 min read
(nunosempere.com)

Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast
Jhrosenberg · Mar 26, 2022, 9:22 AM
129 points · 13 comments · 18 min read

My impact assessment of Giving What We Can
Vasco Grilo🔸 · Apr 15, 2023, 6:59 AM
40 points · 33 comments · 9 min read

Against Longtermism: I welcome our robot overlords, and you should too!
MattBall · Jul 2, 2022, 2:05 AM
5 points · 6 comments · 6 min read

The Windfall Clause has a remedies problem
John Bridge 🔸 · May 23, 2022, 10:31 AM
40 points · 0 comments · 17 min read

Red teaming a model for estimating the value of longtermist interventions—A critique of Tarsney’s “The Epistemic Challenge to Longtermism”
Anjay F · Jul 16, 2022, 7:05 PM
21 points · 0 comments · 30 min read

Effective Altruism Risks Perpetuating a Harmful Worldview
Theo Cox · Aug 20, 2022, 1:10 AM
−4 points · 9 comments · 20 min read

Red-teaming Holden Karnofsky’s AI timelines
Vasco Grilo🔸 · Jun 25, 2022, 2:24 PM
58 points · 2 comments · 11 min read

The role of academia in AI Safety
PabloAMC 🔸 · Mar 28, 2022, 12:04 AM
71 points · 19 comments · 3 min read

Red-teaming existential risk from AI
Zed Tarar · Nov 30, 2023, 2:35 PM
30 points · 16 comments · 6 min read

Opinioni indipendenti [Independent impressions]
EA Italy · Jan 18, 2023, 11:21 AM
1 point · 0 comments · 1 min read

[Question] Prizes for EA Red Teaming
Noah Birnbaum · Jul 22, 2024, 9:13 PM
12 points · 0 comments · 1 min read

#176 – The final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models (Nathan Labenz on the 80,000 Hours Podcast)
80000_Hours · Jan 4, 2024, 4:00 PM
15 points · 0 comments · 22 min read

Belonging
Barracuda · Jun 10, 2022, 7:27 AM
2 points · 0 comments · 1 min read

Against AI As An Existential Risk
Noah Birnbaum · Jul 30, 2024, 7:24 PM
6 points · 3 comments · 1 min read
(irrationalitycommunity.substack.com)

Guiding civil servants to Improve Institutional Decision-Making through an ‘Impact Challenge’
Iftekhar · Jul 1, 2022, 1:37 PM
19 points · 0 comments · 8 min read

Concerns/Thoughts over international aid, longtermism and philosophical notes on speaking with Larry Temkin
Ben Yeoh · Jul 27, 2022, 7:51 PM
35 points · 1 comment · 12 min read

Research I’d like to see
Alex Cohen · Dec 17, 2024, 2:08 PM
62 points · 1 comment · 5 min read

The Importance of Intercausal Impacts
Sebastian Joy 樂百善 · Aug 24, 2022, 10:41 AM
61 points · 2 comments · 8 min read