Red teaming

A red team is an independent group that challenges an organization or movement in order to improve it. Red teaming is the practice of using red teams.

History of the term

The term “red teaming” appears to originate in the United States military. A common exercise was to pit an offensive “red team”, representing the enemy, against a defensive “blue team”, representing the U.S. The purpose of the exercise was to identify vulnerabilities and develop effective countermeasures.[1] The term was later extended to cover related practices in other fields, including information security and intelligence analysis.

Red teaming in effective altruism

Within effective altruism, “red teaming” refers to attempts to identify problems or errors in popular or prestigious views held by members of the community, such as views about the value of different causes or organizations.[2]

Related concepts include minimal-trust investigations,[3] epistemic spot-checks,[4] and hypothetical apostasy.[5]

Further reading

Räuker, Max et al. (2022) Idea: Red-teaming fellowships, Effective Altruism Forum, February 2.

Vaintrob, Lizka & Fin Moorhouse (2022) Resource for criticisms and red teaming, Effective Altruism Forum, June 1.

Zhang, Linchuan (2021) Red teaming papers as an EA training exercise?, Effective Altruism Forum, June 22.

Related entries

criticism of effective altruism | epistemology | epistemic deference | tabletop exercises

1. Johnson, Rowland (2015) How your red team penetration testers can help improve your blue team, SC Magazine, August 18.

2. Räuker, Max et al. (2022) Idea: Red-teaming fellowships, Effective Altruism Forum, February 2.

3. Karnofsky, Holden (2021) Minimal-trust investigations, Effective Altruism Forum, November 23.

4. Ravid, Yoav (2020) Epistemic spot check, LessWrong Wiki, August 7.

5. Bostrom, Nick (2009) Write your hypothetical apostasy, Overcoming Bias, February 21.

Minimal-trust investigations

Holden Karnofsky · 23 Nov 2021 18:02 UTC
160 points
9 comments · 12 min read · EA link

Idea: Red-teaming fellowships

MaxRa · 2 Feb 2022 22:13 UTC
102 points
12 comments · 3 min read · EA link

The motivated reasoning critique of effective altruism

Linch · 14 Sep 2021 20:43 UTC
285 points
59 comments · 23 min read · EA link

Apply for Red Team Challenge [May 7 - June 4]

Cillian_ · 18 Mar 2022 18:53 UTC
92 points
11 comments · 2 min read · EA link

The Future Might Not Be So Great

Jacy · 30 Jun 2022 13:01 UTC
142 points
118 comments · 34 min read · EA link
(www.sentienceinstitute.org)

$100 bounty for the best ideas to red team

Cillian_ · 18 Mar 2022 18:54 UTC
48 points
33 comments · 2 min read · EA link

Malaria vaccines: how confident are we?

Sanjay · 5 Jan 2024 0:59 UTC
62 points
15 comments · 7 min read · EA link

[Question] What “pivotal” and useful research … would you like to see assessed? (Bounty for suggestions)

david_reinstein · 28 Apr 2022 15:49 UTC
37 points
21 comments · 7 min read · EA link

Seeking Input for Applying Satellite Tech to Reduce Farmed Fish Suffering

haven · 10 Jul 2024 18:43 UTC
27 points
4 comments · 3 min read · EA link

Winners of the EA Criticism and Red Teaming Contest

Lizka · 1 Oct 2022 1:50 UTC
226 points
41 comments · 19 min read · EA link

Potatoes: A Critical Review

Pablo Villalobos · 10 May 2022 15:27 UTC
120 points
27 comments · 9 min read · EA link
(docs.google.com)

Compilation of Profit for Good Redteaming and Responses

Brad West🔸 · 19 Sep 2023 13:33 UTC
41 points
10 comments · 9 min read · EA link

A philosophical review of Open Philanthropy’s Cause Prioritisation Framework

MichaelPlant · 15 Jul 2022 8:21 UTC
160 points
11 comments · 29 min read · EA link
(www.happierlivesinstitute.org)

A Critique of The Precipice: Chapter 6 - The Risk Landscape [Red Team Challenge]

Sarah Weiler · 26 Jun 2022 10:59 UTC
57 points
2 comments · 21 min read · EA link

Issues with Futarchy

Lizka · 7 Oct 2021 17:24 UTC
63 points
8 comments · 25 min read · EA link

Reviews of “Is power-seeking AI an existential risk?”

Joe_Carlsmith · 16 Dec 2021 20:50 UTC
71 points
4 comments · 1 min read · EA link

Questioning the Value of Extinction Risk Reduction

Red Team 8 · 7 Jul 2022 4:44 UTC
61 points
9 comments · 27 min read · EA link

A Red-Team Against the Impact of Small Donations

AppliedDivinityStudies · 24 Nov 2021 16:03 UTC
182 points
53 comments · 8 min read · EA link

Critique of OpenPhil’s macroeconomic policy advocacy

Hauke Hillebrandt · 24 Mar 2022 22:03 UTC
142 points
39 comments · 24 min read · EA link

A dozen doubts about GiveWell’s numbers

JoelMcGuire · 1 Nov 2022 2:25 UTC
134 points
18 comments · 19 min read · EA link
(www.happierlivesinstitute.org)

Independent impressions

MichaelA🔸 · 26 Sep 2021 18:43 UTC
152 points
7 comments · 1 min read · EA link

Announcing a contest: EA Criticism and Red Teaming

Lizka · 1 Jun 2022 18:58 UTC
276 points
64 comments · 13 min read · EA link

The Long Reflection as the Great Stagnation

Larks · 1 Sep 2022 20:55 UTC
43 points
2 comments · 8 min read · EA link

Disagreeables and Assessors: Two Intellectual Archetypes

Ozzie Gooen · 5 Nov 2021 9:01 UTC
91 points
20 comments · 3 min read · EA link

Resource for criticisms and red teaming

Lizka · 1 Jun 2022 18:58 UTC
60 points
3 comments · 8 min read · EA link

A review of Our Final Warning: Six Degrees of Climate Emergency by Mark Lynas

John G. Halstead · 15 Apr 2022 13:43 UTC
176 points
6 comments · 13 min read · EA link

Why Effective Altruists Should Put a Higher Priority on Funding Academic Research

Stuart Buck · 25 Jun 2022 19:17 UTC
118 points
15 comments · 11 min read · EA link

Flimsy Pet Theories, Enormous Initiatives

Ozzie Gooen · 9 Dec 2021 15:10 UTC
212 points
57 comments · 4 min read · EA link

Pre-announcing a contest for critiques and red teaming

Lizka · 25 Mar 2022 11:52 UTC
173 points
27 comments · 2 min read · EA link

Red Teaming CEA’s Community Building Work

AnonymousEAForumAccount · 1 Sep 2022 14:42 UTC
296 points
68 comments · 66 min read · EA link

A concern about the “evolutionary anchor” of Ajeya Cotra’s report on AI timelines.

NunoSempere · 16 Aug 2022 14:44 UTC
75 points
40 comments · 5 min read · EA link
(nunosempere.com)

Nuclear Expert Comment on Samotsvety Nuclear Risk Forecast

Jhrosenberg · 26 Mar 2022 9:22 UTC
129 points
13 comments · 18 min read · EA link

Can a war cause human extinction? Once again, not on priors

Vasco Grilo🔸 · 25 Jan 2024 7:56 UTC
67 points
29 comments · 18 min read · EA link

Concerns/Thoughts over international aid, longtermism and philosophical notes on speaking with Larry Temkin.

Ben Yeoh · 27 Jul 2022 19:51 UTC
35 points
1 comment · 12 min read · EA link

The Importance of Intercausal Impacts

Sebastian Joy 樂百善 · 24 Aug 2022 10:41 UTC
61 points
2 comments · 8 min read · EA link

Against Longtermism: I welcome our robot overlords, and you should too!

MattBall · 2 Jul 2022 2:05 UTC
5 points
6 comments · 6 min read · EA link

The Windfall Clause has a remedies problem

John Bridge 🔸 · 23 May 2022 10:31 UTC
40 points
0 comments · 17 min read · EA link

Red teaming a model for estimating the value of longtermist interventions—A critique of Tarsney’s “The Epistemic Challenge to Longtermism”

Anjay F · 16 Jul 2022 19:05 UTC
21 points
0 comments · 30 min read · EA link

Red-teaming Holden Karnofsky’s AI timelines

Vasco Grilo🔸 · 25 Jun 2022 14:24 UTC
58 points
2 comments · 11 min read · EA link

The role of academia in AI Safety.

PabloAMC 🔸 · 28 Mar 2022 0:04 UTC
71 points
19 comments · 3 min read · EA link

Belonging

Barracuda · 10 Jun 2022 7:27 UTC
2 points
0 comments · 1 min read · EA link

Red-teaming existential risk from AI

Zed Tarar · 30 Nov 2023 14:35 UTC
30 points
16 comments · 6 min read · EA link

Opinioni indipendenti

EA Italy · 18 Jan 2023 11:21 UTC
1 point
0 comments · 1 min read · EA link

Guiding civil servants to Improve Institutional Decision-Making through an ‘Impact Challenge’

Iftekhar · 1 Jul 2022 13:37 UTC
19 points
0 comments · 8 min read · EA link

[Question] Prizes for EA Red Teaming

Daniel Birnbaum · 22 Jul 2024 21:13 UTC
12 points
0 comments · 1 min read · EA link

#176 – The final push for AGI, understanding OpenAI’s leadership drama, and red-teaming frontier models (Nathan Labenz on the 80,000 Hours Podcast)

80000_Hours · 4 Jan 2024 16:00 UTC
15 points
0 comments · 22 min read · EA link

Effective Altruism Risks Perpetuating a Harmful Worldview

Theo Cox · 20 Aug 2022 1:10 UTC
−4 points
9 comments · 20 min read · EA link

Against AI As An Existential Risk

Daniel Birnbaum · 30 Jul 2024 19:24 UTC
6 points
3 comments · 1 min read · EA link
(irrationalitycommunity.substack.com)