Announcing a contest: EA Criticism and Red Teaming
Update from October 2022: This contest has wrapped up. You can see the winners of the contest here.
Introduction
tl;dr: We’re running a writing contest for critically engaging with theory or work in effective altruism (EA).
Submissions can be in a range of formats (from fact-checking to philosophical critiques or major project evaluations); and can focus on a range of subject matters (from assessing empirical or normative claims to evaluating organizations and practices).
We plan on distributing $100,000, and we may end up awarding more than this amount if we get many excellent submissions.
The deadline is September 1, 2022. You can find the submission instructions below. No formal or significant affiliation with effective altruism is required to enter the contest.
We are: Lizka Vaintrob (the Content Specialist at the Centre for Effective Altruism), Fin Moorhouse (researcher at the Future of Humanity Institute), and Joshua Teperowski Monrad (biosecurity program associate at Effective Giving). The contest is funded via the FTX Future Fund Regranting Program, with organizational support from the Centre for Effective Altruism.
We ‘pre-announced’ this contest in March.
The rest of this post gives more details, outlines the kinds of critical work we think are especially valuable, and explains our rationale. We’re also sharing a companion resource for criticisms and red teams.
How to apply
Submit by posting on the EA Forum[1] and tagging the post[2] with the contest’s tag, or by filling out this form.
If you post on the Forum, you don’t need to do anything except tag your post[2] with the “Criticism and Red Teaming Contest” topic, and we’ll consider your post for the contest. If you’d prefer to post your writing outside the Forum, you can submit it via this form — we’d still encourage you to cross-post it to the Forum (although please be mindful of copyright issues).
We also encourage you to refer other people’s work to the contest if you think more people should know about it. To refer someone else’s work, please submit it via this form. If it wins, we may reward you for this — please see an explanation below.
The deadline is September 1, 2022.
Please contact us with any questions. You can also comment here.
Prizes
We have $100,000 currently set aside for prizes, which we plan on fully distributing.
Prizes will fall under three main tiers:
Winners: $20,000
Runners up: $5,000 each
Honourable mentions: $1,000 each
In addition, we may award a prize of $100,000 for outstanding work that looks likely to cause a very significant course adjustment in effective altruism.
Therefore, we’re prepared to award (perhaps significantly) more than $100,000 if we’re impressed by the quality and volume of submissions.
We’re also offering a bounty for referring winning submissions: if you’re the first person to refer a winning submission (and the author never entered the contest themselves), you’ll get a referral bounty of 5% of the award.
We will also consider helping you find proactive funding for your work if you require the security of guaranteed financial support to enable a large project (though we may deduct any proactive funding you received from prize money if you win a prize). See the FAQ for more details.
Submissions must be posted or submitted no later than 11:59 pm BST on September 1st, and we’ll announce winners by the end of September.
Criteria
Overall, we want to reward critical work according to a question like: “to what extent did this cause me to change my mind about something important?” — where “change my mind” can mean “change my best guess about whether some claim is true”, or just “become significantly more or less confident in this important thing.”
Below are some virtues of the kind of work we expect to be most valuable. We’ll look out for these features in the judging process, but we’re aware it can be difficult or impossible to live up to all of them:
Critical. The piece takes a critical or questioning stance towards some aspect of EA theory or practice. Note that this does not mean that your conclusion must end up disagreeing with what you are criticizing; it is entirely possible to approach some work critically, check the sources, note some potential weaknesses, and conclude that the original was broadly correct.
Important. The issues discussed really matter for our ability to do the most good as a movement.
Constructive and action-relevant. Where possible, we would be most interested in arguments that recommend some specific, realistic action or change of belief. It’s fine to just point out where something is going wrong; it’s even better to be constructive, by suggesting a concrete improvement.
Transparent and legible. We encourage transparency about your process: how much expertise do you have? How confident are you about the claims you’re making? What would change your mind? If your work includes data, how were they collected? Relatedly, we encourage epistemic legibility: the property of being easy to argue with, separate from being correct.
Aware. Take some time to check that you’re not missing an existing response to your argument. If responses do exist, mention (or engage with) them.
Novel. The piece presents new arguments, or otherwise presents familiar ideas in a new way. Novelty is great but not always necessary — it’s often still valuable to distill or “translate” existing criticisms.
Focused. Critical work is often (but not always) most useful when it is focused on a small number of arguments and a small number of objects. We’d love to see (and we’re likely to reward) work that engages with specific texts, strategic choices, or claims.
We don’t expect every winning piece to do well on every one of these criteria, but we do think each of them can help you most effectively change people’s minds with your work.
We also want to reward clarity of writing, avoiding ‘punching down’, awareness of context, and a scout mindset. We don’t want to encourage personal attacks, or diatribes that are likely to produce much more heat than light. And we hope that subject-matter experts who don’t typically associate with EA find out about this, and share insights we haven’t yet heard.
What to submit
We’re looking for critical work that you think is important or useful for EA. That’s a broad remit, so we’ve suggested some topics and kinds of critiques below.
If you’re looking for more detail, we’ve collaborated on a separate post that collects resources for red teaming and criticisms, including guides to different kinds of criticisms, and examples. If you’re interested in participating in this contest, we highly recommend that you take a look. (We’d also love help updating and improving it.)
It’s helpful — but not required — to also suggest 1–3 people you think most need to heed your critique. For many topics, this nomination is better done privately (contact us, or submit through the form). We’ll send it their way where possible. (If you don’t know who needs to see it most, we’ll work it out.)
Formats
You might consider framing your submission as one of the following:
Minimal trust investigation — A minimal trust investigation involves suspending your trust in others’ judgments, and trying to understand the case for and against some claim yourself. Suspending trust does not mean determining in advance that you’ll end up disagreeing.
Red teaming — ‘Red teaming’ is the practice of “subjecting [...] plans, programmes, ideas and assumptions to rigorous analysis and challenge”. You’re setting out to find the strongest reasonable case against something, whatever you actually think about it (and you should flag that this is what you’re doing).
Fact checking and chasing citation trails — If you notice claims that seem crucial, but whose origin is unclear, you could track down the source, and evaluate its legitimacy.
Adversarial collaboration — An adversarial collaboration is where people with opposing views work together to clarify their disagreements.
Clarifying confusions — You might simply be confused about some aspect of EA, rather than confidently critical. You could try getting clear on what you’re confused about, and why.
Evaluating organizations — including their (implicit) theory of change, key claims, and their track record; and suggesting concrete changes where relevant.
Steelmanning and ‘translating’ existing criticism for an EA audience — We’d love to see work that succinctly explains existing criticisms and constructs the strongest versions of them (‘steelmanning’). You might consider doing this in collaboration with a domain expert who does not consider themself part of the EA community.
Again, for more detail on topic ideas, kinds of critiques, and examples: visit our longer post with resources for critiques and red teams.
We don’t want to give an analogous list for topic ideas, because any list is necessarily going to leave things out. However, you might take a look at Joshua’s post outlining four categories of effective altruism critiques: normative and moral questions, empirical questions, institutions & organizations, and social norms & practices.
Browsing this Forum (especially curated lists like the Decade Review prizewinners, the EA Wiki, and the EA Handbook) could be a good way to get ideas if you are new to effective altruism.
If you’re unsure whether something you plan on writing could count for this contest, feel free to ask us.
Additional resources
We’ve compiled a companion post, in which we’ve collected some resources for criticisms and red teaming.
We’re also tentatively planning on running (or helping with) several workshops on criticisms and red teaming, which will be open to anyone who is interested, including people who are new to effective altruism. We hope that the first two will be in June. If you’d like to hear about dates when they’re decided, you can fill out this form.
The judging panel
The judging panel is:
Rebecca Kagan, J.D. Candidate at Harvard Law School and formerly an External Affairs Specialist at Georgetown’s Center for Security and Emerging Technology (CSET)
Jessica McCurdy, Groups Associate at the Centre for Effective Altruism
Zachary Robinson, Chief of Staff at Open Philanthropy
Applied Divinity Studies, independent blogger
Charlotte Siegmann, Research Fellow at Longview Philanthropy; Predoctoral Research Fellow in Economics at the Global Priorities Institute
TJ, Research Scholar at the Future of Humanity Institute
Owen Cotton-Barratt, independent researcher and board member of the Centre for Effective Altruism
Gavin Leech, founder of Arb Research, Strategic Advisor to Emergent Ventures panel on AI Talent, AI PhD student
Nicole Ross, Head of Community Health at the Centre for Effective Altruism
Xuan (Tan Zhi Xuan), AI PhD student at MIT, Board Member of EA Singapore
No one on the judging panel will be able to “veto” winners, and every submission will be read by at least two people. If submissions are technical and outside of the panelists’ fields of expertise, we will consult domain experts.
If we get many submissions or if we find that the current panel doesn’t have enough bandwidth, we may invite more people to the panel.
Rationale
Why do we think this matters? In short, we think there are some reasons to expect good criticism to be undersupplied relative to its real value. And that matters: as EA grows, it’s going to become increasingly important that we scrutinize the ideas and assumptions behind key decisions — and that we welcome outside experts to do the same.
Encouraging criticism is also a way to encourage a culture of independent thinking, and openness to criticism and scrutiny within the EA community. Part of what made and continues to make EA so special is its epistemic culture: a willingness to question and be questioned, and freedom to take contrarian or unusual ideas seriously. As EA continues to grow, one failure mode we anticipate is that this culture may give way to a culture of over-deference.
We also really care about raising the average quality of criticism. Perhaps you can recall some criticisms of effective altruism that you think were made in bad faith, or otherwise misrepresented their target in a mostly unhelpful and frustrating way. If we don’t make an effort to encourage more careful, well-informed critical work, then we may have less reason to complain about the harms that poor-quality work can cause, such as by misinforming people who are learning about effective altruism. Crucially, we’d also miss out on the real benefits of higher-quality, good-faith criticism.
In his opening talk for EA Global this year, Will MacAskill considered how a major risk to the success of effective altruism is the risk of degrading its quality of thinking: “if you look at other social movements, you get this club where there are certain beliefs that everyone holds, and it becomes an indicator of in-group mentality; and that can get strengthened if it’s the case that if you want to get funding and achieve very big things you have to believe certain things — I think that would be very bad indeed. Looking at other social movements should make us worried about that as a failure mode for us as well.”
It’s also possible that some of the most useful critical work goes relatively unrewarded because it might be less attention-grabbing or narrow in its conclusions. Conducting really high-quality criticism is sometimes thankless work: as the blogger Dynomight points out, there’s rarely much glory in fact-checking someone else’s work. We want to set up some incentives to attract this kind of work, as well as more broadly attention-grabbing work.
Ultimately, critiques have an impact by bringing about actual changes. The ultimate goal of this contest is to facilitate those positive changes, not just to spot what we’re currently getting wrong.
In sum, we think and hope:
Criticism will help us form truer beliefs, and that will help people with the project of doing good effectively. People and institutions in effective altruism might be wrong in significant ways — we want to catch that and correct our course.
This is especially important in the non-profit context, since it lacks many of the signals in the for-profit world (like prices). For-profit companies have a strong signal of success: if they fail to make a profit, they eventually fail. One insight of effective altruism is that there are weaker pressures for nonprofits to be effective — to achieve the goals that really matter — because their ability to fundraise isn’t necessarily tied to their effectiveness. Charity evaluators like GiveWell do an excellent job at evaluating nonprofits, but we should also try to be comparably rigorous and impartial in assessing EA organizations and projects, including in areas where outputs are harder to measure. Where natural feedback loops don’t exist, it’s our responsibility to try making them!
It’s also especially important for effective altruism, given that so many of the ideas are relatively new and untested. We think this is especially true of longtermist work.
Stress-testing important ideas is crucial even when the result is that the ideas are confirmed; this allows us to rely more freely on the ideas.
We want to sustain a culture of intellectual openness, open disagreement, and critical thinking. We hope that this contest will contribute to reinforcing that culture.
Highlighting especially good examples of criticism may create more templates for future critical work, and may make the broader community more appreciative of critical work.
We also think that people in the effective altruism network tend to hear more from other people in the network, and hope that this contest might bring in outside experts and voices. (You can see more discussion of this phenomenon in “The motivated reasoning critique of effective altruism”.)
We want to break patterns of pluralistic ignorance where people underrate how sceptical or uncertain others (including ‘experts’) are about some claim.
Finally, we want to frame this contest as one step towards generating high-quality criticism, and not the final one. For instance, we’re interested in following up with winning submissions, such as by meeting with winning entrants to discuss ways to translate their work into concrete changes and communicate it to the relevant stakeholders.
What this is not about
Note that critical work is not automatically valuable just by virtue of being critical: it can be attention-grabbing in a negative way. It can be stressful and time-consuming to engage with bad-faith or ill-considered criticism. We have a responsibility to be especially careful here.
This contest isn’t about making EA look open-minded or self-scrutinizing in a performative way: we want to award work that actually strikes us as useful, even if it isn’t likely to be especially popular or legible for a general audience.
We’re not going to privilege arguments for more caution about projects over arguments for urgency or haste. Scrutinizing projects in their early stages is a good way to avoid errors of commission; but errors of omission (not going ahead with an ambitious project because of an unjustified amount of risk aversion, or oversensitivity to downsides over upsides) can be just as bad.
Similarly, we don’t want this initiative to only result in writing that one-directionally worries about EA ideas or projects being too ‘weird’ or too different from some consensus or intuitions. We’re just as interested to hear why some aspect of EA is being insufficiently weird — perhaps not taking certain ideas seriously enough. Relatedly, this isn’t just about being more epistemically modest: we are likely being both overconfident in some spots, and overly modest in others. What matters is being well calibrated in our beliefs!
We would also caution against criticizing the actions or questioning the motivations of a specific individual, especially without first asking them. We urge you to focus on the ideas or ‘artefacts’ individuals produce, without speculating about personal motivations or character — this is rarely helpful.
Contact us
Email criticism-contest@effectivealtruism.com, message any of the authors of this post via the Forum, or leave a comment on this post.
Q&A
Submissions and how they’ll be judged
Can I submit work I’ve already done? Yes, if it’s recent. We’re accepting posts from the date of our pre-announcement (March 25, 2022) onwards.
Can I submit something that I got funding for already? Yes. Let us know if you have specific concerns.
Can I refer another person’s work? Yes. And if that person’s work wins a prize (and the author didn’t submit it themselves, and you’re the first person to refer the work), we’ll also reward you with a commission (5% of the prize). We’d love to discover work from outside the EA community that could be relevant for effective altruism. Submit referrals via this form.
What if I want to work on a large project for this contest that I can’t afford to carry out on my own time? Contact us. We can’t guarantee anything, but we’d like to help enable your work, by pointing you to sources of funding in effective altruism, and potentially arranging direct financial support where necessary. If we (the organizers of this contest) directly fund your work in advance, we’ll deduct whatever amount you received in advance from any potential prize that you win.
I have a complaint or criticism about an organization or individual, but it’s not something that’s appropriate to share publicly. You might consider contacting the CEA Community Health Team, who can advise on the next steps, including acting as an intermediary. You can also send them an anonymous message.
Can I submit anonymously? Yes. You can make an anonymous account on the Forum, or you can use this form to submit without posting to the Forum.
Do I have to already be involved in effective altruism to submit something? No, not at all. We’re actively excited to bring in external ideas and expertise. If you’re new to the Forum, the Wiki could be a good place to start to check for what has already been written. You’re welcome to make broad criticisms of effective altruism, but focused critiques that draw on your area(s) of expertise could stand an especially good chance of being entirely novel.
I’d love to hear what [person who’s not engaged with effective altruism] would have to say about [some aspect of effective altruism]. How can I make that happen? If you know this person, we encourage you to reach out to them! If you’re unsure or uncomfortable about contacting them directly, let us know, and we can try getting in touch.
Some of the panellists belong to organizations I’d like to criticize. Isn’t that an issue? All our panellists are committed to evaluating your work on its own merit — being associated with an org or project you are criticizing should not and will not count as a reason to downgrade your work. Panellists will recuse themselves if they (or we) feel that a conflict of interest will inhibit their ability to fairly evaluate a particular submission. If you’re still concerned about this or would like to request that specific panellists be recused, feel free to contact us.
What counts as “EA”? We have in mind the ideas, institutions, projects, and communities associated with effective altruism. You can learn more at effectivealtruism.org and here on the Forum.
Does the criticism or red teaming have to come to the conclusion that the original work was wrong? No. We’re very happy to award prizes to work of the form: “I checked the arguments and sources in this text. In fact, they check out. Here are my notes.”
Does my submission need to fulfill all the criteria outlined above? No. We understand that some formats make it difficult or impossible to satisfy all the criteria, and we don’t want that to be a barrier to submitting. At the same time, we do think each of the criteria is a good indicator of the kind of work we’d like to see.
About the contest
How does this relate to Training for Good’s ‘Red Team challenge’? The Red Team Challenge is not this prize, and this prize is not the Red Team Challenge (RTC). The RTC is a program run by Training for Good which provides training in red teaming best practices and then pairs small teams of 2-4 people together to critique a particular claim and publish the results. We are very excited about the results of the programme being submitted to this contest! So this contest is a complement to the Red Team Challenge, rather than a substitute. Training for Good may also collaborate with us on workshops and [other resources].
Where’s the money coming from? The prizes will be awarded via the FTX Future Fund Regranting Program. The Centre for Effective Altruism is providing operational support (like coordination between judges). Note that the EA Forum is not sponsoring this prize, and isn’t liable for it.
Doesn’t this penalize the people whose work is getting criticized? We want to encourage a norm where having your work fairly criticized is great news: an indication that it was trying to answer an important question. We want to encourage a sense of criticism being part of the joint enterprise to figure out the right answers to important questions. However, we are aware that being criticized is not always enjoyable, and some criticism is made in bad faith. If you’re concerned about being the subject of bad-faith criticism, let us know.
Does this mean that you think that non-critical work is less valuable than critical work? No. We just think that high-quality critical work is often under-rewarded and under-supplied — like many other kinds of non-critical work!
Other
I have another question that isn’t answered in this post. Leave a comment if you suspect others might have the same question, and we’ll try to answer it here. Otherwise, feel free to contact us.
We’re extremely grateful to everyone who helped us kick this off, including the many people who gave feedback following our pre-announcement of the contest.
[1] You can find instructions for that here.
[2] Instructions for how to tag a post are here.
A few questions, suggestions and concerns.
Firstly, I expect the people whose criticisms I’d most want to hear to be very busy. I hope the contest will consider lower-effort but insightful or impactful submissions to account for this?
Secondly, I’d expect people with the most valuable critiques to come from outside EA, since I would expect to find blindspots in the particular ways of thinking, arguing, and knowing that EA uses. What will the panelists do to ensure they can access pieces using a very different style of argument? Have you considered having non-EA panelists to aid with this?
Thirdly, criticisms from outside of EA might also contain mistakes about the movement but nonetheless make valid arguments. I hope this can be taken into account and such pieces not just dismissed.
Fourthly, I would also expect criticisms from people who have been heavily involved in EA over the years to be valuable but, if drawing on their experience, hard to write fully anonymously. What reassurances can you offer, and what safeguards do you have in place, to ensure pieces will be fairly assessed, beyond trusting the panelists and administrators? What plans do you have to help prevent and mitigate backlash, especially given that many decisions within EA are network-based, and thus even with the best of intentions criticism is likely to have some costs to relationships?
Replying in personal capacity:
Yes, very short submissions count. And so should “low effort” posts, in the sense of “I have a criticism I’ve thought through, but I don’t have time to put together a meticulous writeup, so I can either write something short/scrappy, or nothing at all.” I’d much rather see unpolished ideas than nothing at all.
Thanks, I think this is important.
We (co-posters) are proactively sharing this contest with non-EA circles (e.g.), and others should feel welcome and encouraged to do the same.
Note the incentives for referring posts from outside the Forum. This can and should include writing that was not written with this contest in mind. It could also include writing aimed at some idea associated with EA that doesn’t itself mention “effective altruism”.
It obviously shouldn’t be a requirement that submissions use EA jargon.
I do think writing a post roughly in line with the Forum guidelines (e.g. trying to be clear and transparent in your reasoning) means the post will be more likely to get understood and acted on. As such, I do think it makes sense to encourage this manner of writing where possible, but it’s not a hard requirement.
To this end, one idea might be to speak to someone who is more ‘fluent’ in modes of thinking associated with effective altruism, and to frame the submission as a dialogue or collaboration.
But that shouldn’t be a requirement either. In cases where the style of argument is unfamiliar, but the argument itself seems potentially really good, we’ll make the effort — such as by reaching out to the author for clarifications or a call. I hope there are few really important points that cannot be communicated through just having a conversation!
I’m curious which non-EA judges you would have liked to see! We went with EA judges (i) to credibly show that representatives for big EA stakeholders are invested in this, and (ii) because people with a lot of context on specific parts of EA seem best placed to spot which critiques are most underrated. I’m also not confident that every member of the panel would strongly identify as an “effective altruist”, though I appreciate connection to EA comes in degrees.
Yes. We’ll try to be charitable in looking for important insights, and forgiving of inaccuracies from missing context where they don’t affect the main argument.
That said, it does seem straightforwardly useful to avoid factual errors that can easily be resolved with public information, because that’s good practice in general.
My guess is that the best plan is going to be very context specific. If you have concerns in this direction, you can email criticism-contest@effectivealtruism.com, and we will consider steps to help, such as by liaising with the community health team at CEA. I can also imagine cases where you just want to communicate a criticism privately and directly to someone. Let us know, and we can arrange for that to happen also (“we” meaning myself, Lizka, or Joshua).
I can’t speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread if it was good enough. But I’m hesitant to say “low effort is fine”, because I’m not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I’m excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On “style of argument” I’ll just point to my own backlog of very non-EA writing on mostly non-EA topics.
3. I wouldn’t hold it against a submission as a personal matter, and wouldn’t dismiss it out of hand, but it’s definitely a negative if there are substantive mistakes that could have been avoided using only public information.
A big part of my getting into EA was this debate between Oxford lefties and the baby 80k staff. The socialist/deontological case was weaker. But the points that Mills makes about systemic change and the streetlight fallacy describe the two biggest ways EA practice has changed in the last decade. We moved in his direction, despite him.
Maybe the lesson is: “even if you don’t win, you might shape the movement”
I feel that external criticism of EA was generally stronger back then. Perhaps this is just a reflection of broader recent cultural trends, which have degraded the quality of public discourse.
Here is a useful steelman of Mills’ critique, courtesy of ‘pragmatist’ (note that “earning to give” used to be known as “professional philanthropy”):
Maybe because EA was tiny and elite then, so only a true intellectual would bother to criticise.
Back in my day my enemies did instrumental harm like a rational person.
At the same time, if the shift in EA practice as claimed by you is indeed real (which I think it is), then it would also seem that EA has failed to do adequate mistake acknowledgement with respect to past critiques. This might hold some insights as to why certain forms of criticisms are by-default disincentivized.
(I do hope that this contest will make a genuine attempt to correct that disincentive landscape.)
Sounds right
The problem is, we’re not an agent and so no one makes The decision to shift and so no one is noticeably responsible for acknowledging credit and blame. But it’s still fair to want it.
One traditional solution
I also suspect that making a big deal about the winners would be a good thing. For example, if the prize was awarded to the winner on the main stage at an EA Global, with a fireside chat, that’d further encourage good-faith criticism and demonstrate that we really care about it.
Thank you so much for your work on this, I’m excited to see what comes out of it.
What percentage of the people on the panel are longtermists? It seems, at first glance, that almost everyone is, or at least working in a field/org that strongly implies they are. If so, isn’t this a problem for the impartiality of the results? Even if not, how is an independent outsider (like the people making submissions) supposed to believe that?
This is likely to have the opposite effect; it will reinforce the current thinking in EA rather than challenge it, while monetarily rewarding people for parroting back the status quo.
I sympathise with this and generally think that EA should take conflicts of interest more seriously.
That said, I think this is subtly the wrong question: what we really want is, “how rational are the judges?” How often did they change their mind in response to arguments of various kinds from various places of various tones?
Can we say anything to convince you of that? Maybe.
Anyway: Most days I feel like more of a “holy shit x-risk” guy than a strong longtermist. I briefly worked in international development, was a socialist, a feminist, a vegan, an e2g, etc, etc. I took and liked a bunch of classes on weird things like Nietzsche, Derrida, Bourdieu. My comments on here are a good sample of me on my best behaviour.
The crucial complementary question is “what percentage of people on the panel are neartermists?”
FWIW, I have previously written about animal ethics, interviewed Open Phil’s neartermist co-CEO, and am personally donating to neartermist causes.
Just a quick update: we got more submissions than we were expecting, and a number of the panelists are low-capacity right now. We’re still targeting the end-of-September deadline, but there’s a chance that we’ll get delayed by a week or two.
I apologize in advance if that ends up happening.
How do EA anchor institutions plan to operationalize changes based on these critiques? There seems to be a bit of a pattern in some that I’ve read where people point out problems and then nothing changes.
This is a really good question, and I’m curious whether the contest organizers have anything planned. I’d love to see some sort of after the fact analysis of whether this contest led to meaningful changes or whether it looks more like kabuki theater with hindsight. I’d be interested in looking at this question from multiple perspectives, e.g. having the largest EA organizations self-report whether they’ve updated in any way, and asking authors of contest contributions (or a subset of prize winners and/or posts that cleared a certain karma threshold) whether they think their concerns have been addressed.
I would think some sort of retrospective evaluation would be an important part of deciding whether or not to run another Red Teaming contest in the future.
Pablo is quoting a 10-year-old comment; the 80k article you link was published in 2020.
The Less Wrong posts Politics as Charity from 2010 and Voting is like donating thousands of dollars to charity from November 2012 have similar analyses to the 2020 80k article.
I’m interested in fleshing out “what you’re looking for”; do you have some examples of things written in the past which changed your minds, which you would have awarded prizes to?
For example, I thought about my old comment on patient long-termism, which observes that in order to say “I’m waiting to give later” as a complete strategy you need to identify the conditions under which you would stop waiting (as otherwise, your strategy is to give never). On the one hand, it feels “too short” to be considered, but on the other hand, it seems long enough to convey its point (at least, embedded in context as it was), and so any additional length would be ‘more cost without benefit’.
Random personal examples:
This won the community’s award for post of the decade. Its disagreement with EA feels half-fundamental; a sweeping change to implementation details and some methods.
This was much-needed and pretty damning. About twice as long as it needed to be though.
This old debate looks good in hindsight
The initial patient longtermist posts shook me up a lot.
Robbie’s anons were really good
This is on the small end of important, but still rich and additive.
This added momentum to the great intangibles vibe shift of 2016-8
This was influential, bizarrely necessary to correct a community bubble which burned a lot of time and mental health. But hardly fundamental.
Can’t remember where it was, a Progress Studies bit about how basic science looks bad on a naive cost-benefit view but has to date clearly been the fount of utility
EA is (was?) ignoring criticism
I like your comment and would’ve taken it seriously, but this contest is only accepting things written after March 2022. Here’s a form for older stuff (no cash yet sorry).
When is the exact deadline? Like… the 1st September by UTC? Or the 1st of September by some American timezone?
We didn’t specify when we posted the announcement, so let’s be as generous as possible and say “Anywhere on Earth.” (Here’s a live clock for AoE time.)
Sorry to nitpick, but what time on 1st September AoE? 00:00 or 23:59? “As generous as possible” would suggest 23:59, but granted it feels like that might be taking the piss a little.
11:59 pm AoE on September 1st.
It’s BST in the announcement post, but I’ve messed up in this comment thread (I missed the time while skimming) and now commit to AoE. Apologies for the confusion, folks!
From a different comment thread:
I’d be personally grateful (and grateful in my Forum role) if people didn’t wait until the last minute to post their submissions (but last-minute submissions won’t be penalized in the scoring). Besides other problems, posting last-minute doesn’t allow wiggle room for things to go wrong.
And as an FYI, we’re not going to be accepting any late submissions.
I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “Winners” vs “runners up” vs “honorable mentions”? I’m confused what the criteria for differentiating categories are. Also, are there any limits on how many submissions can fall into each category?
Just an appreciation comment: I think this post was very well written and handled tricky questions well, especially the Q&A section.
And this seems great to highlight:
As this page comes up first in a Google search for the contest, I’d like to suggest linking the results at the beginning or end of this article now.
Thanks for the suggestion! I just added a note.
It’s possible I missed it but I didn’t see anything stating whether multiple submissions from one author are allowed, I assume they are though?
Don’t see why not, as long as it’s not salami sliced.
Makes sense, thanks!
Is co-authorship permitted? Apologies if I missed this in the post!
It’s permitted, yes!
The team of coauthors who write the winning submission will get the prize, and can share it as the members see fit. A good default might be to just split the prize evenly, and if you’re collaborating on something that might win a prize that you think should be distributed differently, I’d recommend that you agree on this in advance.
(No need to apologize. I don’t think we discussed co-authorship anywhere in the post. I’m now thinking we should consider adding it to the Q&A section, so thank you for bringing it up!)
Thanks for putting this contest together! Is there a comprehensive list of major EA projects?
Best I can think of is looking for the announcement posts inside each of these tags
https://forum.effectivealtruism.org/topics/all
Do the new SBF revelations cause any reconsideration of this contest?
The end of September 1st, right?
There is a section in the article that says:
(I nearly missed this as well)
When re-skimming the announcement post that I myself co-wrote, I missed this too, and have now committed to being as generous as possible: “Anywhere on Earth” (here’s a live clock for AoE time). So it’s 11:59 pm AoE on September 1st.
I’d be personally grateful (and grateful in my Forum role) if people didn’t wait until the last minute to post their submissions (but last-minute submissions won’t be penalized in the scoring). Besides other problems, posting last-minute doesn’t allow wiggle room for things to go wrong.
And as an FYI, we’re not going to be accepting any late submissions.
Question—how did you select judges for your contest? How did you balance expertise with diversity?
Thanks!
One issue is that networked and well-connected people may have greater access to pre-publication criticism in the form of Google Doc comments, and getting Google Doc comments seems like a fairly robust strategy for improving the quality of an essay. If the awards simply go to the best essays, then we may ossify some dynamics around being networked and well connected, or fail to recognize people from outside of our ingroup.
Can we run a formal critique of the criticism contest after I find out that my submissions didn’t win? I don’t have a pile of cash though I do have a lot of extra special bonus points for people.
Hello. Would length be an issue? For instance, would a highly focused criticism of, say, 7,000 to 10,000 words count?
I’m curious to hear more about how critiques have been processed historically by the EA movement. Shortform post here: https://forum.effectivealtruism.org/posts/boYH7XH4xE9iugxWi/tyleralterman-s-shortform?commentId=RJYzym2mwrnXP9amn
Can someone post something and then re-post a better version that takes into account all of the feedback they got in the comments? (or should early versions not be tagged with the contest tag?)
Motivation for this question: trying to work out a low effort way for my smart[1] non-EA friends to
1) post their thoughts in a way that feels relatively low-stakes but still has a clear upside; and
2) give them the option to iterate on their ideas in the coming months based on anything that they find thought-provoking in the initial response.
I have the good fortune of often being the least intelligent person in the room and I feel I should be making better use of this superpower 💪🏼
It’s probably extremely hard to critique people who have spent 10 years steel-manning their assumptions[1] without being able to go back and forth to build up any butterfly ideas, even if there is a great critique out there.
this feels related
(and I also am obviously not going to be nearly as good an intellectual sparring partner as the entire EA community collectively would be so it seems better to develop ideas in public than in private)
I’d be happy to see this kind of process, and don’t think it’s against the rules of the contest. You might not want to tag early versions with the contest tag if you don’t expect them to win and don’t think panelists should bother voting on them, but tagging the early versions wouldn’t count against you for the final version.
On a different note (taking off my contest-organizer hat, putting on my Forum hat): I think people should feel free to post butterfly ideas with the idea that they will develop them further. The Forum exists in part for this kind of communal idea development. (Of course, this isn’t the best approach for certain kinds of idea development. In particular, it might make sense to do some basic research on the Forum before posting certain questions or starting to write something long on a topic you’re very unsure about.)
Hello, I have written a post in response to this contest but it doesn’t appear to be visible for whatever reason—net downvotes perhaps? Here is a link in case anyone is interested: https://forum.effectivealtruism.org/posts/bep6LhLcKqtEj3eLs/belonging
It’s visible but well off the front page without scrolling or pressing “more posts”.
Basically, there’s limited space and posts with low interest or “low quality” will fall off (I haven’t read your post, this isn’t judgement).
Even without positive votes, your post would have been visible for a few hours to a day. Usually, forum members will upvote posts they think deserve to be on the front page. You might not have gotten any votes.
I guess this is unfair or path dependent but basically there’s limited space and no better scheme has been clearly proposed (keeping new posts higher comes at the expense of older highly voted posts for example).
Will you consider all submissions together post 1 September, or on an ad hoc basis as and when they are received? Is there any advantage or disadvantage to posting early? I am working on something currently but am wary of submitting it early and it falling to the back of people’s minds by the time the decisions are made in September.
Are people encouraged to share this opportunity with non-EA friends and in non-EA circles? If so, maybe consider making this clear in the post?
I’m currently writing a sequence exploring the legal viability of the Windfall Clause in key jurisdictions for AI development. It isn’t strictly a red-team or a fact-checking exercise, but one of my aims in writing the sequence is to critically evaluate of the Clause as a piece of longtermist policy.
If I’d like to participate, would this sort of thing be eligible? And should I submit the sequence as a whole or just the most critical posts?
Sounds to me like that would count! Perhaps you could submit the entire sequence but highlight the critical posts.
Maybe interesting. A friend writing a draft asked me for some posts for background.
Here are posts that came to mind from the top of my head (do suggest posts I missed):
Blindspots:
https://forum.effectivealtruism.org/posts/DxfpGi9hwvwLCf5iQ/objections-to-value-alignment-between-effective-altruists
https://forum.effectivealtruism.org/posts/LJwGdex4nn76iA8xy/some-blindspots-in-rationality-and-effective-altruism
https://forum.effectivealtruism.org/posts/pxALB46SEkwNbfiNS/the-motivated-reasoning-critique-of-effective-altruism
https://forum.effectivealtruism.org/posts/rpFjPmCL4tcfBic6a/effective-altruism-is-self-recommending
https://forum.effectivealtruism.org/posts/AqpFkoq3oSEvsqker/milan-griffes-on-ea-blindspots
Diversity:
https://forum.effectivealtruism.org/posts/rb5YDEk3zej3HF5bg/ea-diversity-unpacking-pandora-s-box
Policy:
https://forum.effectivealtruism.org/posts/Q7qzxhwEWeKC3uzK3/managing-risk-in-the-ea-policy-space
https://forum.effectivealtruism.org/posts/cDdcNzyizzdZD4hbR/critique-of-openphil-s-macroeconomic-policy-advocacy
https://forum.effectivealtruism.org/posts/9WxdtLEfEDJfBAruX/are-we-actually-improving-decision-making
Jobs:
https://forum.effectivealtruism.org/posts/jmbP9rwXncfa32seH/after-one-year-of-applying-for-ea-jobs-it-is-really-really
https://forum.effectivealtruism.org/posts/PbtXD76m7axMd6QST/the-funnel-or-the-individual-two-approaches-to-understanding
Organisation:
https://forum.effectivealtruism.org/posts/oNY76m8DDWFiLo7nH/what-to-do-with-people
https://forum.effectivealtruism.org/posts/KmondPMrgZ2ctKnP4/a-framework-for-assessing-the-potential-of-ea-development-in-1
https://forum.effectivealtruism.org/posts/YKEPXLQhYjm3nP7Td/ways-money-can-make-things-worse
https://forum.effectivealtruism.org/posts/wHyy9fuATeFPkHSDk/how-x-risk-projects-are-different-from-startups
https://forum.effectivealtruism.org/posts/by8u954PjM2ctcve7/experimental-longtermism-theory-needs-data
Of organisations: https://forum.effectivealtruism.org/topics/criticism-of-effective-altruist-organizations
Moderation of the boards, to point out misrepresentations and fallacies, would put it on par with the philosophy message board I moderated in the 90s. New folks shouldn’t have to defend themselves from EA regulars’ misrepresentations.
And, the selection of judges seems an arcane cabal… did you notice the irony, that your own, privately selected judges are the ones who determine if critique of themselves is valid? That’s equivalent to being “judge in your own trial”.
I also fear that, by offering a prize to the ‘best’, you are then able to disregard all those who ‘didn’t make the prize-threshold’. You gave only two months for it, while other organizations have a suggestion box that is always available, without judges dismissing all but the ‘best’.
Oh, darn—I can’t tell you this stuff, because you had already closed the contest by the time word of it had trickled to me.
✨✨Content✨✨
Alrighty, not sure how this contest works or what is going on, but I’ve got content to add in this thread!
My content might be different because I don’t see it as “red teaming”. I think “red teaming” is criticism that tends to be opposed to the issues. While wildly aggressive, I think that I accept the underlying goals and try to improve them systemically. For example, by finishing with constructive, specific suggestions.
Also, I think my content is different because it’s not circling the same topics (like, I don’t see anyone else writing these ideas or solutions).
Finally, everything will be themed with Nirvana. Please play the following song (Sliver)
FYI I downvoted this and your other comment entirely because of the gratuitous pictures, videos etc.
Without directly confronting you (it’s wrong and not acceptable), and writing in an impartial voice:
These pictures and videos are a deliberate comment/critique on the hidden effects of current aesthetics and norms of discussion.
Here, your reaction is being intentionally provoked, because they are further illustrations of what the critique views as defective: the prioritization of aesthetics over content. (Any number of the points being made, about for-profit entities, alternative theories of change, seem monumental, even if half true, but “We’ll downvote them because of a picture”.)
Suspicion of Anthropic Silent Shadow (AKA “Sass” or “Sassy”)
“PREREGISTRATION OF CRITICISM” (this isn’t the full criticism or solution, but I don’t know when I will type it up):
The root issue of Sassy concerns is that a major realization of EA interest in, and money entering, AI might take the form of super high levels of funding to nascent entities. A major thread here is the straddling of these entities across the non-profit/for-profit boundary. Anthropic is one member of this class, but a number of other organizations are coming up.
(A brief sketch to give an impression of this level of funding is in this comment: “Next-level Next-level”).
The consequent effects of this funding are large and include casting a shadow on all recruiting and organization formation across EA. This is still true (maybe some effects are increased) even if this is virtuous—if EAs are recruited, for example, pulling EA talent into middle-manager roles in AI orgs. There are positive effects too, such as high talent inflows. Importantly, most of these effects are silent.
As mentioned, a major thread is the for-profit status of these organizations. Some complications of this status are important (but cerebral):
the “cost effectiveness” of these interventions could be infinitely positive
it introduces a new theory of change of EA steering and leadership of relevant industries
a completely new theory of change related to TAI and takeoff, distinct from AI safety
However, the most immediate issue about for-profit status is venal. The slipperiness/porousness of straddling altruistic/profit projects, and the incentives related to this, might be bad and hard to manage. To be clear, I am worried about situations where for-profits wielding altruistic narratives result in bad outcomes, much worse outcomes than just having a regular for-profit.
Regarding the amount of funding, it seems possible that no situation like this has existed in any non-profit ecosystem like EA in history (but we can probably find smaller instances where high-quality non-profits are decapitated as their talent and processes flow to for-profits).
Note that Sassy criticism differs from, or is even opposed to, most concerns about spending. For example, it views certain concerns about “conflict of interest” as irrelevant or even misguided and counterproductive (EAs want closely aligned EAs together in leadership positions).
Solutions
Sass can’t be “stopped” now, and probably never could have been.
There are tangible things we can do that are robustly good:
Norms that involve frank communication about what people are doing when they get money or interest from EAs about these AI projects; this is good and interesting
A person whose explicit job is to check out what’s going on (and who is funded by an endowed fund for a period of time)
Note that both the above actions don’t need to have an adversarial character. Basically, it’s just leaning into the reality.
Conflicts of interest
Note that I have 4 conflicts of interest (basically, in the wrong way, that would normally cause a sane person not to write this):
I am funded by the relevant parties I am directly criticizing
I am a wannabe working on a for-profit language model thingy (so the very thing I am writing against)
I seek collaboration with people inside of these entities
I directly use several APIs and tools from the companies and even undocumented features and aid, which can be cut off
Finally, in theory, I know (non-EA) people who want to invest in these “for profit” organizations, and writing this isn’t helping that deal flow
My collaborators read and cringe at my forum comments
No wait, that’s actually six conflicts of interest.
So maybe “Next-level Next-level” will actually refer to the effects on my career, which is exciting.
Wow. You have 100 grand and you’re going to spend it on blowharding? Ok, that’s interesting. But, um, aren’t there countless people here already willing to provide that service for free?
I don’t understand how this choice aligns with EA principles, but ok, it’s your money.