Results from the First Decade Review
TL;DR:
The Decade Review happened: EA Forum users voted on which posts from the first decade of effective altruism they found most useful and important and reviewed the posts to explain what they appreciated.
We’re awarding $12,500 in prizes to authors and reviewers.[1]
The distribution of prizes to authors was based on your votes, the results of which are linked.
I’m using this post to highlight some aspects of the winning posts and reviews. I encourage more discussion in the comments (and in new posts!).
Prizes for posts
We’re awarding a total of $10,000 to the top posts (and $2,500 for the top reviews). Note that some authors chose to donate their prizes.
Summary
We’re awarding $10,000 to authors of the posts, broken down as follows.
$1,500 to Hauke Hillebrandt and John G. Halstead ($750 each) for Growth and the case against randomista development
$1,000 each to
Helen for Effective Altruism is a Question (not an ideology)
Jai for 500 Million, But Not A Single One More
Greg Lewis for Beware surprising and suspicious convergence
Nate Soares for On Caring
Luisa Rodriguez for two posts(!): What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)? and How bad would nuclear winter caused by a US-Russia nuclear exchange be?
$500 each to
David Althaus and Tobias Baumann ($250 each) for Reducing long-term risks from malevolent actors
Will MacAskill for Are we living at the most influential time in history?
Holly Elmore for We are in triage every second of every day
“EA applicant” for After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
Brian Tomasik for Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
Julia Wise for You have more than one goal, and that’s fine
Peter Singer for Famine, Affluence, and Morality
First prize ($1,500): Growth and the case against randomista development by Hauke Hillebrandt and John G. Halstead
Randomista development is “an approach to development economics [that focuses on] interventions which can be tested by randomised controlled trials (RCTs).” This is popular in effective altruism, but, as this post argues, might systematically undervalue or miss the interventions that have been responsible for some of the biggest improvements ever in quality of life. In particular, the authors discuss the importance of economic growth for human welfare and argue that the welfare gains from increasing GDP per capita in a country are so large that they outweigh the benefits of randomista development programs by orders of magnitude. As a result, they argue, researchers and grantmakers should spend much more time looking for interventions that can promote growth. The post also discusses objections and explores the limitations of GDP/growth, but emphasizes that this topic is under-prioritized by people in effective altruism.
I like a lot of things about this post. I had to print it out and read it carefully (pen, highlighters, and all). I don’t claim expertise in the subjects involved, but I understand enough to see that it’s making some important and true points (though you should also read the comments on the post for a discussion of potential weaknesses and more disputable claims).
I think more people should explore the value and tractability of economic growth, and more people might want to read this post as an introduction to that subject. Beyond that, it’s also a good example to follow if you’re considering writing or researching something. So I’m glad this post has won the first prize.
Or, as Maxime CdS put it,
[...] this post makes a very good point in a very important conversation, namely that we can do better than our currently identified best interventions for development.
The argument is convincing, and I would like to see both more people working on growth-oriented interventions, and counter-arguments to this.
Other things I like about this post (these might be some of the reasons people voted for it):
It is well researched.
It has a very specific summary (which lists the main arguments made and provides a rough outline of the post), and the section headers make it easy for readers to find what they are looking for.
The best criticisms target deep and ubiquitous beliefs, and this post is a great example of that principle.
Second prizes ($1,000 each)
Effective Altruism is a Question (not an ideology) by Helen
Feminism, secularism, and many other movements and worldviews answer questions like: “Should men and women be equal? (Yes.) What role should the church play in governance? (None.)” So it’s natural to ask: “What claims does ‘effective altruism’ make?” This post argues that, unlike those movements, effective altruism asks a question: “How can I do the most good, with the resources available to me?”
I view the statement “effective altruism is a question” as aspirational: it is a motto we should use to keep ourselves on track. As Helen points out, a key conclusion of this mindset is that “our suggested actions and causes are best guesses, not core ideas.”[2]
This post, and the question behind it, has become a central part of my attitude towards EA — and judging by the fact that it won second place in this review, I’m not unique in this respect.
A review for this post also won a prize.
500 Million, But Not A Single One More by Jai
In 2018, Jai wrote, according to one of the reviewers, “one of the best pieces of EA creative writing of all time” — a retelling of the story of smallpox eradication.
I don’t have much more to say, except perhaps that I agree with another reviewer that I’d like to see more work along these lines. A short excerpt:
We will never know their names.
The first victim could not have been recorded, for there was no written language to record it. They were someone’s daughter, or son, and someone’s friend, and they were loved by those around them. And they were in pain, covered in rashes, confused, scared, not knowing why this was happening to them or what they could do about it — victims of a mad, inhuman god. There was nothing to be done — humanity was not strong enough, not aware enough, not knowledgeable enough, to fight back against a monster that could not be seen.
Beware surprising and suspicious convergence by Gregory Lewis
This post added a tool to my cognitive toolkit. It points out that if you discover that two different beliefs, crucial considerations, or philosophies give the same outcome, you might want to be suspicious that you haven’t taken something to its real logical conclusion.
Or, to use an example from the post:
Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.
Eleanor: Okay, but I’m principally interested in improving human welfare.
Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.
Beware of this phenomenon.
Greg has other posts that I think enrich our cognitive toolkits:
“Reality is often underpowered” narrowly missed winning a prize (it came in 17th place).[3]
“Use resilience, instead of imprecision, to communicate uncertainty” has added “resilience” to my personal vocabulary.
From the post: “Suppose you want to estimate some important X (e.g. risk of great power conflict this century, total compute in 2050). If your best guess for X is 0.37, but you’re very uncertain, you still shouldn’t replace it with an imprecise approximation (e.g. “roughly 0.4”, “fairly unlikely”), as this removes information. It is better to offer your precise estimate, alongside some estimate of its resilience, either subjectively (“0.37, but if I thought about it for an hour I’d expect to go up or down by a factor of 2”), or objectively (“0.37, but I think the standard error for my guess to be ~0.1”).”
You can see more on his profile.
On Caring by Nate Soares
Nate Soares explores scope insensitivity in this essay, where he discusses the fact that he’s “not very good at feeling the size of large numbers,” and that this gets dangerous when we use our feelings as the key factors for our altruistic choices:
My internal care-o-meter was calibrated to deal with about 150 people, and it simply can’t express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn’t go up that high. [...] Nobody has one capable of faithfully representing the scope of the world’s problems. But the fact that you can’t feel the caring doesn’t mean that you can’t do the caring.
The post also discusses the difference between how much something is worth (it’s worth at least 3 minutes of my time to save the life of a bird affected by an oil spill, and it’s worth months to save the other thousands of birds) and what we should actually do when we can’t possibly do enough (I should probably not spend months cleaning birds, even if I want to).
I also recommend his series on “Replacing Guilt.”
Two posts on war, nuclear winter, and the likelihood of recovery from civilizational collapse by Luisa Rodriguez
Only one author had two posts make it into the list of 14 winning posts.
Luisa Rodriguez builds models for how different civilizational catastrophes might lead to extinction and analyzes the likelihood of human extinction in different scenarios by looking into factors like how long certain supplies might last.
It’s good research, and it’s also good communication about research. For instance, Rodriguez states the approximate time she spent on different sections, helping readers understand how much to trust the numbers. The post is a great demonstration of epistemic legibility: it explains precisely and in detail which points led to which beliefs. This allowed commenters to point out errors in the original post and collectively come to truer conclusions.
Rodriguez also shares many suggestions for further work on this topic, such as hosting wargames and further building out her models.
How bad would nuclear winter caused by a US-Russia nuclear exchange be?
This post explores the likely effects of a US-Russia nuclear exchange. Here’s an excerpt: “By my estimation, a nuclear exchange between the US and Russia would lead to a famine that would kill 5.5 billion people in expectation (90% confidence interval: 2.7 billion to 7.5 billion people).”
Kit Harris (who recently helped launch a nuclear security grantmaking program at Longview) writes, in a review that also won a prize:
This was the single most valuable piece on the Forum to me personally. It provides the only end-to-end model of risks from nuclear winter that I’ve seen and gave me an understanding of key mechanisms of risks from nuclear weapons. I endorse it as the best starting point I know of for thinking seriously about such mechanisms. I wrote what impressed me most here and my main criticism of the original model here (taken into account in the current version).
This piece is part of a series. I found most articles in the series highly informative, but this particular piece did the most excellent job of improving my understanding of risks from nuclear weapons.
Details that I didn’t cover elsewhere, based on recommended topics for reviewers:
How did this post affect you, your thinking, and your actions?
It was a key part of what caused me to believe that civilisation collapsing everywhere solely due to nuclear weapons is extremely unlikely without a large increase in the number of such weapons. (The model in the post is consistent with meaningful existential risk from nuclear weapons in other ways.)
This has various implications for prioritisation between existential risks and prioritisation within the nuclear weapons space.
Does it make accurate claims? Does it carve reality at the joints? How do you know?
I spent about 2 days going through the 5 posts the author published around that time, comparing them to much rougher models I had made and looking into various details. I was very impressed.
The work that went into the post did the heavy lifting and pointed the way to a better understanding of nuclear risk. The model in the original version of the post was exceptionally concrete, with a low error rate, such that reviewers were able to engage with it and identify the key errors.
If you want to produce useful research, this is a good example to learn from.
Third prizes ($500 each)
David Althaus and Tobias Baumann ($250 each) for Reducing long-term risks from malevolent actors
Throughout history, “malevolent actors” — people intent on causing harm — have been involved in serious catastrophes (some of which are listed in the post). Importantly, the possibility of advanced technologies like AI or mind-uploading means that malevolent actors could soon become much more powerful. The authors suggest a number of possible interventions that people in effective altruism should investigate and potentially put into action (like developing the science of malevolence and protecting political institutions from malevolent people).
In a review, Pablo wrote: “[This post] embodies many of the attributes I like to see in EA reports, including reasoning transparency, intellectual rigor, good scholarship, and focus on an important and neglected topic.”
Will MacAskill for Are we living at the most influential time in history? (consider reading the updated paper instead of the version in the post)
A rigorous analysis of the “hinge of history” hypothesis: “In this article I try to make the hinge of history claim more precise, give arguments in favour and against, and assess whether it is true. Ultimately, I argue that the claim [...] is quite unlikely to be true, and that this fact can serve as part of an argument for the conclusion that impartial altruists should generally be investing their resources, rather than trying to do good immediately. [...] Assessing whether this is true is crucially important, deserving of far more attention than I have been able to give it in this article. There are some good arguments for thinking that our time is very unusual [...]. But the claim that we are among the most influential people [...] does not seem warranted.”
Ben West[4] explains the benefits of the post in a review: it introduced the hypothesis to a broader audience, established clear terms and definitions, and is still “the canonical reference for skepticism about us living at the hinge of history.”
Holly Elmore for We are in triage every second of every day
A post that argues against the rejection of hard decisions. “Making better choices through conscious triage is no more ‘playing God’ than blithely abdicating responsibility for the effects of our actions. Both choices are choices to let some live and others die. The only difference is that the person who embraces triage has a chance to use their brain to improve the outcome. The suffering of the person who doesn’t receive the scarce resource is no less because you, personally, haven’t witnessed it.”
As the prizewinning review put it: “This post articulates an essential component of effective altruism in an elegant way. It provides a simple metaphor that is helpful both for adherents of the movement to reflect on what effective altruism involves and to communicate with the public.”
“EA applicant” for After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
A personal take that resonated deeply with many people in the community. The anonymous author of this post shares their experience of struggling to find a job at an EA organization.
In a review of this post, Ben West writes, “As an employer, I still think about this post three years after it was published, and I regularly hear it referenced in conversations about hiring in EA. The experiences in it clearly resonated with a lot of people, as evidenced by the number of comments and upvotes. I think it’s meaningfully influenced the frame of many hiring rounds at EA organizations over the past three years.” Ben has since published a sequence on EA Hiring.
Brian Tomasik for Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness
“I think charities do differ a lot in expected effectiveness. Some might be 5, 10, maybe even 100 times more valuable than others. Some are negative in value by similar amounts. But when we start getting into claimed differences of thousands of times, especially within a given charitable cause area, I become more skeptical. And differences of 10^30 are almost impossible, because everything we do now may affect the whole far future and therefore has nontrivial expected impact on vast numbers of lives.”
In a review of his own post, Tomasik writes “I think maybe the main value of this post is to help people keep in mind how complex the effects of our actions are and that many different people are doing work that’s somewhat relevant to what we care about (for good or ill).”
Julia Wise for You have more than one goal, and that’s fine
“Cost-effectiveness analysis is a very useful tool. I wish more people and institutions applied it to more problems. But like any tool, this tool will not be applicable to all parts of your life. Not everything you do is in the ‘effectiveness’ bucket. I don’t even know what that would look like.”
As Wise’s self-review puts it: “I would like people to examine whether they’re doing things for more self-regarding personal reasons, or for optimizer-y improve-the-world reasons. And enjoy the resources they put toward themselves and their friends, but also take seriously the project of improving the world and put significant resources toward that. Rather than being confused about which project you’re pursuing, which I think is suboptimal both for your own enjoyment and for improving the world.”
Peter Singer for Famine, Affluence, and Morality
In this classic, Peter Singer writes, “I shall argue that the way people in relatively affluent countries react to a situation like that in Bengal cannot be justified; indeed, the whole way we look at moral issues—our moral conceptual scheme—needs to be altered, and with it, the way of life that has come to be taken for granted in our society. [...] Discussion, though, is not enough. What is the point of relating philosophy to public (and personal) affairs if we do not take our conclusions seriously? In this instance, taking our conclusion seriously means acting upon it.”
This post was link-posted by Zach Stein-Perlman (for which we’re really grateful). He writes: “this is Peter Singer’s signature article, and one of the most influential articles ever published in ethics. This article is a significant part of the intellectual foundation and motivation for effective altruism.”
Prizes for reviews
None of this would be possible without reviewers. We’re awarding…
$250 each to
Sophia, for a review on Helen’s “Effective Altruism is a Question (not an ideology)”
“I don’t think it necessarily accurately describes what the effective altruism movement is, but more of what it aspires to be. Having this compass on where we want to be helps us nudge the movement towards that.” As I mentioned above, I really value this understanding of the post. The review also adds to the discussion about what “EA” is.
Matthew Barnett for a review on “Concerning the Recent 2019-Novel Coronavirus Outbreak”
We often hesitate to share our weird thoughts. At the start of the COVID-19 pandemic, a number of people were starting to get worried about the virus, but felt uncomfortable using their platforms to share their worries with other people for fear of appearing alarmist or silly.
This is why I think Matthew’s self-review is so powerful. He notes:
“Writing this post was a very difficult thing for me to do. On an object-level, I realized that the evidence coming out of Wuhan looked very concerning. The more I looked into it, the more I thought, ‘This really seems like something someone should be ringing the alarm bells about.’ But for a while, very few people were predicting anything big on respectable forums (Travis Fisher, on Metaculus, being an exception), so I stayed silent.”
This phenomenon is important to keep in mind for next time. Matthew ended up writing the post — and is rightfully praised for doing that. Let’s do the same.
Sarah H, for a review on Holly Elmore’s “We are in triage every second of every day”
Sarah points out both the strong upside of the post and how someone could improve or adapt it: “An abbreviated version of the piece, consisting of the first full paragraph in conjunction with the final four, could serve as a brief overview of this sharp idea.”
Michael Aird for a review on “Database of existential risk estimates” (and two other reviews)
There is a pattern to these three self-reviews by Michael. He writes:
“I think it’s kind of crazy that nothing very much like this database existed before I made it, especially given how simple making it was.”
“It’s one of many posts of mine that seem like they were somewhat obvious low-hanging fruit that other people could’ve plucked too.”
“I think these points are pretty obvious and probably were already in many people’s heads. I think probably many people could’ve easily written approximately this.”
In other words, Michael is pointing out that he noticed a need that could be filled with a bit of effort, and went ahead and filled the need.
Note also that two of the posts (the database and the directory of research topic lists) are collections. As Michael points out in another of his posts, this is a kind of useful post that is especially easy to publish “when you’re learning about a topic anyway.”
Another thing that I like about the reviews is that they point out ways in which the related posts can be improved or are less useful than expected.
Kit Harris, for a review on Luisa Rodriguez’s “How bad would nuclear winter caused by a US-Russia nuclear exchange be?”
It is really powerful to know that a piece of writing was useful (it informed choices in grantmaking) and that it withstood someone checking it: “I spent about 2 days going through the 5 posts the author published around that time, comparing them to much rougher models I had made and looking into various details. I was very impressed.”
I’d love to see more commenters share whether they’ve put some effort into checking a post, or whether they’ve used it for something.
David Moss, for a review on “EA Survey 2019 Series: How EAs Get Involved in EA”
David’s review of his own post is a great summary of the post and a TL;DR of the key updates since he published it: what’s held up, what hasn’t, which presentations of data are misleading, what was useful, and where to find more info. More comments like this would help Forum users be as informed as possible.
Peter Wildeford for a review on “You have a set amount of ‘weirdness points’. Spend them wisely.”
Peter’s self-review gives a lot of context on what’s changed since the original post was written. This is especially useful given that the post still gets shared and discussed. Peter writes that “now, in 2022, I think the EA movement unequivocally has credibility / traction and the times now call for ‘keeping EA weird’ - the movement has enough sustaining power now that we can afford to be weird and that this weirdness is an asset as it is what puts important but otherwise completely disregarded concepts on the map.”
Rohin Shah for a review on “Thoughts on the ‘Meta Trap’”
One thing I particularly like about this self-review is that Rohin notes that he’d make a change in terminology if he were rewriting the post: “[I’d] stop saying ‘meta’. I don’t know what I’d replace it with, but ‘meta’ is too ambiguous and easily misunderstood. ‘Promotion traps’ came up as a suggestion in the comments; that seems reasonable.” I also like that he identifies what he views as “the biggest critique of this post” — “that it doesn’t demonstrate that any of these traps actually happen(ed) in practice.”
$100 each to
Nuño Sempere’s review of “SHOW: A framework for shaping your talent for direct work”
“This post influenced my own career to a non-insignificant extent. I am grateful for its existence, and think it’s a great and clear way to think about the problem. As an example, this model of patient spending was the result of me pushing the “get humble” button for a while. This post also stands out to me in that I’ve come back to it again and again.”
Adam Gleave’s review of “2017 Donor Lottery Report”
A discussion of the impact of this post, and some suggestions: “The post had less direct impact than I hoped, e.g. I haven’t seen much analysis following on from it or heard of any major donations influenced by it. Although I’ve not tried very hard to track this, so I may have missed it. However, it did have a pretty big indirect impact, of making me more interested in grantmaking and likely helping me get a position on the long-term future fund. Notably you can write posts about what orgs are good to donate to even if you don’t have $100k to donate… so I’d encourage people to do this if they have an interest in grantmaking, or scrutinize how good the grants made by existing grantmakers are.”
Jackson Wagner’s review of “The Narrowing Circle”
A highlight: “Here are some other pieces that seem relevant to the thread of ‘investigating what drives moral change’:
AppliedDivinityStudies arguing that moral philosophy is not what actually drives moral progress.
A lot of Slate Star Codex / Astral Codex Ten is about understanding cultural changes. Here for instance is a dialogue about shifting moral foundations, expanding circles, and what that might tell us about how things will continue to shift in the future.”
Seanrson’s review of “Longtermism and animal advocacy”
“Longtermism and animal advocacy are often presented as mutually exclusive focus areas. This is strange, as they are defined along different dimensions: longtermism is defined by the temporal scope of effects, while animal advocacy is defined by whose interests we focus on. Of course, one could argue that animal interests are negligible once we consider the very long-term future, but my main issue is that this argument is rarely made explicit.”
Evelyn Ciara’s review of “Doing good while clueless”
“I’m heartened to have seen progress in the areas identified in this post. For example, the Effective Institutions Project was created in 2020 to work systematically on [improving institutional decision-making]. Also, I’ve seen posts calling attention to the inadequacy of existing cause prioritization research.
Going forward, I’d like to see more systematic attempts at cause prioritization from a longtermist perspective”
Voting results
You can see them here. This describes how voting worked.
Next steps
That’s not all, folks:
Many other awesome posts were nominated, but didn’t get reviewed and didn’t make it to the final stage
We’ll organize some of this content into sequences and collections
And we’ll keep you updated on the Forum.
Thanks to everyone involved, and please continue writing and commenting!
Huge appreciation to all the authors, editors, commenters, reviewers, voters, readers, etc.
Footnotes
[1] We ended up not recruiting judges for the selection of reviews that won prizes, mostly because there just weren’t that many reviews, and we were very time-constrained. Also, this wrap-up is being posted quite late. Sorry about that!
[2] While I really value the state of mind the post gestures to, I disagree with a more literal interpretation of the message. When describing EA to people who don’t know much about it, I think it’s somewhat misleading to insist that “EA is just a question: ‘How can I do the most good?’” This is almost a motte-and-bailey that can make it harder to criticize the worldview. In practice, the EA community has focus areas, preferred approaches, jargon, and the rest.
[3] A Forum Prize announcement explained: “This post discusses the fact that we ought to pay more attention when we find ourselves working with whatever data we can scrounge from data-poor environments, and consider other ways of developing our judgments and predictions.”
[4] (My manager.)
Comments
The submission in last place looks quite promising to me actually.
Does anyone know whether Peter Singer is a pseudonym or the author’s real name, and whether they’re involved in EA already? Maybe we can get them to sign up for an EA Intro Fellowship or send them a free copy of an EA book – perhaps TLYCS?
Peter Singer is originally a character in Scott Alexander’s “Unsong,” mentioned here (mild spoilers), so it’s a pseudonym that’s a reference for a certain ingroup.
Maybe we should send a book to all singers named Peter?
https://www.gemtracks.com/guides/view.php?title=most-famous-singers-celebrities-named-peter&id=4861
I’m not sure. Peter Gabriel, for instance, seems to be an adherent of shorthairism, which I’m skeptical of.
You might not feel an instinctive affinity for shorthairists, but try to expand your moral circle!
Feedback: on my phone, I tried and failed to read the voting results ranked by how people voted. I don’t know what weighting is used in the spreadsheet, so the ordering feels monkeyed-with.
Can you write a bit more about what you mean? What voting results? Why would it be obvious that you could back this out?
I don’t remember the details, but I remember thinking the quadratic voting formula seemed somewhat “underdetermined” and left room for “post-processing”. I read this as the “designer” not being confident and leaving room to get well-behaved results (as opposed to schemes of outright manipulation).
Uh, I spent 45 seconds looking at this, but it looks like the final determinative score was created by doubling the >1000 karma weighted-vote score and adding it to the <1000 karma weighted-vote score.
The above thought might be noise and not what you’re talking about (but that’s because the voting formula is admittedly convoluted and not super clearly documented; it reads like quadratic voting passed through a few different hands without a clear owner).
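For concreteness, here is a minimal sketch in Python of the scoring rule this comment describes. The 2x + y weighting is an inference from the spreadsheet rather than a documented formula, so the function and variable names below are hypothetical:

```python
# Hypothetical reconstruction of the final score described above (an inference
# from the spreadsheet, not a documented formula). Assumes two karma-weighted
# vote tallies per post: one from voters with >1000 karma (x) and one from
# voters with <1000 karma (y).
def final_score(high_karma_tally: float, low_karma_tally: float) -> float:
    """Inferred rule: double the high-karma tally, then add the low-karma tally."""
    return 2 * high_karma_tally + low_karma_tally

# Example: a post tallying 30 among high-karma voters and 12 among the rest
# would get a final score of 2 * 30 + 12 = 72.
print(final_score(30, 12))  # 72
```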
Took me a while to find where you got your 2x + y from; I see it’s visible if you highlight the cells in the sheet.
Here’s a sheet with the score as sorted by the top 1k people, which is what I was interested in seeing: https://docs.google.com/spreadsheets/d/1VODS3-NrlBTnSMbGibhT4M2FpmfT-ojaPTEuuFIk9xc/edit?usp=sharing
I’d find it helpful if the spreadsheet also listed authors’ usernames beside each post.
Reading the title of this post, I thought it was a decade review of the effective altruism movement itself. Are any of the EA orgs working on that?