Here’s a crazy idea. I haven’t run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
Basic structure:
Someone picks a book they want to review.
Optionally, they email me asking how on-topic I think the book is (to reduce the risk of writing a review that doesn't end up earning the prize).
They write a review, and send it to me.
If it's the kind of review I want, I give them $500 in return for them posting the review to the EA Forum or LW with a "This post sponsored by the EAIF" banner at the top. (I'd also love to set up an impact purchase thing, but that's probably too complicated.)
If I don’t want to give them the money, they can do whatever with the review.
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
Things directly related to traditional EA topics
Things about the world more generally, e.g. macrohistory, how governments work, The Doomsday Machine, or the history of science (e.g. Asimov's "A Short History of Chemistry")
I think that books about self-help, productivity, or skill-building (e.g. management) are dubiously on topic.
Goals:
I think that these book reviews might be directly useful. There are many topics where I’d love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
I think it would be healthy for EA’s culture. I worry sometimes that EAs aren’t sufficiently interested in learning facts about the world that aren’t directly related to EA stuff. I think that this might be improved both by people writing these reviews and people reading them.
Relatedly, sometimes I worry that rationalists are too inclined to think about the world via introspection or weird analogies rather than by learning many facts about its different aspects; book reviews might be a healthier way to direct energy towards intellectual development.
It might surface some talented writers and thinkers who weren’t otherwise known to EA.
It might produce good content on the EA Forum and LW that engages intellectually curious people.
Suggested elements of a book review:
A one-paragraph summary of the book
How compelling you found the book’s thesis, and why
The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones
Optionally, epistemic spot checks
Optionally, "book adversarial collaborations", where you review two books that take opposing positions on the same topic.
[For context, I'm definitely in the social cluster of powerful EAs, though I don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc.); I had more power when I was actively grantmaking on the EAIF, but I no longer do this. In this comment I'm not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren't very good. And so:
people in EA roles who could adopt these suggestions choose not to
and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.
And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You’ve laid out a long list of ways that you wish EA orgs behaved differently. You’ve also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies; I’ll refer to this group as “core org EAs” for brevity. But I don’t understand how you hope to cause EA orgs to change in these ways.
Maybe your ToC is that core org EAs read your post (and similar posts) and are intellectually persuaded by your suggestions, and adopt them.
If that’s your goal, I think you should try harder to understand why core org EAs currently don’t agree with your suggestions, and try to address their cruxes. For this ToC, “upvotes on the EA Forum” is a useless metric—all you should care about is persuading a few people who have already thought about this all a lot. I don’t think that your post here is very well optimized for this ToC.
(Note that this doesn't mean they think your suggestions aren't net positive. These people are extremely busy and can pursue only a tiny fraction of the good-seeming things available to them (namely the ones that seem best to them), so demonstrating that something is net positive isn't nearly enough.)
Also, if this is the ToC, I think your tone should be one of politely suggesting ways that someone might be able to do better work by their own lights. IMO, if some EA funder wants to fund an EA org to do things a particular way, you have no particular right to demand that they do things differently; you just have the ability to try to persuade them (and it's their prerogative whether to listen).
For what it's worth, I am very skeptical that this ToC will work. I personally find this post very unpersuasive, and I'd be very surprised if I changed my mind to agree with it in the next year, because I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there were a big update from thinking about them more).
Maybe your ToC is that other EAs read your post and are persuaded by your suggestions, and then pressure the core org EAs to adopt some of your suggestions even though they disagree with them.
If so, you need to think about which ways EAs can actually apply pressure to the core org EAs. For example, as someone who prioritizes short-AI-timelines longtermist work over global health and development, I am not very incentivized to care about whether random GWWC members will stop associating with EA if EA orgs don’t change in some particular way. In contrast, if you convinced all the longtermist EAs that they should be very skeptical of working on longtermism until there was a redteaming process like the one you described, I’d feel seriously incentivized to work on that redteaming process. Right now, the people I want to hire mostly don’t agree with you that the redteaming process you named would be very informative; I encourage you to try to persuade them otherwise.
Also, I think you should just generally be scared that this strategy won’t work? You want core org EAs to change a bunch of things in a bunch of really specific ways, and I don’t think that you’re actually going to be able to apply pressure very accurately (for similar reasons that it’s hard for leaders of the environmentalist movement to cause very specific regulations to get passed).
(Note that I don’t think you should engage in uncooperative behavior (e.g. trying to set things up specifically so that EA orgs will experience damage unless they do a particular thing). I think it’s totally fair game to try to persuade people of things that are true because you think that that will cause those people to do better things by their own lights; I think it’s not fair game to try to persuade people of things because you want to force someone’s hand by damaging them. Happy to try to explain more about what I mean here if necessary; for what it’s worth I don’t think that this post advocates being uncooperative.)
Perhaps you think that the core org EAs think of themselves as having a duty to defer to self-identified EAs, and so if you can just persuade a majority of self-identified EAs, the core org EAs will dutifully adopt all the suggestions those self-identified EAs want.
I don't think this is realistic: I don't think that core EA orgs mostly think of themselves as executing on the community's wishes; I think they (as they IMO should) think of themselves as trying to do as much good as possible (subject to the constraint of being honest and reliable, etc.).
I am somewhat sympathetic to the perspective that EA orgs have implied that they do think of themselves as trying to represent the will of the community, rather than just viewing the community as a vehicle via which they might accomplish some of their altruistic goals. Inasmuch as this is true, I think it’s bad behavior from these orgs. I personally try to be clear about this when I’m talking to people.
Maybe your ToC is that you’re going to start up a new set of EA orgs/projects yourself, and compete with current EA orgs on the marketplace of ideas for funding, talent, etc? (Or perhaps you hope that some reader of this post will be inspired to do this?)
I think it would be great if you did this and succeeded. I expect you to fail, but inasmuch as I'm wrong, it would be great if you proved me wrong, and I'd respect you much more for actively trying than for complaining that other people disagree with you.
If you wrote a post trying to persuade EA donors that they should, instead of other options, donate to an org that you started that will do many of the research projects you suggested here, I would think that it was cool and admirable that you’d done that.
For many of these suggestions, you wouldn’t even need to start orgs. E.g. you could organize/fundraise for research into “the circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought, by what criteria, and how this varies by subject/domain”.
I’ll generically note that if you want to make a project happen, coming up with the idea for the project is usually a tiny fraction of the effort required. Also, very few projects have been made to happen by someone having the idea for them and writing it up, and then some other person stumbling across that writeup and deciding to do the project.
Maybe you don’t have any hope that anything will change, but you heuristically believe that it’s good anyway to write up lists of ways that you think other people are behaving suboptimally. For example, I have some sympathy for people who write op-eds complaining about ways that their government is making poor choices, even if they don’t have a detailed theory of change.
I think this is a fine thing to do when you don't have more productive ways to channel your energy. In the case of this post in particular, I feel like there are many more promising theories of change available, and I want to urge people who agree with the post to pursue those.
Overall, my main complaint about this post is that it feels like it takes a fundamentally unproductive stance: it acts as if its goal is to persuade core EAs, but it's really using that as an excuse to ineffectually complain or apply social pressure; if it were trying to persuade, it would pay more attention to tradeoffs and cruxes. People sympathetic to the perspective in this post should either seriously attempt to persuade, or they should do things themselves rather than complaining when others don't do those things.
(Another caveat on this comment: there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them.)
(In general, I love competition. For example, when I was on the EAIF I explicitly told some grantees that I thought that their goal should be to outcompete CEA, and I’ve told at least one person that I’d love it if they started an org that directly competes with my org.)