Background on my views on the EA community and epistemics
Epistemic status: Passionate rant
I think protecting and improving the EA community’s epistemics is extremely important and we should be very very careful about taking actions that could hurt it to improve on other dimensions.
First, I think that the EA community’s epistemic advantage over the rest of the world, in terms of both getting to true beliefs via a scout mindset and taking their implications seriously, is extremely important for the EA community’s impact. I think it might be even more important than the moral difference between EA and the rest of the world. See Ngo and Kwa for more here. In particular, it seems like we’re very bottlenecked on epistemics in AI safety, perhaps the most important cause area. See the Muehlhauser and MIRI conversations.
Second, I think the EA community’s epistemic culture is an extremely important thing to maintain as an attractor for people with a scout mindset and a taking-ideas-seriously mentality. This is a huge reason that I, and I’m guessing many others, love spending time with others in the community, and I’m very very wary about sacrificing it at all. This includes people being transparent and upfront about their beliefs and the implications.
Third, the EA community’s epistemic advantage and culture are extremely rare and fragile. By default, they will erode over time as ~all cultures and institutions do. We need to try really hard to maintain them.
Fourth, I think we really need to be pushing the epistemic culture to improve rather than erode! There is so much room for improvement: better quantification of cost-effectiveness, progress on long-standing debates, making it more socially acceptable and common to critique influential organizations and people, etc. There’s a long way to go, and we need to move forward, not backwards.
On (2b): I’m a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I’m not confident here. WWOTF might just have the effect of bringing certain issues closer to the edges of the Overton window. I find it plausible that the most effective way to make AI risk one of these issues is the way WWOTF does it: getting mainstream public figures and magazines talking about it in a very positive way. I could see how this might’ve been far harder with a book that allows people to brush it off as tech-bro BS more easily.
I think this is a fair point, but even if it’s right, I’m worried about trading off some community epistemic health to appear more palatable to this crowd. I think it’s very hard to consistently present your views publicly in a fairly different way than they are presented in internal conversations, and doing so hinders the intellectual progress of the movement. I think we need to be going in the other direction; Rob Bensinger has a Twitter thread on how we need to be much more open and less scared of saying weird things in public, to make faster progress.
On there being intellectual dishonesty: I worry a bit about this, but maybe Will is just providing his perspective and that’s fine. We can still have others in the longtermist community disagree on various estimates. Will, for one, has explicitly tried not to be seen as a leader of a movement of people who just follow his ideas. I’d be surprised if differences within the community become widely seen as intellectual dishonesty from the outside (though of course isolated claims like these have already been made).
Sorry if I wasn’t clear here: I’m most worried about Will not being fully upfront about the implications of his own views.
On alternative uses of time: Those three projects seem great and might have higher EV per unit of effort spent, but that’s consistent with great writers and speakers like Will having a comparative advantage in writing WWOTF.
Seems plausible, though I’m concerned about the effects on community epistemic health from the book and the corresponding big media push. If a lot of EAs get interested via WWOTF, they may come in with a very different mindset about prioritization, quantification, etc.
The mechanism I have in mind is a bit nebulous. It’s in the vein of my response to (2a), i.e., creating intellectual precedent, making odd ideas seem more normal, etc., so as to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn’t have to be through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it’s not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.
Seems plausible to me, though I’d strongly prefer if we could do it in a way where we’re also very transparent about our priorities.
(Also, sorry for only bringing up the community epistemic health thing now. Ideally I would have brought it up earlier in this thread and discussed it more in the post, but I’ve only been fleshing out my thoughts on it yesterday and today.)
Nodding profusely while reading; thanks for the rant.
I’m unsure if there’s much disagreement left to unpack here, so I’ll just note this:
- If Will was in fact not being fully honest about the implications of his own views, then I doubt pretty strongly that this could be worth any potential benefit. (I also doubt there’d be much upside anyway, given what’s already in the book.)
- If the claim is purely about framing, I can see very plausible stories for costs regarding people entering the EA community, but I can also see stories for the benefits I mentioned before. I find it non-obvious that a lack of prioritisation/quantification in WWOTF leads to a notably lower-quality EA community, as misconceptions may be largely corrected when people try to engage with the existing community. Though I could very easily change my mind on this; e.g., it would worry me to see lots of new members with similar misconceptions enter at the same time. The magnitude of the pros and cons of the framing seems like an interestingly tough empirical question.
Roughly agree with both of these bullet points! I want to be very clear that I have no reason to believe Will wasn’t being honest, and on the contrary I believe he very likely was; my concerns are about framing. And I agree the balance of costs and benefits regarding framing isn’t super obvious, but I am pretty concerned about the possible costs.