This is a very thoughtful critique. What do you make of the argument that The Precipice and WWOTF work well together as a partnership that target different markets and could be introduced at different stages as people get into EA?
Thanks John! My first-order reaction is that I’m somewhat skeptical but would need to hear the claim and argument for it fleshed out a little more to have a strong opinion.
Below I’ll list some reasons I’m initially skeptical (I’d maybe buy that WWOTF could be better for ~10-20% of people), though let me know if I’m misunderstanding your question, since I don’t know the details of what you have in mind.
First, repeating a line from the post:
I’m not making strong claims about other groups like the general public or policymakers, though I’d tentatively weakly prefer The Precipice for them as well since I think it’s more accurate and we need strong reasons to sacrifice accuracy.
Note that this argument doesn’t apply if you think WWOTF is more accurate than The Precipice, which I somewhat confidently believe is wrong, though I know some reasonable people disagree.
Second, I’m not sure what “target different markets” means exactly (which markets and how will they contribute to longtermist impact?), and am somewhat skeptical that it would outweigh the benefits of transparency and accuracy. I identify as consequentialist but have always had pretty strong intuitions toward transparency, being very up-front about things, etc. which could potentially be biasing my consequentialist assessment here.
Third, on “introduced at different stages as people get into EA”, first I’ll repeat a line from the appendix:
I think there are some things of interest to engaged EAs, but as I’ve argued, I think the book isn’t a good introduction for potential highly engaged EAs. I understand the appeal of gentle introductions (I got in through Doing Good Better), but I think it’s 90% likely I would have also gotten very interested if I had gone straight to The Precipice, and the same goes for most highly engaged longtermist EAs.
I’ll also expand a bit on my personal experience here to give a sense of what’s informing my intuition. To caveat the below: I’m not claiming that MacAskill wasn’t clear about his beliefs at the time in Doing Good Better (DGB), as I’m not sure whether that’s the case. I’ll also caveat that my episodic memory isn’t that great, so some of this might be revisionist/inaccurate.
I read DGB as required reading for a class in college in Spring 2018, and loved it. I was very excited to save many lives by earning to give, and started cutting out high-suffering foods from my diet. I then quickly started listening to 80,000 Hours podcasts, found out more about the diversity of causes in EA, and read some books discussed on the podcast which I really enjoyed (ones I remember are Superforecasting, The Elephant in the Brain, The Case Against Education, and Superintelligence).
I attended my first EAG in summer 2019, and it was overall a positive experience but I was somewhat struck by the prevalence of AI risk, compared to the diversity of DGB and the 80,000 Hours podcast. I also chatted with at least one person there who was very critical of the focus on AI risk, which made me pretty hesitant about it. I came out of the EAG mostly more excited but also a bit hesitant about AI stuff. I also went vegan after it.
Over the course of the last 3 years, I’ve gotten progressively more convinced (especially on a gut level, but also intellectually) that it’s actually reasonable to worry about high levels of AI risk and that this might just be the most important thing in the world to work on. See my recent post, especially this section, for more background on my experience here.
So basically, overall I understand the appeal of gentle introductions followed by ramping up. But my personal experience makes me feel like I could have gotten on board with the arguments and implications a bit faster if they had been introduced to me more straightforwardly and earlier. I still think a period of skepticism is healthy, but I could easily have been a bit more turned off, felt more bait-and-switched than I did, and disengaged from EA and AI risk. And I worry that other promising people who tend to be skeptical of, but interested in, claims that sound wild/weird might be turned off to a larger extent.
Enjoyed the post but I’d like to mention a potential issue with points like these:
I’m skeptical that we should give much weight to message testing with the “educated general public” or the reaction of people on Twitter, at least when writing for an audience including lots of potential direct work contributors.
I think impact is heavy-tailed and we should target talented people with a scout mindset who are willing to take weird ideas seriously.
I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.g., during ‘crunch time’ or when trying to handle value lock-in. If this is true and WWOTF helps achieve this, it can justify writing a book that focuses less on people who are already prone to react in ways we typically associate with a scout mindset. Increasing direct work in the usual sense is one thing to optimise for; another is creating an environment receptive to proposals and cooperation with those who do direct work.
So although I understand that you’re not making strong claims about other groups like the general public or policymakers, I think it’s worth mentioning that “I’d rather recommend The Precipice to people who might do impactful work” and “WWOTF should have been written differently” are very importantly distinct claims.
So although I understand that you’re not making strong claims about other groups like the general public or policymakers, I think it’s worth mentioning that “I’d rather recommend The Precipice to people who might do impactful work” and “WWOTF should have been written differently” are very importantly distinct claims.
I agree with this. The part you quoted is from the appendix, and in an ideal world it would be more rigorously argued, with the claims you identified separated more cleanly. But in practice it should probably be thought of more as “stream-of-consciousness reactions from Eli as he read Will’s posts/comments” (which is part of why I put it in the appendix).
I would put nontrivial weight on this claim: the support of the general public matters a lot in TAI worlds, e.g., during ‘crunch time’ or when trying to handle value lock-in. If this is true and WWOTF helps achieve this, it can justify writing a book that focuses less on people who are already prone to react in ways we typically associate with a scout mindset. Increasing direct work in the usual sense is one thing to optimise for; another is creating an environment receptive to proposals and cooperation with those who do direct work.
Epistemic status: speculation about something I haven’t thought about that much (TAI governance and public opinion)
I appreciate you making the benefits more concrete.
However, I’m still not sure I fully understand the scenario where WWOTF moves the needle here, or how much it would help compared to alternatives. I’ll list my best guess at more explicit steps on the path to impact (let me know if I’m assuming wrong; a lot of this is guessing!), along with my skepticisms about each step:
Many in the general public read WWOTF, and over time, as the ideas spread in various ways, many people become much more on board with the general idea of longtermism.
I’m skeptical that > ~25% of the general public both (a) has the bandwidth/slack to care about the long-term future as opposed to their current issues and (b) is philosophically inclined enough to think about morality in this way. Maybe this could happen as a cultural shift over the course of several generations, but it feels like <5% likely to me in worlds with <40-year timelines.
We either (a) convince the general public to care a large amount specifically about misaligned AI risk and elect politicians who care about it, or (b) get politicians on board with general longtermist platforms when actually the thing we care about most is misaligned AI risk.
My skepticism about (a): if you really believe the general public is savvy enough to get on board with a large amount of misaligned AI risk, I feel like you should also believe they’re savvy enough to feel bait-and-switched by this two-step conversion process, compared to us just being more upfront about our beliefs.
My skepticism about (b): it feels intellectually dishonest not to be upfront about what we actually care about most by far, and this will probably backfire in some way, even if it’s hard to predict how in advance (one possibility is that the savviest journalists figure out what’s going on, write hit pieces, and turn the public against us).
The politicians who care about AI risk then help avoid value lock-in during TAI crunch time.
This seems good if we can actually achieve the first two steps and I’m wrong to be skeptical. I’m uncertain about how good; I’m not sure how much influence politicians will have here vs. people at top AI labs.
Some alternatives to the 3-step plan I’ve interpreted above that feel higher EV per effort spent, and often more direct:
Outreach to ML academics, like Vael Gates is doing.
Write a book containing a very high-quality treatment of alignment, to get some of the most savvy public intellectuals / journalists on board with alignment in particular.
Make higher quality resources to convince ML researchers in top industry labs, like Richard Ngo is doing.
This was helpful; I agree with most of the problems you raise, but I think they’re objecting to something a bit different from what I have in mind.
Agreement: 1a, 1b, 2a
I am also very sceptical that >25% of the general public satisfies (1a) or (1b). I don’t think these are the main mechanisms through which the general public could matter regarding TAI. The same applies to (2a).
Differences: 2b, 3a, alternatives
On (2b): I’m a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I’m not confident here. WWOTF might just have the effect of bringing certain issues closer to the edges of the Overton window. I find it plausible that the most effective way to make AI risk one of these issues is in the way WWOTF does it: get mainstream public figures and magazines talking about it in a very positive way. I could see how this might’ve been far harder with a book that allows people to brush it off as tech-bro BS more easily.
On there being intellectual dishonesty: I worry a bit about this, but maybe Will is just providing his perspective and that’s fine. We can still have others in the longtermist community disagree on various estimates. Will, for one, has explicitly tried not to be seen as the leader of a movement of people who just follow his ideas. I’d be surprised if differences within the community became widely seen as intellectual dishonesty from the outside (though of course isolated claims like these have been made already).
So maybe what we want from politicians and policymakers during important moments is for them to be receptive to good ideas. The perceived prioritisation of AI within longtermist writing might just not turn out to be that crucial. I’m open to changing my mind on this, but I don’t expect there to be much conflict between different longtermist priorities such that policymakers will in fact need to choose between them. That’s a reason I’d expect the best we can do is to make certain problems more palatable, so that when an organisation tells policymakers “we need policy X, else we raise the risk of AI catastrophe”, they are more likely to listen.
On (3a): I’m also very uncertain here, but conditional on some kind of intent alignment, it seems a lot more plausible to me that coordination with the world outside top labs becomes valuable, e.g., on values, managing transitions, etc. (especially if takeoff is slow).
On alternative uses of time: Those three projects seem great and might be better EV per effort spent, but that’s consistent with great writers and speakers like Will having a comparative advantage in writing WWOTF.
The mechanism I have in mind is a bit nebulous. It’s in the vein of my response to (2a), i.e., creating intellectual precedent, making odd ideas seem more normal, etc., to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn’t have to come through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it’s not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.
Background on my views on the EA community and epistemics
Epistemic status: Passionate rant
I think protecting and improving the EA community’s epistemics is extremely important and we should be very very careful about taking actions that could hurt it to improve on other dimensions.
First, I think that the EA community’s epistemic advantage over the rest of the world, in terms of both getting to true beliefs via a scout mindset and taking the implications seriously, is extremely important for the EA community’s impact. I think it might be even more important than the moral difference between EA and the rest of the world. See Ngo and Kwa for more here. In particular, it seems like we’re very bottlenecked on epistemics in AI safety, perhaps the most important cause area. See Muehlhauser and the MIRI conversations.
Second, I think the EA community’s epistemic culture is an extremely important thing to maintain as an attractor for people with a scout mindset and a taking-ideas-seriously mentality. This is a huge reason that I, and I’m guessing many others, love spending time with others in the community, and I’m very very wary about sacrificing it at all. This includes people being transparent and upfront about their beliefs and their implications.
Third, the EA community’s epistemic advantage and culture are extremely rare and fragile. By default, they will erode over time as ~all cultures and institutions do. We need to try really hard to maintain them.
Fourth, I think we really need to be pushing the epistemic culture to improve rather than erode! There is so much room for improvement in quantification of cost-effectiveness, making progress on long-standing debates, making it more socially acceptable and common to critique influential organizations and people, etc. There’s a long way to go and we need to move forward not backwards.
On (2b): I’m a bit sceptical that politicians or policymakers are sufficiently nitpicky for this to be a big issue, but I’m not confident here. WWOTF might just have the effect of bringing certain issues closer to the edges of the Overton window. I find it plausible that the most effective way to make AI risk one of these issues is in the way WWOTF does it: get mainstream public figures and magazines talking about it in a very positive way. I could see how this might’ve been far harder with a book that allows people to brush it off as tech-bro BS more easily.
I think this is a fair point, but even if it’s right, I’m worried about trading off some community epistemic health to appear more palatable to this crowd. I think it’s very hard to consistently present your views publicly in a fairly different way from how they’re presented in internal conversations, and doing so hinders the intellectual progress of the movement. I think we need to be going in the other direction; Rob Bensinger has a Twitter thread on how we need to be much more open and less scared of saying weird things in public, to make faster progress.
On there being intellectual dishonesty: I worry a bit about this, but maybe Will is just providing his perspective and that’s fine. We can still have others in the longtermist community disagree on various estimates. Will, for one, has explicitly tried not to be seen as the leader of a movement of people who just follow his ideas. I’d be surprised if differences within the community became widely seen as intellectual dishonesty from the outside (though of course isolated claims like these have been made already).
Sorry if I wasn’t clear here: I’m most worried about Will not being fully upfront about the implications of his own views.
On alternative uses of time: Those three projects seem great and might be better EV per effort spent, but that’s consistent with great writers and speakers like Will having a comparative advantage in writing WWOTF.
Seems plausible, though I’m concerned about community epistemic health from the book and the corresponding big media push. If a lot of EAs get interested via WWOTF they may come in with a very different mindset about prioritization, quantification, etc.
The mechanism I have in mind is a bit nebulous. It’s in the vein of my response to (2a), i.e., creating intellectual precedent, making odd ideas seem more normal, etc., to create an environment (e.g., in politics) more receptive to proposals and collaboration. This doesn’t have to come through widespread understanding of the topics. One (unresearched) analogue might be antibiotic resistance. People in general, including myself, know next to nothing about it, but this weird concept has become respectable enough that when a policymaker Googles it, they know it’s not just some kooky fear that nobody outside strangely named research centres worries about or respectfully engages with.
Seems plausible to me, though I’d strongly prefer if we could do it in a way where we’re also very transparent about our priorities.
(Also, sorry for only bringing up the community epistemic health thing now. Ideally I would have brought it up earlier in this thread and discussed it more in the post, but I’ve just been fleshing out my thoughts on it yesterday and today.)
Nodding profusely while reading; thanks for the rant.
I’m unsure if there’s much disagreement left to unpack here, so I’ll just note this:
If Will was in fact not being fully honest about the implications of his own views, then I pretty strongly doubt this could be worth any potential benefit. (I also doubt there’d be much upside anyway, given what’s already in the book.)
If the claim is purely about framing, I can see very plausible stories for costs regarding people entering the EA community, but I can also see stories for the benefits I mentioned before. I find it non-obvious that a lack of prioritisation/quantification in WWOTF leads to a notably lower-quality EA community, as misconceptions may be largely corrected when people try to engage with the existing community. Though I could very easily change my mind on this; e.g., it would worry me to see lots of new members with similar misconceptions enter at the same time. The magnitude of the pros and cons of the framing seems like an interestingly tough empirical question.
Roughly agree with both of these bullet points! I want to be very clear that I have no reason to believe Will wasn’t being honest; on the contrary, I believe he very likely was. My concerns are about framing. And I agree the balance of costs and benefits regarding framing isn’t super obvious, but I am pretty concerned about the possible costs.