As for the potential for the kind of bait-and-switch you're concerned about, regarding how much EA prioritized longtermism or AI x-risk in years past: that was indeed a major problem for years.
Some might complain that leading organizations in EA have over-rated how much longtermist causes should be prioritized relative to near-termist ones, though that's not the same problem as those organizations majorly misrepresenting their own priorities (or those of the movement at large). (For what it's worth, in my opinion, the past problems of misleading marketing and easily avoidable communication errors have mostly been resolved.)
Given how much of a bait-and-switch there was around AI x-risk in the past, there is significant reason to suspect it will recur. However preferable it might be for people to be introduced to longtermism through The Precipice instead of WWOTF, the latter is a bestseller that will be read by more people. The Centre for Effective Altruism could start giving out far more free copies of The Precipice tomorrow and encourage everyone to read it before WWOTF. Even then, it could take years before more people were introduced to longtermism through The Precipice than through a bestseller like WWOTF.
Assuming the problem is major enough that Will needs to immediately change his mind or set the record straight, a better solution would be for Will or another scholar to publish a paper rectifying what was published in WWOTF, or even a post on the EA Forum to serve as a reference. Another fix could be for Will to rewrite the relevant sections of WWOTF for a second edition (I don't know much about when second editions of popular books on technical subject matter are published, though I presume it can happen in as little as a couple of years, or less, if the first edition sells out).