> I wrote a downvoted post recently about how we should be warning AI Safety talent about going into labs for personal branding reasons (I think there are other reasons not to join labs, but this is worth considering).
>
> I think people are still underweighting how much the public are going to hate labs in 1-3 years. I was telling organizers with PauseAI like Holly Elmore they should be emphasizing this more several months ago.
>
> I think from an advocacy standpoint it is worth testing that message, but based on how it is being received on the EAF, it might just bounce off people.
>
> My instinct as to why people don't find it a compelling argument:
>
> - They don't have short timelines like me, and therefore chuck it out completely
> - Are struggling to imagine a hostile public response to 15% unemployment rates
> - Copium
At least at the time, Holly Elmore seemed to consider it at least somewhat compelling. I mentioned this was an argument I provided framed in the context of movements like PauseAI—a more politicized, and less politically averse coalition movement, that includes at least one arm of AI safety as one of its constituent communities/movements, distinct from EA.
>They don’t have short timelines like me, and therefore chuck it out completely
Among the most involved participants in PauseAI, estimates of short timelines are presumably about as common as they are among effective altruists.
>Are struggling to imagine a hostile public response to 15% unemployment rates
Those in PauseAI and similar movements have no trouble imagining it.
>Copium
While I sympathize with and appreciate why there would be high rates of huffing copium among effective altruists (and adjacent communities, such as rationalists), others, who have been picking up the slack effective altruists have dropped over the last couple of years, are reacting differently. At least in terms of safeguarding humanity from both the near-term and long-term vicissitudes of advancing AI, humanity deserves better than EA has been able to deliver. Many have given up hope that EA will ever rebound to the point of living up to its promise of at least trying to safeguard humanity. That includes both many former effective altruists and some who still are effective altruists. I consider there to still be that kind of 'hope' on a technical level, though on a gut level I don't have faith in EA. I definitely don't blame those who have any faith left in EA, let alone those who merely see hope in it.
Much of the difference here is in the mindset towards 'people', and how they're modeled, between those still firmly planted in EA but with a fatalistic mindset, and those who still care about AI safety but have decided to move on from EA. (I might be somewhere in between, though my perspective as a single individual amid general trends is barely relevant.) The last couple of years have proven that effective altruists direly underestimated the public, and that the latter group didn't. While many here on the EA Forum may not agree that much, or even most, of what movements like PauseAI are doing is as effective as it could or should be, those movements at least haven't succumbed to a plague of doomerism beyond what can seemingly even be justified.
To quote former effective altruist Kerry Vaughan, in a message addressed to those who still are effective altruists: "now is not the time for moral cowardice." There are some effective altruists who heeded that sort of call when it was being made. There are others who weren't effective altruists who heeded it too, when they saw most effective altruists had lost the will even to try picking up the ball again after dropping it a couple of times. New alliances between emotionally determined effective altruists and rationalists, and the thousands of other people the EA community always underestimated, may from now on be carrying the team that is the global project of AI risk reduction, from narrow/near-term AI to AGI/ASI.
EA can still change, though it either has to go beyond self-reflection and just change already, or get used to no longer being team captain of AI Safety.