It’s not EA because it’s for anyone who wants to pause AI, for any reason, whether or not they share all the EA principles. It’s just about pausing AI, and it’s a coalition.
I personally still identify with EA principles, and I came to my work at PauseAI through them, but I increasingly dislike the community and find it a drag on my work. That, combined with PauseAI being open to all comers, makes me want distance from the community myself and a healthy separation between PauseAI and EA. More and more, I think the cost of remaining engaged with EA is too high, given how demanding EAs are and how little they contribute to what I’m doing.
My 2 cents, Holly, is that while you’re pointing at something acute to PauseAI, this is affecting AI Safety in general.
The majority of people entering the Safety community space in Australia & New Zealand now are NOT coming from EA.
Potentially ~75/25!
And honestly, I think this is a good thing.
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days, it just seems like EA doesn’t want to consider this intervention class, regardless of how impactful it would be, because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
That sucks :(
But hammers do like nails :/
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and more politically influential) national constituencies than AI x-risk can readily achieve. One could imagine “too much” real or perceived EA influence being counterproductive, insofar as other motivations for pausing or slowing down AI could take on the odor of astroturf.
I say all that as someone who thinks there are compelling reasons, entirely independent of AI safety, to pause, or at least slow down, AI.
What are those non-AI safety reasons to pause or slow down?
All the near-term or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, i.e., people just don’t want their lives radically transformed, even if the people doing the transforming think that’s irrational. Fear and distrust of AI corporations.
These would all be considered the wrong reasons in EA, but PauseAI welcomes all.
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we’ve been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and associated status/dignity/pride (and that it isn’t coming back) could be fairly dangerous voters and might even be in the majority. I also have concerns about fair distribution of gains from AI, having a few private companies potentially corner the market on one of the world’s most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society has to largely have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well—potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.