It’s not so much a pivot as a codification of what has long been true.
“EA is (experientially) about AI” has been sorta true for a long time. Money and resources do go to other causes, but the most influential and engaged people have always been focused on AI, and EA institutions have long systematically emphasized it. For example, many editions of the EA Handbook spend a huge fraction of their introductions to other cause areas effectively arguing why you should work on AI instead. CEA staffers very heavily favor AI. This all pushes things very hard in one direction.
I strongly prefer the blatant honesty of the 80k announcement. Much easier to think about, and much easier for young people to form informed opinions.
For example, many editions of the EA Handbook spend a huge fraction of their introductions to other cause areas effectively arguing why you should work on AI instead. CEA staffers very heavily favor AI.
Just wanted to quickly add that I don’t think this is quite accurate.
My experience facilitating the Intro Fellowship using the previous version of the EA Handbook was that AI basically didn’t come up until the week about longtermism, and glancing through the current version that doesn’t seem to have changed. Though I welcome people to read the current version of the EA Handbook and come to their own conclusions.
The most recent relevant data on CEA staff cause prioritization is this post about where people are donating, and I think animal welfare is more common in that list than AI safety (though the post only includes the subset of staff who were interested in participating).
I was also confused by that paragraph, as someone who read the handbook in ~2022. I just randomly came across this and this, and apparently this was an issue 7 years ago. I think it’s likely that several people who have been around longer than us haven’t noticed that the handbooks and CEA staff changed a lot.
I think you actually shifted me slightly toward the ‘announcement was handled well’ side (even if not fully), given the point about blatant honesty (since their work was mainly AI anyway for the last year or so) plus the very clear descriptions of the change.
I am a bit wary of a resource as prominent as 80k endorsing a sudden cause shift without first making sure the gap it leaves is filled. I know they don’t owe that to anyone, especially during such a tumultuous time for AI risk, and there are other orgs (Probably Good, etc.), but to me 80k seemed like a very good intro to ‘EA Cause Areas’ with no current substitute I can think of. The problem profiles no longer being featured/promoted, for example, is fine for people already aware of their existence. But when I first navigated to 80k, I saw the big list of problem profiles, and that’s how I actually started getting into them, and what led to my shift from clinical medicine to a career in biosec/pandemics.