Hi EarlyVelcro,
Howie from 80k here.
As Ben said in his comment, the key ideas page, which is the most current summary of 80k’s views, doesn’t recommend that “EA should focus on AI alone”. We don’t think the EA community’s focus should be anything close to that narrow.
That said, I do see how the page might give the impression that AI dominates 80k’s recommendations, since most of the other paths/problems it discusses are ‘meta’ or ‘capacity building’ paths. The page mentions that “we’d be excited for people to explore [our list of problems we haven’t yet investigated] as well as other areas that could foreseeably have a positive effect on the long-term future”, but it doesn’t say anything about what those problems are (other than a link to our problem profiles page, which has a list).
I think it makes sense that people end up focusing on the areas we mention directly, and the page could do a better job of communicating that our priorities are more diverse.
The good news is that we’re currently putting together a more thorough list of areas that we think might be very promising but aren’t among our priority paths/problems.[1] Unfortunately, it didn’t quite get done in time to add it to this version of key ideas.
More generally, I think 80k’s content was particularly heavy on AI over the last year and, while AI will likely remain our top priority, I expect it will make up a smaller portion of our content over the next few years.
[1] Many of these will be areas we haven’t yet investigated or areas that are too niche to highlight among our priority paths.
Thank you for the thoughtful response, Howie. :)
Indeed. When Todd replied earlier that only 2 of the 9 paths were directly related to AI safety, I have to say that framing felt slightly disingenuous to me, even though I’m sure he did not mean it that way. Many of the other paths could be interpreted as indirectly helping AI safety. (Other than that, I appreciated Todd’s comment.)
I’m looking forward to this list of other potentially promising areas. Should be quite interesting.