Thanks for doing these analyses. I find them very interesting.
Two relatively minor points, which I'm making here only because they touch on something I've seen a number of times, and I worry it reflects a more fundamental misunderstanding within the EA community:
1. I don't think AI is a "cause area."
2. I don't think there will be a non-AI far future.
Re the first point, people use “cause area” differently, but I don’t think AI—in its entirety—fits any of the usages. The alignment/control problem does: it’s a problem we can make progress on, like climate change or pandemic risk. But that’s not all of what EAs are doing (or should be doing) with respect to AI.
This relates to the second point: I think AI will impact nearly every aspect of the long-run future. Accordingly, anyone who cares about positively impacting the long-run future should, to some extent, care about AI.
So although there are one or two distinct global risks relating to AI, my preferred framing of AI generally is as an unusually powerful and tractable lever on the shape of the long-term future. I actually think there’s a LOT of low-hanging fruit (or near-surface root vegetables) involving AI and the long-term future, and I’d love to see more EAs foraging those carrots.