Tom—you raise some fascinating issues, and your Venn diagrams, however impressionistic they might be, are useful visualizations.
I do hope that AI safety remains an important part of EA—not least because I think there is some important, under-explored overlap between AI safety and the other key cause areas: global health & development, and animal welfare.
For example, I’m working on an essay about the animal welfare implications of AGI. Ideally, advanced AI wouldn’t be aligned just with human interests, but also with the interests of the other 70,000 species of sentient vertebrates (and the sentient invertebrates). Yet very little has been written about this so far. In short, AI safety has a serious anthropocentric bias that needs challenging, and the EAs who have worked on animal welfare could have a lot to say about AI safety issues in relation to other species.
Likewise, the ‘e/acc’ cult (which dismisses AI safety concerns and advocates developing AGI as soon as possible) often argues that there’s a moral imperative to develop AGI in order to promote global health and development (e.g. ‘solving longevity’ and ‘promoting economic growth’). EA people who have worked on global health and development could contribute a lot to the debate over whether AGI is strictly necessary for promoting longevity and prosperity.
So, the Venn diagrams need to overlap even more!