My guess is:
AI safety people would argue that if the world is destroyed, this improved happiness doesn't buy us much
Animal welfare people would argue that there is a lot more low-hanging fruit in improving animals' lives, so focusing on humans isn't the best we can do
Global health/well-being people tend towards less speculative/less "weird" interventions (?)
I still think there are probably a lot of people who could get excited about the topic, and it might be the right time to start pitching it to EAs.
(Also, side note: maybe you're already aware of it, but Sasha Chapin is basically researching enlightenment: https://sashachapin.substack.com/p/hey-why-arent-we-doing-more-research )