I was thinking of the Global Priorities Institute as the clearest example of an effort to normalize longtermism and global priorities research as an academic discipline. AI Safety is also becoming more mainstream over time, with some of that work happening as academic research in the field.
EA tackles problems that are comparatively neglected. Some of its work is still fairly high-status (evidence-based development is the main example that comes to mind). So perhaps that kind of risk is almost unavoidable and can only be mitigated for the next generation (and by then, the problems might be less neglected).