EDIT: I made this comment assuming the comment I’m replying to is making a critique of longtermism but no longer convinced this is the correct reading 😅 here’s the response anyway:
Well, it’s not so much that longtermists ignore such suffering; it’s that anyone choosing a priority in our current broken system (so any EA, regardless of their stance on longtermism) will end up ignoring, or at least not working to alleviate, many problems.
For example, the problem of adults with cancer in the US is undoubtedly tragic, but it is well understood and reasonably well funded by the government and charitable organizations; I would argue it fails the ‘neglectedness’ part of the traditional EA importance, tractability, neglectedness framework. Another example: the problem of people trapped in North Korea would, I think, fail on tractability, given the lack of progress over the decades. I haven’t thought about those two cases particularly deeply and could be totally wrong, but this is just the traditional EA framework for prioritizing among different problems, even when those problems are heartbreaking to have to set aside.
AI Safety Academic Conference
Technical AI Safety
The idea is to fund and provide logistical/admin support for a reasonably large AI safety conference along the lines of NeurIPS. Academic conferences provide several benefits: 1) potentially increasing the prestige of an area and boosting the career capital of people whose papers are accepted; 2) networking and sharing ideas; 3) providing feedback on submitted papers and highlighting important/useful ones. This conference would be unusual in that the submitted work would share approximately the same concrete goal (avoiding risks from powerful AI). While traditional conferences might focus on scientific novelty and complicated/“cool” papers, this conference could place particular emphasis on things like reproducibility and correctness of empirical results, peer support and mentorship, non-traditional research mediums (e.g. blog posts/notebooks), and encouraging authors to have a plausible story for why their work actually reduces risks from AI.