I’m a coterm (a graduate degree) in Computer Science at Stanford; I did my undergrad here as well, in Symbolic Systems.
I recently started the 80,000 Hours career planning course.
I’m most compelled by suffering-focused ethics, though I still find negative utilitarianism unsatisfactory in a number of edge cases. This makes me less worried about x-risks and, by extension, less longtermist than seems to be the norm.
My shortlist of cause areas is the following:
- Global priorities research (depends on 2 and 4 below)
- Factory farming (depends on 3 below)
- Mental health
- Painful medical conditions
- Great power conflict
- Biorisk
- Climate change
- AI risk (I’m pretty confident advanced AI will arrive, and decently confident it will arrive somewhat soon; depends on 1 below)
I remain uncertain, though, especially about the following:
1. Potential badness of AI
2. Scale of untapped cause areas/when we will reach a “saturation point” in finding the best areas
3. Relative amount of animal suffering
4. Trajectory of future EA funding
5. How many AI safety researchers would be enough? 80k emphasizes that only 300 people are working on this full-time, which makes the problem extremely neglected. But how many people would have to be working on it for it to no longer count as neglected?