I’m a coterm (coterminal master’s student) in Computer Science at Stanford; I did my undergrad here as well, in Symbolic Systems.
I recently started the 80,000 Hours career planning course.
I’m most compelled by suffering-focused ethics, though I still find negative utilitarianism unsatisfying in a number of edge cases. This makes me less worried about x-risks, and by extension less longtermist than seems to be the norm.
My shortlist of cause areas is the following:
Global priorities research (depends on 2 and 4 below)
Factory farming (depends on 3 below)
Mental health
Painful medical conditions
Great power conflict
Biorisk
Climate change
AI risk (pretty confident advanced AI will happen, and decently confident it will happen somewhat soon; depends on 1 below)
I remain uncertain, though, especially about the following things:
1. Potential badness of AI
2. Scale of untapped cause areas / when we will reach a “saturation point” in finding the best areas
3. Relative amount of animal suffering
4. Trajectory of future EA funding
Have you done anything with the dataset of 990-PF filings? I’d be interested in helping you with the data analysis if you want.