Griffin Young

Karma: 12

I’m a coterm in Computer Science at Stanford (a graduate degree); I did my undergrad here as well, in Symbolic Systems.

I recently started the 80,000 Hours career planning course.

I’m most compelled by suffering-based ethics, though I still find negative utilitarianism unsatisfactory in a number of edge cases. This makes me less worried about x-risks and, by extension, less longtermist than seems to be the norm.

My shortlist of cause areas is the following:

  1. Global priorities research (depends on uncertainties 2 and 4 below)

  2. Factory farming (depends on uncertainty 3 below)

  3. Mental health

  4. Painful medical conditions

  5. Great power conflict

  6. Biorisk

  7. Climate change

  8. AI risk (pretty confident it will happen, and decently confident it will happen somewhat soon; depends on uncertainty 1 below)

though I remain uncertain, especially about the following things:

  1. Potential badness of AI

  2. Scale of untapped cause areas / when we will reach a “saturation point” in finding the best areas

  3. Relative amount of animal suffering

  4. Trajectory of future EA funding

Griffin Young’s Quick takes

Griffin Young · 16 Oct 2022 20:14 UTC · 1 point · 1 comment · 1 min read

What are the risks of an oracle AI?

Griffin Young · 5 Oct 2022 6:18 UTC · 6 points · 2 comments · 1 min read