Our AI focus area is part of our longtermism-motivated portfolio of grants,[2] and we focus on AI alignment and AI governance grantmaking that seems especially helpful from a longtermist perspective. On the governance side, I sometimes refer to this longtermism-motivated subset of work as “transformative AI governance” for relative concreteness, but a more precise concept for this subset of work is “longtermist AI governance.”[3]
What work is “from a longtermist perspective” doing here? (This phrase is used 8 times in the article.) Is it the following: longtermists have a pure time preference of 0, while neartermists have a positive one, so longtermists care much more about extinction than neartermists do (because they care more about future generations)? Hence longtermist AI governance means focusing on extinction-level AI risks, while neartermist AI governance is about non-extinction AI risks (e.g. racial discrimination in predicting recidivism).
If so, I think this is misleading. Neartermists also care a lot about extinction, because everyone dying is really bad.
Is there another interpretation that I’m missing? For example, would neartermists and longtermists have different focuses within extinction-level AI risks?
One possible response is about long vs short AI timelines, but that seems orthogonal to longtermism/neartermism.
Hi Michael,
I think this only makes sense for high extinction risk. If extinction risk is less than 1 % per century, or equivalently about 10^-4 per year, it would allow for a life expectancy (from extinction risk alone) longer than 10 k years. This is nothing on a cosmological timescale, but much longer than the current human life expectancy. If a generation lasts 30 years, it would take 333 (= 10^4/30) generations to reach 10 k years. So extinction risk has a pretty small impact on one’s life expectancy, and on that of one’s children.
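A minimal sketch of that arithmetic, assuming a constant annual extinction probability; the 10^-4 figure is just the illustrative threshold above, not an estimate of actual risk:

```python
# Sketch of the arithmetic above, assuming a constant annual
# extinction probability; the value is the illustrative threshold
# from the comment, not an estimate of actual risk.
p = 1e-4  # assumed annual extinction probability

# Cumulative risk over a century: 1 - (1 - p)^100, roughly 1 %.
per_century = 1 - (1 - p) ** 100

# Under a constant hazard, expected time to extinction is about 1/p years.
expected_years = 1 / p  # 10,000 years

# Number of 30-year generations spanning that period.
generations = expected_years / 30  # about 333

print(f"risk per century: {per_century:.2%}")
print(f"expected years to extinction: {expected_years:,.0f}")
print(f"generations: {generations:.0f}")
```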