(Hey Vasco!) How resilient is your relatively high credence that AI timelines are long?
And would you agree that the less resilient it is, the more you should favor interventions that are also good under short AI timelines? (E.g., the work of GiveWell’s top charities over making people consume fewer unhealthy products, since the latter pays off far later, as you and Michael discuss in this thread.)
Hi, Jim!
Shorter timelines for transformative AI (TAI) would make me put more weight on interventions whose effects happen earlier. There will be more change soon if TAI happens earlier, and I believe effects decay faster when there is more change.
A best guess for the probability of an event has implications for the resilience of that best guess. If my best guess is that something is 50 % likely to happen, the probability that I will ever update to thinking it is at least 90 % likely should be at most 55.6 % (= 0.50/0.90).
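To make the arithmetic explicit, here is a minimal derivation, assuming only that my current credence equals the expected value of my future credence (conservation of expected evidence), and writing p for the current credence, q for the higher credence, and u for the probability of ever updating that far:

```latex
% p: current credence, q: higher credence I might later reach,
% u: probability of ever updating to a credence of at least q.
% Conservation of expected evidence: p = E[future credence] >= u*q,
% because the non-updating branch contributes a non-negative amount. Hence
\[
  u \le \frac{p}{q} = \frac{0.50}{0.90} \approx 55.6\,\%.
\]
```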
I recommend research informing how to increase the welfare of soil animals over pursuing whatever land-use-change interventions naively seem to achieve that most cost-effectively. I have very little idea about whether increasing agricultural land, such as by saving human lives, increases or decreases welfare. I am very uncertain about what increases or decreases soil-animal-years, and about whether soil animals have positive or negative lives. So I do not know whether saving human lives sooner increases welfare more or less than saving human lives later. Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier.
Nice, thanks! (I gave examples of charities/work where you’re kinda agnostic because of a crux other than AI timelines, but this was just to illustrate.)
"Assuming that saving human lives increases welfare, I agree doing it earlier increases welfare more if TAI happens earlier."

I had no doubts you thought this! :) I’m just curious as to whether you see reasons for someone to optimize assuming long AI timelines, despite low resilience in their high credence in long AI timelines.
I agree greater uncertainty, and therefore less resilience, about the time until TAI is a reason for prioritising interventions whose effects are expected to materialise earlier. At a high level, I would model the impact of TAI as increasing the discount rate. For a 10th, 50th, and 90th percentile time until TAI of 100, 300, and 1 k years, I would not care about the uncertainty because I expect effects after 300 years to be negligible anyway, even without accounting for the additional discounting caused by TAI. However, for a 10th, 50th, and 90th percentile time until TAI of 3, 10, and 30 years, I would care a lot about the uncertainty because I expect effects after 10 years to be significant for many interventions.
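To illustrate why, here is a purely hypothetical numerical sketch (the 5 %/year exponential decay of effects and the cut-off at TAI are made-up assumptions for illustration, not a worked-out model):

```python
import math

# Purely illustrative assumptions: an intervention's effects decay
# exponentially at 5 %/year, and effects after TAI are treated as negligible,
# so only the impact realised before TAI counts.
DECAY_RATE = 0.05  # assumed decay rate of effects (per year)

def share_of_impact_before(t_tai: float, decay_rate: float = DECAY_RATE) -> float:
    """Fraction of total impact realised before TAI arrives at year t_tai,
    given exponential decay of effects at decay_rate per year."""
    return 1 - math.exp(-decay_rate * t_tai)

# 10th/50th/90th percentile times until TAI for the two scenarios above.
scenarios = {"long timelines": [100, 300, 1000], "short timelines": [3, 10, 30]}
for label, percentiles in scenarios.items():
    shares = [f"{share_of_impact_before(t):.0%}" for t in percentiles]
    print(f"{label}: {shares}")
# long timelines: roughly 99 %, 100 %, and 100 % of the impact lands before
# TAI at the 3 percentiles, so the spread barely changes the answer.
# short timelines: roughly 14 %, 39 %, and 78 %, so where TAI falls in the
# distribution matters a lot.
```

The exact numbers do not matter; the point is just that, under short timelines, the realised share of an intervention's impact is very sensitive to where TAI lands within the distribution, whereas under long timelines it is not.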