10^-35 seconds is such a short period of time that basically nothing can happen during it; even a laser couldn't cut through your body that quickly.
Right, even in a vacuum, light takes 10^-9 s (= 0.3/(3*10^8)) to travel 30 cm.
To do the calculation explicitly, let's assume a handgun bullet hits someone at ~250 m/s and decelerates somewhat, taking around 10^-3 seconds to pass through them. Assuming they were otherwise a normal person who didn't often get shot at, intervening to protect them for ~10^-3 seconds would give them about 50 years ~= 10^9 seconds of extra life, i.e. 12 orders of magnitude of leverage.
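For concreteness, this order-of-magnitude arithmetic can be sketched in a few lines of Python. The bullet speed, transit time, and remaining-life figures are the rough estimates from the comment above, not precise data:

```python
import math

# Rough estimates from the comment, not precise figures.
bullet_speed_m_s = 250                      # assumed handgun bullet speed
transit_time_s = 1e-3                       # time for the bullet to pass through a body
remaining_life_s = 50 * 365.25 * 24 * 3600  # ~50 years of extra life, in seconds

# Leverage: seconds of life gained per second of protection needed.
leverage = remaining_life_s / transit_time_s
print(f"remaining life ~ {remaining_life_s:.1e} s")
print(f"leverage ~ 10^{round(math.log10(leverage))}")
```

This confirms the ratio in the comment: roughly 10^9 s of extra life for 10^-3 s of intervention, i.e. about 12 orders of magnitude.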
I do not think your example is structurally analogous to mine:
My point was that decreasing the risk of death over a tiny fraction of one’s life expectancy does not extend life expectancy much.
In your example, my understanding is that the life expectancy of the person about to be killed is 10^-3 s. So, for your example to be analogous to mine, your intervention would have to decrease the risk of death over a period astronomically shorter than 10^-3 s, in which case I would be super pessimistic about extending their life expectancy.
This example seems analogous to me because I believe that transformative AI basically is a one-time bullet and if we can catch it in our teeth we only need to do so once.
The typical person who is 10^-3 s away from being killed, e.g. with a bullet 25 cm (= 250*10^-3) from their head travelling at 250 m/s, presumably has a very short life expectancy. If one thinks humanity is in a similar situation with respect to AI, then the expected value of the future is also arguably not astronomical, and therefore decreasing the near-term risk of human extinction need not be astronomically cost-effective. Pushing the analogy to an extreme, decreasing deaths from shootings is not the most effective way to extend human life expectancy.
Thanks for engaging, Larks!