How would you estimate the value of delaying AGI by 1 day, in marginal donations to GiveWell?
This post was published for draft amnesty day, so it’s less polished than the typical EA forum post.
Epistemic status: in the spirit of Cunningham’s Law [1].
GiveWell estimates that $300 million in marginal funding would result in ~30,000 additional lives saved; that's very roughly $0.50 per day of life.
If you believe that there’s a higher than 10% chance of extinction via AGI[2], that means that delaying AGI by one day gives you 10% · 10¹⁰[3] life-days, equivalent to ~$0.5B in GiveWell marginal dollars (as a rough order of magnitude).
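The back-of-envelope arithmetic above can be sketched as follows. All numbers are the post's own rough assumptions (a ~10% extinction probability, a ~10¹⁰ population, GiveWell's ~$0.50 per life-day), not precise figures:

```python
# Fermi estimate of the value of delaying AGI by one day,
# expressed in GiveWell marginal dollars. Order-of-magnitude only.

COST_PER_LIFE_DAY = 300e6 / 30_000 / 20_000  # ~$10k/life over ~20k days ≈ $0.50/day
P_DOOM = 0.10        # assumed probability of extinction via AGI (see footnote [2])
POPULATION = 1e10    # rough human population, order of magnitude (see footnote [3])

# Expected life-days gained by a one-day delay:
expected_life_days = P_DOOM * POPULATION           # 1e9 life-days
value_usd = expected_life_days * COST_PER_LIFE_DAY # ~5e8 dollars

print(f"~${value_usd / 1e9:.1f}B in GiveWell marginal dollars")
```

Per footnote [2], the estimate scales linearly in the assumed extinction probability: ~100% gives ~$5B, ~1% gives ~$50M.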
Potential disagreements and uncertainties:
Delaying AGI is, in expectation, going to make lives in the pre-AGI world worse.
To me, this seems negligible compared to the risk of dying, unless you put the 0-point of a "life worth living" very high (e.g. you think ~half the current global population would be better off dead). If the current average value of a life is X, then for an AGI transformation to take it to 2X, the AGI would need to be extremely powerful and extremely aligned.
Under longtermism, the value of current lives saved is negligible compared to the value of future lives that are more likely to exist. So the only thing that matters is whether the particular method by which you delay AGI reduces x-risks.[4]
I would guess that, probably, delaying AGI by default reduces the probability of x-risks by giving more time for a “short reflection”, and for the field of AI Alignment to develop.
Delaying AGI is not tractable, e.g. regulation doesn’t work.
It seems to me that lots of people believe excessive regulation raises prices and slows down industries and processes. I don't understand why that wouldn't apply to AI in particular, given that the same arguments clearly apply to nuclear power, healthcare, and other safety-sensitive, highly technical areas. And there are areas where differential technological development has happened in practice (e.g. human cloning and embryo DNA editing).
There’s significantly less than a 1% risk from AGI for lives that morally matter.
It's possible (it's probably my main uncertainty), but I think it would require both narrowly person-affecting views and a lot of certainty about AI timelines or consequences.
Proposals:
Signal-boost "Instead of technical research, more people should focus on buying time" and "Ways to buy time" from Akash.
Ride the current wave of AI skepticism from people worried about AI being racist, or about being replaced and left unemployed, and lobby for significantly more government involvement to slow down progress (as the FDA does in medicine).
In general, focus less on technical / theorem-proving alignment work, and less on hoping that AI capability companies won't be tempted to gamble billions of lives on a chance of becoming trillionaires once some EA engineers start working there.
Curious on your thoughts!
[1] "The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." (Wikipedia)

[2] If you believe it's ~100%, just multiply by 10; if you believe it's ~1%, just divide by 10.

[3] The human population is roughly 10^10 humans.

[4] Extinction, unrecoverable collapse/stagnation, or flawed realization.