I'm not sure whether to count AI takeover as extinction or just as a worse future. Maybe I should define extinction as literal extinction only, and leave scenarios with very small populations out of the definition. Any thoughts on the best way to define it here? I agree it needs some refining.
How about "On the margin, work on reducing the chance of our extinction is the work that most increases the value of the future"?
As I see it, the main issue with the framing in this post is that the work to reduce the chances of extinction might be the exact same work as the work to increase EV conditional on survival. In particular, preventing AI takeover might be the most valuable work for both. In that case, the question would be asking us to compare the overall marginal value of those takeover-prevention actions with the overall marginal value of those same actions.
(At first glance it's an interesting coincidence for the same actions to help the most with both, but on reflection it's not that unusual for these to align. Being in a serious car crash is really bad, both because you might die and because it could make your life much worse if you survive. Similarly with serious illness. Or, for nations/cities/tribes throughout history, losing a war where you're conquered could lead to the conquerors killing you or doing other bad things to you. Avoiding something bad that might be fatal can be very valuable both for avoiding death and for the value conditional on survival.)
That's a really interesting solution. I'm a bit swamped today, but I'll seriously consider this tomorrow; it might be a nice way to clarify things without changing the meaning of the statement for people who have already written posts. Cheers!
I think I'll stick with the current statement, partly because it has now been announced for a while, so people may be relying on its specific implications for their essays, but also because this new formulation doesn't seem (to me) to avoid the problem you raise: it isn't clear what your vote would be if you think the same type of work is recommended for both. Perhaps the solution to that issue is in footnote 3 on the current banner: if you think that the value from working on AI takeover comes mostly from avoiding extinction, then you should vote agree. If you think it comes from increasing the value of the future by another means (such as more democratic control of the future by humans), then you should vote disagree.