Interesting argument. I don't know much about it, but my view is that there isn't much value in thinking in terms of conditional value. If AI safety is doomed to fail, there's little point focusing on good outcomes that won't happen when there are highly effective global health interventions available today. Arguably, those global health interventions could also help at least some parts of humanity have a positive future.
I don't think that logic works. In the worlds where AI safety fails, humans go extinct, so you're not saving lives for very long, and the value of short-term EA investments is correspondingly lower. You're choosing between "focusing on good outcomes which won't happen," as you said, and focusing on good outcomes which end almost immediately anyway. (To really pin this down I'd need to work an example, do the math, and then argue about the conditionals and the exact values I'm using; a toy version of that math is sketched below.)
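To show the shape of the argument, here's a rough sketch with entirely made-up numbers. The probabilities and value estimates (`p_no_extinction`, `delta_p`, and the value figures) are illustrative assumptions, not my actual estimates, and the real disagreement would be over what those numbers should be:

```python
# Toy model of the "conditional value" point above.
# All numbers are illustrative assumptions, not real estimates.

p_no_extinction = 0.2   # hypothetical: chance AI safety efforts succeed

# Value of a marginal global-health donation (arbitrary units):
# if humanity survives, the saved lives continue into a long future;
# if extinction follows soon, the same lives are only extended briefly.
health_value_if_survive = 1_000
health_value_if_extinct = 10

ev_health = (p_no_extinction * health_value_if_survive
             + (1 - p_no_extinction) * health_value_if_extinct)

# Value of a marginal AI-safety donation, modeled as a small hypothetical
# bump to the survival probability, applied to the value of the long-term future.
delta_p = 0.001
value_of_long_future = 1_000_000
ev_safety = delta_p * value_of_long_future

print(f"EV(global health): {ev_health:.0f}")  # 0.2*1000 + 0.8*10 = 208
print(f"EV(AI safety):     {ev_safety:.0f}")  # 0.001*1000000 = 1000
```

The point isn't the specific outputs; it's that the same low probability of safety success that makes safety work look "doomed" also drags down the expected value of the short-term intervention, because its benefits are truncated in the extinction worlds. So you can't condition on failure for one option and not the other.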
Great point, thanks. You changed my view!