I don’t think that logic works. In the worlds where AI safety fails, humans go extinct, so you’re not saving lives for very long, and the value of short-term EA investments is correspondingly lower. You’re choosing between “focusing on good outcomes which won’t happen,” as you said, and focusing on good outcomes which end almost immediately anyway. (To illustrate this properly I’d need to work an example, do the math, and then argue about the conditionals and the exact values I’m using; a rough sketch with made-up numbers is below.)
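For concreteness, here is the kind of back-of-envelope comparison I have in mind. Every number in it (the failure probability, the years of future, the cost-effectiveness figures, the per-dollar shift in risk) is a placeholder I invented for illustration, not a claim about the real values, and those real values are exactly the part we'd end up arguing about.

```python
# Hypothetical back-of-envelope comparison; all numbers are made-up placeholders.
# Two scenarios: AI safety succeeds (long future) or fails (near-term extinction).

p_safety_fails = 0.5        # assumed probability that AI safety fails
years_if_success = 1000     # assumed years a saved life "counts for" if things go well
years_if_failure = 10       # assumed years left before extinction if safety fails

# Short-term intervention: lives saved per dollar (illustrative global-health-style figure).
lives_per_dollar_shortterm = 1 / 5000

# Expected life-years per dollar for the short-term intervention.
# Note how the failure branch truncates its value: saved lives end early in those worlds.
ev_shortterm = lives_per_dollar_shortterm * (
    (1 - p_safety_fails) * years_if_success
    + p_safety_fails * years_if_failure
)

# Safety investment, modeled crudely as nudging the failure probability down per dollar.
delta_p_per_dollar = 1e-12  # assumed reduction in P(failure) per dollar spent
population = 8e9
ev_safety = delta_p_per_dollar * population * (years_if_success - years_if_failure)

print(f"short-term EV (life-years per $): {ev_shortterm:.3f}")
print(f"safety EV (life-years per $):     {ev_safety:.3f}")
```

The structural point, not the outputs, is what matters: the short-term intervention's expected value gets discounted by the same failure scenarios that motivate safety work, so you can't evaluate it as if those worlds didn't exist.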
Great point, thanks! You changed my view.