I agree with this intuition. I suspect the question that needs to be asked is “14% chance of what?”
The chance that the full stack of individual propositions evaluates as true in the relevant direction (work on AI vs work on something else).
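To make “the full stack of individual propositions” concrete, here is a minimal sketch: treat the claim as a conjunction and multiply the (conditional) probability of each link in the chain. The propositions and numbers below are invented purely for illustration, not anyone’s actual estimates; they just show how a chain of individually plausible claims can compound to something like 14%.

```python
# A made-up "full stack" of propositions; each value is a hypothetical
# conditional probability given the propositions above it.
propositions = {
    "transformative AI arrives in the relevant timeframe": 0.7,
    "alignment is hard enough that deliberate safety work matters": 0.5,
    "safety work done now meaningfully affects the outcome": 0.5,
    "working on AI safety beats working on something else": 0.8,
}

# The conjunction: every proposition evaluates as true in the relevant direction.
p_full_stack = 1.0
for claim, p in propositions.items():
    p_full_stack *= p

print(f"P(full stack true) = {p_full_stack:.2f}")  # 0.7 * 0.5 * 0.5 * 0.8 = 0.14
```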
Suppose you’re in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?
There are a few different operationalizations of that. For example, you could ask whether your work obviously and directly saved the world, or you could ask whether, if you could go back and do it over again with what you know now, you would still work in AI safety.
The percentage would be different depending on what you mean. I suspect Gordon and Buck have different operationalizations in mind, and that may be why Buck’s number seems crazy high to Gordon.
You don’t, but that’s a different proposition with a different set of cruxes, since it is evaluated ex post rather than ex ante.
I’m saying we need to specify more than, “The chance that the full stack of individual propositions evaluates as true in the relevant direction.” I’m not sure if we’re disagreeing, or … ?