Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?
There are a few different operationalizations of that. For example, you could ask whether your work obviously directly saved the world, or you could ask whether, if you could go back and do it over again with what you know now, you would still work in AI safety.
The percentage would be different depending on what you mean. I suspect Gordon and Buck have different operationalizations in mind, and that's why Buck's number seems crazy high to Gordon.
You don't, but that's a different proposition with a different set of cruxes, since it is based on ex post rather than ex ante reasoning.
I'm saying we need to specify more than, "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or …?
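(For concreteness, here is a minimal sketch of one way "the full stack of individual propositions evaluates as true" could be cashed out: break the case for working on AI safety into a chain of claims and multiply their conditional probabilities. The propositions and numbers below are invented for illustration only; they are not taken from anyone in this discussion.)

```python
# Hypothetical decomposition of "the full stack of individual propositions".
# Each probability is meant to be read as conditional on the claims above it.
propositions = {
    "AI systems become transformatively capable": 0.8,   # illustrative numbers only
    "misalignment is a serious risk by default": 0.5,
    "safety work meaningfully reduces that risk": 0.4,
}

p_full_stack = 1.0
for claim, p in propositions.items():
    p_full_stack *= p  # chance the whole stack holds is the product of conditionals

print(f"Chance the full stack evaluates true: {p_full_stack:.2f}")  # 0.16
```

Different operationalizations of "it was right to work on AI safety" change which propositions belong in the stack, which is why the resulting percentages can differ so widely.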