Regarding the 14% estimate, I’m actually surprised it’s that high. My intuition runs the other way: there is so much uncertainty, especially about whether any particular thing someone does will have impact, that I’d put the likelihood that any given person’s AI safety work produces positive outcomes at under 1%. The only reason it still seems worth working on to me is that when you multiply that probability by the size of the payoff, it comes out worthwhile anyway.
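For concreteness, a minimal expected-value sketch of that multiplication, with entirely made-up numbers (the 1% chance and the $10^{9}$-lives payoff are illustrative assumptions, not figures from this thread):

$$\mathbb{E}[\text{value}] = p \cdot V \approx 0.01 \times 10^{9}\ \text{lives} = 10^{7}\ \text{lives in expectation},$$

which can still dominate alternatives whose probability of impact is higher but whose payoff is far smaller.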
I agree with this intuition. I suspect the question that needs to be asked is “14% chance of what?”
The chance that the full stack of individual propositions evaluates as true in the relevant direction (i.e., favoring working on AI safety over working on something else).
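As a toy illustration of how a stack of propositions can land near 14% (the individual probabilities below are made up, not anyone’s actual estimates): five roughly independent propositions, each at about a two-thirds chance, conjoin to

$$0.68^{5} \approx 0.145,$$

so a headline figure like 14% can be consistent with being fairly confident in each step individually.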
Suppose you’re in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?
There are a few different operationalizations of that. For example, you could ask whether your work obviously and directly saved the world, or you could ask whether, if you could go back and do it over again knowing what you know now, you would still work in AI safety.
The percentage would be different depending on which you mean. I suspect Gordon and Buck have different operationalizations in mind, which would explain why Buck’s number seems crazy high to Gordon.
You don’t, but that’s a different proposition with a different set of cruxes, since it’s evaluated ex post rather than ex ante.
I’m saying we need to specify more than, “The chance that the full stack of individual propositions evaluates as true in the relevant direction.” I’m not sure if we’re disagreeing, or … ?