If you do have a determinate credence above 50% that AI safety work is good, how do you arrive at that conclusion?
It happens that I do not. But I would if I believed there was evidence robust to unknown unknowns in favor of assuming “AI safety work” is good, factoring in all the possible consequences from now until the end of time. This would require robust reasons to believe both that current AI safety work actually increases rather than decreases safety overall, AND that increased safety is actually good all things considered (e.g., that human disempowerment is actually bad overall). (See Guillaume’s comment on the distinction.) I won’t elaborate on what would count as “evidence robust to unknown unknowns” in such a context, but this is a topic for a future post/paper, hopefully.
Next, I want to push back on your claim that if ii) is correct, everything collapses. I agree that it would lead to the conclusion that we are probably entirely clueless about longtermist causes, and probably about the vast majority of causes in the world. However, it would make me lean toward near-term areas with much shorter causal chains, where there is a smaller margin of error: for example, caring for your family or for local animals, which carries a low risk of backfiring.
Sorry, I didn’t mean to argue against that. I just meant that work you are clueless about (e.g., maybe AI safety work in your case?) shouldn’t be given any weight in your diversified portfolio. I didn’t mean to make any claim about what I personally think we should or shouldn’t be clueless about. The “everything falls apart” phrasing was unclear and probably unwarranted.