This is a super interesting exercise! I do worry how much it might bias you, especially in the absence of equally rigorously evaluated alternatives.
Consider the multiple stage fallacy:
If I went through any introductory EA work, I could probably identify something like 20 claims, all of which must hold for the conclusions to have moral force. It would then feel pretty reasonable to assign each of those claims somewhere between 50% and 90% confidence.
That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%. What explains the discrepancy?
Lack of superior alternatives. I’m not sure if I’m a moral realist, but I’m also pretty unsure about moral nihilism. There’s lots of uncertainty all over the place, and we’re just trying to find the best working theory, even if it’s overall pretty unlikely. As Tyler Cowen once put it: “The best you can do is to pick what you think is right at 1.05 percent certainty, rather than siding with what you think is right at 1.03 percent.”
Ignoring correlated probabilities
Bias towards assigning reasonable sounding probabilities
Assumption that the whole relies on each detail. E.g. even if utilitarianism is not literally correct, we may still find that pursuing a Longtermist agenda is reasonable under improved moral theories
Low probabilities are counteracted by really high possible impacts. If the probability of longtermism being right is ~20%, that’s still a really, really compelling case.
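The arithmetic behind the discrepancy, and the “correlated probabilities” point in particular, can be made concrete with a quick sketch. All numbers here are hypothetical, chosen only to mirror the figures mentioned above:

```python
# The "multiply it out" arithmetic: 20 claims, each at 70% confidence,
# naively treated as fully independent.
n_claims = 20
p_each = 0.70

p_all_independent = p_each ** n_claims
print(f"{p_all_independent:.4%}")  # ~0.0798%, i.e. roughly the 0.08% above

# Ignoring correlations understates the conjunction. A crude illustration:
# suppose the claims are correlated because they all tend to hold whenever
# some shared underlying worldview W is right (numbers are hypothetical).
p_worldview = 0.50        # probability the shared worldview is right
p_each_given_w = 0.95     # probability of each claim, conditional on W
p_all_correlated = p_worldview * p_each_given_w ** n_claims
print(f"{p_all_correlated:.2%}")  # ~17.92%, far above the independent 0.08%
```

The second calculation is of course a toy model, but it shows how a conjunction of many “70%-ish” claims can still be fairly probable once their common dependence on a shared worldview is accounted for.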
I think the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism? I play video games sometimes, but find that I have ample time to do so in my off hours. Playing video games so much that I don’t have time for work doesn’t sound pleasurable to me anyway, although you might enjoy it for brief spurts on weekends and holidays.
Or consider these notes from Nick Beckstead on Tyler Cowen’s view:
“his own interest in these issues is a form of consumption, though one he values highly.”
I think the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism?
It could be that that’s the only question I have to ask. That would happen if I work out what seems best to do from an altruistic perspective, then from a self-interested perspective, and I notice that they’re identical.
But that seems extremely unlikely. It seems likely that there’s a lot of overlap between what’s quite good from each perspective (at least given my current knowledge), since evolution and socialisation and such have led me to enjoy being and feeling helpful, noble, heroic, etc. But it seems much less likely that the very best thing from two very different perspectives is identical. See also Beware surprising and suspicious convergence.
Another way that that could be the only question I have to ask is if I’m certain that I should just act according to self-interest, regardless of what’s right from an altruistic perspective. But I see no good basis for that certainty.
This is part of why I said “it seems plausible that my behaviours would stay pretty similar if I lost all credence in the first four claims”, rather than making a stronger claim. I do think my behaviours would change at least slightly.
That all seems fine, until you start to multiply it out. 70%^20 is 0.08%. And yet my actual confidence in the basic EA framework is probably closer to 50%.
I think what you maybe mean, or what you should mean, is that your actual confidence that you should, all things considered, act roughly as if the EA framework is correct is probably closer to 50%. And that’s what’s decision-relevant. This captures ideas like those you mention, e.g.:
Well, what’s the actual alternative? Maybe they’re even less likely to be true?
Maybe this is unlikely to be true but really important if true, so I should make a “wager” on it for EV reasons?
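The “wager” idea in that second bullet can be put in toy numbers. The figures below are purely hypothetical, just to show the expected-value logic:

```python
# Toy expected-value "wager": even a low-probability option can dominate
# a near-certain but modest alternative if its value-if-true is high enough.
p_right = 0.20            # hypothetical probability the framework is right
value_if_right = 1000     # hypothetical value of acting on it if it's right
value_if_wrong = 0        # assume acting on it is worthless if it's wrong

ev_wager = p_right * value_if_right + (1 - p_right) * value_if_wrong
ev_safe = 1.0 * 50        # a near-certain alternative worth a modest 50

print(ev_wager, ev_safe)  # 200.0 vs 50.0: the wager wins on expected value
```

This is only the bare EV logic; the surrounding discussion rightly notes that whether such wagers should actually move us depends on further questions about decision theory and moral uncertainty.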
I think a 50% confidence that the basic EA framework is actually correct seems much too high, given how uncertain we should be about metaethics, consequentialism, axiology, decision theory, etc. But that uncertainty doesn’t mean acting on any other basis actually seems better. And it doesn’t even necessarily mean I should focus on reducing those uncertainties, for reasons including that I think I’m a better fit for reducing other uncertainties that are also very decision-relevant (e.g., whether people should focus on longtermism or other EA cause areas, or how to prioritise within longtermism).
So I think I’m much less than 50% certain that the basic EA framework is actually correct, but also that I should basically act according to the basic EA framework, and that I’ll continue doing so for the rest of my life, and that I shouldn’t be constantly stressing out about my uncertainty. (It’s possible that some people would find it harder to put the uncertainty out of mind, even when they think that, rationally speaking, they should do so. For those people, this sort of exercise might be counterproductive.)
 By the framework being “actually correct”, I don’t just mean “this framework is useful” or “this framework is the best we’ve got, given our current knowledge”. I mean something like “the claims it is based on are correct, or other claims that justify it are correct”, or “maximally knowledgeable and wise versions of ourselves would endorse this framework as correct or as worth acting on”.
I agree that behaviours like my actual behaviours can be the right choice even if I don’t have high credence in all of these claims, and perhaps even if I had very low or 0 credence in some of them or in the conjunction, for the reasons you mention.
I think it’s useful to separate “my credence that X is true” from “my credence that, all things considered, I should act roughly as if X is true”. I think that that’s also key in “explaining the discrepancy” you point to.
But I disagree that “the real question is, selfishly speaking, how much more do you gain from playing video games than from working on longtermism”. That would only be true if it were guaranteed that what I should do is what’s best for me selfishly, which would be suspicious convergence and/or overconfidence.
I’ll split that into three comments.
But I should note that these comments focus on what I think is true, not necessarily what I think it’s useful for everyone to think about. There are some people for whom thinking about this stuff just won’t be worth the time, or might be overly bad for their motivation or happiness.
I do worry how much it might bias you, especially in the absence of equally rigorously evaluated alternatives.
Am I correct in thinking that you mean you worry how much conducting this sort of exercise might affect anyone who does so, in the sense that it’ll tend to overly strongly make them think they should reduce their confidence in their bottom-line conclusions and actions? (Because they’re forced to look at and multiply one, single, conjunctive set of claims, without considering the things you mention?)
If so, I think I sort-of agree, and that was the main reason I considered never posting this. I also agree that each of the things you point to as potentially “explaining the discrepancy” can matter. As I note in a reply to Max Daniel above:
(Maybe here it’s worth noting that one worry I had about posting this was that it might be demotivating, since there are so many uncertainties relevant to any given action, even though in reality it can still often be best to just go ahead with our current best guess because any alternative—including further analysis—seems less promising.)
And as I note in various other replies here and in the spreadsheet itself, it’s often not obvious that a particular “crux” actually is required to support my current behaviours. E.g., here’s what I say in the spreadsheet I’d do if I lost all my credence in the 2nd claim:
Maybe get back into video games, stand-up comedy, and music? But it feels hard to say, partly because currently I think spending lots of time on EA-aligned things and little time on video games etc. is best for my own happiness, since otherwise I’d have a nagging sense that I should be contributing to things that matter. But maybe that sense would go away if I lost my belief that there are substantial moral reasons? Or maybe I’d want to push that updated belief aside and keep role-playing as if morality mattered a lot.
This is why the post now says:
Perhaps most significantly, as noted in the spreadsheet, it seems plausible that my behaviours would stay pretty similar if I lost all credence in the first four claims
And this is also why I didn’t include in this post itself my “Very naive calculation of what my credence “should be” in this particular line of argument”—I just left that in the spreadsheet, so people will only see that if they actually go to where the details can be found. And in my note there, I say:
I’m not sure that these calculations are useful at all. They might be misleading, because of various complexities noted in this spreadsheet and the accompanying post. Maybe I should just delete this bit.
 Or you might’ve meant other things by the “it”, “you”, and “bias” here. E.g., you might’ve meant “I worry how much seeing this post might bias people who see it”, or “I worry how much seeing this post or conducting this exercise might cause a bias towards anchoring on one’s initial probabilities.”