Thank you for sharing! I generally feel pretty good about people sharing their personal cruxes underlying their practical life/career plans (and it’s toward the top of my implicit “posts I’d love to write myself if I can find the time” list).
I must confess it seems pretty wild to me to have a chain of cruxes like this start with one in which one has a credence of 1%. (In particular one that affects one’s whole life focus in a massive way.) To be clear, I don’t think I have an argument that this epistemic state must be unjustified or anything like that. I’m just reporting that it seems very different from my epistemic and motivational state, and that I have a hard time imagining ‘inhabiting’ such a perspective. E.g. to be honest I think that if I had that epistemic state I would probably be like “uhm I guess if I was a consistent rational agent I would do whatever the beliefs I have 1% credence in imply, but alas I’m not, and so even if I don’t endorse this on a meta level I know that I’ll mostly just ~ignore this set of 1% likely views and do whatever I want instead”.
Like, I feel like I could understand why people might be relatively confident in moral realism, even though I disagree with them. But this “moral realism wager” kind of view/life I find really hard to imagine :)
to be honest I think that if I had that epistemic state I would probably be like “uhm I guess if I was a consistent rational agent I would do whatever the beliefs I have 1% credence in imply, but alas I’m not, and so even if I don’t endorse this on a meta level I know that I’ll mostly just ~ignore this set of 1% likely views and do whatever I want instead”.
I’d have guessed that:
You’d think that, if you tried to write out all the assumptions that would mean your actions are the best actions to take, at least one would have a <1% chance of being true in your view
But you take those actions anyway, because they seem to you to have a higher expected value than the alternatives
Am I wrong about that (admittedly vague) claim?
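To illustrate the structure I have in mind, here's a toy sketch. The numbers are entirely made up (they aren't from the post or the spreadsheet); the point is only that an action premised on an assumption with ~1% credence can still come out ahead in expected value:

```python
# Toy illustration with entirely made-up numbers: an action premised on an
# assumption given only ~1% credence can still have the higher expected value.

p_assumption = 0.01            # credence in the crux-like assumption
value_if_true = 1_000.0        # hypothetical payoff of the action if the assumption holds
value_if_false = -1.0          # hypothetical small cost of the action if it doesn't
value_of_alternative = 2.0     # hypothetical payoff of just doing whatever one would otherwise prefer

ev_action = p_assumption * value_if_true + (1 - p_assumption) * value_if_false
ev_alternative = value_of_alternative

print(f"EV of acting on the ~1% assumption: {ev_action:.2f}")     # 9.01
print(f"EV of the alternative:              {ev_alternative:.2f}")  # 2.00
```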
Or maybe what seems weird about my chain of cruxes is something like the fact that one that's so unlikely in my own view is so "foundational"? Or something like the fact that I've only identified ~12 cruxes (which is far from all the assumptions implicit in my specific priorities, such as the precise research projects I work on) and yet already one of them was deemed so unlikely by me?
(Maybe here it’s worth noting that one worry I had about posting this was that it might be demotivating, since there are so many uncertainties relevant to any given action, even though in reality it can still often be best to just go ahead with our current best guess because any alternative—including further analysis—seems less promising.)
Hmm—good question whether that would be true for one of my 'cruxes' as well. FWIW my immediate intuition is that it wouldn't be, i.e. that I'd have >1% credence in all relevant assumptions. Or at least that counterexamples would feel 'pathological' to me, i.e. like weird edge cases I'd want to discount. But I haven't carefully thought about it, and my view on this doesn't feel that stable.
I also think the 'foundational' property you gestured at does some work in explaining why my intuitive reaction is "this seems wild".
Thinking about this, I also realized that maybe there's a relevant distinction between "how it feels if I just look at my intuition" and "what my all-things-considered belief/'betting odds' would be after I take into account outside views, peer disagreement, etc.". The example that made me think about this was startup founders, or other people embarking on ambitious projects that, based on their reference class, are very likely to fail. [Though idk if 99% is the right quantitative threshold for cases that appear in practice.] I would guess that some people with that profile might say something like "sure, in one sense I agree that the chance of me succeeding must be very small—but it just does feel to me like I will succeed, and if I felt otherwise I wouldn't do what I'm doing".
Weirdly, for me, it's like:
When I don't really think about it, I basically feel like moral realism is definitely true and like there's no question there at all
When I do really think about it, my independent impression is that moral realism seems to basically make no sense and be almost guaranteed to be false
But then lots of smart people who've thought about metaethics a lot do seem to think moral realism is somewhere between plausible and very likely, so I update up to something like a 1% chance (0.5% in this spreadsheet)
I think that this is fairly different from the startup founder example, though I guess it ends up in a similar place of it being easy to feel like “the odds are good” even if on some level I believe/recognise that they’re not.
Actually, that comment—and this spreadsheet—implied that my all-things-considered belief (not independent impression) is that there’s a ~0.5-1% chance of something like moral realism being true. But that doesn’t seem like the reasonable all-things-considered belief to have, given that it seems to me that:
The average credence in that claim from smart people who’ve spent a while thinking about it would be considerably higher
One useful proxy is the 2013 PhilPapers survey, which suggests that, out of some sample of philosophers, 56.4% subscribed to moral realism, 27.7% subscribed to moral anti-realism, and 15.9% were "other"
I’m deeply confused about this topic (which pushes against relying strongly on my own independent impression)
So maybe actually my all-things-considered belief is (or should be) closer to 50% (i.e., ~100 times as high as is suggested in this spreadsheet), and the 0.5% number is somewhere in-between my independent impression and my all-things-considered belief.
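As a rough illustration of how those numbers relate (the weights below are made up, and simple linear pooling is just one of many ways to combine an independent impression with a peer average), deferring heavily to the peer average pulls the all-things-considered number toward ~50%, whereas a ~0.5% figure looks much closer to the pure inside view:

```python
# Rough illustration only: linear opinion pooling with made-up weights, showing
# how much the answer depends on how heavily I defer to the peer average.

independent_impression = 0.005  # my own credence that something like moral realism is true
peer_average = 0.564            # proxy: share of the PhilPapers sample subscribing to moral realism

def pooled_credence(own: float, peers: float, weight_on_peers: float) -> float:
    """Weighted average (linear pooling) of my own credence and the peer average."""
    return (1 - weight_on_peers) * own + weight_on_peers * peers

for weight in (0.0, 0.5, 0.9):
    print(f"weight on peers = {weight:.1f} -> pooled credence ≈ "
          f"{pooled_credence(independent_impression, peer_average, weight):.3f}")
# Roughly 0.005, 0.28, and 0.51 respectively: only near-total reliance on my own
# independent impression keeps the all-things-considered number anywhere near 0.5%.
```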
That might further help explain why it usually doesn’t feel super weird to me to kind-of “act on a moral realism wager”.
But yeah, I mostly feel pretty confused about what this topic even is, what I should think about it, and what I do think about it.
In particular one that affects one’s whole life focus in a massive way.
I’m actually not very confident that dropping the first four claims would affect my actual behaviours very much. (Though it definitely could, and I guess we should be wary of suspicious convergence.)
To be clearer on this, I’ve now edited the post to say:
Perhaps most significantly, as noted in the spreadsheet, it seems plausible that my behaviours would stay pretty similar if I lost all credence in the first four claims
Here’s what I say in the spreadsheet I’d do if I lost all my credence in the 2nd claim:
Maybe get back into video games, stand-up comedy, and music? But it feels hard to say, partly because currently I think spending lots of time on EA-aligned things and little time on video games etc. is best for my own happiness, since otherwise I'd have a nagging sense that I should be contributing to things that matter. But maybe that sense would go away if I lost my belief that there are substantial moral reasons? Or maybe I'd want to push that updated belief aside and keep role-playing as if morality mattered a lot.
And here’s what I say I’d do if I lost all my credence in the 4th claim:
If I lost belief in this claim, but thought there was a non-negligible chance we could learn about moral truths (maybe by creating a superintelligence, exploring distant galaxies, “breaking out of the simulation”, or whatever), I might try to direct all efforts and resources towards learning the moral truth, or towards setting ourselves up to learn it (and then act on it) in future.
This might look pretty similar to reducing existential risk and ensuring a long reflection can happen. (Though it also might not. And I haven’t spent much time on cause prioritisation from the perspective of someone who doesn’t act on those first four claims, so maybe my first thoughts here are mistaken in some basic way.)
That seems like a fair comment.
FWIW, mostly, I don't really feel like my credence is that low in those claims, except when I focus my explicit attention on that topic. I think on an implicit level I have strongly moral realist intuitions. So it doesn't take much effort to motivate myself to act as though moral realism is true.
If you’d be interested in my attempts to explain how I think about the “moral realism wager” and how it feels from the inside to kind-of live according to that wager, you may want to check out my various comments on Lukas Gloor’s anti-realism sequence.
(I definitely do think the wager is at least kind-of weird, and I don’t know if how I’m thinking makes sense. But I don’t think I found Lukas’s counterarguments compelling.)