To be honest, I think that if I had that epistemic state I would probably be like: "Uhm, I guess if I were a consistent rational agent I would do whatever the beliefs I have 1% credence in imply. But alas, I'm not, and so even if I don't endorse this on a meta level, I know that I'll mostly just ~ignore this set of 1%-likely views and do whatever I want instead."
I’d have guessed that:
You’d think that, if you tried to write out all the assumptions that would mean your actions are the best actions to take, at least one would have a <1% chance of being true in your view
But you take those actions anyway, because they seem to you to have a higher expected value than the alternatives
Am I wrong about that (admittedly vague) claim?
Or maybe what seems weird about my chain of cruxes is something like the fact that one that's so unlikely in my own view is so "foundational"? Or something like the fact that I've only identified ~12 cruxes (which is far from a complete list of the assumptions implicit in my specific priorities, such as the precise research projects I work on) and yet already one of them was deemed so unlikely by me?
(Maybe here it’s worth noting that one worry I had about posting this was that it might be demotivating, since there are so many uncertainties relevant to any given action, even though in reality it can still often be best to just go ahead with our current best guess because any alternative—including further analysis—seems less promising.)
Hmm, good question whether that would be true for one of my 'cruxes' as well. FWIW my immediate intuition is that it wouldn't, i.e. that I'd have >1% credence in all relevant assumptions. Or at least that counterexamples would feel 'pathological' to me, i.e. like weird edge cases I'd want to discount. But I haven't carefully thought about it, and my view on this doesn't feel that stable.
I also think the ‘foundational’ property you gestured at does some work for why my intuitive reaction is “this seems wild”.
Thinking about this, I also realized that maybe there's some distinction between "what it feels like if I just consult my intuition" and "what my all-things-considered belief/'betting odds' would be after I take into account outside views, peer disagreement, etc.". The example that made me think about this was startup founders, or other people embarking on ambitious projects that, based on their reference class, are very likely to fail. [Though idk if 99% is the right quantitative threshold for cases that appear in practice.] I would guess that some people with that profile might say something like "sure, in one sense I agree that the chance of me succeeding must be very small, but it just does feel to me like I will succeed, and if I felt otherwise I wouldn't be doing what I'm doing".
Weirdly, for me, it’s like:
When I don’t really think about it, I basically feel like moral realism is definitely true and like there’s no question there at all
When I do really think about it, my independent impression is that moral realism basically makes no sense and is almost guaranteed to be false
But then lots of smart people who’ve thought about metaethics a lot do seem to think moral realism is somewhere between plausible and very likely, so I update up to something like a 1% chance (0.5% in this spreadsheet)
I think that this is fairly different from the startup founder example, though I guess it ends up in a similar place of it being easy to feel like “the odds are good” even if on some level I believe/recognise that they’re not.
Actually, that comment—and this spreadsheet—implied that my all-things-considered belief (not independent impression) is that there’s a ~0.5-1% chance of something like moral realism being true. But that doesn’t seem like the reasonable all-things-considered belief to have, given that it seems to me that:
The average credence in that claim from smart people who’ve spent a while thinking about it would be considerably higher
One useful proxy is the 2013 PhilPapers survey, which suggests that, out of some sample of philosophers, 56.4% subscribe to moral realism, 27.7% subscribe to moral anti-realism, and 15.9% were “other”
I’m deeply confused about this topic (which pushes against relying strongly on my own independent impression)
So maybe actually my all-things-considered belief is (or should be) closer to 50% (i.e., ~100 times as high as is suggested in this spreadsheet), and the 0.5% number is somewhere in-between my independent impression and my all-things-considered belief.
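One illustrative way to see how a number can land "somewhere in between" an independent impression and peer credences is to average them in log-odds space (a geometric mean of odds). This is just a sketch, not a claim about how the spreadsheet number was actually produced; the 0.1% independent impression and the 50/50 weighting below are hypothetical inputs chosen for illustration.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(p_own, p_peers, w_own):
    """Weighted average of two credences in log-odds space.

    w_own is the (hypothetical) weight placed on one's own
    independent impression; the rest goes to the peer average.
    """
    return sigmoid(w_own * logit(p_own) + (1 - w_own) * logit(p_peers))

# Hypothetical inputs: independent impression ~0.1%,
# peer average ~56% (roughly the PhilPapers realism figure).
pooled = pool(0.001, 0.56, 0.5)
print(round(pooled, 4))  # roughly 3-4%
```

The interesting feature is that pooling in log-odds space lands far below the arithmetic midpoint of the two probabilities: an extreme independent impression keeps pulling the result down even against a near-majority peer view, which is one way a sub-1% number can coexist with taking the peer credences seriously.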
That might further help explain why it usually doesn’t feel super weird to me to kind-of “act on a moral realism wager”.
But yeah, I mostly feel pretty confused about what this topic even is, what I should think about it, and what I do think about it.