Thanks for this! - My tentative view is that cluelessness is an important issue with practical implications, and so I’m particularly interested in thoughtful arguments for opposing views.
I’ll post some reactions in separate comments to facilitate discussion.
Knightian uncertainty seems to me never rational. There are strong arguments that credence functions should be sharp. [...]
I agree that there are strong arguments that credence functions should be sharp. So I don’t think the case for cluelessness is a slam dunk. (This grants that, roughly speaking, considering cluelessness to be an interesting problem commits one to a view involving non-sharp credence functions; I’m in fact not sure one is thus committed.) It just seems to me that the arguments for taking cluelessness seriously as a problem are stronger. Still, I’m curious what you think the best arguments for credence functions being sharp are, or where I can read about them.
[I know I’m late to the party but...]
I’m certainly not an expert here, and I think my thinking is somewhat unclear, and my explanation of it likely will be too. But I share the sense that Knightian uncertainty can’t be rational. Or more specifically, I have a sense that in these sorts of discussions, a lot of the work is being done by imprecise terms that imply a crisp, black-and-white distinction between something we could call “regular” uncertainty and something we could call “extreme”/”radical”/”unquantifiable” uncertainty, without this distinction being properly made explicit or defended.
For example, in Hilary Greaves’s paper on cluelessness (note: similar thoughts from me would apply to Mogensen’s paper, though explained differently), she discusses cases of “simple cluelessness” and then argues they’re not really a problem, because in such cases the “unforeseeable effects” cancel out in expectation, even if not in reality. E.g.,
While there are countless possible causal stories about how helping an old lady across the road might lead to (for instance) the existence of an additional murderous dictator in the 22nd century, any such story will have a precise counterpart, precisely as plausible as the original, according to which refraining from helping the old lady turns out to have the consequence in question; and it is intuitively clear that one ought to have equal credences in such precise counterpart possible stories.
Greaves is arguing that we can therefore focus on whether the “foreseeable effects” are positive or negative in expectation, just as our intuitions would suggest.
I agree with the conclusion, but I think the way it’s juxtaposed with “complex cluelessness” (which she does suggest may be a cause for concern) highlights the sort of unwarranted (and implicit) sharp distinctions between “types” of uncertainty which I think are being made.
The three key criteria Greaves proposes for a case to involve complex cluelessness are:
(CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;
(CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;
(CC3) It is unclear how to weigh up these reasons against one another.
I think all of that actually applies to the old lady case, just very speculatively. One reason to think CC1 holds is that the old lady and/or anyone witnessing your kind act and/or anyone who’s told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
Importantly, there isn’t a “precise counterpart, precisely as plausible as the original” for this story. That’d have to be something like people seeing this act and therefore thinking unkindness, bullying, etc. are more the norm than they previously thought they were, which is clearly less plausible.
One reason to think CC2 holds for the old lady case could jump off from that story: maybe your action sparks ripples of kindness, altruism, etc., which lead to more people donating to GiveWell type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
I’d argue you again can’t tell a precise counterpart story that’s precisely as plausible as this, for reasons very similar to those covered in both Greaves’s and Mogensen’s papers—there are separate lines of evidence and argument for GiveWell type charities leading to increased population vs them leading to decreased population, and for increased population increasing vs decreasing x-risk. (And again, it seems less plausible that witnessing your good deed would make people less likely to donate to GiveWell charities than more likely—or at least, a decrease would occur via different mechanisms than an increase, and therefore not be a “precise counterpart” story.)
I think both of these “stories” I’ve told are extremely unlikely, and for practical purposes aren’t worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2. And it doesn’t seem to me there’s a fundamental difference between their plausibility and worthiness-of-attention and that of the possibility for donations to AMF to increase vs decrease x-risk (or other claimed cases of [complex] cluelessness). I think it’s just a difference of degree—perhaps a large difference in degree, but degree nonetheless. I can’t see how we could draw some clear line somewhere to decide what uncertainties can just be dealt with in normal ways and which uncertainties make us count as clueless and thus unable to use regular expected value reasoning.
(And as for CC3, I’d say it’s at least slightly “unclear” how to weigh up these reasons against one another.)
I think my thoughts here are essentially a criticism of the idea of a sharp, fundamental distinction between “risk” and “Knightian uncertainty”*, rather than of Greaves and Mogensen’s papers as a whole. That is, if we did accept that distinction as a premise, I think most of what Greaves and Mogensen say may well follow. (I also found their papers interesting, and acknowledge that I’m very much not an expert here and all of this could be unfounded, so for all these reasons this isn’t necessarily a case of me viewing their papers negatively.)
*I do think that those can be useful concepts for heuristic-type, practical purposes. I think we should probably act differently when our credences are massively less well-founded than usual, and probably be suspicious of traditional expected value reasoning then. But I think that’s because of various flaws in how humans think (e.g., overconfidence in inside-view predictions), not because different rules fundamentally should apply to fundamentally different types of uncertainty.
Re: your old lady example: as far as I know, the recent papers (e.g. here) give the following example instead: (1) you help the old lady across the road either on a Monday or on a Tuesday (you must and can do exactly one of the two). In this case, your examples for CC1 and CC2 don’t hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don’t know anything about Mondays or Tuesdays.
Interesting. I wonder if the switch to that example was because they had a similar thought to mine, or read that comment.
But I think I can make a similar point with the Monday vs Tuesday example. I also predict I could make a similar point with respect to any example I’m given.
This is because I do know things about Mondays and Tuesdays in general, as well as about other variables. If the papers argue we’re meant to artificially suppose we know literally nothing at all about a given variable, that seems weird or question-begging, and irrelevant to actual decision-making. (Note that I haven’t read the recent paper you link to.)
I could probably come up with several stories for the Monday vs Tuesday example, but my first thought is to make it connect to my prior stories so it can reuse most of the reasoning from there, and to do that via social media. Above, I wrote:
One reason to think CC1 holds is that the old lady and/or anyone witnessing your kind act and/or anyone who’s told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
This says people tend to use social media more on Tuesday and Wednesday than on Monday. I think we therefore have some reason to believe that, if I help an old lady cross the road on Tuesday rather than Monday, it’s slightly more likely that someone will post about that on social media, and/or use social media in a slightly more altruistic, kind, community-spirit-y way than they otherwise would’ve. (Because me doing this on Tuesday means they’re slightly more likely to be on social media while this kind deed is relatively fresh in their minds.) This could then further spread those norms (compared to how much they’d be spread if we helped on Monday), and we could tell a story about how that ripples out further etc.
Above, I also wrote:
One reason to think CC2 holds for the old lady case could jump off from that story: maybe your action sparks ripples of kindness, altruism, etc., which lead to more people donating to GiveWell type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
I would now say exactly the same thing is true for the Monday vs Tuesday example, given my above argument for why the norms might be spread more if we help on Tuesday rather than Monday.
(We could also probably come up with stories related to amounts of traffic on Monday vs Tuesday—e.g., the old lady may be likelier to die if un-helped on one day, or more people may be delayed. Or related to people tending to be a little happier or sadder on Monday. Or related to what we ourselves predict we’ll do with our time on Monday or Tuesday, which we probably would know about. Or many other things.)
As before:
I think both of these “stories” I’ve told are extremely unlikely, and for practical purposes aren’t worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2.
Sorry, I don’t have the time to comment in depth. However, I think if one agrees with cluelessness, then you don’t offer an objection. You might even extend their worries by saying that “almost everything has ‘asymmetric uncertainty’”. I would be interested in your extension of your last sentence: “They are extremely unlikely and thus not worth bearing in mind”. Why is this true?
I would be interested in your extension of your last sentence: “They are extremely unlikely and thus not worth bearing in mind”. Why is this true?
When I said “I think both of these “stories” I’ve told are extremely unlikely, and for practical purposes aren’t worth bearing in mind”, the “aren’t worth bearing in mind” part meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don’t devote explicit attention to such (extremely unlikely) “stories” when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed.
So I think the expected value of making decisions before and without thinking through such stories is higher than the expected value of trying to think through such stories before making decisions.
In other words, the value of information one would be expected to get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could’ve done with that time otherwise).
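To make that comparison concrete, here is a toy sketch in Python. Every number in it is invented purely for illustration (nothing here comes from the papers or the comments above): the point is just that deliberating further over such stories pays only if the expected improvement to the decision exceeds what the same time would produce if spent elsewhere.

```python
# Toy value-of-information comparison (all numbers invented for illustration).
# Deliberating over speculative "stories" is worth it only if the expected
# improvement to the decision exceeds the opportunity cost of the time spent.

p_story_changes_decision = 0.001   # chance that further deliberation flips the choice
value_if_flip_is_correct = 50.0    # value gained in the (rare) case the flipped choice is better
expected_value_of_deliberating = p_story_changes_decision * value_if_flip_is_correct

opportunity_cost_of_time = 1.0     # value the same time would produce if spent elsewhere

print(f"Expected value of deliberating: {expected_value_of_deliberating:.3f}")
print(f"Opportunity cost of that time:  {opportunity_cost_of_time:.3f}")
print("Deliberate further" if expected_value_of_deliberating > opportunity_cost_of_time
      else "Just decide without the stories")
```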
Disclaimer: Written on low sleep, and again reporting only independent impressions (i.e., what I’d believe before updating on the fact that various smart people don’t share my views on this). I also shared related thoughts in this comment thread.
I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn’t a useful concept.
But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet her definition of complex cluelessness (which I think I’ve shown), I think this provides reason to pause and re-evaluate the claims.
And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness—i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a single type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.)
And we might also notice that the term “cluelessness” seems to suggest we know literally nothing about how to compare the outcomes. Whereas I’ve argued that in all cases we’ll have some information relevant to that, and the various bits of information will vary in their importance and degree of uncertainty.
So altogether, it would just seem more natural to me to say:
we’re always at least a little uncertain, and often extremely uncertain, and often somewhere in between
in theory, the “correct” way to reason is basically expected value theory, using all the scraps of evidence at our disposal, and keeping track of how high or low the resilience of our credences is
in practice, we should do something sort of like that, but with a lot of caution and heuristics (given that we’re dealing with limited data, computational constraints, biases, etc.).
I do think there are many important questions to be investigated with regards to how best to make decisions under conditions of extreme uncertainty, and that this becomes especially relevant for people who want to have a positive impact on the long-term future. But it doesn’t seem to me that the idea of complex cluelessness is necessary or useful in posing or investigating those questions.
Also, when reading Greaves and Mogensen’s papers, I was reminded of the ideas of cluster thinking (also here) and model combination. I could be drawing faulty analogies, but it seemed like those ideas could be ways to capture, in a form that can actually be readily worked with, the following idea (from Greaves; the same basic concept is also used in Mogensen):
in the situations we are considering, instead of having some single and completely precise (real-valued) credence function, agents are rationally required to have imprecise credences: that is, to be in a credal state that is represented by a many-membered set of probability functions (call this set the agent’s ‘representor’)
That is, we can consider each probability function in the agent’s representor as one model, and then either qualitatively use Holden’s idea of cluster thinking, or get a weighted combination of those models. Then we’d actually have an answer, rather than just indifference.
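To make that concrete, here is a minimal toy sketch of the “weighted combination” reading. All of the probabilities, weights, and payoffs below are invented for illustration (neither Greaves nor Mogensen specifies anything like them): each probability function in the representor is treated as one model, each model yields an expected value, and the models are combined with weights.

```python
# Toy sketch: treating each probability function in a "representor" as one model,
# then combining the models' expected values with (made-up) weights.
# All numbers are illustrative assumptions, not anything from Greaves or Mogensen.

# Each "model" is a probability that the intervention's long-run effect is net positive.
representor = {
    "optimistic_model": 0.7,
    "pessimistic_model": 0.4,
    "agnostic_model": 0.5,
}

# How much credence we place in each model (weights sum to 1).
model_weights = {
    "optimistic_model": 0.3,
    "pessimistic_model": 0.2,
    "agnostic_model": 0.5,
}

# Illustrative payoffs (arbitrary value units) if the long-run effect is good vs bad.
VALUE_IF_GOOD = 100
VALUE_IF_BAD = -80


def expected_value(p_good: float) -> float:
    """Expected value of acting, under a single probability function."""
    return p_good * VALUE_IF_GOOD + (1 - p_good) * VALUE_IF_BAD


# Weighted combination across the representor: one number, rather than indifference.
combined_ev = sum(
    model_weights[name] * expected_value(p) for name, p in representor.items()
)

for name, p in representor.items():
    print(f"{name}: EV = {expected_value(p):.1f}")
print(f"Weighted combination: EV = {combined_ev:.1f}")
```

Of course, a defender of imprecise credences might reply that the model weights are precisely what we lack; the sketch just shows what “actually having an answer” would amount to if we were willing to supply them.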
This seems like potentially “the best of both worlds”; i.e., a way to capture both of the following intuitively appealing ideas:
perhaps we shouldn’t present singular, sharp credence functions over extremely hard-to-predict long-term effects
we can still make educated guesses like “avoiding extinction is probably good in expectation” and (perhaps) “giving to AMF is probably good in expectation”.
(This second intuition can rest on ideas like “Yeah, ok, I agree that it’s ‘unclear’ how to weigh up these arguments, but I weigh up arguments when it’s unclear how to do so all the time. I’m still at least slightly more convinced by argument X, so I’m going to go with what it suggests, and just also remain extremely open to new evidence.”)
Hello,
Here is a good paper on this - https://www.princeton.edu/~adame/papers/sharp/elga-subjective-probabilities-should-be-sharp.pdf
Bet A: If H is true, you lose $10. Otherwise you win $15.
Bet B: If H is true, you win $15. Otherwise you lose $10.
First I’m going to offer you Bet A. Immediately after you decide whether to accept Bet A, I’m going to offer you Bet B.
Can’t the sequence proposal be fixed by conditioning on the past and only considering future sequences of actions? Committing to rejecting both bets A and B is rationally impermissible if you will be offered both, since that’s worse than accepting both; but after your decision on A, regardless of whether you accepted or rejected, it could be that both accepting and rejecting B are permissible. The fact that something was my past action shouldn’t matter or prevent me from completing some particular sequence of actions that includes past actions; only my future prospects and future actions matter.
I think this makes sense for sharp probabilities, too: suppose you assign some sharp probability p ≤ 2/5 to H being true, and have already rejected A, even though it had positive expected value (so this decision was irrational at the time). Then, since the expected value of B is ≤ 0, it’s permissible to reject B, and even required if the inequality is strict. You may be rationally required to complete a sequence of actions which was irrational before you started.
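To spell out the arithmetic behind this (a quick sketch; the only inputs taken from the source are the bets’ payoffs, and p is whatever sharp probability you assign to H):

```python
# Quick check of the expected values behind the Bet A / Bet B example.
# Payoffs come from the quoted bets; p is the sharp probability assigned to H.

def ev_bet_a(p: float) -> float:
    # Bet A: lose $10 if H is true, win $15 otherwise.
    return p * (-10) + (1 - p) * 15   # = 15 - 25p, positive iff p < 3/5


def ev_bet_b(p: float) -> float:
    # Bet B: win $15 if H is true, lose $10 otherwise.
    return p * 15 + (1 - p) * (-10)   # = 25p - 10, non-positive iff p <= 2/5


for p in (0.2, 0.4, 0.6):
    print(f"p = {p}: EV(A) = {ev_bet_a(p):+.1f}, EV(B) = {ev_bet_b(p):+.1f}, "
          f"EV(accept both) = {ev_bet_a(p) + ev_bet_b(p):+.1f}")

# Accepting both bets guarantees +$5 regardless of H (you win $15 on one and lose
# $10 on the other), which is why rejecting both is dominated. But for p <= 2/5,
# A has positive EV while B has EV <= 0, so someone who (irrationally) rejected A
# can then permissibly reject B.
```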
You can also apply Mogensen’s maximality rule to sequences. Given some set of plausible probability distributions, if one sequence of actions θ is better in expectation than another ϕ under at least one distribution, and not worse under any other distribution, then θ≻ϕ. If neither strict inequality holds between the two options and these are the only two options, then both are permissible. (We sacrifice the independence of irrelevant alternatives, since a third option could dominate one but not the other, only ruling out the dominated one.)
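Here is a rough sketch of how that maximality-style comparison between two sequences could be implemented. The distributions, outcomes, and utilities are all invented for illustration, and the code only encodes the test described above (better in expectation under at least one distribution, not worse under any), not anything more from Mogensen’s paper.

```python
# Toy sketch of a maximality-style comparison between two sequences of actions,
# given a set of plausible probability distributions over outcomes.
# Distributions, outcomes, and utilities are invented purely for illustration.

from itertools import product

OUTCOMES = ["good", "bad"]
UTILITY = {"good": 1.0, "bad": -1.0}

# distributions[d][sequence][outcome] = probability of that outcome if that
# sequence of actions is taken, according to distribution d.
distributions = {
    "dist_1": {"theta": {"good": 0.6, "bad": 0.4}, "phi": {"good": 0.5, "bad": 0.5}},
    "dist_2": {"theta": {"good": 0.7, "bad": 0.3}, "phi": {"good": 0.7, "bad": 0.3}},
}


def ev(dist: dict, sequence: str) -> float:
    """Expected utility of a sequence under one probability distribution."""
    return sum(dist[sequence][o] * UTILITY[o] for o in OUTCOMES)


def strictly_preferred(seq_a: str, seq_b: str) -> bool:
    """seq_a beats seq_b iff it's at least as good under every distribution
    and strictly better under at least one."""
    evs = [(ev(d, seq_a), ev(d, seq_b)) for d in distributions.values()]
    return all(a >= b for a, b in evs) and any(a > b for a, b in evs)


for a, b in product(["theta", "phi"], repeat=2):
    if a != b:
        print(f"{a} strictly preferred to {b}? {strictly_preferred(a, b)}")
# If neither sequence is strictly preferred to the other, both count as permissible.
```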
How important do you think non-sharp credence functions are to arguments for cluelessness being important? If you generally reject Knightian uncertainty and quantify all possibilities with probabilities, how diminished is the case for problematic cluelessness? (Or am I just misunderstanding the words here?)
My belief that cluelessness is important is fairly independent of any specific philosophical/technical account of cluelessness. In particular, I don’t think me changing my mind on whether credence functions have to be sharp would significantly change my views on the importance of cluelessness.
In this comment I’ve explained in more detail what I think about the relationship between the basic idea and specific philosophical theories trying to describe it.
(FWIW, I don’t feel like I have a well-informed view on whether credence functions have to be sharp. If anything, I have a weak intuition that it’s a bit more likely than not that I’d conclude they have to be if I spent more time looking into the question.)