I’ve also never been satisfied with any account I’ve seen of indeterminate/imprecise credences
I’d be keen to hear more why you’re unsatisfied with these accounts.
But this isn’t a fundamental indeterminacy — rather, it’s a view that it’s often not worth expending the cognition to make them more precise
Just to be clear, are you saying: “It’s a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren’t sensitive to variation within the ranges specified by these credences”?
At any moment, we have credence (itself kind of imprecise absent further thought) about where our probabilities will end up with further thought
If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me. (Maybe by “kind of” you mean to allow for a degree of imprecision that isn’t decision-relevant, per my question above?)
What’s the point of tracking all these imprecise credences rather than just single precise best-guesses?
Because if the sign of intervention X for the long-term varies across your range of credences, that means you don’t have a reason to do X on total-EV grounds. This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do (see this comment).
ETA: I’m also curious whether, if you agreed that we aren’t rationally obligated to assign determinate credences in many cases, you’d agree that your arguments about unknown unknowns here wouldn’t work. (Because there’s no particular reason to commit to one “simplicity prior,” say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)
I’d be keen to hear more why you’re unsatisfied with these accounts.
With the warning that this may be unsatisfying, since I’m recounting a feeling I’ve had historically and responding to my impression of a range of accounts, rather than providing sharp complaints about a particular account:
Accounts of imprecise credences seem typically to produce something like ranges of probabilities and then treat these as primitives
I feel confusion about “where does the range come from? what’s it supposed to represent?”
Honestly this echoes some of my unease about precise credences in the first place!
So I am into exploration of imprecise credences as a tool for modelling/describing the behaviour of boundedly rational actors (including in some contexts as a normative ideal for them to follow)
But I think I get off the train before reification of the imprecise credences as a thing unto themselves
(that’s incomplete, but I think it’s the first-order bit of what seems unsatisfying)
Just to be clear, are you saying: “It’s a view that, for all/most indeterminate credences we might have, our prioritization decisions (e.g. whether intervention X is net-good or net-bad) aren’t sensitive to variation within the ranges specified by these credences”?
Definitely not saying that!
Instead I’m saying that in many decision-situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from doing that thinking aren’t enough to justify it, so they shouldn’t do the thinking.
If your estimate of your ideal-precise-credence-in-the-limit is itself indeterminate, that seems like a big deal — you have no particular reason to adopt a determinate credence then, seems to me.
I don’t see probabilities as magic absolutes, but rather as a tool. Sometimes it seems helpful to pluck a number out of the air and roll with that (and better practice than investing cognition in keeping track of an uncertainty range).
That said, I’m not sure it’s crucial to me to model there being a single precise credence that is being approximated. What feels more important is to be able to model the (common) phenomenon where you can reduce your uncertainty by investing more time thinking.
Later in your comment you use the phrase “rationally obligated”. I tend to shy away from that phrase in this context, because it’s vague whether it’s meant to apply to fully rational or boundedly rational actors. In short:
I’m sympathetic to the idea that fully rational actors should have precise credences
(for the normal vNM kind of reasons)
I don’t want to fully commit to that view, but it also doesn’t seem to me to be cruxy
I don’t think that boundedly rational actors are rationally obliged to have precise credences
But I don’t think that entails giving up on the idea that they can make progress towards something (which I might think of as “the precise credence a fully rational version of them would have”) by thinking more; that is, it doesn’t commit me to saying “you have no reason to adopt a precise credence”
Because if the sign of intervention X for the long-term varies across your range of credences, that means you don’t have a reason to do X on total-EV grounds.
I reject this claim. For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of pi is odd, and gain me $2 if the 20th digit of pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I’ll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I’d end up (somewhere close to 50%), and think that this is a good bet to take—rather than saying that EV somehow doesn’t give me any reason to like the bet.
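To make the arithmetic concrete, here’s a minimal sketch of that bet (the function name and the break-even point are my own illustration, not something from the thread):

```python
# Toy EV for the pi bet: lose $1 if the 20th digit is odd, gain $2 if it is even.
# p_even is my credence that the digit is even.
def expected_value(p_even: float) -> float:
    return p_even * 2.0 + (1.0 - p_even) * (-1.0)

print(expected_value(0.5))    # 0.5: at a ~50% guess the bet is worth about +$0.50
print(expected_value(1 / 3))  # ~0.0: break-even, so the EV is positive for any p_even above 1/3
```

So any credence comfortably above 1/3 recommends the bet, which is why a rough guess about where I’d end up seems enough here.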
This seems hugely decision-relevant to me, if we have other decision procedures under cluelessness available to us other than committing to a precise best guess, as I think we do
For what it’s worth I’m often pretty sympathetic to other decision procedures than committing to a precise best guess (cluelessness or not).
ETA: I’m also curious whether, if you agreed that we aren’t rationally obligated to assign determinate credences in many cases, you’d agree that your arguments about unknown unknowns here wouldn’t work. (Because there’s no particular reason to commit to one “simplicity prior,” say. And the net direction of our biases on our knowledge-sampling processes could be indeterminate.)
I don’t think I’d agree with that. Although I could see saying “yes, this is a valid argument about unknown unknowns; however, it might be overwhelmed by as-yet-undiscovered arguments about unknown unknowns that point in the other direction, so we should be suspicious of resting too much on it”.
Instead I’m saying that in many decision-situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from doing that thinking aren’t enough to justify it, so they shouldn’t do the thinking.
(I don’t think this is particularly important; you can feel free to prioritize my other comment.) Right, sorry, I understood that part. I was asking about an implication of this view. Suppose you have an intervention whose sign varies over the range of your indeterminate credences. Per the standard decision theory for indeterminate credences, then, you currently don’t have a reason to do the intervention — it’s not determinately better than inaction. (I’ll say more about this below, re: your digits of pi example.) So if by “the returns from doing that thinking aren’t enough to justify it” you mean you should just do the intervention in such a case, that doesn’t make sense to me.
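To spell out the kind of rule I have in mind, here’s a minimal sketch, assuming something like a maximality-style criterion over a set of candidate credences (the numbers and names are purely illustrative):

```python
# Sketch of a maximality-style rule: intervention X beats inaction (EV 0)
# only if its EV is positive under *every* credence in the credal set.
def ev(p: float, value_if_good: float, value_if_bad: float) -> float:
    return p * value_if_good + (1 - p) * value_if_bad

def determinately_better_than_inaction(credal_set, value_if_good, value_if_bad) -> bool:
    return all(ev(p, value_if_good, value_if_bad) > 0 for p in credal_set)

credal_set = [0.2, 0.4, 0.6, 0.8]  # coarse stand-in for an interval of credences
print(determinately_better_than_inaction(credal_set, 10, -5))  # False: the sign flips across the set
print(determinately_better_than_inaction(credal_set, 10, -1))  # True: positive under every credence
```

On this picture, when the sign flips across the set (the first case), total-EV reasoning alone doesn’t favor doing X over inaction.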
Thanks for explaining!
I feel confusion about “where does the range come from? what’s it supposed to represent?”
Honestly this echoes some of my unease about precise credences in the first place!
Indeed. :) If “where do these numbers come from?” is your objection, this is a problem for determinate credences too. We could get into the positive motivations for having indeterminate credences, if you’d like, but I’m confused as to why your questions are an indictment of indeterminacy in particular.
Some less pithy answers to your question:
They might come from the same sort of process people go through when generating determinate credences — i.e. thinking through various considerations and trying to quantify them. But, at the step where you find yourself thinking, “Hm, it could be 0.2, but it could also be 0.3 I guess, idk…”, you don’t force yourself to pick just one number.
More formally, interval-valued credences fall out of Bradley’s (2017, sec 11.5.2) representation theorem. Even if your beliefs are just comparative judgments like “is A more/less/equally/[none-of-the-above] likely than B?” — which are realistic for bounded agents like us — if they satisfy all the usual axioms of probabilism except for completeness,[1] they have the structure of a set of probability distributions.
I don’t see probabilities as magic absolutes, but rather as a tool
I’m confused about this “tool” framing, because it seems that in order to evaluate some numerical representation of your epistemic state as “helpful,” you still need to make reference to your beliefs per se. There’s no belief-independent stance from which you can evaluate beliefs as useful (see this post).[2]
The epistemic question here is whether your beliefs per se should have the structure of (in)determinacy, e.g., do you think you should always be able to say “intervention XYZ is net-good, net-bad, or net-neutral for the long-term future”. That’s what I’m talking about when talking about “rational obligation” to have (in)determinate credences in some situation. It’s independent of the kind of mere practical limitations on the precision of numbers in our heads you’re talking about.
Analogy: Your view here is like that of a hedonist saying, “Oh yeah, if I tried always directly maximizing my own pleasure, I’d feel worse. So pursuing non-pleasure things is sometimes helpful for bounded agents, by a hedonist axiology. But sometimes it actually is better to just maximize pleasure.” Whereas I’m the non-hedonist saying, “Okay but I’m endorsing the non-pleasure stuff as intrinsically valuable, and I’m not sure you’ve explained why intrinsically valuing non-pleasure stuff is confused.” (The hedonism thing is just illustrative, to be clear. I don’t think epistemology is totally analogous to axiology.)
for the normal vNM kind of reasons
The VNM theorem only tells you you’re representable as a precise EV maximizer if your preferences satisfy completeness. But completeness is exactly what defenders of indeterminate beliefs call into question. Rationality doesn’t seem to demand completeness — you can avoid money pumps / Dutch books with incomplete preferences.
For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of pi is odd, and gain me $2 if the 20th digit of pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I’ll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I’d end up
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
I don’t think I’d agree with that.
I’d like to understand why, then. As I said, if indeterminate beliefs are on the table, it seems like the straightforward response to unknown unknowns is to say, “By nature, my access to these considerations is murky, so why should I think this particular determinate ‘simplicity prior’ is privileged as a good model?”
[1] (plus another condition that doesn’t seem controversial)
[2] Technically, there are Dutch book and money pump arguments, but those put very few constraints on beliefs, as argued in the linked post.
I appreciated a bunch of things about this comment. Sorry, I’ll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I’m not actually arguing that it’s confused to intrinsically care about the non-hedonist stuff, but that it would be really great to have an account of how the non-hedonist stuff is or isn’t helpful on hedonist grounds. That’s both because this may just be helpful as an input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and because without having such an account it’s sort of hard to assess how much of our caring for non-hedonist goods is grounded in themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I’m not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than “where do the numbers come from?”). Is it some analogue of betting odds? Or what?
And then, you said:
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
To some extent, maybe fighting the hypothetical is a general move I’m inclined to make? This gets at “what does your range of indeterminate credences represent?”. I think if you could step me through how you’d be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don’t need to compute a determinate credence here—it may be enough to have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
No worries! Relatedly, I’m hoping to get out a post explaining (part of) the case for indeterminacy in the not-too-distant future, so to some extent I’ll punt to that for more details.
without having such an account it’s sort of hard to assess how much of our caring for non-hedonist goods is grounded in themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds
Cool, that makes sense. I’m all for debunking explanations in principle. Extremely briefly, here’s why I think there’s something qualitative that determinate credences fail to capture: If evidence, trustworthy intuitions, and appealing norms like the principle of indifference or Occam’s razor don’t uniquely pin down an answer to “how likely should I consider outcome X?”, then I think I shouldn’t pin down an answer. Instead I should suspend judgment, and say that there aren’t enough constraints to give an answer that isn’t arbitrary. (This runs deeper than “wait to learn / think more”! Because I find suspending judgment appropriate even in cases where my uncertainty is resilient. Contra Greg Lewis here.)
Is it some analogue of betting odds? Or what?
No, I see credences as representing the degree to which I anticipate some (hypothetical) experiences, or the weight I put on a hypothesis / how reasonable I find it. IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
how you’d be inclined to think about indeterminate credences in an example like the digits of pi case
Ah, I should have made clear, I wouldn’t say indeterminate credences are necessary in the pi case, as written. Because I think it’s plausible I should apply the principle of indifference here: I know nothing about digits of pi beyond the first 10, except that pi is irrational and I know irrational numbers’ digits are wacky. I have no particular reason to think one digit is more or less likely than another, so, since there’s a unique way of splitting my credence impartially across the possibilities, I end up with 50:50.[1]
Instead, here’s a really contrived variant of the pi case I had too much fun writing, analogous to a situation of complex cluelessness, where I’d think indeterminate credences are appropriate:
Suppose that Sally historically has an uncanny ability to guess the parity of digits of (conjectured-to-be) normal numbers with an accuracy of 70%. Somehow, it’s verifiable that she’s not cheating. No one quite knows how her guesses are so good.
Her accuracy varies with how happy she is at the time, though. She has an accuracy of ~95% when really ecstatic, ~50% when neutral, and only ~10% when really sad. Also, she’s never guessed parities of Nth digits for any N < 1 million.
Now, Sally also hasn’t seen the digits of pi beyond the first 10, and she guesses the 20th is odd. I don’t know how happy she is at the time, though I know she’s both gotten a well-earned promotion at her job and had an important flight canceled.
What should my credence in “the 20th digit is odd” be? Seems like there are various considerations floating around:
The principle of indifference seems like a fair baseline.
But there’s also Sally’s really impressive average track record on N ≥ 1 million.
But also I know nothing about what mechanism drives her intuition, so it’s pretty unclear if her intuition generalizes to such a small N.
And even setting that aside, since I don’t know how happy she is, should I just go with the base rate of 70%? Or should I apply the principle of indifference to the “happiness level” parameter, and assume she’s neutral (so 50%)?
But presumably the evidence about the promotion and canceled flight tells me something about her mood. I guess slightly below neutral overall (but I have little clue how she personally would react to these two things)? How much less?
I really don’t know a privileged way to weigh all this up, especially since I’ve never thought about how much to defer to a digit-guessing magician before. It seems pretty defensible to have a range of credences between, say, 40% and 75%. These endpoints themselves are kinda arbitrary, but at least seem considerably less arbitrary than pinning down to one number.
I could try modeling all this and computing explicit priors and likelihood ratios, but it seems extremely doubtful there’s gonna be one privileged model and distribution over its parameters.
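To gesture at why, here’s a toy sketch (entirely my own made-up model; there’s nothing privileged about these numbers or weights) of how a few defensible-looking modeling choices already spread the answer out:

```python
# Toy model: blend Sally's implied credence with a 50:50 indifference baseline.
# "accuracy" is a guess at her hit rate given her unknown mood;
# "trust" is how much I think her skill generalizes from N >= 1 million down to N = 20.
def credence_odd(accuracy: float, trust: float) -> float:
    return trust * accuracy + (1 - trust) * 0.5

for accuracy in (0.4, 0.55, 0.7):   # fairly sad / roughly neutral / base rate
    for trust in (0.2, 0.5, 0.9):   # weak / moderate / strong generalization to small N
        print(f"accuracy={accuracy}, trust={trust}: {credence_odd(accuracy, trust):.2f}")
# Outputs run from about 0.41 to 0.68 -- reasonable-seeming inputs land in noticeably different places.
```

And that’s with just one toy model; other model structures seem at least as defensible and would spread things out further.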
(I think forming beliefs about the long-term future is analogous in many ways to the above.)
Not sure how much that answers your question? Basically I ask myself what constraints the considerations ought to put on my degree of belief, and try not to needlessly get more precise than those constraints warrant.
[1] I don’t think this is clearly the appropriate response. I think it’s kinda defensible to say, “This doesn’t seem like qualitatively the same kind of epistemic situation as guessing a coin flip. I have at least a rough mechanistic picture of how coin flips work physically, which seems symmetric in a way that warrants a determinate prediction of 50:50. But with digits of pi, there’s not so much a ‘symmetry’ as an absence of a determinate asymmetry.” But I don’t think you need to die on that hill to think indeterminacy is warranted in realistic cause prio situations.
IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
Not sure what I overall think of the betting odds framing, but to speak in its defence: I think there’s a sense in which decisions are more real than beliefs. (I originally wrote “decisions are real and beliefs are not”, but they’re both ultimately abstractions about what’s going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then “X has beliefs” is kind of a useful model for predicting their behaviour in the decision situations. Or could be used (as you imply) to analyse the rationality of their decisions.
I like your contrived variant of the pi case. But to play on it a bit:
Maybe when I first find out the information on Sally, I quickly eyeball and think that defensible credences probably lie within the range 30% to 90%
Then later when I sit down and think about it more carefully, I think that actually the defensible credences are more like in the range 40% to 75%
If I thought about it even longer, maybe I’d tighten my range a bit further again (45% to 55%? 50% to 70%? I don’t know!)
In this picture, no realistic amount of thinking I’m going to do will bring it down to just a point estimate being defensible, and perhaps even the limit with infinite thinking time would have me maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.
But to my mind, this kind of behaviour where you can tighten your understanding by thinking more happens all of the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.
Moreover, after you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can’t even distinguish between:
Cases where we’d collapse our estimated range of defensible credences down to a very small range or even a single point with arbitrary thinking time, but where in practice progress is so slow that it’s not viable
Cases where even in the limit with infinite thinking time, we would maintain a significant range of defensible credences
Because of this, from my perspective the question of whether credences are ultimately indeterminate is … not so interesting? It’s enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won’t be.
I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then “X has beliefs” is kind of a useful model for predicting their behaviour in the decision situations.
I think this is answering a different question, though. When talking about rationality and cause prioritization, what we want to know is what we ought to do, not how to describe our patterns of behavior after the fact. And when asking what we ought to do under uncertainty, I don’t see how we escape the question of what beliefs we’re justified in. E.g. betting on short AI timelines by opting out of your pension is only rational insofar as it’s rational to (read: you have good reasons to) believe in short timelines.
from my perspective the question of whether credences are ultimately indeterminate is … not so interesting? It’s enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won’t be
I’m not sure what you’re getting at here. My substantive claim is that in some cases, our credences about features of the far future might be sufficiently indeterminate that overall we won’t be able to determinately say “X is net-good for the far future in expectation.” If you agree with that, that seems to have serious implications that the EA community isn’t pricing in yet. If you don’t agree with that, I’m not sure if it’s because of (1) thorny empirical disagreements over the details of what our credences should be, or (2) something more fundamental about epistemology (which is the level at which I thought we were having this discussion, so far). I think getting into (1) in this thread would be a bit of a rabbit hole (which is better left to some forthcoming posts I’m coauthoring), though I’d be happy to give some quick intuition pumps. Greaves here (the “Suppose that’s my personal uber-analysis...” paragraph) is a pretty good starting point.