This is exciting, I’m optimistic that digesting the formalisms here will help me.
Ideally I’d like to think about obligations or burdens as they relate to epistemic diversity, primarily a group’s obligation to seek out quality dissent. I’ve recently come to think that echo chamber risks are actually a lot less bad than the damage I’ve taken from awareness of what unsophisticated critics are up to. This awareness has made me less magnanimous, less enthusiastic about cooperation, etc. To what extent is it my burden to protect my attention better, and to what extent is it the critic’s burden to be more sophisticated? The former seems fraught: any heuristic would most likely be an operationalization of parochial preferences or cultural baggage, or I’d have no way of being sure it’s not. The latter seems fraught: it’s out of my control.
This is related to the hugboxing problem, even though I think that post was about how taking unsophisticated lines of attack seriously underserves the critic, and I’m talking about how it underserves us.
Thanks, I found this very helpful for formalising and structuring how I think about the EA community’s positive and negative idiosyncrasies.
Great, thanks!
Me as well! Thanks a lot!
Some interesting points re: considering how beliefs were formed, but I think the argument proves too much.
One of the main values of being able to defer to the EA community to a certain extent is knowing that the community will often use a process similar to yours to come to a conclusion, so that you have an estimate of the conclusion you would have reached yourself if you had more time.
Of course, you also need to take into account the possibility of there being a deference cycle.
Sorry, I’m afraid I don’t follow on either count. What’s a claim you’re saying would follow from this post but isn’t true?
More weight on community opinions than you suggested.
Would you have a moment to come up with a precise example, like the one at the end of my “minimal solution” section, where the argument of the post would justify putting more weight on community opinions than seems warranted?
No worries if not—not every criticism has to come with its own little essay—but I for one would find that helpful!
Sorry, I’m trying to reduce the amount of time I spend on the forum.
Should that say lower, instead?
It should, thanks! Fixed
I’m a bit confused by this. Suppose that EA has a good track record on an issue where its beliefs have been unusual from the get-go. For example, I think that by temperament EAs tend to be more open to sci-fi possibilities than others, even before having thought much about them; and that over the last decade or so we’ve increasingly seen sci-fi possibilities arising. Then I should update towards deferring to EAs because it seems like we’re in the sort of world where sci-fi possibilities happen, and it seems like others are (irrationally) dismissing these possibilities.
On a separate note: I currently don’t think that epistemic deference as a concept makes sense, because defying a consensus has two effects that are often roughly the same size: it means you’re more likely to be wrong, and it means you’re creating more value if right.* But if so, then using deferential credences to choose actions will systematically lead you astray, because you’ll neglect the correlation between likelihood of success and value of success.
Toy example: your inside view says your novel plan has 90% chance of working, and if it does it’ll earn $1000; and experts think it has 10% chance of working, and if it does it’ll earn $100. Suppose you place as much weight on your own worldview as experts’. Incorrect calculation: your all-things-considered credence in your plan working is 50%, your all-things-considered estimate of the value of success is $550, your all-things-considered expected value of the plan is $275. Better calculation: your worldview says that the expected value of your plan is $900, the experts think the expected value is $10, average these to get expected value of $455—much more valuable than in the incorrect calculation!
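Here's the same arithmetic as a small Python sketch, using exactly the numbers and the equal weighting from the toy example (nothing beyond the example is assumed):

```python
# The two calculations from the toy example above.
p_inside, value_inside = 0.9, 1000   # your inside view
p_expert, value_expert = 0.1, 100    # expert view
w = 0.5                              # equal weight on each worldview

# "Incorrect" calculation: average credences and values separately, then
# multiply the averages. This drops the correlation between probability
# of success and value conditional on success.
p_mixed = w * p_inside + (1 - w) * p_expert              # 0.5
value_mixed = w * value_inside + (1 - w) * value_expert  # 550.0
ev_incorrect = p_mixed * value_mixed                     # 275.0

# "Better" calculation: compute expected value within each worldview,
# then average the expected values.
ev_inside = p_inside * value_inside                      # 900.0
ev_expert = p_expert * value_expert                      # 10.0
ev_better = w * ev_inside + (1 - w) * ev_expert          # 455.0

print(ev_incorrect, ev_better)  # 275.0 455.0
```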
Note that in the latter calculation we never actually calculated any “all-things-considered credences”. For this reason I now only express such credences with a disclaimer like “but this shouldn’t be taken as action-guiding”.
* A third effect which might be bigger than either of them: it motivates you to go out and try stuff, which will give you valuable skills and make you more correct in the future.
I’m defining a way of picking sides in disagreements that makes more sense than giving everyone equal weight, even from a maximally epistemically modest perspective. The policy “give EAs more weight all around, because they’ve got a good track record on things where they’ve been outside the mainstream” is criticizable on epistemic modesty grounds: one could object, “Others can see the track record as well as you. Why do you think the right amount to update on it is more than they think the right amount is?” You can salvage a thought along these lines in an epistemic-modesty-criticism-proof way, but it would need some further story about how, say, you have some “inside information” about the fact of EAs’ better track record. Does that help?
Your quote is replying to my attempt at a “gist”, in the introduction—I try to spell this out a bit further in the middle of the last section, in the bit where I say “More broadly, groups may simply differ in their ability to acquire information, and it may be that a particular group’s ability on this front is difficult to determine without years of close contact.” Let me know if that bit clarifies the point.
I don’t follow. I get that acting on low-probability scenarios can let you get in on neglected opportunities, but you don’t want to actually get the probabilities wrong, right?
In any event, maybe messing up the epistemics also makes it easier for you to spot neglected opportunities or something, and maybe this benefit sometimes kind of cancels out the cost, but this doesn’t strike me as relevant to the question of whether epistemic deference as a concept makes sense. Startup founders may benefit from overconfidence, but overconfidence as a concept still makes sense.
I reject the idea that all-things-considered probabilities are “right” and inside-view probabilities are “wrong”, because you should very rarely be using all-things-considered probabilities when making decisions, for reasons of simple arithmetic (as per my example). Tell me what you want to use the probability for and I’ll tell you what type of probability you should be using.
You might say: look, even if you never actually use all-things-considered probabilities in the real world, at least in theory they’re still normatively ideal. But I reject that too—see the Anthropic Decision Theory paper for why.
I don’t fully follow this explanation, but if it’s true that defying a consensus has two effects that are the same size, doesn’t that suggest you can choose any consensus-defying action because the EV is the same regardless, since the likelihood of you being wrong is ~cancelled out by the expected value of being right?
Also, the “value if right” doesn’t seem likely to be modulated only by the extent to which you are defying the consensus?
Example:
If you’re flying a plane and considering a new way of landing that goes against what 99% of pilots think is reasonable, the “value if right” might be much smaller than the negative effects of being wrong. It’s also not clear to me that if you instead take a landing approach that goes against what 99.9% of pilots think is reasonable, you will 10x your “value if right” compared to the 99% action.
The probability of success in some project may be correlated with value conditional on success in many domains, not just ones involving deference, and we typically don’t think that gets in the way of using probabilities in the usual way, no? If you’re wondering whether some corner of something sticking out of the ground is a box of treasure or a huge boulder, maybe you think that the probability you can excavate it is higher if it’s the box of treasure, and that there’s only any value to doing so if it is. The expected value of trying to excavate is P(treasure) * P(success|treasure) * value of treasure. All the probabilities are “all-things-considered”.
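Spelling that out with made-up numbers (the example doesn’t give specific values, so these are purely illustrative):

```python
# A sketch of the excavation example. The correlation between "what's
# buried" and "probability of successful excavation" is handled by
# ordinary conditioning on all-things-considered probabilities.
p_treasure = 0.3                # P(treasure) -- hypothetical number
p_success_given_treasure = 0.8  # P(success | treasure) -- hypothetical
value_of_treasure = 1000        # value if it's treasure and you get it out
value_if_boulder = 0            # no value to excavating a boulder

ev_excavate = p_treasure * p_success_given_treasure * value_of_treasure
# The boulder branch contributes nothing: (1 - p_treasure) * ... * 0
print(ev_excavate)  # 240.0
```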
I respect you a lot, both as a thinker and as a friend, so I really am sorry if this reply seems dismissive. But I think there’s a sort of “LessWrong decision theory black hole” that makes people a bit crazy in ways that are obvious from the outside, and this comment thread isn’t the place to adjudicate all that. I trust that most readers who aren’t in the hole will not see your example as demonstration that you shouldn’t use all-things-considered probabilities when making decisions, so I won’t press the point beyond this comment.
From my perspective it’s the opposite: epistemic modesty is an incredibly strong skeptical argument (a type of argument that often gets people very confused), extreme forms of which have been popular in EA despite leading to conclusions which conflict strongly with common sense (like “in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue”).
In practice, fortunately, even people who endorse strong epistemic modesty don’t actually implement it, and thereby manage to still do useful things. But I haven’t yet seen any supporters of epistemic modesty provide a principled way of deciding when to act on their own judgment, in defiance of the conclusions of (a large majority of) the 8 billion other people on earth.
By contrast, I think that focusing on policies rather than all-things-considered credences (which is the thing I was gesturing at with my toy example) basically dissolves the problem. I don’t expect that you believe me about this, since I haven’t yet written this argument up clearly (although I hope to do so soon). But in some sense I’m not claiming anything new here: I think that an individual’s all-things-considered deferential credences aren’t very useful for almost the exact same reason that it’s not very useful to take a group of people and aggregate their beliefs into a single set of “all-people-considered” credences when trying to get them to make a group decision (at least not using naive methods; doing it using prediction markets is more reasonable).
That said, thanks for sharing the Anthropic Decision Theory paper! I’ll check it out.
I appreciate the reminder that “these people have done more research” is itself a piece of information that others can update on, and that the mystery of why they haven’t updated isn’t solved. (Just to ELI5, we’re assuming no secret information, right?)
I suppose this is very similar to “are you growing as a movement because you’re convincing people or via selection effects” and if you know the difference you can update more confidently on how right you are (or at least how persuasive you are).
Thanks!
No actually, we’re not assuming in general that there’s no secret information. If other people think they have the same prior as you, and think you’re as rational as they are, then the mere fact that they see you disagreeing with them should be enough for them to update on. And vice-versa. So even if two people each have some secret information, there’s still something to be explained as to why they would have a persistent public disagreement. This is what makes the agreement theorem kind of surprisingly powerful.
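To make that concrete, here’s a minimal sketch of the base case, with a made-up signal model: two people share a prior, each privately observes a “secret” signal, and they honestly announce their posteriors.

```python
# Common prior, private signals, honest announcements: the disagreement
# dissolves after one exchange, even though the raw observations stay secret.
def posterior(prior_h, signals):
    """P(H | signals), with P(signal=1 | H) = 0.7 and P(signal=1 | not H) = 0.3."""
    like_h, like_not_h = prior_h, 1 - prior_h
    for s in signals:
        like_h *= 0.7 if s == 1 else 0.3
        like_not_h *= 0.3 if s == 1 else 0.7
    return like_h / (like_h + like_not_h)

prior = 0.5
alice_signal, bob_signal = 1, 0          # private observations

alice_view = posterior(prior, [alice_signal])  # 0.7
bob_view = posterior(prior, [bob_signal])      # 0.3

# Each announced posterior reveals the underlying signal (0.7 <-> signal 1,
# 0.3 <-> signal 0), so after one honest exchange both can condition on both
# signals and land on the same number.
shared_view = posterior(prior, [alice_signal, bob_signal])  # 0.5
print(alice_view, bob_view, shared_view)
```

What breaks this, as noted below, is not secrecy per se but a difference in priors (or doubts about the other person’s rationality or honesty) that isn’t itself common knowledge.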
The point I’m making here though is that you might have some “secret information” (even if it’s not spelled out very explicitly) about the extent to which you actually do have, say, a different prior from them. That particular sort of “secret information” could be enough to not make it appropriate for you to update toward each other; it could account for a persistent public disagreement. I hope that makes sense.
Agreed about the analogy to how you might have some inside knowledge about the extent to which your movement has grown because people have actually updated on the information you’ve presented them vs. just selection effects or charisma. Thanks for pointing it out!
Right, right, I think on some level this is very unintuitive, and I appreciate you helping me wrap my mind around it—even secret information is not a problem as long as people are not lying about their updates (though if all updates are secret there’s obviously much less to update on)
Yup!
I found the framing of “Is this community better-informed relative to what disagreers expect?” new and useful, thank you!
To point out the obvious: Your proposed policy of updating away from EA beliefs if they come in large part from priors is less applicable for many EAs who want to condition on “EA tenets”. For example, longtermism depends on being quite impartial regarding when a person lives, but many EAs would think it’s fine that we were “unusual from the get-go” regarding this prior. (This is of course not very epistemically modest of them.)
Here are a few more not-well-fleshed-out, maybe-obvious, maybe-wrong concerns with your policy:
It’s kind of hard to determine whether EA beliefs are weird because we were weird from the get-go or because we did some novel piece of research/thinking. For example, was Toby Ord concerned about x-risks in 2009 because he had unusual priors or because he had thought about novel considerations that are obscure to outsiders? People would probably introduce their own biases while making this judgment. I think you could even try to make an argument like this about polyamory.
People probably tend to think a community is better-informed than expected the more time they spend engaging with it; at least this is what I see empirically. So for people who’ve engaged a lot with EA, your policy of updating towards EA beliefs when EA seems better-informed than expected probably leads to deferring asymmetrically more to EA than to other communities, since they will have engaged less with those. (Of course you could try to consciously correct for that.)
I overall often have the concern with EA beliefs that “maybe most big ideas are wrong”, just like most big ideas have been wrong throughout history. In this frame, our little inside pet theories and EA research provide almost no Bayesian information (because they are likely to be wrong) and it makes sense to closely stick to whatever seems most “common sense” or “established”. But I’m not well-calibrated on how true “most big ideas are wrong” is. (This point is entirely compatible with what you said in the post but it changes the magnitude of updates you’d make.)
Side-note: I found this post super hard to parse and would’ve appreciated it a lot if it was more clearly written!
Thanks! Glad to hear you found the framing new and useful, and sorry to hear you found it confusingly written.
On the point about “EA tenets”: if you mean normative tenets, then yes, how much you want to update on others’ views on that front might be different from how much you want to update on others’ empirical beliefs. I think the natural dividing line here would be whether you consider normative tenets more like beliefs (in which case you update when you see others disagreeing—along the lines of this post, say) or more like preferences (in which case you don’t). My own guess is that they’re more like beliefs—i.e. we should take the fact that most people reject temporal impartiality as at least some evidence against longtermism—but thanks for noting that there’s a distinction one might want to make here.
On the three bullet points: I agree with the worries on all counts! As you sort of note, these could be seen as difficulties with “implementing the policy” appropriately, rather than problems with the policy in the abstract, and that is how I see them. But I take the point that if an idea is hard enough to implement then there might not be much practically to be learned from it.