Independent impressions
Your independent impression about something is essentially what you’d believe about that thing if you weren’t updating your beliefs in light of peer disagreement—i.e., if you weren’t taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Meanwhile, your all-things-considered belief can (and probably should!) also take into account peer disagreement.
Armed with this concept, I try to stick to the following epistemic/discussion norms, and I think it’s good for other people to do so as well:
- I try to keep track of my own independent impressions separately from my all-things-considered beliefs
- I try to feel comfortable reporting my own independent impression, even when I know it differs from the impressions of people with more expertise in a topic
- I try to be clear about whether, in a given moment, I’m reporting my independent impression or my all-things-considered belief
One rationale for that bundle of norms is to avoid information cascades.
In contrast, when I actually make decisions, I try to always make them based on my all-things-considered beliefs.
For example: My independent impression is that it’s plausible that an unrecoverable dystopia is more likely than extinction and that we should prioritise such risks more than we currently do. But this opinion seems relatively uncommon among people who’ve thought a lot about existential risks. That observation pushes my all-things-considered belief somewhat away from my independent impression and towards what most of those people seem to think. And this all-things-considered belief is what guides my research and career decisions. But I think it’s still useful for me to keep track of my independent impression and report it sometimes, or else communities I’m part of might end up with overly certain and homogenous beliefs.
This term, this concept, and these suggested norms aren’t at all original to me—see in particular Naming beliefs, this comment, and several of the posts tagged Epistemic humility (especially this one). But I wanted a clear, concise description of this specific set of terms and norms so that I could link to it whenever I say I’m reporting my independent impression, ask someone for theirs, or ask someone whether an opinion they’ve given is their independent impression or their all-things-considered belief.
My thanks to Lukas Finnveden for suggesting I make this a top-level post (it was originally a shortform).
This work is licensed under a Creative Commons Attribution 4.0 International License.
A few arguments for letting your independent impression guide your research and career decisions instead:
- If everyone in EA follows the strategy of letting their independent impression guide their research and career decisions, our distribution of research and career decisions will look like the aggregate of everyone’s independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community. By contrast, if everyone acts based on a similar all-things-considered belief, we could overweight the modal scenario.
- You have more detailed knowledge of your independent impression than of your all-things-considered belief. If you act on your all-things-considered belief, you might take some action and then later talk to a person you were deferring to in taking that action, and realise that a better understanding of their view actually implies the action you took wasn’t particularly helpful.
- Working based on your independent impression could also be a comparative advantage, if it feels more motivating because your path to impact seems more intuitively plausible to you.
IMO, good rules of thumb are:
- Carefully consider other people’s beliefs, but don’t update too much on them if you don’t find the arguments for them persuasive. (There’s a big difference between “people are unconcerned about unrecoverable dystopia because of a specific persuasive argument I haven’t heard yet” and “people are unconcerned about unrecoverable dystopia because they haven’t thought about it much and it doesn’t seem like a fashionable thing to be concerned about”.)
- Defer to your all-things-considered belief in research/career decisions if there’s an incentive to do so (e.g. if you can get a job working on the fashionable thing, but not on the thing you independently think is most helpful).
I agree with your second and third arguments and your two rules of thumb. (I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this concise and keep chugging with my other work. So I’m glad you raised them in your comment.)
I partially disagree with your first argument, for three main reasons:
People have very different comparative advantages (in other words, people’s labour is way less fungible than their donations).
Imagine Alice’s independent impression is that X is super important, but she trusts Bob’s judgement a fair bit and knows Bob thinks Y is super important, and Alice is way more suited to doing Y. Meanwhile, Bob trusts Alice’s judgement a fair bit. And they both know all of this. In some cases, it’ll be best from everyone’s perspective if Alice does Y and Bob does X. (This is sort of analogous to moral trade, but here the differences in views aren’t just moral.)
Not in all cases! Largely for the other two reasons you note. All else held constant, it’s good for people to work on things they themselves really understand and buy the case for. But I think this can be outweighed by other sources of comparative advantage.
As another analogy, imagine how much the economy would be impeded if people decided whether plumbing, politics, or physics research is the most important thing in general and then pursued that, regardless of their personal skill profiles.
I also think it makes sense for some people to specialise much more than others in working out what our all-things-considered beliefs should be on specific things.
Some people should do macrostrategy research, others should learn how US politics works and what we should do about that, others should learn about specific cause areas, etc.
I think it would be very inefficient and ineffective to try to get everyone to have well-informed independent impressions of all topics that are highly relevant to the question “What career/research decisions should I make?”
I think this becomes all the more true as the EA community grows, as we have more people focused on more specific things and on doing things (vs more high-level prioritisation research and things like that), and as we move into more and more areas.
So I don’t really agree that “our distribution of research and career decisions will look like the aggregate of everyone’s independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community”, or at least I don’t think that’s a healthy way for our community to be.
I think it’s true that, “if everyone acts based on a similar all-things-considered belief, we could overweight the modal scenario” (emphasis added), but I think that need not happen. We should try to track the uncertainty in our all-things-considered beliefs, and we should take a portfolio approach.
(I wrote this comment quickly, and this is a big and complex topic where much more could be said. I really don’t want readers to round this off as me saying something like “Everyone should just do what 80,000 Hours says without thinking or questioning it”.)
Good points.
It’s not enough to just track the uncertainty; you also need visibility into the current resource allocation. The “defer if there’s an incentive to do so” idea helps here, because if there’s an incentive, that suggests someone with such visibility thinks there’s an under-allocation.
I found the OP helpful and thought it would have been improved by a more detailed discussion of how and why to integrate other people’s views. If you update when you shouldn’t—e.g. when you think you understand someone’s reasons but are confident they’re overlooking something—then we get information cascades/group think scenarios. By contrast, it seems far more sensible to defer to others if you have to make a decision, but don’t have the time/ability/resources to get to the bottom of why you disagree. If my doctor tells me to take some medicine for some minor ailment, it doesn’t seem worth me even trying to check if their reasoning was sound.
I like the words inside beliefs and outside beliefs, almost-but-not-quite analogous to inside- and outside-view reasoning. The actual distinction we want to capture is between “which beliefs should we report in light of social-epistemological considerations” and “which beliefs should we use to make decisions to change the world”.
Agreed that this topic warrants a wiki entry, so I proposed that yesterday just after making this post, and Pablo—our fast-moving wiki maestro—has already made such an entry!
I almost like inside beliefs and outside beliefs, but:
I feel like “outside beliefs” implies that it’s only using info about other people’s beliefs, or is in any case setting aside one’s independent impression.
Whereas I see independent impressions as a subset of what forms our all-things-considered beliefs.
I’d also worry that inside and outside beliefs sounds too close to inside and outside views, which could create confusion because independent impressions can be based on outside views, peer disagreement can be driven by other people’s inside views, etc.
One final point: I think your last sentence could be read as implying that inside views are what we should report, in light of social-epistemological considerations. (Though I’m not sure if you actually meant that.) I think whether it’s best to report an independent impression, an all-things-considered belief, or both will vary depending on the context. I’d mostly advocate for being willing to report either (rather than shying away from ever stating independent impressions) and being clear about which one is being reported.
On the social-epistemological point: Yes, it varies by context.
One thing I’d add is that I think it’s hard to keep inside/outside (or independent and all-things-considered) beliefs separate for a long time. And your independent beliefs are almost certainly going to be influenced by peer evidence, and vice versa.
I think this means that if you are the kind of person whose main value to the community is sharing your opinions (rather than, say, being a fund manager), you should try to cultivate a habit of mostly attending to gears-level evidence and to some extent ignoring testimonial evidence. This will make your own beliefs less personally useful for making decisions, but will make the opinions you share more valuable to the community.