Independent impressions
Your independent impression about something is essentially what you'd believe about that thing if you weren't updating your beliefs in light of peer disagreement; that is, if you weren't taking into account your knowledge about what other people believe and how trustworthy their judgement seems on this topic. Your independent impression can take into account the reasons those people have for their beliefs (inasmuch as you know those reasons), but not the mere fact that they believe what they believe.
Meanwhile, your all-things-considered belief can (and probably should!) also take into account peer disagreement.
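One toy way to picture the relationship, purely as an illustration (the concept doesn't depend on any particular formula): treat your independent impression as a credence formed from the object-level evidence, and your all-things-considered belief as a trust-weighted pool of that credence with your peers' credences. A minimal sketch, assuming a simple log-odds pooling rule and made-up weights:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def all_things_considered(my_impression, peer_credences, self_weight=0.5):
    """Pool my independent impression with peers' stated credences.

    Weighted average in log-odds space, with equal weight on each peer.
    The pooling rule and the weights are illustrative stand-ins for
    "how much I trust each person's judgement on this topic".
    """
    peer_avg = sum(logit(p) for p in peer_credences) / len(peer_credences)
    return sigmoid(self_weight * logit(my_impression) + (1 - self_weight) * peer_avg)

# Independent impression: 80% credence. Three trusted peers sit near 30%.
# The all-things-considered belief lands in between (~0.57).
print(all_things_considered(0.8, [0.3, 0.25, 0.35]))
```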
Armed with this concept, I try to stick to the following epistemic/discussion norms, and I think it's good for other people to do so as well:
I try to keep track of my own independent impressions separately from my all-things-considered beliefs
I try to feel comfortable reporting my own independent impression, even when I know it differs from the impressions of people with more expertise in a topic
I try to be clear about whether, in a given moment, I'm reporting my independent impression or my all-things-considered belief
One rationale for that bundle of norms is to avoid information cascades.
In contrast, when I actually make decisions, I try to always make them based on my all-things-considered beliefs.
For example: My independent impression is that it's plausible that an unrecoverable dystopia is more likely than extinction and that we should prioritise such risks more than we currently do. But this opinion seems relatively uncommon among people who've thought a lot about existential risks. That observation pushes my all-things-considered belief somewhat away from my independent impression and towards what most of those people seem to think. And this all-things-considered belief is what guides my research and career decisions. But I think it's still useful for me to keep track of my independent impression and report it sometimes, or else communities I'm part of might end up with overly certain and homogenous beliefs.
This term, this concept, and these suggested norms aren't at all original to me; see in particular Naming beliefs, this comment, and several of the posts tagged Epistemic humility (especially this one). But I wanted a clear, concise description of this specific set of terms and norms so that I could link to it whenever I say I'm reporting my independent impression, ask someone for theirs, or ask someone whether an opinion they've given is their independent impression or their all-things-considered belief.
My thanks to Lukas Finnveden for suggesting I make this a top-level post (it was originally a shortform).
This work is licensed under a Creative Commons Attribution 4.0 International License.
A few arguments for letting your independent impression guide your research and career decisions instead:
If everyone in EA follows the strategy of letting their independent impression guide their research and career decisions, our distribution of research and career decisions will look like the aggregate of everyone's independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community. By contrast, if everyone acts based on a similar all-things-considered belief, we could overweight the modal scenario. (See the toy example after this list.)
You have more detailed knowledge of your independent impression than of your all-things-considered belief. If you act on your all-things-considered belief, you might take some action and then later talk to a person you were deferring to in taking that action, and realize that a better understanding of their view actually implies that the action you took wasn't particularly helpful.
Working based on your independent impression could also be a comparative advantage: if your path to impact seems more intuitively plausible to you, the work may feel more motivating.
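Here is the toy example promised above, with made-up numbers, just to show the mechanism behind that first argument:

```python
# Hypothetical numbers: 70 people independently think cause X matters most,
# 30 think cause Y does, so the community's aggregate credence is ~70/30.
impressions = ["X"] * 70 + ["Y"] * 30

# If everyone acts on their independent impression, the labour allocation
# mirrors that aggregate:
independent_allocation = {c: impressions.count(c) for c in ("X", "Y")}
print(independent_allocation)  # {'X': 70, 'Y': 30}

# If everyone instead acts on the same all-things-considered belief, they
# all pick the modal cause, and Y gets no one:
deferential_allocation = {"X": 100, "Y": 0}
print(deferential_allocation)
```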
IMO, good rules of thumb are:
Carefully consider other people's beliefs, but don't update too much on them if you don't find the arguments for them persuasive. (There's a big difference between "people are unconcerned about unrecoverable dystopia because of a specific persuasive argument I haven't heard yet" and "people are unconcerned about unrecoverable dystopia because they haven't thought about it much and it doesn't seem like a fashionable thing to be concerned about".)
Defer to your all-things-considered belief in research/career decisions if there's an incentive to do so (e.g. if you can get a job working on the fashionable thing, but not the thing you independently think is most helpful).
I agree with your second and third arguments and your two rules of thumb. (And I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this more concise and keep chugging along with my other work. So I'm glad you raised them in your comment.)
I partially disagree with your first argument, for three main reasons:
People have very different comparative advantages (in other words, people's labour is way less fungible than their donations).
Imagine Alice's independent impression is that X is super important, but she trusts Bob's judgement a fair bit and knows Bob thinks Y is super important, and Alice is way more suited to doing Y. Meanwhile, Bob trusts Alice's judgement a fair bit. And they both know all of this. In some cases, it'll be best from everyone's perspective if Alice does Y and Bob does X. (This is sort of analogous to moral trade, but here the differences in views aren't just moral.)
Not in all cases! Largely for the other two reasons you note. All else held constant, it's good for people to work on things they themselves really understand and buy the case for. But I think this can be outweighed by other sources of comparative advantage.
As another analogy, imagine how much the economy would be impeded if everyone decided whether they overall think plumbing or politics or physics research is the most important thing in general and then pursued that, regardless of their personal skill profiles.
I also think it makes sense for some people to specialise much more than others in working out what our all-things-considered beliefs should be on specific things.
Some people should do macrostrategy research, others should learn how US politics works and what we should do about that, others should learn about specific cause areas, etc.
I think it would be very inefficient and ineffective to try to get everyone to have well-informed independent impressions of all topics that are highly relevant to the question "What career/research decisions should I make?"
I think this becomes all the more true as the EA community grows, as we have more people focused on more specific things and on doing things (vs more high-level prioritisation research and things like that), and as we move into more and more areas.
So I don't really agree that "our distribution of research and career decisions will look like the aggregate of everyone's independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community", or at least I don't think that's a healthy way for our community to be.
I think it's true that, "if everyone acts based on a *similar* all-things-considered belief, we could overweight the modal scenario" (emphasis added), but I think that need not happen. We should try to track the uncertainty in our all-things-considered beliefs, and we should take a portfolio approach.
(I wrote this comment quickly, and this is a big and complex topic where much more could be said. I really don't want readers to round this off as me saying something like "Everyone should just do what 80,000 Hours says without thinking or questioning it".)
Good points.
It's not enough to just track the uncertainty; you also have to have visibility into the current resource allocation. The "defer if there's an incentive to do so" idea helps here, because if there's an incentive, that suggests someone with such visibility thinks there is an under-allocation.
I found the OP helpful and thought it would have been improved by a more detailed discussion of how and why to integrate other people's views. If you update when you shouldn't (e.g. when you think you understand someone's reasons but are confident they're overlooking something), then we get information cascades and groupthink scenarios. By contrast, it seems far more sensible to defer to others if you have to make a decision but don't have the time/ability/resources to get to the bottom of why you disagree. If my doctor tells me to take some medicine for some minor ailment, it doesn't seem worth even trying to check whether their reasoning is sound.
I wonder to what extent maintaining both views simultaneously is practical. Perhaps they would bleed into each other: we might start taking on other people's beliefs for the all-things-considered view and accepting them into our individual beliefs without truly questioning or even recognising it.
It sounds quite nice in theory, and I will try it to see to what extent there is bleed-over.
I like the words inside beliefs and outside beliefs, almost-but-not-quite analogous to inside- and outside-view reasoning. The actual distinction we want to capture is "which beliefs should we report in light of social-epistemological considerations" and "which beliefs should we use to make decisions to change the world".
Agreed that this topic warrants a wiki entry, so I proposed that yesterday just after making this post, and Pablo, our fast-moving wiki maestro, has already made such an entry!
I almost like inside beliefs and outside beliefs, but:
I feel like "outside beliefs" implies that it's only using info about other people's beliefs, or is in any case setting aside one's independent impression.
Whereas I see independent impressions as a subset of what forms our all-things-considered beliefs.
I'd also worry that inside and outside beliefs sounds too close to inside and outside views, which could create confusion because independent impressions can be based on outside views, peer disagreement can be driven by other people's inside views, etc.
One final point: I think your last sentence could be read as implying that inside views are what we should report, in light of social-epistemological considerations. (Though I'm not sure if you actually meant that.) I think whether it's best to report an independent impression, an all-things-considered belief, or both will vary depending on the context. I'd mostly advocate for being willing to report either (rather than shying away from ever stating independent impressions) and being clear about which one is being reported.
On the social-epistemological point: Yes, it varies by context.
One thing I'd add is that I think it's hard to keep inside/outside (or independent and all-things-considered) beliefs separate for a long time. And your independent beliefs are almost certainly going to be influenced by peer evidence, and vice versa.
I think this means that if you are the kind of person whose main value to the community is sharing your opinions (rather than, say, being a fund manager), you should try to cultivate a habit of mostly attending to gears-level evidence and to some extent ignoring testimonial evidence. This will make your own beliefs less personally useful for making decisions, but will make the opinions you share more valuable to the community.