This question is studied in veritistic social epistemology. I recommend playing around with the Laputa network epistemology simulation to get some practical model feedback to notice how it’s similar and dissimilar to your model of how the real world community behaves. Here are some of my independent impressions on the topic:
Distinguish between testimonial and technical evidence. The former is what you take on trust (epistemic deference, Aumann-agreement stuff), and the latter is everything else (argument, observation, math).
Under certain conditions, there’s a trade-off between the accuracy of crowdsourced estimates (e.g. surveys on AI risk) and the widespread availability of decision-relevant current best guesses (cf. simulations of the “Zollman effect”).
Personally, I think simulations plausibly underestimate the effect. Think of it like doing Monte-Carlo Tree Search over ideaspace, where we want to have a certain level of randomness to decide which branches of the tree to go down. And we arguably can’t achieve that randomness if we get stuck in certain paradigms due to the Einstellung effect (sorry for jargon). Communicating paradigms can be destructive of underdeveloped paradigms.
To increase the breadth of exploration over ideaspace, we can encourage “community bubbliness” among researchers (aka “small-world network”), where communication inside bubbles is high, and communication between them is limited. There’s a trade-off between the speed of research progress (for any given paradigm) and the breadth and rigour of the progress. Your preference for how to make this trade-off could depend on your view of AI timelines.
How much you should update on someone’s testimony depends on your trust function relative to that person. Understanding trust functions is one of the most underappreciated leverage points for improving epistemic communities and “raising sanity waterlines”, imo.
If a community has a habit of updating trust functions naively (e.g. increase or decrease your trust towards someone based on whether they give you confirmatory testimonies), it can lead to premature convergence and polarisation of group beliefs. And on a personal level, it can indefinitely lock you out of areas in ideaspace/branches on the ideatree you could have benefited from exploring. [Laputa example] [example 2]
Committing to only updating trust functions based on direct evidence of reasoning ability and sincerity, and never on object-level beliefs, can be a usefwl start. But all evidence is entangled, and personally, I’m ok with locking myself out of some areas in ideaspace because I’m sufficiently pessimistic about there being any value there. So I will use some object-level beliefs as evidence of reasoning-ability and sincerity and therefore use them to update my trust functions.
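A toy sketch of the naive-trust-updating dynamic (the update rule and numbers here are my own assumptions for illustration, not taken from Laputa or any particular simulation):

```python
# Toy model of naive trust updating: trust in a source rises after
# confirming testimony and falls after disconfirming testimony,
# regardless of the source's actual reliability.

my_belief = 0.9     # I already think the hypothesis is likely
trust = 0.5         # initial trust in a dissenting source

for testimony in [0.2, 0.3, 0.1, 0.25]:   # the source keeps disagreeing
    confirms = (testimony > 0.5) == (my_belief > 0.5)
    trust += 0.1 if confirms else -0.1
    trust = max(0.0, min(1.0, trust))

# trust ends near 0.1: the dissenter is now almost ignored, locking
# me out of whatever evidence they actually had.
```

The point of the sketch is that the trust update never consults the source's track record, only agreement with my prior, so dissenters get muted and the group converges prematurely.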
Deferring to academic research can have the bandwidth problem[1] you’re talking about, and this is especially a problem when the research has been optimised for non-EA relevant criteria. Holden’s History is a good example: he shouldn’t defer to expert historians on questions related to welfare throughout history, because most academics are optimising their expertise for entirely different things.
Deferring to experts can also be a problem when experts have been selected for their beliefs to some extent. This is most likely true of experts on existential risk.
Deferring to community members you think know better than you is fairly harmless if no one defers to you in turn. I think a healthy epistemic community has roles for people to play for each area of expertise.
Decision-maker: If you make really high-stakes decisions, you should use all the evidence you can, testimonial or otherwise, in order to make better decisions.
Expert: Your role is to be safe to defer to. You realise that crowdsourced expert beliefs provide more value to the community if you try to maintain the purity of your independent impressions, so you focus on technical evidence and you’re very reluctant to update on testimonial evidence even from other experts.
Explorer: If most of your contributions come from contributing with novel ideas, perhaps consider taking risks by exploring neglected areas in ideaspace at the cost of potentially making your independent impressions less accurate on average compared to the wisdom of the crowd.
Honestly, my take on the EA community is that it’s surprisingly healthy. It wouldn’t be terrible if EA kept doing whatever it’s doing right now. I think it ranks unreasonably high in the possible ways of arranging epistemic communities. :p
I like this term for it! It’s better than calling it the “Daddy-is-a-doctor problem”.
[Without implying I agree with everything …]
This comment was awesome, super high density of useful stuff. I wonder if you’d consider making it a top level post?
Thanks<3
Well, I’ve been thinking about these things precisely in order to make top-level posts, but then my priorities shifted because I ended up thinking that the EA epistemic community was doing fine without my interventions, and all that remained in my toolkit was cool ideas that weren’t necessarily usefwl. I might reconsider it. :p
Keep in mind that in my own framework, I’m an Explorer, not an Expert. Not safe to defer to.
On my impressions: relative to most epistemic communities I think EA is doing pretty well. Relative to a hypothetical ideal I think we’ve got a way to go. And I think the thing is good enough to be worth spending perfectionist attention on trying to make excellent.
Some (controversial) reasons I’m surprisingly optimistic about the community:
1) It’s already geographically and social-network bubbly and explores various paradigms.
2) The social status gradient is aligned with deference at the lower levels, and differentiation at the higher levels (to some extent). And as long as testimonial evidence/deference flows downwards (where they’re likely to improve opinions), and the top-level tries to avoid conforming, there’s a status push towards exploration and confidence in independent impressions.
3) As long as deference is mostly unidirectional (downwards in social status) there are fewer loops/information cascades (less double-counting of evidence), and epistemic bubbles are harder to form and easier to pop (from above). And social status isn’t that hard to attain for conscientious smart people, I think, so smart people aren’t stuck at the bottom where their opinions are under-utilised? Idk.
Probably more should go here, but I forget. The community could definitely be better, and it’s worth exploring how to optimise it (any clever norms we can spread about trust functions?), so I’m not sure we disagree except you happen to look like the grumpy one because I started the chain by speaking optimistically. :3
Hi Emrik, wow, I thought this was a genuinely great comment deserving of its own top-level post. From your response to Owen above and your recent lack of top-level posting history it doesn’t seem like you’ll do it anytime soon, so I’m hoping to nudge you to reconsider just in case you’ve warmed to the idea since :) (of course feel free to say no)
Thank you for appreciating! 🕊️
Alas, I’m unlikely to prioritize writing except when I lose control of my motivations and I can’t help it.[1] But there’s nothing stopping someone else extracting what they learn from my other comments¹ ² ³ re deference and making post(s) from it, no attribution required.
(Arguably it’s often more educational to learn something from somebody who’s freshly gone through the process of learning it. Knowledge-of-transition can supplement knowledge-of-target-state.)
Haphazardly selected additional points on deference:
Succinctly, the difference between Equal-Weight deference and Bayes
“They say O(H)=1:2. | Then I can infer that they updated from 1:6 to 1:2 by multiplying with a likelihood ratio of 3:1. And because C and D, I can update on that likelihood ratio in order to end up with a posterior of O(H)=6:1. | The equal weight view would have me adjust down, whereas Bayes tells me to adjust up.”
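The arithmetic in that quote can be checked directly (the prior of 2:1 for "my" odds is implied by the quoted numbers, not stated in it):

```python
from fractions import Fraction

their_prior = Fraction(1, 6)       # they started at odds 1:6
their_posterior = Fraction(1, 2)   # they report odds 1:2

# Infer the likelihood ratio their evidence must have carried:
likelihood_ratio = their_posterior / their_prior      # 3:1

# Bayes: apply that same likelihood ratio to my own prior of 2:1.
my_prior = Fraction(2, 1)
my_posterior = my_prior * likelihood_ratio            # 6:1, i.e. adjust up

# Equal-weight view: average our credences instead.
def odds_to_p(o):
    return o / (1 + o)

equal_weight = (odds_to_p(my_prior) + odds_to_p(their_posterior)) / 2
# 1/2, below my prior credence of 2/3, i.e. adjust down
```

So from the same testimony, Bayes moves my credence up (2/3 to 6/7) while equal weighting drags it down to 1/2, which is the contrast the quote is pointing at.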
Paradox of Expert Opinion
“Ask the experts. They’re likely the most informed on the issue. Unfortunately, they’re also among the groups most heavily selected for belief in the hypothesis.”
It’s sort of paradoxical. As a result of my investigations into social epistemology 2 years ago, I came away with the conclusion that I ought to focus ~all my learning-efforts on trying to (recursively) improve my own cognition, with ~no consideration for my ability to teach anyone anything of what I learn. My motivation to share my ideas is an impurity that I’ve been trying hard to extinguish. Writing is not useless, but progress toward my goal is much faster when I automatically think in the language I construct purely to communicate with myself.
Thanks for the thoughtful & generous response and interesting links Emrik :) The natural cluster of questions that include deference has been on my mind ever since I learned about epistemic learned helplessness years ago, so I appreciate the pointers.
I confess to being a bit alarmed by your footnote. For reasoning transparency’s sake, would you be willing to share how you were led to the conclusion to turn inward? I have in my own way been trying to improve clarity of thought, although my reasons include an extrinsic component (e.g. I really like helping people figure out their problems, or fail productively in trying), and even the intrinsic component (clarity makes my heart sing) often points me outward (cf. steps 3 and 8 here) and can also look like teaching others. And I’ve noticed that both can speed up my progress greatly despite reducing time spent just thinking, the former akin to being Alice not Bob, and the latter in a way a bit like “pruning the branching factor” or making me realize I had been overlooking fruitful branches or just modeling the whole thing wrongly. This is the overall “vibe” from which I doubt the effectiveness of your inward turn.
But that’s admittedly not the real reason I’m writing this; my real reason echoes Julia’s comment.
Thanks for writing this post. I think it’s really useful to distinguish the two types of deference and to push the conversation toward the question of when to defer, as opposed to how good it is in general.
But I think “deferring to authority” is bad branding (as you worry about below), and I’m not sure your definition totally captures what you mean. I think it’s probably worth changing, even though I haven’t come up with great alternatives.
Branding. To my ear, deferring to authority has a very negative connotation. It suggests deferring to a preexisting authority because they have power over you, not deferring to a person/norm/institution/process because you’re bought into the value of coordination. Relatedly, it doesn’t seem like the most natural phrase to capture a lot of your central examples.
Substantive definition. I don’t think “adopting someone else’s view because of a social contract to do so” is exactly what you mean. It suggests that if someone were not to defer in one of these cases, they’d be violating a social contract (or at least a norm or expectation), whereas I think you want to include lots of instances where that’s not the case (e.g. you might defer as a solution to the unilateralist’s curse even if you were under no implicit contract to do so). Most of your examples also seem to be more about acting based on someone else’s view or a norm/rule/process/institution and not really about adopting their view.[1] This seems important since I think you’re trying to create space for people to coordinate by acting against their own view while continuing to hold that view.
I actually think the epistemics v. action distinction is a cleaner distinction so I might base your categories just on whether you’re changing your views v. your actions (though I suspect you considered this and decided against).
***
Brainstorm of other names for non-epistemic deferring (none are great). Pragmatic deferring. Action deferring. Praxological deferring (eww). Deferring for coordination.
(I actually suspect that you might just want to call this something other than deferring).
[1] Technically, you could say you’re adopting the view that you should take some action but that seems confusing.
Perhaps “deferring on views” vs “delegating choices” ?
I think that’s an improvement though “delegating” sounds a bit formal and it’s usually the authority doing the delegating. Would “deferring on views” vs “deferring on decisions” get what you want?
No, that doesn’t work because epistemic deferring is also often about decisions, and in fact one of the key distinctions I want to make is when someone is deferring on a decision how that can be for epistemic or authority reasons, and how those look different.
I agree it’s slightly awkward that authorities often delegate, but I think that that’s usually delegating tasks; “delegating choices” to me has much less connotation of a high-status person delegating to a low-status person.
Although … one of the examples of “deferring to authority” in my sense is a boss deferring to the authority of a subordinate after the subordinate has been tasked with making a decision, even though the boss disagrees and has the power to override it. With this example, “delegating choice” has very much the right connotation, and “deferring to authority” feels a bit of a stretch.
Just to make sure I understand correctly: is “delegating choice” short for “delegating a choice (of an action to be made)”?
If so, I think this is a much better phrase at least than deferring to authority, and would even propose editing the OP to suggest this as an alternative phrase / address this so that others don’t get the wrong impression—based on our conversation it seems we have more agreement than I would have guessed from reading the OP alone.
Yeah that does sell me a bit more on delegating choice.
A related post on the importance of delegating choice, though not framed as a trade-off between buying into a thing vs doing it, is Jan Kulveit’s What to do with people from a few years ago.
I think this is getting downvotes and I’m curious whether this is because:
People are disagreeing with the conclusions?
It’s poorly explained/confusing?
Something about tone is rubbing people the wrong way?
Something else?
[Writing in a personal capacity, etc.]
I found this post tone-deaf, indeed chilling, when I read it, in light of the current dynamics of the EA movement. I think it’s the combination of:
(1) lots of money appearing in EA (with the recognition this might be a big problem for optics and epistemics and there are already ‘bad omens’)
(2) the central bits of EA seeming to obviously push an agenda (EA being ‘just longtermism’ now, with CEA’s CEO, Max Dalton, indicating their content will be “70-80% longtermism”; CEA’s Julia Wise is suggesting people shouldn’t talk to high net worths themselves, but should funnel them towards LongView)
(3) this post then saying people should defer to authority.
Taken in isolation, these are somewhat concerning. Taken together, they start to look frightening—of the flavour, “join our elite society, see the truth, obey your leaders”.
I am pretty sure anyone reading this will agree that this is not how we want EA either to be or to be perceived to be. However, things do seem to be moving in that direction, and I don’t think this post helped—sorry, Owen, I am sure you wrote it with the best of intentions. But the road to hell, pavements, etc.
I am concerned about some of the long-termism push but didn’t get that vibe from this post, as an alternate perspective
Edit: wow why is Michael getting downvoted though, wtf? different people can have different impressions of the tone of a written piece of work, it’s not harmful to point it out
Perhaps people didn’t like the cult-ish comparison? But criticising someone for saying they are feeling something is cult-ish is, um, well, pretty cult-ish...
Or perhaps it’s people who can’t properly distinguish between “criticising because you care and want to improve something” and “criticising to be mean” and mistakenly assume I’m doing the latter (despite my strenuous attempts to make it clear I am doing the former).
I sort of guess the second thing? Although I never downvoted, at least I felt a little defensive and negative about “tone-deaf, indeed chilling” and didn’t upvote despite having found your comment useful!
(I’ve now noticed the discrepancy and upvoted it)
I don’t think we should only downvote harmful things, we should instead look at the amount of karma and use our votes to push the score to the value we think the post should be at.
I downvoted the comment because:
Saying things like “… obviously push an agenda …” and “I’m pretty sure anyone reading this …” has persuasion-y vibes which I don’t like.
Saying “this post says people should defer to authority” is a bit of a straw/weak man and isn’t very charitable.
Using votes to push towards the score we think it should be at sounds worse than just individually voting according to some thresholds of how good/helpful/whatever a post needs to be? I’m worried about zero sum (so really negative sum because of the effort) attempts to move karma around where different people are pushing in different ways, where it’s hard to know how to interpret the results, compared to people straightforwardly voting without regard to others’ votes.
At least, if we should be voting to push things towards our best guess I think the karma system should be reformed to something that plays nice with that—e.g. each individual gives their preferred score, and the displayed karma is the median.
(I think that the pushing towards a score thing wasn’t a crux in downvoting, I think there are lots of reasons to downvote things that aren’t harmful as outlined in the ‘how to use the form post/moderator guidelines’)
I think that karma is supposed to be a proxy for the relative value that a post provides.
I’m not sure what you mean by zero-sum here, but I would have thought that the control-system-type approach is better, as the steady-state value will be pushed towards the mean of what users see as the true value of the post. I think that this score plus the total number of votes is quite easy to interpret.
The everyone-voting-independently approach performs poorly when some posts have many more views than others (so it seems to track something more like how many people saw it and liked it, rather than whether the post is high quality).
I think I misunderstand your concern, but the control system approach seems, on the surface to be much better to me, but I am keen to find the crux here, if there is one.
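A toy comparison of the two aggregation schemes being discussed (purely illustrative; these numbers and rules are my own assumptions, not how Forum karma actually works):

```python
from statistics import median

# Each voter's view of the "right" score for a post:
preferred = [5, 10, 15, 25, 40]

# Independent up/down voting: everyone who likes the post at all upvotes,
# so karma mostly tracks how many people saw and liked it.
independent_karma = sum(1 for p in preferred if p > 0)   # 5

# Median-of-preferred-scores: insensitive to view counts, and one voter
# exaggerating their stated score barely moves the result.
median_karma = median(preferred)                          # 15
```

Under the median scheme a post's score stops growing with audience size, which is the property the control-system view is after, without different voters pushing against each other's votes.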
Interesting, thanks.
So, my immediate reaction is that I can feel that kind of concern, but I think the “see the truth, obey your leaders” is exactly the kind of dynamic I’m worried about! & then I’m trying to help avoid it by helping to disambiguate between epistemic deferring and deferring to authority (because conflating them is where I think a lot of the damage comes from).
So then I’m wondering if I’ve made some bad branding decisions (e.g. should I have used a different term for what I called “deferring to authority”? It’s meant to evoke that someone has authority in a particular domain, not some kind of general purpose authority, and not that they know a lot), or if I’m failing to frame my positions correctly? I guess at least a bit of the latter, since it sounds like you read my post as saying people should defer more? Which definitely wasn’t something I intended to say (I’m confused; I’d like to see more deferring of some types and less of other types; I guess overall I’d be into a bit less deferring but not confident enough about that that I’d want to make it a headline).
(fwiw I upvoted this post, because I thought it raised a lot of interesting points that are worth discussing despite disagreeing some bits).
In sum: I think your post sometimes lacks specificity which makes people think you’re talking more generally than (I suspect) you are.
Who exactly you’re proposing doesn’t buy into the agenda—this is left vague in your post. Are you envisioning 20% of people? 50%? What kinds of roles are these folks in? Is it only junior level non-technical roles or even mid-managers doing direct work?
Those details matter because I think I’d be fine with e.g. junior ops people at an AI org not fully buying the specific research agenda of that org, but I’m not sure about the other roles here.
Who do you count as the EA community or movement? I think if we are thinking big-tent EA, where you have people with the needed skills for the movement but not necessarily a deep understanding of EA, I’m more sympathetic to this argument. But if we’re thinking core-community EA, where many people are doing things like community building or for whom EA is a big part of their lives, I feel much more uncomfortable with people deferring to authority; perhaps I feel particularly uncomfortable with people in the meta space deferring to authority.
I vibe with the sentiment “particularly uncomfortable with people in the meta space deferring to authority”, but I think it’s too strong. e.g. I think it’s valuable for people to be able to organize big events, and delegate tasks among the event organizers.
Maybe I’m more like “I feel particularly uncomfortable with people in the meta space deferring without high bandwidth, and without explicit understanding that that’s what they’re doing”.
I think the important thing with delegation, which Howie pointed out, is that there is a social contract in the example you gave of event organising (between the volunteer and volunteer manager, or employer and contractor), where I’d expect that, in the process of choosing to sign up for this job, the person makes a decision based on their own thinking (or epistemic deference) to contribute to this event. I think this is what you mean by high bandwidth?
If so, I feel in agreement with the statement: “I feel particularly uncomfortable with people in the meta space delegating choice without high bandwidth, and without explicit understanding that that’s what they’re doing”
I’m fine with junior ops people at an AI org being not really at all bought into the specific research agenda.
I’m fine with senior technical people not being fully bought in—in the sense that maybe they think if it were up to them a different agenda would be slightly higher value, or that they’d go about things a slightly different way. I think we should expect that people have slightly different takes, and don’t get the luxury of ironing all of those differences out, and that’s pretty healthy. (Of course I like them having a go at discussing differences of opinion, but I don’t think failure to resolve a difference means that they need to adopt the party line or go find a different org.)
That makes sense, and feels mostly in line with what I would imagine.
Maybe this is a small point (since there will be many more junior than senior roles in the long run) : I feel like the senior group would likely join an org for many other reasons than deference to authority (e.g. not wanting to found an org themselves, wanting to work with particular people they feel they could get a good work environment from, or because of epistemic deference). It seems like in practice those would be much stronger motivating reasons than authority, and I’m having a hard time picturing someone doing this in practice.
Okay, well, just to report that what you said by way of clarification was reassuring but not what I picked up originally from your post! I agree with Vaidehi below that an issue was a lack of specificity, which led to me reading it as a pretty general comment.
Reading your other comments, it seems what you’re getting at is a distinction between trusting someone is right without understanding why vs just following their instructions. I agree that there’s something there: to e.g. run an organisation, it’s sometimes impractical or unnecessary to convince someone of your entire worldview vs just ask them to do something.
FWIW, what I see lots of in EA, worries me, and I was hoping your post would be about, is that people defer so strongly to community leaders that they refuse to even engage with object-level arguments against whatever it is that community leaders believe. To draw from a personal example, quite often when I talk about measuring wellbeing, people will listen and then say something to the effect of “what you say seems plausible, I can’t think of any objections, but I’m going to defer to GiveWell anyway”. Deferring may have a time and a place, but presumably we don’t want deference to this extent.
“Deferring to experts” might be a less loaded term. Also defining what experts are especially for a lot of EA fields that are newer and less well established could help.
“Deferring to experts” carries the wrong meaning, I think? At least to me that sounds more like epistemic deferring.
An alternative to “deferring to authority” a couple of people have suggested to me is “delegating”, which I sort of like (although maybe it’s confusing if one of the paradigm examples is delegating to your boss).
In light of the other discussions, delegating choice seems better than deferring to experts.
Thanks for writing this, I thought it was great.
(Apologies if this is already included, I have checked the post a few times but possible that I missed where it’s mentioned.)
Edit: I think you mention this in social deferring (point 2).
One dynamic that I’m particularly worried about is belief double counting due to deference. You can imagine the following scenario:
Jemima: “People whose names start with J are generally super smart.”
Mark: [is a bit confused, but defers because Jemima has more experience with having a name that starts with J] “hmm, that seems right”
[Mary joins conversation]
Mary: [hmm, seems odd, but 2 people think this and I’m just 1 person, so I should update towards their position] “hmm, I can believe that”
Bill: [hmm, seems odd, but 3 people think this and I’m just 1 person, so I should update towards their position] “hmm, I can believe that”
From Bill’s perspective it looks like there are 3 pieces of evidence pointing in the direction of a hypothesis but really there was just one piece (Jemima’s experience) and a bunch of parroting.
I don’t think we often have these literal conversations, but sometimes I feel confused and I find myself doing belief aggregation type things in conversations to make progress on some question. I think it’s helpful to stop and be careful when making moves like “hmm most people here seem to think x therefore I should update in that direction” before seeing how much people at an individual level are themselves deferring to each other (or someone upstream of them) both to form better beliefs myself and not pollute the epistemic environment for others.
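The double-counting can be made concrete with a toy calculation (the prior and likelihood ratio are assumptions for illustration):

```python
from fractions import Fraction

prior = Fraction(1, 10)        # everyone starts sceptical: odds 1:10
evidence_lr = Fraction(3, 1)   # Jemima's single real piece of evidence

# Correct: one piece of evidence, counted once, however many
# people repeat it.
correct = prior * evidence_lr        # 3:10

# Bill's naive update: three agreeing testimonies treated as independent,
# so the same likelihood ratio gets multiplied in three times.
naive = prior * evidence_lr ** 3     # 27:10
```

Each extra parroted testimony multiplies in the same likelihood ratio again, so the error grows exponentially with chain length even though no new evidence has entered.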
Distinguishing between your “impressions” and “all-things-considered view” is helpful for this too.
Another way of saying this is that it can be hard to distinguish “great minds think alike” from “highly correlated error sources”.
Yeah I briefly alluded to this but your explanation is much more readable (maybe I’m being too terse throughout?).
My take is “this dynamic is worrying, but seems overall less damaging than deferral interfering with belief formation, or than conflation between epistemic deferring and deferring to authority”.
I think I roughly agree, although I haven’t thought much about the epistemic vs authority deferring thing before.
Idk if you were too terse, it seemed fine to me. That said, I would have predicted this would be around 70 karma by now, so I may be poorly calibrated on what is appealing to other people.
I thought this post by Huemer was a nice discussion of deference: https://fakenous.net/?p=550
Nice, thanks!
YES!!
My view is that for anything reasonably consequential (i.e. potentially worth the time spent investigating), one should at least briefly probe before deferring, because a) virtually everyone lies at least occasionally, and b) popular opinions are often clearly dubious due to the inertia they carry within a group (even a group of experts) from other people deferring without investigating; this can result in evidence needing to be overwhelming to shift majority opinion and overcome the self-perpetuating cycle.
I’m not sure what you mean by ‘bandwidth’, each time you use it.
Communication channels which allow for lots of information and context to flow back and forth between people. e.g. if I read an article and then go to enact the plan described in the article, that’s low-bandwidth. If I sit down with the author for three hours and interrogate them about the reasoning and ask what they think about my ideas for possible plan variations, that’s high-bandwidth.