Cool idea to run this survey and I agree with many of your points on the dangers of faulty deference.
A few thoughts:
(Edit: I think my characterisation of what deference means in formal epistemology is wrong. After a few minutes of checking, I think what I described is merely a somewhat common way of modelling how we ought to respond to experts.)
The use of the concept of deference within the EA community is unclear to me. When I encountered the concept in formal epistemology, I remember “deference to someone on claim X” literally meaning (a) that you adopt that person's probability judgement on X. Within EA and your post (?), the concept often doesn't seem to be used in this way. Instead, I guess people think of deference as something like (b) “updating in the direction of a person's probability judgement on X” or (c) “taking that person's probability estimate as significant evidence for (against) X if that person leans towards X (not-X)”?
I think (a)–(c) are importantly different. For instance, adopting someone's credence doesn't always mean that you are taking their opinion as evidence for the claim in question, even if they lean towards it being true: you might adopt someone's high credence in X and thereby lower your credence (because yours was even higher before). In that case, you update as though their high credence were evidence against X.
You might also update in the direction of someone's credence without taking on their credence. Lastly, you might lower your credence in X by updating in someone's direction, even if they lean towards X.
Bottom line: these three concepts don't refer to the same “epistemic process”, so I think it's good to make clear what we mean by deference.
Here is how I would draw the conceptual distinctions:
(I) deference to someone's credence in X = you adopt their probability in X
(II) positively updating on someone's view = increasing your confidence in X upon hearing their probability in X
(III) negatively updating on someone's view = decreasing your confidence in X upon hearing their probability in X
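To see how (I) and (III) can coincide, here is a toy numeric sketch in Python (the numbers and the 50/50 mixing weight are mine, purely illustrative):

```python
my_credence = 0.95     # my prior probability in X
their_credence = 0.80  # the expert's (high) probability in X

# (I) Deference: adopt their probability outright.
# Note this is *lower* than my prior, so deferring to their
# high credence is, for me, a negative update on X.
deferred = their_credence  # 0.80

# A (b)-style partial update: move some way toward their credence
# without adopting it.
weight = 0.5  # arbitrary mixing weight, illustrative only
partial = (1 - weight) * my_credence + weight * their_credence  # 0.875
```

So whether hearing a high credence raises or lowers your own depends on where you started, not just on which way the other person leans.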
I hope this comment was legible; please ask for clarification if anything was unclearly expressed :)
In addition to the EA Forum topic entry, there is a forum post from a few months ago by Owen Cotton-Barratt, a researcher on topics related to epistemology in EA, reviewing a taxonomy of common types of deference in EA and open issues with them, which I found informative: https://forum.effectivealtruism.org/posts/LKdhv9a478o9ngbcY/deferring
I wrote another comment below that touched on deference, though I wrote it more quickly than carefully and I might have used the concept in a confused way, as I don't have much formal understanding of deference outside of EA, so don't take my word for it. How the concept of deference has been used in EA over the last year has seemed ambiguous to me, so I'm inclined to agree that EA could make progress in understanding deference through your challenge to the current understanding of the subject.
Thanks for your comment! I agree that the concept of deference used in this community is somewhat unclear, and a separate comment exchange on this post further convinced me of this. It’s interesting to know how the word is used in formal epistemology.
Here is the EA Forum topic entry on epistemic deference. I think it most closely resembles your (c). I agree there’s the complicated question of what your priors should be, before you do any deference, which leads to the (b) / (c) distinction.
I wonder if it would be good to run another survey to get some data not only on whom people update on but also on how they update on others (regarding AGI timelines or something else). I was thinking of running a survey where I ask EAs about their priors on different claims (perhaps related to AGI development), present them with someone's probability judgements, and then ask them about their posteriors. That someone could be a domain expert, a non-domain expert (e.g., a professor in a different field), or a layperson (inside or outside EA).
At least if they have not received any evidence regarding the claim before, there is a relatively simple and, I think, convincing model of how they should update: they should set their posterior odds on the claim to the product of their prior odds and the other person's odds (this is the result of this paper; see e.g. p. 18). It would then be possible to compare the way people actually update to this rational ideal. Running such a survey doesn't seem very hard or expensive (although I don't trust my intuition here at all), and we might learn a few interesting biases in how people defer to others in the context of (say) AI forecasts.
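In case it helps, here is a minimal Python sketch of that odds-product rule (function names are mine; the rule itself is just the one described above: posterior odds = prior odds × the other person's odds):

```python
def to_odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

def aggregate(prior_p, other_p):
    """Multiply prior odds by the other person's odds,
    then convert back to a probability."""
    return to_prob(to_odds(prior_p) * to_odds(other_p))

# With a uniform prior (0.5, i.e. odds of 1), you simply adopt
# the other person's credence:
aggregate(0.5, 0.8)  # ≈ 0.8
# With a prior of 0.6, their 0.8 pushes you *past* 0.8:
aggregate(0.6, 0.8)  # ≈ 0.857
```

One nice feature for a survey baseline is that the rule is symmetric and gives a concrete predicted posterior for every (prior, reported credence) pair, so deviations from it are directly measurable.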
I have a few more thoughts on exactly how to do this, but I’d be curious if you have any initial thoughts on this idea!