Thanks for your comment! I agree that the concept of deference used in this community is somewhat unclear, and a separate comment exchange on this post further convinced me of this. It’s interesting to know how the word is used in formal epistemology.
Here is the EA Forum topic entry on epistemic deference. I think it most closely resembles your (c). I agree there’s the complicated question of what your priors should be, before you do any deference, which leads to the (b) / (c) distinction.
I wonder if it would be good to create another survey to get some data not only on whom people update on but also on how they update on others (regarding AGI timelines or something else). I was thinking of running a survey where I ask EAs about their priors on different claims (perhaps related to AGI development), present them with someone’s probability judgements, and then ask them about their posteriors. That someone could be a domain expert, a non-domain expert (e.g., a professor in a different field), or a layperson (inside or outside EA).
At least if they have not received any evidence regarding the claim before, there is a relatively simple and, I think, convincing model of how they should update: they should set their posterior odds in the claim to the product of their prior odds and the other person’s odds (this is the result of this paper; see e.g. p. 18; a small sketch of the rule is below). It would then be possible to compare the way people actually update to this rational ideal. Running such a survey doesn’t seem very hard or expensive (although I don’t trust my intuition here at all), and we might learn a few interesting biases in how people defer to others in the context of (say) AI forecasts.
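To make the comparison concrete, here is a minimal sketch in Python of the odds-product rule described above. The function names are mine, and it assumes a single binary claim where the survey respondent has no prior evidence beyond their stated prior:

```python
def to_odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

def odds_product_update(my_prior, their_prob):
    """Odds-product rule: posterior odds = my prior odds * the other person's odds."""
    posterior_odds = to_odds(my_prior) * to_odds(their_prob)
    return to_prob(posterior_odds)

# Example: my prior is 20%, the expert reports 70%.
# Posterior odds = (0.2/0.8) * (0.7/0.3) ≈ 0.583,
# i.e. a posterior probability of ≈ 0.37.
print(odds_product_update(0.2, 0.7))  # ≈ 0.368
```

One could then compare each respondent’s reported posterior to the value this rule predicts from their stated prior and the presented judgement, and see how the gap varies by the type of source (domain expert, non-domain expert, layperson).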
I have a few more thoughts on exactly how to do this, but I’d be curious if you have any initial thoughts on this idea!