If someone said, “I am not going to wear a mask because I am not going to defer to expert opinions on the epidemiology of COVID-19,” how would someone following the advice of this article respond?
Overall, being a noob, I found the language in this article difficult to follow. So I am giving you a specific scenario that many people can relate to and trying to work out what you are saying from it.
I think the usefulness of deferring also depends on how established a given field is, how many people are experts in that field, and how certain they are of their beliefs.
If a field has 10,000+ experts who are, on average, 95%+ certain of their claims, then it probably makes sense to defer as a default. (This would be the case for many medical claims, such as those about wearing masks, vaccinations, etc.) If a field has 100 experts who are more like 60% certain of their claims on average, then it makes sense to explore the available evidence yourself, or at least to keep in mind that there is no strong expert consensus when you are sharing information.
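To make that heuristic concrete, here is a minimal sketch in Python. The thresholds are the example numbers from the paragraph above, but the function, its name, and the exact cutoffs are illustrative assumptions on my part, not an established decision rule:

```python
# A toy formalization of the defer-by-default heuristic sketched above.
# The thresholds come from the examples in this comment; the function
# itself and its cutoffs are illustrative, not an established rule.

def should_defer_by_default(num_experts: int, avg_confidence: float) -> bool:
    """Return True if deferring to expert consensus seems like a
    reasonable default under this rough heuristic.

    num_experts:    rough number of recognized experts in the field.
    avg_confidence: their average credence in the field's core claims (0 to 1).
    """
    LARGE_FIELD = 10_000      # e.g., mainstream medicine
    HIGH_CONFIDENCE = 0.95
    return num_experts >= LARGE_FIELD and avg_confidence >= HIGH_CONFIDENCE

# The two cases from the comment:
print(should_defer_by_default(10_000, 0.95))  # True  -> defer (masks, vaccines)
print(should_defer_by_default(100, 0.60))     # False -> weigh the evidence yourself
```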
We can’t know everything about every field, and it’s not reasonable to expect everyone to look deeply into the arguments on every topic. But I think there can be a tendency among EAs to defer on topics where there is little expert consensus, plenty of robust debate among knowledgeable people, and a high level of uncertainty (e.g., many areas of AI safety). While not everyone has the time to explore the AI safety arguments for themselves, it’s helpful to keep in mind that, for the most part, there isn’t a consensus among experts (yet), and many people who are very knowledgeable about the field are still highly uncertain about their claims.
Ah! That makes sense.
I agree that the EA thing to do would be to explore and work on cause areas oneself instead of just blindly relying on 80,000 Hours’ cause areas or something like that.