Thanks for the response. My main general thought here is just that we shouldn’t expect so much from the reader. Most people, even most thoughtful EAs, won’t read in full and come up with all the qualifications on their own, so it’s important for article writers to include those themselves, and to put them front and center in their articles.
If you wanted to spend a lot of time on “what causes do EA leadership favor,” one project I see as potentially really valuable is compiling a list of arguments/evidence and getting EA leaders to vote on their weights. Sort of a combination of 80k’s quantitative cause assessment and this survey. I think this is a better form of peer-belief aggregation because it reduces the effects of dependence. For example, if Rob and Jacy both prioritize the far future entirely because of Bostrom’s calculation of how many beings could exist in it, then we’d end up with that single argument having a high weight, rather than two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we’re more focused on just coming up with the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence listed on this page: https://www.sentienceinstitute.org/foundational-questions-summaries)
That’s probably not the best approach, but I’d like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people’s opinions but then have them rate how much they’re basing their views on the views of their peers, or to ask for their view and confidence as if they’d never heard peer views, but this sort of approach seems more vulnerable to bias than the evidence-rating method.
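To make the contrast concrete, here’s a minimal sketch of the two aggregation approaches. All names and numbers are made up for illustration; it just shows how averaging bottom-line cause ratings double-counts an argument shared by several respondents, whereas averaging ratings of the underlying arguments counts that argument once at whatever weight the group gives it.

```python
from statistics import mean

# Hypothetical bottom-line prioritizations of the far future (0-10 scale).
cause_ratings = {"respondent_a": 9, "respondent_b": 9, "respondent_c": 4}

# Hypothetical weights (0-10) each respondent gives to the underlying arguments.
# Respondents A and B both lean almost entirely on the same astronomical-stakes argument.
argument_ratings = {
    "astronomical_stakes": [9, 9, 3],
    "tractability_of_influencing_far_future": [4, 3, 2],
}

# Naive aggregation: average the bottom lines. The shared argument is
# effectively counted once per person who relies on it.
naive_score = mean(cause_ratings.values())

# Evidence-rating aggregation: average each argument's weight across respondents,
# then combine per-argument averages. The shared argument contributes only once.
per_argument = {arg: mean(ratings) for arg, ratings in argument_ratings.items()}
evidence_based_score = mean(per_argument.values())

print(f"Naive average of bottom-line ratings: {naive_score:.1f}")
print(f"Per-argument averages: {per_argument}")
print(f"Average over arguments: {evidence_based_score:.1f}")
```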
Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.