Hey Jacy, thanks for the detailed comment. With EA Global London on this weekend I’ll have to be brief! :)
One partial response is that even if you don’t think this is fully representative of the set of all organisations you’d like to have seen surveyed, it’s informative about the groups that were. Near the start of the article we list the orgs that were surveyed and point out who wasn’t, so people understand who the answers represent:
“The reader should keep in mind this sample does not include some direct work organisations that some in the community donate to, including the Against Malaria Foundation, Mercy for Animals or the Center for Human-Compatible AI at UC Berkeley.”
You can take this information for whatever it’s worth!
As for who I chose to sample: on any definition there’s always going to be some grey area, orgs that almost meet that definition but don’t quite. I tried to find all the organisations with full-time staff that i) were a founding part of the EA movement, or ii) were founded by people who identify strongly as part of the EA community, or iii) are now mostly led by people who identify more strongly with the EA movement than with any other community. I think that’s a natural grouping, and I don’t view AMF, MfA or CHAI as meeting that definition (though I’d be happy to be corrected if any group does meet it whose leadership I’m not personally familiar with).
The main problem with that question, in my mind, is the underrepresentation of GiveWell, which has a huge budget and is clearly a central EA organisation. The participants from GiveWell gave me only one vote to work with, and didn’t provide quantitative answers, as they didn’t have a strong or clear enough view. More generally, people in the sample who specialise in one cause were more inclined to say they didn’t have a view on which fund was most effective, and so not answer the question (which is reasonable, but could bias the answers).
Personally, like you, I give more weight to the views of specialist cause prioritisation researchers working at cause-neutral organisations. They were more likely to answer the question and are singled out in the table with individual votes. Interestingly, their results were quite similar to those of the full sample.
I agree we should be cautious about all piling on to the same causes and falling for an ‘information cascade’. That said, if the views in that table are a surprise to someone, it’s a reason to update in their direction, even if they don’t act on that information yet.
I’d be very keen to get more answers to this question, including from folks at direct work orgs, and also to increase the sample at some organisations that were included in the survey but where few people answered that question (most notably GiveWell). With a larger sample we’ll be able to break the answers down more finely to see how they vary by subgroup, and weight them by organisation size without giving single data points huge leverage over the result, as in the sketch below.
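To give a sense of the sort of weighting I have in mind, here’s a rough sketch (Python, with made-up numbers; the 25% cap on any one organisation’s share is just an illustrative choice, not something we’ve settled on):

```python
from collections import defaultdict

# Hypothetical data: (organisation, staff count, respondent's score 0-100).
votes = [
    ("Org A", 30, 70), ("Org A", 30, 60),
    ("Org B", 5, 90),
    ("Org C", 60, 40),  # a single respondent from a large org
]

MAX_ORG_SHARE = 0.25  # illustrative cap: no org exceeds 25% of total weight

org_sizes = {org: size for org, size, _ in votes}
respondents = defaultdict(int)
for org, _, _ in votes:
    respondents[org] += 1

# Weight each org by staff count, cap its share, then renormalise.
total_staff = sum(org_sizes.values())
capped = {org: min(size / total_staff, MAX_ORG_SHARE)
          for org, size in org_sizes.items()}
norm = sum(capped.values())
org_share = {org: w / norm for org, w in capped.items()}

# Split each org's share evenly among its respondents and average.
result = sum(org_share[org] / respondents[org] * score
             for org, _, score in votes)
print(f"Size-weighted, leverage-capped mean score: {result:.1f}")
```

The cap is what stops a large organisation with a single respondent from handing that one vote outsized influence over the headline figure.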
I’ll try to do that in the next week or two once EAG London is over!
Thanks for the response. My main general thought here is just that we shouldn’t depend so much on the reader. Most people, even most thoughtful EAs, won’t read the article in full and come up with all the qualifications on their own, so it’s important for article writers to include those themselves, front and center in their articles.
If you wanted to spend a lot of time on “what causes do EA leadership favor,” one project I see as potentially really valuable is compiling a list of arguments/evidence and getting EA leaders to vote on their weights. Sort of a combination of 80k’s quantitative cause assessment and this survey. I think this is a more ideal peer-belief aggregation because it reduces the effects of dependence. For example, if Rob and Jacy both prioritize the far future entirely because of Bostrom’s calculation of how many beings could exist in it, then we’d come up with that single argument having a high weight, rather than with two people highly favoring the far future. We might try this approach at Sentience Institute at some point, though right now we’re more focused on just coming up with the lists of arguments/evidence in the field of moral circle expansion, so instead we did something more like your 2017 survey of researchers in this field. (Specifically, we would have researchers rate the pieces of evidence listed on this page: https://www.sentienceinstitute.org/foundational-questions-summaries)
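To make the dependence point concrete, here’s a toy sketch of aggregating at the level of arguments rather than bottom-line views (Python; the argument names and ratings are all made up):

```python
# Each respondent rates how much weight (0-10) each argument deserves.
# A shared argument then enters the aggregate once, with an averaged
# weight, rather than multiplying through each person's conclusion.
ratings = {
    "Rob":  {"astronomical waste": 9, "neglectedness of x-risk": 6},
    "Jacy": {"astronomical waste": 9, "moral circle expansion": 7},
}

# Collect every argument anyone rated, then average the ratings it got.
arguments = sorted({arg for person in ratings.values() for arg in person})
for arg in arguments:
    scores = [person[arg] for person in ratings.values() if arg in person]
    print(f"{arg}: mean weight {sum(scores) / len(scores):.1f} (n={len(scores)})")
```

The shared argument shows up once in the output with its averaged weight, instead of being double-counted through each person’s bottom line.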
That’s probably not the best approach, but I’d like a survey approach that somehow tries to minimize the dependence effect. A simpler version would be to just ask for people’s opinions but then have them rate how much they’re basing their views on the views of their peers, or to ask for their view and confidence as if they’d never heard peer views, but this sort of approach seems more vulnerable to bias than the evidence-rating method.
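For what it’s worth, the simpler discounting version might look something like this (again a toy sketch with invented numbers; the linear (1 - reliance) discount is just one arbitrary choice):

```python
# Down-weight each person's stated confidence by their self-reported
# reliance on peer views, then take the weighted mean of their views.
responses = [
    # (view score 0-100, confidence 0-1, self-reported peer-reliance 0-1)
    (80, 0.9, 0.7),
    (40, 0.6, 0.1),
]

num = sum(score * conf * (1 - reliance) for score, conf, reliance in responses)
den = sum(conf * (1 - reliance) for _, conf, reliance in responses)
print(f"Dependence-discounted mean view: {num / den:.1f}")
```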
Anyway, have fun at EAG London! Curious if anything that happens there really surprises you.