Thanks for this. It is very interesting to see the changes over time and across engagement levels. In future, I’d like to see changes in the “other causes” over time and across engagement level, if possible. For instance, it would be interesting to see if causes such as IIDM or S-risk are becoming more or less popular over time, or are mainly being suggested by new or experienced EAs.
I think that it would be very interesting if we could compare the EA community’s results on this survey against a sample of “people who don’t identify as EAs” and people who identify as being in one or more “activist groups” (e.g., vegan/climate etc.) and explore the extent of our similarities and differences in values (and how these are changing over time). This in turn could inform decisions about how to communicate and collaborate with such audiences, where relevant.
In future, I’d like to see changes in the “other causes” over time and across engagement level, if possible. For instance, it would be interesting to see if causes such as IIDM or S-risk are becoming more or less popular over time, or are mainly being suggested by new or experienced EAs.
Yeah, I agree that would be interesting. Unfortunately, if we were basing it on open-comment “Other” responses, it would be extremely noisy due to low n, as well as some subjectivity in identifying categories. (Fwiw, it seemed like people mentioning S-risk were almost exclusively high engagement, which is basically what I’d expect, since I think it requires some significant level of engagement before people would usually be exposed to these ideas.)
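To illustrate the low-n problem, here is a minimal sketch of tallying coded open-comment “Other” mentions by engagement level. The data here is entirely hypothetical (the real survey responses and category coding aren’t shown), so treat it as a shape of the analysis, not the analysis itself:

```python
from collections import Counter

# Hypothetical data: (engagement_level, cause) pairs coded from open-comment
# "Other" responses. In reality, identifying categories involves subjective coding.
responses = [
    ("high", "S-risk"), ("high", "S-risk"), ("high", "IIDM"),
    ("medium", "IIDM"), ("low", "IIDM"),
]

# Tally mentions of each cause within each engagement level.
counts = Counter(responses)

# With n this small, a single extra response shifts a category's share by
# double-digit percentage points -- the noise problem described above.
s_risk_by_engagement = {
    level: counts[(level, "S-risk")] for level in ("low", "medium", "high")
}
print(s_risk_by_engagement)  # S-risk mentions cluster at high engagement
```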
I think that it would be very interesting if we could compare the EA community’s results on this survey against a sample of “people who don’t identify as EAs” and people who identify as being in one or more “activist groups” (e.g., vegan/climate etc.) and explore the extent of our similarities and differences in values (and how these are changing over time).
I agree this would be interesting. I’m particularly interested in examining differences in attitudes between EA and non-EA audiences. Examining differences in cause ratings directly might be more challenging due to a conceptual gap between EA understanding of certain causes and the general population (who may not even be familiar with what some of these terms mean). I think surveying more general populations on their support for different things (e.g., long-termist interventions, suitably explained) and observing changes in these across time would be valuable, though. Another way to examine differences in cause prioritisation would be to look at differences in the charitable portfolios of the EA community vs wider donors, since that aggregate data is more widely available.
Unfortunately, if we were basing it on open-comment “Other” responses, it would be extremely noisy due to low n, as well as some subjectivity in identifying categories.
Thanks for explaining. I see what you mean. If it seems worth it (i.e., more people than me care!), you could potentially add a closed-ended “other potential cause areas” item. These options could be generated from the most popular options in the prior year’s open-ended responses. E.g., you could have IIDM and S-risk as closed-ended “other options” for that question next year (in addition to n other common responses). You could keep the open-ended “other potential cause areas” item as an “Are there any other causes you feel should be priorities for the EA community that we haven’t mentioned?” question. You could also grow the closed-ended item list as needed each year.
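A sketch of how that closed-ended option list could be seeded from the previous year’s open-ended responses. The response strings and the cutoff `n` are made-up examples, not real survey data:

```python
from collections import Counter

# Hypothetical open-ended "other cause" responses from the prior year's survey.
prior_year_open_responses = [
    "IIDM", "S-risk", "IIDM", "mental health", "S-risk", "IIDM", "space governance",
]

def top_other_causes(responses, n):
    """Return the n most frequently suggested causes, to seed next
    year's closed-ended 'other potential cause areas' item."""
    return [cause for cause, _ in Counter(responses).most_common(n)]

options = top_other_causes(prior_year_open_responses, 3)
print(options)
```

Causes that fall out of the top n in later years could be rotated back into the open-ended question, keeping the closed-ended list from growing without bound.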
I agree this would be interesting. I’m particularly interested in examining differences in attitudes between EA and non-EA audiences. Examining differences in cause ratings directly might be more challenging due to a conceptual gap between EA understanding of certain causes and the general population (who may not even be familiar with what some of these terms mean). I think surveying more general populations on their support for different things (e.g., long-termist interventions, suitably explained) and observing changes in these across time would be valuable, though.
Yes, I agree
Another way to examine differences in cause prioritisation would be to look at differences in the charitable portfolios of the EA community vs wider donors, since that aggregate data is more widely available.
Thanks, this link is interesting. Great to see that religious institutions get the most. That’s definitely ideal :)
I hadn’t thought about comparing donation portfolio trends. That could be very useful if we had good data!
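A sketch of the kind of portfolio comparison that aggregate giving data could support. All the shares below are illustrative placeholders, not real statistics from either community:

```python
# Hypothetical shares of total donations by cause area, as fractions summing to 1.
# These numbers are placeholders, not real data.
ea_portfolio = {"global health": 0.45, "animal welfare": 0.15,
                "existential risk": 0.30, "religion": 0.00, "other": 0.10}
wider_donors = {"global health": 0.10, "animal welfare": 0.03,
                "existential risk": 0.01, "religion": 0.29, "other": 0.57}

# Difference in share per cause: positive means EAs allocate relatively more.
divergence = {cause: round(ea_portfolio[cause] - wider_donors[cause], 2)
              for cause in ea_portfolio}
print(divergence)
```

Tracking this divergence year over year would show whether EA and mainstream giving priorities are converging or drifting apart.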
If it seems worth it (i.e., more people than me care!), you could potentially add a closed-ended “other potential cause areas” item. These options could be generated from the most popular options in the prior year’s open-ended responses. E.g., you could have IIDM and S-risk as closed-ended “other options” for that question next year
Yeah, that seems like it could be useful. It’s useful to know what kinds of things people find valuable, because space in the survey is always very tight.