Good point! Thanks.
I have added FHI to the text.
Since you mentioned it in your footnote, the EA Survey 2019 post on geographic distribution of EAs is out. We don’t have information on party identification, but we can see that 2.23% of EAs living in the USA are politically affiliated with the Center Right and 1.19% with the Right (12.76% with Libertarianism & 76.56% with the Left or Center Left). Keep in mind the caveat that our data only show where an EA currently lives, so an EA reporting both living in the USA and being on the right-hand side of the political spectrum is not necessarily a registered Republican.
I agree that “If I have observed a p < .05, what is the probability that the null hypothesis is true?” is a different question from “If the null hypothesis is true, what is the probability of observing this (or more extreme) data?”. Only the latter question is answered by a p-value (answering the former requires some Bayesian-style subjective prior). I haven’t yet seen a clear consensus on how to report this in a way that is easy for the lay reader.
The phrases I employed (highlighted in your comment) were suggested in writing by Daniel Lakens, although I added a caveat about the null in the second quote which is perhaps incorrect. His defence of the phrase “we can act as if the null hypothesis is false, and we would not be wrong more than 5% of the time in the long run” rests on the specific use of the word ‘act’, “which does not imply anything about whether this specific hypothesis is true or false, but merely states that if we act as if the null-hypothesis is false any time we observe p < alpha, we will not make an error more than alpha percent of the time”. I would be very interested if you have suggestions for a similar standard phrasing which both captures the probability of observing data (not of a hypothesis) and is reasonably easy for a non-stats reader to grasp.
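To make the “act as if” framing concrete, here is a minimal simulation sketch in Python. The numbers are my own illustrative assumptions (two groups of 50 drawn from the same normal distribution, a two-sample t-test, alpha = .05), not anything from Lakens or the survey; it just shows that if we reject whenever p < alpha while the null is in fact true, we are wrong in roughly alpha of repetitions in the long run.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, reps = 0.05, 50, 20_000

false_rejections = 0
for _ in range(reps):
    a = rng.normal(0, 1, n)  # both groups come from the same distribution,
    b = rng.normal(0, 1, n)  # so the null hypothesis is true by construction
    _, p = stats.ttest_ind(a, b)
    if p < alpha:            # "act as if the null is false"
        false_rejections += 1

print(false_rejections / reps)  # ~0.05, i.e. wrong about alpha of the time in the long run
```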
As an aside, what is your opinion on reporting p values greater than the relevant alpha level? I’ve read Daniel Lakens suggesting that if you have p > .05 one could write something like “because given our sample size of 50 per group, and our alpha level of 0.05, only observed differences more extreme than 0.4 could be statistically significant, and our observed mean difference was 0.35, we could not reject the null hypothesis”. This seems a bit wordy for any lay reader, but would it be worth including even in a footnote?
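For what it’s worth, the 0.4 figure in that example can be reproduced directly: with 50 per group, alpha = .05, and (assuming, since the example doesn’t say) a standardised within-group SD of 1 and a two-sided t-test, the smallest mean difference that could reach significance works out to about 0.40. A rough sketch of that calculation, my own reconstruction rather than Lakens’ code:

```python
import numpy as np
from scipy import stats

n = 50        # per group
alpha = 0.05
sd = 1.0      # assumed (standardised) within-group SD

df = 2 * n - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical t value
se_diff = sd * np.sqrt(2 / n)             # standard error of the mean difference
print(round(t_crit * se_diff, 2))         # ~0.4: smallest difference that could be "significant"
```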
On your first point, yes you are correct. Among those who prioritized Global Poverty or Animal Welfare and changed causes, a plurality in each group changed to AI.
On your second point, I’ve now added a column in the group membership and demographics tables that shows the average for the sample as a whole. I hope this helps.
We will explore cause prioritization and the geographic distribution of EAs in a forthcoming post. We tried to keep a narrower focus in this post, on involvement in EA and just a few demographics, as we did in last year’s post.
Glad to hear you found it informative. Thanks!
We have an entire post dedicated to the geographic distribution of EAs in this year’s survey forthcoming, along the same lines as last year’s:
Like Saulius, I am pretty sceptical about the narrative I have in my mind on this issue now. One day I would like to take the time to re-read some old messages and emails to tease out what I was thinking, or at least what story I was telling myself, at the time.
For the moment, this is how I recall events and my thinking.
I first heard of EA when a friend at Oxford gave me Doing Good Better as a gift. I recall reading it cover to cover during a trip the following week and being enthused by it to the extent of making detailed notes and re-gifting it back to my friend with my annotations. I considered it one of many interesting frameworks to guide one’s life and took on board some ideas I gleaned from it, like donating based on cost-effectiveness and thinking more deeply about the suffering of others. My engagement with EA remained quite flat for a few years after that, and I am not sure how “involved” I could consider myself.
Later, I was accepted for intern positions with Animal Charity Evaluators and Charity Entrepreneurship. I’m unsure exactly how front-of-mind EA was at this time. On the one hand, I was applying for many positions that I thought were simply interesting but not necessarily EA-aligned. On the other hand, I attended my first EAG shortly after applying for/starting these positions, and therefore must have applied to attend months before, because it was happening in the city where my then employer was based. I think much of it came down to having a lot of free time, wanting to find new and interesting work and casting a very wide net, and, to an extent, being in the right place at the right time. In any case, meeting EAs in person, both at EAG and through Charity Entrepreneurship, and seeing the community behind the ideas was a revelation for me.
I keep working on it for a mix of selfish reasons, like finding the work interesting and enjoying the sense of community, and, to a lesser extent, because I have become more convinced of the impact of the movement and that my working on it is the best resource I can offer.
I agree with much of what the team has written.
Also, perhaps there is a stronger accountability mechanism from having a team and things like a Slack channel, in comparison to a funded independent researcher, depending on how involved an EA Fund-type organization is in checking in and whether funds are recalled if a researcher “fails”. I don’t have a good sense of the independent researcher funding landscape though.
Maybe to the extent one couples their work as a researcher with their identity, a clearer sense of community might exist under an umbrella organization. Though I could imagine that independent researchers all funded by the same organization could establish some sort of cohort mentality if communication structures are available.
To add to the operations support benefit, I have in mind the evidence from the “disruptive research teams” literature review that suggested “researchers should be freed from trivial or bureaucratic tasks as much as possible”, which seems less likely to be the case for an independent researcher.
You’re correct that we have a remote team located in many countries.
Time zone challenges are definitely present with such a global team, especially for scheduling.
There is also a barrier to having natural interactions in the way that would randomly happen in an office.
2) Frequent & smooth communication
Slack is immensely useful for quick and easy communication.
We have daily check-ins on Slack to let each other know what we are working on.
We share what we are working on in Google Docs for others to comment and collaborate on.
Some team members have frequent calls with managers or each other.
We have a randomized rotating system to pair people up for social calls.
RP has monthly all-staff calls.
We participate in the wider Rethink Charity “all-hands” calls.
Animal Charity Evaluators’ roundtable discussions about remote teams have definitely informed my personal view of what might work well, and suggested that the issues we encounter are pretty common for remote teams.
I think it’s unlikely I’d be able to continue doing cross-cause area work, but I’m unsure which specific cause area I would primarily be focusing on.
35% likely: EA-aligned research in a non-EA organization.
15% likely: EA-aligned work in a different EA organization.
15% likely: non-EA-aligned work.
10% likely: Charity Entrepreneurship’s incubation program.
25% likely: Unsure
The categories of moral views presented in the graph and table were a pre-set list of those 4 answer choices. We wouldn’t want to speak to how survey respondents defined these for themselves when making a selection among these options.
This EA concepts page might be a relevant source for further reading: https://concepts.effectivealtruism.org/concepts/moral-theories/
Not on voting directly, but relatedly: when we asked a nationally representative sample about explicit attitudes towards future and present people, we did not find evidence to support the claim that younger people consider future people as equally deserving of help, though we did find that older people prioritise present people more than younger people do.
Also, see Larks’ quick literature review on psychology research, which suggested “that older people discount the future less than younger people, which might suggest giving their votes more weight.”
I believe this point on social choice theory was discussed during Mahendra Prasad’s talk at EA Global London 2019 (I didn’t attend it, but Mahendra sent me the slides). The hypothesis that deliberation could shift individual preferences toward single-peakedness appears to be supported by deliberative polling experiments (e.g. Farrar et al. 2010). I did not see a neat way to explain this point in the essay, but have included a small mention of it instead. Thanks for offering this useful summary!