Why did you decide to move from Global Priorities Institute to 80,000 Hours?
Estonia actually has two local groups, one in Tallinn and the other in Tartu.
Do you think there’s more useful research to be done on this topic? Are there any specific questions you think researchers haven’t yet answered sufficiently? What are the gaps in the EA literature on this?
It actually might be more complicated than what you say here, alexherwix. If a research analyst role at the Open Philanthropy Project receives 800+ job applications, then you might reasonably think that it’s better for you to continue building a local community even if you were a great candidate for that option.
In addition, for the reasons you mention, every potential local community builder might be constantly looking for new job options in the EA community, making someone who doesn't do that a highly promising candidate. Furthermore, being a community builder is actually a surprisingly difficult job.
Another consideration is that the preparation and training needed for a specific job at an EA organization may be quite different from the skills gained by leading a local group. Doing community-building tasks in a local context might also simply suit you better.
This is only slightly relevant, but in a recent 80,000 Hours blog post they suggest the following for people applying for EA jobs:
We generally encourage people to take an optimistic attitude to their job search and apply for roles they don’t expect to get. Four reasons for this are that, i) the upside of getting hired is typically many times larger than the cost of a job application process itself, ii) many people systematically underestimate themselves, iii) there’s a lot of randomness in these processes, which gives you a chance even if you’re not truly the top candidate, and iv) the best way to get good at job applications is to go through a lot of them.
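Reason (i) above is essentially an expected-value argument. A toy calculation makes it concrete; all of the numbers here are hypothetical assumptions for illustration, not figures from the post:

```python
# Toy expected-value calculation for applying to a competitive EA role.
# All numbers are hypothetical assumptions, not data from the post.

p_hire = 1 / 800          # naive chance of being hired with 800+ applicants
value_of_role = 10_000    # assumed value of getting the role, in arbitrary units
cost_of_applying = 5      # assumed cost of one application, in the same units

expected_value = p_hire * value_of_role - cost_of_applying
print(expected_value)  # 7.5 > 0: applying is worthwhile even at long odds
```

The point is just that when the upside is orders of magnitude larger than the application cost, even a small hiring probability can make applying positive in expectation.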
You can decide by asking who wants to lead a particular activity (the way your group did), as well as by inquiring what resources and capital people have available to lead that activity successfully. Sometimes people have the motivation to lead activities but don't yet have the resources needed to do so successfully.
Agreed on the failure-mode thinking. I guess if you only take the best-case scenario into consideration, you forget to assess the risks involved. On the other hand, I'm not sure whether this should happen in the initial brainstorming session or only later, once a possible activity has been selected as a top candidate.
So here are some of the main takeaways from this for me:
Involve the main volunteers/group members in the strategy development process.
Use the strategy template made available by CEA.
Share EA Denmark’s list of project ideas with other community builders.
We recently had a several-hour strategy meeting. I can attest that when community members participate in developing the strategy, they understand better what's going on and feel more motivated, since they are now responsible for the vision themselves. And they can come up with wonderful ideas that you hadn't thought of!
We have also used a simple three-dimensional scoring tool for deciding which projects/activities to focus on. Every participant scores each activity on a scale according to how many resources it requires, how good its best possible outcome is, and how strong the prospective leader's personal fit for it is.
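The scoring procedure above can be sketched in a few lines of code. The 1–5 scale, the example activities, and the aggregation rule (average each dimension across participants, then add outcome and fit and subtract resources) are my assumptions for illustration; adapt them to whatever your group actually uses:

```python
# Minimal sketch of a three-dimensional activity-scoring tool.
# Scale (1-5), activities, and aggregation rule are illustrative assumptions.

from statistics import mean

# Each participant scores each activity on three dimensions (1-5):
# "resources": how many resources it requires (higher = more costly)
# "outcome":   how good its best-case outcome is
# "fit":       the prospective leader's personal fit
scores = {
    "Career workshop": [
        {"resources": 2, "outcome": 4, "fit": 5},
        {"resources": 3, "outcome": 4, "fit": 4},
    ],
    "Giving game": [
        {"resources": 4, "outcome": 3, "fit": 2},
        {"resources": 5, "outcome": 3, "fit": 3},
    ],
}

def aggregate(participant_scores):
    """Average each dimension across participants, then combine:
    outcome + fit - resources (resources count against an activity)."""
    avg = {dim: mean(s[dim] for s in participant_scores)
           for dim in ("resources", "outcome", "fit")}
    return avg["outcome"] + avg["fit"] - avg["resources"]

ranked = sorted(scores, key=lambda a: aggregate(scores[a]), reverse=True)
print(ranked)  # activities ordered from most to least promising
```

A simple additive rule like this is easy to explain in a meeting; groups that weight the dimensions differently can multiply each averaged dimension by a weight before combining.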
Great overview as always. I think the Open Philanthropy Project's Funding for Study and Training Related to AI Policy Careers should be listed here as well:
This program aims to provide flexible support for individuals who want to pursue or explore careers in AI policy (in industry, government, think tanks, or academia) for the purpose of positively impacting eventual societal outcomes from “transformative AI,” by which we mean potential future AI that precipitates a transition at least as significant as the industrial revolution …
I think this accusation is uncalled for. There are more statistics in the report I linked to, including things like citation impact. But a comprehensive overview of European AI research is, of course, very welcome.
For what it’s worth, according to the Artificial Intelligence Index report published in 2018:
Europe has consistently been the largest publisher of AI papers — 28% of AI papers on Scopus in 2017 originated in Europe. Meanwhile, the number of papers published in China increased 150% between 2007 and 2017. This is despite the spike and drop in Chinese papers around 2008.
(I’d post the graphs here, but I don’t think images can be inserted into comments.)
Here’s an article by 80,000 Hours literally titled “Advice for undergraduates”. It does not answer all of your questions, but hopefully it helps a little bit.
William MacAskill says the following in a chapter in The Palgrave Handbook of Philosophy and Public Policy:
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good.
But he then goes on to highlight various normative commitments, which indicate that it is, in addition to being a question, an ideology:
The project is:
• Maximizing. The point of the project is to try to do as much good as possible.
• Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models.
• Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals.
• Impartial. Everyone’s welfare is to count equally.
The Open Philanthropy Project link doesn’t work.
Thank you for writing this! This is a useful overview of active groups for me, because I intend to move to London in September to study at LSE and now need to think about ways to engage with the community there.
In addition, what do you think should be updated in Doing Good Better?
Your link referring to bdixon and climate change leads to Joey’s post “Problems with EA representativeness and how to solve it”. Can you share the post that discusses how Doing Good Better appears to underrate the degree of warming of climate change?
I found the part about philosophers being well-suited to many aspects of EA research especially interesting. You said this:
Contrary to popular stereotypes, philosophers often excel at quantitative thinking. Many philosophy PhDs have an undergraduate background in math or science. For subfields of philosophy like formal epistemology, population ethics, experimental philosophy, decision theory, philosophy of science, and, of course, logic, a strong command of quantitative skills is essential. Even beyond these subfields, quantitative acumen is prized. In analytic philosophy in particular, papers with a lot of math and formalism are more likely to be taken seriously than comparable papers explained informally.
Do you have any data on philosophy PhDs often having an undergraduate background in math or science? I, for example, have chosen a lot of courses in mathematical economics, data analysis, and social science research methodology to support my philosophy degree, but in my experience this is very uncommon. That said, this depends a lot on the region, and surely the US and the UK differ from continental Europe in this respect.
Can you expand on 3a and 3b? I guess 3b justifies 3a, but is that all? Watching and discussing a video with your local group seems more valuable to me than asking one question at a talk, but I may be missing some important benefits you are aware of. I would also add that these are not mutually exclusive. I have heard that some people struggle to set aside time to watch talks on their own; that is also something to consider.