Risto is a Policy Researcher at FLI and focuses primarily on AI policy research aimed at maximizing the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support on European AI policy at the Berkeley Existential Risk Initiative. He completed a master’s degree in Philosophy and Public Policy at the London School of Economics and Political Science and holds a bachelor’s degree from Tallinn University in Estonia.
Risto Uuk
If anyone reading this post thinks that the arguments in favor outweigh the arguments against working on EU AI governance, then consider applying for the EU Policy Analyst role that we are hiring for at the Future of Life Institute: https://futureoflife.org/2022/01/31/eu-policy-analyst/. If you have any questions about the role, you can participate in the AMA we are running: https://forum.effectivealtruism.org/posts/j5xhPbj7ywdv6aEJc/ama-future-of-life-institute-s-eu-team.
Thank you for writing this summary!
I wanted to share this new website about the AI Act that we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, the annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you’d like to discuss the proposal or have suggestions for the website. We’d like it to be a good resource both for the general public and for people who want to follow the regulation more closely.
Why did you decide to move from Global Priorities Institute to 80,000 Hours?
It would be nice if someone updated it regularly and added a note at the top of the page saying when it was last updated. For example, according to Julia Wise there were 3,855 Giving What We Can members at the beginning of 2019, whereas the figure here, 1,800+ members, is outdated.
I found the part about philosophers being well-suited to many aspects of EA research especially interesting. You said this:
Contrary to popular stereotypes, philosophers often excel at quantitative thinking. Many philosophy PhDs have an undergraduate background in math or science. For subfields of philosophy like formal epistemology, population ethics, experimental philosophy, decision theory, philosophy of science, and, of course, logic, a strong command of quantitative skills is essential. Even beyond these subfields, quantitative acumen is prized. In analytic philosophy in particular, papers with a lot of math and formalism are more likely to be taken seriously than comparable papers explained informally.
Do you have any data on philosophy PhDs often having an undergraduate background in math or science? I, for example, chose many courses in mathematical economics, data analysis, and social science research methodology to support my philosophy degree, but in my experience this is very uncommon. That said, this depends a lot on the region, and the USA and UK surely differ from continental Europe in this respect.
I subscribe to CCC’s newsletter and these are the latest stories in the newsletters:
The climate debate needs less hyperbole and more rationality
The media got it wrong on the new US climate report
Don’t panic over U.N. climate change report
Don’t blame global warming for hurricane damages
The Paris climate treaty fails to fight global warming
I just wanted to provide more context on what they are focusing on.
The same is the case with the effective altruism course at the LSE, titled Effective Philanthropy: Ethics and Evidence. It was discontinued because the teacher, Luc Bovens, moved to another institution. I don’t know about UCL.
This post was actually published in 2018 for the first time, but for some reason I wasn’t able to share the link with some people as it showed up as a draft. I resubmitted it and it has received some interest from the community again.
I think the longer-term evidence now indicates that the impact of this was lower than the short-term evidence led me to anticipate. I expected to have several highly engaged new members in the EA community over the longer term, but currently it appears that these people are only weakly involved with effective altruism. Hence, I would say that the cost-effectiveness of this project was not high. There may also be some indirect effects related to marketing and reaching more people, but I don’t have a good understanding of those.
Regarding your 2nd question, I think it is an important argument, and it’s good that some people are thinking through the arguments both for and against working on EU AI governance. That said, there are many ways for EU AI governance to play a major role regardless of whether the EU is an AI superpower. Some of these are mentioned in the post you referred to, like the Brussels Effect and the excellent opportunities for policy work right now. Other ideas appear in the comments under the post about the EU not being an AI superpower, like the value of policy experimentation in the EU and its role in the semiconductor supply chain. Personally, I am much better placed to work on EU AI governance than on this type of work in the US, China, or elsewhere. Even if other regions were more important in absolute terms, given how neglected this space is, I think the EU matters a lot. And many other Europeans would be much better placed to work on this than to, say, try to become Americans.
Thank you for the questions. I think the biggest bottleneck right now is that very few people work on the issues we are interested in (listed here). We are trying to address this by hiring a new person, but the problems are vast and there’s a lot more room for additional people. Another issue is the lack of policy research that considers longer-term implications while remaining very practical. We are happy that, in addition to the Future of Life Institute, a few other organizations such as the Centre for the Governance of AI and the Centre for Long-Term Resilience are contributing more here or starting to do so. I’m not sure about the next 5-10 years, so I’ll leave that to someone else who might have some tentative answers.
I think this accusation is uncalled for. There are more statistics in the report I linked to, including things like citation impact. But a comprehensive overview of European AI research is, of course, very welcome.
If someone can’t apply right now due to other commitments, do you expect there to be new roles for generalist research analysts next year as well? What are the best ways one could make oneself a better candidate meanwhile?
For what it’s worth, according to Artificial Intelligence Index published in 2018:
Europe has consistently been the largest publisher of AI papers — 28% of AI papers on Scopus in 2017 originated in Europe. Meanwhile, the number of papers published in China increased 150% between 2007 and 2017. This is despite the spike and drop in Chinese papers around 2008.
(I’d post the graphs here, but I don’t think images can be inserted into comments.)
Thank you for writing this! This is a useful overview of active groups for me, because I intend to move to London in September to study at LSE and now need to think about ways to engage with the community there.
“You should not do a PhD just so you can do something else later. Only do a PhD if this is something you would like to do, in itself.”
Why do you think this is the case? For example, based on my own search, nearly 60% of researchers in European think-tanks have PhDs, and that proportion is higher for senior research roles and more academic think-tanks. This does not account for the less measurable benefits of a PhD, such as being taken more seriously in policy discussions. Isn’t it possible that 4-6 years of PhD work gives you more impressive career capital than the same amount of time spent progressing from junior roles to slightly more senior ones?
Here’s an article by 80,000 Hours literally titled “Advice for undergraduates”. It does not answer all of your questions, but hopefully it helps a little bit.
William MacAskill says the following in a chapter in The Palgrave Handbook of Philosophy and Public Policy:
As defined by the leaders of the movement, effective altruism is the use of evidence and reason to work out how to benefit others as much as possible, and the taking of action on that basis. So defined, effective altruism is a project rather than a set of normative commitments. It is both a research project—to figure out how to do the most good—and a practical project, of implementing the best guesses we have about how to do the most good.
But then he continues to highlight various normative commitments, which indicate that it is, in addition to being a question, an ideology:
The project is: • Maximizing. The point of the project is to try to do as much good as possible. • Science-aligned. The best means to figuring out how to do the most good is the scientific method, broadly construed to include reliance on both empirical observation and careful rigorous argument or theoretical models. • Tentatively welfarist. As a tentative hypothesis or a first approximation, goodness is about improving the welfare of individuals. • Impartial. Everyone’s welfare is to count equally.
Let’s face it. Long-termism is not very intuitively compelling to most people when they first hear of it. Not only do you have to think in very consequentialist terms, you also have to be extremely committed to acting and prioritizing on the basis of fairly abstract philosophical arguments. In my view, that’s just not very appealing—sometimes even off-putting—if you’ve never even thought in terms of cost-effectiveness or total-view consequentialism before.
I agree. Because of this, the 2nd edition of the EA handbook doesn’t seem appealing at all as an EA introduction. I don’t want to hijack this thread, but along these lines, what do you think about the following content as an introduction to effective altruism?:
Week 1:
MacAskill’s intro: “How can you do the most good?” (14 pages)
MacAskill’s 1st chapter: “Just how much can you achieve?” (11 pages)
Addition: “Famine, Affluence, and Morality”: http://personal.lse.ac.uk/ROBERT49/teaching/mm/articles/Singer_1972Famine.pdf (15 pages)
Week 2:
MacAskill’s 2nd chapter: “How many people benefit, and by how much?” (14 pages)
MacAskill’s 3rd chapter: “Is this the most effective thing you can do?” (12 pages)
Addition: “How can we do the most good for the world”: https://www.ted.com/talks/will_macaskill_how_can_we_do_the_most_good_for_the_world (12 min)
Week 3:
MacAskill’s 4th chapter: “Is this area neglected?” (12 pages)
MacAskill’s 5th chapter: “What would have happened otherwise?” (12 pages)
Addition: “Prospecting for Gold”: https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/
Week 4:
MacAskill’s 6th chapter: “What are the chances of success and how good would success be?” (21 pages)
Addition: Introductions to expected value theory: https://concepts.effectivealtruism.org/concepts/expected-value-theory/
Week 5:
MacAskill’s 7th chapter: “What charities make the most difference?” (24 pages)
Addition: Read one review from here: https://animalcharityevaluators.org/charity-reviews/all-charity-reviews/ and skim GiveWell’s methodology: https://www.givewell.org/how-we-work
Week 6:
MacAskill’s 8th chapter: “How can consumers make the most difference?” (19 pages)
Addition: “Conscious consumerism is a lie. Here’s a better way to help save the world”: https://qz.com/920561/conscious-consumerism-is-a-lie-heres-a-better-way-to-help-save-the-world/?fbclid=IwAR0J-ftZl_j9jsRIP6AIOagFovM-jBLFYj80go4L9kAW41IwITMOFeLZLyg
Week 7:
MacAskill’s 9th chapter: “Which careers make the most difference?” (32 pages)
Addition: Explore 80,000 Hours’ career guide: https://80000hours.org/career-guide/
Week 8:
MacAskill’s 10th chapter: “Which causes are most important?” (17 pages)
Addition: Explore the list of the most pressing problems: https://80000hours.org/articles/cause-selection/
Week 9:
MacAskill’s conclusion: “What should you do right now?” and “The five key questions of effective altruism” (8 pages)
Addition: Reflect on the stipend
We are about to run our stipend with this content in mind. Compared to your reading list, I feel that the content we have planned is more beginner-level. What do you think? What seems to be missing in terms of EA basics?
So here are some of the main takeaways from this for me:
Involve the main volunteers/group members in the strategy development process.
Use the strategy template made available by CEA.
Share EA Denmark’s list of project ideas with other community builders.
We recently had a several-hour strategy meeting. I can attest that when community members participate in developing the strategy, they better understand what’s going on and feel more motivated, since they are now actually responsible for the vision. And they can come up with wonderful ideas that you hadn’t thought of!
We have also used a simple three-dimensional thinking tool for deciding which projects and activities to focus on. Every participant scores each activity on some scale according to how many resources it requires, how good the best outcome it could produce would be, and how strong the leader’s personal fit is for that particular activity.
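To make the scoring concrete, here is a minimal sketch of how such a tool could work. The function names, the 1-5 scale, and the equal weighting of the three dimensions are my own illustrative assumptions, not a description of the actual tool the group used:

```python
# Hypothetical sketch of a three-dimensional project-scoring tool:
# each participant rates an activity on resources required (lower is
# better, so it gets inverted), best plausible outcome, and the
# leader's personal fit, all on a 1..scale_max scale.

def score_activity(resources_required, best_outcome, personal_fit, scale_max=5):
    """Combine three ratings into a single score for one participant."""
    # Invert resources so that cheaper activities score higher.
    inverted_resources = scale_max + 1 - resources_required
    return inverted_resources + best_outcome + personal_fit

def aggregate(ratings):
    """Average the scores given by several participants for one activity."""
    scores = [score_activity(r, o, f) for (r, o, f) in ratings]
    return sum(scores) / len(scores)

# Example: three participants rate a single candidate activity.
ratings = [(2, 5, 4), (3, 4, 4), (2, 4, 5)]
print(round(aggregate(ratings), 2))  # average score across participants
```

Activities can then simply be ranked by their average score, though in practice a discussion of why the scores diverge is probably at least as valuable as the ranking itself.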
You received almost 100 applications as far as I’m aware, but were able to fund only 23 of them. Some other projects were promising according to you, but you didn’t have time to vet them all. What other reasons did you have for rejecting applications?