Risto is a Policy Researcher at FLI, focused primarily on AI policy research aimed at maximizing the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support on European AI policy at the Berkeley Existential Risk Initiative. He completed a master’s degree in Philosophy and Public Policy at the London School of Economics and Political Science and holds a bachelor’s degree from Tallinn University in Estonia.
Risto Uuk
Thank you, these are some really big questions! Most of them are beyond what we work on, so I’m happy to leave them to other people in this community and let their answers guide our own work. For example, the Centre for Long-Term Resilience published the Future Proof report, which cites a survey in which the median prediction of scientists is that general human-level intelligence will be reached around 35 years from now.
I’ll try to answer the last question about where our opinions might differ. Many academics and policymakers in the EU probably still don’t think much about the longer-term implications of AI; they either don’t expect AI progress to have as significant an impact (negative or positive) as we do, or don’t think it is reasonable to focus on it right now. That said, I don’t think there is necessarily a very big gap between us in practice. For example, many people who are interested in bias, discrimination, fairness, and other issues that are already prevalent can also be concerned about the more general-purpose AI systems that will become available on the market in the future, as these systems can present even bigger challenges and have more significant consequences in terms of bias, discrimination, fairness, and so on. In the paper On the Opportunities and Risks of Foundation Models, it was stated that, “Properties of the foundation model can lead to harm in downstream systems. As a result, these intrinsic biases can be measured directly within the foundation model, though the harm itself is only realized when the foundation model is adapted, and thereafter applied.”
Thank you, a lot of great questions. In response to question (3), some of our work focuses on EU member states as well. Because we are a small team, our ability to cover many member states is limited, but hopefully, with the new hire, we can do a lot more on this front. If you know anybody suitable, please let us know. For example, we have engaged with Sweden, Estonia, Belgium, the Netherlands, France, and a few other countries. Right now, the Presidency of the Council of the EU is held by France; next up are Czechia and Sweden, so work at the member state level in these countries is definitely important.
Regarding your second question, I think it is an important argument, and it’s good that some people are thinking through the arguments both for and against working on EU AI governance. That said, there are many ways for EU AI governance to play a major role regardless of whether the EU is an AI superpower. Some of these are mentioned in the post you referred to, like the Brussels Effect and the excellent opportunities for policy work right now. Others are mentioned in the comments under the post about the EU not being an AI superpower, like the value of policy experimentation in the EU and its role in the semiconductor supply chain. Personally, I am much better placed to work on EU AI governance than on this type of work in the US, China, or elsewhere in the world. Even if other regions were more important in absolute terms, considering how neglected this space is, I think the EU matters a lot. And many other Europeans would be much better placed to work on this than to, say, try to become Americans.
Thank you for the questions. I think that the biggest bottleneck right now is that very few people work on the issues we are interested in (listed here). We are trying to address this by hiring a new person, but the problems are vast and there’s a lot more room for additional people. Another issue is the lack of policy research that considers the longer-term implications while remaining very practical. We are happy that, in addition to the Future of Life Institute, a few other organizations such as the Centre for the Governance of AI and the Centre for Long-Term Resilience are contributing more here or starting to do so. I’m not sure about the next 5-10 years, so I’ll leave that to someone else who might have some tentative answers.
If anyone reading this post thinks that the arguments in favor outweigh the arguments against working on EU AI governance, then consider applying for the EU Policy Analyst role that we are hiring for at the Future of Life Institute: https://futureoflife.org/2022/01/31/eu-policy-analyst/. If you have any questions about the role, you can participate in the AMA we are running: https://forum.effectivealtruism.org/posts/j5xhPbj7ywdv6aEJc/ama-future-of-life-institute-s-eu-team.
Thank you for writing this summary!
I wanted to share this new website about the AI Act that we have set up with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, the annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you’d like to discuss the proposal or have suggestions for the website. We’d like it to be a good resource both for the general public and for people following the regulation more closely.
Yeah, I feel that too. My daughter is just 1 year and 9 months old. We are constantly high-fiving and fist-bumping.
Because (i) my wife wanted to have a child and I thought it would strengthen our relationship, (ii) I assumed my child was likely to become a happy person and possibly an EA, and (iii) I’d potentially have a very close friend for life.
Existential risks are not something they have worked on before, so my project is a new addition to their portfolio. I didn’t mention this, but I intend to have a section on other risks, depending on space. The reason climate change gets prioritized in the project is that the EU arguably has more of a role to play in climate change initiatives than in, say, nuclear risks.
Thanks for this database! I’m currently working on a project for the Foresight Centre (a think-tank at the Estonian parliament) about existential risks and the EU’s role in reducing them. I cover risks from AI, engineered pandemics, and climate change. For each risk, I discuss possible scenarios, probabilities, and the EU’s role. I’ve found a couple of sources in your database on these risks that I hadn’t seen before.
The same is the case with the effective altruism course at the LSE titled Effective Philanthropy: Ethics and Evidence. The reason is that its teacher, Luc Bovens, moved to another institution. I don’t know about UCL.
It would also be more informative to assess the risk of death from COVID-19. A ‘micromort’ normally stands for a one-in-a-million chance of death; the word combines ‘micro’ and ‘mortality’. If 1000 μCoV were a thousand-in-a-million chance of death, then engaging in activities with that level of risk would be quite reckless indeed: roughly comparable to climbing very high mountains or doing a couple of base jumps.
I have calculated COVID-19 risks for myself in the context of Estonia, where I currently am. My numbers right now are roughly: risk of getting COVID-19: 10^-4, and risk of dying of COVID-19: 4×10^-6 (about 4 micromorts). These are probably overestimates, as I’m young, healthy, and very cautious, and I’m using nasal swab data rather than antibody data, which indicates an infection rate about 10 times higher than the nasal swab data (and therefore a death rate in Estonia about 10 times lower). These numbers are of course smaller in Estonia than in the Bay Area.
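For anyone who wants to redo this kind of estimate with their own inputs, here is a minimal back-of-the-envelope sketch in Python. The specific rates are illustrative assumptions consistent with the figures above, not the exact Estonian data I used:

```python
# Back-of-the-envelope COVID-19 mortality risk in micromorts.
# All numbers below are illustrative assumptions, not verified data.

infection_risk = 1e-4           # assumed chance of catching COVID-19
infection_fatality_rate = 0.04  # assumed chance of dying if infected

death_risk = infection_risk * infection_fatality_rate
micromorts = death_risk * 1e6   # 1 micromort = one-in-a-million chance of death
print(f"Death risk: {death_risk:.1e} (~{micromorts:.0f} micromorts)")

# If antibody data suggest ~10x more infections than swab data capture,
# the implied fatality rate, and hence the micromort figure, drops ~10x.
print(f"Antibody-adjusted: ~{micromorts / 10:.1f} micromorts")
```

With these inputs the sketch reproduces the ~4 micromort figure; plugging in local case rates and an age-specific fatality rate would personalize it.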
Another interesting question here is what counts as too risky. I think my risk threshold is about that of traveling 10 km by motorbike, which is about 1 micromort. I would engage in such activities once in a while, but in general 1 micromort seems too large for activities that are easily substitutable. Can’t ride a motorbike for entertainment? Easy: just play some less risky sport and get just as much pleasure.
“You should not do a PhD just so you can do something else later. Only do a PhD if this is something you would like to do, in itself.”
Why do you think this is the case? For example, I have noticed in my own search that nearly 60% of people in research roles at European think-tanks have PhDs, and that proportion is higher for senior research roles and for more academic think-tanks. This does not account for the hard-to-measure benefits of a PhD, such as being taken more seriously in policy discussions. Isn’t it possible that 4-6 years of PhD work gives you more impressive career capital than the same amount of time progressing from more junior roles to slightly more senior ones?
This post was actually first published in 2018, but for some reason I wasn’t able to share the link with some people, as it showed up as a draft. I resubmitted it, and it has received some interest from the community again.
I think that the longer-term evidence now indicates that the impact of this was lower than the short-term evidence led me to anticipate. I expected to have several highly engaged new members in the EA community over the longer term, but currently it appears that these people are only weakly involved with effective altruism. Hence, I would say that the cost-effectiveness of this project was not high. But there may have been some indirect effects related to marketing and reaching more people indirectly, which I don’t have a good understanding of.
Why did you decide to move from Global Priorities Institute to 80,000 Hours?
Estonia actually has two local groups, one in Tallinn and the other in Tartu.
Do you think there’s more useful research to be done on this topic? Are there any specific questions you think researchers haven’t yet answered sufficiently? What are the gaps in the EA literature on this?
It might actually be more complicated than what you say here, alexherwix. If a research analyst role at the Open Philanthropy Project receives 800+ job applications, then you might reasonably conclude that it’s better to continue building a local community, even if you would be a great candidate for that role.
In addition, for the reasons you mention, every potential local community builder might be constantly looking for new job options in the EA community, which makes someone who doesn’t do that a highly promising candidate. Furthermore, being a community builder is actually a surprisingly difficult job.
Another consideration is that preparing for a specific job at an EA organization and gaining skills by leading a local group might be quite different. It might suit you better to do community-building tasks in a local context.
This is slightly relevant: in a recent 80,000 Hours blog post, they suggest the following for people applying for EA jobs:
We generally encourage people to take an optimistic attitude to their job search and apply for roles they don’t expect to get. Four reasons for this are that, i) the upside of getting hired is typically many times larger than the cost of a job application process itself, ii) many people systematically underestimate themselves, iii) there’s a lot of randomness in these processes, which gives you a chance even if you’re not truly the top candidate, and iv) the best way to get good at job applications is to go through a lot of them.
Thank you for the questions. Regarding emotion-based advertising, you might find our recent op-ed about AI manipulation in EURACTIV (a top EU policy media network) relevant and interesting: The EU needs to protect (more) against AI manipulation. In it, we urge EU policymakers to expand the definition of manipulation and to consider societal harms from manipulation in addition to individual psychological and physical harms. And here is a slightly longer version of the same op-ed.