Risto is a Policy Researcher at FLI, focused primarily on researching AI policy-making to maximize the societal benefits of increasingly powerful AI systems. Previously, Risto worked for the World Economic Forum on a project about positive AI economic futures, did research for the European Commission on trustworthy AI, and provided research support on European AI policy at the Berkeley Existential Risk Initiative. He completed a master’s degree in Philosophy and Public Policy at the London School of Economics and Political Science and holds a bachelor’s degree from Tallinn University in Estonia.
Risto Uuk
[Question] What Are Some Disagreements in the Area of Animal Welfare?
If you’re a thoughtful American interested in developing expertise and technical abilities in the domain of AI policy, then this may be one of your highest impact options, particularly if you have been to or can get into a top grad school in law, policy, international relations or machine learning. (If you’re not American, working on AI policy may also be a good option, but some of the best long-term positions in the US won’t be open to you.)
What do you think about similar work within the European Union? Could it potentially be a high-impact career path for those who are not American?
This post increased my interest in visiting the Boston area. Unfortunately, I cannot come to the EAGx this year, but perhaps another time. I’m quite surprised that you have an issue with brain drain, as the area seems very impressive, with top universities, lots of people interested in EA, and even a few great EA-aligned organizations. Do you have other ideas, besides a full-time paid community builder, for addressing that?
[Question] What Courses Might Be Most Useful for EAs?
Nice idea. I wrote my bio in third person like you did, even though on my website I have it in first person: https://ristouuk.com. Usually, I feel weird about the third-person narrative when I’m the one talking about myself, but it feels right for the forum.
As an application of this model, the Global Priorities Project estimates that research into the neglected tropical diseases with the highest global DALY burden (diarrheal diseases) could be 6x more cost-effective, in terms of DALYs per dollar, than the 80,000 Hours recommended top charities.
What are 80,000 Hours’ recommended top charities? I think you mean some other organization here.
It would be nice if someone updated it regularly and added a note at the top of the page about when it was last updated. For example, according to Julia Wise there were 3,855 Giving What We Can members at the beginning of 2019, whereas the figure here is an outdated 1,800+ members.
Let’s face it. Long-termism is not very intuitively compelling to most people when they first hear of it. Not only do you have to think in very consequentialist terms, you also have to be extremely committed to acting and prioritizing on the basis of fairly abstract philosophical arguments. In my view, that’s just not very appealing—sometimes even off-putting—if you’ve never even thought in terms of cost-effectiveness or total-view consequentialism before.
I agree. Because of this, the 2nd edition of the EA Handbook doesn’t seem appealing at all as an introduction to EA. I don’t want to hijack this thread, but along these lines, what do you think about the following content as an introduction to effective altruism?
Week 1:
MacAskill’s intro: “How can you do the most good?” (14 pages)
MacAskill’s 1st chapter: “Just how much can you achieve?” (11 pages)
Addition: “Famine, Affluence, and Morality”: http://personal.lse.ac.uk/ROBERT49/teaching/mm/articles/Singer_1972Famine.pdf (15 pages)
Week 2:
MacAskill’s 2nd chapter: “How many people benefit, and by how much?” (14 pages)
MacAskill’s 3rd chapter: “Is this the most effective thing you can do?” (12 pages)
Addition: “How can we do the most good for the world”: https://www.ted.com/talks/will_macaskill_how_can_we_do_the_most_good_for_the_world (12 min)
Week 3:
MacAskill’s 4th chapter: “Is this area neglected?” (12 pages)
MacAskill’s 5th chapter: “What would have happened otherwise?” (12 pages)
Addition: “Prospecting for Gold”: https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/
Week 4:
MacAskill’s 6th chapter: “What are the chances of success and how good would success be?” (21 pages)
Addition: Introduction to expected value theory: https://concepts.effectivealtruism.org/concepts/expected-value-theory/
Week 5:
MacAskill’s 7th chapter: “What charities make the most difference?” (24 pages)
Addition: Read one review from here: https://animalcharityevaluators.org/charity-reviews/all-charity-reviews/ and skim GiveWell’s methodology: https://www.givewell.org/how-we-work
Week 6:
MacAskill’s 8th chapter: “How can consumers make the most difference?” (19 pages)
Addition: “Conscious consumerism is a lie. Here’s a better way to help save the world”: https://qz.com/920561/conscious-consumerism-is-a-lie-heres-a-better-way-to-help-save-the-world/?fbclid=IwAR0J-ftZl_j9jsRIP6AIOagFovM-jBLFYj80go4L9kAW41IwITMOFeLZLyg
Week 7:
MacAskill’s 9th chapter: “Which careers make the most difference?” (32 pages)
Addition: Explore 80,000 Hours’ career guide: https://80000hours.org/career-guide/
Week 8:
MacAskill’s 10th chapter: “Which causes are most important?” (17 pages)
Addition: Explore the list of the most pressing problems: https://80000hours.org/articles/cause-selection/
Week 9:
MacAskill’s conclusion: “What should you do right now?” and “The five key questions of effective altruism” (8 pages)
Addition: Reflect on the stipend
We are about to run our stipend program with this content in mind. Compared to your reading list, I feel that the content we have planned is more beginner-level. What do you think? What seems to be missing in terms of EA basics?
Thank you for writing this summary!
Altruism: Passionate about helping others
Effectiveness: Ambitious in their altruism, with a drive to do as much good as they can. Potential to be aligned with the central tenets of EA.
Potential: Excited to dedicate their career to doing good or to donate a significant portion of their income to charity
Open-mindedness: Open-minded and flexible, eager to update their beliefs in response to persuasive evidence
Enthusiasm: Willing and able to commit ~3-4 hours per week
Fit: How good a fit are they with the fellowship format? Will they be good in discussions? Will they do good work for the Impact Challenge?
I appreciate that you explicitly listed all the traits you were looking for in applicants. We have done this more intuitively, so it’s very useful to see the criteria made explicit. They align well with what we look for in applicants as well.
I subscribe to CCC’s newsletter, and these are the latest stories from it:
The climate debate needs less hyperbole and more rationality
The media got it wrong on the new US climate report
Don’t panic over U.N. climate change report
Don’t blame global warming for hurricane damages
The Paris climate treaty fails to fight global warming
I just wanted to provide more context on what they are focusing on.
If you were to organize an effective altruism course around William MacAskill’s book Doing Good Better, what additional readings would you give students to fill the gaps in the book?
This might be slightly off-topic, but you may have some insight into it. If a donor gives money to, for example, global health, they can find pretty concrete numbers about impact based on GiveWell’s estimates or information from specific organizations such as AMF. How can someone donating money to Meta justify those donations quantitatively and with concrete indicators?
Community Builders, Watch EAG Videos with Your Members
1. I prefer “we”.
2. I’m not sure what kind of references you are supposed to add here. Should they be accessible to everyone, or can books etc. be included as well? If the latter, I’d add Daniel Kahneman’s book Thinking, Fast and Slow to the list; it has good sections on these concepts (e.g. Kindle location 4220).
3. To me, it seems that the definitions of “inside view” and “outside view” are not clear enough, whereas the examples are very good. https://www.hybridforecasting.com/ had nice slides about this, but I’m not able to find their material to share here. Anyway, their definitions and explanations are the following:
Inside view: focus on the unique qualities of the case at hand.
Outside view: connect the case at hand to a reference class and rely on base rate information.
Reference classes refer to similar events from the past.
Base rates are relative frequencies of an outcome given a defined set. For example, the chance of selecting a red card from a deck of cards is 50% (a minimal worked version of this is sketched below).
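To make the base-rate idea concrete, here is a minimal worked version of the card example (the 26/52 split is just the standard composition of a deck, not something taken from the slides):

$$P(\text{red card}) = \frac{\text{number of red cards}}{\text{total number of cards}} = \frac{26}{52} = 0.5$$

An outside-view forecast about a new draw would start from this 0.5 base rate and only then adjust for any case-specific (inside-view) information.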
You didn’t mention anything about (a) the risk of becoming less altruistic in the future, (b) increasing your motivation to learn more about effective giving by giving now, and (c) supporting the development of a culture of effective giving. How much the giver learns over time isn’t the only consideration. These other considerations come from this forum post: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_summary/.
I feel that the book contains too much fluff, and even these commandments, while they appear useful, lack the specificity needed to act on them. Does anyone have other book recommendations or guidelines for improving one’s forecasting and probabilistic thinking? At the end of the day, it’s important to actually practice forecasting and thinking probabilistically, but specific guidance on how to do that would be useful. E.g. how do you actually arrive at 40/60 and 45/55, or even 43/57, probabilities?
Thanks for putting it on the EA Groups Resource Map! I think it would be better if the link pointed to the Google Docs document rather than to this forum post, because we might edit it in the future.
If someone can’t apply right now due to other commitments, do you expect there to be new generalist research analyst roles next year as well? What are the best ways to make oneself a stronger candidate in the meantime?
Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, the segment runs from 1:34:30 to 2:04:00, so about 30 minutes on risks from AI. Harris wasn’t at his best in that discussion, and Pinker came off as much more nuanced and evidence- and reason-based.
Agreed. Research and study groups, for example, seem to be a lot more useful than events. First and foremost, participants commit to longer-term attendance in advance, so you don’t need to persuade them to participate every time. I dislike having to personally invite people to events; I assume they don’t care enough about EA if a mere Facebook invitation isn’t enough to get them to come.
Regarding attendance, we recently organized a public AI safety event that was attended by roughly 80 people. When a former community builder heard that, he congratulated us, as it sounded like a big success to him. Of course, it was nice to have that many people come to the event, but compared to some of the more in-depth projects we had going on, it didn’t feel like as much of an accomplishment.
That said, how do you get feedback from your community on online content? Your newsletter, for example, could easily be much more valuable than events and even other in-person activities, but as far as I’m aware, very few people actually tell authors and content creators how much value they receive. For instance, you probably didn’t know this, but every month I find useful content for EA Estonia’s newsletter in EA London’s newsletter.