For example, I emailed the following to a friend who’d enjoyed reading Doing Good Better and wanted to learn more about EA, but hadn’t further engaged with EA or longtermism. He has a technical background and (IMO) is potentially a good fit for AI policy work, which influenced my link selection.
...
The single best article I’d recommend on doing good with your career is by 80,000 Hours, a non-profit founded by the Oxford professor who wrote Doing Good Better, incubated in Y Combinator, and dedicated to giving career advice on how to solve pressing global problems. If you’d prefer, their founder explains the ideas in this podcast episode.
If you’re open to some newer, more speculative ideas about what “doing good” might mean, here are a few thoughts on improving the long-run future of humanity:
Longtermism: Future people matter, and there might be lots of them, so the moral value of our actions is significantly determined by their effects on the long-term future. We should prioritize reducing “existential risks” like nuclear war, climate change, and pandemics that threaten to drive humanity to extinction and foreclose the possibility of a long and beautiful future.
Quick intro to longtermism and existential risks from 80,000 Hours
Academic paper arguing that future people matter morally, and we have tractable ways to help them, from the Doing Good Better philosopher
Best resource on this topic: The Precipice, a book explaining what risks could drive us to extinction and how we can combat them, released earlier this year by another Oxford philosophy professor
Artificial intelligence might transform human civilization within the next century, presenting incredible opportunities and serious potential problems
Elon Musk, Bill Gates, Stephen Hawking, and many leading AI researchers worry that extremely advanced AI poses an existential threat to humanity (Vox)
Best resource on this topic: Human Compatible, a book explaining the threats, existential and otherwise, posed by AI. Written by Stuart Russell, CS professor at UC Berkeley and author of the leading textbook on AI. Daniel Kahneman calls it “the most important book I have read in quite some time”. (Or this podcast with Russell)
CS paper giving the technical explanation of what could go wrong (from Google/OpenAI/Berkeley/Stanford)
How you can help by working on US AI policy, explains 80,000 Hours
(AI is less morally compelling if you don’t care about the long-term future. If you’d rather help people alive today, consider other causes: global poverty, animal welfare, grantmaking, or researching altruistic priorities.)
Improving institutional decision-making isn’t super straightforward, but could be highly impactful if successful. Altruism aside, you might enjoy Phil Tetlock’s Superforecasting.
80,000 Hours also wrote profiles for working in climate change and nuclear war prevention, among many other things
[Then I gave some info about two near-termism causes he might like: grantmaking, by linking to GiveWell and the Open Philanthropy Project, and global poverty, by linking to GiveDirectly and other GiveWell top charities.]
If anyone’s interested, here was my intro to grantmaking and global poverty:
...
If you’d prefer more mainstream ways of improving the world, here are some top organizations and job opportunities:
Grantmakers within effective altruism are researching the most impactful donation opportunities and giving billions to important causes.
GiveWell researches top donation opportunities in global health and poverty. Founded by ex-hedge fund analysts, they focus on transparency, detailed public writeups, and justifying their decisions to outsiders. You might like their cost-effectiveness model of different charities. They’re hiring researchers and a Head of People.
The Open Philanthropy Project funds a wider range of causes—land use reform, pandemic preparedness, basic science research, and many more—in their moonshot approach of “hits-based giving”. OpenPhil has billions to donate to its causes, because it’s funded by Dustin Moskovitz, co-founder of Facebook and Asana.
World-class organizations are working directly on all kinds of highly impactful problems (and they’re hiring! :P)
GiveDirectly takes money and gives it to poor people, no strings attached. They typically hire from top private sector firms and have an incredibly well-credentialed team. They’re recommended by GiveWell as an outstanding giving opportunity.
Effective global poverty organizations include many for-profits (Sendwave (jobs), TapTap Send (jobs)) and non-profits (Evidence Action (job), IDinsight (jobs)).
80,000 Hours has a big ol’ job board
(You’re probably not looking for a new job, but who knows, don’t mind my nudge)