We’re back with the first edition of what will now be a monthly newsletter.
You’ll get it on the first Thursday of the month.
On the forum, the newsletter doubles as an Open Thread.
Rock on!
The Team
Articles and Community Posts
80,000 Hours has published a podcast interview with alumnus Ben West, who expects to donate tens of millions to charity through tech entrepreneurship (and is currently hiring!).
Ben Todd talks about a way to have a big social impact that’s open to everyone and doesn’t require you to change your career.
Giving What We Can has created astonishing animations based on a recent paper in Nature, which show the incredible impact antimalarial bednets are having across Africa.
Watch the new debate between Giles Fraser and ‘Doing Good Better’ author Will MacAskill at Intelligence Squared, the ‘world’s premier forum for debate and intelligent discussion’.
Why do effective altruists support the causes we do? Michelle Hutchinson explains, and shows how we could find new ones.
Going In-Depth
Rob Wiblin tackles three common objections to working on global catastrophic risks and AI safety: He argues that it’s not too early to act, that the chances of success are not tiny and that computer science majors have, in fact, not convinced themselves that the best way to help the world is doing computer science research.
The Machine Intelligence Research Institute (MIRI) gives an accessible justification for their approach to AI alignment research.
80,000 Hours explains why you should evaluate small, non-profit startups just like regular startups – by focusing on their potential for growth rather than their marginal impact.
In the Media
Bloomberg has heard the news: Billionaires like Bill Gates, Elon Musk and Peter Thiel are now maximizing the impact of their donations according to effective altruist principles.
‘Big-name scientists worry that runaway artificial intelligence could pose a threat to humanity. Beyond the speculation is a simple question: Are we fully in control of our technology?’ – The Washington Post profiles Nick Bostrom and the emerging field of AI alignment.
Why you should take the Giving What We Can pledge this January: Akilnathan Logeswaran and Daniel Selwyn have published a beautifully written article in the Huffington Post arguing for this lifetime resolution.
Updates from EA Organizations
80,000 Hours
80k has released their own giving recommendations, which supplement GiveWell’s, for the advanced donor.
They’ve also finished their annual impact survey and hit their end-of-year growth targets. Altogether, they’ve grown the rate of significant plan changes by 600% over 2015, from 2 per week to 14 per week.
GiveWell
On the blog, Open Philanthropy Project staff made suggestions for individual donors interested in supporting organizations working in the Open Philanthropy Project’s cause areas.
Giving What We Can
Giving What We Can is running their annual pledge event – they’ve had 136 new members join so far over December and January. If you haven’t joined their community yet, this is a great time to do so! If you have already, here are some thoughts from Eleanor on how you can increase your impact.
Jacy Reese has interviewed Giving What We Can director Michelle Hutchinson, kicking off a series of interviews with high-impact people from the community.
Through the efforts of a team of individual effective altruists who made videos for Project for Awesome, AMF became one of the most-voted-for charities and received $25,000!
Ozzie Gooen has released the beta version of Guesstimate, a tool for estimating things that are uncertain. He designed it partly to help fellow effective altruists calculate the expected impact of different options.
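If you’re curious what this kind of estimate involves, here is a minimal Monte Carlo sketch in Python of the general idea behind tools like Guesstimate – propagating rough subjective ranges through a simple model. This is not Guesstimate’s own code or API, and every number and variable name below is a made-up assumption for illustration only.

```python
# A minimal sketch of Monte Carlo estimation under uncertainty, in the
# spirit of tools like Guesstimate. NOT Guesstimate's code; all
# quantities below are invented placeholders for illustration.
import random

def sample_impact():
    # Each input is drawn from a rough subjective range (uniform for simplicity):
    donation = random.uniform(1000, 3000)        # hypothetical dollars donated per year
    cost_per_net = random.uniform(4, 8)          # hypothetical dollars per bednet delivered
    lives_per_net = random.uniform(0.001, 0.003) # hypothetical lives saved per net
    return (donation / cost_per_net) * lives_per_net

samples = sorted(sample_impact() for _ in range(100000))
print(f"median estimate: {samples[50000]:.3f} lives/year")
print(f"90% interval: {samples[5000]:.3f} to {samples[95000]:.3f} lives/year")
```

The point is that instead of a single best-guess number, you end up with a distribution over outcomes – which is roughly what Guesstimate lets you build and visualize without writing code.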
Timeless Classics
An all-time favorite: Peter Singer’s world-famous essay Famine, Affluence, and Morality asks whether helping the poor is not merely good, but a moral duty.
What is Effective Altruism?
EA is a growing social movement founded on the desire to make the world as good a place as it can be, the use of evidence and reason to find out how to do so, and the audacity to actually try.
A community project of the Centre for Effective Altruism, a registered charity in England and Wales (Registered Charity Number 1149828). Centre for Effective Altruism, Oxford Uehiro Centre for Practical Ethics, Littlegate House, St Ebbes Street, Oxford OX1 1PT, UK
(Does anyone see open threads this late? Say if so!)
I just found out that the Open Philanthropy Project funded http://waitlistzero.org/ , a small charity started by two EAs whom I worked with in its early days. OPP gave $200k, presumably covering Waitlist Zero’s whole budget (far more than it used to be).
This suggests more people creating charities could get fully funded by OPP. Does anyone have any insight into this? Claire Zabel of GiveWell/OPP said:
‘It’s possible. It’s best if the organization fits into one of our focus areas (openphilanthropy.org/focus); we’ll have an update on these in the next month or so.’
nice!
Ok, so this doubles as an open thread?
I would like some light from the EA hivemind. For a while now I have been mostly undecided about what to do with my 2016-2017 period.
In mid-2015, Roxanne and I even created a spreadsheet so I could evaluate my potential projects and drop most of them. My goals are basically an oscillating mixture of:
1) Making the world better by the most effective means possible.
2) Continuing to live in Berkeley.
3) Receiving more funding.
4) Not stopping my PhD.
5) Using my knowledge and background to do (1).
This has proven an extremely hard decision to make. Here are the things I dropped because they were incompatible with my available time or with goals other than 1, but that I still think other EAs who share goal 1 should carry forward:
(1) Moral Economics: From its start, Moral Econ has been an attempt to instill a different mindset in individuals; my goal has always been to have other people pick it up and take it forward. I currently expect this to happen, and will go back to it only if it seems likely to fall apart.
(2) Effective Giving Pledge: This is a simple idea I applied to EA Ventures with, though I actually want someone else to carry it out. The idea is simply to copy the Gates Giving Pledge website for an Effective Giving Pledge, in which wealthy benefactors commit to donating according to impact, tractability and neglectedness. If 3 or 4 signatories of the original pledge signed it, it would be the biggest shift of resources from the non-EA money pool to the EA money pool in history.
(3) Stuart Russell AI-safety course: I was going to spend some time helping Stuart create an official Berkeley AI-safety course. His textbook is used in 1,500+ universities, so if the trend caught on, this would be a substantial win for the AI safety community. A non-credit course was offered last semester, in which some MIRI researchers, Katja, Paul, I and others were going to present. However, it was very poorly attended and was not official, and it seems to me that the relevant metric is the probability that this would become a trend.
(4) X-risk dominant paper: What considerations, if true, would dominate our priority space even above X-risk? Daniel Kokotajlo and I began examining that question, but considered it too socially costly to publish anything about it, since many scenarios are too weird and could put off non-philosophers.
These are the things I dropped for reasons other than the EA goal 1. If you are interested in carrying any of them forward, let me know and I’ll help you if I can.
Below, by contrast, are the things between which I am still undecided – the ones I want help in deciding:
1) Convergence Analysis: The idea here is to create a Berkeley-affiliated research institute that operates mainly on two fronts: 1) strategy for the long-term future, and 2) finding crucial considerations that have not yet been considered or researched. We have an interesting group of academics, and I would take a mixed CEO/researcher position.
2) Altruism: past, present, propagation: This is a book whose table of contents I have already written; what remains is further research and spelling out each of the 250 sections I have in mind. It is very different in nature from Will’s book or Singer’s book. The idea here is not to introduce EA, but to reason about the history of cooperation and altruism that led to us, and where it can be taken in the future, including by the EA movement. This would be a major intellectual undertaking, likely consuming my next three years and doubling as a PhD dissertation – perhaps tripling as a series of blog posts, for quick feedback loops and reliable writer motivation.
3) FLI grant proposal: Our proposal aimed to improve our understanding of psychological theories of human morality, in order to facilitate later work on formalizing moral cognition for AIs – a subset of the value-loading and control problems of artificial general intelligence. We didn’t win, so the plan here would be to find other funding sources for this research.
4) Accelerate the PhD: For that I need to complete three field statements – one on the control problem in AI with Stuart, one on altruism with Deacon, and one to be determined – after which only the dissertation would remain on the to-do list.
All these plans scored high enough in my calculations that it is hard to decide between them. Accelerating the PhD has a major disadvantage: it does not increase my funding. The book (via blog posts or not) has a strong advantage in that I think it will contain enough new material to satisfy goal 1 best of all; it is probably the best option for the world if I manage to see it through and do it well. But again, it doesn’t increase funding. Convergence has the advantage of working alongside very smart people, and if it takes off well enough, it could solve the problems of continuing to live in Berkeley and of financial constraints all at once, putting me in a stable position to keep doing research on relevant topics almost indefinitely, instead of having to make ends meet by demoting the EA goal substantially among my priorities. So: very high stakes, but uncertain probabilities.
If AI is (nearly) all that matters, then the FLI grant would be the highest impact, followed by Convergence, the book and the acceleration.
In any event, all of these are incredible opportunities which I feel lucky even to have in my consideration space. It is a privilege to be making this choice, but it is also very hard. So, conditional on the goals I stated before – 1) making the world better by the most effective means possible, 2) continuing to live in Berkeley, 3) receiving more funding, 4) not stopping my PhD, and 5) using my knowledge and background to do (1):
I am looking for some light, some perspective from the outside that will make me lean one way or another. I have been uncomfortably indecisive for months, and maybe your analysis can help.
Three of your projects rank highest on personal interest. I would attempt a more granular analysis here, keeping in mind your current uncertainty about how well your interest in each will hold up over time.
Some ideas:
Pretend that you are handing off the project to someone else and writing a short guide to what you think their curriculum of research and work will look like.
Brainstorm the key assumptions underlying the model that assigns a value to each project, and see whether any of those key assumptions are very cheaply testable (this can be a surprising exercise, IME).
Premortem (murphyjitsu) each project and compare the likelihood of different failure modes.
Thanks for sharing. :)
If there’s a way to encourage Russell to write or teach a bit more about AI safety (even just in his textbook, or maybe in other domains), I would think that would be quite important. But you probably have a better picture of how (in)feasible that is.
Sorry that I don’t have strong opinions on the other options...
All of these seem potentially valuable. I suspect the best choice is the one you’ll be most motivated to pursue. My suggestion is that you should consider who your ‘customers’ are for each project, and figure out which group you’d most like to work with and generate deliverables for.
Also, some of these may lend themselves better to intermediate/incremental deliverables, which would be a big plus.
All of the above is fully general advice – my low-resolution take on your specific situation is that Convergence Analysis seems by far the highest-leverage option (and certainly the largest-variance one), though the fact that you seem unsure whether to dive down that path may imply there are some difficult hurdles or complications along it that you’re dreading?
As Luke and Nate would tell you, the shift from researcher to CEO is a hard one to make, even when you want to do good. As Hanson puts it: ‘Yes, thinking is indeed more fun.’
I have directed an institute in Brazil before, and even that was somewhat of a burden.
The main reason for the high variance, though, is that setting up an institute requires substantial funding. The people most likely to fundraise would be me, Stephen Frey (who is not on the website) and Daniel, and fundraising is taxing in many ways. It would be great to have, for instance, the REG fundraisers (Liv, Ruari, Igor, wink wink) aid us – either by fundraising for us, by finding someone who will, or by teaching us how.
Money speaks. And it spells high variance.
[Here’s some introductory verbiage so nothing hooky shows up under ‘Recent comments’]
ATTENTION: please read this:
To help us test how many people see comments on the month’s open thread when it’s got old, please upvote this comment if you see it. (I promise to use the karma wisely.)
In the preferences page there is a box for “EA Profile Link.” How does it work? That is, how do other users get from my username to the profile? I linked my LessWrong profile but it doesn’t seem to have any effect...
Hey Squark, there’s a guide to using it (and other features of the forum) at http://bit.ly/1OHRd1X . As it explains, the effect is that you get a little link to it next to your username when you comment or post.
When you tried entering your LessWrong profile you should have got the warning below about other links not working. If that didn’t happen, can you tell me what browser and OS (Mac, Linux, etc.) you’re using? Thanks!
As the help text below it says, that’s specifically for EA Profiles (which are the profiles at that link). It’ll only accept a link to one of those; if you don’t already have one, you should create one!
What are the ways that we can spread EA to others? Is there a list, and are there some outreach methods that are particularly good?
https://www.facebook.com/groups/effective.altruists/permalink/963258230397201/
I think this Facebook discussion could be really useful for you. It includes both my personal impressions of how to do outreach (which is fairly highly upvoted, suggesting that other people share my experiences to some extent) as well as links to longer, more sophisticated investigations done by other EAs and EA organizations.
“Giving What We Can is running their annual pledge event – they’ve had 136 new members join so far over December and January. If you haven’t joined their community yet, this is a great time to do so!” 153 now. :D
One thing we’d be interested in seeing is if on Jan. 10th we can reach a record number of pledges on the same day. So far the number to beat is 11 (which is a fluke).
Nice Vox article on climate change – I felt that the argument was robust. Climate change may not end civilization, but if humans lose 5% of their vitality over the next 10,000 years, that is terrible.
Can mosquito nets be effective at preventing the further spread of the Zika virus? Can they be used to fight Zika the same way AMF uses them to fight malaria?
Also, are there charities working against the Zika virus? What would an effective charity dedicated to fighting Zika look like?
There might be an opportunity to develop a gene drive targeting Aedes mosquitoes, a vector for Zika. Harvard University Effective Altruism is running a Philanthropy Advisory Fellowship and researching funding opportunities for gene drives. They will be publishing their findings next month.
I’d just like to say that this is the first month I’ve made an effort to read the EA Newsletter, and it has been a hugely rewarding experience. Although I’m not sure I’ll get through everything on there before the next one, it’s really highlighted loads of great writing and speaking that I would not otherwise have sought out.
Interested in thoughts on this question:
Are chickens equally, more, or less susceptible to the hedonic treadmill than humans? Are pigs equally, more, or less susceptible than humans?
I have yet to find good research on this. However, if anyone out there believes that farm animals are affected by the hedonic treadmill, and that farm animal suffering causes great disvalue, prioritizing donations to the Humane Slaughter Association (HSA) might be a good idea. Part of HSA’s mission is to reduce the suffering of animals during slaughter, and I find it unlikely that farm animals hedonically adapt during their short and often intensely painful deaths. It seems more likely that a chicken hedonically adapts during its time in a battery cage.
Brian Tomasik has a good piece on HSA here.