I want to publish several posts on this forum in the coming weeks, so this is an open call for reviewers. I believe it’s more important to get the information out there than for me to be the one to publish it, so for topics on which I have insufficient content or information, I’m also seeking coauthors. I may draft some of these posts in Google Docs or another word processor before I publish them; if you’d like to review a draft that way, send me a private message with your email address. Otherwise, comment below with which posts you’d be willing to review, or just comment if you’re generally willing to review any of them. Here’s the list:
Does It Make Sense to Make A Multi-Year Donation Commitment to A Single Organization?
Essentially, an expansion of this already published comment.
What Doesn’t Count As Effective Altruism?
Rob Wiblin presented a talk at the 2014 Effective Altruism Summit entitled ‘What is Effective Altruism?’ Posting a summary of the whole talk on this forum seems redundant, but near the end Mr. Wiblin covered what, at least from his perspective and that of the Centre for Effective Altruism, is disqualified from counting as effective altruism. I believe this could make a good post. If the idea raises red flags in your mind about possible controversy, I anticipate that, and you’re welcome to review the post before I publish it.
Neglectedness, Tractability, and Importance/Value
The idea of heuristically identifying a cause area based on these three criteria was more or less a theme of the 2014 Effective Altruism Summit. This three-pronged approach was independently highlighted by Peter Thiel, not just for non-profit work but for entrepreneurship and innovation more generally, and by Holden Karnofsky, as the basis for how the Open Philanthropy Project asks which cause areas to consider. Several months ago I discussed with Owen Cotton-Barratt publishing a post on this subject, or perhaps coauthoring one. That hasn’t happened from either of us yet, so I’ll definitely be doing it, and I’ll be seeking input from you as well.
Effective Collaboration
Michael Vassar gave a short lightning talk at the 2014 Effective Altruism Summit on how organizations and individuals within the effective altruism movement might collaborate better. In his opinion, there is or was a dearth of such collaboration within the movement, and that’s a problem. I’d like to interview or contact Mr. Vassar about this, as my notes are incomplete. If I can’t, I likely won’t publish this post unless others come forward with their detailed perspectives on the issue.
Volunteer and Human Resource Coordination
This would be a follow-up to the above post, with the possible intent to launch or coordinate a project. Vassar noted as an aside that the effective altruism movement might benefit greatly from something like a COO shared between organizations, or a kind of super-secretary: a person, perhaps full-time, completely dedicated to getting all of effective altruism’s logistical ducks in a row. This seems an important intermediary role. It may be fitting for the Centre for Effective Altruism to organize it. Just in case, though, this post may also survey .impact and other effective altruist coalitions, in an effort towards greater coordination and communication between everyone.
Crowdfunding and Effective Altruism
This would be a post exploring how to use crowdfunding effectively, how it has previously been used for effective causes around the world, and what future potential it may hold for effective altruism. As I write this, I realize the post would also need to differentiate crowdfunding from normal fundraising, and lay out the advantages and disadvantages of crowdfunding relative to it. If you have experience organizing either normal fundraisers or crowdfunding campaigns, your input would be especially appreciated.
What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism
With individuals such as Elon Musk and Peter Thiel making donations to effective organizations and cause areas to the tune of millions of dollars, and with Good Ventures soon to throw tens, perhaps hundreds, of millions of dollars at effective causes and charities, I anticipate they may exhaust the giving opportunities presently available and sensible for most of us. Most of us won’t become multi-millionaires, presumably. Even for those donating four- or five-figure sums each year, an extremely high-net-worth donor or foundation may render redundant the efforts of tens or hundreds of other effective altruists earning to give. Whether it’s funding our currently recommended charities past the point of room for more funding in one fell swoop, or the most effective cause areas, such as policy advocacy, being tractable only with huge donations, this poses an issue. I feel it may pose an identity crisis for effective altruism, and may change how, e.g., 80,000 Hours recommends effective altruists enter earning to give as a career.
Reevaluating Earning to Give
This post would be related to the above and its implications for earning to give. I’d also be seeking arguments both for and against earning to give as a career option worth pursuing, from within the effective altruism movement but not from 80,000 Hours.
Member Perspectives on 80,000 Hours
This post would be a retrospective and a set of critiques on how various members of 80,000 Hours think of its performance. This could range from general satisfaction with the organization to measured evaluations of specific outcomes from 80,000 Hours. It would deliberately seek input from others who, like myself, have no affiliation with 80,000 Hours beyond latent membership.
“What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism”
I’ve been wondering the same. But I’ve got a feeling that top-tier philanthropists deliberately restrict their giving to at most ~50% of the room for more funding, both to encourage smaller donors, and because they only want to support things in proportion to their popular appeal. The latter also explains the motivation for genuinely restricted donation matching.
These are all good points about normal philanthropy. However, I’m still concerned, because effective altruism isn’t normal philanthropy or charitable giving. Thanks for responding, as this spurs me to state my case for why effective altruism is a unique movement for which we might need to take special considerations. I count explaining my rationale here in dialogue as drafting my essay on the topic.
For its classic charity recommendations, GiveWell is rigorous, and it judges its top charities to have hit room-for-more-funding issues at the point at which one of those charities receives, e.g., ≥ $10 million USD in a single year. The demotion of the AMF from top-recommended charity in 2013 is an example of this. A foundation like Good Ventures could fund these top charities to the point at which none of them is the best marginal donation target. From there, GiveWell may be at a loss, for the present, to find the next best set of charities to recommend, and with it, effective altruism at large might be at a loss. Lots of people, myself and others I’ve observed included, are uncertain enough about the best donation target that we’re too reluctant to make three- or four-figure donations to any other charities.
Additionally, the Open Philanthropy Project seeks to release in the next year recommendations for Good Ventures to support efforts to reform criminal justice or immigration policy in the United States, or to fund large-scale research efforts. From the perspective of a foundation like Good Ventures, such efforts could do good on a massive scale, and are worth funding even if it takes a million dollars or more to discover whether any good can be achieved. From the perspective of the average supporter of effective altruism, such an opportunity is backed by evidence less robust than GiveWell’s classic recommendations, and entails much more risk. A multi-million-dollar foundation can afford much higher risks, to reap much higher rewards, than individuals can.
In conjunction, I worry these two issues may squeeze us smaller donors out. If donating is the most obvious effective way of doing good, but it becomes redundant for small-to-medium donors to give in the name of the best altruistic opportunities, we’re at a loss. Effective altruism is dedicated to seeking the best ways of doing good. If lots of us build our careers on donating to the best causes, but our financial contributions become negligible, what next?
We may reach the point where earning to give is no longer the best common recommendation for doing good, at which point 80,000 Hours and effective altruism at large may be giving expired advice to several hundred individuals. At that point, advising individuals to seek careers that will allow them to donate more may no longer be the best recommendation. Additionally, I feel it would be irresponsible of effective altruism to recommend that those aspiring to do the most good they can pursue a career of earning to give, when the complete picture begins to tell us that isn’t their best option.
Here my argument bleeds into my idea for “Reevaluating Earning to Give”, which is a related but separate topic.
What Role Do Small-to-Medium Donors Play In the Future of Effective Altruism
I think this fits into a bigger picture. To punch above your weight in terms of impact, you need to know something (or have a skill) that most other people don’t. Currently the thing you have to know is “there’s this thing called EA and earning to give”. As that meme spreads, you’d expect its impact to dwindle, assuming an upper bound on the total amount of good that can be done given current resources.
(number of earning-to-givers) × (average good done by earning to give) ≤ (total amount of good available to be done).
The same equation applies to “knowing about everything that’s going on inside EA”, so creating better memes than earning to give doesn’t appear to solve the problem.
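As a toy numerical sketch of that bound (entirely made-up numbers, just to illustrate how per-person impact shrinks as the earning-to-give meme spreads; the 1000 “units of good” figure is a placeholder, not an estimate):

```python
# Toy model of the bound above: with a fixed pool of good available,
# the average good per earning-to-giver can be at most total_good / n.
total_good = 1000.0  # hypothetical units of "good available to be done"

for n in (10, 100, 1000, 10000):  # number of earning-to-givers
    max_avg_good = total_good / n
    print(f"{n:>6} earning-to-givers -> at most {max_avg_good:>6.1f} units of good each")
```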
What would help, though, would be:
finding where my model of what’s going on is an oversimplification, and focussing some attention there (maybe with xrisk the amount of good to be done is so huge that we don’t hit a limit for a while)
increasing the “total amount of good that can be done given current resources”.
The second one would seem to suggest increasing the total resources available for doing good—this isn’t quite the same as growing the economy, because many agents in the economy are selfish, but it feels related and probably involves an entrepreneurial spirit.
I think the EA algorithm would look something like this:
Do what everyone else in EA is doing
Think of something new, and if it can be shown to be effective (in the sense of growing things, not just directing resources away from somewhere else, even indirectly), then roll it out to the rest of the EA movement.
End ramble.
I don’t consider this rambling. I didn’t grok it the first time I read your comment, but it seems plenty insightful now. Thanks for helping out!
maybe with xrisk the amount of good to be done is so huge that we don’t hit a limit for a while
It seems to me the bottleneck here isn’t the output of good to be achieved in the future; rather, it could be the input of donation targets in the present. For example, every organization we can think of that works on reducing existential risk could hit a point at which further donations aren’t a good giving opportunity.
This scenario isn’t too implausible. The Future of Life Institute could grant the $10 million donation it received from Elon Musk to MIRI, FHI, and all the other low-hanging fruit for existential risk reduction. If those organizations hit more windfalls like that, or retain their current body of donors, they might not be able to allocate further funds effectively; i.e., they may hit room-for-more-funding issues for multiple years. Suddenly, effective altruism would need to seek brand-new opportunities for reducing existential risk, which could be difficult.
I think you’re imagining a scenario where every organization either:
is not seriously addressing existential risk, or
has run out of room for more funding
One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn’t feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.
Another reason this could happen would be more strategic: humanity actually can’t think of anything it can do that will reduce existential risk. Perhaps there’s a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn’t be the result of a lack of creative thinking. It might be something more fundamental: ensuring the stability of a system as complex as today’s technological world may simply be a Really Hard Problem.
Even if we don’t hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why xrisk is on a par with poverty and animals (in terms of importance), then EA would essentially be running out of stuff to do.
Which we eventually want—but not while the world is full of danger and suffering.
Neglectedness, Tractability, and Importance/Value
I have written an article which discusses a couple of technical models of cause effectiveness, and derives a 3-factor model which can be interpreted as giving a way to measure neglectedness, tractability and importance. You can find it here; the forum thread to discuss it is here.