Looking to advance businesses in which charities hold the majority shareholder position. Check out my TEDx talk on why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
The issue with support roles is that it's often difficult to assess when someone in that position truly makes a counterfactual difference. These roles can be essential but not always obviously irreplaceable. In contrast, it's much easier to argue that without the initiator or visionary, the program might never have succeeded in the first place (or at least might have been delayed significantly). Similarly, funders who provide critical resources, especially when alternative funding isn't available, may also be in a position where their absence would mean failure.
This perspective challenges a more egalitarian view of credit distribution. It suggests that while support roles are crucial, it's often the key figures (initiators, visionaries, and funders) who are more irreplaceable, and thus more deserving of disproportionate recognition. This may be controversial, but it reflects the reality that some contributions, particularly at the outset, might make all the difference in whether a project can succeed at all.
I think I considered it prior to the enumerated portion, where I'd said:
"it would be valuable to see an analysis (perhaps there's something like this on 80,000 Hours) of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact."
I agree that the "high autonomy and lack of ability to oversee or otherwise measure achievement of objectives" would be a reason that having EAs in the role might be better. The scope of jobs in this category is not clear.
There may have been an overcorrection, and I still think earning to give (ETG) is a good default option: the scarcity of "EA jobs" and the frequent posts lamenting how difficult it is to get hired at EA orgs suggest there is no shortage of EAs looking to fill roles for which close alignment is critical. This is especially true in the animal welfare EA space, where nearly everyone wants to be doing direct work and there is little funding to enable excellent work. There may be more of an "aligned talent constraint" problem in AI Safety.
I didn't neglect it; I specifically raised the question of under what conditions having EAs, rather than non-EAs, occupy roles within orgs adds substantial value. You assume that having EAs in (all?) roles is critical to having a "focused" org. I think this assumption warrants scrutiny: there may be many roles for which "identifying as an EA" is not important, and using it as a requirement could mean neglecting a valuable talent pool.
Additionally, a much wider pool of people who don't identify as EA could align with the specific mission of an org.
One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis (perhaps there's something like this on 80,000 Hours) of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn't outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider whether:
1. They are providing specialized and scarce labor in a high-impact area where their contribution is genuinely advancing the field. This seems more applicable in specialized research than in general management or operations.
2. They are exceptionally competent, yet the market might not compensate them adequately, thus allowing highly effective organizations to benefit from their undercompensated talent.
I tend to agree more with you on the "doer" aspect: EAs who independently seek out opportunities to improve the world and act on these insights often have a significant impact.
I appreciate the depth and seriousness with which suffering-focused ethics addresses the profound impact of extreme negative experiences. I'm sympathetic to the idea that such suffering often carries more moral weight than extreme positive experiences. For example, being tortured is not merely "worse" than having a pleasurable experience; it is disproportionately more severe. The extreme nature of certain sufferings makes it challenging, if not impossible, to identify positive experiences that one would reasonably trade off to endure them.
However, I maintain a classical utilitarian framework, which, while recognizing the disproportionate severity of certain forms of suffering, also acknowledges the significant value of positive experiences. The example involving a toothache and heaven illustrates why positive experiences cannot be dismissed. Ending a state of eternal bliss (or preventing it from ever occurring) simply to avoid a trivial negative experience like a toothache is both absurd and morally troubling. It suggests a kind of ethical myopia that undervalues the richness and depth of joy, love, and fulfillment that life can offer.
Imagine individuals behind a veil of ignorance, choosing between two potential lives: one filled with immense joy but punctuated by occasional bad days, versus a life that is consistently mediocre, without significant pain but also devoid of substantial positive experiences. It seems intuitive that most would choose the former. The prospect of immense joy outweighs the temporary pain that accompanies it, suggesting that the value of positive experiences should not be discounted but rather carefully weighed alongside the potential for suffering.
The sensible approach, in my view, is not to eliminate or devalue the significance of joy and positive experiences, but to acknowledge the depth and intensity of potential suffering. By doing so, we can ensure that our ethical frameworks remain balanced, appropriately weighting the full spectrum of the experiences of conscious beings without overcorrecting in a way that leads to counterintuitive and undesirable outcomes.
In summary, while suffering-focused ethics rightly highlights the importance of alleviating extreme suffering, we must also recognize and value the profound positive experiences that give life its richness and meaning. Both extremes of the human condition (and those of other conscious beings), intense suffering and intense joy, deserve our moral attention and appropriate weighting in our ethical considerations.
I think Peter Singer's book, The Life You Can Save, addresses this question more fully. But I would say that the obligation of people in wealthy countries is to make life choices, including the sharing of their own wealth, in a way that shows some degree of consideration for their ability to help others so efficiently.
Failing to make some significant effort to help, perhaps to the degree of the 10% pledge, would be a moral failure (though I would think that in many situations even more than that would be morally required). I do not know exactly where I would draw the line, but some degree of consideration similar to that of the 10% pledge would be a minimum.
I definitely think that the very demanding requirement you stated above would make more sense than none whatsoever, under which one implicitly values others at less than a thousandth of how one values oneself.
My intuition doesn't really change significantly if you change the obligation from a financial one to the amount of labor that would correspond to the financial one.
If I recall correctly, the value of a statistical life used by government agencies is about $10 million per life, which is calculated from how much people implicitly value their own lives through the choices they make: avoiding risk by incurring costs, and accepting risk in exchange for benefits.
If we round up the cost to save a life in the developing world to $10k, people in the developed world could save 1,000 lives for the amount at which they value their own lives.
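The arithmetic behind this comparison can be sketched in a few lines. Both figures are the rounded numbers from the comment above (a $10 million value of a statistical life and a $10k cost per life saved), not precise agency estimates:

```python
# Rounded figures from the discussion above; illustrative only.
VSL_USD = 10_000_000              # rough value of a statistical life used by government agencies
COST_TO_SAVE_A_LIFE_USD = 10_000  # cost to save a life in the developing world, rounded up

# Lives savable for the amount one implicitly values one's own life at
lives_per_own_life = VSL_USD // COST_TO_SAVE_A_LIFE_USD
print(lives_per_own_life)  # 1000
```

The ratio is what carries the moral point: declining to donate at that price implicitly values a stranger's life at less than a thousandth of one's own.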
I simply think that acting as though you value another person 1,000 times less than you value yourself is immoral. This is why I do think that incorporating the value of other conscious beings to some degree is morally required.
Yeah, I think that in the case of both choosing not to act to save the kid and acting to kill the kid (in this narrow hypothetical), you're violating the kid's rights just as much (privileging your financial interests over his life).
And regarding your point about conscience: you're appealing to our moral intuitions, whose validity we can question, particularly with thought experiments such as these.
I suppose I would agree that acting as a moral person requires a significant consideration of other conscious beings with regard to our choices. And I think the vast majority of people fail to take adequate consideration thereof. I suppose that's how I consider my own "conscience": am I making choices with sufficient regard for the interests of other beings across space and time? I think attempting to act accordingly is part of my "inner goodness".
I'm not saying you're not legally entitled to the money.

I'm saying that, in an ultimate sense, the kid is more morally entitled not to die from malaria than you are to retain your $6k.
And there are no norms that would develop in the thought experiment. Your activity would be totally secret. The further policy issues might indicate that people ought to have a right to their money, but that does not bear on whether they would be morally obligated to exercise it in certain ways.
The Ethics of Action and Inaction: Altruism, Obligation, and the Invisible Button
I don't think that EA should be graduated from. I think that it's a matter of continuing to develop in both the "effective" and "altruistic" components.
With "effective," I'd say we're talking about an epistemological process: learning the relevant knowledge about the world and yourself so that the resources within your control that you decide to deploy for altruistic purposes can do the most good.
With "altruism," it's a matter of digging deep within yourself so that you can deploy more of those resources. The ideal, in my mind, would be having no more partiality to your own interests than to those of other conscious beings across space, species, and/or time.
So I don't see an endpoint, but rather a constant striving for knowledge, wisdom, and will.
I would have a lot less concern about more central control of funding within EA if there were more genuine interest within those funding circles in broad exploration and the development of evidence from new ideas within the community. Currently, I think there are a handful of (very good) notions about the most promising areas (anthropogenic short-term existential or major risks like AI, nuclear weapons, and pandemics/bioweapons; animal welfare; global health and development) that guide the "spotlight" under which major funders are looking. This spotlight is not just about these important areas; it is also shaped by strong intuitions and priors about the value of prestige and the manner in which ideas are presented. While these methodologies have merit, they can create an environment where the kinds of thinking and approaches that align with these expectations are more likely to receive funding. This incentivizes pattern-matching to established norms rather than encouraging genuinely new ideas.
The idea of experimenting with a more democratic distribution of funding, as you suggest, raises an interesting question: would this approach help incentivize and enable more exploration within EA? On one hand, by decentralizing decision-making and involving the broader community in cause area selection, such a model could potentially diversify the types of projects that receive funding. This could help break the current pattern-matching incentives, allowing for a wider array of ideas to be explored and tested, particularly those that might not align with the established priorities of major funders.
However, there are significant challenges to consider. New and unconventional ideas often require deeper analysis and nuanced understanding, which may not be easily accessible to participants in a direct democratic process. The reality is that many people, even within the EA community, might not have the time or expertise to thoroughly evaluate novel ideas. As a result, they may default to allocating funds toward causes and approaches they are already familiar with, rather than taking the risk on something unproven or less understood.
In light of this, a more "republican" system, where the community plays a role in selecting qualified assessors who are tasked with evaluating new ideas and allocating funds, might offer a better balance. Such a system would allow for informed decision-making while still reflecting the community's values and priorities. These assessors could be chosen based on their expertise and commitment to exploring a wide range of ideas, thereby ensuring that unconventional or nascent ideas receive the consideration they deserve. This approach could combine the benefits of broad community input with the depth of analysis needed to make wise funding decisions, potentially leading to a richer diversity of projects being supported and a more dynamic, exploratory EA ecosystem.
Ultimately, while direct democratic funding models have the potential to diversify funding, they also risk reinforcing existing biases towards familiar ideas. A more structured approach, where the community helps select knowledgeable assessors, might strike a better balance between exploration and empirical rigor, ensuring that new and unconventional ideas have a fair chance to develop and prove their worth.
EDIT:
I wanted to clarify that I recognize the ârepublicâ nature in your proposal, where fund managers have the discretion to determine how best to advance the selected cause areas. My suggestion builds on this by advocating for even greater flexibility for these representatives. Specifically, I propose that the community selects assessors who would have broader autonomy not just to optimize within established areas but to explore and fund unconventional or emerging ideas that might not yet have strong empirical support. This could help ensure a more dynamic and innovative approach to funding within the EA community.
I don't believe the "meat eater problem" should be ignored, but rather approached with great care. It's easy to imagine the negative press and public backlash that could arise from expressing views suggesting it might be better for people to die, or discouraging support for charities that save lives in the developing world.
The Effective Altruism community is very small, with estimates around 10,000 people, a tiny fraction of the nearly 8 billion people on the planet. If we want to create a world without factory farming, we need to focus on bringing more people into the fold who care about animals. Spotlighting an analysis that essentially suggests it's good when young children die, and that we should discourage saving them, doesn't seem like the path to growing the movement that can end the horrors of factory farming.
By treating this problem with care, we can ensure that our efforts to improve the world are effective without alienating those who might otherwise join us in the fight against animal suffering.
The "meat eater problem" raises an intriguing ethical question, but I'm inclined to think (with low confidence) that even if the concern is valid, the proliferation of this idea could have a negative expected value. By focusing on such a divisive concept, we risk alienating potential supporters of the animal welfare movement, which could ultimately hinder efforts to reduce animal suffering. That said, this is distinct from whether the impact of the average human on factory farming would alter personal donation decisions.
Thanks for doing this. For the reasons that you've mentioned, you're likely getting a bonus value relative to a direct donation.
A note I would make: the preferential treatment you've referred to when incorporating this into economic activity (selling goods, higher inducements for meetings, etc.) is the same phenomenon that underlies the idea of Profit for Good businesses (businesses where charities get the profit instead of other investors).
For your convenience, I'll link to a reading list; "Making Trillions for Effective Charities through the Consumer Economy" and "From Charity Choice to Competitive Advantage" are probably the two best reads.
Reading List
Hey, I was unable to donate to my own project with the funds (but was able to donate to others' projects, and others were able to donate to mine). Are others having this issue?
Someone could not just eliminate their contribution to factory farming but become part of the solution, if their offset is a greater-than-one multiple of their contribution. I think people might like a 1.5x to 2x offset for the warm fuzzies.
Love this notion… So many people struggle to find meaning in a life where most of their job is unfulfilling. But by committing to effective giving, you can transform work that seems monotonous or pointless into the means to save lives, reduce suffering, and more.
I think it is a perspective I need to take to heart, because one of the difficulties I have with my day job is that I would much rather be spending time with the org that I run. But by doing my day job, I support myself (it would be hard to do my nonprofit work without my own food and lodging) and can fund, to some extent, the nonprofit that I run.
Other than giving up and shutting down, they could have put offsetting front and center. I think it might be psychologically compelling to some who don't want to give up meat to be able to undo some of their contributions to the factory farming system. I actually became aware of their calculator from your quick take, as currently it is pretty hard to find.
Yes, both talks are on the same concept of Profit for Good.
I don't think either makes direct reference to the Profit for Good Initiative.