Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
I'll note some confusion of mine that others might have shared. I initially thought the choice offered to users was a binary "agree" or "disagree," and that a user made the choice by dragging to one side or the other. I see now that dragging all the way signifies maximal agreement or disagreement, though users like me may have done so in error. Something that indicates this more clearly might be helpful to others.
I anticipate that others will say that you are not obligated to live your life to help others. I disagree, and think that we are obligated to do so. I agree that there is often very little difference between acting to do something that harms conscious beings and failing to do something that you are capable of doing that you know will prevent harm.
However, if you do not take care of yourself, you will (a) be less productive and (b) risk burnout and abandoning your commitment to help others. Even if you aspire to do the most good without privileging your own interests, it is still prudent to make sure that your basic needs are met so that you remain able and willing to do the most good throughout the course of your life.
I joined your mailing list. I will be happy to share what you are doing both personally and through my org!
A challenge with promoting animal rights is the common request for people to completely eliminate animal products from their diet, a step too significant for most. This demand can lead to inaction due to the cognitive dissonance experienced by those disturbed by factory farming but unwilling to go vegan. Thus, providing alternative ways for people to contribute can build more support and reduce harm.
Promoting meaningful labeling: When I go to the supermarket, I often see labeling that purports to signify that the animals used in the creation of the product were treated more humanely. I have no idea (a) whether the treatment difference being claimed is actually true (there may be little to no enforcement) or (b) whether the claimed treatment difference is actually significant in terms of its welfare effect. This is an area where EAs could help sympathetic non-vegans by enabling them to identify labeling that reflects meaningful animal welfare differences.
Promoting off-setting: Funding for the farmed animal welfare movement is around two to three hundred million dollars globally, if I understand correctly, orders of magnitude less than cause areas like global health and development. I think there are people who agree that it is terrible that we live in a world of mass torture for the creation of animal products, yet are unwilling to give up the products and thus continue contributing to the demand for them. Although it may not be the most rational to tie one's donation to one's harmful action, it is a framework that resonates with people due to intuitions regarding special obligations stemming from harms that one causes. In my mind, we should leverage this intuition and make it easy to: (1) provide a survey to people that establishes their dietary patterns; (2) provide a portfolio of charities that effectively address farmed animal welfare; (3) calculate a sum corresponding with the harm caused on an annual basis (conservatively calculated to overestimate rather than underestimate); and (4) provide an easy means for them to pay it. I understand @Luke Eure is doing some work that may further this project.

Making it simpler for people to engage in the farmed animal welfare movement is crucial. By offering accessible and practical ways to contribute, we can attract more individuals who share our goals, even if to a lesser degree.
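The four-step offsetting flow above could be sketched roughly as follows. All product names and per-serving offset figures here are made-up placeholders for illustration, not real welfare-cost estimates; an actual tool would source these from research and err on the high side, as described above.

```python
# Hypothetical sketch of the survey -> offset-calculation flow.
# The dollar figures below are placeholders, NOT real estimates.

# (1)-(2) Map each surveyed product to a conservative annual offset
# (in dollars) per weekly serving consumed.
HYPOTHETICAL_OFFSET_PER_WEEKLY_SERVING = {
    "chicken": 15.0,  # placeholder $/year per weekly serving
    "eggs": 10.0,
    "pork": 12.0,
    "beef": 8.0,
}

def annual_offset(weekly_servings: dict[str, float]) -> float:
    """(3) Estimate a yearly donation corresponding to a diet's harm."""
    return sum(
        servings * HYPOTHETICAL_OFFSET_PER_WEEKLY_SERVING.get(product, 0.0)
        for product, servings in weekly_servings.items()
    )

# (4) Present a single, easy-to-pay annual sum.
survey = {"chicken": 4, "eggs": 7, "beef": 1}
print(f"Suggested annual offset: ${annual_offset(survey):.2f}")
```

The point of the sketch is just that the calculation itself is trivial once credible per-product figures exist; the hard work is in the research behind the table and in making payment frictionless.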
Regarding #1, I would remember that orgs giving assistance optimize for avoiding Type 1 rather than Type 2 errors. Because of their limited resources, they are much more interested in making sure their deployment of resources does not go to bad recipients than in making sure that every potentially good recipient is supported by their program (which would be impossible anyway). So while acceptance into a competitive program might be indicative of merit, rejection from many programs might not indicate a lack of merit.
I would also listen to Sophia Balderson’s (founder of Impactful Animal Advocacy, now Hive) interview on the How I Learned to Love Shrimp podcast.
Basically, keep considering whether the path you are pursuing is the way to go, but rejection is not dispositive of the question, as many worthy applicants are rejected.
I think this is more an introduction to effective giving than effective altruism generally. I think it would make more sense to frame it that way, especially because effective giving can be a good lead-in to effective altruism.
I also find the inclusion of "AI Governance" while excluding "animal welfare/ending factory farming" in the list of important causes a bit strange, especially for a general audience, for whom factory farming might be more legible.
Hi Dave,
I think businesses that donate a portion of profits should be commended. It’s important to account for the effectiveness of the charities they support as well as the portion of profits donated.
The structure of donating a portion of profits rather than a set amount is also sensible because it enables businesses to meet their costs and lets worthy causes share in surpluses along with ordinary shareholders. However, in businesses with substantial ordinary (non-PFG) shareholders, those shareholders' expectation of returns may push prices higher in light of the profit-sharing. Additionally, significant donations could impair a business's ability to compete by reinvesting profits.
The Profit for Good (PFG) business structure addresses these challenges effectively. By having charities as the primary shareholders, PFG businesses align their profit motives directly with philanthropic goals. This means that instead of traditional shareholders expecting returns, the profits are directed towards charitable causes, integrating giving into the core business model.
This alignment allows PFG businesses to maintain competitive pricing. Since charities are the shareholders, there is no pressure to maximize dividends for traditional investors. This enables the business to reinvest profits for growth, just like any other company, ensuring sustainability and a competitive edge in the market. Reinvestment increases the equity value of the business, which can enable charities to borrow against this value to access funds for urgent opportunities. The reinvestment benefits both the business and the charitable causes, as increased business value translates into greater potential for charitable funding.
Moreover, PFG businesses can leverage consumer preference for ethical consumption without compromising on competitiveness. Consumers are likely to favor products from businesses that transparently support charitable causes, potentially driving higher sales and further increasing the funds available for donation.
In essence, while any business contributing to charitable causes is a step in the right direction, the PFG model maximizes the impact by structurally aligning business success with philanthropic goals.
I love what you are doing to make it easier for people to do good. I think a lot of our community’s efforts have focused on how to empower highly-aligned people to do more good. The focus you seem to have on concrete actions people can do to better the world seems like it could potentially have a much broader audience.
As I read through the EA handbook recently, many passages seemed rather paralyzing. And I agree that, especially for highly aligned and engaged people, thoughtful reflection and analysis is very appropriate for thinking about how we can use our lives to do the most good. On the other hand, your concrete recommendations, with tangible, clearly articulated benefits, are probably more helpful to the vast majority of people looking to do good. I look forward to seeing the future of "Increasing Happiness".
They seem like an excellent example of a Profit for Good business succeeding in part due to their charitable commitment. Selling coffee, socks, and soap online seems very scalable and we are very excited to see the Good Store’s progress.
We link to each of their product lines on our "Find a Profit for Good" page.
I think that a broad moral circle follows from EA in the same way that generally directing resources to the developing world rather than the developed world follows from EA. In fact, I think the adoption of a broad moral circle would come steps before the conclusion regarding a preference for developing-world assistance. However, I am not sure how wise it is to bundle particular moral commitments into the definition of EA when it could be defined simply as the deliberate use of reason to do the most good insofar as we are in the project of doing good, without specifying what "the good" is. Otherwise, there could be broad arguments about which moral commitments one must make in order to be an EA.
Of course, my definition would require me to bite the bullet that one could be an "effective 'altruist'" and be purely selfish if they adopted a position such as ethical egoism. But I think confining the definition of EA to the deliberate use of reason to best do good, and leaving open what that consists of, is the cleaner path. The EA community's rejection of egoists would then follow from the fact that such egoism does not follow from their moral epistemology (or from whatever process they use to discern the good). This would be similar to the scientific community's rejection of a theory in which the sun revolves around the earth. They do not point to enumerations within the definition of science that rule out that possibility; rather, they point to a higher-order process which leads to its refutation. Moral epistemology would follow from the more basic requirement of reason and deliberateness (we can't do the most good unless we have some notion of what the good is).
I am a bit concerned with the “broad moral circle” being definitional to Effective Altruism (though it accords with my own moral views and with EAs generally). If I recall correctly, EA, zoomed out as far as possible, has not committed to specific moral views. There are disagreements among EAs, for instance, as to whether deontological constraints should limit actions, or whether we should act wholly to maximize welfare as utilitarians. I had thought that the essence of effective altruism is to “do good”, at least to the extent that we are trying to do so, as effectively as we can.
Consequently, I would see the fundamental difference between what EA altruists and non-EA altruists are doing as one of deliberateness, from which instrumental rationality would proceed. The non-EA altruist looks to do good without deliberation as to how to do so as best he/she can, or with bounded deliberation on this point. The EA looks to do good with deliberation as to how to do so the best he/she can.
I would agree that setting a broad moral circle would be an early part of what one would do as an EA (before more broad cause-prioritization, for instance), but EA has traditionally been open-minded as to what philosophies are morally true or false and many have viewed this as an important part of the EA project. Consequently, I would put the “adoption of a broad moral circle moral value” at least one step beyond the definition of EA.
It looks like you are looking for advice on how to fundraise, particularly in a way that contributes to creating a norm or culture of giving.
Substantively, as a step one, trying to convince someone to donate all of their income above a certain threshold is likely to be met with a degree of defensiveness, internally if not externally. If someone is not already considering such a step, it is probably very difficult to persuade them to take it. If you are part of a community of relatively wealthy people, forming friendships within it might be a place to start. You can make it clear that giving effectively is part of your identity without explicitly pitching them on effective giving, which may influence people. You could introduce people to Giving What We Can and let them know about the pledge you have taken. However, influencing other people in this way is likely very hard and involves skills that are difficult to learn.
On a meta-level, you might want to include in the title of your post the kind of help that you are looking for. “EA, I love you” tells people who might want to help you virtually nothing about the kind of help you are looking for.
Good luck persuading/influencing people to use the power they can to significantly better the world.
Of course, one subset of Christians or other religious believers believe that the subjects of their religious beliefs follow from (or at least accord with) their rationality. This would contrast with the position that you seem to be indicating, which I believe is called fideism, which would hold that some religious beliefs cannot be reached by rational thinking. I would be interested in seeing what portion of EAs hold their religious beliefs explicitly in violation of what they believe to be rational, but I suspect that it would be few.
In any case, I believe truthseeking is generally a good way to live for even religious people who hold certain beliefs in spite of what they take to be good reason. Ostensibly, they would simply not apply it to one set of their beliefs.
Thank you for this insightful post. While I resonate with the emphasis on the necessity of truthseeking, it’s important to also highlight the positive aspects that often get overshadowed. Truthseeking is not only about exposing flaws and maintaining a critical perspective; it’s also about fostering open-mindedness, generating new ideas, and empirically testing disagreements. These elements require significantly more effort and resources compared to criticism, which often leads to an oversupply of the latter and can stifle innovation if not balanced with constructive efforts.
Generating new ideas and empirically testing them involves substantial effort and investment, including developing hypotheses, designing experiments, and analyzing results. Despite these challenges, this expansive aspect of truthseeking is crucial for progress and understanding. Encouraging open-mindedness and fostering a culture of curiosity and innovation are essential. This aligns with your point about the importance of embracing unconventional, “weird” ideas, which often lie outside the consensus and require a willingness to explore and challenge the status quo.
Your post reflects a general EA attitude that emphasizes the negative aspects of epistemic virtue while often ignoring the positive. A holistic approach that includes both the critical and constructive dimensions of truthseeking can lead to a more comprehensive understanding of reality and drive meaningful progress. Balancing criticism with creativity and empirical testing, especially for unconventional ideas, can create a more dynamic and effective truthseeking community.
It may not have been totally clear from the post, which I will edit in a minute, but the intended reading order would be:

1. "What is Profit for Good", which is included in this post
2. "Introducing the Profit for Good Blog: Transforming Business for Charity"
Yield and Spread is a Profit for Good business that provides financial advice, particularly to help further effective giving. All the profit the business generates goes to effective charities. Thought it would make sense to give them a shout out here.
This article made me wonder if we are undervaluing food labeling. Currently, if I understand correctly, there are numerous food labels, many of which don't correspond to meaningful animal welfare differences. Educating the public about labels that correspond to meaningful differences in treatment may be a promising path.
https://phys.org/news/2024-05-reveals-consumers-animal-welfare-environmental.amp
Fair enough.
I still suspect that you may be underestimating marginal AI Safety funding opportunities.
Yeah, if there were markers like “neutral”, “slightly agree”, “moderately agree”, “strongly agree”, etc. that might make it clearer.
After the user's decision registers, a visual display could state something like: "You've indicated that you strongly agree with statement X. Re-drag if this does not reflect your view or if something changes your mind, and check out where the rest of the community falls on this question by clicking here."
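The marker idea could be sketched as a simple mapping from drag position to an agreement label. This assumes the slider position is normalized to the range -1.0 to 1.0; the thresholds and label names are illustrative guesses, not a spec for the actual widget.

```python
# Sketch: map a normalized slider position (-1.0 .. 1.0) to an
# agreement label. Thresholds and labels are illustrative only.

def agreement_label(position: float) -> str:
    side = "agree" if position >= 0 else "disagree"
    magnitude = abs(position)
    if magnitude < 0.1:
        return "neutral"
    if magnitude < 0.4:
        return f"slightly {side}"
    if magnitude < 0.7:
        return f"moderately {side}"
    return f"strongly {side}"

print(agreement_label(0.95))   # "strongly agree"
print(agreement_label(-0.25))  # "slightly disagree"
```

Showing this label live while the user drags, and echoing it back in the confirmation message afterward, would address both points: users see what a given drag distance means before committing, and can re-drag if the registered label doesn't match their view.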