Unfortunately, issues like the FTX debacle or the recent Bostrom controversy may make up a significant share of a prospective EA’s impression of EA, because other aspects of the movement may not have reached their news sources. Even a small-scale community builder might want good answers in the face of troubling news people have encountered.
Yeah. I would too… But I think people feel more compelled to not do bad things than to positively do good things.
Maybe I’m wrong about veganism: my impression was that the rate of veganism has stayed relatively constant and farmed animal welfare charities have orders of magnitude less funding than global health and development. I think there’s definitely been progress in farmed animal welfare, but not necessarily in getting broader public buy-in.
It all comes down to whether or not the public would be motivated by the offset framing. I know the framing was compelling to me when I was donating to GiveWell charities (now I donate all my money to my own nonprofit). I figured I should at least donate enough to compensate for my own contribution to animal torture, and maybe some multiple of that… I figured there would be an easy way to do this online, but there wasn’t really an easy button.
Anyway, I think the search costs are well worth it given the possibility that the offset framing could work… But they won’t be borne by me. I’m off trying to save the world by enabling consumer discrimination in favor of effective charities (buy the same shit for the same cost, but the Against Malaria Foundation gets the profit rather than traditional shareholders).
It’s not one or the other; in fact, I think an offsetting campaign would be complementary to political action because it would raise awareness of the hell of factory farming. Indeed, some of the effective charities in the farmed animal welfare portfolio might themselves be very promising legislative advocacy campaigns.
I think offsetting could appeal to more people than you think. People don’t like being complicit in torture, and offsetting offers them the chance not to be. Of course, there’s no way of knowing until we actually make it easy for people to do so.
I just wish these “other moral perspectives” would stop impeding the betterment of welfare of conscious beings...
The animal welfare movement (if my understanding is correct) has barely been able to move the needle on veganism over the decades it has been revealing its horrors. If we can identify effective charities that can drive systemic change in the farmed animal welfare space, maybe we can gain mass buy-in for creating a world where the default is consumption without torture. We need to make available an ask that could be just as effective, or more so, but easier for a lot of people: fund effective farmed animal welfare charities and be part of the solution; we can help you do it in 10 minutes.
Thank you for aptly conveying the hell that is the factory farming system. Upon learning of the abomination that is this process, even those who eat meat are repulsed and horrified.
We need to provide an opportunity for meat-eaters to eliminate the harm they cause by participating in the system. Asking people to change their dietary habits is a difficult ask; we need a simple process that enables the larger set of the public repulsed by factory farming to be part of the solution (or at least mitigate their contribution to the problem).
1. With a set of experts in the farmed animal welfare/charity space, create a fund that benefits the most cost-effective charities.
2. Create a questionnaire in which people can estimate their dietary habits (amount of poultry, egg, beef, etc. consumption on a weekly basis).
3. Calculate the harm caused by this consumption on an annual basis.
4. Calculate the cost to eliminate these harms by contributing to the farmed animal welfare fund.
5. Give people the opportunity to negate their impact by donating that much money (the negation cost).
6. Give people the opportunity to be part of the solution by donating a multiple of their negation cost.
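The steps above can be sketched as a minimal calculator. All the numbers below (harm units per serving, cost per harm unit) are placeholder assumptions for illustration only, not real cost-effectiveness estimates; a real tool would use figures from the expert panel.

```python
# Illustrative sketch of the offset calculator described above.
# HARM_PER_WEEKLY_SERVING and COST_PER_HARM_UNIT are hypothetical
# placeholders, not actual welfare or cost-effectiveness figures.

HARM_PER_WEEKLY_SERVING = {
    "poultry": 10.0,
    "eggs": 6.0,
    "pork": 4.0,
    "beef": 1.0,
}

COST_PER_HARM_UNIT = 0.05  # assumed USD for the fund to avert one harm unit


def annual_negation_cost(weekly_servings: dict) -> float:
    """Annual donation needed to offset the given weekly diet."""
    weekly_harm = sum(
        HARM_PER_WEEKLY_SERVING.get(product, 0.0) * servings
        for product, servings in weekly_servings.items()
    )
    annual_harm = weekly_harm * 52
    return annual_harm * COST_PER_HARM_UNIT


def suggested_donation(weekly_servings: dict, multiple: float = 1.0) -> float:
    """Negation cost times an optional 'be part of the solution' multiple."""
    return annual_negation_cost(weekly_servings) * multiple


diet = {"poultry": 3, "eggs": 7, "beef": 2}
print("Negation cost:", round(annual_negation_cost(diet), 2))
print("3x donation:", round(suggested_donation(diet, 3.0), 2))
```

The questionnaire front end would just collect `weekly_servings`; everything else is a lookup and two multiplications, which is why the "10 minutes" promise seems realistic.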
The vast majority of the planet (a) eats meat and (b) hates torture. We need to provide an easy way for a portion of them to help destroy hell as well, or at least not be part of the problem.
I really hope someone runs with this. I would, but I have a full time job and also run a nonprofit that also seeks to make systemic changes that would benefit, among other causes, farmed animal welfare. It is bizarre that we have not tried to make it easy for meat eaters to offset their impact.
Yep. Acquiring capital without selfish profit motive is a key challenge.
However, if there is an environment in which PFGs enjoy a large advantage and this is clear to the relevant parties, there should be no problem raising funds through philanthropists and debt.
You can frame it as:

F(C) = F(K) + P

where:
- F(C) is the value of a firm capitalized mostly by charitable equity
- F(K) is the value of an identical firm capitalized by private equity
- P is the monetary value of positive discrimination in favor of charities
If we have an environment in which P is high enough (I think this could be true for a lot of lower-differentiation products), a PFG could probably be capitalized wholly by debt...
If PFGs offer a high enough value proposition (and this is clear to the relevant parties), the financing issues will work themselves out.
Thus, the question is, are the costs of creating the environment we’re looking for worth it? I think with the amount of money on the table, it is definitely worth determining what P values are possible in different contexts because money in the hands of effective charities is such high impact.
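To make the framing concrete, here is a toy valuation. The perpetuity valuation, the 8% discount rate, and the 10% profit uplift from consumer discrimination are all hypothetical assumptions for illustration, not figures from the argument above.

```python
# Toy illustration of F(C) = F(K) + P with made-up numbers.
# Assumptions (all hypothetical): profits are a flat perpetuity,
# the discount rate is 8%, and consumer discrimination gives the
# PFG a 10% profit uplift over an otherwise identical firm.

def firm_value(annual_profit: float, discount_rate: float = 0.08) -> float:
    """Value a flat perpetual profit stream: V = profit / r."""
    return annual_profit / discount_rate


baseline_profit = 1_000_000  # identical privately capitalized firm (assumed)
demand_uplift = 0.10         # assumed effect of discrimination toward charities

F_K = firm_value(baseline_profit)                        # conventional firm
F_C = firm_value(baseline_profit * (1 + demand_uplift))  # PFG
P = F_C - F_K                                            # value of the uplift

print(f"F(K) = {F_K:,.0f}, F(C) = {F_C:,.0f}, P = {P:,.0f}")
```

On assumptions like these, P is a meaningful fraction of firm value, which is why a sufficiently clear consumer preference could plausibly let a PFG service debt financing instead of conventional equity.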
The firms would not be looking for (much) investment on behalf of typical shareholders. So your numbered points are immaterial… PFGs are 90%+ charitable equity.
Your characterization is a bit off… These Profit for Good companies are not “nonprofits.” They exist to make profit, but for a specific kind of shareholder.
You’re right… Currently PFGs cannot get adequate investment because this isn’t on the menu for philanthropists as a means to multiply their donations. But if philanthropic money could be multiplied by leveraging consumer (and other economic actors’) discrimination in favor of charities, there would be ample incentive to invest… Philanthropists want to multiply the funds that are available, and leveraging the goodwill of economic participants gives them that opportunity… If people can buy your laundry detergent for the same cost and help fight malaria, they will. The fact that we are not trying to give them this power is foolishness.
Anyone would rather buy in a way that benefits charities than in a way that benefits traditional shareholders, and equity being held by a particular kind of entity does not necessarily increase costs or otherwise compromise a product.
You’re right, capitalizing PFGs would compete with direct donations and “more broad investment.” In these competing cases, you’re leaving money on the table because you’re failing to leverage the good will of consumers and other economic actors.
The bottom line is that PFGs, if capitalized, have all the advantages normal firms do, plus an extra advantage in that economic participants value their success more than competitors. The only thing keeping such firms from thriving and offering a huge multiplier opportunity is that we haven’t created an environment of public awareness of the opportunity (which is what my nonprofit is trying to do).
Thanks for your thoughts.
I would push back a bit on your notion that it would only work with memetic matching. Especially if the PFG model were to take off, it may be pretty cheap and effective to signal that a company works for charities instead of shareholders. For instance, one of our thoughts with the Consumer Power Initiative is that PFGs could use a color-variant of our logo to signify a category of charity (maybe red for Global Health and Development, yellow for animal welfare, green for fighting environmental degradation). Essentially though, helping any of those causes, if you’re not paying more, or otherwise sacrificing, should give you an edge regardless of whether there’s a thematic match.
I also do not know about PFGs acting as charities themselves… I think charities in most places are limited in the degree to which they can participate in the economy this way… But in any case, a company with charities in the equity position can do most of what others can do. This is why I think this model will take off eventually. I just hope EA takes advantage of the model so that effective charities enjoy the fruits of our economies.
Thank you for adding this to those sources. I will take a look at the other entries!
It’s for posts like these that being able to disagree-vote without downvoting the main post would be particularly helpful...
Thank you for sharing your experience. I think your observation that there are not really big barriers to individuals effectively donating is correct.
If you’d like to check out my approach to funding charities, it would enable individuals to fund charities without personal sacrifice, by buying from companies with charities in the vast majority equity position. That way they could pay the same amount for goods and services, but charities benefit rather than traditional shareholders.
The back end of this project, which I call the Profit for Good model, is potentially pretty powerful: people can fund charities without personal sacrifice through economic discrimination. The front end is a much heavier lift however. Not only do you have to have companies with charities in the equity position, you also have to have a public that is aware of the option and has the means of exercising it easily available. Nonetheless, I think this model has a good chance of solving many major global problems because anyone would rather buy in a way that helps people if there are not associated sacrifices.
If you would like to learn more:
I think the complexity arises in evaluating the value and disvalue of different subjective states as well as determining what courses of action, considering all aspects involved, have the highest expected value.
You discuss the example of the despot regularly violating subjects’ rights yet increasing utility. Such a scenario seems inherently implausible, because if rights are prudently delineated, general respect for them will, in the long run, tend to cultivate a happier, more stable world (i.e., higher expected utility). And perhaps incursions upon these rights would be warranted in some situations: for instance, someone’s property rights may be violated when there is a compelling public interest (eminent domain). This is why we have exceptions to rights (e.g., free speech and incitement to imminent violence). If the rights you are advancing tend to lower the welfare of conscious beings, I would consider that formulation of rights immoral.
You are correct that moral life is complex, but I think the complexity comes down to how we can navigate ourselves and our societies to optimize conscious experience. If you are incorporating factors into your decisions that don’t ultimately boil down to improving conscious experience, in my view, you are not acting fully morally.
I believe that rights have value insofar as they promote positive conscious states and prevent negative conscious states. Their value or disvalue would be a function of whether they make lives better. Assigning weight to them beyond that is simply creating a worse world.
I do, however, find the assignment of intrinsic value imaginable, though mistaken. I do not take umbrage at your disagreeing with me so much as at your finding my view unimaginable.
I just disagree with this:
“I do not consider myself a hardcore consequentialist. In general, I find it strange to believe that a single ethical theory could/should possibly guide all aspects of one’s life.”
How is it difficult to believe that trying to promote good conscious experiences and minimize bad conscious experiences could be the key guide to one’s behavior? A lot of EAs, myself included, consider this to be the ultimate goal for our actions… Of course, we need many other areas of study and theory to guide in specific areas.
I understand that you disagree with hardcore consequentialism, but I don’t see why you think it is strange for others to adopt it. This is especially true when you acknowledge the complexity in consequentialist decision-making, as you did in this post.
I would not be surprised if the relationship between karma and impact were weak, because people tend to browse and upvote topics that are easily legible to them. Thus topics related to AI safety, or charities with which people are largely familiar, tend to attract readership and favorable voting.
On the other hand, novel ideas tend to not get as much attention because of the higher cognitive load on prospective readers and the feeling that a post is not “for them.” I imagine posts with low to medium amount of karma, reflecting approval by a smaller audience that read it carefully, may have much higher impact. When I see posts with hundreds in karma, I often think it’s a well known EA figure or someone coming up with a variation of a favorite EA tune.
I think outreach from EA to other organizations is great. Part of EA growing bigger is going to be showing that we are not contemptuous of others who are trying to do good that are not within our umbrella… Maybe we won’t be able to get the dog shelter volunteer to switch to studying AGI alignment, but maybe he or she might consider expanding empathy to farmed animals and donating to effective charities that address factory farming.
Meeting people where they are at with empathy and respect is a powerful way of being. If we can connect with the altruism and compassion of a broader set of people, we may be able to nudge them to channel some of their efforts in an EA way.
I have been donating about 80% of my income (about $1k/week) to the Consumer Power Initiative because the cause area of enabling consumers to discriminate in favor of effective charities has extreme impact potential (trillions annually to effective charities could be transformative), is tractable (we can create companies that work for charities and offer similar products at the same prices), and is neglected (very few resources are being expended in this area).
If you want to learn more about our organization, feel free to check out my EA forum post and I’ll link to a draft of our upcoming newsletter.
I didn’t vote in any way on the comment, but it’s plausible you could have different strategic choices. You could try to shift a large donor to cause areas outside of existing preferences to more effective ones (as is the EA “truism”) or you could try to discover and endorse the most effective charities within existing preferences. The latter seems to be discussed by Ozzie Gooen in this thread.
Perhaps disagree votes were along the lines that they did not think lobbying for different cause areas would work with Bezos.
Thanks for having the courage to write this. Regardless of whether it’s correct, it is good to have the position represented and it is much easier in the current environment to take the other side on this.
I agree… Was very bothered by the categorical proscriptions against “ends justifying the means” as well as the seeming statements that some kinds of ethical epistemology are outside of the bounds of discourse. Seemed very contrary to the EA norm of open discourse on morality being essential to our project.