Looking to advance businesses in which charities hold the vast-majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
Making Trillions for Effective Charities through the Consumer Economy
Bring legal cases to me, and I will donate 100% of my cut of the fee to a charity of your choice
HearMeOut—Networking While Funding Charities (Looking for a founder and beta users)
TEDx Talk on Profit for Good as a Means of Funding Effective Charities
Compilation of Profit for Good Redteaming and Responses
I would note a consideration in terms of impact. Orgs that are larger, have more resources for better perks, can offer higher pay, and are more prestigious will be able to attract stronger applicants, all else being equal. Your impact, then, is the delta between the world in which you hold that position at the org and the world in which the next-best candidate holds it. Your expected impact might therefore be small or negative (or it could be high, if you are exceptional at the role relative to the second-best option). I think EAs in general tend to conflate the value actualized by an org's operation with their own counterfactual impact from taking a job at that org.
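To make the delta concrete, here is a toy sketch with entirely hypothetical numbers (the values and the `counterfactual_impact` helper are illustrative, not drawn from any real analysis):

```python
# Toy illustration: counterfactual impact as a delta between you and the
# next-best hire. All numbers are hypothetical.

def counterfactual_impact(your_value: float, next_best_value: float) -> float:
    """Value you produce in the role minus the value the next-best
    candidate would have produced in the same seat."""
    return your_value - next_best_value

# A prestigious, well-funded org attracts a deep applicant pool, so the
# runner-up is nearly as strong: the delta is small even though the role
# is high-value in absolute terms.
big_org_delta = counterfactual_impact(your_value=100.0, next_best_value=95.0)

# A small org might otherwise hire someone far weaker, or no one at all,
# so the delta can be large even if the role produces less in absolute terms.
small_org_delta = counterfactual_impact(your_value=60.0, next_best_value=0.0)

print(big_org_delta, small_org_delta)  # 5.0 60.0
```

The point is just that the absolute value of the seat and the counterfactual value of filling it can come apart sharply.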
I understand the concerns with small, new organizations that have less funding. In some circumstances this reflects the merits of the organization, but in others there is a promising project that simply needs help getting off the ground. The counterfactual person who might occupy that position might not exist at all, or could be much less competent. If you have reason to believe an org is significantly underrated in terms of funding access, prestige, etc., helping in its early stages might be the highest-EV choice.
This is also probably more of a hits-based approach than joining an established, funded, prestigious org. If you join the established org, you will have a high probability of seeing legible impact and will feel good about being part of it, although it is hard to surmise what difference you made versus the person they would have counterfactually hired. Joining a new organization with what you think is a promising theory of change is much less likely to yield a legibly impactful outcome: even if the theory is sound, a lot of variables can prevent a new org from being impactful. On the other hand, if such an org does succeed and scale, your dedicated and competent support may actually have been the but-for cause of its success, implying high utility gains. If you are talented, hardworking, bright, good at networking, organized, etc., and good at assessing areas that might be undervalued, I think the highest-impact work would be at such underrated orgs. I definitely think this approach is less likely to lead to happy or secure lives, however.
Thanks for the excellent post!
I’ll make sure to share it and augment visibility by… Oh wait, never mind.
Unfortunately, issues like the FTX debacle or the recent Bostrom controversy may make up a significant share of a prospective EA's exposure to EA, because other aspects may not have penetrated their news sources. Even a small-scale community builder might want good answers to the troubling news people have encountered.
Guided Consumption Theory: A Virtuous Dance between Altruistic Agents, Economic Discriminators, and Opportunistic Helpers
There needs to be more willingness by grantmakers and other funders to bear search costs for new ideas. There is a strong emphasis on skepticism within EA, which is great, but it usually translates to "we should not fund this because of perceived issues X, Y, and Z, or because of uncertainty regarding benefits A, B, and C," when those issues and benefits are better addressed through empirical testing than by a skeptic's intuitions. We need a community that will bear the discovery costs of promising interventions, but this seldom happens unless the proponent of the idea already has clout and/or connections within EA.
If we don’t have the information to evaluate the effectiveness of a possible solution, the answer is not to discard it, but rather to weigh the cost of obtaining that information against the potential value across the range of reasonably possible outcomes.
What would be helpful, if it doesn’t already exist, would be aggregating sets of potential solutions, listing the resources currently directed toward evaluating their EV, identifying the bottlenecks (often money) in assessing that EV, and making reasonable estimates of the exploitation value under various hypothesized EVs. Those with resources in EA could then ensure that promising paths get explored, and we could fully exploit the best solutions.
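As a rough sketch of the kind of calculation this implies, here is a toy value-of-information estimate; the probabilities, costs, and payoffs are all invented for illustration:

```python
# Toy value-of-information calculation: is it worth funding a pilot to
# resolve uncertainty about an unproven intervention? All numbers invented.

p_works = 0.2            # prior probability the intervention works
value_if_works = 1000.0  # benefit (arbitrary units) of scaling it, if it works
full_cost = 300.0        # cost of scaling the intervention fully
pilot_cost = 50.0        # cost of the empirical test

# Without a pilot, a skeptical funder compares funding blind to doing nothing.
ev_fund_blind = p_works * value_if_works - full_cost   # negative here
ev_without_info = max(ev_fund_blind, 0.0)              # so the skeptic declines

# With a (perfectly informative) pilot, you pay the pilot cost up front and
# scale only in the worlds where the intervention turns out to work.
ev_with_info = p_works * (value_if_works - full_cost) - pilot_cost

net_value_of_pilot = ev_with_info - ev_without_info
print(net_value_of_pilot > 0)  # the pilot is worth funding in this toy case
```

The skeptic's intuition ("EV looks negative, decline") and the explorer's calculation ("pay a small cost to find out") can both be right about the numbers while pointing to different decisions.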
I am rather pessimistic about EA’s prospects for this.
What is very interesting to consider is that we forget most of our dreams, including nightmares.
Consequently, a tremendous amount of negative subjective experience may be transpiring without being remembered… One can imagine that many people go to hell for some period of their slumbers, migrate to subsequent dreams, and wake with no memory of the horrid experiences they had.
Also brings to mind the differences between the experienced life and the remembered life discussed by Kahneman in Thinking Fast and Slow.
It is a bit curious that EAs haven’t explored the dreamscape more thoroughly, given their proclivity for considering out-of-the-box subjects and the sheer cumulative time we spend dreaming.
I think you should perhaps consider the possibility that legal defense and offense have declining marginal returns… After X dollars spent exploring all available evidence and hiring excellent legal minds to develop, refine, and deliver the most robust and persuasive arguments, there is an asymptote on the value obtained from additional funds. Beyond that point, outcomes depend more on the judge, the fact-finders, and the underlying evidence than on spending a trillion dollars on lawyers, despite what legal dramas would have you believe...
Nice post and I agree that we should avoid saying things that might make people feel unwelcome or uncomfortable based on characteristics.
One thing that I bristle at a bit: I think the exclusion caused by offhand comments or controversial posts is probably dwarfed, by orders of magnitude, by the exclusion caused by material considerations that prevent minorities (as well as the vast majority of white people) from being able to contribute to the same degree in EA. If you look around at an EAG, you can pretty safely bet that attendees are not only in college or college-educated, but that their parents were as well. They probably have savings, either personally or through family, that they can rely on to take risks for their personal ambitions, which in the case of EAs are often choices that enable them to better the world. It kills me when I listen to podcasts and audiobooks noting that mornings are often the most important part of the day, yet I, like the vast majority of people, must direct most of my most productive hours to a job that is not impactful rather than to the projects I think could profoundly better the world.
I realize this may be a less tractable issue than getting EAs to commit fewer microaggressions or write fewer controversial and offensive posts. But I think the EA community is grossly negligent with regard to what may be its most valuable resource… EAs. Maybe another amnesty post will be about considering people as agents versus as patients… The people I’m talking about, low- and middle-income people in rich and middle-income countries, and many in lower-income countries, basically everyone not in the top global 0.5%, are mostly not good targets as moral patients. The very poorest people, farmed animals, and future people are probably much more fruitful targets for direct utility increases. But if these people are committed to using their minds and effort as EAs do, many of them may be excellent targets as agents. This point probably applies with even greater force to people in middle- and low-income countries, who are disproportionately likely to be POC.
Anyways, apologies for the digressive response. I should probably just write the full amnesty post on the subject with the time I do not have because I have a full-time non-EA job and run a nonprofit.
A lot of the work mithril-men do is keeping an argument at a level of abstraction where it sounds sensible as a principle, while declining to interrogate it further, perhaps because venerated people hold that position.
Thanks for having the courage to write this. Regardless of whether it’s correct, it is good to have the position represented and it is much easier in the current environment to take the other side on this.
Thanks for the post Luke… I was also rather perturbed at the language regarding the “funding overhang” and other implications that effective charities were adequately funded. The hundreds of millions in extreme poverty and countless deaths from preventable diseases speak otherwise.
What really frustrates me is that while EA has been very thoughtful and innovative at identifying opportunities where dollars have had the highest impact, it has put very little comparative thought into how to generate more funding. Some notable exceptions are your organization, GWWC, The Life You Can Save, and several other organizations, mostly oriented around motivating people to donate more.
But there are other ways to multiply donation funds.
For instance, Ribon has a system that can increase donations by 40-60%: it provides a free opportunity for people to direct a donation to an effective charity, which reliably, in aggregate, prompts those people to contribute their own money to the charity they directed the free money to.
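The arithmetic behind that claim can be sketched as follows; the uplift rates and amounts here are illustrative assumptions, not Ribon's actual figures:

```python
# Toy arithmetic for the reported 40-60% uplift: a "free" directed donation
# prompts the recipient to add a personal contribution on top. Figures invented.

def total_raised(free_donation: float, uplift_rate: float) -> float:
    """Free donation plus the additional personal giving it prompts."""
    return free_donation * (1.0 + uplift_rate)

low_end = total_raised(100.0, 0.40)    # roughly 140 reaches charity per 100 of free money
high_end = total_raised(100.0, 0.60)   # roughly 160
print(low_end, high_end)
```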
And I am pretty confident that I have the damn solution to achieving the world envisioned by Natalie Cargill’s TED Talk, but the EA community has been largely uninterested in exploring it. Profit for Good, probably best explained and argued in the draft of my upcoming TEDx talk in late July, is plausibly a huge multiplier for philanthropic funds (if you are interested in helping me edit the draft or otherwise have feedback, DM me and I can give you editing permission). Of course, it is possible that I am totally wrong, but the sensible response is not endless redteaming (from which I have yet to hear a particularly strong objection), but rather to assess the costs of empirically validating or invalidating promising solutions.
Institutions that promote effective giving have been shown to have compelling multiplier effects. But we need to support new ideas for multiplying funds for effective charities if we want to create the world we want to see. Currently, EA seems to support interventions and cause areas that fall into categories already recognized as high impact, such as AI safety: it is great at exploiting opportunities it has identified, but less interested in exploration. Often we want the numbers when the most promising thing to do is spend money and effort discovering and revealing them.
Given how in vogue it has been lately to endorse value pluralism and eschew pure consequentialism, perhaps the treatment of portraits should not hinge on their potential consciousness. Indeed, just as Kant thought we should treat animals well to avoid developing cruel habits, though we lack direct duties to them, perhaps even unconscious portraits should be treated with dignity.
Surely, a maximalist utilitarian position that regards the well-being of conscious beings as the sole end to be sought would consider Portrait Welfare a potential cause area only if portraits were capable of subjective experience. But I’m sure that with a healthy pluralism of values, we could find bases in deontology, virtue ethics, or other ethical theories that would afford portraits the moral stature they deserve, regardless.
It’s for posts like these that being able to disagree-vote without downvoting the main post would be particularly helpful...