Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
I would note a consideration in terms of impact. Orgs that are larger, have more resources for better perks, can offer higher pay, and are more prestigious will attract stronger applicants, all else being equal. Consequently, your impact is the delta between the world with you in that position and the world with the person who would otherwise occupy it. Your expected impact might therefore be small or negative (or it could be high, if you are exceptional at the job relative to the second-best option). I think EAs in general tend to conflate the value that is actualized by an org's operation with their counterfactual impact from taking a job at such an org.
I understand the concerns with small, new organizations with less funding. In some circumstances this can be a reflection of the merits of the organization, but in others there is a promising project that needs help getting off the ground. The counterfactual person who might occupy that position might not exist at all, or could be much less competent. If you have reason to believe an org is significantly underrated in terms of funding access, prestige, etc., helping in its early stages might be the highest-EV choice.
This is also probably more of a hits-based approach than joining an established, funded, prestigious org. If you join that org, you will have a high probability of seeing legible impact and feeling good about being part of it, although it is hard to surmise what difference you made versus the person they would have counterfactually hired. By contrast, joining a new organization that you think has a promising theory of change is much less likely to yield a legibly impactful outcome: even with a sound theory, there are just a lot of variables that could prevent a new org from being impactful. On the other hand, if such an org does succeed and scales, your dedicated and competent support may actually have been the but-for cause of its success, implying high utility gains. If you are talented, hardworking, bright, good at networking, organized, etc., and good at assessing areas that might be undervalued, I think the highest-impact work would be at such underrated orgs. I definitely think this approach is less likely to lead to a happy or secure life, however.
Thanks for the excellent post!
I’ll make sure to share it and augment visibility by… Oh wait, never mind.
Unfortunately, issues like the FTX debacle or the recent Bostrom controversy might make up a significant portion of a prospective EA's exposure to EA, because other aspects may not have penetrated his or her news sources. Even a small-scale community builder might want some good answers in the face of troubling news people have encountered.
There needs to be more willingness by grantmakers and other funders to bear search costs for new ideas. There is a strong emphasis on skepticism within EA, which is great, but it usually translates to "we should not fund this because of perceived issues X, Y, and Z, or uncertainty regarding benefits A, B, and C," when these issues and benefits are better addressed through empirical testing than a skeptic's intuitions. We need a community that will bear the discovery costs of promising interventions, but this seldom happens unless the proponent of the idea already has clout and/or connections within EA.
If we don't have the information to evaluate the effectiveness of a possible solution, the answer is not to discard it, but rather to evaluate the information costs and the potential value associated with the array of reasonably possible outcomes.
What would be helpful, if this doesn't exist, would be aggregating sets of potential solutions, listing the resources currently directed toward evaluating their EV, determining the bottlenecks (often money) in assessing EV, and making reasonable estimates of potential exploitation value given various hypothesized EVs. Then those with resources in EA could ensure that promising paths have the resources to be explored, and we could exploit the best solutions fully.
I am rather pessimistic about EA’s prospects for this.
What is very interesting to consider is that we forget most of our dreams, including nightmares.
Consequently, it may be the case that a tremendous amount of negative subjective experience is transpiring without being remembered… One can imagine that many people go to hell for some period of their slumbers, migrate to subsequent dreams, and have no memory of many of the horrid experiences they had.
Also brings to mind the differences between the experienced life and the remembered life discussed by Kahneman in Thinking Fast and Slow.
It is a bit curious that EAs haven't explored the dreamscape more thoroughly, given their proclivity for considering out-of-the-box subjects and the fact that, cumulatively, so much time is spent in dreams.
SBF likely had mixed motives, in that there was likely at least some degree to which he acted in order to further his own well-being or with partiality toward the well-being of certain entities (such as his parents). The reasoning that you mentioned above (privileging your own interests instrumentally rather than terminally such that you as an agent can perform better) is a fraught manner of thinking with extremely high risk for motivated reasoning. However, I think that it is one that serious altruists need to engage with in good faith. To not do so would imply giving until one’s welfare was at the global poverty line, which would probably impair one too much as an agent. Of course, I’m not saying he was engaged in good faith regarding this instrumental privileging argument, but I cannot preclude the possibility.
Regardless, I have been persuaded by everything that I have seen that a significant part of SBF's motivation was to help advance a world of higher well-being. Of course, from a deontological perspective he did wrong by his dishonest and fraudulent actions. From a consequentialist perspective, the downside risks had such incalculable costs that it was terrible as well. But his sincere desire to make the world a better place makes me sympathetic to him in a way that I probably would not be with other similarly sentenced convicts. Given a deterministic or random world, I understand that all convicts are victims too. But I cannot help but feel more for someone led to their crime by a sincere desire to better the world than for someone who, say, killed their spouse in a fit of rage or advanced themselves financially without any such altruistic motivation.
I think you perhaps should consider the possibility that legal defense and offense have declining marginal returns… After X dollars spent exploring all available evidence and hiring excellent legal minds to develop, refine, and deliver the most robust and persuasive arguments, there is going to be an asymptote in the value obtained from additional funds. Beyond that point, the outcome will depend more on the judge, the fact-finders, and the underlying evidence than on spending a trillion dollars on lawyers, despite what legal dramas would have you believe...
SBF did terrible acts from many different moral viewpoints, including that of consequentialism. In addition to those he directly harmed, he harmed the EA movement.
However, from what I have read, it seems as if he acted from a sincere desire to better the world and did so to the best of his (quite poor) judgment. Thus, to me, his punishment is a tragedy, though a necessary one. As a matter of ultimate culpability, I don't know that I would judge him more harshly than the vast majority of people in the developed world: those who have the capability to save or dramatically better the lives of people in the developing world but decline to, or those who thoughtlessly contribute to the torture of animals through their participation in the animal-product economy. I wish him comfort and hope that he can find a wiser path forward with the remainder of his life.
Nice post and I agree that we should avoid saying things that might make people feel unwelcome or uncomfortable based on characteristics.
One thing that I bristle at a bit is that the exclusion caused by offhand comments or controversial posts is probably dwarfed, by orders of magnitude, by the exclusion caused by material considerations that prevent minorities (as well as the vast majority of whites) from being able to contribute to the same degree in EA. If you look around at the people at an EAG, you can pretty safely bet that they are not only in college or college-educated, but that their parents were as well. They probably have savings, either personally or through family they can rely on, to be able to take risks for their personal ambitions, which in the case of EAs are often choices that enable them to better the world. It kills me when I listen to podcasts and audiobooks noting that mornings are often the most important part of the day, yet I, and the vast majority of people, must direct most of our most productive hours to a job that is not impactful rather than to the projects we think can profoundly better the world.
I realize that maybe this is a less tractable issue than getting EAs to commit fewer microaggressions and write fewer controversial or offensive posts. But I think the EA community is grossly negligent with regard to what may be its most valuable resource… EAs. Maybe another amnesty post will be about considering people as agents versus people as patients… The people I'm talking about (low- and middle-income people in rich and middle-income countries, and many in lower-income countries: basically everyone not in the top global 0.5%) are mostly not good targets as moral patients. The very poorest people, farmed animals, and future people are probably much more fruitful targets for direct utility increases. But if these people are committed to using their minds and effort as EAs do, many of them may be excellent targets as agents. This point probably applies with even greater force to people in middle- and low-income countries, who are disproportionately likely to be POC.
Anyways, apologies for the digressive response. I should probably just write the full amnesty post on the subject with the time I do not have because I have a full-time non-EA job and run a nonprofit.
A lot of the work with mithrilmen is keeping an argument at a level of abstraction where it sounds sensible as a principle, while declining to interrogate it further, perhaps because venerated people hold that position.
Thanks for having the courage to write this. Regardless of whether it’s correct, it is good to have the position represented and it is much easier in the current environment to take the other side on this.
Thanks for the post, Luke… I was also rather perturbed at the language regarding the "funding overhang" and other implications that effective charities were adequately funded. The hundreds of millions of people in extreme poverty and the countless deaths from preventable diseases speak otherwise.
What really frustrates me is that while EA has been very thoughtful and innovative at identifying opportunities where dollars have had the highest impact, it has put very little comparative thought into how to generate more funding. Some notable exceptions are your organization, GWWC, The Life You Can Save, and several other organizations, mostly oriented around motivating people to donate more.
But there are other ways to multiply donation funds.
For instance, Ribon has a system that can multiply donations by 40–60% by essentially providing a free opportunity for people to direct a donation to an effective charity, which reliably, in aggregate, prompts people to contribute their own money to the charity they directed the free money to.
And I am pretty confident that I have the damn solution to achieving the world envisioned by Natalie Cargill's TED Talk, but the EA community has been largely uninterested in exploring it. Profit for Good, which is probably best explained and argued in the draft of my upcoming TEDx talk in late July, is plausibly a huge multiplier for philanthropic funds (if you are interested in helping me edit the draft or otherwise have feedback, DM me and I can give you editing permission). Of course, it is possible that I am totally wrong, but the sensible response is not endless red-teaming (against which I have yet to hear a particularly strong objection), but rather to assess the costs of empirically validating or invalidating promising solutions.
Institutions that promote effective giving have been shown to have compelling multiplier effects. However, we need to support new ideas for multiplying funds for effective charities if we want to create the world that we want to see. Currently, EA tends to support interventions and cause areas that fall into categories already recognized as high impact, such as AI Safety, so it is great at exploiting opportunities it has identified, but it has been less interested in exploration. Often we want the numbers when the most promising thing to do is spend money and effort discovering and revealing the numbers.
Given how in vogue it has been lately to endorse value pluralism and eschew pure consequentialism, perhaps the consideration of the treatment of portraits should not hinge on their potential consciousness. Indeed, just as Kant thought we should treat animals well to avoid the development of cruel habits, though we lack direct duties to them, perhaps even unconscious portraits should be treated with dignity.
Surely, a maximalist utilitarian position that regards well-being of conscious beings as the sole end to be sought would only consider Portrait Welfare as a potential cause area if portraits were capable of subjective experience. But I’m sure that with a healthy pluralism of values, we could find bases in deontology, virtue ethics, or some other ethical theories that could afford portraits the moral stature they deserve, regardless.
Another pernicious aspect of Eliezer's zombie discussion is his insinuation that holding views that differ from his on the matter implies that one's other views should not be taken seriously. Even if Yudkowsky is right and others are fantastically wrong on zombies, this warrants only a very small credence update regarding the accuracy of their other views. History is littered with brilliant and useful people who have been famously and impressively wrong on some specific matters.
Strong disagree. If the proponent of an intervention or cause area believes its advancement is extremely high-EV, such that it would be very imprudent for EA resources not to advance it, they should use strong language.
I think EAs are too eager to hedge their language and use weak language regarding promising ideas.
For example, I have no compunction saying that Profit for Good (companies with charities in the vast majority shareholder position) needs to be advanced by EA, in that I believe failing to do so results in an ocean less counterfactual funding for effective charities, and consequently a significantly worse world.
I agree… I was very bothered by the categorical proscriptions against "ends justifying the means," as well as the seeming statements that some kinds of ethical epistemology are outside the bounds of discourse. This seemed very contrary to the EA norm that open discourse on morality is essential to our project.
One reservation I have about people leaving the EA community is that they might be exactly the kind of people that EA needs. The fundamental project of EA is using reason, science, philosophy, and other epistemological tools to discover how we can use the resources we have to do the most good, and then acting on the information we develop.
The EA community operationalizes this project and adopts various subordinate values. These subordinate values often strongly affect the feel and environment of EA. But it is critical that EA have people who challenge those subordinate values, and who are potentially not aligned at that lower level, so they can contribute to the discourse on the fundamental EA project.
I realize this comment is a bit nonresponsive, because this post pertained more to someone whose needs are not being satisfied by EA than to someone who disagrees with some aspects of the community.
Just a quick impression:
I definitely love EA for its intellectual bent… We need to evaluate how we can do the most good, which can be a tricky process with reality often confounding our intuitions.
But I also love EA for wanting to use that reason to profoundly better the world… Action. What I get from this strategy is an emphasis on the cerebral without an emphasis on action. I think EA will appeal more broadly if we highlight action as well as cogitation, both in furtherance of a world with far less suffering, more joy, more ability for people to pursue their dreams, and a firm foundation for a wonderful world to persist indefinitely.
It's for posts like these that being able to disagree-vote without downvoting the main post would be particularly helpful...