Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.
Brad West
One way of thinking about the role is how varying degrees of competence correspond with outcomes.
You could imagine a lot of roles have more of a satisficer quality: if a sufficient degree of competence is met, the vast majority of the value possible from that role is realized. Higher degrees of excellence would yield only marginal value increases; insufficient competence could reduce value dramatically. In such a situation, risk-aversion makes a ton of sense: the potential benefit of getting grand-slam placements is very small in relation to the harm caused by an incompetent hire.
On the other hand, you might have roles where the value scales very well with incredible placements. In these situations, finding ways to test possible fit may be very worth it even if there is a risk of wasting resources on bad hires.
Yeah, a lot of interventions/causes/worldviews that have power in EA will have more than adequate resources to do what they are trying to do. This is why, to some extent, “getting a job at an EA org” may not be a particularly high-EV move: it is not clear that the counterfactual employee would be worse than you (although this reasoning is somewhat weakened by the fact that you could ostensibly free an aligned person to do other work, and so on).
Lending your abilities and resources to promising causes/etc. that do not have power behind them is probably a way that someone of mediocre abilities could have high impact, perhaps much more impact than much more talented people serving well-resourced masters. Of course, the trick here would be identifying these “promising,” neglected areas, especially when the lack of attention from the powers that be may be interpreted as a lack of merit.
I had thought a public list emphasizing the potential impact of different interventions, along with the likely costs of discovering their actual impact, would be great.
Reading through your articles, I can’t help but share your concern, especially given how potentially fragile people’s important and impactful altruistic decisions might be.
If my family is making 100k and we are choosing to designate 10% of that annually to effective charities, that represents vacations that are not had, savings that are not made, a few less luxuries, etc. I may be looking for a permission structure to eliminate or reduce my giving. This is probably even more true if I am merely considering donating a significant portion of my income.
Critics of effective giving can help people feel morally justified in abstaining from effective giving, which might be all that they need to maintain the status quo of not giving, or tilt a bit more of their budget to themselves and their families.
SBF likely had mixed motives, in that there was likely at least some degree to which he acted in order to further his own well-being or with partiality toward the well-being of certain entities (such as his parents). The reasoning that you mentioned above (privileging your own interests instrumentally rather than terminally such that you as an agent can perform better) is a fraught manner of thinking with extremely high risk for motivated reasoning. However, I think that it is one that serious altruists need to engage with in good faith. To not do so would imply giving until one’s welfare was at the global poverty line, which would probably impair one too much as an agent. Of course, I’m not saying he was engaged in good faith regarding this instrumental privileging argument, but I cannot preclude the possibility.
Regardless, I have been persuaded by everything I have seen that a significant part of SBF’s motivations was to help advance a world of higher well-being. Of course, from a deontological perspective he did wrong by his dishonest and fraudulent actions. From a consequentialist perspective, the downside risks had such incalculable costs that it was terrible as well. But his sincere desire to make the world a better place makes me sympathetic toward him in a way that I probably would not be toward other convicts with similar sentences. Given a deterministic or random world, I understand that all convicts are victims too. But I cannot help but feel more for one who was led to their crime by a sincere desire to better the world than for one who, say, killed their spouse in a fit of rage or sought to advance themselves financially without any altruistic motivation.
To clarify, you would sacrifice consistency to achieve a more just result in an individual case, right?
But if results could be both consistently applied and just, that would be the ideal...
I don’t understand the disagree votes if I am understanding correctly.
Please note that my previous post took the following positions:
1. That SBF did terrible acts that harmed people.
2. That it was necessary that he be punished. To the extent that it wasn’t implied by the previous comment, I clarify that what he did was illegal (EDIT: which would involve a finding of culpable mental states that would imply that his wrongdoing was no innocent or negligent mistake).
3. The post doesn’t even take a position as to whether the 25 years is an appropriate sentence.
All of the preceding is consistent with the proposition that he also acted with the intention of doing what he could to better the world. Like others have commented, his punishment is necessary for general deterrence purposes. However, his genuine altruistic motivations make the fact that he must be punished tragic.
SBF did terrible acts from many different moral viewpoints, including that of consequentialism. In addition to those he directly harmed, he harmed the EA movement.
However, from what I have read, it seems as if he acted from a sincere desire to better the world and did so to the best of his (quite poor) judgment. Thus, to me, his punishment is a tragedy, though a necessary one. As a matter of ultimate culpability, I don’t know if I would judge him more harshly than the vast majority of people in the developed world: those who have the capability to save or dramatically better the lives of people in the developing world but decline, or those who thoughtlessly contribute to the torture of animals through their participation in the animal-product economy. I wish him comfort and hope that he can find a wiser path forward with the remainder of his life.
There’s a lot of competition on the “frontpage” among linked articles and direct posts by forum participants. I can understand why people would think this article should not be displacing other things. I do not understand this fetishization of criticism of EA.
For comparison, a link to an article by Peter Singer on businesses like Humanitix, which have charities in the shareholder position, along with some commentary, got 16 cumulative karma. I don’t understand why every self-flagellating post has to be a top post.
One thought re self-funding charities is that it might be best for entities to focus on what they are best at: charities on interventions, and for-profit businesses on providing goods and services to consumers or businesses.
A model that funds charities while enabling entities to focus on what they do best is Profit for Good, in which charities are in the vast majority shareholder position of for-profit companies. I explain why I believe that this model could be quite powerful in my TEDx Talk here:
I was contemplating writing something similar… The question of whether a person is worthy of all the “praise credit” is different from the question of whether the valuable outcome is causally attributable to the agent.
Definitely agree that ETG is very much underrated. I think if you are looking to maximize your impact, you should be looking at how you can bring something to the table in terms of skills/knowledge/insight/etc. that money cannot buy, or that is very difficult or costly for money to buy. Something like this might be building specialized research skills/knowledge, connections, influence, or idea development/cultivation. I am a bit skeptical that working for a high-impact org in positions requiring skills that are available in the general employment market is, in expectation, high impact. I may, however, be underestimating the importance of securing alignment in such roles. If I could not see the opportunity in my career to build something money cannot buy, I would probably look at earning to give.
I agree that outreach is well-directed to elite colleges. Students of these institutions are, all else being equal, more capable, better-connected, and generally have more resources to deploy to EA because they tend to come from wealthier backgrounds. But I think this audience might not be the best target for material support, because they may well already have the resources to make choices with their lives that can better help the world. The most promising EAs outside of the elite are probably the best targets for material support, because their impact is quite likely to be severely curtailed by their own economic/social circumstances. Rereading your third paragraph, I think we are largely in agreement.
Yeah, I think the crux is that you want to weight counterfactual analysis less, whereas I and EAs generally think this is the ultimate question (at least to the extent consequentialism is motivating our actions, as opposed to non-consequentialist moral considerations).
I think that the way to evaluate Alec’s impact is to ask: if Alec had not taken action, would those thousand people be dead or alive? (In this hypothetical, I’m assuming Alec is playing a founder role regarding a new intervention.) Regarding the twenty other people, ask yourself if the same is true of them. If they are volunteering, would there have been others to volunteer, or would the project have been able to procure the funds to hire employees? If they are working for pay, was their work such that the project would not have been able to happen without them? Maybe some or all of these people were truly indispensable to the project, such that a proper impact analysis would attribute much or even most of the impact to the twenty people other than Alec.
On the other hand, it may be the case that Alec secured funding to pay these twenty other people, and if they had not taken the positions, other competent people would have. In this situation, provided that there were no other sources of funding available to Alec, I would say an impact analysis would attribute half of the lives saved to Alec and half to the funder.
I acknowledge that determining the counterfactual is hard (for instance, maybe the 20 workers freed up other actors to do other impactful work). But as the endpoint of analysis, I definitely think we should be trying to determine what the world looks like if we do X versus if we do not do X, rather than whether we do something that other people consider admirable or that otherwise feels good.
EDIT: I realize you put “and those thousand people would not be saved but for the twenty others”. If this is true, then the impact “credit” should definitely be spread among them. I think it bears considering whether that is true.
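To make the counterfactual test concrete, here is a toy sketch in Python. The numbers and the `counterfactual_impact` helper are purely illustrative assumptions of mine, not anything from the original discussion; the point is only that impact is the difference between the world with the agent’s action and the world without it.

```python
# Toy illustration of counterfactual impact attribution,
# using made-up numbers for the "Alec" example.

def counterfactual_impact(outcome_with_agent, outcome_without_agent):
    """Impact = outcome if the agent acts minus outcome if they do not."""
    return outcome_with_agent - outcome_without_agent

# If the project saves 1,000 lives, and without Alec it would not have
# happened at all, Alec's counterfactual impact is the full 1,000.
print(counterfactual_impact(1000, 0))     # 1000

# If a replacement would have filled a worker's role, so the project
# saves 1,000 lives either way, that worker's counterfactual impact is 0.
print(counterfactual_impact(1000, 1000))  # 0
```

This framing also shows why "indispensable" matters: credit only accrues to the extent the outcome would actually have been worse without that particular person.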
Regarding the impact attribution point-
You simply need to try to evaluate the world that would have transpired but for a specific agent’s (or agents’) actions. In the case of your vaccine creation and distribution example, let’s take the individual or team that created the initial vaccine, and the companies (and their employees) that manufacture and distribute the vaccines.
If the individual or team had not created the initial vaccine, it likely would have been discovered later. On the other hand, if the manufacturers and distributors had not gone into those manufacturing and distributing roles, other members of society would have filled them. So the world is better to the degree that the discoverers accelerated this benefit to the world. However, the other agents (to the extent there were no other bottlenecks) did not counterfactually have an impact, because if they hadn’t been in those positions, someone else would have been.
I agree that it is hard to evaluate counterfactuals, but declining to do so will prevent us from looking for important gaps that help us achieve the best outcomes together.
Nice post and I agree that we should avoid saying things that might make people feel unwelcome or uncomfortable based on characteristics.
One thing that I bristle at a bit is that I think the exclusion that offhand comments or controversial posts cause is probably dwarfed by orders of magnitude by the exclusion caused by material considerations that prevent minorities (as well as the vast majority of whites) from being able to contribute to the same degree in EA. If you look around at people at an EAG, you can pretty safely bet that they are not only in college or college-educated, but that their parents were as well. They probably have savings, either personally or through family they can rely on, that let them take risks for their personal ambitions, which in the case of EAs are often choices that enable them to better the world. It kills me when I listen to podcasts and audiobooks noting that mornings are often the most important parts of the day, yet I, and the vast majority of people, must direct most of our most productive hours to a job that is not impactful rather than to the projects we think can profoundly better the world.
I realize that maybe this is a less tractable issue than getting EAs to commit fewer microaggressions or write fewer controversial and offensive posts. But I think the EA community is grossly negligent with regard to what may be its most valuable resource… EAs. Maybe another amnesty post will be about considering people as agents versus people as patients… I think the people I’m talking about—low- and middle-income people in rich and middle-income countries, and many in lower-income countries—basically everyone not in the top global 0.5%, are mostly not good targets as moral patients. The very poorest people, farmed animals, and future people are probably much more fruitful targets for direct utility increases. But if these people are committed to using their minds and effort as EAs do, many of them may be excellent targets as agents. This point probably applies with even greater force to people in middle- and low-income countries, who are disproportionately likely to be POC.
Anyways, apologies for the digressive response. I should probably just write the full amnesty post on the subject with the time I do not have because I have a full-time non-EA job and run a nonprofit.
I think the way an EA would view this would still be in terms of the most utility-effective use of their time; however, the opportunity for leverage may significantly affect the calculation, and may enable cost-effective uses of time outside of typical cause areas.
For instance, there might be an EA endorsed charity for which marginal donations would generate utility at a rate of 10 utils/dollar. There might be an organization in the developed world that generates utility at an average rate of 1 util per dollar, and has an average annual budget of $10 million.
Suppose an EA sees an opportunity to dramatically increase the effectiveness of the non-EA charity by about 50%, raising it to 1.5 utils per dollar, at the cost of about a year of the EA’s full-time work. Alternatively, the EA could earn to give, earning a $120k salary and being able to donate $40k to the EA-endorsed charity.
If the EA works for the non-EA charity and increases its average utils/$ from 1 to 1.5, that generates 5 million utils, assuming the same budget. On the other hand, earning to give and donating $40k to the EA charity generates only 400k utils.
In this circumstance, the EA generates far more utility by working for the non-EA charity and rendering it more efficient than by earning to give for the EA charity.
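The arithmetic above can be checked with a short sketch; all figures are the hypothetical ones from the example (10 utils/$, a $10M annual budget, a $40k annual donation), not real data.

```python
# Toy comparison of the two options from the example above.

EA_CHARITY_UTILS_PER_DOLLAR = 10     # marginal effectiveness of EA-endorsed charity
NON_EA_BASE = 1.0                    # non-EA charity, utils per dollar
NON_EA_IMPROVED = 1.5                # after the EA's effectiveness work (+50%)
NON_EA_BUDGET = 10_000_000           # non-EA charity's annual budget, dollars
ETG_DONATION = 40_000                # annual donation from earning to give

# Option 1: improve the non-EA charity's effectiveness across its budget.
improvement_utils = (NON_EA_IMPROVED - NON_EA_BASE) * NON_EA_BUDGET

# Option 2: earn to give and donate to the EA-endorsed charity.
etg_utils = ETG_DONATION * EA_CHARITY_UTILS_PER_DOLLAR

print(improvement_utils)  # 5,000,000 utils
print(etg_utils)          # 400,000 utils
```

Note the leverage: even though the EA charity is ten times more effective per dollar, a modest percentage improvement applied to a large budget dominates a personal-scale donation.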
I like the idea of enabling different domains (which may not, themselves, be the most cost-effective marginal recipients) to reach local maxima as a particularly effective opportunity. There may not be as many opportunities to increase the effectiveness of the most effective charities as there are for some of the less effective charities that are still significant recipients of funds.
One might say that it is better not to support such charities and let them die. This logic may be more applicable in the for-profit world, where failure to generate sufficient returns is often a death knell. However, survival in the nonprofit world can be more tied to being able to make donors happy than to demonstrated QALYs/dollar.
Wanted to be clear, in your Appendix A, are you suggesting categorically that people not use alcohol, regardless of whether they have reason to believe they are/would be an alcoholic?
I would certainly agree with you that this advice would be prudently taken by alcoholics.
However, many (most?) people can enjoy alcohol occasionally and in moderation for a pleasant experience without this usage causing problems in their lives. If you are someone who occasionally drinks, enjoys it, and this usage isn’t causing problems in your life, I think it is advisable to continue occasional, responsible drinking.
Would be interesting to see an argument that the EA Forum is net negative. It creates the impression that new ideas are being considered and voices are being heard, but people who have power and influence seldom are actually open to influence from EA posts, nor are there effective mechanisms by which others (like gatekeepers) disseminate such information. The most highly upvoted, and thus accessible, posts are either cute, meta-level clever commentary that’s often not actionable, or posts by high-status EAs or orgs that have little difficulty having their voices heard (although having a convenient place for them to share things is a useful function).
I do feel like as a place for new ideas to translate into research and, ultimately, impactful action, the EA forum is quite overrated. While I wouldn’t agree that it’s net negative, I worry that there is an assumption by community members that it is doing things that it isn’t.
I see people disagree with me. I can see a lot of bases on which people would disagree and it would be interesting to see which ones apply.
Because OP’s job is technical rather than policy-oriented, it is unlikely that a difference in character in the person doing the job would make a difference in outcomes. I might agree in a context where the occupant of the job might be able to make a difference in policy choices.
Taking a job and supporting a morally wrong industry is wrong regardless of whether the same wrong would result counterfactually.
There are reasons to believe the counterfactual of OP taking the job would be better (for instance, OP might be significantly more competent than the one who would be counterfactually hired).
Other reasons? Curious
I messaged you. Good for you for looking to make a difference and develop your knowledge/skills.