I think this suggests the cause prioritization factors should ideally take the size of the marginal investment we’re prepared to make into account, so Neglectedness should be
“% increase in resources / extra investment of size X”
rather than
“% increase in resources / extra person or $”,
since the latter assumes a small investment. At the margin, a small investment in a neglected cause has little impact because of setup costs (so Solvability/Tractability is low), but a large investment might get us past the setup costs and into better returns (so Solvability/Tractability is higher).
As you suggest, if you’re spreading too thin between neglected causes, you don’t get far past their setup costs, and Solvability/Tractability remains lower for each than if you’d just chosen a smaller number to invest in.
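To illustrate the setup-cost point, here’s a minimal sketch with made-up numbers (the shape of the returns curve, the specific cost figures and the function names are all assumptions for illustration, not estimates for any real cause):

```python
import math

def percent_solved(total_investment, setup_cost=100_000, scale=400_000):
    """Hypothetical returns curve: almost nothing gets solved until setup costs
    are covered, then returns rise and eventually diminish. Illustrative only."""
    if total_investment <= setup_cost:
        return 0.1 * total_investment / setup_cost  # tiny returns while setting up
    return 0.1 + 30 * (1 - math.exp(-(total_investment - setup_cost) / scale))

def marginal_tractability(existing, extra):
    """% of the problem solved per extra dollar, for an extra investment of size `extra`."""
    return (percent_solved(existing + extra) - percent_solved(existing)) / extra

# A small marginal investment in a neglected cause barely dents the setup costs...
print(marginal_tractability(existing=0, extra=10_000))   # ~1e-06 % per $
# ...while a larger one gets past them and into better per-dollar returns.
print(marginal_tractability(existing=0, extra=500_000))  # ~4e-05 % per $
```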
I would guess that for baitfish, fish stocking, and rodents fed to pet snakes, there’s a lot of existing expertise in animal welfare and animal interventions (e.g. corporate outreach/campaigns) that’s transferable, so the setup costs wouldn’t be too high. Did you find this not to be the case?
If we’re being careful, should these considerations just be fully captured by Tractability/Solvability? Essentially, the marginal % increase in resources only solves a small % of the problem, based on the definitions here.
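For reference, here’s how the factors chain together under the definitions quoted above (the “Scale” label for the first factor is my addition; it isn’t named in this thread). Any setup-cost effect then has to show up in the middle, Solvability/Tractability, term:

$$\frac{\text{good done}}{\text{extra }\$} = \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{Scale}} \times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{Solvability/Tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra person or }\$}}_{\text{Neglectedness}}$$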
Besides the risks of harm by omission and of focusing on the wrong things, which I agree with others here are a legitimate place for debate in cause prioritization, there are risks of contributing to active harm. That’s a slightly different concern (not fundamentally different for a consequentialist, but it might carry greater reputational costs for EA). I think this passage is illustrative:
For example, consider the following scenario from Olle Häggström (2016); quoting him at length:
“Recall … Bostrom’s conclusion about how reducing the probability of existential catastrophe by even a minuscule amount can be more important than saving the lives of a million people. While it is hard to find any flaw in his reasoning leading up to the conclusion [note: the present author objects], and while if the discussion remains sufficiently abstract I am inclined to accept it as correct, I feel extremely uneasy about the prospect that it might become recognized among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelet, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.”
Häggström offers several reasons why this scenario might not occur. For example, he suggests that “the annihilation of Germany would be bad for international political stability and increase existential risk from global nuclear war by more than one in a million.” But he adds that we should wonder “whether we can trust that our world leaders understand [such] points.” Ultimately, Häggström abandons total utilitarianism and embraces an absolutist deontological constraint according to which “there are things that you simply cannot do, no matter how much future value is at stake!” But not everyone would follow this lead, especially when assessing the situation from the point of view of the universe; one might claim that, paraphrasing Bostrom, as tragic as this event would be to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—it wouldn’t significantly affect the total amount of human suffering or happiness or determine the long-term fate of our species, except to ensure that we continue to exist (thereby making it possible to colonize the universe, simulate vast numbers of people on exoplanetary computers, and so on).
I think you don’t need Bostromian stakes or utilitarianism for these types of scenarios, though. Consider torture, collateral civilian casualties in war, or the bombings of Hiroshima and Nagasaki. In many of these cases you could argue that more civilians will be saved, so the trade seems more comparable, actual lives for actual lives rather than actual lives for extra lives (extra in number, not in identity, on a wide person-affecting view), but it seems act consequentialism is susceptible to making similar trades generally.
I think one partial solution is to just not promote act consequentialism publicly unless you preface it with important caveats. Another is to correct naive act consequentialist analyses in high-stakes scenarios as they come up (like Phil is doing here, but also in response to individual comments).
You could just ask orgs which roles were filled within the last X days/months, since they should know, so it wouldn’t require ongoing monitoring, but this might still be substantial work for you and them (cumulatively) to get this info, depending on how many orgs you need to contact.
The author still cares about x-risks, just not in the Bostromian way. Here’s the first sentence from the abstract:
This paper offers a number of reasons for why the Bostromian notion of existential risk is useless.
Weird that you made a throwaway just to leave a sarcastic and misguided comment.
For whatever it’s worth, my own tentative guess would actually be that saving a life in the developing world contributes more to growth in the long run than saving a life in the developed world. Fertility in the former is much higher, and in the long run I expect growth and technological development to be increasing in global population size (at least over the ranges we can expect to see).
Is this taking more immediate existential risks into account, and to what degree and in what ways people in the developing and developed worlds affect them?
It seems as though much of the discussion assumes a hedonistic theory of well-being (or at least uses a hedonistic theory as a synecdoche for theories of well-being taken as a whole?). But, as the authors themselves acknowledge, some theories of well-being are not purely hedonistic.
It is also a bit misleading to say that “many effective altruists are not utilitarians and care intrinsically about things besides welfare, such as rights, freedom, equality, personal virtue and more.” On some theories, these things are components of welfare.
It’s discussed a bit here:
The two main rivals of hedonism are desire theories and objective-list theories. According to desire theories only the satisfaction of desires or preferences matters for an individual’s wellbeing, as opposed to the individual’s conscious experiences. Objective list theories propose a list of items that constitute wellbeing. This list can include conscious experiences or preference-satisfaction, but it rarely stops there; other common items that ethicists might put on their objective list include art, knowledge, love, friendship and more.
Do non-utilitarian moral theories have readily available solutions to infinite ethics either?
I think it isn’t a problem in the first place for non-consequentialist theories, because the problem comes from trying to compare infinite sets of individuals with utilities when identities (including locations in spacetime) aren’t taken to matter at all, but you could let identities matter in certain ways and possibly get around it this way. I think it’s generally a problem for consequentialist theories, utilitarian or not.
I’d also recommend the very repugnant conclusion as an important objection (at least to classical or symmetric utilitarianism).
It’s worth considering that avoiding it (Weak Quality Addition) is one of several intuitive conditions in an important impossibility theorem (there are many similar ones, including the earlier theorem cited in the post you cite), which could serve as a response to the objection.
EDIT: Or maybe the impossibility theorems and paradoxes should be taken to be objections to consequentialism generally, because there’s no satisfactory way to compare outcomes generally, so we shouldn’t rely purely on comparing outcomes to guide actions.
the idea that suffering is the dominant component of the expected utility of the future is both consistent with standard utilitarian positions, and also captures the key point that most EA NU thinkers are making.
I don’t think it quite captures the key point. The key point is working to prevent suffering, which “symmetric” utilitarians often do. It’s possible the future is positive in expectation, but it’s best for a symmetric utilitarian to work on suffering, and it’s possible that the future is negative in expectation, but it’s best for them to work on pleasure or some other good.
Symmetric utilitarians might sometimes try to improve a situation by creating lots of happy individuals rather than addressing any of the suffering, and someone with suffering-focused views (including NU) might find this pointless and lacking in compassion for those who suffer.
In case you’re missing context before you vote on my comment, they have a page for objections.
I’m surprised by the downvotes. There’s a page on types of utilitarianism, and NU is not mentioned, but “variable value theories, critical level theories and person-affecting views” are at least named, and NU seems better known than variable value and critical level theories. Average utilitarianism also isn’t mentioned.
My impression of variable value theories and critical level theories is that these are mostly academic theories, constructed as responses to the repugnant conclusion and other impossibility results, and pretty ad hoc for this purpose, with little independent motivation and little justification for their exact forms. Exactly where should the critical level be? Exactly what should the variable value function look like? They don’t seem to be brought up much in the literature except in papers actually developing different versions of them or in papers comparing different theories. Maybe my impression is wrong.
But we also might not want to count someone moving between orgs in the same kind of role, which makes things more complicated. There can also be people shifting around into different roles within the movement, and even possibly cycles because of it, e.g. A goes from X to Y, B goes from Y to Z and C goes from Z to X. Maybe you’d want to deal with openings caused by people leaving a position differently.
Should we be looking at recently filled roles instead of all currently filled ones, for roles needed “on the margin”? E.g., of all recently open roles, how many remain open? It seems like both the ratio and number could be important, though, and all else equal, a greater number means more needed, and a greater ratio means more needed. Of course, some roles might also be more important than others, on top of this.
Compare the following:
X: We need 6 people in total in roles of type X, and 2 roles have been filled for a while, 2 were filled recently, and the last 2 are open. The ratio of open out of all roles of type X is 2/6=1/3, but the ratio of open to recently open is 2/4=1/2.
Y: We need 7 people in total in roles of type Y, and 5 roles have been filled for a while, none were filled recently, and the last 2 are open. The ratio of open out of all roles of type Y is 2/7, but the ratio of open to recently open is 2/2=1.
Z: We need 12 people in total in roles of type Z, and 6 roles have been filled for a while, 3 were filled recently, and the last 3 are open. The ratio of open out of all roles of type Z is 3/12=1/4, but the ratio of open to recently open is 3/6=1/2.
It seems like we should push people to fill Y or Z more than X, because they’re harder to fill on the margin, either proportionally (2/2 of recently open roles still open for Y > 2/4 for X, though both have 2 open roles) or absolutely (3 openings for Z > 2 for X, though both have 1/2 of recently open roles still open), even though the proportion of roles of type X open out of all roles of type X needed is highest.
It’s harder to judge between Y and Z; Y has a proportionally harder time being filled (2/2=1>1/2=3/6), but Z has more openings (3>2).
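A minimal sketch of the two metrics from the X/Y/Z example above (the `Role` class and its field names are mine, just for illustration):

```python
from dataclasses import dataclass

@dataclass
class Role:
    name: str
    filled_for_a_while: int  # roles filled for a while
    filled_recently: int     # roles filled recently
    still_open: int          # roles still open

    def open_share_of_all_needed(self) -> float:
        # openings as a share of all roles of this type needed
        total = self.filled_for_a_while + self.filled_recently + self.still_open
        return self.still_open / total

    def open_share_of_recently_open(self) -> float:
        # "recently open" = recently filled + still open
        return self.still_open / (self.filled_recently + self.still_open)

roles = [
    Role("X", filled_for_a_while=2, filled_recently=2, still_open=2),
    Role("Y", filled_for_a_while=5, filled_recently=0, still_open=2),
    Role("Z", filled_for_a_while=6, filled_recently=3, still_open=3),
]

for r in roles:
    print(r.name, r.still_open, round(r.open_share_of_all_needed(), 2),
          round(r.open_share_of_recently_open(), 2))
# X 2 0.33 0.5  -> highest share of all needed roles open, but easier on the margin
# Y 2 0.29 1.0  -> every recently open role is still open (proportionally hardest)
# Z 3 0.25 0.5  -> most absolute openings
```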
Some objections worth covering (EDIT: on the objections page), although not necessarily applicable to all versions:
1. Mere receptacles/vessels objection, replaceability, separateness of persons, and tradeoffs between the suffering of one and the pleasure of others
2. Headaches vs lives (dust specks vs torture)
3. Infinite ethics: no good solutions?
4. Population ethics: impossibility theorems, paradoxes, no good solutions? (inspired by antimonyanthony)
Such as? Can you see other altruistic uses of philanthropy besides solving coordination problems, politically empowering moral beings, and fixing inequality?
Better democracy won’t help much with EA causes if people generally don’t care about them, and we choose EA causes in part based on their neglectedness, i.e. the fact that others don’t care enough. Causes have to be made salient to people, and that’s a role for advocacy to play; when they remain neglected after that, that’s where philanthropy should come in. I think people would care more about animal welfare if they had more access to information and were given opportunities to vote on it (based on ballot initiatives and surveys), but you need advocates to drive this, and I’m not sure you can or should try to capture this all without philanthropy. Most people don’t care much about the x-risks EAs are most concerned with, and some of the x-risks are too difficult for the average person to understand well enough to care about.
Also, I don’t think inequality will ever be fixed, since there’s no well-defined target. People will always argue about what’s fair, because of differing values. Some issues may remain extremely expensive to address, including some medical conditions, and wild animal welfare generally, so people as a group may be unwilling to fund them, and that’s where advocates and philanthropists should come in.
Oh, I agree solving coordination failures to finance public goods doesn’t solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoists’ motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren’t coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.
What is “just the right amount”? And how do you see the UN coming to fund it if they haven’t so far?
I don’t think AI safety’s current and past funding levels were significantly lower than otherwise due to coordination failures, but rather information asymmetries, like you say, as well as differences in values, and differences in how people form and combine beliefs (e.g. most people aren’t Bayesian).
If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?
How else would you see (longtermist) AI safety make up for Open Phil’s funding through political mechanisms, given how much people care about it?
I’ve always wondered what the unifying theme was behind RadicalxChange, but after writing this post, I had the sudden realization that it’s about making philanthropy obsolete.
It seems like it wouldn’t address many of the issues discussed in this article, especially politically unempowered moral beings, or many of the EA causes. Maybe it can make solving them easier, but it doesn’t offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.
To make philanthropy obsolete, I think you’d have to either make advocacy obsolete or be able to capture it effectively without philanthropy. As long as sentient individuals have competing values and interests, with tradeoffs to be made between them, I think there will be a need for advocacy, and I’d expect that to remain true even if nonhuman animals, future individuals and artificial sentiences gain rights and representation. I don’t expect ethical views to converge in the future, and as long as they don’t, there should be room for advocacy.
Are there any charity ideas outside of the four here that you’d like to see incubation program applicants suggest?
What kinds of charities (and specific interventions?) are on the radar to cofound for this round of applicants?