See also The person-affecting value of existential risk reduction by Gregory Lewis.
My own skepticism of longtermism stems from a few main considerations:
1. I often can’t tell longtermist interventions apart from PlayPumps or Scared Straight (an intervention that actually backfired). At least for these two interventions, we measured outcomes of interest and found that they didn’t work or were actively harmful. By the nature of many proposed longtermist interventions, we often can’t get good enough feedback to know whether we’re doing more good than harm, or much of anything at all.
2. Many specific proposed longtermist interventions don’t look robustly good to me, either (i.e. their expected value is either negative or a case of complex cluelessness, and I don’t know the sign). Some of this may be due to my asymmetric population ethics. If you aren’t sure about your population ethics, check out the conclusion in this paper (although you might need to read more or watch the talk for definitions), which indicates quite a lot of sensitivity to population ethics.
3. I’m not convinced that we can ever identify robustly positive longtermist interventions, essentially due to 1, or that what I could do would actually support robustly positive longtermist interventions according to my views (or views I’d endorse upon reflection). GPI’s research is insightful, impressive and has been useful to me, but I don’t know that supporting it further is robustly positive, since I am not the only one who can benefit from it, and others may use it to pursue interventions that aren’t robustly positive on my views.
Tentatively, I’m hopeful we can hedge with a portfolio of interventions, shorttermist or longtermist or both. If you’re worried about the population effects of AMF, you could pair it with a family planning charity. If you’re also worried about economic effects, I don’t know what to pair it with. I don’t know that it’s always possible to come up with a portfolio that manages side effects and all these different considerations well enough that you should be confident it’s robustly positive. I wrote a post about this here.
A portfolio containing animal advocacy, s-risk work and research on and advocacy for suffering-focused views seems like it would be my best bet.
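To illustrate the kind of hedging I have in mind (using the AMF/family planning pairing above), here’s a toy sketch in Python. All the numbers are made up purely for illustration, not cost-effectiveness estimates, and the scenario labels are just hypothetical ways the sign uncertainty could resolve; the point is only that a bundle can be positive in every scenario even when each component has an uncertain sign.

```python
# Toy illustration of hedging sign uncertainty with a portfolio.
# All values are made up; "amf" and "family_planning" are just labels
# for two interventions whose sign risks plausibly anticorrelate.

scenarios = {
    # hypothetical worlds resolving the population-effects question differently
    "population growth is net good": {"amf": 2.0, "family_planning": -0.5},
    "population growth is net bad":  {"amf": -1.0, "family_planning": 2.0},
}

def portfolio_value(weights, scenario):
    """Total value of the portfolio in one scenario (weights = budget shares)."""
    return sum(weights[name] * scenario[name] for name in weights)

portfolio = {"amf": 0.6, "family_planning": 0.4}

for name, scenario in scenarios.items():
    print(f"{name}: {portfolio_value(portfolio, scenario):+.2f}")

# "Robustly positive" here just means positive in every scenario considered.
print(min(portfolio_value(portfolio, s) for s in scenarios.values()) > 0)
```

The hard part, as above, is whether you can actually find components whose potential harms cancel like this across all the scenarios and side effects you care about.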
One of my main high-level hesitations with AI doom and futility arguments is something like this, from Katja Grace:
My weak guess is that there’s a kind of bias at play in AI risk thinking in general, where any force that isn’t zero is taken to be arbitrarily intense. Like, if there is pressure for agents to exist, there will arbitrarily quickly be arbitrarily agentic things. If there is a feedback loop, it will be arbitrarily strong. Here, if stalling AI can’t be forever, then it’s essentially zero time. If a regulation won’t obstruct every dangerous project, then it’s worthless. Any finite economic disincentive for dangerous AI is nothing in the face of the omnipotent economic incentives for AI. I think this is a bad mental habit: things in the real world often come down to actual finite quantities. This is very possibly an unfair diagnosis. (I’m not going to discuss this later; this is pretty much what I have to say.)
“Omnipotent” is the impression I get from a lot of the characterization of AGI.
Another recent specific example here.
Similarly, I’ve had the impression that specific AI takeover scenarios don’t engage enough with the ways they could fail for the AI. Some are based primarily on nanotech or engineered pathogens, but from what I remember of the presentations and discussions I saw, they don’t typically directly address enough of the practical challenges for an AI to actually pull them off: access to the materials and a sufficiently sophisticated lab/facility with which to produce these things; little or poor verification of the designs before running them through the lab/facility (if done by humans); attempts by humans to defend ourselves (e.g. the military) or hide; ways humans can disrupt power supplies and electronics; and so on. Even if AI takeover scenarios are disjunctive, so are the ways humans can defend ourselves and the ways such takeover attempts could fail, and we have a huge advantage through access to and control over stuff in the outside world, including whatever the AI would “live” on and what powers it. Some of the reasons an AI could fail could be common across significant shares of otherwise promising takeover plans, potentially limiting how far an AI can get by considering or trying more and more such plans, or more complex plans.
I’ve seen it argued that it would be futile to try to make the AI more risk-averse (e.g. sharply decreasing marginal returns), but this argument didn’t engage with how risks for the AI from human detection and possible shutdown, threats by humans or the opportunity to cooperate/trade with humans would increasingly disincentivize such an AI from taking extreme action the more risk-averse it is.
I’ve also heard an argument (in private, and not by anyone working at an AI org or otherwise well-known in the community) that AI could take over personal computers and use them, but distributing computations that way seems extremely impractical for computations that run very deep, so there could be important limits on what an AI could do this way.
That being said, I also haven’t personally engaged deeply with these arguments or read a lot on the topic, so I may have missed where these issues are addressed, but this is in part because I haven’t been impressed by what I have read (among other reasons, like concerns about backfire risks, suffering-focused views and very low probabilities of the typical EA or me in particular making any difference at all).
Have you guys considered rebranding, as the Effective Altruism Foundation did when it became the Center on Long-term Risk, or just updating the organization’s description to better reflect your priorities and key ideas?
I look at 80,000 Hours’ front page, and I see
You have 80,000 hours in your career.
How can you best use them to help solve the world’s most pressing problems?
We’re a nonprofit that does research to answer this question. We provide free advice and support to help you have a greater impact with your career.
But this doesn’t mention anything about longtermism, which seems to be one of the major commitments of 80,000 Hours, one which people coming to 80,000 Hours will often disagree with, and probably the one most responsible for the “bait-and-switch” perception. Possibly also population ethics, although I’m not sure how committed 80,000 Hours is to particular views or to ruling out certain views, or how important this is to 80,000 Hours’ recommendations, anyway. It seemed to have a big impact on the problem quiz (which I really like, by the way!).
I’d imagine rebranding has significant costs, and of course 80,000 Hours still provides significant value to non-longtermist causes and to the EA community as a whole, so I expect a rebrand not to make sense. Even updating the description to refer to longtermism might turn away people who could otherwise benefit from 80,000 Hours.
EDIT: Looks like this was mentioned by NunoSempere.
A single user with a decent amount of karma can unilaterally decide to censor a post and hide it from the front page with a strong downvote. Giving people unilateral and anonymous censorship power like this seems bad.
Thirdly, each of these are (broadly) free market firms, who exist only because they are able to persuade people to continue using their services. It’s always possible that they are systematically mistaken, and that CEA really does understand social network advertising, management consulting, trading and banking better than these customers… but I think our prior should be a little more modest than this. Usually when people want to buy something it is because they want that thing and think it will be useful for them.
I consider this to be a pretty weak argument, so it doesn’t contribute much to my priors, which, although weak (so the particulars of a company matter much more), are probably centered near neutral on net welfare effects (in the short to medium term). I think a large share of the goods people buy and the things they do are harmful to themselves or others, before even considering the loss of income/time involved, or are worse for them than the things they compete with. That’s enough that I wouldn’t have a prior strongly in favour of what profitable companies do being good for us. Here are reasons pushing towards neutral or negative impacts:
A lot of goods are mostly for signaling, especially signaling wealth, which often has negative externalities and, I’d guess, little positive value for the individual: brand-name versions of things, clothing, jewelry, cars.
Many modern ways people spend their time (enabled by profitable companies) have probably made us less active, more indoor-bound, less close with others, and less likely to pursue meaning and meaningful goals, which may conflict with people’s reflective preferences, as well as generally being bad for health, mental health and other measures of wellbeing. Basically a lot of the things we do on our computers and phones.
Many things are stimulating and addictive, and companies are optimizing for want, not welfare. Want and welfare can come apart when we optimize for want. So we get cigarettes, addictive video games, junk food, algorithms optimizing for clicks when we’d be better off stepping away from the internet or doing more substantial things online, and lots of salt, sugar and calories in our foods.
Media companies may optimize for revenue over accurate reporting. This includes outrage, playing to our fears, demonizing and polarization.
Some companies make us want their stuff for fear of missing out or social pressure, so it can be closer to coercion than providing a valuable opportunity.
I’d guess relatively little is spent advertising things that we have good evidence improve our welfare, because most of those things are hard to profit from: basic healthy foods; exercise (there are certainly exercise products and programs that get advertised, but less so plain gym memberships, joining sports leagues or running outside); just spending more time with your friends and family in cheap ways (although travel and amusement parks are advertised); pursuing meaning or meaningful goals; helping others (even charity ads are relatively rare). So advertising seems to push us towards things that are worse for us than the alternatives we’d have gone with. To capitalize on the things that do make us substantially better off, companies may sell us more expensive versions that aren’t (much) better, or accessories that don’t substantially help.
I’d expect a lot of hedonic adaptation for many goods and services, but not for mental health (almost by definition), physical pain or, to a lesser extent, general health and mobility, which are worsened by a lot of the things companies provide, directly or indirectly (by competing with the things that are better for health).
Company valuations don’t usually substantially reflect their externalities, and shorting companies is riskier and more costly than buying and holding shares, so this biases markets towards positively valuing companies even if their overall value for the world is negative.
There are often negative externalities on nonhuman animals in particular, although the overall effects on nonhuman animals may be complicated when you also consider the effects on wild animals.
I do think it’s plausible McKinsey and Goldman have done and do more good than harm for humans in the short term, based on the arguments you give, but I don’t have a strong view either way. It could depend largely on whether raising people’s consumption levels makes them better off overall (and how much) in the places where people are most affected by these companies. Measures of well-being do seem to positively correlate with income/wealth/consumption at the individual level, and I’d guess also at the aggregate level for developing countries, but I’d guess not for developed countries, or at best weakly so. There are negative externalities for increasing an individual’s income on others’ life satisfaction, although it’s possible a large share is due to rescaling, not actually thinking your life is worse absolutely than otherwise. See:
Haushofer, J., Reisinger, J., & Shapiro, J. (2019). Is your gain my pain? Effects of relative income and inequality on psychological well-being.
Based on GiveDirectly in Kenya. They had multiple measures of wellbeing, but negative effects were only observed for life satisfaction for non-recipient households of cash transfers in the same village. See Table A5.
This table from Veenhoven, R. (2019). The Origins of Happiness: The Science of Well-Being over the Life Course, reproduced in this post.
This graph, reproduced in this post.
Other writing on the Easterlin Paradox.
Some companies may also contribute to relative inequality or even counterfactually make the median or poor person absolutely poorer through their political activities.
The categories of things I’m optimistic about for human welfare in the short to medium term are:
Things that save us time, so we can spend more time on things that actually make us better off.
Things that improve or protect our health (including mental health).
Things that make us (feel) safer/more secure (physically, financially, etc.).
Things that make us more confident, but without substantially net negative externalities (negative externalities may come from positional goods, costly signaling, peer pressure).
Things that help us make better decisions, without important negative effects.
I’m neutral to optimistic about these (possibly neutral because they just replace cheaper versions of themselves that would be just as good):
In-person activities with friends/family.
Things for hobbies or projects.
Restaurants.
I’m about neutral and pretty uncertain about screen-based entertainment (TV, movies, video games), and recreational substances that aren’t extremely addictive or harmful (alcohol, marijuana).
I’m pessimistic about:
Social media.
Status-signaling goods/positional goods/luxuries.
Processed foods.
Cigarettes.
I also wish all the EA Funds and Open Phil would do this/make their numbers more accessible.
This is my first year donating, and I was earning to give until now. I welcome feedback.
My general plan is to support animal welfare, specifically intervention and (sub-)cause prioritization research, international movement growth and the current best-looking interventions, filtered through the judgment of full-time researchers/grantmakers.
I donated $7K (Canadian) to the EA Animal Welfare Fund about a month ago. I think they’re the best-positioned to identify otherwise neglected animal welfare funding opportunities when evidence is relatively scarce, given grantmakers working at several different animal protection orgs, and Lewis Bollard’s years of experience in grantmaking.
I’m looking at donating another $30-40K (Canadian) to be split primarily between the following groups, roughly in decreasing order of proportion of funding, although I haven’t decided on the exact amounts:
1. ACE’s Recommended Charity Fund. I think the EAA community’s research supporting corporate campaigns, and ACE’s research specifically, have improved considerably over the past while, so I’m pretty confident in their choice of charities working on these. I’m also happy to see expansion to countries previously neglected by EAA funding, and support for further research.
2. Rethink Priorities. I’ve been consistently impressed by their research for animals so far, and I’m keen to see further research, especially on ballot initiatives, for which I’m pretty optimistic. Also, it looks like they’ve got a lot of room for funding, and it would be pretty cool if they hired Peter Hurford full-time. Btw, they have an AMA going on now.
3. Charity Entrepreneurship. Also very impressed by their research for animals so far, both exploratory and in-depth, including a cluster-thinking approach. I hope to see more of it, and any new animal welfare charities they might start.
4. Possibly the EA Animal Welfare Fund again.
5. RC Forward. Both for my own donations and as a public good for EAs in Canada, since they allow Canadians to get tax credits for donations to EA charities. More here and here.
It’s worth noting that Rethink Priorities and Charity Entrepreneurship have each received funding from Open Philanthropy Project (Farm Animal Welfare) and EA Funds recently; RP from the Animal Welfare Fund and CE most recently from the Meta Fund (and previously from the Animal Welfare Fund).
I have a few other research orgs in mind, and I might also donate to Sentience Politics, for their campaign to support the referendum to end factory farming in Switzerland (some discussion here on Facebook). I’m also wondering about Veganuary, but I’m not in a good position to judge their counterfactual impact from the numbers they present.
(Edited.)
This seems bordering on strawmanning. We should try to steelman their suggestions. It seems fine that some may be incompatible, or that implementing all of them would make us indistinguishable from the left (which I wouldn’t expect to happen anyway; we’d probably still care far more about impact than the left does, on average), since we wouldn’t necessarily implement them all, or all in the same places, and there can be other ways to prevent issues.
Furthermore, overly focusing on specific suggestions can derail conversations too much into the details of those suggestions and issues with them over the problems in EA highlighted in the post. It can also discourage others from generating and exploring other proposals. It may be better to separate these discussions, and this one seems the more natural one to start with. This is similar to early broad cause area research for a cause (like 80,000 Hours profiles), which can then be followed by narrow intervention (and crucial consideration) research in various directions.
As a more specific example where I think your response borders on a strawman: in hiring non-EA experts and democratizing orgs, non-EAs won’t necessarily make up most of the org, and they are joining an org with a specific mission and set of values, so will often self-select for at least some degree of alignment, and may also be explicitly filtered during hiring for some degree of alignment. This org can remain an EA org. There is a risk that it won’t, but there are ways to mitigate such risks, e.g. requiring supermajorities for certain things, limiting the number of non-EAs, ensuring the non-EAs aren’t disproportionately aligned in any particular non-EA directions by hiring them to be ideologically diverse, retaining (or increasing) the power of a fairly EA-aligned board over the org so that it can step in if it strays too far from EA. There are also other ways to involve non-EA experts so that they wouldn’t get voting rights, e.g. as contractors or collaborators.
Or, indeed, some orgs should be democratized and others should hire more non-EA experts, but none need do both.
There are other appeals of neutrality (about adding “positive” lives or “goods”) besides just avoiding the RC:
It can avoid the Very Repugnant Conclusion, although some of your proposed solutions like critical levels would work, too.
Adding people can come at a cost to existing or otherwise necessary people. See here and here. I pretty much have the opposite intuition from you on the extinction vs party example, but I think the use of a party may confound people with intuitions against frivolousness or hedonism, and it is relatively low stakes for existing people. We can imagine cases where what’s at stake for existing people seems much more serious: dreams or important life goals, suffering, freedom, spending time with loved ones, their lives (including replacement arguments, and the logic of the larder), and so on. The views you defend here allow all of these to be outweighed by the addition of new people. Furthermore, while there may still be strong instrumental reasons for respecting reproductive freedom regardless (which you’ve discussed elsewhere), neutrality seems to give a stronger principled reason, since the welfare of a new child wouldn’t make up for a parent’s overall loss in welfare on its own under any circumstance. Getting the right answer for more principled reasons is more satisfying and on firmer ground.
In intrapersonal tradeoffs on theories where preferences matter terminally, it fits liberal, pluralistic and anti-paternalistic intuitions better. Under preference views that allow the addition of new contingent preferences to outweigh the lesser satisfaction of necessary preferences (and so violate neutrality with respect to adding satisfied preferences, but in a specific way), and ignoring indirect and instrumental reasons (which of course matter substantially in practice), it would in principle be good for the individual for you to violate any or all of their own existing preferences in order to induce/create and satisfy sufficiently strong new preferences in them. Preference-affecting views — basically person-affecting views, but treating individual preferences like persons* — can avoid this problem, and some can avoid “symmetric” problems at the same time, e.g. violating preferences to eliminate or prevent frustrated preferences (at least in the cases where it seems worst to do so).
* although there are more fundamental distinctions we could make if we wanted, e.g. between intrapersonal and interpersonal tradeoffs.
Based on their dashboard, EA charities got ~$200,000 of the first $250,000 in matching funds.
The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003, which is 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective.
700/100=7, not 0.7.
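Spelled out with the quoted numbers (the exact ratio is $0.003/0.33 \approx 1/110$, slightly less than $1/100$):

$$700 \times \frac{0.003}{0.33} \approx \frac{700}{110} \approx 6.4,$$

so even under the old cortical neuron count proxy, chicken interventions would come out roughly 6–7x as effective as human interventions, not 0.7x.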
I think the arguments that consuming most animal products is bad at all in expectation, based on short-term effects, are weak, given the very uncertain effects on wild animals. Abstaining from some may mean replacing support for an active moral atrocity with a moral atrocity by omission (wild animal suffering).
The arguments are strongest (even if not very persuasive, as you note) against consuming chicken meat and eggs, farmed insects and maybe herbivorous farmed aquatic animals (other than bivalves), because of the higher density of direct suffering relative to externalities compared to other animal products.
So, if you’re looking for something relatively flexible and without having to experiment with your health (much), maybe just omit these from your diet?
It’s not clear the loss of human life dominates the welfare effects in the short term, depending on how much moral weight you assign to nonhuman animals and how their lives are affected by continued human presence and activity. It seems like human extinction would be good for farmed animals (dominated by chickens, fish and invertebrates), and would have unclear sign for wild animals (although my own best guess is that it would be bad for wild animals).
Of course, if you take a view that’s totally neutral about moral patients who don’t yet exist, then few of the nonhuman animals that would be affected are alive today, and what happens to the rest wouldn’t matter in itself.
claims like 99% of meat being factory farmed are just intuitively false to anyone that has spent any significant amount of time in the countryside and farms outside of the USA
I don’t think this is the claim typically being made. Rather, X% of farmed animals, as individuals, not by weight, are factory farmed. The vast majority of farmed land vertebrates are chickens, and the vast majority of them are factory farmed. The vast majority of farmed vertebrates (land or aquatic) are farmed fish, and the vast majority of them are factory farmed. Factory farms produce disproportionate numbers of animals relative to the number of farms, and countryside farms are badly unrepresentative of the average animal’s life. To be fair, this is a subtle issue, and we shouldn’t expect people to have a good sense of such numbers just through experience.
For example, from Sentience Institute:
We estimate that over 90% of farmed animals globally are living in factory farms at present. This includes an estimated 74% of farmed land animals (vertebrates only) and virtually all farmed fish.[1] However, there is substantial uncertainty in these figures given the land animal estimates’ heavy reliance on information from Worldwatch Institute with unclear methodology[2] and limited data on fish farming.
It’s worth pointing out that ACE’s estimates/models (mostly weighted factor models, including ACE’s versions of Scale-Tractability-Neglectedness, or STN) are often already pretty close to BOTECs, but not quite. I’d guess the smallest fixes to make them more scope-sensitive are to just turn them into BOTECs, or whatever parts of them you can into BOTECs[1], whenever it’s not too much extra work. BOTECs and other quantitative models force you to pick factors, and to scale and combine them in ways that are more scope-sensitive.
For the cost-effectiveness criterion, ACE makes judgements about the quality of charities’ achievements with Achievement Quality Scores. For corporate outreach and producer outreach, ACE already scores factors from which direct average-impact BOTECs could pretty easily be done with some small changes, which I’d recommend:
Score “Scale (1-7)” = “How many locations and animals are estimated to be affected by the commitments/campaign, if successful?” in terms of the number of animals (or animal life-years) per year of counterfactual impact instead of 1-7.
Ideally, “Impact on animals (1-7)” should be scored quantitatively using Welfare Footprint Project’s approach (some rougher estimates here and here) instead of 1-7, but this is a lower priority than other changes. Welfare improvements per animal or per year of animal life can probably vary much more than 7 times, though, and can end up negative instead, so I’d probably at least adjust the range to be symmetric around 0 and let researchers select 0 or values very close to it.
The BOTEC is then just the product of “Impact on animals (1-7)” (the average[2] welfare improvement with successful implementation), “Scale”, “Likelihood of implementation (%)”, expected welfare range and the number of years of counterfactual impact (until similar welfare improvements for the animals would have happened anyway and made these redundant). Similar BOTECs could be done for the direct impacts of other interventions.
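As a minimal sketch of that product, with placeholder inputs I made up (not ACE’s or my estimates), and assuming the welfare range is expressed relative to humans:

```python
# Sketch of the Achievement Quality BOTEC described above.
# Every input below is a placeholder, not an estimate.

welfare_improvement  = 0.2   # "Impact on animals": avg improvement per animal
                             # life-year, as a fraction of the species' welfare range
scale                = 5e6   # "Scale": animal life-years affected per year if successful
p_implementation     = 0.6   # "Likelihood of implementation"
welfare_range        = 0.33  # expected welfare range relative to humans
counterfactual_years = 5     # years until similar improvements would happen anyway

impact = (welfare_improvement * scale * p_implementation
          * welfare_range * counterfactual_years)
print(f"{impact:.3g} human-equivalent welfare-years")  # ~9.9e+05 with these inputs
```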
For groups aiming to impact decision-making or funding in the near term with research like Faunalytics, ACE could also highlight some of the most important decisions that have been (or are not too unlikely to be) informed by their research so that we can independently judge how they compare to corporate outreach or other interventions. ACE could also use RP’s model or something similar to get impact BOTECs to make comparisons with more direct work.
For other charities, ACE could also think about how to turn the models into BOTECs or quantitative models of important outcomes. These can be intermediate outcomes or outputs that aren’t necessarily comparable across all interventions, if impact for animals is too speculative, but the potential upside is high enough and the potential downside small enough.[1]
For the Impact Potential criterion, ACE uses STN a lot and cites the 80,000 Hours article that explains how to get a BOTEC by interpreting and scoring the factors in specific ways. ACE could just follow that procedure, and then the STN estimates would be BOTECs.
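The reason that procedure yields a BOTEC is that the three factors, given the right units, telescope into marginal cost-effectiveness. A rough sketch with placeholder numbers (the unit choices follow 80,000 Hours’ stated interpretation as I understand it):

```python
# STN as a telescoping product: (good / fraction solved) *
# (fraction solved / fractional increase in resources) *
# (fractional increase in resources / extra dollar) = good per extra dollar.
# All numbers are placeholders.

good_if_solved  = 1e6             # total good from fully solving the problem
scale           = good_if_solved  # good per unit fraction of the problem solved
tractability    = 0.3             # fraction solved per fractional increase in resources
current_funding = 1e7             # dollars currently spent on the problem per year
neglectedness   = 1.0 / current_funding  # fractional increase per extra dollar

good_per_dollar = scale * tractability * neglectedness
print(good_per_dollar)  # 0.03 good-units per extra dollar with these inputs
```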
That being said, STN is really easy to misapply in general (e.g. various critiques here), and I’d be careful about relying on it even if you were to follow 80,000 Hours’ procedure to get BOTECs. For example, only a tiny share of a huge but relatively intractable problem, like wild animal welfare/suffering, may be at all tractable, so it’s easy to overestimate the combination of Scale and Tractability in those cases. See also Joey’s Why we look at the limiting factor instead of the problem scale and Saulius’s Why I No Longer Prioritize Wild Animal Welfare. STN can be useful for guiding what to investigate further and filtering charities for review, but I’d probably go for BOTECs done other ways, like the above to replace Achievement Quality Scores, with more detailed theories of change.
[1] For example, you could do a BOTEC of the number of additional engagement-weighted animal advocates, which could be part of a BOTEC for impact on animals; but going from engagement-weighted animal advocates to animals could be too speculative, so you stop at engagement-weighted animal advocates. This could be refined further, weighting by country scores.
[2] Per animal or per animal life-year, to match Scale.
[3] It seems ACE did so for the Scale factor, but gave no specific quantitative interpretation for the others.
I think there is however no record of the actual IQ of these people.
FWIW, I think IQ isn’t what we actually care about here; it’s the quality, cleverness and originality of their work and insights. A high IQ that produces nothing of value won’t get much reverence, and rightfully so. People aren’t usually referring to IQ when they call someone intelligent, even if IQ is a measure of intelligence that correlates with our informal usage of the word.
Conditional on animals mattering, how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?
I think comparing animal suffering to extra human life is easily subject to bias if you do it directly. I think it would be better to compare nonhuman animal suffering and human suffering first, and then human suffering and human life. How miserable are farmed chickens compared to the human misery caused by chronic depression or chronic pain, and how do you compare saving a year of good human life to curing chronic depression or pain in humans?
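In other words, the hard ratio factors into two easier ones (the numbers below are purely illustrative, not estimates):

$$\frac{\text{chicken-year prevented}}{\text{human life-year}} = \frac{\text{chicken-year prevented}}{\text{human pain-year cured}} \times \frac{\text{human pain-year cured}}{\text{human life-year}}, \quad \text{e.g. } 0.5 \times 0.8 = 0.4.$$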
I actually think chickens are among the worst off animals in existence, similar to the worst off humans. Many are in chronic pain from being lame, breathing toxic air, stressed from high stocking densities and deprived of natural behaviours. About 0.4 chickens suffer to death per American per year (not adjusted for elasticities).
In a study of broiler (meat) chickens from the UK:
At a mean age of 40 days, over 27.6% of birds in our study showed poor locomotion and 3.3% were almost unable to walk.
Given a choice between food laced with painkillers and food without, lame chickens are more likely than healthy chickens to choose the food with painkillers.[1] Lame chickens given painkillers also walk twice as fast as they would otherwise, but still slower than healthy chickens.[2]
See Charity Entrepreneurship’s report on welfare conditions.
Since I get much more than $0.43 of enjoyment out of a year’s worth of eating animal products, veganism looks like a really bad altruistic tradeoff to me.
I always find this kind of comparison weird. This is primarily about the instrumental value of your enjoyment, right? Otherwise, you should compare your going vegan directly to the suffering of animals from not going vegan, which, on a standard diet, should include about 0.3 chickens suffering to death per year (adjusting for elasticity), plus however many more factory farmed animals. I wouldn’t torture a chicken to death every 3 years and keep several more in factory farming conditions for the inherent value of my personal enjoyment, even if I thought I’d enjoy it as much as the average person enjoys 3 years of eating meat, and there were no risks. (I don’t think you would, either.)
Thanks for writing this!
I don’t think the theorem provides support for total utilitarianism, specifically, unless you add extra assumptions about how to deal with populations of different sizes or different populations generally. Average utilitarianism is still consistent with it, for example. Furthermore, if you don’t count the interests of people who exist until after they exist or unless they come to exist, it probably won’t look like total utilitarianism, although it gets more complicated.
You might be interested in Teruji Thomas’ paper “The Asymmetry, Uncertainty, and the Long Term” (EA Forum post here), which proves a similar result from slightly different premises, but is compatible with all of 1) ex post prioritarianism, 2) mere addition, 3) the procreation asymmetry, 4) avoiding the repugnant conclusion and 5) avoiding antinatalism, all at the same time, because it sacrifices the independence of irrelevant alternatives (the claim that how you rank choices should not depend on what choices are available to you; not the vNM axiom). Thomas proposes beatpath voting to choose actions. Christopher Meacham’s “Person-affecting views and saturating counterpart relations” also provides an additive calculus which “solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox” and satisfies the asymmetry, also by giving up the independence of irrelevant alternatives, but hasn’t, as far as I know, been extended to deal with uncertainty.
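For intuition about what beatpath voting does, here’s a minimal sketch of the standard Schulze beatpath computation, which is the rule I understand Thomas to be drawing on; see the paper for his exact formulation over options and choice sets. Here `d[a][b]` is assumed to be the margin by which option `a` beats option `b` in a pairwise comparison.

```python
# Minimal Schulze-style beatpath winner computation (illustrative sketch).

def beatpath_winners(options, d):
    # p[a][b]: strength of the strongest path from a to b, where a path's
    # strength is its weakest link (a widest-path / Floyd-Warshall variant).
    p = {a: {b: (d[a][b] if d[a][b] > d[b][a] else 0)
             for b in options if b != a}
         for a in options}
    for k in options:
        for a in options:
            for b in options:
                if len({a, b, k}) == 3:
                    p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))
    # a is a winner if no other option has a stronger beatpath to it
    return [a for a in options
            if all(p[a][b] >= p[b][a] for b in options if b != a)]

# Example: a cycle A > B > C > A; the weakest link in the cycle (B over C)
# gets overridden, so C wins.
options = ["A", "B", "C"]
d = {"A": {"B": 3, "C": 0},
     "B": {"A": 0, "C": 1},
     "C": {"A": 2, "B": 0}}
print(beatpath_winners(options, d))  # ['C']
```

Cyclic rankings like the one in the example are exactly what can arise once the independence of irrelevant alternatives is given up, which, as I understand it, is why a rule like this is needed.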
I’ve also written about ex ante prioritarianism in the comments on the EA Forum post about Thomas’ paper, and in my own post here (with useful feedback in the comments).
What are your main takeaways and ways forward from the pretty pessimistic report on cultivated meat Open Phil commissioned?