I’m a student of moral sciences at the University of Ghent. I’ve also started an EA group in Ghent.
Thank you! The cropping in Photoshop only takes five minutes at most, so it isn’t a big deal. All of the images are made with Creative Commons images, except for “moral anti-realism” (which I took from Lukas’ own page, so I assume he has the rights) and “rwas library”, which I found on a bunch of websites with no indication of its status (if it does get a copyright strike I’ll photoshop a similar-looking image).
Btw, could you add an “EA Forum (meta)” tag to this post? I can’t add tags at the moment.
I love that sequence, but it’s specifically about motivation and how to cultivate it. An “Introduction to EA” sequence would ideally focus on introducing some of the key concepts and organizations. Something like Doing Good Better, but with a little more focus on the movement.
No problem! As a non-native English speaker I found this an extremely difficult post to write, which is why I leaned so heavily on images. If you (or anyone) have any suggestions for how I could reword this post to make it clearer, please let me know.
EDIT: I’ve changed the term “standard utilitarianism” to “moment utilitarianism”; I hope this clears up some of the confusion.
I think so too, because you can’t really talk about ethics without a timeframe. I wasn’t trying to argue that people don’t use timeframes, but rather that people automatically use total timeline utilitarianism without realizing that other options are even possible. This was what I was trying to get at by saying:
Usually when people talk about different types of utilitarianism they automatically presuppose “total timeline utilitarianism”. In fact, the current debate between total and average utilitarianism is actually a debate between “total total utilitarianism” and “total average utilitarianism”.
Please, let me know about any source discussing this.
If by “this” you mean timeline utilitarianism, then there isn’t one, unfortunately (I haven’t published this idea anywhere else yet). Once I’ve finished university I hope some EA institution will hire me to do research into descriptive population ethics, so hopefully I can provide you with some data on our intuitions about timelines in a couple of years.
I suspect that people more concerned with the quality of life will tend to favor average timeline utilitarianism, and that the people in this community who are so focused on x-risk and life-extension might be a minority with their stronger preference for the quantity of life (anti-deathism is the natural consequence of being a strong total timeline utilitarian). If you want to read something similar to this, you could always check out the wider literature surrounding population ethics in general.
Yes, (total) total utilitarianism aggregates across both time and space, but you can aggregate across time and space in many different ways. E.g. median total utilitarianism also spans both time and space, but it aggregates very differently.
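To make the difference concrete, here is a toy sketch with made-up numbers (a two-person, three-tick world of my own invention, not from the original post). Both theories aggregate across space and time; they differ only in the function applied along the timeline:

```python
import statistics

# Toy utility matrix (made-up numbers): rows = people, columns = moments in time.
utilities = [
    [5, 5, 5],  # person A
    [1, 9, 2],  # person B
]

# Aggregate across space first: total utility of the population at each tick.
per_tick_totals = [sum(col) for col in zip(*utilities)]  # [6, 14, 7]

# "Total total" utilitarianism: sum those per-tick totals across the timeline.
total_total = sum(per_tick_totals)                  # 27

# "Median total" utilitarianism: take the median of the same per-tick totals.
median_total = statistics.median(per_tick_totals)   # 7
```

Both numbers are computed from exactly the same time-and-space data; only the timeline-aggregation step differs.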
I made two visual guides that could be used to improve online discussions. These could be dropped into any conversation to (hopefully) make the discussion more productive.
The first is an update of Graham’s hierarchy of disagreement:
I improved the layout of the old image and added a top layer for steelmanning. You can find my reasoning here and a link to the PDF file of the image here.
The second is a hierarchy of evidence:
I added a bottom layer for personal opinion. You can find the full image and PDF file here.
Lastly, I wanted to share the Toulmin method of argumentation, which is an excellent guide for a general pragmatic approach to arguments.
[Meta-note for the mods: could you please make it easier to put images into comments? This took a lot of tries.]
The reason I find the definition not very useful is that it can be interpreted in so many different ways. The aim of this post was to show the four main ways you could interpret it. When I read the definition my first interpretation was “hinge broadness”, while I suspect your interpretation was “hinge reduction”. I’m not saying that hinge broadness is the ‘correct’ definition of hingeyness, because there is no ‘correct’ definition of hingeyness until a community of language users has made it a convention. There is no convention yet, so I’m purposefully splitting the concept into more quantifiable chunks in the hope that we can avoid the confusion that comes from multiple people using the same terms for different concepts. Since I failed to convey this I will slightly edit this post to clear it up for the next confused reader. I added one sentence and tweaked another sentence and a subtitle. The old version of the post can be found on LessWrong.
That’s a very useful link, thank you.
Also mod-team, this comment isn’t visible underneath my post in any of my browsers. Is there any way to fix that?
EDIT: Thank you mod-team!
It just occurred to me that some people may find the shift in range also important for hingeyness. I’ll illustrate what I mean with a new image:
(I can’t post images in comments so here is a link to the image I will use to illustrate this point)
Here the “range of possible utility in endings” for tick 1 (the first 10) is [0–10], and the “range of possible utility in endings” for the first 0 (tick 2) is also [0–10], which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.
But we don’t care about just the endings; we care about the rest of the journey too. The width of the “range of the total amount of utility you could potentially experience over all branches (not just the endings)” can shrink or stay the same, but the range itself can shift. For example, the lowest possible total utility the 10 on tick 1 can experience is 10->0->0 = 10 utility, and the highest is 10->0->10 = 20 utility. The difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility.
The probability has changed: ending with a weird number like 19 is impossible for the ‘0 on tick 2’. The probability of a good ending has also become much more favorable (a 50% chance to end with a 10, instead of the 25% it was before). Probability is important for precipiceness.
But while the width of the range stayed the same, the range itself has shifted downwards from [10–20] to [0–10]. Maybe this is also an important factor in what some people call hingeyness? Maybe call that ‘hinge shift’?
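For anyone who wants to play with these numbers, here is a minimal sketch of a tree that reproduces them. The branch values are my own reconstruction of the linked image (chosen so the ending range [0–10] and the path totals 10, 19 and 20 come out as described), not the actual figure:

```python
# Toy utility tree: each node is (utility, children). The branch values are
# a reconstruction chosen to match the numbers discussed in this comment.
def leaf(u):
    return (u, [])

tree = (10, [                      # the "10 on tick 1"
    (0, [leaf(0), leaf(10)]),      # the "0 on tick 2"
    (9, [leaf(0), leaf(1)]),
])

def endings(node):
    """Utility values of the reachable endings (leaves)."""
    u, children = node
    return [u] if not children else [e for c in children for e in endings(c)]

def path_totals(node):
    """Total utility summed along every root-to-leaf path."""
    u, children = node
    return [u] if not children else [u + t for c in children for t in path_totals(c)]

zero_on_tick2 = tree[1][0]

# The ending range is [0, 10] at both nodes...
print(min(endings(tree)), max(endings(tree)))                            # 0 10
print(min(endings(zero_on_tick2)), max(endings(zero_on_tick2)))          # 0 10

# ...but the range of path totals keeps its width of 10 while shifting down:
print(min(path_totals(tree)), max(path_totals(tree)))                    # 10 20
print(min(path_totals(zero_on_tick2)), max(path_totals(zero_on_tick2)))  # 0 10
```

A total of 19 (via 10->9->0) is reachable from tick 1 but not from the 0 on tick 2, and the chance of ending with a 10 rises from 25% (1 of 4 paths) to 50% (1 of 2), matching the probabilities above.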
This will affect the probability that you end up in certain futures and not others. I used the word precipiceness in my post to refer to high-risk high-reward probability distributions. Maybe it’s also important to have a word for a time in which the probability that we will generate low amounts of utility in the future is increasing. We call this “increase in x-risk” now, because going extinct is usually a good way to ensure you will generate low amounts of utility. But as I showed in my post, you can have an awesome extinction and a horrible long existence. Maybe I shouldn’t be trying to attach words to all the different variants of probability distributions and should just draw them instead.
To recap, “the range of total amount of utility you can potentially generate” aka “hinge broadness” can:
1) Shrink by a certain amount (aka hinge reduction). This can be because the maximum amount of utility you can potentially generate is decreasing (I’ll call this “top-reduction”) or because the minimum amount of utility you can potentially generate is increasing (I’ll call this “bottom-reduction”). Top-reduction is bad, bottom-reduction is good.
2) Shift upward or downward in utility by a certain amount (aka hinge shift). Upward shift is good, downward shift is bad.
Some other examples of things you could coordinate with this website:
Leaving the current social media giants en masse for a more privacy-conscious/bubble-breaking/fact-checking alternative. Everyone hates Facebook/Twitter/TikTok etc., and yet everyone uses them because everyone uses them. By coordinating the switch you can effectively take away their biggest driving force: that everyone uses them.
Switching to a different language. Many of the spelling rules are dumb, yet we use them because we are expected to use them. If we all collectively switched to simplified spelling rules, no one would need to keep those unnecessarily complex rules in mind.
Organizing boycotts of unethical companies. Your single action will not affect the supply chain, which makes people unmotivated to act. This website would change that.
Switching from cars to other modes of transportation.
Doing illegal things in very large groups so you can’t be arrested (e.g. not wearing a burqa).
Starting a local project (e.g. an exercise group).
Redefining/reclaiming a word.
Organizing a strike against your exploitative employer.
Wearing no/less clothes during hot summer months.
Switching to a different currency.
Having one person pick up groceries for everyone in the local community instead of everyone driving separately.
Organizing/attending an event.
Starting a crowdsourced project (e.g. a wiki).
In short: the list of things I only do because everyone else does them is gigantic, but that list is tiny compared to all the things I would do if more people started doing them. I could keep going, but I hope this gives some idea as to why this site might be useful.
This post is a crosspost from LessWrong. Below I leave a comment by Lukas Gloor that explains the implications of that post far better than I did:
This type of procedure may look inelegant for folks who expect population ethics to have an objectively correct solution. However, I think it’s confused to expect there to be such an objective solution. In my view at least, this makes the procedure described in the original post here look pretty attractive as a way to move forward.
Because it includes considerations very similar to those presented in the original post here, I’ll try (for those who are curious enough to bear with me) to describe the framework I’ve been using to think about population ethics:
Ethical value is subjective in the sense that if someone’s life goal is to strive toward state x, it’s no one’s business to tell them that they should focus on y instead. (There may be exceptions, e.g., in case someone’s life goals are the result of brainwashing.)
For decisions that do not involve the creation of new sentient beings, preference utilitarianism or “bare minimum contractualism” seem like satisfying frameworks. Preference utilitarians are ambitiously cooperative/altruistic and scale back any other possible life goals in favor of maximizing preference satisfaction for everyone, whereas “bare-minimum contractualists” obey principles like “do no harm” while still mostly focusing on their own life goals. A benevolent AI should follow preference utilitarianism, whereas individual people are free to settle anywhere on the spectrum between full preference utilitarianism and bare-minimum contractualism. (Bernard Williams’s famous objection to utilitarianism is that it undermines a person’s “integrity” by alienating them from their own life goals. By focusing all their actions on doing what’s best from everyone’s point of view, people don’t get to do anything that’s good for themselves. This seems okay if one consciously chooses altruism as a way of life, but it seems overly demanding as an all-encompassing morality.)
When it comes to questions that affect the creation of new beings, the principles behind preference utilitarianism or bare-minimum contractualism fail to constrain all of the possibility space. In other words: population ethics is underdetermined.
That said, it’s not the case that “anything goes.” Just because present populations have all the power doesn’t mean that it’s morally permissible to ignore any other-regarding considerations about the well-being of possible future people. A bare-minimum version of population ethics could be conceptualized as a set of appeals or principles by which newly created beings can hold accountable their creators. This could include principles such as:
All else equal, it seems objectionable to create minds that lament their existence.
All else equal, it seems objectionable to create minds and place them in situations where their interests are only somewhat fulfilled, if one could have easily provided them with better circumstances.
All else equal, it seems objectionable to create minds destined to constant misery, yet with a strict preference for existence over non-existence.
(While the first principle is about which minds to create, the second two principles apply to how to create new minds.)
Is it ever objectionable to fail to create minds – for instance, in cases where they’d have a strong interest in their existence?
This type of principle would go beyond bare-minimum population ethics. It would be demanding to follow in the sense that it doesn’t just tell us what not to do, but also gives us something to optimize (the creation of new happy people) that would take up all our caring capacity.
Just because we care about fulfilling actual people’s life goals doesn’t mean that we care about creating new people with satisfied life goals. These two things are different. Total utilitarianism is a plausible or defensible version of a “full-scope” population ethical theory, but it’s not a theory that everyone will agree with. Alternatives like average utilitarianism or negative utilitarianism are on equal footing. (As are non-utilitarian approaches to population ethics that say that the moral value of future civilization is some complex function that doesn’t scale linearly with increased population size.)
So what should we make of moral theories such as total utilitarianism, average utilitarianism or negative utilitarianism? The way I think of them, they are possible morally-inspired personal preferences, rather than personal preferences inspired by the correct all-encompassing morality. In other words, a total/average/negative utilitarian is someone who holds strong moral views related to the creation of new people, views that go beyond the bare-minimum principles discussed above. Those views are defensible in the sense that we can see where such people’s inspiration comes from, but they are not objectively true in the sense that those intuitions will appeal in the same way to everyone.
How should people with different population-ethical preferences approach disagreement?
One pretty natural and straightforward approach would be the proposal in the original post here.
Ironically, this would amount to “solving” population ethics in a way that’s very similar to how common sense would address it. Here’s how I’d imagine non-philosophers approaching population ethics:
Parents are obligated to provide a very high standard of care for their children (bare-minimum principle).
People are free to decide against becoming parents (principle inspired by personal morality).
Parents are free to want to have as many children as possible (principle inspired by personal morality), as long as the children are happy in expectation (bare-minimum principle).
People are free to try to influence other people’s stances and parenting choices (principle inspired by personal morality), as long as they remain within the boundaries of what is acceptable in a civil society (bare-minimum principle).
For decisions that are made collectively, we’ll probably want some type of democratic compromise.
I get the impression that a lot of effective altruists have negative associations with moral theories that leave things underspecified. But think about what it would imply if nothing were underspecified: as Bernard Williams has noted, if the true morality left nothing underspecified, then morally-inclined people would have no freedom to choose what to live for. I no longer think it’s possible or even desirable to find such an all-encompassing morality.
One may object that the picture I’m painting cheapens the motivation behind some people’s strongly held population-ethical convictions. The objection could be summarized this way: “Total utilitarians aren’t just people who self-orientedly like there to be a lot of happiness in the future! Instead, they want there to be a lot of happiness in the future because that’s what they think makes up the most good.”
I think this objection has two components. The first component is inspired by a belief in moral realism, and to that, I’d reply that moral realism is false. The second component of the objection is an important intuition that I sympathize with. I think this intuition can still be accommodated in my framework. This works as follows: What I labelled “principle inspired by personal morality” wasn’t a euphemism for “some random thing people do to feel good about themselves.” People’s personal moral principles can be super serious and inspired by the utmost desire to do what’s good for others. It’s just important to internalize that there isn’t just one single way to do good for others. There are multiple flavors of doing good.