Ben_West
I’m Ben – I’ve been earning to give as a software developer for the past few years, and recently started a company so I can hopefully donate even more! Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://www.centreforeffectivealtruism.org/careers
Thanks Tom and everyone else – it looks great!
There are surveys about this – for example, U.S. News & World Report’s annual survey of dietitians. Vegan diets are consistently rated as much more healthful than meat-heavy diets like Paleo, and presumably Paleo in turn is better than “meat + supplements”.
So transitively, it seems like experts do perceive a relevant disanalogy.
Thanks Larks!
We create software which calculates the quality of care that physicians provide, so that physicians who provide better care get paid more. More technically, we make it simple for physicians to participate in programs like PQRS. This is our website.
Thank you Michelle for posting this, and to the Giving What We Can staff for being willing to revise their beliefs in such an open and thoughtful manner.
I support the change – my girlfriend and I are starting an EA Meetup group. We were originally going to make it a GWWC chapter but decided against that once we learned that GWWC isn’t cause-neutral. So that’s one behavior change which would clearly come out of the name change.
My old company required charity groups associated with the company to help local charities. It wasn’t “blowback” but rather just bureaucratic stonewalling that led me to give up (admittedly, I didn’t try super hard).
Problems and Solutions in Infinite Ethics
Calling this “critical level utilitarianism” opens you to concerns raised by Ryan (which I share) and doesn’t seem to buy you anything.
Just say that $300 per year is the point at which life is worth living (i.e. that’s the point at which utility is zero). Then you don’t run into weird crap like “someone making $200 per year has a life that’s worth living for them, but makes society a worse place.”
(I would call this point the “neutral level” if you’re looking for terminology.)
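To make the worry concrete (the numbers here are purely illustrative): with a critical level c corresponding to $300 per year, critical-level utilitarianism evaluates a population by

$$V_{\mathrm{CL}} = \sum_i (u_i - c),$$

so someone with 0 < u_i < c (e.g. the $200-per-year life) has a life worth living but makes a negative contribution to V. If you instead rescale so that zero utility sits at the neutral level, u_i' = u_i - c, the ranking given by summing the u_i' is unchanged, but a life now adds value exactly when it is worth living.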
Updated, thanks!
Thanks for the feedback. A couple of thoughts:

1) I actually agree with you that most people shouldn’t be worried about this (hence my disclaimer that this is not for a general audience). But that doesn’t mean no one should care about it.

2) Whether we are concerned about an infinite amount of time or an infinite amount of space doesn’t really seem relevant to me at a mathematical level, which is why I grouped them together.

3) As per (1), it might not be a good use of your time to worry about this. But if it is, I would encourage you to read the paper of Nick Bostrom’s that I linked above, since I think “just look in a local region” is too flippant. E.g. there may be an infinite number of Everett branches we should care about, even if we restrict our attention to Earth.
I think this is a cool idea. Owen and others pointed out that this is overly simplistic, but I think it can serve as a useful prod.
One thing which I think would be interesting is to tie this to specific policy choices. I’m not sure how top-rated charities affect population size, but if there’s a trade-off between quantity and quality that’s a useful thing to think about.
Thanks Brian – insightful as always.
It might be the case that life will end after time T. But that’s different from saying it doesn’t matter whether life ends after time T, which is what a truncated utility function would say.
(But of course see theorem 4.8.1 above)
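By a truncated utility function I mean something like

$$W_T(u_1, u_2, \dots) = \sum_{t=1}^{T} u_t,$$

which doesn’t merely allow that life might end after time T – it asserts that nothing after T matters, even if something does happen.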
Thanks for the insight about multiverses – I haven’t thought much about them. Is what you say only true in a Level I multiverse?
1) Interesting, thanks!

3) I don’t think I know enough about physics to meaningfully comment. It sounds like you are disagreeing with the statement “we can plausibly only affect a finite subset of the universe”? And I guess, more generally, if physics predicts a multiverse of order ω_i, you claim that we can affect ω_i utils (because there are ω_i copies of us)?
The problems with extending standard total utilitarianism to the infinite case are the easiest to understand, which is why I put that in the summary, but I don’t think most of the article was about that.
For example, the fact that you can’t have intergenerational equity (Thm 3.2.1) seems pretty important no matter what your philosophical bent.
Thanks!
3.2: Good catch – I knew I was gonna mess those up for some paper. I’m not sure how to talk about the measurability result, though; any thoughts on how to translate it?

4.3: Basically, yeah. It’s easier for me to think about it just as a truncation, though.

4.5: Yes, you’re right – updated.

4.7: Yes, that’s what I mean. Introducing quantifiers seems to make things a lot more complicated, though.
Good question. It’s easiest to imagine the one-dimensional spatial case, like (…, L2, L1, me, R1, R2, …), where {Li} are the people to my left and {Ri} are those to my right. If I turn 180°, this permutes the vector to (…, R2, R1, me, L1, L2, …), which is a permutation that moves infinitely many people, but seems morally unobjectionable.
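A minimal way to write that down, assuming we index people by the integers with me at position 0:

$$\pi : \mathbb{Z} \to \mathbb{Z}, \qquad \pi(i) = -i.$$

This fixes only my own position and moves everyone else, so finite anonymity says nothing about it, even though it’s just a relabeling of left and right.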
Thank you for the thoughtful comment.
> For the two sequences in your example, it does not seem to be the case that x_t and y_t give the utility of the same individual in two possible states. Rather, it seems that we are re-indexing the individuals.
This is true. I think an important unstated assumption is that you only need to know that someone has utility x, and you shouldn’t care who that person is.
> Now I agree that an ethical preference relation should be invariant under some (and possibly all) infinite permutations IF the permutation is performed on both sequences. But it is hard to give an argument for why we should have invariance under general permutations of only one stream.
I’m not sure what the two sequences you are referring to are. Anonymity constraints simply say that if y is a permutation of x, then x~y.
> In almost all of the literature (in particular, in all three references in the original post), we consider one-sided sequences, indexed by time starting today and running into the infinite future. Are you aware of an example in this context?
It is a true and insightful remark that whether we consider vectors to be infinite or doubly infinite makes a difference.
To my mind, the use of vectors is misleading. Not caring about temporal location really just means treating populations as sets (not vectors), in which case anonymity assumptions aren’t really required.
I guess you could phrase that another way and say that if you don’t believe in infinite anonymity, then you believe that temporal location matters. This disagrees with general utilitarian beliefs. Nick Bostrom talks about this more in section 2.2 of his paper linked above.
A more mathy way that’s helpful for me is to just remember that the relation should be continuous. Say s_n(x) is a permutation of n components. By finite anonymity we have x ~ s_n(x) for any finite n. If lim_{n → ∞} s_n(x) = y, yet y were morally different from x, the relation would be discontinuous, and this would be a very odd result.
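For a concrete instance (using the alternating example that comes up again below), take

$$x = (0,1,0,1,\dots), \qquad y = (1,0,1,0,\dots),$$

and let s_n swap each of the first n adjacent (0,1) pairs, so that s_n(x) agrees with y on its first 2n components. Each s_n is a finite permutation, so x ~ s_n(x), and s_n(x) → y pointwise; continuity then forces x ~ y, even though no finite permutation takes x to y.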
> The question that I had and still have is whether you know of any arguments for why infinite anonymity is suitable to operationalize this idea.
Maybe I am missing something, but it seems obvious to me. Here is my thought process; perhaps you can tell me what I am overlooking.
For simplicity, say that A is the assumption that we shouldn’t care who people are, and IA is the infinite anonymity assumption. We wish to show A ⇔ IA.
Suppose A. Observe that any permutation of people can’t change the outcome, because it’s not changing any information which is relevant to the decision (as per assumption A). Thus we have IA.
Suppose IA. Observe that it’s impossible to care about who people are, because by assumption they are all considered equal. Thus we have A.
Hence A ⇔ IA.
These seem so obviously similar in my mind that my “proof” isn’t very insightful… but maybe you can point out to me where I am going wrong.
> One form of anonymity says that x ~ y if there is a permutation, say pi (in some specified class), that takes x to y. Another (sometimes called relative anonymity) says that if x is at least as good as y, then pi(x) is at least as good as pi(y). These two notions of anonymity are not generally the same.
I hadn’t heard about this – thanks! Do you have a source? Google Scholar didn’t find much.
In your example above, is the pi in pi(x) the same as the pi in pi(y)? I guess it must be, because otherwise these two types of anonymity wouldn’t be different, but that seems weird to me.
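Writing out the two definitions as I understand them, with the same pi in both:

$$\text{anonymity: } x \sim \pi(x); \qquad \text{relative anonymity: } x \succsim y \implies \pi(x) \succsim \pi(y).$$

The first compares a stream to its own permutation; the second permutes both streams at once, which is why the two notions can come apart.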
> If x and y above are two possible futures for the same infinite-horizon society, then I think that any utilitarian should be able to rank x above y without being criticized for caring about temporal location. Do you agree?
I certainly understand the intuition, but I’m not sure I fully agree with it. The reason I think x is better than y is that it seems to me that x is a Pareto improvement. But it really isn’t – there is no generation in x that is better off than the corresponding generation in y (under a suitable relabeling of the generations).
> I would be very interested to see another example of sequences (or whatever you replace them with) that are infinite permutations of each other, but not finite permutations of each other, and where you do think that equivalence should hold.
(0,1,0,1,0,1,...) and (1,0,1,0,1,0,...) come to mind.
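The permutation taking one to the other just swaps adjacent coordinates:

$$\pi(2k-1) = 2k, \qquad \pi(2k) = 2k-1 \qquad (k = 1, 2, \dots).$$

This moves every coordinate, and since the two sequences differ in every position, no permutation that changes only finitely many coordinates can map one to the other.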
Fair enough. Let me phrase it this way: suppose you were blinded to the location of people in time. Do you agree that infinite anonymity would hold?
I think this is an excellent idea, but one thing I didn’t understand: you said “catastrophic” risks and then mentioned foot-and-mouth disease, which doesn’t seem very catastrophic to me.
Are you proposing this for what the EA community would call “existential” risks (e.g. unfriendly AI)? Or just things on the order of a few billion dollars of damage?