AppliedDivinityStudies
I’m going off memory and could be wrong, but in my recollection the thought here was not very thorough. I recall some throwaway lines like “of course this isn’t liquid yet”, but very little analysis. In hindsight, it feels like if you think you have between $0 and $1t committed, you should put a good amount of thought into figuring out the distribution.
One instance of this mattering a lot is the bar for spending in the current year. If you have $1t the bar is much lower and you should fund way more things right now. So information about the movement’s future finances turns out to have a good deal of moral value.
I might have missed this though and would be interested in reading posts from before 11/22 that you can dig up.
Oh, I think you would be super worried about it! But not “beat ourselves up” in the sense of feeling like “I really should have known”. That’s in contrast to the part I think we really should have known: that the odds of success were not 100%, and that trying to figure out a reasonable estimate of those odds and the implications of failure would have been a valuable exercise that did not require a ton of foresight.
Bit of a nit, but “we created” is stronger phrasing than I would use. That said, I might agree with something like “how can we be confident that the next billionaire we embrace isn’t committing fraud?”. Certainly I expect there will be more vigilance the next time around, and a lot of skepticism.
What I wish I had said about FTX
Does EA Forum have a policy on sharing links to your own paywalled writing? E.g. I’ve shared link posts to my blog, and others have shared link posts to their Substacks, but I haven’t seen anyone share a link post to their own paid Substack before.
I think the main arguments against suicide are that it causes your loved ones a lot of harm, and (for some people) there is a lot of uncertainty in the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really the remaining years of your life) is net-negative, rather than commit suicide you should increase variance because you can only stand to benefit.
The idea that “the future might not be good” comes up on the forum every so often, but it doesn’t really undermine the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don’t fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we’re pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today
Yeah, it’s difficult to intuit, but I think that’s pretty clearly because we’re bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think:
- I would rather get punched once in the arm than once in the gut, but I would rather get punched once in the gut than 10x in the arm
- I’m fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut to a world where 10 people get punched in the arm
- I’m also fine with multiplying those numbers by 10 and saying that I would prefer 10 people punched in the gut to 100 people punched in the arm
- It’s harder to intuit this for really, really big numbers, but I am happy to attribute that to a failure of my imagination rather than to some bizarre effect where total utilitarianism only holds for small populations
- I’m also fine with intensifying the first harm by a little bit, so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it’s hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule
Etc etc.
Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.
Re:
> For instance, many people wouldn’t want to enter solipsistic experience machines (whether they’re built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.
I just don’t trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what’s real
And to be clear, I share the intuition that experience machines seem bad, and yet I’m often totally content to play video games all day long because it doesn’t violate those two conditions.
So what I’m roughly arguing is: We have some good reasons to be wary of experience machines, but I don’t think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
> people alive today have negative terminal value
This seems entirely plausible to me. A couple jokes which may help generate an intuition here (1, 2)
You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.
At the very least, this doesn’t feel as obviously objectionable to me as the other proposed solutions to the “mere addition paradox”.
The Repugnant Conclusion Isn’t
The problem (of worrying that you’re being silly and getting mugged) doesn’t arise merely when probabilities are tiny; it arises when probabilities are tiny and you’re highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that “spending the next year of my life on AI Safety research” will prevent x-risk.
In the former cases, we have base rates and many trials. In the latter case, I’m just doing a very rough Fermi estimate. Say I have 5 parameters with an order of magnitude of uncertainty on each one; multiplied out, the resulting range is just really horrendous.
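To make that concrete, here’s a rough Monte Carlo sketch (a toy model of my own: five independent parameters, each log-uniform over one order of magnitude, which is just one way of reading “an order of magnitude of uncertainty”):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Fermi estimate: five independent parameters, each uncertain over
# roughly one order of magnitude (log-uniform between 1 and 10).
samples = 10 ** rng.uniform(0, 1, size=(100_000, 5))
product = samples.prod(axis=1)

p5, p50, p95 = np.percentile(product, [5, 50, 95])
print(f"5th pct: {p5:.0f}   median: {p50:.0f}   95th pct: {p95:.0f}")
```

The 90% interval of the product spans a couple of orders of magnitude, and it only gets worse if each parameter is more uncertain than that; this is the compounding I have in mind.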
Anyway, I mostly agree with what you’re saying, but it’s possible that you’re somewhat misunderstanding where the anxieties you’re responding to are coming from.
Thanks, this is interesting. I wrote a bit about my own experiences here:
Under mainstream conceptions of physics (as I loosely understand them), the number of possible lives in the future is unfathomably large, but not actually infinite.
Longtermism does mess with intuitions, but it’s also not basing its legitimacy on a case from intuition. In some ways, it’s the exact opposite: it seems absurd to think that every single life we see today could be nearly insignificant when compared to the vast future, and yet this is what one line of reasoning tells us.
I originally wrote this post for my personal blog and was asked to cross-post here. I stand by the ideas, but I apologize that the tone is a bit out of step with how I would normally write for this forum.
Punching Utilitarians in the Face
I read the title and thought this was a really silly approach, but after reading through the list I am fairly surprised by how sold I am on the concept. So thanks for putting this together!
Minor nit: One concern I still have is about drilling facts into my head that won’t be true in the future. For example, instead of:
> The average meat consumption per capita in China has grown 15-fold since 1961
I would prefer:
> Average meat consumption per capita in China grew 15x in the 60 years after 1961
This is great, thanks Michael. I wasn’t aware of the recent 2022 paper arguing against the Stevenson/Wolfers result. A couple questions:
First: in this talk (starting around 6:30), Peter Favaloro from Open Phil talks about how they use a utility function that grows logarithmically with income, and how this is informed by Stevenson and Wolfers (2008). If the scaling were substantially less favorable (even in poor countries), that would have some fairly serious implications for their cost-effectiveness analysis. Is this something you’ve talked to them about?
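(To illustrate why the scaling matters, with toy numbers of my own rather than anything from the talk or the paper: under log utility the marginal value of a dollar falls off as 1/income, so a transfer to someone earning 100x less is worth roughly 100x more.)

```python
# Toy illustration of log utility: u(income) = log(income), so the
# marginal value of an extra dollar scales as 1 / income.
def marginal_utility(income):
    return 1.0 / income

# A dollar to someone earning $500/yr vs. someone earning $50,000/yr:
print(marginal_utility(500) / marginal_utility(50_000))  # 100.0
# If well-being actually grows more slowly than log income, the gains
# from income-boosting interventions shrink accordingly.
```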
Second, just curious how the Progress Studies folk responded when you gave this talk at the Austin workshop.
Yeah, this is all right, but I see EA as having been, since its founding, much closer to Protestant ideals than Catholic ones, at least on this particular axis.
If you had told me in 2018 that EA was about “supporting what Dustin Moskovitz chooses to do because he is the best person who does the most good”, or “supporting what Nick Bostrom says is right because he has given it the most thought”, I would have said “okay, I can see how in this world, SBF’s failings would be really disturbing to the central worldview”.
But instead it feels like this kind of attitude has never been central to EA, and that EA in fact embraces something like its direct opposite (reasoning from first principles, examining the empirical evidence, making decisions transparently). In this way, I see EA as already having been post-Reformation (so to speak).