I meant existential risk in the broad sense, not just extinction. The first graph is supposed to represent a specific worldview where the relevant form of existential risk is extinction, and extinction is reasonably likely. In particular I had Eliezer Yudkowsky’s views about AI in mind. (But I decided to draw a graph with the transition around 50% rather than his 99% or so, because I thought it would be clearer.) One could certainly draw many more graphs, or change the descriptions of the existing graphs, without capturing everyone’s views on the function mapping percentile performance to total realized value.
Thanks for explaining how you think about this issue, I will have to consider that more. My first thought is that I’m not utilitarian enough to say that a universe full of happy biological beings is ~0.0% as good as if they were digital, even conditional on staying biological turning out to be the wrong choice. But maybe I would agree on other possible disjunctive traps.
FWIW, when I first saw that I wondered “what’s the difference between the A-aesthetic and the B-aesthetic?” It might be clearer to say “non-aesthetic” or just something like “no frills”.
Thanks for this post. I wonder if it would be good to somehow target different subcultures outside of EA with messaging corresponding to their nearest-neighbor EA subculture. To some extent I guess this already happens, but maybe there is an advantage to explicitly thinking about it in these terms.
Why, if you don’t mind me asking?
His name is Carrick Flynn, not Flynn Carrick.
Have his thoughts on the mathematical universe idea changed since he first put it forward?
A small thing, but citing a particular person seems less culty to me than saying “some well-respected figures think X because Y”. Having a community orthodoxy seems like worse optics than valuing the opinions of specific named people.
What category would you put ideas like the unilateralist’s curse or Bostrom’s vulnerable world hypothesis in? They seem like philosophical theories to me, but not really moral theories (and I think they attract a disproportionate amount of criticism).
Can you be more specific about what this journalist wants to talk about? What do you mean by risk mitigation when traveling?
I don’t share this view, and I agree that it is weird. But maybe the feeling behind it is something like: if I, personally, were in extreme poverty I would want people to prioritize getting me material help over mental health help. I imagine I would be kind of baffled and annoyed if some charity was giving me CBT books instead of food or malaria nets.
That’s just a feeling though, and it doesn’t rigorously answer any real cause prioritization question.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that the present is the hinge of history. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term’s agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
The original EA materials (at least the ones that I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity, making donations with limited or equivocal evidence but large upside, with the expectation that you will eventually hit the jackpot.
I think the failure to separate and explain the difference between these things leads to a lot of understandable confusion and anger.
Thank you!
The content of this comment seems reasonable to me. How is it “LARPing”?