Founder and organizer of EA Eindhoven, EA Tilburg and their respective AI safety groups.
BSc. Biomedical Engineering > Community building gap year on Open Phil grant > MSc. Philosophy of Data and Digital Society. Interested in many cause areas, but increasingly focusing on AI governance and field building for my own career.
Jelle Donders
Right, so even with near-c von Neumann probes in all directions, vacuum collapse or some other galactic x-risk propagating at c would only allow civilization to survive as a thin spherical shell of space on a perpetually migrating wave front around the extinction zone, which would quickly eat up the center of the colonized volume.
Such a civilization could still contain many planets and stars if they can get a decent head start before a galactic x-risk occurs and travel at near c without getting slowed down much by having to make stops to produce and accelerate more von Neumann probes. Yeah, that’s a lot of ifs.
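As a rough back-of-the-envelope check on the head-start point (my own numbers, assuming a colonization front expanding at constant speed $v$ and ignoring cosmic expansion): if a galactic x-risk is triggered at the origin a time $T$ after colonization starts, the wave moving at $c$ overtakes the front when

$$c\,(t - T) = v\,t \;\Rightarrow\; t - T = \frac{v}{c - v}\,T$$

So with $v = 0.99c$, every year of head start buys the outermost shell roughly 99 extra years before the wave catches up; a 10-million-year head start would translate to on the order of a billion years of additional survival for the frontier.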
The 20 billion ly estimate seems accurate, so cosmic expansion only protects against galactic x-risks on very long timescales. And without very robust governance, it’s doubtful we’ll ever get to that point.
Interstellar travel will probably doom the long-term future
Some quick thoughts: By the time we’ve colonized numerous planets and cumulative galactic x-risks are starting to seriously add up, I expect there to be von Neumann probes traveling at a significant fraction of the speed of light (c) in many directions. Causality moves at c, so if we have probes moving away from each other at nearly 2c, that suggests extinction risk could be permanently reduced to zero. In such a scenario most value of our future lightcone could still be extinguished, but not all.
A very long-term consideration is that as the expansion of the universe accelerates, more and more of it splits into causally isolated islands. For example, in 100-150 billion years the Local Group will be causally isolated from the rest of the universe, protecting it from galactic x-risks happening elsewhere.
I guess this trades off with your 6th conclusion (Interstellar travel should be banned until galactic x-risks and galactic governance are solved). Getting governance right before we can build von Neumann probes at >0.5c is obviously great, but once we can build them it’s a lot less clear whether waiting is good or bad.
Thinking out loud, if any of this seems off lmk!
I’m leaning to disagree because existential risks are a lot broader than extinction risk.
If the question replaced ‘extinction’ with ‘existential’ and ‘survive’ with ‘thrive’ (retain most value of the future), I would lean towards agree!
AIS Netherlands is looking for a Founding Executive Director (EOI form)
Not really an answer to your questions, but I think this guide to SB 1047 gives a good overview of some related aspects.
Sensemaking of AI governance: what do people think is most promising, and what are their cruxes?
Besides posts, I would like to see some kind of survey that quantifies and graphs people’s beliefs.
I appreciate the frankness and reasoning transparency of this post.
I expect this was very much taken into account by the people who have quit, which makes their decision to quit anyway quite alarming.
How many safety-focused people have left since the board drama now? I count 7, but I might be missing more. Ilya Sutskever, Jan Leike, Daniel Kokotajlo, Leopold Aschenbrenner, Cullen O’Keefe, Pavel Izmailov, William Saunders.
This is a big deal. A bunch of the voices that could raise safety concerns at OpenAI when things really heat up are now gone. Idk what happened behind the scenes, but they judged that now was a good time to leave.
Possible effective intervention: Guaranteeing that if these people break their NDAs, all their legal fees will be covered. No idea how sensible this is, so agree/disagree voting encouraged.
Interesting post. I’ve always wondered how sensitive the views and efforts of the EA community are to the arbitrary historical process that led to its creation and development. Are there any in-depth explorations that try to answer this question?
Or, since thinking about alternative history can only get us so far, are there any examples of EA-adjacent philosophies or movements throughout history? E.g. Mohism, a Chinese philosophy from 400 BC, sounds like a surprisingly close match in some ways.
FHI almost singlehandedly made salient so many obscure yet important research topics. To everyone that contributed over the years, thank you!
Sounds good overall. 1% each for priorities, community building and giving seems pretty low. 1.75% for mental health might also be on the low side, as there appears to be quite a bit of interest in global mental health in NL. I think the focus on entrepreneurship is great!
Hard to say, but his behavior (and the accounts from other people) seems most consistent with 1.
For clarity, it’s on Saturday, not Friday! :)
The board must have thought things through in detail before pulling the trigger, so I’m still putting some credence on there being good reasons for their move and the subsequent radio silence, which might involve crucial info they have and we don’t.
If not, all of this indeed seems like a very questionable move.
If OP disagrees, they should practice reasoning transparency by clarifying their views
OP believes in reasoning transparency, but their reasoning has not been transparent
Regardless of what Open Phil ends up doing, I would really appreciate it if they at least did this :)
I’ve shared very similar concerns for a while. The risk of successful narrow EA endeavors that lack transparency backfiring in this manner feels very predictable to me, but many seem to disagree.
Agreed. In a pinned comment of his he elaborates on why he went for the optimistic tone:
honestly, when I began this project, I was preparing to make a doomer-style “final warning” video for humanity. but over the last two years of research and editing, my mindset has flipped. it will take a truly apocalyptic event to stop us, and we are more than capable of avoiding those scenarios and eventually reaching transcendent futures. pessimism is everywhere, and to some degree it is understandable. but the case for being optimistic is strong… and being optimistic puts us on the right footing for the upcoming centuries. what say the people??
It seems melodysheep went for a more passive “it’s plausible the future will be amazing, so let’s hope for that” framing over a more active “a great, a terrible, and a nonexistent future are all possible, so let’s do what we can to avoid the latter two” framing. A bit of a shame, since it’s this call to action where the impact is to be found.
This has seemed like a serious bottleneck in the AI safety space for a while now. Does anyone know why this kind of work is rarely funded?