Astrobiologist @ Imperial College London
Partnerships Coordinator @ SGAC Space Safety and Sustainability Project Group
Interested in: Space Governance, Great Power Conflict, Existential Risk, Cosmic threats, Academia, International policy
Interestingly, the singularity could actually have the opposite effect. Human exploration of the Solar System once looked decades away, but extremely intelligent AI could accelerate the technology to the point where it’s all possible within a decade.
The space policy landscape is not ready for that at all. There is no international framework for governing the use of space resources, and human exploration of Mars is still effectively prohibited by planetary protection rules because of the risk of contaminating the surface (and the Moon! Yes, we still care a lot).
So I lean more towards superintelligent AI being a reason to care more about space, not less. Will MacAskill discusses it in more detail here.
Haha good point, that’s precisely why I asked.
I’ve just put together the trial version of “Actions for Impact”, the newsletter to leverage the effective altruism (EA) community’s size to complete quick high impact tasks to support EA cause areas. I’m getting feedback on the first version at the moment.
DM me on the forum if you’re interested and I’ll send you the first version—very welcome to feedback!
Great post! I’ve never been convinced that the Precipice ends when we become multi-planetary. So I really enjoyed this summary and critique of Thorstad. And I might go even further and argue that not only does space settlement not mitigate existential risk, but it actually might make it worse.
I think it’s entirely possible that the more planets in our galaxy we colonise, the higher the likelihood that all life in the universe eventually goes extinct. It breaks down like this:
Assumption 1: The powers of destruction will always outpace the powers of construction or defence; i.e. at the limits of technology, there will be capabilities that a galactic civilisation could not defend against once created, even if its colonies remain isolated and do not communicate with one another.
Examples:
Vacuum collapse (an expanding bubble of the true vacuum destroys everything in the universe).
Unaligned superintelligence. Because of Assumption 1, I think an unaligned superintelligence would be able to destroy a galactic civilisation even if an aligned superintelligence were trying to protect it, especially if the unaligned system had destruction as its goal.
Self-replicating robots. Spaceships that mine a planet’s resources to replicate themselves and then move on. This could quickly become an exponentially growing wave that consumes every planet it reaches.
Space lasers. Laser attacks travel at the speed of light through the vacuum of space, so no planet would see them coming or be able to mount a defence. This favours a strike-first strategy: the only way to protect yourself is to destroy everyone else before they destroy you.
Assumption 2: Only one of the above examples has to be possible, and it would only take one civilisation in the galaxy creating it (by accident or otherwise) to put all life in the galaxy at risk.
Assumption 3: It would be extremely difficult to centrally govern all of these colonies and detect the development of these technologies, as the colonies will be light-years apart; sending and receiving messages between them could take thousands of years.
Assumption 4: The more colonies that exist in our galaxy, the higher the likelihood that one of these galaxy-ending technologies will be created.
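To make that last assumption concrete, here’s a toy calculation (my own illustration with made-up numbers, not something from Thorstad or the post): if each colony independently has some tiny probability p of creating a galaxy-ending technology over a given period, the chance that at least one of N colonies does so is 1 - (1 - p)^N, which climbs towards certainty as N grows.

```python
# Toy illustration (hypothetical numbers): cumulative probability that at least
# one of N isolated colonies creates a galaxy-ending technology, assuming each
# colony independently has probability p of doing so over some fixed period.

def p_galactic_catastrophe(n_colonies: int, p_per_colony: float) -> float:
    """P(at least one colony creates the technology) = 1 - (1 - p)^N."""
    return 1 - (1 - p_per_colony) ** n_colonies

# Even a tiny per-colony risk becomes near-certain at galactic scale.
for n in (1, 100, 10_000, 1_000_000):
    print(f"{n:>9,} colonies -> {p_galactic_catastrophe(n, 1e-6):.4%}")
```

With a per-colony risk of one in a million, a million colonies pushes the cumulative risk above 60%.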
So if the above is true, then I see 3 options:
We colonise the galaxy and all life in the universe becomes extinct due to the above argument. No long-term future.
Once we start colonising exoplanets, there’s no stopping the wave of galactic colonisation. So we stay on Earth, or within our own Solar System, until we can figure out a governance system that protects us against x-risks capable of destroying a galactic civilisation. This limits the importance of the long-term future.
We colonise the galaxy with extreme surveillance of every colony by independently acting artificial intelligence systems capable of detecting and destroying any dangerous technologies. This sounds a lot like it could become an s-risk or devolve into tyranny, but it might be the best option.
I would like to look into this further. If it’s true, then longtermism is pretty much bust and we should focus on saving animals from factory farming instead… or solve the galaxy-destroying problem… it would be nice to have a long pause to do that.
This event is now open to virtual attendees! It is happening today at 6:30PM BST. The discussion will focus on how the space sector can overcome international conflicts, inspired by the great power conflict and space governance 80K problem profiles.
Maybe information on how much good someone can do with the money they donate to charity?
I have written this post introducing space and existential risk and this post on cosmic threats, and I’ve come up with some ideas for stuff I could do that might be impactful. So, inspired by this post, I am sharing a list of ideas for impactful projects I could work on in the area of space and existential risk. If anyone working on anything related to impact evaluation, policy, or existential risk feels like ranking these in order of what sounds the most promising, please do that in the comments. It would be super useful! Thank you! :)
(a) Policy report on the role of the space community in tackling existential risk: Put together a team of people working in different areas related to space and existential risk (cosmic threats, international collaborations, nuclear weapons monitoring, etc.). Conduct research and come together to write a policy report with recommendations for international space organisations to help tackle existential risk more effectively.
(b) Anthology of articles on space and existential risk: Ask researchers to write articles about topics related to space and existential risk and put them all together into an anthology. Publish it somewhere.
(c) Webinar series on space and existential risk: Build a community of people in the space sector working on areas related to existential risk by organising a series of webinars, each accessible to virtual attendees.
(d) Series of EA forum posts on space and existential risk: This should help guide people to an impactful career in the space sector, build a community in EA, and better integrate space into the EA community.
(e) Policy adaptation exercise, SMPAG → AI safety: Use a mechanism-mapping policy adaptation exercise to build on the space sector’s success in tackling asteroid impact risk through the Space Mission Planning Advisory Group (SMPAG), and work out how organisations working on AI safety could be more effective.
(f) White paper on Russia and international space organisations: Russia’s involvement in international space missions and organisations following its invasion of Ukraine could be a good case study for building robust international organisations. E.g. ESA suspended its cooperation with Russia, Russia is still actively participating in the International Space Station, and it remains a member of SMPAG but is not participating. Figuring out why Russia stayed involved with some organisations and not others could be useful.
(g) Organise an in-person event on impactful careers in the space sector: This would be aimed at effective altruists and would help gauge interest and provide value.
Ah yes good spot thank you! I got the wrong law of thermodynamics S:
I have corrected this in the post
I don’t think there’s anything we can do right now about rogue celestial bodies—so not worth thinking about for me.
For space weather stuff like solar flares, the main jobs are hardening technology against high levels of radiation, especially for nuclear reactors and national defence infrastructure. Researching exactly what the impacts of different threats might be, and how probable they are, would definitely help governments defend against them more effectively.
I’m not sure I agree with your characterisation of upper-case and lower-case EA. The article refers to “lower case” effective altruism as:
“A philosophy that advocates using evidence and reasoning to try to do the most good possible with a given amount of resources.”
That is EA, and all of our cause areas and strategies have come out of it.
But the article seems to insinuate that some cause areas are somehow separate from this philosophy of EA, injected into the movement by people like SBF. The article defines “upper case” effective altruism as “assigning numerical values to human suffering and the ‘worth’ of current and future human beings”, which it says includes longtermism and earning to give. I don’t think this distinction makes sense, and I don’t think it’s good for the movement to cast its unpopular areas off onto another boat, as if they were a plague with no grounding in EA’s core principles.
Near-termist vs longtermist EA is a neat distinction that helps when talking about the EA community and funding areas. But honestly I have no idea how the author of this article is drawing a line between “lower case” and “upper case” EA other than how the general public has responded to the movement. In my view of EA, everything is occurring in the same arena of debate under the umbrella of EA’s core principles—so it’s all “lower case”.
Yes good point, thank you. I have updated the post to clarify that the probability estimate is for a scenario as bad as the worst case.
I think that if I frame severity in terms of population loss, it will be a lot harder to pin down. In the severity scores I’m also thinking about how badly each threat would affect our long-term future and how it changes the probability of other x-risks. So if I assessed it on population loss I might have to add other factors, and it might be a bit out of scope for what I’m going for with the post. The severity estimates are fairly open to interpretation as I’ve done them, and I think that’s fine for an introduction/overview of cosmic threats.
Thanks for the feedback :)
Good point, I have edited the post. Lazy writing on my part. Thank you!
Yeah, basically that was my reasoning. I’m super sceptical about this risk. Such an organism might destroy one ecosystem in an extreme environment, or be a very effective pathogen in specific circumstances, but it would be unlikely to be a pervasive threat.
This theoretical microbe would have invested so many stat points in adaptations that are useless on Earth: extreme resistance to UV radiation, resistance to toxins in Martian soil like perchlorates and hydrogen peroxide (H2O2), and unprecedented tolerance of desiccation, salinity, and ionic strength. And it would have to power all of these useless abilities on a food source it is probably not suited to metabolising, and definitely not under the conditions it is used to. I just can’t imagine how it would be a huge threat around the world. But in a worst-case scenario it could kill a lot of people or damage an ecosystem we rely on heavily, with massive global implications, so 7⁄10.
Thank you, and very good question! The short answer is not really. I think that building momentum on existential risk reduction from the space sector could be tractable. One way to do this would be to found organisations that tackle some of the cosmic threats with unknown severity and probability. But to be honest I’m not sure that’s necessary; maybe the LTFF, governments, or other organisations should just fund some more research into these threats.
I think the main area in which EAs can have an impact is in developing existing organisations: increasing their power to enforce policy, deepening their interconnectedness, and increasing their prevalence. By doing this, we may be able to increase great power collaboration and build up institutions that will naturally evolve into long-term space governance structures, while helping to tackle natural existential risks directly.
I’m making a post about this strategy at the moment, so happy to elaborate, but I don’t want to write the whole post in one comment! Here’s a diagram from the post draft to show how well covered most areas in space are:
Plugging this into EAometer…
We can propose a project to “redirect charitable donations within popular but low-impact causes to the highest-impact charities within each of those causes”.
We can score this project on importance, tractability, and neglectedness to help decide if it’s worth working on.
Importance: Probably a 3⁄10, as this project is directed at low-impact causes. But the causes may still matter somewhat, since lots of people care about them or are affected by them enough to donate.
Tractability: I think 5⁄10. Charities like Cancer Research UK and WWF have near-monopolies on giving within these causes and dominate advertising, so I’m not sure how we could peel people away from them. On the other hand, the fact that lots of people already donate to these causes would probably make it easier to get donations to grant funds within them, though such funds might not attract the type of people who give through GWWC/EA.
Neglectedness: Not sure, I’d have to do some research. But I would guess it’s low, because these are popular causes and so probably already crowded with researchers trying to increase impact.
So to conclude, I think it would be hard to implement this project and compete in such large, busy cause areas that invest a lot of money in advertising. The change in impact would most likely not be as great as simply directing people to more effective cause areas, and popular cause areas are so overcrowded that almost everything worthwhile probably gets funded anyway.
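For what it’s worth, here’s a rough sketch (my own toy numbers and a deliberately crude multiplicative combination, not an established EA formula) of how those three scores might be rolled into a single prioritisation number:

```python
# Toy ITN-style scoring sketch (hypothetical weights and scores, my own
# illustration): multiply importance, tractability, and neglectedness into a
# single rough priority number.

from dataclasses import dataclass

@dataclass
class ProjectScore:
    name: str
    importance: float     # 0-10
    tractability: float   # 0-10
    neglectedness: float  # 0-10

    def priority(self) -> float:
        # Multiplicative: a near-zero score on any one factor sinks the project.
        return self.importance * self.tractability * self.neglectedness

redirect_within_causes = ProjectScore(
    name="Redirect donations within low-impact causes",
    importance=3, tractability=5, neglectedness=2,  # neglectedness is a guess
)
print(redirect_within_causes.priority())  # 30 out of a possible 1000
```

A multiplicative combination reflects the intuition that a project isn’t worth pursuing if any one of the three factors is close to zero.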
Woah, a really nice article that identified the most common criticisms of EA that I’ve come across, namely, cause prioritization, earning to give, billionaire philanthropy, and longtermism. Funnily enough, I’ve come across these criticisms on the EA forum more than anywhere else!
But it’s nice to see a well-researched, external, and in-depth review of EA’s philosophy, and as a non-philosopher, I found it really accessible too. I would like to see an article of a similar style arguing against EA principles though. Does anyone know where I can find something like that? A search for EA criticism on the web brings up angry journalists and media articles that often miss the point.
I’m thinking about organising a webinar series on space and existential risk, mostly because it’s something I would really like to see. The series would cover a wide range of topics:
Asteroid Impacts
Building International Collaborations
Monitoring Nuclear Weapons Testing
Monitoring Climate Change Impacts
Planetary Protection from Mars Sample Return
Space Colonisation
Cosmic Threats (supernovae, gamma-ray bursts, solar flares)
The Overview Effect
Astrobiology and Longtermism
I think this would be an online webinar series. Would this be something people would be interested in?
Thank you for these updates! They are super useful for me as someone who is just starting to get more involved with EA. The updates are really helping me get a good overview of what EA’s priorities are and what measurable differences the movement is making. I come out of the post with a list of things to look further into :D
Greetings! I’m a doctoral candidate and have spent three years working as a freelance creator, specializing in crafting visual aids, particularly of a scientific nature. I’m enthusiastic about contributing my time to generate visuals that effectively support EA causes.
Typically, my work involves producing diagrams for academic grant applications, academic publications, and presentations, but I’m open to assisting with outreach illustrations or social media visuals as well. If you find yourself in need of such assistance, please don’t hesitate to get in touch! I’m happy to hop on a Zoom chat.
Awesome, thanks for sharing!
Thanks for this analysis! I was thinking about this issue a lot recently and I’ll definitely refer to your post going forward.
I wonder if large-scale space resource activities might incidentally increase the risk of an asteroid impact in the long term, though? Our Solar System has had billions of years for asteroids to settle into stable orbits. Asteroid mining and small alterations to asteroid trajectories could accumulate into chaotic effects over the long term. We might end up with asteroid orbits across the Solar System that are far more chaotic and difficult to predict, posing threats (or, more likely, extreme costs for deflection activities) to humans living in space or to anything with a gravity well. This could push our Solar System towards something resembling the Late Heavy Bombardment around 4 billion years ago, when large asteroid impacts were extremely common.
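As a rough back-of-envelope (my own toy numbers, using the standard linearised relative-motion approximation for near-circular orbits, not anything from your post): an unaccounted-for along-track velocity change Δv produces a secular along-track drift of roughly 3·Δv·t, so even centimetre-per-second nudges from mining operations add up to tens of thousands of kilometres of positional uncertainty within a century.

```python
# Back-of-envelope sketch (hypothetical numbers): secular along-track drift
# caused by a small unplanned velocity change on a near-circular heliocentric
# orbit, using the leading-order result drift ~ 3 * delta_v * t from the
# linearised (Clohessy-Wiltshire) relative-motion equations.

SECONDS_PER_YEAR = 3.156e7

def along_track_drift_km(delta_v_m_per_s: float, years: float) -> float:
    """Approximate secular drift in km after `years`, for an along-track delta-v in m/s."""
    return 3 * delta_v_m_per_s * years * SECONDS_PER_YEAR / 1000

# A 1 cm/s nudge from mining operations, left unaccounted for over a century:
print(f"{along_track_drift_km(0.01, 100):,.0f} km of drift")  # roughly 95,000 km
```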
I think this threat is addressed by the same recommendations you make for asteroid weaponization, though. I’m particularly hopeful about advocating for more equitable governance of space resources. These policies will start to be locked in during the 2030s with in-situ resource utilization (ISRU) for lunar bases; unfortunately, many nations, like the USA and Luxembourg, appear to have little intention of sharing space resources and are using unilateral space policy to incentivize private industry.