Disclaimer: I have also applied to Forethought and won’t comment on the post directly due to competing interests.
On space governance, you assume 2 scenarios:
We don’t solve all the alignment/safety problems and everything goes very badly
We solve all the problems and AGI leads to utopian effects
I agree that early space governance work is plausibly not that important in those scenarios, but in what percentage of futures do you see us reaching one of these extremes? Capabilities allowing for rapid technological progress can be achieved under various scenarios related to alignment and control that are not at the extremes:
Scenario A: Fast progress without AGI (scaling limits overcome, algorithmic breakthroughs, semi-autonomous robotics).
Scenario B: Uneven AGI (not catastrophically misaligned, multipolar, corporate-controlled).
Scenario C: AGI that’s aligned to someone but not globally aligned.
Scenario D: Multiple AGIs controlled by different actors with competing goals.
And the capabilities that allow rapid technological progress can be developed independently of the wisdom needed to solve and reshape all our space governance problems. This decoupling could happen under any of those scenarios:
Maybe AI remains under control but important people don’t listen to or trust the wisdom from the ASI.
Some challenges in developing AGI are never solved, and AI emerges as narrowly intelligent but still capable of advanced robotics such as autonomous construction and self-replication (so we still get rapid technological progress), without the wisdom to solve all our governance problems.
Legal or societal forces prevent AI from taking a leading role in governance.
So, under many scenarios, I don’t expect AGI to just solve everything and reshape all our work on space governance. But even if it does reshape governance, some space-related lock-ins remain binding even then:
Resource distribution lock-in: which states and corporations have physical access to asteroids, propellant depots, lunar poles, launch capacity.
Institutional lock-in: whatever coordination mechanisms exist at AGI creation time are what AGI-augmented institutions will inherit.
Strategic stability lock-in: early military architectures (Lagrange-point sensors, autonomous interceptors) become entrenched.
In all these scenarios, early space industrialisation and early high-ground positions create durable asymmetries that AGI cannot trivially smooth over. AGI cannot coordinate global actors instantly. Some of these lock-ins occur before or during the emergence of transformative AI. Therefore, early space-governance work affects the post-AGI strategic landscape and cannot simply be postponed without loss.
The disagreement could then be over whether we reach AGI in a world where space industrialisation has already begun creating irreversible power asymmetries. If a large-scale asteroid-mining industry or a significant lunar industry emerges before AGI, then a small group controlling that infrastructure could have a huge first-mover advantage: it could exploit rapid technological progress to lock in its power forever, or take control of the long-term future, by building a primitive Dyson swarm or advanced space-denial capabilities. So, if AI timelines are not as fast as many in this community think, and an intelligence explosion happens closer to 2060 than 2030, then space governance work right now is even more important.
Space governance is also not at all an arbitrary choice; it is an essential element of AGI preparedness. AGI will operate spacecraft, build infrastructure, and manage space-based sensors. Many catastrophic failure modes (post-AGI power grabs, orbital laser arrays, autonomous swarms, asteroid-deflection misuse) require both AGI and space activity. If it turns out that conceptual breakthroughs don’t arrive and we need enormous amounts of energy and compute to train superintelligence, then space expansion is also a potential pathway to achieving superintelligence. Google is already working on Project Suncatcher to scale machine learning in space, and Elon Musk, who has launched roughly 9,000 Starlink satellites into Earth orbit, has also discussed the value of solar-powered satellites for machine learning. All of this ongoing activity is linked to the development of AGI and locks in physical power imbalances post-AGI.
As I argued in my post yesterday, even without the close links between space governance and AGI, it isn’t an arbitrary choice of problem. I think that if a global hegemony doesn’t emerge soon after the development of ASI, then it will likely emerge in outer space through the use of AI or self-replication to create large-scale space infrastructure (allowing massive energy generation and access to interstellar space). So, under many scenarios related to the development of AI, competition and conflict will continue into outer space, where the winner could set the long-term trajectory of human civilisation, or the ongoing conflict could squander the resources of the galaxy. This makes space governance more important than drought-resistant crops.
All you have to admit for space governance to be exceptionally important is that some of these scenarios where AGI initiates rapid technological progress but doesn’t reshape all governance are fairly likely.
This reply seems to confirm that my objection about space governance is correct. The only reasons to worry about space governance pre-AGI, if AGI is imminent, are (a) safety/alignment/control problems or value lock-in problems, which I would lump in with safety/alignment/control, and/or (b) weaknesses in AI capabilities that either mean we don’t actually achieve AGI after all, or that we develop something that maybe technically or practically counts as “AGI”, or something close enough, which is highly consequential but lacks some extremely important cognitive or intellectual capabilities and isn’t human-level in every domain.
If either (a) or (b) is true, then this indicates the existence of fundamental problems or shortcomings with AI/AGI that will universally affect everything, not just outer space specifically. For instance, if space governance is locked in, that implies a very, very large number of other things would be locked in, too, many of which are far more consequential (at least in the near term, and maybe in the long term too) than outer space. Would authoritarianism be locked in, indefinitely, in authoritarian countries? What about military power? Would systemic racism and sexism be locked in? What about class disparities? What about philosophical and scientific ideas? What about competitive advantages and incumbent positions in every existing industry on Earth? What about all power, wealth, resources, advantages, ideas, laws, policies, structures, governments, biases, and so on?
The lock-in argument seems to imply that a certain class of problems or inequalities that exists pre-AGI will exist post-AGI forever, and in a sense significantly worsen, not just in the domain of outer space but in many other domains, perhaps every domain. This would be a horrible outcome. I don’t see why the solution should be to solve all these problems and inequalities within the next 10 years (or whatever it is) before AGI, since that seems unrealistic. If we need to prepare a perfect world before AGI because AGI will lock in all our problems forever, then AGI seems like a horrible thing, and it seems like we should focus on that, rather than rushing to perfect the world in the limited time remaining before it arrives (a forlorn hope).
Incidentally, I do find it puzzling to try to imagine how we would manage to create an AI or AGI that is vastly superhuman at science and technology but not superhuman at law, policy, governance, diplomacy, human psychology, social interactions, philosophy, and so on, but my objection to space governance doesn’t require any assumption about whether this is possible or likely. It still works just as well either way.
As an aside, there seems to be no near-term prospect of solar panels in outer space capturing energy at a scale or cost-effectiveness that rivals solar or other energy sources on the ground, barring AGI or some other radical discontinuity in science and technology. The cost of a single satellite is something like 10x to 1,000x more than the cost of a whole solar farm on the ground. Project Suncatcher is explicitly a “research moonshot”. The Suncatcher pre-print optimistically foresees a world of ~$200 per kilogram of matter sent to orbit (current costs on SpaceX’s Falcon 9 are around 15x to 20x higher). ~$200 per kilogram is still immensely costly. A typical ground-based solar panel weighs on the order of 20 kg, so by my quick math that’s ~$4,000 in launch cost per panel, which is ~10-30x the retail price of a rooftop solar panel. This is to say nothing of the difficulties of doing maintenance on equipment in space.[1]
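To make that quick math explicit, here is a minimal sketch. The $200/kg figure is the Suncatcher pre-print’s optimistic future launch cost; the panel mass and retail price are my own ballpark assumptions, not figures from the pre-print:

```python
# Rough launch-cost arithmetic for putting a solar panel in orbit.
# The $200/kg figure is the Suncatcher pre-print's optimistic future launch cost;
# the panel mass and retail price below are ballpark assumptions for a typical
# rooftop panel, not figures from the pre-print.
launch_cost_per_kg = 200      # USD per kg to orbit (optimistic future cost)
panel_mass_kg = 20            # a typical rooftop panel is roughly 18-25 kg
panel_retail_usd = 300        # assumed retail price of a rooftop panel, USD

launch_cost_per_panel = launch_cost_per_kg * panel_mass_kg
ratio_to_retail = launch_cost_per_panel / panel_retail_usd

print(f"Launch cost per panel: ${launch_cost_per_panel:,.0f}")    # ~$4,000
print(f"Multiple of retail panel price: {ratio_to_retail:.0f}x")  # ~13x, within the ~10-30x range
```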
The barriers to deploying more solar power on the ground are not a lack of space — we have plenty of places to put them — but cost and the logistical/bureaucratic difficulties with building things. Putting solar in space increases costs by orders of magnitude and comes with logistical and bureaucratic difficulties that swamp anything on the ground (e.g. rockets are considered advanced weapon tech and American rocket companies can’t employ non-Americans).[2]
A recent blog post by a pseudonymous author, who claims to be “a former NASA engineer/scientist with a PhD in space electronics” and a former Google employee with experience in cloud computing and AI, describes some of the problems with operating computer equipment in space. An article in MIT Technology Review from earlier this year quotes an expert citing some of the same concerns.
Elon Musk’s accomplishments with SpaceX and Tesla are considerable and shouldn’t be downplayed or understated. However, you also have to consider the credibility of his predictions about technology in light of his many, many, many failed predictions, projects, and ideas, particularly around AI and robotics. See, for example, his long history of predictions about Level 4/5 autonomy and robotaxis, or about robotic automation of Tesla’s factories. The generous way to interpret this is as a VC-like model where a 95%+ failure rate is acceptable because the small percentage of winning bets pay off so handsomely. This is more or less how I interpreted Elon Musk in roughly the 2014-2019 era. Incidentally, in 2012, Musk said that space-based solar was “the stupidest thing ever” and that “it’s super obviously not going to work”.
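As a toy illustration of that VC-style logic (all numbers here are my own illustrative assumptions, not anything Musk has claimed), a portfolio of long-shot bets can still have positive expected value at a 95%+ failure rate if the rare wins are large enough:

```python
# Toy expected-value calculation for the "VC-like" interpretation.
# All numbers are illustrative assumptions, not empirical figures.
p_success = 0.05        # assumed 5% chance a given bet pays off
payoff_multiple = 50    # assumed return on a winning bet, as a multiple of the stake
stake = 1.0             # normalised stake per bet (losing bets return ~0)

expected_return = p_success * payoff_multiple * stake
break_even_payoff = 1 / p_success

print(f"Expected return per unit staked: {expected_return:.1f}x")    # 2.5x, i.e. positive EV
print(f"Break-even payoff at 5% success: {break_even_payoff:.0f}x")  # wins must exceed 20x the stake
```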
In recent years, Musk’s reliability has gotten significantly worse: he has become addicted to ketamine, has been politically radicalized by misinformation on Twitter, and his personal life has become more chaotic and dysfunctional (e.g. his fraught relationships with the many mothers of his many children, and his estrangement from his oldest daughter). Moreover, in the wake of the meteoric rise in Tesla’s stock price and the end of its long period of financial precarity, he began to demand loyalty and agreement from those around him, rather than keeping advisors and confidants who could push back on his more destructive impulses or give him sober second thoughts. Musk is, tragically, no longer a source I consider credible or trustworthy, despite his very real and very important accomplishments. It’s terrible because if he had taken a different path, he could have contributed so much more to the world.
It’s hard to know for sure how to interpret Musk’s recent comments on space-based solar and computing, but they are probably conditional on AGI being achieved first. Musk has made aggressive forecasts on AGI:
In December 2024, he predicted on Twitter that AGI would be achieved by the end of 2025 and superintelligence in 2027 or 2028, or by 2030 at the latest
In February 2025, he predicted in an interview that AGI would be achieved in 2026 or 2027 and superintelligence by 2029 or 2030
In September 2025, he predicted in an interview that AGI would be achieved in 2026 and superintelligence by 2030
In October 2025, he predicted on Twitter that “Grok 5 will be AGI or something indistinguishable from AGI” (Grok 5 is supposed to launch in Q1 2026)
Musk’s recent comments about solar and AI in outer space concern the prospects in 2029-2030; combined with his other predictions, this would seem to imply he’s talking about a post-AGI or post-superintelligence scenario. It’s not clear that he believes there’s any prospect of this in the absence of AGI. But even if he does, we should not accept his belief as credible evidence for that prospect.