This reply seems to confirm that my objection about space governance is correct. The only reasons to worry about space governance pre-AGI, if AGI is imminent, are (a) safety/alignment/control problems or value lock-in problems, which I would lump in with safety/alignment/control, and/or (b) weaknesses in AI capabilities, which would mean either that we don’t actually achieve AGI after all, or that we develop something that technically or practically counts as “AGI” (or something close enough) and is highly consequential, but that lacks some extremely important cognitive or intellectual capabilities and isn’t human-level in every domain.
If either (a) or (b) is true, that indicates fundamental problems or shortcomings with AI/AGI that would affect everything, not just outer space specifically. For instance, if space governance is locked in, that implies a very, very large number of other things would be locked in, too, many of which are far more consequential (at least in the near term, and maybe in the long term too) than outer space. Would authoritarianism be locked in, indefinitely, in authoritarian countries? What about military power? Would systemic racism and sexism be locked in? What about class disparities? What about philosophical and scientific ideas? What about competitive advantages and incumbent positions in every existing industry on Earth? What about all power, wealth, resources, advantages, ideas, laws, policies, structures, governments, biases, and so on?
The lock-in argument seems to imply that a certain class of problems and inequalities that exists pre-AGI will persist post-AGI forever, and in a sense significantly worsen, not just in the domain of outer space but in many other domains, perhaps every domain. That would be a horrible outcome. I don’t see why the solution should be to solve all these problems and inequalities in the next 10 years (or whatever it is) before AGI, since that seems unrealistic. If we need to prepare a perfect world before AGI because AGI will lock in all our problems forever, then AGI seems like a horrible thing, and it seems like we should focus on that, rather than rushing to perfect the world in the limited time remaining before it arrives (a forlorn hope).
Incidentally, I do find it puzzling to try to imagine how we would manage to create an AI or AGI that is vastly superhuman at science and technology but not superhuman at law, policy, governance, diplomacy, human psychology, social interactions, philosophy, and so on. But my objection to space governance doesn’t require any assumption about whether this is possible or likely; it works just as well either way.
As an aside, there seems to be no near-term prospect of solar panels in outer space capturing energy at a scale or cost that rivals solar or other energy sources on the ground, barring AGI or some other radical discontinuity in science and technology. The cost of a single satellite is somewhere between 10x and 1,000x the cost of a whole solar farm on the ground. Project Suncatcher is explicitly a “research moonshot”. The Suncatcher pre-print optimistically foresees a world of ~$200 per kilogram of matter sent to orbit (down from a cost roughly 15x to 20x higher on SpaceX’s Falcon 9 currently, i.e. around $3,000-$4,000 per kilogram). Even ~$200 per kilogram is immensely costly: by my quick math, launch costs alone come to ~$4,000 per typical ground-based solar panel, which is ~10-30x the retail price of a rooftop solar panel. And this is to say nothing of the difficulty of doing maintenance on equipment in space.[1]
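For the curious, here is that quick math spelled out. This is a minimal back-of-the-envelope sketch: the ~$200/kg figure and the 15-20x Falcon 9 multiple come from the Suncatcher pre-print as described above, while the ~20 kg panel mass and the $150-$400 retail price range are my own assumed inputs for a “typical” rooftop panel, not figures from the sources.

```python
# Back-of-the-envelope launch-cost math for space-based solar.
# $200/kg and the 15-20x Falcon 9 multiple are from the Suncatcher
# pre-print (as quoted above); the panel mass and retail price range
# are my assumptions, not figures from the post or the pre-print.

LAUNCH_COST_PER_KG = 200       # USD/kg, the pre-print's optimistic future figure
FALCON_9_MULTIPLE = (15, 20)   # current Falcon 9 cost is ~15-20x higher
PANEL_MASS_KG = 20             # assumed mass of a typical rooftop solar panel
PANEL_RETAIL_USD = (150, 400)  # assumed retail price range for one panel

launch_cost_per_panel = LAUNCH_COST_PER_KG * PANEL_MASS_KG
print(f"Launch cost per panel at $200/kg: ${launch_cost_per_panel:,}")  # $4,000

ratios = [launch_cost_per_panel / p for p in PANEL_RETAIL_USD]
print(f"Launch cost alone is ~{min(ratios):.0f}-{max(ratios):.0f}x retail")  # ~10-27x

current = [LAUNCH_COST_PER_KG * m for m in FALCON_9_MULTIPLE]
print(f"Implied current Falcon 9 cost: ~${current[0]:,}-${current[1]:,}/kg")
```

Under these assumptions the launch bill alone is roughly 10-27x the panel’s retail price, which is where the ~10-30x range above comes from; heavier panels or cheaper retail prices push the multiple higher.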
The barriers to deploying more solar power on the ground are not a lack of land (we have plenty of places to put panels) but cost and the logistical/bureaucratic difficulty of building things. Putting solar in space increases costs by orders of magnitude and comes with logistical and bureaucratic difficulties that swamp anything on the ground (e.g. rockets are considered advanced weapons technology, and American rocket companies can’t employ non-Americans).[2]
A recent blog post by a pseudonymous author, who claims to be “a former NASA engineer/scientist with a PhD in space electronics” and a former Google employee with cloud and AI experience, describes some of the problems with operating computer equipment in space. An article in MIT Technology Review from earlier this year quotes an expert citing some of the same concerns.
Elon Musk’s accomplishments with SpaceX and Tesla are considerable and shouldn’t be downplayed. However, you also have to weigh the credibility of his predictions about technology against his many, many, many failed predictions, projects, and ideas, particularly around AI and robotics. See, for example, his long history of predictions about Level 4/5 autonomy and robotaxis, or about robotic automation of Tesla’s factories. The generous interpretation is a VC-like model in which a 95%+ failure rate is acceptable because the small percentage of winning bets pays off so handsomely. This is more or less how I interpreted Elon Musk in roughly the 2014-2019 era. Incidentally, in 2012, Musk said that space-based solar was “the stupidest thing ever” and that “it’s super obviously not going to work”.
In recent years, Musk’s reliability has gotten significantly worse. He has become addicted to ketamine, been politically radicalized by misinformation on Twitter, and seen his personal life grow more chaotic and dysfunctional (e.g. his fraught relationships with the many mothers of his many children, his estrangement from his oldest daughter). And in the wake of the meteoric rise in Tesla’s stock price and the end of its long period of financial precarity, he began to demand loyalty and agreement from those around him, rather than keeping advisors and confidants who could push back on his more destructive impulses or give him sober second thoughts. Musk is, tragically, no longer a source I consider credible or trustworthy, despite his very real and very important accomplishments. It’s terrible, because if he had taken a different path, he could have contributed so much more to the world.
It’s hard to know for sure how to interpret Musk’s recent comments on space-based solar and computing, but they are probably conditional on AGI being achieved first. Musk has made aggressive forecasts on AGI:
- In December 2024, he predicted on Twitter that AGI would be achieved by the end of 2025 and superintelligence in 2027 or 2028, or by 2030 at the latest
- In February 2025, he predicted in an interview that AGI would be achieved in 2026 or 2027 and superintelligence by 2029 or 2030
- In September 2025, he predicted in an interview that AGI would be achieved in 2026 and superintelligence by 2030
- In October 2025, he predicted on Twitter that “Grok 5 will be AGI or something indistinguishable from AGI” (Grok 5 is supposed to launch in Q1 2026)
Musk’s recent comments about solar and AI in outer space concern the prospects in 2029-2030, so, combined with his other predictions, he would seem to be talking about a post-AGI or post-superintelligence scenario. It’s not clear that he believes there’s any prospect of this in the absence of AGI. And even if he does, we should not accept his belief as credible evidence for that prospect.