This claim is not true:
…for AI to be plateauing, several different consistent trends that have been roughly constant for many years would have to slow down very dramatically—even if they merely slowed down by 90%, you'd still get an intelligence explosion.
If the trends over the last, say, five years in deep learning continued, say, for another ten years, that would not lead to AGI, let alone an intelligence explosion, and would not necessarily even lead to practical applications of AI that contribute significantly to economic growth.
There are several odd turns of logic in this post. For instance:
AIs are currently very useful. However, they are very inefficient at learning compared to humans. This should lead us to suspect that there is lots of room for possible improvement in AI's ability to learn. Even small improvements in this could lead to extreme progress.
A fundamental weakness in deep learning/deep RL, namely data inefficiency/sample inefficiency, is reframed as "lots of room for possible improvement" and turned into an argument for AI capabilities optimism? Huh? This is a real judo move.
Overall, the argument is very sketchy and hand-wavy, invoking strange, vague ideas like: we should assume that trends that have existed in the past will continue to exist in the future (why?), if you look at trends, we're moving toward AGI (why? how? in what way, exactly?), and even if trends don't continue, progress will probably continue to happen anyway (how? why? on what timeline?).
There are also strange, weak arguments like this one:
There's good reason to think AGI is possible in principle. Billions of dollars are being poured into it, and the trend lines look pretty good. These facts make me hesitant to bet against continued progress.
Compelling virtual reality experiences are possible in principle. Billions of dollars have been invested in trying to make VR compelling — by Oculus/Meta, Apple, HTC, Valve, Sony, and others. So far, it hasn't really panned out. VR remains very small compared to conventional gaming consoles or computing devices. So, being possible in principle and receiving a lot of investment are not particularly strong arguments for success.
VR is just one example. There are many others: cryptocurrencies/blockchain and autonomous vehicles are two examples where the technology is possible in principle and investment has been huge, but actual practical success has been quite tiny. (E.g., you can't use cryptocurrency to buy practically anything and there isn't a single popular decentralized app on the blockchain. Waymo has only deployed 2,000 vehicles and only plans to build 2,000 more vehicles in 2026.)
With claims this massive, we should have a very high bar for the evidence and reasoning being offered to support them. Perhaps a good analogy is claims of the supernatural. If someone claims to be able to read minds, foresee the future through precognition, or "remote view" physically distant locations with only their mind, we should apply a high standard of evidence for these claims. I'm thinking specifically of how James Randi applied strict scientific standards of testing for such claims — and offered a large cash reward to anyone who could pass the test. In a way, claims about imminent superintelligence and an intelligence explosion and so on are more consequential and more radical than the claim that humans are capable of clairvoyance or extrasensory perception. So, we should not accept flimsy arguments or evidence for these claims.
I wouldn't even accept this kind of evidence or reasoning for a much more conservative and much less consequential claim, such as that VR will grow significantly in popularity over the next decade and VR companies are a good investment. So, why would I accept this for a claim that the end of the world as we know it — in one way or another — is most likely imminent? This doesn't pass basic rationality/scientific thinking tests.
On the topic of Forethought's work, or at least as Will MacAskill has presented it here on the forum, I find the idea of "AGI preparedness" very strange. Particularly the subtopic of space governance, which is briefly, indirectly touched on in this post. I don't agree with the AGI alignment/safety view, but at least that view is self-consistent. If you don't solve all the alignment/safety problems (and, depending on your view, the control problems as well), then the creation of AGI will almost certainly end badly. If you do solve all these problems, the creation of AGI will lead to these utopian effects like massively faster economic growth and progress in science and technology. So, solving these problems should be the priority. Okay, I don't agree, but that is a self-consistent position to hold.
How is it self-consistent to hold the position that space governance should be a priority? Suppose you believe AGI will be invented in 10 years and will very quickly lead to this rapid acceleration in progress, such that a century of progress happens every decade. That means if you spent 10 years on space governance pre-AGI, in the post-AGI world, you would be able to accomplish all that work in just a year. Surely space governance is not such an urgent priority that it can't wait an extra year to be figured out? 11 years vs. 10? And if you don't trust AGI to figure out space governance or assist humans with figuring it out, isn't that either a capabilities problem — which raises its own questions — or an alignment/safety problem, in which case, shouldn't you be trying to figure that out now, instead of worrying about space governance?
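As a minimal sketch of the arithmetic behind that "extra year" point, using only the figures already in the paragraph above (a 10x speedup implied by "a century of progress every decade" and 10 years of pre-AGI work):

```python
# Illustrative only: the "extra year" arithmetic from the paragraph above.
speedup = 100 / 10              # "a century of progress every decade" => 10x
pre_agi_years = 10              # years of space governance work done before AGI
post_agi_equivalent = pre_agi_years / speedup
print(post_agi_equivalent)      # 1.0 -- the same work takes roughly one year post-AGI
```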
Would it be any less strange if, instead of space governance, one of MacAskill's priorities for AGI preparedness was, say, developing drought-resistant crops, or making electric vehicles more affordable, or reconciling quantum theory with general relativity, or copyright reform? Is space governance any less arbitrary than any of these other options, or a million more?
In my view, MacAskill has made a bad habit of spinning new ideas off of existing ones in a way that results in a particular failure mode. Longtermism was a spin-off of existential risk, and after about 8 years of thought and discussion on the topic, there isn't a single compelling longtermist intervention that isn't either 1) something to do with existential risk or 2) something that makes sense to do anyway for neartermist reasons (and that people are already trying to do to some extent, or even the fullest extent). Longtermism doesn't offer any novel, actionable advice over and above the existential risk scholarship that preceded it — and the general tradition of long-term thinking that preceded that. MacAskill came up with longtermism while working at Oxford University, which is nearly 1,000 years old. ("You think you just fell out of a coconut tree?")
AGI preparedness seems to be the same mistake again, except with regard to AGI alignment/safety and AI existential risk, specifically, rather than existential risk in general.
existential risk : longtermism :: AGI alignment/safety : AGI preparedness
It's not clear what it would make sense for AGI preparedness to recommend over and above AGI alignment/safety, if anything. Once again, this is a new idea spun off from an idea with a longer intellectual history, presented as actionable, and presented with a kind of urgency (yes, perhaps ironically, longtermism was presented with urgency), even though it's hard to understand what about this new idea is actionable or practically important. Space governance is easy to pick on, but it seems like the same considerations that apply to space governance — it can safely be delayed or, if it can't, that points to an AGI alignment/safety problem that should get the attention instead — apply to the other subtopics of AGI preparedness, except the ones that involve AGI alignment/safety.
Moral patienthood of AGIs is possibly the sole exception. That isn't an AGI alignment/safety problem, and it isn't something that can be safely delayed until after AGI is deployed for a year, because AGIs might already be enslaved, oppressed, imprisoned, tortured, killed, or whatever before that year is up. But in this case, rather than a whole broad category of AGI preparedness, we just need the single topic of AGI moral patienthood. Which is not a novel topic, and predates the founding of Forethought and MacAskill's discussion of AGI preparedness.
Disclaimer: I have also applied to Forethought and won't comment on the post directly due to competing interests.
On space governance, you assume 2 scenarios:
We don't solve all the alignment/safety problems and everything goes very badly
We solve all the problems and AGI leads to utopian effects
I agree that early space governance work is plausibly not that important in those scenarios, but in what percentage of futures do you see us reaching one of these extremes? Capabilities allowing for rapid technological progress can be achieved under various scenarios related to alignment and control that are not at the extremes:
Scenario A: Fast progress without AGI (scaling limits overcome, algorithmic breakthroughs, semi-autonomous robotics).
Scenario B: Uneven AGI (not catastrophically misaligned, multipolar, corporate-controlled).
Scenario C: AGI that's aligned to someone but not globally aligned.
Scenario D: Multiple AGIs controlled by different actors with competing goals.
And capabilities allowing for rapid technological progress can be developed independently of capabilities allowing for the great wisdom to solve and reshape all our space governance problems. This independence of capabilities could happen under any of those scenarios:
Maybe AI remains under control but important people don't listen to or trust the wisdom from the ASI.
Some challenges associated with developing AGI are not solved and AI emerges as narrowly intelligent but still capable of advanced robotics like autonomous construction and self-replication (so we still get rapid technological progress), but not the wisdom to solve all our governance problems.
Legal or societal forces prevent AI from taking a leading role in governance.
So, under many scenarios, I don't expect AGI to just solve everything and reshape all our work on space governance. But even if it does reshape the governance, some space-related lock-ins remain binding even with AGI:
Resource distribution lock-in: which states and corporations have physical access to asteroids, propellant depots, lunar poles, launch capacity.
Institutional lock-in: whatever coordination mechanisms exist at AGI creation time are what AGI-augmented institutions will inherit.
Strategic stability lock-in: early military architectures (Lagrange-point sensors, autonomous interceptors) become entrenched.
In all these scenarios, early space industrialisation and early high-ground positions create durable asymmetries that AGI cannot trivially smooth over. AGI cannot coordinate global actors instantly. Some of these lock-ins occur before or during the emergence of transformative AI. Therefore, early space-governance work affects the post-AGI strategic landscape and cannot simply be postponed without loss.
The disagreement could then be over whether we reach AGI in a world where space industrialisation has already begun creating irreversible power asymmetries. If a large-scale asteroid mining industry or significant industry on the moon emerges before AGI, then a small group controlling this infrastructure could have a huge first-mover advantage in using that infrastructure to take advantage of rapid technological progress to lock in their power forever or take control of the long-term future through the creation of a primitive Dyson swarm or the creation of advanced space denial capabilities. So, if AI timelines are not as fast as many in this community think they are, and an intelligence explosion happens closer to 2060 than 2030, then space governance work right now is even more important.
Space governance is also totally not arbitrary and is an essential element of AGI preparedness. AGI will operate spacecraft, build infrastructure, and manage space-based sensors. Many catastrophic failure modes (post-AGI power grabs, orbital laser arrays, autonomous swarms, asteroid deflection misuse) require both AGI and space activity. If it turns out that conceptual breakthroughs don't come about and we need ridiculous amounts of energy/compute to train superintelligence, then space expansion is also a potential pathway to achieving superintelligence. Google is already working on Project Suncatcher to scale machine learning in space, and Elon Musk, who has launched 9,000 Starlink satellites into Earth orbit, has also discussed the value of solar-powered satellites for machine learning. All of this ongoing activity is linked to the development of AGI and locks in physical power imbalances post-AGI.
As I argued in my post yesterday, even without the close links between space governance and AGI, it isn't an arbitrary choice of problem. I think that if a global hegemony doesn't emerge soon after the development of ASI, then it will likely emerge in outer space through the use of AI or self-replication to create large-scale space infrastructure (allowing massive energy generation and access to interstellar space). So, under many scenarios related to the development of AI, competition and conflict will continue into outer space, where the winner could set the long-term trajectory of human civilisation or the ongoing conflict could squander the resources of the galaxy. This makes space governance more important than drought-resistant crops.
All you have to admit for space governance to be exceptionally important is that some of these scenarios where AGI initiates rapid technological progress but doesn't reshape all governance are fairly likely.
This reply seems to confirm that my objection about space governance is correct. The only reasons to worry about space governance pre-AGI, if AGI is imminent, are a) safety/alignment/control problems or value lock-in problems, which I would lump in with safety/alignment/control and/or b) weaknesses in AI capabilities that either mean we don't actually achieve AGI after all, or we develop something that maybe technically or practically counts as "AGI" — or something close enough — and is highly consequential but lacks some extremely important cognitive or intellectual capabilities, and isn't human-level in every domain.
If either (a) or (b) is true, then this indicates the existence of fundamental problems or shortcomings with AI/AGI that will universally affect everything, not just affect outer space specifically. For instance, if space governance is locked in, that implies a very, very large number of other things would be locked in, too, many of which are far more consequential (at least in the near term, and maybe in the long term too) than outer space. Would authoritarianism be locked in, indefinitely, in authoritarian countries? What about military power? Would systemic racism and sexism be locked in? What about class disparities? What about philosophical and scientific ideas? What about competitive advantages and incumbent positions in every existing industry on Earth? What about all power, wealth, resources, advantages, ideas, laws, policies, structures, governments, biases, and so on?
The lock-in argument seems to imply that a certain class of problems or inequalities that exists pre-AGI will exist post-AGI forever, and in a sense significantly worsen, not just in the domain of outer space but in many other domains, perhaps every domain. This would be a horrible outcome. I don't see why the solution should be to solve all these problems and inequalities within the next 10 years (or whatever it is) before AGI, since that seems unrealistic. If we need to prepare a perfect world before AGI because AGI will lock in all our problems forever, then AGI seems like a horrible thing, and it seems like we should focus on that, rather than rushing to try to perfect the world in the limited remaining time before it arrives (a forlorn hope).
Incidentally, I do find it puzzling to try to imagine how we would manage to create an AI or AGI that is vastly superhuman at science and technology but not superhuman at law, policy, governance, diplomacy, human psychology, social interactions, philosophy, and so on, but my objection to space governance doesn't require any assumption about whether this is possible or likely. It still works just as well either way.
As an aside, there seems to be no near-term prospect of solar panels in outer space capturing energy at a scale or cost-effectiveness that rivals solar or other energy sources on the ground — barring AGI or some other radical discontinuity in science and technology. The cost of a single satellite is something like 10x to 100x to 1,000x more than the cost of a whole solar farm on the ground. Project Suncatcher is explicitly a "research moonshot". The Suncatcher pre-print optimistically foresees a world of ~$200 per kilogram of matter sent to orbit (down from around a 15x to 20x higher cost on SpaceX's Falcon 9 currently). ~$200 per kilogram is still immensely costly. In my quick math, that's ~$4,000 per typical ground-based solar panel, which is ~10–30x the retail price of a rooftop solar panel. This is to say nothing of the difficulties of doing maintenance on equipment in space.[1]
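To make that quick math explicit, here is a minimal sketch; the panel mass (~20 kg) and the retail price range ($150–$400) are assumed figures of mine, not taken from the comment above:

```python
# Rough sketch of the launch-cost arithmetic above (illustrative assumptions only).
launch_cost_per_kg = 200            # optimistic Suncatcher figure, USD per kg to orbit
panel_mass_kg = 20                  # assumed mass of a typical rooftop solar panel
launch_cost_per_panel = launch_cost_per_kg * panel_mass_kg
print(launch_cost_per_panel)        # 4000 -> ~$4,000 just to launch one panel's mass

retail_low, retail_high = 150, 400  # assumed retail price range for a rooftop panel, USD
print(launch_cost_per_panel / retail_high)   # ~10x retail
print(launch_cost_per_panel / retail_low)    # ~27x retail, i.e. roughly 10-30x
```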
The barriers to deploying more solar power on the ground are not a lack of space — we have plenty of places to put them — but cost and the logistical/bureaucratic difficulties with building things. Putting solar in space increases costs by orders of magnitude and comes with logistical and bureaucratic difficulties that swamp anything on the ground (e.g. rockets are considered advanced weapon tech and American rocket companies can't employ non-Americans).[2]
A recent blog post by a pseudonymous author who claims to be "a former NASA engineer/scientist with a PhD in space electronics" and a former Google employee with experience with the cloud and AI describes some of the problems with operating computer equipment in space. An article in MIT Technology Review from earlier this year quotes an expert citing some of the same concerns.
Elon Musk's accomplishments with SpaceX and Tesla are considerable and shouldn't be downplayed or understated. However, you also have to consider the credibility of his predictions about technology in light of his many, many, many failed predictions, projects, and ideas, particularly around AI and robotics. See, for example, his long history of predictions about Level 4–5 autonomy and robotaxis, or about robotic automation of Tesla's factories. The generous way to interpret this is as a VC-like model where a 95%+ failure rate is acceptable because the small percentage of winning bets pay off so handsomely. This is more or less how I interpreted Elon Musk in roughly the 2014–2019 era. Incidentally, in 2012, Musk said that space-based solar was "the stupidest thing ever" and that "it's super obviously not going to work".
In recent years, Musk's reliability has gotten significantly worse: he has become addicted to ketamine, been politically radicalized by misinformation on Twitter, and seen his personal life become more chaotic and dysfunctional (e.g. his fraught relationships with the many mothers of his many children, his estrangement from his oldest daughter); and, in the wake of the meteoric rise in Tesla's stock price and the end of its long period of financial precarity, he began to demand loyalty and agreement from those around him, rather than keeping advisors and confidants who could push back on his more destructive impulses or give him sober second thoughts. Musk is, tragically, no longer a source I consider credible or trustworthy, despite his very real and very important accomplishments. It's terrible because if he had taken a different path, he could have contributed so much more to the world.
It's hard to know for sure how to interpret Musk's recent comments on space-based solar and computing, but it seems like they're probably conditional on AGI being achieved first. Musk has made aggressive forecasts on AGI:
In December 2024, he predicted on Twitter that AGI would be achieved by the end of 2025 and superintelligence in 2027 or 2028, or by 2030 at the latest
In February 2025, he predicted in an interview that AGI would be achieved in 2026 or 2027 and superintelligence by 2029 or 2030
In September 2025, he predicted in an interview that AGI would be achieved in 2026 and superintelligence by 2030
In October 2025, he predicted on Twitter that "Grok 5 will be AGI or something indistinguishable from AGI" (Grok 5 is supposed to launch in Q1 2026)
Musk's recent comments about solar and AI in outer space are about the prospects in 2029–2030, so, combining that with his other predictions, this would seem to imply he's talking about a post-AGI or post-superintelligence scenario. It's not clear that he believes there's any prospect of this in the absence of AGI. But even if he does believe that, we should not accept his belief as credible evidence for that prospect, in any case.