Your accusation of bad faith seems to rest on your view that the constraints imposed by the laws of physics on space travel make an alien invasion or attack extremely improbable. Such an event may indeed be extremely improbable, but the laws of physics do not say so.
I have to imagine that you are referring to the speeds of spacecraft and the distances involved. The Milky Way Galaxy is about 100,000 light-years in diameter, organized along a plane in a disc roughly 1,000 light-years thick. NASA’s Parker Solar Probe has travelled at about 0.064% of the speed of light. Let’s round that down to 0.05% of the speed of light for simplicity. At 0.05% of the speed of light, the Parker Solar Probe could travel between the two farthest points in the Milky Way Galaxy in 200 million years.
That means that if the maximum speed of spacecraft in the galaxy were limited to the top speed of NASA’s fastest space probe today, an alien civilization that reached an advanced stage of science and technology — perhaps including things like AGI, advanced nanotechnology/atomically precise manufacturing, cheap nuclear fusion, interstellar spaceships, and so on — more than 200 million years ago would have had plenty of time to establish a presence in every star system of the Milky Way. At 1% of the speed of light, the window of time shrinks to 10 million years, and so on.
Designs for spacecraft that credible scientists and engineers think Earth could actually build in the near future include light sail-based probes (such as the Breakthrough Starshot concept) that would supposedly travel at 15-20% of the speed of light. Such a probe could traverse the diameter of the Milky Way in under 1 million years at top speed. Acceleration and deceleration complicate the picture somewhat, but the fundamental idea still holds.
If there are alien civilizations in our galaxy, we don’t have any clear, compelling scientific reason to think they wouldn’t be many millions of years older than our civilization. The Earth formed 4.5 billion years ago, so if a habitable planet elsewhere in the galaxy formed just 10% sooner and life there followed the same trajectory as life on Earth, the aliens would be 450 million years ahead of us. Plenty of time to reach everywhere in the galaxy.
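For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch (my own illustration, not drawn from NASA or from any of the probe concepts mentioned above) of where those numbers come from: crossing time in years is just the galaxy's diameter in light-years divided by speed as a fraction of the speed of light, and the head start is 10% of the Earth's age.

```python
# Back-of-the-envelope numbers for the argument above.
# Ignores acceleration, deceleration, and relativistic effects entirely.

GALAXY_DIAMETER_LY = 100_000   # Milky Way diameter in light-years
EARTH_AGE_YEARS = 4.5e9        # age of the Earth in years

# Crossing time (years) = distance (light-years) / speed (fraction of c)
speeds = {
    "Parker Solar Probe, rounded down (~0.05% of c)": 0.0005,
    "1% of c": 0.01,
    "Light-sail probe concept, low end (15% of c)": 0.15,
}

for label, fraction_of_c in speeds.items():
    crossing_years = GALAXY_DIAMETER_LY / fraction_of_c
    print(f"{label}: {crossing_years:,.0f} years to cross the galaxy")

# Head start of a civilization on a planet that formed 10% sooner than Earth
print(f"10% head start: {0.10 * EARTH_AGE_YEARS:,.0f} years")

# Expected output:
#   Parker Solar Probe, rounded down (~0.05% of c): 200,000,000 years to cross the galaxy
#   1% of c: 10,000,000 years to cross the galaxy
#   Light-sail probe concept, low end (15% of c): 666,667 years to cross the galaxy
#   10% head start: 450,000,000 years
```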
The Fermi paradox has been considered and discussed by people working in physics, astronomy, rocket/spacecraft engineering, SETI, and related fields for decades. There is no consensus on the correct resolution to the paradox. Certainly, there is no consensus that the laws of physics resolve it.
So, if I’m understanding your reasoning correctly — that surely I must be behaving in a dishonest or deceitful way, i.e. engaging in bad faith, because obviously everyone knows the constraints imposed by the laws of physics on space travel make an alien attack on Earth extremely improbable — then your accusation of bad faith seems to rest on a mistake.
Thanks for giving me the opportunity to get into this; the Fermi paradox is always so much fun to talk about.
My list is very similar to yours. I believe items 1, 2, 3, 4, and 5 have already been achieved to substantial degrees and we continue to see progress in the relevant areas on a quarterly basis. I don’t know about the status of 6.
It’s hard to know what “to substantial degrees” means. That sounds very subjective. Without the “to substantial degrees” caveat, it would be easy to prove that 1, 3, 4, and 5 have not been achieved, and fairly straightforward to make a strong case that 2 has not been achieved.
For example, it is simply a fact that Waymo vehicles have a human in the loop — Waymo openly says so — so Waymo has not achieved Level 4/5 autonomy without a human in the loop. Has Waymo achieved Level 4/5 autonomy without humans in the loop “to a substantial degree”? That seems subjective. I don’t know what “to a substantial degree” means to you, and it might mean something different to me, or to other people.
As far as I’m aware, humanoid robots have not found any profitable new applications in recent years. Again, I don’t know what achieving this “to a substantial degree” might mean to you.
I would be curious to know what progress you think has been made recently on the fundamental research problems I mentioned, or what the closest examples are of LLMs engaging in the sort of creative intellectual act I described. I imagine the examples you have in mind are not something the majority of AI experts would agree fit the descriptions I gave.
For clarity on item 1, AI company revenues in 2025 are on track to cover 2024 costs, so on a product basis, AI models are profitable; it’s the cost of new models that pulls annual figures into the red. I think this will stop being true soon, but that’s my speculation, not evidence, so I remain open to the possibility that scaling will continue to make progress towards AGI, potentially soon.
We should distinguish here between gold mining and selling picks and shovels. I’m talking about applications of LLMs and AI tools that are profitable for end users. Nvidia is extremely profitable because it sells GPUs to AI companies. In theory, AI companies could become profitable by selling AI models as a service (e.g. API tokens, subscriptions) to businesses. But would those business customers then see any profit from their use of LLMs (or other AI tools)? That’s what I’m talking about. Nvidia is selling picks and shovels, and to some extent even the AI companies are selling picks and shovels. Where’s the gold?
The six-item list I gave consisted of things that — each on its own, but especially in combination — would go a long way toward convincing me that I’m wrong and that my near-term AGI skepticism is a mistake. When you say your list is similar, I’m not quite sure what you mean. Do you mean that if those things didn’t happen, that would convince you that the probability or level of credence you assign to near-term AGI is way too high? I was trying to ask you what evidence would convince you that you’re wrong.
Do you stand by your accusation of bad faith?