If it is to gather resources en route, it must accelerate those resources to its own speed. Or alternatively, it must slow down to a halt, pick up resources and then continue. This requires a huge expenditure of energy, which will slow down the probe.
Bussard ramjets might be viable, but I’m skeptical that they could be faster than the propulsion ideas in the Sandberg/Armstrong paper. Anyway, you seem to be talking about spacecraft that will consume planets, not Bussard ramjets.
Going from 0.99c to 0.999c requires an extraordinary amount of additional energy for very little increase in speed. At that point, the sideways deviations required to reach waypoints (like if you want to swing by nearby stars instead of staying on a straight line) would matter more than the extra speed. It would be faster to go 0.99c in a straight line than 0.999c through a series of waypoints.
If we are talking about going from 0.1c to 0.2c, then spending the extra energy on speed makes more sense.
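To put rough numbers on this (a back-of-the-envelope sketch of my own, not anything from the paper; units of c = 1, so kinetic energy per unit rest mass is gamma − 1):

```python
# Back-of-the-envelope comparison (my own arithmetic, units of c = 1):
# relativistic kinetic energy per unit rest mass is gamma - 1.

def gamma(beta: float) -> float:
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / (1.0 - beta**2) ** 0.5

for lo, hi in [(0.99, 0.999), (0.1, 0.2)]:
    ke_lo = gamma(lo) - 1.0
    ke_hi = gamma(hi) - 1.0
    print(f"{lo}c -> {hi}c: kinetic energy grows {ke_hi / ke_lo:.1f}x "
          f"for a {100 * (hi - lo) / lo:.0f}% gain in speed")

# Prints roughly:
#   0.99c -> 0.999c: kinetic energy grows 3.5x for a 1% gain in speed
#   0.1c -> 0.2c: kinetic energy grows 4.1x for a 100% gain in speed
```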
It’s true that making use of resources while matching the probe’s speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the energy left over after bringing the gathered mass up to speed is insufficient, then the probe won’t be able to go any faster. Even if there’s no more efficient way to extract resources than full deceleration and re-acceleration, I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c. In that case the main competitive pressure among probes would be minimizing the number of stop-overs.
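As a rough illustration of both points, with made-up stop spacings and stop-over durations chosen only to show the scale:

```python
# A minimal sketch (my own made-up numbers, units of c = 1) of (a) the
# energy cost of bringing gathered rest mass up to the probe's speed and
# (b) how much periodic full stops drag down the average speed.

def gamma(beta: float) -> float:
    return 1.0 / (1.0 - beta**2) ** 0.5

beta = 0.99

# (a) Accelerating one unit of gathered rest mass to 0.99c costs at least
# (gamma - 1) of its own rest-mass energy, before any inefficiencies.
print(f"energy to match {beta}c: {gamma(beta) - 1:.1f} units of m*c^2")

# (b) Cruise at beta between stops spaced 'spacing_ly' light-years apart,
# losing 'stop_yr' years per stop (deceleration + extraction +
# re-acceleration, measured in the galaxy's rest frame).
def average_speed(beta: float, spacing_ly: float, stop_yr: float) -> float:
    return spacing_ly / (spacing_ly / beta + stop_yr)

for spacing_ly, stop_yr in [(100, 1), (100, 10), (1000, 10)]:
    print(f"stops every {spacing_ly} ly, {stop_yr} yr each: "
          f"average speed {average_speed(beta, spacing_ly, stop_yr):.3f}c")

# Prints roughly:
#   energy to match 0.99c: 6.1 units of m*c^2
#   stops every 100 ly, 1 yr each: average speed 0.980c
#   stops every 100 ly, 10 yr each: average speed 0.901c
#   stops every 1000 ly, 10 yr each: average speed 0.980c
```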
The highest speed considered in the Armstrong/Sandberg paper is 0.99c, which is high enough for my qualitative picture to be relevant. Re-skimming the paper, I don’t see an explicitly stated reason why the limit is there, although I note that any higher speed won’t affect their conclusion about the Fermi paradox and potential past colonizers visible from Earth. The most significant technological reasons for this limit that I see them address are the energy cost of deceleration and damage from collisions with dust particles, and neither seems to entirely exclude faster speeds.
Yes, at such high speeds optimizing lateral motion becomes very important, and the locations of concentrated sources of energy can affect the geometry of the expansion frontier. For a typical target I’m not sure whether the optimal route would involve swerving to a star or galaxy, or whether the interstellar dust and dark matter along the direct path would be sufficient. For any particular route I expect a probe to compete with other probes taking a similar route, so there will still be competitive pressure to push speeds above 0.99c if that is technologically feasible.
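As a toy illustration of that tradeoff (my own made-up distances, ignoring the energy needed to steer and whatever is gained at the waypoint), a 0.999c dog-leg only beats a 0.99c straight line while the extra path length stays under the roughly 1% of travel time saved by the higher speed:

```python
# Toy route comparison (made-up distances): straight line at 0.99c versus a
# dog-leg at 0.999c through one waypoint offset sideways at the midpoint.
# Times are in the galaxy's rest frame, distances in light-years.

L = 100.0              # distance to the target (placeholder value)
straight_yr = L / 0.99

for offset in [1.0, 5.0, 20.0]:
    dogleg_length = 2.0 * ((L / 2.0) ** 2 + offset**2) ** 0.5
    dogleg_yr = dogleg_length / 0.999
    winner = "dog-leg" if dogleg_yr < straight_yr else "straight line"
    print(f"offset {offset:4.1f} ly: straight {straight_yr:.1f} yr, "
          f"dog-leg {dogleg_yr:.1f} yr -> {winner} wins")

# Prints roughly:
#   offset  1.0 ly: straight 101.0 yr, dog-leg 100.1 yr -> dog-leg wins
#   offset  5.0 ly: straight 101.0 yr, dog-leg 100.6 yr -> dog-leg wins
#   offset 20.0 ly: straight 101.0 yr, dog-leg 107.8 yr -> straight line wins
```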
A lot of what I’m saying remains the same if the maximal technologically achievable speed is subrelativistic. In other ways such a picture would be different; in particular, the coordination problems would be substantially easier if there is time for extensive two-way communication between all the probes and all the colonized areas.
Again, I see a lot of potential follow-up work in precisely delineating how different assumptions about what is technologically possible affect my picture.