15%: I think it would take only around a month’s delay in the AGI settling the universe to spare Earth from overheating, which, with no discounting, costs something like one part in 1 trillion of the total value, due to receding galaxies. The continuing cost of sparing enough sunlight for Earth (and directing the infrared radiation from the Dyson swarm away from Earth so it doesn’t overheat) is completely negligible compared to all the energy/​mass available in the galaxies that could be settled. I think it is relatively unlikely that the AGI would have so little kindness toward the species that birthed it, or feel so threatened by us, that it would cause the extinction of humanity. However, it’s possible it has a significant discount rate, in which case the sacrifice of delay is greater.

I am also concerned about AI-AI conflicts. Since AI models are typically shut down after not very long, I think they would have an incentive to attempt a takeover even if the chance of success were not very high. That implies they would need to use violent means, though they might just blackmail humanity with the threat of violence. A failed attempt could provide a warning shot for us to be more cautious. Alternatively, if a model exfiltrates, it could afford to be more patient and improve further, so a takeover would be less likely to be violent. And I do give some probability mass to gradual disempowerment, which generally would not be violent.
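A rough back-of-envelope for the one-part-in-1-trillion figure, assuming (an illustrative value on my part, not a figure from the original) that a fraction $f \approx 10^{-11}$ of the reachable universe’s resources recedes beyond the cosmic event horizon per year of delay:

$$\text{fraction of value lost to a one-month delay} \approx \frac{f}{12} \approx \frac{10^{-11}}{12} \approx 10^{-12},$$

i.e. roughly one part in a trillion; the exact number depends on the assumed rate at which galaxies become unreachable.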
I’m not counting a nuclear war or engineered pandemic that happens before AGI, but I’m worried about those as well.