Executive summary: The author argues that as AI capabilities accelerate, we may face fast, high-stakes “lock-in” moments that shape the long-term future under deep uncertainty, so we should proactively draft principles and “seed documents” now to influence those decisions and establish a minimal moral floor that prevents catastrophic dystopias.
Key points:
The author compares the 1494 Treaty of Tordesillas to potential AI “lock-in” moments, where powerful actors could rapidly make irreversible decisions about the long-term future under severe empirical and moral uncertainty.
They suggest that treaties, constitutions, and power transitions are historically “sticky” events that define who has authority and which value systems shape long-term governance.
The author highlights plausible near-term lock-in scenarios such as an international AI governance convention, frontier labs handing control to AI systems, or US government nationalization or secret agreements with AI companies.
They argue that publicly drafting “seed documents” and principles now could influence future high-pressure negotiations by shaping the language, frames, and assumptions used by decision-makers.
The post claims that even current AI “constitutions” and model specs may already be embedding assumptions that could shape long-term governance, especially if AI systems help design their successors.
The author proposes building a broadly shared moral “floor” by identifying futures to lock out, such as “immortal sadistic dictators” or designing minds to misreport suffering, while acknowledging deep uncertainty about ultimate values.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.