I think the type of early deal that would be most valuable is one where the US and China both agree to produce a joint "consensus" ASI aligned to "the good". In more detail:
The US and China, as you note, are unsure who will win, and would be better off making a deal to preserve some minimum amount of future influence. But I think I am more worried than you about the costs of continued multipolarity into space colonisation. You write "Even having two alternative systems might open up the possibility for comparison, healthy competition, and moral trade." War, threats, and unhealthy competition (e.g., burning the cosmic commons) also seem like important possibilities here.
Instead, I think it would be better to have a joint superintelligence that coordinates the use of our cosmic endowment, with the US and China each holding some amount of influence within the ASI's "moral parliament".
Even just that would, I think, be preferable to dividing the universe into two camps: it is easier to make moral trades within one agent acting under moral uncertainty than to coordinate between two agents.
A better version, though, could involve the US and China agreeing on some core moral precepts, or just a moral reflection process, and then jointly designing a moral curriculum for the proto-ASI including plenty of Western and Chinese texts, and letting the ASI do as it sees fit. Presumably both sides genuinely believe they are right, and that an appropriate moral training process for the AI will lead to liberalism/Socialism with Chinese characteristics. So this exploits the two sides having different credences (whereas, as you note, your proposed deals are possible even if both sides have the same credences). This creates a larger surplus for possible agreements.
Of course, agreeing to create a joint ASI could also have big nearer-term benefits, e.g. avoiding racing, slowing down AI progress, and investing more in safety.
This proposal is clearly very far outside the Overton window currently, but I don't think it is much worse on feasibility than your proposed great power resource-sharing deals. It also solves the enforcement challenge, which is convenient, since we might have needed to create such a consensus AI to enforce a different sort of deal anyway.
I am tentatively excited about this proposal, but I expect there isn't much to do to further it until the relevant parties are taking things more seriously.
While I don't work in GHD, I still enjoy reading GHD content on the Forum and on Substack. I agree that interesting questions in GHD are far from solved, but I wonder if a lot of the low-hanging intellectual fruit has been picked (your number 5)? I wasn't around in early GiveWell days, but I imagine that would have been an amazing time to be thinking about GHD and coming up with lots of new approaches and ideas. For instance, I haven't found GiveWell's research that surprising or interesting lately (a vibes-based impression; I don't engage that closely with them anymore).
I would be keen to hear more from CE charities about what things they are learning and what questions they are facing!
Re your solution #2, I think I probably wouldn't want the Forum team to show "favouritism", but the decline of GHD curated posts is interesting, and maybe that should change.