My current first-pass answer is:
Windfall shares. Some fraction of AI company stock should be given, one time, to every human alive.
This still requires some form of largesse/threat, but one-time largesse feels less scary to me than continuously needing to uphold the norm.
And it’s not exactly largesse while people (especially those outside of AI companies) still have real power; it’s more like a structured negotiation.
For political-economy reasons, probably with more given to rich countries and/or countries closer to developing AGI.
I’m imagining ratios of maybe 10:1 (see the toy allocation sketch below).
Not sure about the exact number of shares, but it should be way more than enough to support everybody indefinitely at significantly above modern Western standards, excepting positional goods.
After the initial transfer, this completely solves the largesse and political-economy problems. The “dignity” problem of having your consumption no longer tied to your labor is still there, but I’m less worried about that (it seems more like a framing problem).
Children can still be a problem. My guess is that normal inheritance stuff is enough, though in edge cases maybe we say that you aren’t allowed to completely disown your children from your windfall shares.
If people live forever, maybe we have a rule that reproduction means a minimum fraction of your shares automatically goes to your children; I dunno.
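To make the 10:1 weighting concrete, here’s a minimal toy sketch of the allocation arithmetic. The pool size, tier populations, and tier labels are all made-up assumptions for illustration, not numbers from the proposal.

```python
# Toy allocation sketch -- every number here is made up for illustration.

TOTAL_SHARES = 1_000_000_000  # hypothetical pool of AI stock set aside as windfall shares

# Hypothetical tiers: populations and relative per-person weights (the ~10:1 ratio above).
tiers = {
    "rich_or_agi_developing": {"population": 1.5e9, "weight": 10},
    "everyone_else":          {"population": 6.5e9, "weight": 1},
}

# Each person's allocation is proportional to their tier's weight.
total_weight = sum(t["population"] * t["weight"] for t in tiers.values())
for name, t in tiers.items():
    per_person = TOTAL_SHARES * t["weight"] / total_weight
    print(f"{name}: {per_person:.3f} shares per person")
```

With these toy numbers, people in the higher-weighted tier end up with roughly ten times the per-person allocation of everyone else (about 0.47 vs. 0.05 shares each); the absolute numbers are meaningless, and the ratio is the part that would actually get negotiated.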
Charter. Later on, some version of this is also written directly into the charters of the AIs, so that at minimum something like 0.1-10% of their values goes toward something like all of current humanity’s preferences.
Assuming alignment is solved, superintelligence is then (0.1-10%) on the side of all humanity.
Protection (probably optional). Some form of protection against manipulation, theft, or expropriation.
If there’s a transition period where AIs are good enough to do most work in the economy and generate a lot of wealth and/or disemploy most people, but AI alignment and capabilities aren’t good enough that #2 solves all the new AI-generated problems (e.g. if we’re worried about superpersuader thieves), then we have ad hoc paternalism to prevent obvious ways of stealing people’s windfall shares.
How heavy the paternalism is depends on how serious the different concerns look. E.g. if AI superpersuasion scams are common, maybe we’d just make it legally impossible to transfer windfall shares, in the same way you can’t legally sell your organs in most countries.
To ease the transition, this should be seen in earlier stages as a complement to existing welfare systems rather than a substitute for them. E.g. if someone’s dumb enough to gamble away their monthly AI windfall dividends, different societies can choose either to let them starve or (my preferred solution) to keep feeding them, perhaps until AI-assisted tools can cure their gambling addiction. In general, don’t let “the windfall shares solution can’t solve all of society’s problems” be a blocker to implementing it.
__
tbc I don’t think this is an amazing answer. I worry both that this won’t be enough and that we won’t implement anything as good as this. I don’t know what the bottlenecks to better answers are, or why other people aren’t working on this. Two obvious answers come to mind:
It’s just kind of a hard problem!
Most people don’t “feel the AGI”, and the people who do think they have more important/tractable problems to work on.
I guess the default for me is that Scientist AI won’t be competitive, so we live in a world with both Scientist AI and non-Scientist AI. Conditional on successfully tamping down other approaches enough that Scientist AI gets to the weakly superhuman point while we’re still alive, I’m more optimistic that we can continue to coordinate on doing things safely.