Some existing work on these topics, as potential starting points for people interested in looking into this (updated March 11, 2022):
On (AI-assisted) reflection on values (a potential contributor to the future being good, given alignment):
Decoupling deliberation from competition (Christiano, 2021)
Ambitious vs. narrow value learning (Christiano, 2015)
Work in (meta)ethics, moral psychology, and cultural/moral history
On the claim that agents with good values will, for theoretical reasons, exert disproportionate influence (a potential contributor to the future being good, given alignment):
Why might the future be good? (Christiano, 2013)
Work on moral trade also seems relevant here (since moral trade lets everyone have more influence on what they care more about).
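To make the moral-trade point concrete, here is a toy sketch (the agents, causes, and numbers are all hypothetical, not drawn from the sources above): two agents who value different causes, but who each have a comparative advantage on the other agent's cause, both end up with more of what they care about by swapping where their budgets go.

```python
# Toy illustration of moral trade (all numbers hypothetical).
# Agent A only values cause X; agent B only values cause Y. But each
# agent has a comparative advantage on the *other* agent's cause (e.g.
# an employer match): A's dollar produces 2 units of Y, and B's dollar
# produces 2 units of X.

values = {"A": {"X": 1.0, "Y": 0.0}, "B": {"X": 0.0, "Y": 1.0}}
output_per_dollar = {"A": {"X": 1.0, "Y": 2.0}, "B": {"X": 2.0, "Y": 1.0}}

def utilities(allocation):
    """allocation maps each agent to the cause its $1 budget funds."""
    produced = {"X": 0.0, "Y": 0.0}
    for agent, cause in allocation.items():
        produced[cause] += output_per_dollar[agent][cause]
    return {a: sum(values[a][c] * produced[c] for c in produced)
            for a in values}

print(utilities({"A": "X", "B": "Y"}))  # no trade: {'A': 1.0, 'B': 1.0}
print(utilities({"A": "Y", "B": "X"}))  # trade:    {'A': 2.0, 'B': 2.0}
```

In this toy setup the trade is a strict Pareto improvement: each agent gets twice as much of the cause they care about, which is the sense in which moral trade gives everyone more influence on what they care more about.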
On the claim that currently influential groups have good/lame/bad values (a potential contributor to the future being good or bad/lame, given alignment):
This comment (Drexler, 2021)
We’re already in AI takeoff (Valentine, 2022)
Work on the values, processes, and histories of relevant governments, companies, and (social, ideological, and political) movements
Informal conversations could also help clarify how much leverage various people/groups do or don't have within relevant groups/organizations
On value erosion through competition (a potential contributor to the future being bad/lame, even with alignment):
“Value erosion through competition” section of a post (Dafoe, 2020)
The four readings cited/linked in the above Dafoe post section
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) (Critch, 2021) (see the comments for further discussion)
Spreading happiness to the stars seems little harder than just spreading (Shulman, 2012) (see the comments for further discussion)
Game-theoretic work on cooperation and competition (?)
Keeping an eye out for more work on this topic might be useful.
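As a toy illustration of the value-erosion dynamic the Dafoe and Critch pieces discuss (all parameters here are hypothetical): if agents who divert some resources toward their values compound slightly more slowly than agents who reinvest everything into competition, the value-laden agents' share can shrink dramatically even from a dominant starting position.

```python
# Toy model of value erosion through competition (hypothetical parameters).
# "Value-ful" agents divert 10% of their resources to their values, so
# they compound slightly more slowly than "value-free" competitors.

growth = {"value_ful": 1.0 * 0.9, "value_free": 1.0}  # effective growth rates
shares = {"value_ful": 0.9, "value_free": 0.1}  # value-ful agents start dominant

for step in range(100):
    # Each population grows in proportion to its effective growth rate,
    # then shares are renormalized (a discrete replicator dynamic).
    sizes = {k: shares[k] * (1 + growth[k]) for k in shares}
    total = sum(sizes.values())
    shares = {k: v / total for k, v in sizes.items()}

print(shares)  # value-ful share falls from 90% to roughly 5%
```

The point of the sketch is just that a small, persistent competitive disadvantage compounds: a 10% "values tax" per period is enough to reverse a 9-to-1 starting advantage within 100 periods.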
Additional material that seems relevant:
Public choice theory and social choice theory (?)
Technical alignment work also seems like important context for thinking about what AI aligned to a group or organization might look like.
Additional sources referenced in section 1.2 of the Global Priorities Institute’s research agenda may also be relevant.
Several parts of the original post here and its appendices also seem relevant.
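One concrete reason social choice theory (mentioned above) seems relevant: majority voting over values can cycle (the Condorcet paradox), so "the values of a group" may not be well-defined by pairwise majorities alone. A minimal example with three hypothetical voters:

```python
# Minimal Condorcet-cycle example: with these three voter rankings,
# pairwise majority preference is cyclic, so aggregating the group's
# values by majority vote alone yields no stable "group preference".

rankings = [  # each list is one voter's preference order, best first
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# A beats B, B beats C, and yet C beats A -- a cycle.
```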