AI Benefits Post 2: How AI Benefits Differs from AI Alignment & AI for Good

This is a post in a series on "AI Benefits." It is cross-posted from my personal blog. For other entries in this series, navigate to the AI Benefits Blog Series Index page.

This post is also discussed on LessWrong.

For comments on this series, I am thankful to Katya Klinova, Max Ghenis, Avital Balwit, Joel Becker, Anton Korinek, and others. Errors are my own.

If you are an expert in a relevant area and would like to help me further explore this topic, please contact me.

How AI Benefits Differs from AI Alignment & AI for Good

The Values Served by AI Benefits Work

Benefits plans need to optimize for a number of objectives.[1] The foremost is simply maximizing wellbeing. But AI Benefits work has secondary goals, too, including:

  1. Equality: Benefits are distributed fairly and broadly.[2]

  2. Autonomy: AI Benefits respect and enhance end-beneficiaries' autonomy.[3]

  3. Democratization: Where possible, AI Benefits decisionmakers should create, consult with, or defer to democratic governance mechanisms.

  4. Modesty: AI benefactors should be epistemically modest, meaning they should be very careful when predicting how their plans will affect, or interact with, complex systems (e.g., the world economy).

These secondary goals are largely inherited from the stated goals of many individuals and organizations working to produce AI Benefits.

Additionally, since the marginal improvement to wellbeing from additional income probably decreases as income rises, the focus on maximizing wellbeing itself implies a focus on the distributional aspects of Benefits, as the toy calculation below illustrates.
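To make the diminishing-returns point concrete, here is a minimal sketch. This is my own illustration, not a calculation from the post: it assumes logarithmic utility of income, a standard but simplifying assumption, and the dollar figures are invented for the example.

```python
import math

def log_utility(income: float) -> float:
    """Logarithmic utility: a common stand-in for diminishing returns to income."""
    return math.log(income)

BENEFIT = 1_000.0  # hypothetical fixed-size benefit

for income in (10_000.0, 100_000.0):
    gain = log_utility(income + BENEFIT) - log_utility(income)
    print(f"Baseline income ${income:>9,.0f}: wellbeing gain = {gain:.4f}")

# The same $1,000 yields a gain of ~0.0953 at a $10,000 baseline but only
# ~0.0100 at $100,000. Under this assumption, a wellbeing maximizer must
# care about who receives Benefits, not just how many Benefits exist.
```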

How AI Benefits differs from AI Alignment

Another important clarification is that AI Benefits differs from AI Alignment.

Both alignment and beneficiality are ethically relevant concepts. Alignment can refer to several different things. Iason Gabriel of DeepMind provides a useful taxonomy of existing conceptions of alignment. According to Gabriel, "AI alignment" can refer to alignment with:

  1. "Instructions: the agent does what I instruct it to do."

  2. "Expressed intentions: the agent does what I intend it to do."

  3. "Revealed preferences: the agent does what my behaviour reveals I prefer."

  4. "Informed preferences or desires: the agent does what I would want it to do if I were rational and informed."

  5. "Interest or well-being: the agent does what is in my interest, or what is best for me, objectively speaking."

  6. "Values: the agent does what it morally ought to do . . . ."

A system can be aligned in most of these senses without being beneficial. Being beneficial is distinct from being aligned in senses 1–4 because those senses deal only with the desires of a particular human principal, which may or may not be beneficial. Being beneficial is distinct from sense 5 because beneficial AI aims to benefit many or all moral patients, not a single principal. Only AI that is aligned in the sixth sense would be beneficial by definition. Conversely, AI need not be well-aligned to be beneficial (though alignment might help).

How AI Benefits differs from AI for Good

A huge number of projects exist under the banner of "AI for Good." These projects are generally beneficial. However, AI Benefits work is different from simply finding and pursuing an AI for Good project.

AI Benefits work aims at helping AI labs craft a long-term Benefits strategy. Unlike AI for Good, which is tied to specific techniques/capabilities (e.g., NLP) in certain domains (e.g., AI in education), AI Benefits is capability- and domain-agnostic. Accordingly, the pace of AI capabilities development should not dramatically alter AI Benefits plans at the highest level (though it may of course change how they are implemented). Most of my work therefore focuses not on concrete beneficial AI applications themselves, but rather on the process of choosing between and improving possible beneficial applications. This meta-level focus is particularly useful at OpenAI, where the primary mission is to benefit the world by building AGI, a technology with difficult-to-foresee capabilities.


  1. Multi-objective optimization is a very hard problem. Managing this optimization problem both formally and procedurally is a key desideratum for Benefits plans. I do not think I have come close to solving this problem, and would love input on this point.
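    As a sketch of what "formally" could mean here, one standard (and contestable) approach is weighted scalarization: collapse the objectives into a single score. This is my illustration, not a method the post endorses; the plan names, objective scores, and weights below are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class BenefitsPlan:
    name: str
    wellbeing: float  # hypothetical 0-1 scores for each objective
    equality: float
    autonomy: float

# Assumed weights (a value judgment, not a technical output);
# wellbeing is weighted highest, per the post's ordering.
WEIGHTS = {"wellbeing": 0.6, "equality": 0.25, "autonomy": 0.15}

def score(plan: BenefitsPlan) -> float:
    """Collapse multiple objectives into one number via a weighted sum."""
    return (WEIGHTS["wellbeing"] * plan.wellbeing
            + WEIGHTS["equality"] * plan.equality
            + WEIGHTS["autonomy"] * plan.autonomy)

plans = [
    BenefitsPlan("broad cash transfers", wellbeing=0.7, equality=0.9, autonomy=0.8),
    BenefitsPlan("targeted R&D funding", wellbeing=0.9, equality=0.4, autonomy=0.5),
]
best = max(plans, key=score)
print(f"Highest-scoring plan under these weights: {best.name}")

# Note how the ranking can flip if the weights change: the hard part is
# choosing and justifying the weights, which is a procedural question,
# not an optimization one.
```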

  2. OpenAI's Charter commits "to us[ing] any influence we obtain over AGI's deployment to ensure it is used for the benefit of all . . . ."

  3. OpenAI's Charter commits to avoiding "unduly concentrat[ing] power."
