AI Value Alignment Speaker Series

Presented By EA Berkeley

Value alignment, roughly speaking, is the problem of ensuring that AI agents behave in a manner consistent with the values of their human principals. This modern incarnation of the principal-agent problem has historical roots in politics (how do we ensure that a government acts on behalf of the people?) and economics (how do we ensure that a corporation acts on behalf of those who have invested their resources in it?), but it is also at the crux of recent existential risk work in AI safety, such as Stuart Russell's Human Compatible and Brian Christian's The Alignment Problem. The purpose of the AI Value Alignment Speaker Series is to introduce people early in their studies and careers to the issues of value alignment, so that they can consider educational and career paths that help address the alignment problem.

In addition to the chance to learn from and speak with experts in the field, participating in this series comes with other opportunities. For example, Brian Christian, co-author of Algorithms to Live By (a Top 5 Amazon Best Seller in Computer Science six years after publication), and Rohin Shah (DeepMind), editor of the Alignment Newsletter, have each agreed to a 30-minute one-on-one virtual meal with a consistent attendee of the series. There will also be book giveaways and other opportunities. Please attend regularly to be considered for these opportunities.

Here is our current working schedule (all times US Pacific):

Brian Christian (The Alignment Problem author and Algorithms to Live By co-author)

The Alignment Problem: A Q&A

March 1st, 2022 5P-6P

Rick Ferri (President of the John C. Bogle Center & Bogleheads’ Guide to Retirement Planning co-author)

Ellen Quigley (Centre for the Study of Existential Risk & Advisor to the CFO, University of Cambridge)

Universal Ownership: Is Index Investing the New Socially Responsible Investing?

March 15th, 2022 11A-1P

Aaron Tucker (Cornell), Caroline Jeanmarie (Oxford), Jon Ward (OpenAI)

Value Alignment: Early Career and Educational Guidance from Experts

April 5th, 2022 5P-6P

Roman Yampolskiy (Chapman & Hall/CRC Artificial Intelligence and Robotics Series Editor)

A Fireside Chat

April 8th, 2022 9A-11A

Seth Baum (Executive Director of the Global Catastrophic Risk Institute)

Limits of the Value Alignment Paradigm

April 13th, 2022 10A-11A

Rohin Shah (DeepMind and Founding Editor of the Alignment Newsletter)

What is Value Alignment?

April 26th, 2022 Noon-1:30P

How to Attend: https://berkeley.zoom.us/j/958068842EA76? Password: EASpeaker