
AI Alignment Forum

Last edit: 14 Mar 2022 15:21 UTC by Pablo

The AI Alignment Forum is a forum for discussing technical research on AI alignment. It superseded the Agent Foundations Forum, which was established around 2015.[1]

A beta version of the site, at the time named the Alignment Forum, was announced on 10 July 2018.[2] The site under its current name was officially launched on 29 October 2018.[3] The authors describe its purpose as follows:

Our first priority is obviously to avert catastrophic outcomes from unaligned Artificial Intelligence. We think the best way to achieve this at the margin is to build an online-hub for AI Alignment research, which both allows the existing top researchers in the field to talk about cutting-edge ideas and approaches, as well as the onboarding of new researchers and contributors.

We think that to solve the AI Alignment problem, the field of AI Alignment research needs to be able to effectively coordinate a large number of researchers from a large number of organisations, with significantly different approaches. Two decades ago we might have invested heavily in the development of a conference or a journal, but with the onset of the internet, an online forum with its ability to do much faster and more comprehensive forms of peer-review seemed to us like a more promising way to help the field form a good set of standards and methodologies.

The AI Alignment Forum is built by Lightcone Infrastructure.

Further reading

Habryka, Oliver et al. (2018) Introducing the AI Alignment Forum (FAQ), AI Alignment Forum, October 29.

External links

AI Alignment Forum. Official website.

Related entries

Alignment Newsletter | LessWrong | Lightcone Infrastructure

  1. ^ LaVictoire, Patrick (2015) Welcome, new contributors, Agent Foundations Forum, March 23.

  2. ^ Arnold, Raymond (2018) Announcing AlignmentForum.org beta, AI Alignment Forum, July 10.

  3. ^ Habryka, Oliver et al. (2018) Introducing the AI Alignment Forum (FAQ), AI Alignment Forum, October 29.
