
Aligned AI


Aligned AI is a benefit corporation focused on reducing existential risk from artificial intelligence via value extrapolation. It launched in February 2022.

Further reading

Armstrong, Stuart (2022) Why I’m co-founding Aligned AI, AI Alignment Forum, February 17.

External links

Aligned AI. Official website.

Apply for a job.

Related entries

AI alignment | artificial intelligence | existential risk

Large Language Models as Fiduciaries to Humans
johnjnay, 24 Jan 2023 19:53 UTC
25 points · 0 comments · 34 min read · EA link (papers.ssrn.com)

AGI misalignment x-risk may be lower due to an overlooked goal specification technology
johnjnay, 21 Oct 2022 2:03 UTC
20 points · 1 comment · 1 min read · EA link

We’re Aligned AI, AMA
Stuart Armstrong, 1 Mar 2022 11:57 UTC
28 points · 18 comments · 1 min read · EA link

Intent alignment should not be the goal for AGI x-risk reduction
johnjnay, 26 Oct 2022 1:24 UTC
7 points · 1 comment · 1 min read · EA link

Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment
johnjnay, 4 Jan 2023 22:22 UTC
10 points · 6 comments · 8 min read · EA link

Aligning AI with Humans by Leveraging Legal Informatics
johnjnay, 18 Sep 2022 7:43 UTC
20 points · 11 comments · 3 min read · EA link