Aligned AI

Aligned AI is a benefit corporation focused on reducing existential risk from artificial intelligence via value extrapolation. It launched in February 2022.

Further reading

Armstrong, Stuart (2022) Why I’m co-founding Aligned AI, AI Alignment Forum, February 17.

External links

Aligned AI. Official website.

Apply for a job.

Related entries

AI alignment | artificial intelligence | existential risk

Posts tagged Aligned AI

Large Language Models as Fiduciaries to Humans
johnjnay · Jan 24, 2023, 7:53 PM · 25 points · 0 comments · 34 min read · EA link (papers.ssrn.com)

AGI misalignment x-risk may be lower due to an overlooked goal specification technology
johnjnay · Oct 21, 2022, 2:03 AM · 20 points · 1 comment · 1 min read · EA link

Intent alignment should not be the goal for AGI x-risk reduction
johnjnay · Oct 26, 2022, 1:24 AM · 7 points · 1 comment · 1 min read · EA link

Large Language Models as Corporate Lobbyists, and Implications for Societal-AI Alignment
johnjnay · Jan 4, 2023, 10:22 PM · 10 points · 6 comments · 8 min read · EA link

We're Aligned AI, AMA
Stuart Armstrong · Mar 1, 2022, 11:57 AM · 28 points · 18 comments · 1 min read · EA link

Aligning AI with Humans by Leveraging Legal Informatics
johnjnay · Sep 18, 2022, 7:43 AM · 20 points · 11 comments · 3 min read · EA link

[Question] Launching Applications for the Global AI Safety Fellowship 2025!
Impact Academy · Nov 27, 2024, 3:33 PM · 9 points · 1 comment · 1 min read · EA link