
Prometheus

Karma: 248

Back to the Past to the Future

Prometheus · 18 Oct 2023 16:51 UTC
4 points
0 comments · 1 min read · EA link

Why Is No One Trying To Align Profit Incentives With Alignment Research?

Prometheus · 23 Aug 2023 13:19 UTC
17 points
2 comments · 4 min read · EA link
(www.lesswrong.com)

Slaying the Hydra: toward a new game board for AI

Prometheus · 23 Jun 2023 17:04 UTC
3 points
2 comments · 1 min read · EA link

Lightning Post: Things people in AI Safety should stop talking about

Prometheus · 20 Jun 2023 15:00 UTC
5 points
3 comments · 1 min read · EA link

Aligned Objectives Prize Competition

Prometheus · 15 Jun 2023 12:42 UTC
8 points
0 comments · 1 min read · EA link

AI Safety Strategy—A new organization for better timelines

Prometheus · 14 Jun 2023 20:41 UTC
8 points
0 comments · 2 min read · EA link

Prometheus’s Quick takes

Prometheus · 13 Jun 2023 23:26 UTC
3 points
2 comments · 1 min read · EA link

Using Consensus Mechanisms as an approach to Alignment

Prometheus · 11 Jun 2023 13:24 UTC
14 points
0 comments · 1 min read · EA link

Humans are not prepared to operate outside their moral training distribution

Prometheus · 10 Apr 2023 21:44 UTC
12 points
0 comments · 1 min read · EA link

Widening Overton Window—Open Thread

Prometheus · 31 Mar 2023 10:06 UTC
12 points
5 comments · 1 min read · EA link
(www.lesswrong.com)

4 Key Assumptions in AI Safety

Prometheus · 7 Nov 2022 10:50 UTC
5 points
0 comments · 1 min read · EA link

Five Areas I Wish EAs Gave More Focus

Prometheus · 27 Oct 2022 6:13 UTC
8 points
14 comments · 4 min read · EA link