
Prometheus

Karma: 254

Back to the Past to the Future

Prometheus · Oct 18, 2023, 4:51 PM
4 points
0 comments · EA link

Why Is No One Trying To Align Profit Incentives With Alignment Research?

Prometheus · Aug 23, 2023, 1:19 PM
17 points
2 comments · 4 min read · EA link
(www.lesswrong.com)

Slaying the Hydra: toward a new game board for AI

Prometheus · Jun 23, 2023, 5:04 PM
3 points
2 comments · EA link

Lightning Post: Things people in AI Safety should stop talking about

Prometheus · Jun 20, 2023, 3:00 PM
5 points
3 comments · EA link

Aligned Objectives Prize Competition

Prometheus · Jun 15, 2023, 12:42 PM
8 points
0 comments · EA link

AI Safety Strategy—A new organization for better timelines

Prometheus · Jun 14, 2023, 8:41 PM
8 points
0 comments · 2 min read · EA link

Prometheus’s Quick takes

Prometheus · Jun 13, 2023, 11:26 PM
3 points
2 comments · EA link

Using Consensus Mechanisms as an approach to Alignment

Prometheus · Jun 11, 2023, 1:24 PM
14 points
0 comments · EA link

Humans are not prepared to operate outside their moral training distribution

Prometheus · Apr 10, 2023, 9:44 PM
12 points
0 comments · EA link

Widening Overton Window—Open Thread

Prometheus · Mar 31, 2023, 10:06 AM
12 points
5 comments · 1 min read · EA link
(www.lesswrong.com)

4 Key Assumptions in AI Safety

Prometheus · Nov 7, 2022, 10:50 AM
5 points
0 comments · EA link

Five Areas I Wish EAs Gave More Focus

Prometheus · Oct 27, 2022, 6:13 AM
8 points
14 comments · 4 min read · EA link