Roko

Karma: −224

Turing-Test-Passing AI implies Aligned AI

Roko · 31 Dec 2024 20:22 UTC
0 points
0 comments · 5 min read · EA link

Prize Money ($100) for Valid Technical Objections to Icesteading

Roko · 18 Dec 2024 23:40 UTC
−2 points
2 comments · 1 min read · EA link
(twitter.com)

[Question] What is MIRI currently doing?

Roko · 14 Dec 2024 2:55 UTC
9 points
2 comments · 1 min read · EA link

The Dissolution of AI Safety

Roko · 12 Dec 2024 10:46 UTC
−7 points
0 comments · 1 min read · EA link
(www.transhumanaxiology.com)

The ELYSIUM Proposal

Roko · 16 Oct 2024 2:14 UTC
−10 points
0 comments · 1 min read · EA link
(transhumanaxiology.substack.com)

“AI Alignment” is a Dangerously Overloaded Term

Roko · 15 Dec 2023 15:06 UTC
20 points
2 comments · 3 min read · EA link