
Remmelt

Karma: 1,047

See my explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.

Formalize the Hashiness Model of AGI Uncontainability

Remmelt · 9 Nov 2024 16:10 UTC
0 points
0 comments · 5 min read · EA link
(docs.google.com)

AI Safety Camp 10

Robert Kralisch · 26 Oct 2024 11:36 UTC
12 points
0 comments · 18 min read · EA link
(www.lesswrong.com)

Ex-OpenAI researcher says OpenAI mass-violated copyright law

Remmelt · 24 Oct 2024 1:00 UTC
9 points
0 comments · 1 min read · EA link
(suchir.net)

OpenAI defected, but we can take honest actions

Remmelt · 21 Oct 2024 8:41 UTC
19 points
1 comment · 2 min read · EA link

Remmelt’s Quick takes

Remmelt · 21 Oct 2024 1:39 UTC
6 points
1 comment · 1 min read · EA link

Why Stop AI is barricading OpenAI

Remmelt · 14 Oct 2024 7:12 UTC
−29 points
28 comments · 6 min read · EA link
(docs.google.com)

An AI crash is our best bet for restricting AI

Remmelt · 11 Oct 2024 2:12 UTC
20 points
1 comment · 1 min read · EA link