
Remmelt

Karma: 1,137

See my explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of overreaches by the community and its philosophy. I still post here about AI safety.

We don’t want to post again “This might be the last AI Safety Camp”

Remmelt · 21 Jan 2025 12:03 UTC · 38 points · 2 comments · 1 min read · EA link (manifund.org)

[Question] What do you mean with ‘alignment is solvable in principle’?

Remmelt · 17 Jan 2025 15:03 UTC · 10 points · 1 comment · 1 min read · EA link

Funding Case: AI Safety Camp 11

Remmelt · 23 Dec 2024 8:39 UTC · 42 points · 2 comments · 6 min read · EA link (manifund.org)

Formalize the Hashiness Model of AGI Uncontainability

Remmelt · 9 Nov 2024 16:10 UTC · 2 points · 0 comments · 5 min read · EA link (docs.google.com)

AI Safety Camp 10

Robert Kralisch · 26 Oct 2024 11:36 UTC · 15 points · 0 comments · 18 min read · EA link (www.lesswrong.com)

Ex-OpenAI researcher says OpenAI mass-violated copyright law

Remmelt · 24 Oct 2024 1:00 UTC · 11 points · 0 comments · 1 min read · EA link (suchir.net)

OpenAI defected, but we can take honest actions

Remmelt · 21 Oct 2024 8:41 UTC · 19 points · 1 comment · 2 min read · EA link

Remmelt’s Quick takes

Remmelt · 21 Oct 2024 1:39 UTC · 6 points · 1 comment · 1 min read · EA link