
Remmelt

Karma: 1,418

See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

I post here about preventing unsafe AI.

Note that I’m no longer part of EA because of overreaches I saw during my time in the community: core people leading technocratic projects with ruinous downside risks, a philosophy built around influencing consequences rather than enabling collective choice-making, and a culture bent on proselytising both while not listening deeply enough to integrate other perspectives.

The AI bubble covered in the Atlantic

Remmelt · 11 Nov 2025 4:12 UTC
13 points
1 comment · 2 min read · EA link
(www.theatlantic.com)

AI Safety Camp 11

Robert Kralisch · 7 Nov 2025 14:27 UTC
7 points
1 comment · 15 min read · EA link

Heuristics for assessing how much of a bubble AI is in/will be

Remmelt · 28 Oct 2025 8:08 UTC
14 points
1 comment · 2 min read · EA link
(www.wired.com)

Designing for perpetual control

Remmelt · 12 Oct 2025 2:06 UTC
6 points
0 comments · 2 min read · EA link

Evolution is dumb and slow, right?

Remmelt · 16 Sep 2025 15:15 UTC
6 points
1 comment · 6 min read · EA link

MAGA speakers at NatCon were mostly against AI

Remmelt · 8 Sep 2025 4:03 UTC
17 points
1 comment · 2 min read · EA link
(www.theverge.com)

Hawley: AI Threatens the Working Man

Remmelt · 8 Sep 2025 3:59 UTC
17 points
1 comment · 10 min read · EA link
(www.dailysignal.com)

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2026)

Robert Kralisch · 6 Sep 2025 13:34 UTC
4 points
0 comments · 4 min read · EA link

Hunger strike #2, this time in front of DeepMind

Remmelt · 6 Sep 2025 1:43 UTC
4 points
1 comment · 1 min read · EA link
(x.com)

Hunger strike in front of Anthropic by one guy concerned about AI risk

Remmelt · 5 Sep 2025 4:00 UTC
19 points
18 comments · 1 min read · EA link

Anthropic’s leading researchers acted as moderate accelerationists

Remmelt · 1 Sep 2025 23:23 UTC
79 points
4 comments · 42 min read · EA link

Some mistakes in thinking about AGI evolution and control

Remmelt · 1 Aug 2025 8:08 UTC
7 points
0 comments · 1 min read · EA link

Deconfusing ‘AI’ and ‘evolution’

Remmelt · 22 Jul 2025 6:56 UTC
6 points
1 comment · 28 min read · EA link

Our bet on whether the AI market will crash

Remmelt · 8 May 2025 8:37 UTC
54 points
18 comments · 1 min read · EA link

List of petitions against OpenAI’s for-profit move

Remmelt · 25 Apr 2025 10:03 UTC
13 points
4 comments · 1 min read · EA link

Crash scenario 1: Rapidly mobilise for a 2025 AI crash

Remmelt · 11 Apr 2025 6:54 UTC
8 points
0 comments · 1 min read · EA link

Who wants to bet me $25k at 1:7 odds that there won’t be an AI market crash in the next year?

Remmelt · 8 Apr 2025 8:31 UTC
7 points
5 comments · 1 min read · EA link

We’re not prepared for an AI market crash

Remmelt · 1 Apr 2025 4:33 UTC
28 points
4 comments · 2 min read · EA link

OpenAI lost $5 billion in 2024 (and its losses are increasing)

Remmelt · 31 Mar 2025 4:17 UTC
0 points
3 comments · 12 min read · EA link
(www.wheresyoured.at)

CoreWeave Is A Time Bomb

Remmelt · 31 Mar 2025 3:52 UTC
10 points
2 comments · 2 min read · EA link
(www.wheresyoured.at)