
Remmelt

Karma: 962

See my explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the community's/philosophy's overreaches. I still post here about AI safety.

Twelve Lawsuits against OpenAI

Remmelt · 9 Mar 2024 12:22 UTC
58 points
5 comments · 1 min read · EA link

Why I think it's net harmful to do technical safety research at AGI labs

Remmelt · 7 Feb 2024 4:17 UTC
36 points
29 comments · 1 min read · EA link

This might be the last AI Safety Camp

Remmelt · 24 Jan 2024 9:29 UTC
87 points
32 comments · 1 min read · EA link

The convergent dynamic we missed

Remmelt · 12 Dec 2023 22:50 UTC
2 points
0 comments · 3 min read · EA link

Funding case: AI Safety Camp

Remmelt · 12 Dec 2023 9:05 UTC
45 points
12 comments · 5 min read · EA link
(manifund.org)

My first conversation with Annie Altman

Remmelt · 21 Nov 2023 21:58 UTC
0 points
0 comments · 1 min read · EA link
(open.spotify.com)

Why a Mars colony would lead to a first strike situation

Remmelt · 4 Oct 2023 11:29 UTC
−18 points
12 comments · 1 min read · EA link
(mflb.com)

We are not alone: many communities want to stop Big Tech from scaling unsafe AI

Remmelt · 22 Sep 2023 17:38 UTC
28 points
30 comments · 4 min read · EA link

How teams went about their research at AI Safety Camp edition 8

Remmelt · 9 Sep 2023 16:34 UTC
13 points
1 comment · 1 min read · EA link

4 types of AGI selection, and how to constrain them

Remmelt · 9 Aug 2023 15:02 UTC
7 points
0 comments · 3 min read · EA link

[Question] What did AI Safety's specific funding of AGI R&D labs lead to?

Remmelt · 5 Jul 2023 15:51 UTC
24 points
17 comments · 1 min read · EA link

The Control Problem: Unsolved or Unsolvable?

Remmelt · 2 Jun 2023 15:42 UTC
4 points
9 comments · 14 min read · EA link

Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI X-Risks

Remmelt · 7 Jan 2023 9:59 UTC
−2 points
1 comment · 1 min read · EA link

Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks

Remmelt · 5 Jan 2023 4:05 UTC
1 point
1 comment · 1 min read · EA link

Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks

Remmelt · 4 Jan 2023 3:16 UTC
5 points
0 comments · 1 min read · EA link

Status quo bias; System justification

Remmelt · 3 Jan 2023 2:50 UTC
4 points
1 comment · 1 min read · EA link

Belief Bias: Bias in Evaluating AGI X-Risks

Remmelt · 2 Jan 2023 8:59 UTC
5 points
0 comments · 1 min read · EA link

Challenge to the notion that anything is (maybe) possible with AGI

Remmelt · 1 Jan 2023 3:57 UTC
−19 points
3 comments · 1 min read · EA link

Curse of knowledge and Naive realism: Bias in Evaluating AGI X-Risks

Remmelt · 31 Dec 2022 13:33 UTC
5 points
0 comments · 1 min read · EA link

Reactive devaluation: Bias in Evaluating AGI X-Risks

Remmelt · 30 Dec 2022 9:02 UTC
2 points
9 comments · 1 min read · EA link