
tcelferact

Karma: 320

A one-sentence formulation of the AI X-Risk argument I try to make

tcelferact · 2 Mar 2024 0:44 UTC
3 points
0 comments · 1 min read · EA link

Looking for Canadian summer co-op position in AI Governance

tcelferact · 26 Jun 2023 17:27 UTC
6 points
2 comments · 1 min read · EA link

Prior X%—<1%: A quantified ‘epistemic status’ of your prediction.

tcelferact · 2 Jun 2023 15:51 UTC
11 points
1 comment · 1 min read · EA link

A request to keep pessimistic AI posts actionable.

tcelferact · 11 May 2023 15:35 UTC
27 points
9 comments · 1 min read · EA link

‘AI Emergency Eject Criteria’ Survey

tcelferact · 19 Apr 2023 21:55 UTC
5 points
3 comments · 1 min read · EA link

An ‘AGI Emergency Eject Criteria’ consensus could be really useful.

tcelferact · 7 Apr 2023 16:21 UTC
27 points
3 comments · 1 min read · EA link

We might get lucky with AGI warning shots. Let’s be ready!

tcelferact · 31 Mar 2023 21:37 UTC
22 points
2 comments · 1 min read · EA link

Selective truth-telling: concerns about EA leadership communication.

tcelferact · 15 Nov 2022 19:45 UTC
90 points
45 comments · 5 min read · EA link