
tcelferact

Karma: 323

We Should Be Warier of Overconfidence

tcelferact · Jul 16, 2024, 11:38 PM
3 points
2 comments · 2 min read · EA link

A one-sentence formulation of the AI X-Risk argument I try to make

tcelferact · Mar 2, 2024, 12:44 AM
3 points
0 comments · 1 min read · EA link

Looking for Canadian summer co-op position in AI Governance

tcelferact · Jun 26, 2023, 5:27 PM
6 points
2 comments · 1 min read · EA link

Prior X%—<1%: A quantified ‘epistemic status’ of your prediction.

tcelferact · Jun 2, 2023, 3:51 PM
11 points
1 comment · 1 min read · EA link

A request to keep pessimistic AI posts actionable.

tcelferact · May 11, 2023, 3:35 PM
27 points
9 comments · 1 min read · EA link

‘AI Emergency Eject Criteria’ Survey

tcelferact · Apr 19, 2023, 9:55 PM
5 points
4 comments · 1 min read · EA link

An ‘AGI Emergency Eject Criteria’ consensus could be really useful.

tcelferact · Apr 7, 2023, 4:21 PM
27 points
3 comments · 1 min read · EA link

We might get lucky with AGI warning shots. Let’s be ready!

tcelferact · Mar 31, 2023, 9:37 PM
22 points
2 comments · 1 min read · EA link