
Andrew Critch

Karma: 873

Cognitive Biases Contributing to AI X-risk — a deleted excerpt from my 2018 ARCHES draft

Andrew Critch · Dec 3, 2024, 9:29 AM
14 points
1 comment · 1 min read · EA link

LLM chatbots have ~half of the kinds of “consciousness” that humans believe in. Humans should avoid going crazy about that.

Andrew Critch · Nov 22, 2024, 3:26 AM
11 points
3 comments · 1 min read · EA link

My motivation and theory of change for working in AI healthtech

Andrew Critch · Oct 12, 2024, 12:36 AM
47 points
1 comment · 1 min read · EA link

Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)

Andrew Critch · Jun 14, 2024, 12:16 AM
95 points
3 comments · 1 min read · EA link

Acausal normalcy

Andrew Critch · Mar 3, 2023, 11:35 PM
21 points
4 comments · 8 min read · EA link

SFF Speculation Grants as an expedited funding source

Andrew Critch · Dec 3, 2022, 6:34 PM
71 points
2 comments · 1 min read · EA link

Announcing Encultured AI: Building a Video Game

Andrew Critch · Aug 18, 2022, 2:17 AM
34 points
5 comments · 4 min read · EA link

Encultured AI, Part 2: Providing a Service

Andrew Critch · Aug 11, 2022, 8:13 PM
10 points
0 comments · 3 min read · EA link

Encultured AI, Part 1: Enabling New Benchmarks

Andrew Critch · Aug 8, 2022, 10:49 PM
17 points
0 comments · 6 min read · EA link

Cofounding team sought for WordSig.org

Andrew Critch · Aug 3, 2022, 11:56 PM
16 points
0 comments · 1 min read · EA link

Pivotal outcomes and pivotal processes

Andrew Critch · Jun 17, 2022, 11:43 PM
49 points
1 comment · 4 min read · EA link

Steering AI to care for animals, and soon

Andrew Critch · Jun 14, 2022, 1:13 AM
224 points
37 comments · 1 min read · EA link

Intergenerational trauma impeding cooperative existential safety efforts

Andrew Critch · Jun 3, 2022, 5:27 PM
82 points
2 comments · 3 min read · EA link

“Tech company singularities”, and steering them to reduce x-risk

Andrew Critch · May 13, 2022, 5:26 PM
51 points
5 comments · 4 min read · EA link