
Andrew Critch

Karma: 723

Acausal normalcy

Andrew Critch · 3 Mar 2023 23:35 UTC
21 points · 4 comments · 8 min read · EA link

SFF Speculation Grants as an expedited funding source

Andrew Critch · 3 Dec 2022 18:34 UTC
71 points · 2 comments · 1 min read · EA link

Announcing Encultured AI: Building a Video Game

Andrew Critch · 18 Aug 2022 2:17 UTC
34 points · 5 comments · 4 min read · EA link

Encultured AI, Part 2: Providing a Service

Andrew Critch · 11 Aug 2022 20:13 UTC
10 points · 0 comments · 3 min read · EA link

Encultured AI, Part 1: Enabling New Benchmarks

Andrew Critch · 8 Aug 2022 22:49 UTC
17 points · 0 comments · 5 min read · EA link

Cofounding team sought for WordSig.org

Andrew Critch · 3 Aug 2022 23:56 UTC
16 points · 0 comments · 1 min read · EA link

Pivotal outcomes and pivotal processes

Andrew Critch · 17 Jun 2022 23:43 UTC
49 points · 1 comment · 5 min read · EA link

Steering AI to care for animals, and soon

Andrew Critch · 14 Jun 2022 1:13 UTC
221 points · 37 comments · 1 min read · EA link

Intergenerational trauma impeding cooperative existential safety efforts

Andrew Critch · 3 Jun 2022 17:27 UTC
82 points · 2 comments · 3 min read · EA link

“Tech company singularities”, and steering them to reduce x-risk

Andrew Critch · 13 May 2022 17:26 UTC
51 points · 5 comments · 4 min read · EA link

“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments

Andrew Critch · 19 Apr 2022 20:24 UTC
80 points · 10 comments · 7 min read · EA link

Some AI research areas and their relevance to existential safety

Andrew Critch · 15 Dec 2020 12:15 UTC
12 points · 1 comment · 56 min read · EA link
(alignmentforum.org)

AI Research Considerations for Human Existential Safety (ARCHES)

Andrew Critch · 21 May 2020 6:55 UTC
29 points · 0 comments · 3 min read · EA link
(acritch.com)