
Andrew Critch

Karma: 669

SFF Speculation Grants as an expedited funding source

Andrew Critch · 3 Dec 2022 18:34 UTC
64 points
1 comment · 1 min read · EA link

Announcing Encultured AI: Building a Video Game

Andrew Critch · 18 Aug 2022 2:17 UTC
34 points
5 comments · 4 min read · EA link

Encultured AI, Part 2: Providing a Service

Andrew Critch · 11 Aug 2022 20:13 UTC
10 points
0 comments · 3 min read · EA link

Encultured AI, Part 1: Enabling New Benchmarks

Andrew Critch · 8 Aug 2022 22:49 UTC
17 points
0 comments · 5 min read · EA link

«Boundaries», Part 2: trends in EA’s handling of boundaries

Andrew Critch · 6 Aug 2022 0:43 UTC
8 points
1 comment · 7 min read · EA link

Cofounding team sought for WordSig.org

Andrew Critch · 3 Aug 2022 23:56 UTC
16 points
0 comments · 1 min read · EA link

Pivotal outcomes and pivotal processes

Andrew Critch · 17 Jun 2022 23:43 UTC
42 points
1 comment · 5 min read · EA link

Steering AI to care for animals, and soon

Andrew Critch · 14 Jun 2022 1:13 UTC
205 points
38 comments · 1 min read · EA link

Intergenerational trauma impeding cooperative existential safety efforts

Andrew Critch · 3 Jun 2022 17:27 UTC
82 points
2 comments · 3 min read · EA link

“Tech company singularities”, and steering them to reduce x-risk

Andrew Critch · 13 May 2022 17:26 UTC
51 points
5 comments · 4 min read · EA link

“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments

Andrew Critch · 19 Apr 2022 20:24 UTC
74 points
10 comments · 7 min read · EA link

Some AI research areas and their relevance to existential safety

Andrew Critch · 15 Dec 2020 12:15 UTC
11 points
0 comments · 56 min read · EA link
(alignmentforum.org)

AI Research Considerations for Human Existential Safety (ARCHES)

Andrew Critch · 21 May 2020 6:55 UTC
29 points
0 comments · 3 min read · EA link
(acritch.com)