Andrew Critch (Karma: 893)
Posts
Cognitive Biases Contributing to AI X-risk — a deleted excerpt from my 2018 ARCHES draft · 3 Dec 2024 9:29 UTC · 14 points · 1 comment · 5 min read
LLM chatbots have ~half of the kinds of “consciousness” that humans believe in. Humans should avoid going crazy about that. · 22 Nov 2024 3:26 UTC · 11 points · 3 comments · 5 min read
My motivation and theory of change for working in AI healthtech · 12 Oct 2024 0:36 UTC · 47 points · 1 comment · 14 min read
Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety) · 14 Jun 2024 0:16 UTC · 99 points · 3 comments · 4 min read
Acausal normalcy · 3 Mar 2023 23:35 UTC · 22 points · 4 comments · 8 min read
SFF Speculation Grants as an expedited funding source · 3 Dec 2022 18:34 UTC · 71 points · 2 comments · 1 min read
Announcing Encultured AI: Building a Video Game · 18 Aug 2022 2:17 UTC · 34 points · 5 comments · 4 min read
Encultured AI, Part 2: Providing a Service · 11 Aug 2022 20:13 UTC · 10 points · 0 comments · 3 min read
Encultured AI, Part 1: Enabling New Benchmarks · 8 Aug 2022 22:49 UTC · 17 points · 0 comments · 6 min read
Cofounding team sought for WordSig.org · 3 Aug 2022 23:56 UTC · 16 points · 0 comments · 1 min read
Pivotal outcomes and pivotal processes · 17 Jun 2022 23:43 UTC · 49 points · 1 comment · 4 min read
Steering AI to care for animals, and soon · 14 Jun 2022 1:13 UTC · 239 points · 37 comments · 1 min read
Intergenerational trauma impeding cooperative existential safety efforts · 3 Jun 2022 17:27 UTC · 82 points · 2 comments · 3 min read
“Tech company singularities”, and steering them to reduce x-risk · 13 May 2022 17:26 UTC · 51 points · 5 comments · 4 min read
“Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments · 19 Apr 2022 20:24 UTC · 80 points · 10 comments · 7 min read
Some AI research areas and their relevance to existential safety · 15 Dec 2020 12:15 UTC · 12 points · 1 comment · 56 min read · linkpost (alignmentforum.org)
AI Research Considerations for Human Existential Safety (ARCHES) · 21 May 2020 6:55 UTC · 29 points · 0 comments · 3 min read · linkpost (acritch.com)
Seeking information on three potential grantee organizations · 9 Dec 2018 20:12 UTC · 11 points · 0 comments · 1 min read