Anthony DiGiovanni 🔸
Karma: 2,275
Researcher at the Center on Long-Term Risk. All opinions my own.
Posts
How to not do decision theory backwards
17 Mar 2026 7:22 UTC · 33 points · 1 comment · 16 min read · EA
When do intuitions need to be reliable?
15 Mar 2026 4:18 UTC · 21 points · 2 comments · 3 min read · EA
Resolving radical cluelessness with metanormative bracketing
29 Oct 2025 23:33 UTC · 59 points · 5 comments · 25 min read · EA
What to do about near-term cluelessness in animal welfare
8 Oct 2025 20:56 UTC · 87 points · 2 comments · 15 min read · EA
Resource guide: Unawareness, indeterminacy, and cluelessness
7 Jul 2025 9:54 UTC · 20 points · 6 comments · 7 min read · EA
Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions
20 Jun 2025 21:55 UTC · 25 points · 1 comment · 12 min read · EA
4. Why existing approaches to cause prioritization are not robust to unawareness
2 Jun 2025 8:55 UTC · 36 points · 32 comments · 17 min read · EA
3. Why impartial altruists should suspend judgment under unawareness
2 Jun 2025 8:54 UTC · 41 points · 1 comment · 16 min read · EA
2. Why intuitive comparisons of large-scale impact are unjustified
2 Jun 2025 8:54 UTC · 41 points · 22 comments · 16 min read · EA
1. The challenge of unawareness for impartial altruist action guidance: Introduction
2 Jun 2025 8:54 UTC · 89 points · 24 comments · 17 min read · EA
Should you go with your best guess?: Against precise Bayesianism and related views
27 Jan 2025 20:25 UTC · 88 points · 3 comments · 22 min read · EA
[Question] Neartermist crucial considerations?
7 Nov 2024 4:27 UTC · 18 points · 3 comments · 1 min read · EA
Winning isn’t enough
5 Nov 2024 11:43 UTC · 33 points · 3 comments · 9 min read · EA
[Question] What are your cruxes for imprecise probabilities / decision rules?
31 Jul 2024 15:44 UTC · 21 points · 1 comment · 1 min read · EA
[linkpost] When does technical work to reduce AGI conflict make a difference?: Introduction
16 Sep 2022 14:35 UTC · 31 points · 0 comments · 1 min read · EA · (www.lesswrong.com)
A longtermist critique of “The expected value of extinction risk reduction is positive”
1 Jul 2021 21:01 UTC · 155 points · 10 comments · 32 min read · EA
antimonyanthony’s Quick takes
19 Sep 2020 16:05 UTC · 3 points · 38 comments · EA