
Jan_Kulveit

Karma: 4,932

Studying behaviour and interactions of boundedly rational agents, AI alignment, and complex systems.

Research fellow at the Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality, Human-aligned AI Summer School, Epistea Lab.

ACS is hiring: why work here and why not

Jan_Kulveit · 23 Oct 2025 9:38 UTC
39 points
4 comments · 2 min read · EA link

Do Not Tile the Lightcone with Your Confused Ontology

Jan_Kulveit · 13 Jun 2025 12:45 UTC
45 points
4 comments · 5 min read · EA link
(boundedlyrational.substack.com)

Apply now to Human-aligned AI Summer School 2025

Pivocajs · 6 Jun 2025 19:34 UTC
8 points
1 comment · 2 min read · EA link

Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development

Jan_Kulveit · 30 Jan 2025 17:07 UTC
45 points
4 comments · 2 min read · EA link
(gradual-disempowerment.ai)

“Charity” as a conflationary alliance term

Jan_Kulveit · 13 Dec 2024 7:53 UTC
76 points
4 comments · 5 min read · EA link
(www.lesswrong.com)

Jan_Kulveit’s Quick takes

Jan_Kulveit · 12 Dec 2024 22:22 UTC
7 points
9 comments · EA link

Distancing EA from rationality is foolish

Jan_Kulveit · 25 Jun 2024 21:02 UTC
135 points
33 comments · 2 min read · EA link

Announcing Human-aligned AI Summer School

Jan_Kulveit · 22 May 2024 8:55 UTC
33 points
0 comments · 1 min read · EA link
(humanaligned.ai)

Box inversion revisited

Jan_Kulveit · 7 Nov 2023 11:09 UTC
22 points
1 comment · 8 min read · EA link

We don’t understand what happened with culture enough

Jan_Kulveit · 9 Oct 2023 14:56 UTC
29 points
2 comments · 6 min read · EA link

Talking publicly about AI risk

Jan_Kulveit · 24 Apr 2023 9:19 UTC
152 points
13 comments · 6 min read · EA link

Why Simulator AIs want to be Active Inference AIs

Jan_Kulveit · 11 Apr 2023 9:06 UTC
22 points
0 comments · 8 min read · EA link
(www.lesswrong.com)

The space of systems and the space of maps

Jan_Kulveit · 22 Mar 2023 16:05 UTC
12 points
0 comments · 5 min read · EA link
(www.lesswrong.com)

Cyborg Periods: There will be multiple AI transitions

Jan_Kulveit · 22 Feb 2023 16:09 UTC
68 points
1 comment · 6 min read · EA link

Deontology and virtue ethics as “effective theories” of consequentialist ethics

Jan_Kulveit · 17 Nov 2022 9:20 UTC
57 points
12 comments · 10 min read · EA link

We can do better than argmax

Jan_Kulveit · 10 Oct 2022 10:32 UTC
113 points
36 comments · 10 min read · EA link

Limits to Legibility

Jan_Kulveit · 29 Jun 2022 17:45 UTC
106 points
3 comments · 5 min read · EA link
(www.lesswrong.com)

Ways money can make things worse

Jan_Kulveit · 21 Jun 2022 15:26 UTC
167 points
3 comments · 9 min read · EA link

Continuity Assumptions

Jan_Kulveit · 13 Jun 2022 21:36 UTC
44 points
4 comments · 4 min read · EA link
(www.alignmentforum.org)

Different forms of capital

Jan_Kulveit · 25 Apr 2022 8:05 UTC
101 points
8 comments · 2 min read · EA link