Robert Kralisch

Karma: 43

Hey, I am Robert Kralisch, an independent conceptual/​theoretical Alignment Researcher. I have a background in Cognitive Science and I am interested in collaborating on an end-to-end strategy for AGI alignment.

I am one of the organizers of the AI Safety Camp, where I work as a research coordinator, evaluating and supporting research projects.

The three main branches I aim to contribute to are conceptual clarity (what should we mean by agency, intelligence, embodiment, etc.?), the exploration of more inherently interpretable cognitive architectures, and Simulator theory.

One of my concrete goals is to figure out how to design a cognitively powerful agent such that it does not become a Superoptimiser in the limit.

AI Safety Camp 11

Robert Kralisch, 7 Nov 2025 14:27 UTC
7 points
1 comment, 15 min read, EA link

Invitation to lead a project at AI Safety Camp (Virtual Edition, 2026)

Robert Kralisch, 6 Sep 2025 13:34 UTC
4 points
0 comments, 4 min read, EA link

Funding Case: AI Safety Camp 11

Remmelt, 23 Dec 2024 8:39 UTC
42 points
2 comments, 6 min read, EA link
(manifund.org)

AI Safety Camp 10

Robert Kralisch, 26 Oct 2024 11:36 UTC
15 points
0 comments, 18 min read, EA link
(www.lesswrong.com)