Studying the behaviour and interactions of boundedly rational agents, AI alignment, and complex systems.
Research fellow at the Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality, Human-aligned AI Summer School, Epistea Lab.
Travel: mostly planned (conferences, some research retreats).
We expect closely coordinated teamwork on the LLM psychology direction, with somewhat looser connections to the gradual disempowerment / macrostrategy work. Broadly, ACS is small enough that anyone is welcome to participate in anything they are interested in, and generally everyone has an idea of what others are working on.