Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.
Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.
I. It might be worth reflecting upon how large a part of this seems tied to something like “climbing the EA social ladder”.
E.g., just from the first part (emphasis mine):
Replace “EA” with some other environment that has prestige gradients, and you have something like a highly generic social-climbing guide: seek out the cool kids, hang around them, go to exclusive parties, get good at signalling.
II. This isn’t to say this is bad. Climbing the ladder to some extent can be instrumentally useful, or even necessary, for the ability to do some interesting things, sometimes.
III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.
I don’t think this has any clear bottom line. I do agree that for many people who care about EA topics it’s useful to come to the Bay from time to time. Compared to the original post, I would mainly suggest also consulting virtue ethics and thinking about what sort of person you are changing yourself into: whether you, for example, most want to become “a highly cool and well-networked EA” or, e.g., to “do things which need to be done”, which are different goals.