Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.
Research fellow at the Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality, Human-aligned AI Summer School, Epistea Lab.
Thanks for the explanation. My guess is this decision should not be delegated to LLMs but left mostly to authors (possibly with some emphasis in the UI on correct classification).
I think the criterion “the post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn’t care about the EA community” should not be interpreted expansively; otherwise it can easily come to mean “any controversy or criticism”. I will repost it without the links to current discussions: those links are non-central, similar points have been raised repeatedly over the years, and it is easy to find dozens of texts making them.