Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.
Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.
I think it could be a helpful response for people who are able to react to signals of the type "someone who has demonstrably good forecasting skills, is an expert in the field, and has worked on this for a long time claims X" by at least re-evaluating whether their own models make sense and are not missing some important considerations.
If someone is at least able to do that, they can, for example, ask a friendly AI, which will tell them, based on conservative estimates and reference classes, that the original claim is likely wrong. The AI will still miss important considerations—just as a typical forecaster would—so the results are underestimates.
I think at the level of [some combination of inability to think and motivated reasoning] where people are uninterested in, e.g., sanity-checking their thinking with AIs, it is not worth the time correcting them. People are wrong on the internet all the time.
(I think the debate was moderately useful—I made an update from this debate and the voting patterns, broadly in the direction of the EA Forum descending to the level of a random place on the internet where confused people talk about AI, and it being broadly not worth reading or engaging with. I'm no longer very active on the EA Forum, but I've made some update.)