Off the top of my head:
* Any moral philosophy arguments that imply the long term future dwarfs the present in significance (though fwiw I agree with these arguments—everything below I have at least mixed feelings on)
* The classic arguments for ‘orthogonality’ of intelligence and goals (which I criticised here)
* The classic arguments for instrumental convergence towards certain goals
* Claims about the practical capabilities of future AI agents
* Many claims about the capabilities of current AI agents, e.g. those comparing them to intelligent high schoolers/university students (when you can quickly see trivial ways in which they’re nowhere near the reflexivity of an average toddler)
* Claims that working on longtermist-focused research is likely to be better for the long term than working on nearer term problems
* Claims that, within longtermist-focused research, focusing on existential risks (in the original sense, not the very vague ‘loss of potential’ sense) is better than working on ways to make the long term better conditional on it existing (or perhaps looking for ways to do both)
* Metaclaims about who should be doing such research, e.g. on the basis that the people in question have published other qualitative arguments that we agree with
* Almost everything on the list linked in the above bullet
[edit: I forgot one: * The ITN framework]
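For reference, by the ITN framework I mean the standard 80,000 Hours-style decomposition of cost-effectiveness, sketched below (the exact wording of the factors varies by presentation):

$$\frac{\text{good done}}{\text{extra resources}} = \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}} \times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}} \times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}$$

The units cancel telescopically, so the identity itself is trivially true; the contentious step is estimating each factor, which is where the qualitative arguments come in.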