A model of one’s own (or what I say to myself):
Defer a bit less today—think for yourself!
What would the world look like if X were not true?
Make a prediction—don’t worry if it turns out to be false.
Articulate an argument and find at least one objection to it.
Helpful post, Zach! I think it's more useful and concrete to ask about specific capabilities rather than about AGI/TAI, etc., and I'm pushing myself to ask such questions (e.g., when do you expect LLMs that can produce Richard Feynman-level text?). I also like the generality vs. capability distinction. We already have a generalist (Gato), but we don't consider it to be an AGI (I think).