Hey Daniel, sorry for the very late reply here! I directionally agree, but a few thoughts:
1. I think a substantial percentage of people should think for themselves a lot on this issue, but I’m not sure I agree with literally everyone (or almost everyone), especially given the distressing nature of the topic. I’d be very reluctant to try to convince someone to “think for themselves and defer less” if they both:
a. Are not working on AI safety in a role where it seems important (technical research, governance, strategy, community building, etc.), and
b. Don’t enjoy thinking about how likely it is that they and everyone they know will be killed by an AI, and when it will happen.
2. I personally am very interested in thinking through the object level on AI safety, but found it challenging for quite a while. I view a lot of the more emotional and deference-based stuff in this post as me working through my feelings as a prerequisite to being able to reason about AI safety without being reluctant to arrive at conclusions of high levels of near-ish risk. See also Yes Requires the Possibility of No.
And since writing this post I think I have been doing a decent amount of object-level thinking, e.g. here, here, and here.
Hey, no need to apologize, and besides, I wasn’t even expecting a reply since I didn’t ask a question.
Your points 1 and 2 are good. I should have clarified what I meant by “people.” I didn’t mean everyone; I guess I meant something like “most of the people who are likely to read this.” But maybe I should be less extreme, as you mentioned, and exclude people who satisfy 1a+1b. Fair enough.
Re point 2: Yeah. I think your post is good; explicitly thinking about and working through feelings & biases etc. is an important complement to object-level thinking about a topic. I guess I was coming from a place of frustration with the way meta-level stuff seems to get more attention/clicks/discussion on forums like this than object-level analysis does. At least that’s how it seems to me. But on reflection I’m not sure my impression is correct; I feel like the ideal ratio of object-level to meta stuff should be around 9:1, and I haven’t actually checked whether we’re that far off on this forum (on the subject of timelines).