One thing I appreciate about both of these tests is that they seem to (at least partially) tap into something like “can you think for yourself & reason about problems in a critical way?” I think this is one of the most important skills to train, particularly in policy, where it’s very easy to get carried away with narratives that seem popular or trendy or high-status.
I think the current zeitgeist has gotten a lot of folks interested in AI policy. My sense is that there’s a lot of potential for good here, but there are also some pretty easy ways for things to go wrong.
Examples of some questions that I hear folks often ask/say:
What do the experts think about X?
How do I get a job at X org?
“I think the work of X is great.” → “What about their work do you like?” → “Oh, I don’t know, they just generally seem to be doing great things, and lots of other people seem to support X.”
What would ARC evals think about this plan?
Examples of some questions that I often encourage people to ask/say:
What do you think about X?
What do you think X is getting wrong?
If the community is wrong about X, what do you think it’s getting wrong? Do you think we could be doing better than X?
What do I think about this plan?
So far, my experience engaging with AI governance/policy folks is that these questions are not asked very often. It feels more like a field where people are respected for “looking legitimate” than for “having takes.” Obviously, there are exceptions, and there are a few people whose work I admire and appreciate.
But I think a lot of junior people (and some senior people) are pretty comfortable with taking positions like “I’m just going to defer to people who other people think are smart/legitimate, without really asking myself or others to explain why they think those people are smart/legitimate”, and this is very concerning.
As a caveat, it is of course important to have people who can play support roles and move things forward, and there’s a failure mode of spending too much time in “inside view” mode. My thesis here is simply that, on the current margin, the world would be better off if more people shifted toward “my job is to understand what is right and evaluate plans/people for myself” and fewer people adopted the stance of “my job is to find a credible EA leader and row in the direction they’re currently rowing.”
And as a final point, I think this is especially important in a context where there is a major resource/power/status imbalance between various perspectives. In the absence of critical thinking and strong epistemics, we should not be surprised if the people with the most money and influence end up shaping the narrative. This doesn’t necessarily mean that they’re wrong, but it does tell us something like: “You might expect to see a lot of EAs rally around narratives that are sympathetic toward major AGI labs, even if those narratives are wrong. And it would take a particularly strong epistemic environment to converge on the truth when one ‘side’ has billions of dollars, is offering many of the jobs, and is generally considered cooler/higher-status.”