I upvoted this because AI-related advocacy has become a recent focus of mine. My background is in organising climate protests, and I think EAs have a bit of a blind spot when it comes to valuing advocacy. So it’s good to have this discussion. However, I do disagree on a few points.
1. Just Ask: In broad strokes, I think people tend to overestimate how unreasonable and persistent initial objections will be. My simplest rebuttal would be: how do you know these advocates would even disagree with your approach? An approach I’m considering now is to find a decent AI Governance policy proposal, present it to the advocates, explain how it addresses their concerns, and see who says yes. If half of them say no, you work with the other half. Before assuming the “neo-Luddites” won’t listen to reason, shouldn’t you … ask? Present them with options? I don’t see why it’s not at least worth reaching out to potential allies, and I don’t see why it’s an irredeemable sin to be angry at a problem with no clear solutions when no one has presented you with one. The assumptions being made here strike me as somewhat ironic.
2. Counterfactuals: I think by most estimates, anti-AI advocacy will only grow from here. Having a lot of structurally unemployed, angry people is historically a recipe for trouble. You then have to consider that reactionary responses will happen regardless of whether “we align with them”. If these advocates are as persistently unreasonable as you say, they will force bad policy regardless, influence mainstream discourse towards their views, and be loud enough to crowd out our “more reasonable” views. I just think it makes a lot of sense to engage these groups early on and make an earnest effort to make our case, because the counterfactual is that they get bad policies passed without our input.
3. False dichotomy of advocates and researchers: I speak more generally here. In my time in climate risk, everyone had an odd fixation on separating climate advocates and researchers.[1] I don’t think this split was helpful for epistemics or strategy overall. You ended up with scientists who had the solutions and sound epistemics but whom the public and policymakers largely ignored for lack of engagement, and advocates who latched onto poorly-informed, counterproductive radical agendas and were constantly rebutted with “why are we listening to you clueless youngsters and not the scientists (whom we ignore anyway)?” It was a constant headache to watch two subgroups needlessly divide themselves while the clock ran down. Like sure, the advocates were … not the most epistemically rigorous, and the scientists generally struggled to put across their concerns. But I’d greatly prefer that everyone valued more communication/coordination, not less.
And for my sanity’s sake, I’d like the AI risk community to not repeat this dynamic.
[1] I suspect much of this dichotomy was not drawn in good faith, but pushed by people who were uncomfortable with the premise of anthropogenic climate change and threw out fallacies to discredit any arguments they were confronted with in their daily lives.