I’m seeing a few comments so far with the sentiment that “lawsuits don’t have the ultimate aim of reducing x-risk, so we shouldn’t pursue them”. I want to push back on this.
Let’s say you’re an environmental group trying to stop a new coal power plant from being built. You notice that the proposed site has not gone through proper planning permissions, and the locals think the plant will ruin their nice views. They are incredibly angry about this, and are mounting protests and lawsuits over it. Do you support them?
Under the logic above, the answer would be no. Your ultimate aim has nothing to do with planning permissions or nice views; it’s stopping carbon emissions. If the plant were moved to a different location, the locals’ objections would be satisfied, but yours wouldn’t be.
But you’d still be insane not to support the locals here. The lawsuits and protests damage the coal project in terms of PR, money, and delays. New sites are hard to find, and it’s quite possible that if the locals win, the project will end up cancelled. Most of the work is being done by people who wouldn’t otherwise have helped your cause (and who might be persuaded to join it in solidarity!). And while protecting nice views may not be your number one priority, it’s still a good thing to do.
I hope you can see that in this analogy, the AI x-risk person is the environmental group and the AI ethics person is the locals (or vice versa, depending on which view you hold). Sure, protecting creatives from plagiarism might not be your highest priority, but forcing compliance with creatives’ claims might also have the side effect of slowing down AI development for all companies at once, which you may think helps with x-risk. And it’s likely to be easier to achieve than a full AI pause, thanks to the greater base of support.
Very well put. Love the detailed analogy.
I recognise that what will appeal to others here who are concerned about extinction risk are the instrumental reasons. And those instrumental reasons are sufficient grounds to offer some money to cash-strapped communities organising to restrict AI.
(From my perspective, paths to extinction involve a continuation of current harmful AI exploitation, but that’s another story.)