In terms of policy recommendations, these differences don’t seem to matter.
Maybe I’m nitpicking, but I see this point often and I think it’s a little too self-serving. There are definitely policy ideas in each sphere that trade off against the other’s. E.g., many AI x-risk policy analysts (used to) want few players in order to reduce race dynamics, while such concentration of power would be bad for present-day harms. Or keeping significant chip production out of developing countries.
More generally, if governments really took x-risk seriously, they would be willing to sacrifice significant civil liberties, which wouldn’t be acceptable at low x-risk estimates.
That’s a good note. But it seems to me a little like pointing out there’s a friction between a free market policy and a pro-immigration policy because
a) Some pro-immigration policies would be anti-free-market (e.g. anti-discrimination law)
b) Americans who support one tend to oppose the other
While that’s true, the positions philosophically support each other, and most pro-free-market policies are presumably neutral or positive for immigration.
Similarly, you can endorse the principles that guide AI ethics while endorsing less popular solutions because of additional x-risk considerations. If there are disagreements, they aren’t about moral principles but about empirical claims (x-risk clearly wouldn’t be an outcome AI ethics proponents support). And the empirical claims themselves (“AI causes harm now” and “AI might cause harm in the future”) support each other and were correlated in my sample. My guess is that they correlate in academia as well.
It seems to me the negative effects of the concentration of power can be eliminated by other policies (e.g. the Digital Markets Act, the Digital Services Act, tax reforms).