If AI doomers think the expected harms of AI are too low to justify even temporary tweaks to US immigration policy, that suggests the risk of AI killing us all isn’t that high.
I agree with this up until:
Focusing on immigration isn't a clear win, in that it would require expending political capital and lobbying resources and could burn a lot of credibility among Democrats.
But I think the deeper issue is that this doesn't seem like a good way of identifying the truth. You could argue that if the doom worldview implies we should make immigration changes, and people holding that worldview irrationally reject them, then we can be somewhat more skeptical of their reasoning in general.
However, for basically any group you can find one thing it is irrational about and then use that to try to discredit it. So this isn't a very reliable method of reasoning.
Thanks for the feedback.
When I was writing this, and when I think about AI risk in general, I tend as someone without an ML background to fall back on non-technical heuristics like interest rates or the market caps of hardware companies. So I am perhaps influenced more than a more technical person would be by these kinds of meta or revealed-preference arguments.
I think Democrats (and left-wingers in other countries) could embrace increasing high-skilled immigration in ways that steer talent away from AI. In the US, H-1B visas could be changed to prohibit work on AI (or on certain types of AI), and federal science funding could steer people away from the field. So there is a path for Democrats to use immigration to reduce AI risk. The right could potentially use all three tactics as well.
I guess my perspective is that all these revealed preferences show is that people prefer to maintain their social status (the benefit accrues to them personally) rather than support an unpopular change that is extremely unlikely to happen, and where their support is extremely unlikely to make a difference (the benefits are distributed).
So even if I accept this method of finding truth, it shows less than it might appear at first glance.