[Main takeaway: to some degree, this might increase the expected value of making AI safety measures performant.]
One potential avenue for harm I thought of:
Consider the forces pushing for untethered, rapid, maximal AI development and those pushing for controlled, safe, possibly slower development. Suppose something happens in the future that makes the “safety forces” much stronger than the “development forces”: an AI accident, say, that causes significant harm, generates a lot of public attention, and leads to regulations being imposed on AI development. That shift could slow AI development or mean AI never gets as advanced as it otherwise would. I haven’t read much of the case for economic growth improving welfare, but if those arguments hold and this scenario significantly reduces economic growth, and thus welfare, then it could be one avenue for harm.
There are some caveats to this scenario:
If AI safety work goes really well, it may not hinder AI development or performance at all. I’m not yet very knowledgeable about the field, but from what I’ve heard, making AI safety measures performant is an area of active consideration (and possibly active work). If development and performance aren’t hindered, economic growth is unaffected, and the scenario above causes no harm.
This line of reasoning also relies on the harm from stunted economic growth outweighing the benefit of safer AI, which is a very questionable assumption.