Thank you for this excellent post, Archana! There is so much to discuss here, but I am particularly excited about two points you make:
You wrote that “Although some AI risk work is purely technical, a possible majority of potential risks...are deeply sociotechnical and dependent on institutional practices such as decision-making, coordination, and collaboration procedures, emergency stopgaps, and norms.” I completely agree! One broad institutional trend I’m particularly concerned about is the fragmentation of work in the tech industry. I am under the impression that work in the tech industry and elsewhere may be increasingly divided into discrete, atomized projects. It may be increasingly normal for people to switch jobs frequently and to conceive of their careers as divisible into many small, often unrelated tasks. This organization of work may promote specialization and improve economic efficiency in the short run, but it does not encourage people to think about the long-term social consequences of their work. I’d love to see research on whether people would develop potentially dangerous technologies more carefully if they thought of their projects as continuous processes requiring perpetual follow-up, rather than as tasks to be done with as soon as possible.
I agree that ethics drift is an issue that needs to be discussed in EA. There is such a stress on moral consistency in EA circles that few people seem to recognize that moral values may be a malleable product of social interactions and political debates. However, I’m wondering if ethics drift is always something to be avoided. Some of the dominant ethical values today may appear indisputable only because powerful actors are backing them up to maintain their hegemonic position in society. I’m not sure what these values are today (e.g. anthropocentrism? neoliberalism?? effective altruism?????), but surely some deeply held ethical values of the past no longer seem fully politically correct today (e.g. Confucianism). Some ethical values may deserve to drift. I am just as worried about ethics entrenchment as about ethics drift. To me, the key seems to be a forum that allows all sorts of ethical views—old and new—to be debated and carefully evaluated.
Thank you so much for writing this. It was very comprehensive and highlighted how the intersection of social values and technology may be overlooked in EA.
I especially liked how the “societal friction, governance capacity, and democracy” section of the post ties together strengthening democracy, inter-group dynamics, disenfranchised groups, and long-term technological development risk through the path-dependence framework; it seems like a very relevant and eloquent explanation for the government competence (or lack thereof) that we see play out even in current events.
A common argument is that, on the margin, short- and medium-term AI issues are likely not neglected (as opposed to long-term issues), so one would not be able to make a big impact there. I’d especially be curious about targeted, tractable interventions you believe may be worth looking into, where an additional EA on the margin would make a counterfactual impact or significantly leverage existing resources.