My big take-away is seeing fear itself as a policy-relevant variable, and that effective AI governance must consider emotional infrastructure alongside institutional infrastructure.
I’m left wondering how psychology concepts like self-determination theory scale from the individual/micro level to the macro level (e.g. collective action, institutional behavior, movement building).
On a meta-note: as a career advisor in this space, a common bottleneck I observe among mid-career professionals is deep uncertainty about how non-technical experts can contribute to reducing catastrophic risk from AI. I hope this work signals what multi-disciplinary thinking can bring to the field.