I’m not Hayden, but I think behavioural science is a useful area for thinking about AI governance, in particular the design of human-computer interfaces. One example with currently widely deployed AI systems is recommender engines (though this isn’t an HCI example). I’m trying to understand the tendencies of recommenders towards biases like concentration or contamination problems, and how these affect user behaviour and choice. I’m also interested in how far what recommenders optimise for does or doesn’t capture users’ values, whether that’s because of a misalignment of values between the user and the company, or because human preferences are complex and just really hard to learn. In doing this, it’s really tricky to distinguish in the wild between the choice architecture (the behavioural parts) and the algorithm itself when attributing users’ actions to one or the other.
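The concentration tendency mentioned above can be illustrated with a toy simulation (a minimal sketch with entirely made-up parameters, not a model of any real recommender): if a system ranks items by past clicks, and users mostly click what they are shown, exposure feeds back into future rankings and clicks pile up on a few items.

```python
import random

random.seed(0)

NUM_ITEMS = 50
ROUNDS = 2000
CLICK_PROB = 0.8  # hypothetical: users click the shown item 80% of the time

# Start every item with one click so all items have nonzero weight.
clicks = [1] * NUM_ITEMS

for _ in range(ROUNDS):
    # Crude popularity-based ranking: recommend an item with probability
    # proportional to its past clicks.
    item = random.choices(range(NUM_ITEMS), weights=clicks)[0]
    # Choice-architecture effect: users mostly accept what is recommended,
    # so today's exposure becomes tomorrow's ranking signal.
    if random.random() < CLICK_PROB:
        clicks[item] += 1

# Under a uniform baseline, the top 5 of 50 items would get ~10% of clicks.
top_share = sum(sorted(clicks, reverse=True)[:5]) / sum(clicks)
print(f"share of clicks on top 5 of {NUM_ITEMS} items: {top_share:.0%}")
```

The point of the sketch is the attribution problem from the paragraph above: the concentration emerges from the interaction of the ranking rule and the users' click behaviour, so you can't read off from the outcome alone which one "caused" it.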
Hi both,
Yes, behavioural science isn’t a topic I’m super familiar with, but it seems very important!
I think most of the focus so far has been on shifting norms/behaviour at top AI labs, for example nudging labs towards Publication and Release Norms for Responsible AI.
Recommender systems are a great example of a broader concern. Another is lethal autonomous weapons, where a big focus is “meaningful human control”. Automation bias is an issue even up to the nuclear level: the concern is that people will trust ML systems more blindly, and won’t disbelieve them the way people did in several Cold War close calls (e.g. Petrov not believing his computer’s warning of an attack). See Autonomy and machine learning at the interface of nuclear weapons, computers and people.
Jess Whittlestone’s PhD was in Behavioural Science; she’s now Head of AI Policy at the Centre for Long-Term Resilience.