Thank you for this post. I agree that institutions can take AI risk quite seriously (e.g. banning harmful AI and mandating human oversight in the most recent EU AI white paper) and that their regard can increase over time (compare the 2021 white paper with the 2020 one, which focuses on ‘winning the race’). Still, some institutions may have a way to go (e.g. the specifics of ‘algorithmic bias’ measurement in the EU).
As a European seeking to advance AI safety, I offer an anecdote: my college agreed to subsidize ski trips to address disadvantaged groups’ exclusion from skiing. Following this, angry flyers appeared in bathrooms demanding more subsidies. The flyers changed nothing (the outdoor subsidy scheme diversified the following year anyway), but an approach like this can demotivate well-meaning entities.
A parallel can be drawn in AI safety advocacy. I think it is quite awesome that so much positive development has occurred in just one year. We can only work at the pace of the legislators’ will. (Judging from the Commission’s highly understanding response to the Citizens’ Initiative on banning confinement in animal agriculture, there are people willing to take innovative approaches to safety seriously.) Otherwise, we could demotivate well-meaning entities.
However, this parallel is likely not applicable in reality. Legislators (EU, US) are looking for useful insights that answer their questions, are relevant to AI safety, and are presented concisely. I am aware of only one online submission by EA community members (CSER co-authors) to a governance body. Rather than concisely and actionably addressing the mandate of the Commission’s project, that piece seems to talk broadly about somewhat related research that should be done. So, it is a step backward: it presumes that the Commission does not care and that broad talk is needed to spur the Commission to action. I would suggest that this is not due to trauma, considering the explicit welcomingness of the EU’s HUMAINT project and the EU’s general reputation for being thoughtful about its legislation.
So, I can add to your recommendations that people can also review the development of an institution’s (or department’s, or project’s, …) thinking about a specific AI topic (going back up to 10 years, or less) and understand its safety objectives. Then, any support of the institution’s AI safety work can be much more effective, whether it comes from the reviewer or from an expert in (researching and) addressing the specific concerns.
I almost forgot a comment on your language: you repeatedly use ‘screw’, perhaps to denote the spirit of this post, a sense of powerlessness to change others’ thinking (which I argue is inaccurate, because institutions are already developing their safety considerations) that would be addressed by force or the threat of it. This framing is suboptimal, because alluding to the threat of force can reduce people’s ability to think critically.
Alternatives include: 1) thinking ‘I cannot trust these institutions,’ 2) considering it healthy to think ‘that one should leave the relationship,’ 3) recognizing that trying to ‘screw together a different humanity’ is not an effective strategy for dealing with the world, and 4) ‘seeing what emotions due to which perceived approaches you feel’: resentment (due to ignorance, personal disrespect, closed-mindedness, unwillingness to engage in rational dialogue, …), hate (due to inability to gain power over another, inconsideration of certain individuals, non-confirmation of one’s biases, …), suspicion (due to the institution’s previously limited engagement with a topic or its limited deliberation about risky actions), shame (due to your limited ability to contribute usefully while increasing safety, the institution’s limited capacity to develop sound legislation, …), fear (due to the risk of reputational loss, limited ability to know what will be well received, harsh rejections of previous arguments, …), etc. Then, analyze whether these emotions are rationally justified and, if so, how they can best be addressed.