Hey Peter, thanks for writing this up.
I agree with (and really appreciate) Max's comment, so perhaps there isn't a need for a grand strategy. However, I suspect there are still many good opportunities to do research to understand and change attitudes and behaviours related to AI safety, provided that work is carefully co-designed with experts.
With that in mind, I wanted to ask that READI be kept in the loop about anything that comes out of this.
We might be interested in helping in some way. For instance, that could be a literature/practice review of what is known about influencing desired behaviours, surveys to understand related barriers and enablers, experimental work to test the impact of potential or ongoing interventions, and/or brainstorming and disseminating approaches for 'systemic change' that might be effective.
Ideally, anything we did together would be done in collaboration with, or supervised by, individuals with more domain-specific expertise (e.g., Max and others working in the field) who could ensure it is well-planned and useful in expectation, and who could leverage and disseminate the resulting insights. We have a process that has worked well on other projects and could potentially make sense here too.
Also, have you seen this? https://docs.google.com/document/d/1KqbASWSxcGH1WjXrgfFTaDqmOxn3RWzfVw28mrFP74k/edit#
Thank you so much for your feedback on my post, Peter! I really appreciate it.
It seems like READI is doing some incredible and widely applicable work! I would be extremely excited to collaborate with you, READI, and people working in AI safety on movement-building. Please keep an eye out for a future forum post with some potential ideas on this front! We would love to get your feedback on them as well.
(And thank you very much for letting me know about Vael’s extremely important write-up! It is brilliant, and I think everyone in AI safety should read it.)