Three people said that, in working to bridge this gap, it seems important that these meetings be facilitated in a way that ensures they do not lead to a (further) divide between communities. This mainly refers to STEM people interpreting a risk-reducing focus as an attempt to hinder progress. They suggested that open communication, mutual respect, and finding common ground should be prioritized to ensure a healthy exchange of ideas and prevent misunderstandings.
There is a problem here. Many of us in the EA/X-risk community (including me) think that it would be better if AI progress was significantly slowed down. I think that asking us to play down this disagreement with at least some of the STEM community would be very bad.
I’m generally in favour of bridge building, and, as you write in the post, such interactions can be very beneficial. I do think that a good starting point is to assume everyone has good intentions (mutual respect), but I don’t think finding common ground should be prioritized. I think it’s more valuable to acknowledge and discuss differences in opinion.
I’m working as an AI Safety field builder; feel free to reach out.
“If the Foresight Institute aims to contribute to the existential risk field, it would be more valuable to invest its efforts in debunking widely-accepted existential risk claims and strategies, especially those intending to restrict or slow down technological advancement.”
I think this is written by an EA? It seems like they personally are not in favour of slowing down AI, and I think they share this opinion with many other EAs. I don’t think we have a community consensus on this. But saying that it’s a myth that should be debunked is just wrong.
There are many EAs who signed this letter (Pause Giant AI Experiments: An Open Letter—Future of Life Institute), and FLI is arguably an EA org (whether or not they identify as EA, they are part of the EA/X-risk/long-termism network). I know of others in our network who think this letter doesn’t go far enough.
That’s a good point. I’m unsure what the best way of facilitating these meetings would be, so that the seriousness of the questions isn’t downplayed. But assuming good intentions, allowing for disagreements, and acknowledging the differences is both enough and the best option.