“If the Foresight Institute aims to contribute to the existential risk field, it would be more valuable to invest its efforts in debunking widely-accepted existential risk claims and strategies, especially those intending to restrict or slow down technological advancement.”
I think this was written by an EA? It seems like they personally are not in favour of slowing down AI, and I think they share this opinion with many other EAs. I don’t think we have a community consensus on this. But saying that it’s a myth that should be debunked is just wrong.
There are many EAs who signed this letter, and FLI is arguably an EA org (whether or not they identify as EA, they are part of the EA/X-risk/long-termism network).
Pause Giant AI Experiments: An Open Letter—Future of Life Institute
I know of others in our network who think this letter doesn’t go far enough.
That’s a good point. I’m unsure what the best way of facilitating these meetings would be so that it doesn’t downplay the seriousness of the questions. But assuming good intentions, allowing for disagreements, and acknowledging the differences is enough, and the best option.