Having thought more about this, I suppose you can divide opinions into two clusters and still be pointing at something real. That’s because people’s views on different aspects of the issue correlate, often in ways that make sense. For instance, people who think AGI will be achieved by scaling up current (or very similar) neural net architectures are more excited about practical alignment research on existing models.
However, such clusters would be quite broad. My main worry is that identifying two particular points as prototypical of them would narrow the clusters’ range: people would tend to let their opinions drift toward whichever point is closest to them. This need not be caused by tribal dynamics. It could be something as simple as availability bias. This narrowing of the clusters would likely be harmful, because the AI safety field is quite new and we’ve still got exploring to do. Another risk is that we may become too focused on the axis between the two points, neglecting other, potentially more worthwhile, axes of variation.
If I were to divide current opinions into two clusters, I think Scott’s two points would in fact fall in different clusters, and probably not far from their centers of mass. However, I strongly object to pretending the clusters are points, and then getting tribal about it. Labeling the clusters could be useful, as long as we made it clear that they are still clusters rather than points.
On the question of paths to understanding AI risk without accepting weird arguments, getting people worried about ML unexplainability might be worth exploring, though I suspect most people would take that as pointing to algorithmic bias and the like.
Thank you!