I think there’s something quite interesting here...I feel like one of the main things I see in the post is sort of the opposite of the intended message.
(I realise this is an old post now but I’ve only just read it and—full disclosure—I’ve ended up reading it now because I think my skepticism about AI risk arguments is higher than it’s been for a long time and so I’m definitely coming at it from that point of view).
If I may paraphrase a bit flippantly, I think that one of the messages is sort of supposed to be: ‘just because the early AI risk crowd were very different from me and kind of irritating(!), it doesn’t mean that they were wrong’ and so ‘sometimes you need to pay attention to messages coming from outside of your subculture’.
But what actually happens in the narrative is that you only start caring about AI risk when an old friend who ‘felt like one of your own’, and who was “worried”, manages to make you “feel viscerally” about it. So it wasn’t that, without direct intervention from either ‘tribe’, you sat down with the arguments and data and understood things logically. Nor was it that you, say, found a set of AI/technology/risk experts to defer to. It was that someone with whom you had more of an affinity made you feel that we should care more and take it seriously. This sounds rather like the opposite of the intended message, does it not? I.e. more attention was paid to an emotional appeal from an old friend than to whatever arguments were available at the time.
Yep, that’s all true. I think what I’m pointing to is that de facto people do decide what to pay attention to and what arguments to dig into based on arbitrary factors and tribalism. Ideally I’d have had some less arbitrary way to decide where to focus my attention, but here we are.