I’m a bit concerned that both of your arguments here are somewhat strawmannish, but again I might be missing something.
Indeed, my comment was regarding the 99.999 percent of people (including myself) who are not AI researchers. I completely agree that researchers should be working on the latest models and paying for ChatGPT-4, but that wasn’t my point.
I think it’s borderline offensive to call people “Amish” who boycott potentially dangerous tech which can increase productivity. First, it could be offensive to the Amish, as you seem to be using it as a pejorative, and second, boycotting any one technology for harm-minimisation reasons while using all other technology can’t be compared to the Amish way of life. I’m not saying boycott all AI; that would be impossible anyway. Just perhaps not contributing financially to the company making the most cutting-edge models.
This is a big discussion, but I think dismissing not paying for ChatGPT under the banner of poor scope sensitivity and virtue signalling is weak at best and strawmanning at worst. The environmentalists I know who don’t fly don’t do it to virtue signal at all; they are doing it to help the world a little and show integrity with their lifestyles. This may or may not be helpful to their cause, but the little evidence we have also seems to show that more radical actions like this do not alienate regular people but instead pull people towards the argument you are trying to make, in this case that an AI frontier arms race might be harmful.
I actually changed my mind on this after seeing the forum posts here a few months ago; I used to think that radical life decisions and activism were likely to be net harmful too. What research we have on the topic shows that more radical actions attract more people to mainstream climate/animal activist ideals, so I think your comment that it “has knock-on effects in who is attracted to the movement, etc.” is more likely to be wrong than right.
Indeed, my comment was regarding the 99.999 percent of people (including myself) who are not AI researchers. I completely agree that researchers should be working on the latest models and paying for ChatGPT-4, but that wasn’t my point.
I’d extend this not just to include AI researchers, but people who are involved in AI safety more generally. But on the question of the wider population, we agree.
The environmentalists I know who don’t fly don’t do it to virtue signal at all; they are doing it to help the world a little and show integrity with their lifestyles, which is admirable whether you agree it’s helpful or not.
“show integrity with their lifestyles” is a nicer way of saying “virtue signalling”; it just happens to be signalling a virtue that you agree with. I do think it’s an admirable display of non-selfishness (and far better than vice signalling, for example), but so too are plenty of other types of costly signalling, like asceticism. A common failure mode for groups of people trying to do good is “pick a virtue that’s somewhat correlated with good things and signal the hell out of it until it stops being correlated”. I’d like this not to happen in AI safety (more than it already has: I think this has already happened with pessimism-signalling, and conversely happens with optimism-signalling in accelerationist circles).
“show integrity with their lifestyles” is a nicer way of saying “virtue signalling”,
I would describe it more as a spectrum. On the more pure “virtue signalling” end, you might choose one relatively unimportant thing like signing a petition, then blast it all over the internet while not taking other, more important actions that would serve the cause.
Whereas on the other end of the spectrum, “showing integrity with lifestyle” to me means something like making a range of lifestyle choices which might make only a small difference to your cause, while making you feel like you are doing what you can on a personal level. You might not talk about these very much at all.
Obviously there are a lot of blurry lines in between.
Maybe my friends are different from yours, but the climate activists I know often don’t fly, don’t drive and don’t eat meat. And they don’t talk about it much or “signal” this either. But when they are asked about it, they explain why. This means that when they get challenged in the public sphere, both neutral people and their detractors lack personal ammunition to cast aspersions on their arguments, so their position becomes more convincing.
I don’t call that virtue signaling, but I suppose it’s partly semantics.