Yeah, I think that's right. I think making bad stuff more salient can make it more likely in certain contexts.
For example, it seems naive to constantly transmit detailed information, media, and discussion about specific weapons platforms, raising the salience of capabilities you really hope bad actors don't develop because they would make them too strong. I just read "Power to the People: How Open Technological Innovation Is Arming Tomorrow's Terrorists" by Audrey Kurth Cronin, and it feels very relevant here. Sometimes I worry about EAs doing unintentional advertising for e.g. bioweapons and superintelligence.
On the other hand, I think that topics like s-risk are already salient enough for other reasons. Extreme cruelty and torture have arisen independently many times throughout history and nature. And there are already ages' worth of pretty unhinged torture-porn writing on other parts of the internet, from the Christian conception of hell to horror fiction.
This seems sufficient to say we are unlikely to significantly increase the likelihood of "blind grabs from the memeplex" leading to mass suffering. Even cruel torture is already pretty salient. And suffering is in some sense simple if it is just "the opposite of pleasure" or whatever; utilitarians already commonly talk in these terms.
I will agree that it's sometimes bad to carelessly spread memes about specific harmful stuff. I don't always know how to navigate the trade-offs here; probably there is at least some material broadly related to GCRs and s-risks which is better left unsaid. But a lot of stuff related to s-risk is there whether you acknowledge it or not. I submit to you that surely some level of "raise awareness so that more people and resources can be used on mitigation" is necessary/good?