How nervous should we be about talking about/recommending action on AI risk?
I think a lot of people in the EA community worry that AI risk is “weird”, sufficiently weird that you should probably be careful about discussing it with a broad audience or recommending it as a donation target, for fear of alienating people or damaging credibility. (This applies especially when “AI risk” means the existential risks from AI, as opposed to, e.g., the inadvertent bias or prejudice that algorithms can cause.)
A thought experiment to make this more concrete: imagine you were organising a big sponsored event where lots of people would see three recommended charities. Would you make (say) MIRI one of the three?
This is a complex question. But I think I agree with whoever it was (Eliezer?) who said that there are weirdness points: you are allowed to be only so weird before people stop taking you seriously. You can decide to spend those weirdness points how you like, but once you spend them, they’re gone. AI risk is obviously a lot more expensive in weirdness points than, say, deworming. So you’ll be able to talk about it less before people start thinking of you as the weird AI-obsessed guy.
I do think, though, that you can still do it, if you can explain that you’re using the same processes (expected value and so on) to reach your conclusions about AI as you did for more prosaic interventions like bednets or deworming. That’s sort of what I did here. And if you pretend that AI/X-risk isn’t part of what you’re worrying about, it looks like you’re doing a Scientology: hiding the weird stuff behind a friendly facade.
All that being said, in your concrete example, I wouldn’t include MIRI unless you’re really sure that’s where you want to go. I speak as someone who really likes MIRI! But if it’s a “this is your first taste of effective altruism” deal, then you’re already asking people to take on board the idea that, actually, donating to Cancer Research UK is severely suboptimal and you should give it all to very specific infectious-disease charities in sub-Saharan Africa or whatever. That’s weird and counterintuitive enough already, and I think it’s probably wisest to take people along that route one step at a time.
“But I think I agree with whoever it was (Eliezer?) who said that there are weirdness points: you are allowed to be only so weird before people stop taking you seriously.”
The most often cited post on this is Peter Hurford’s You have a set amount of “weirdness points”. Spend them wisely. But the concept/term isn’t original to Peter, given that:
Peter opens the post by writing: “I’ve heard of the concept of ‘weirdness points’ many times before, but after a bit of searching I can’t find a definitive post describing the concept, so I’ve decided to make one.”
A commenter on the post notes that “the idiom used to describe that concept in social psychology is ‘idiosyncrasy credits’, so searching for that phrase produces more relevant material (though as far as I can tell nothing on Less Wrong specifically)”.