In light of this discussion about whether people would find this article alienating, I sent it to four very smart/reasonable friends who aren’t involved in EA, don’t work on AI, and don’t live in the Bay Area (definitely not representative TIME readers, but maybe representative of the kind of people EAs want to reach). Given I don’t work on AI/have only ever discussed AI risk with one of them, I don’t think social desirability bias played much of a role. I also ran this comment by them after we discussed. Here’s a summary of their reactions:
Friend 1: Says it’s hard for them to understand why AI would want to kill everyone, but acknowledges that experts know much more about this than they do and takes seriously that experts believe this is a real possibility. Given this, they think it makes sense to err on the side of caution and drastically slow down AI development to get the right safety measures in place.
Friend 2: Says it’s intuitive that AI being super powerful, not well understood, and rapidly developing is a dangerous combination. Given this, they think it makes sense to implement safeguards. But they found the article overwrought, especially given missing links in the argument (e.g., they think it’s unclear whether/why AI would want our atoms, given immense uncertainty about what AI would want; compared their initial reaction to this argument to their initial reaction to Descartes’ ontological argument).
Friend 3: Says they find this article hard to argue with, especially because they recognize how little they know on the topic relative to EY; compared themselves disagreeing with it to anti-vaxxers arguing with virologists. Given the uncertainty about risks, they think it’s pretty obvious we ought to slow down.
Friend 4: Says EY knows vastly more about this issue than they do, but finds the tone of the article a little over the top, given missing links. Remains optimistic AI will make the world better, but recognizes possible optimism bias. Generally agrees there should be more safeguards in place, especially given there are ~none.
Anyways, I would encourage others to add their own anecdata to the mix, so we can get a bit more grounded on how people interpret articles like this one, since this seems important to understand and we can do better than just speculate.
This comment was fantastic! Thanks for taking the time to do this.
In a world where the most prominent online discussants tend to be weird in a bunch of ways, we don’t hear enough reactions from “normal” people who are in a mindset of “responding thoughtfully to a friend”. I should probably be doing more friend-scanning myself.
I strongly disagree with sharing this outside rationalist/EA circles, especially if people don’t know much about AI safety or x-risk. I think this could drastically shift someone’s opinion on Effective Altruism if they’re new to the idea.
This article was published in TIME, which has a print readership of 1.6 million.
The article doesn’t even use the words “effective altruism”.
These non-EAs were open to the ideas raised by the article.
Value of information seems to exceed the potential damage done at these sample sizes for me.
Hi Wil,
Thanks for sharing your thoughts. I am slightly confused that your comment is overall downvoted (-8 total karma now). I upvoted it, but disagreed.
Given the typical correlation between upvotes and agreevotes, this is actually much more upvoted than you would expect (holding constant the disagreevotes).
I didn’t actually downvote, but I did consider it, because I dislike PR-criticism of people for disclosing true widely-available information in the process of performing a useful service.
Thanks for the feedback, Larks.
Fair point.
I think it makes sense to downvote if one thinks:
The comment should be less visible.
It would have been better for the comment not to have been published.
Thanks! It’s okay. This is a very touchy subject and I wrote a strongly opinionated piece so I’m not surprised. I appreciate it.
Thanks for reporting back! I’m sharing it with my friends as well (none of whom are in tech, and most of them live in fairly rural parts of Canada) to see their reactions.