Maybe one could send a free copy of Brian Christian's "The Alignment Problem" or Russell's "Human Compatible" to the office addresses of all AI researchers who might find it interesting?
I saw Jeff Hawkins mention (in some online video) that someone had sent Human Compatible to him unsolicited, but he didn't say who. And then (separately) a bit later the mystery was resolved: I saw some EA-affiliated person or institution mention that they had sent Human Compatible to a bunch of AI researchers. But I can't remember where I saw that, or who it was. :-(
Interesting anyway, thanks! Did you by any chance notice whether he reacted positively or negatively to being sent the book? I was a bit worried it might be considered spammy. On the other hand, I remember reading that Andrew Gelman regularly gets sent copies of books he might be interested in, in the hope that he'll write a blurb or review, so maybe it's just a thing that happens to scientists and one needn't be worried.
See here: the first post is a video of a research meeting where he talks dismissively about Stuart Russell's argument, and then the ensuing forum discussion features a lot of posts by me trying to sell everyone on AI risk :-P
(Other context here.)
Perfect, so he appreciated it despite finding the accompanying letter pretty generic, and he took it that he received it because someone (the letter listed Max Tegmark, Yoshua Bengio, and Tim O'Reilly, though w/o signatures) believed he'd find it interesting and that the book is important for the field. Pretty much what one could hope for.
And thanks for the work trying to get them to take this more seriously; it would be really great if you could find more neuroscience people to contribute to AI safety.