[I am not an expert on any of this.]

Is that tweet the only (public) evidence that Andrew Yang understands/cares about x-risk?

A cynical interpretation of the tweet is that we learned that Yang has one (maxed-out) donor who likes Bostrom.

My impression is that: 1) it'd be very unusual for somebody to come away from one phone call with much understanding of x-risk; 2) sending out an enthusiastic tweet would be the polite/savvy thing to do after taking a call that a donor enthusiastically set up for you; 3) a lot of politicians find it cool to spend half an hour chatting with a famous Oxford philosophy professor with mind-blowing ideas. I think there are a lot of influential people who'd be happy to take a call on x-risk but wouldn't understand or feel much different about it than the median person in their reference class.

I know virtually nothing about Andrew Yang in particular, and that tweet is certainly *consistent* with him caring about this stuff. I'm just wary of updating *too* much.

On Yang's site (a):

> Advances in automation and Artificial Intelligence (AI) hold the potential to bring about new levels of prosperity humans have never seen. They also hold the potential to disrupt our economies, ruin lives throughout several generations, and, if experts such as Stephen Hawking and Elon Musk are to be believed, destroy humanity.

Cool. That's a bit more distinctive, although not more than what Hillary Clinton said in her book:

https://lukemuehlhauser.com/hillary-clinton-on-ai-risk/

> Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization." Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I'd start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

Yeah, though from my quick look it's not mentioned on her 2016 campaign site: 1, 2

I am in general more trusting, so I appreciate this perspective. I know he's a huge fan of Sam Harris and has historically listened to his podcast, so I imagine he's heard Sam's thoughts (and maybe Stuart Russell's thoughts) on AGI.

Presumably that should say "that Yang has one (maxed out) donor who likes Bostrom"

Thanks

If I recall correctly, Weinstein & Yang talk about apocalyptic / dark-future stuff a bit during their interview, but not about x-risk specifically.