I think the real-life scenarios where AI kills the most people today are governance stuff and military stuff.
I feel like I have heard the most unhinged, haunted uses of LLMs in government and policy spaces. I think certain people have just "learned to stop worrying and love the hallucination". They are living like it is the future already, getting people killed with their ignorance, and spreading/using AI BS in bad faith.
Plus, there is already a lot of slaughterbot stuff going on, e.g. the "robots first" war in Ukraine.
Maybe job automation is worth mentioning too. I believe Andrew Yang's stance, for example, is that it is already largely here and most people just do have less labor power already, but I could be mischaracterizing this. I think "jobs stuff" plausibly shades right into doom via "industrial dehumanization" / gradual disempowerment. In the meantime it hurts people too.
Thanks for everything, Holly! Really cool to have people like you actively calling for an international pause on ASI!
Hot take: even if most people hear a really loud-ass warning shot, it is just going to mess with them a lot, not drive change. What are you even expecting typical poor and middle-class nobodies to do?
March in the street and become activists themselves? Donate somewhere? Post on social media? Call representatives? Buy ads (likely from Google or Meta)? Divest from risky AI projects? Boycott LLMs / AI companies?
Ya, okay, I feel like the pathway from "worry" to any of that is generally very windy, but sure. I still feel like that is just a long way from the kind of galvanized political will and real change you would need for, e.g., major AI companies with huge market caps to get nationalized or wiped off the market or whatever.
I don't even know how to picture a transition to an intelligence-explosion-resistant world, and I am pretty knee-deep in this stuff. I think the road from here to a good outcome is just too blurry a lot of the time. It is easy to feel, and to be, disempowered here.