Yes, to some extent there is the thought “this will exist anyway” and “we’re in a race that I can’t stop”, but at some point someone very high up needs to turn their steering wheel. They say they are worried, but actions speak louder than words. Take the financial/legal/reputational hit and just quit! Make a big public show of it. Pull us back from the brink.
Or maybe I’m looking at it the wrong way. Maybe they think x-risk is “only” 10% likely, and they are willing to gamble the lives of 8 billion people for a shot at utopia that is 90% likely to succeed. In which case, I think they should be shut down, immediately. Where is their democratic mandate to do that!?
Another option is that they actually think that anything smart enough to be existentially dangerous is still a long way away, and statements that seem to imply the contrary are actually a kind of disguised commercial hype.
Or they might think that safety is relatively easy, and so long as you care about it a decent amount and take reasonable known precautions you’re effectively guaranteed to be fine. I.e. risk is under 0.01%, not 10%. (Yes, that is probably still bad on expected value grounds, but most people don’t think like that, and on person-affecting views where transformative AI would massively boost lifespans, it might actually be a deal most people would take.)
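To make the expected-value point concrete, here is a minimal back-of-the-envelope sketch. The 10% and 0.01% extinction probabilities and the 8 billion figure are the ones discussed above; everything else is just illustrative scaffolding, not anyone’s actual model.

```python
# Rough expected-value comparison, purely illustrative.
# Probabilities (10% vs. 0.01%) and the population figure come from the
# comment above; this is not a claim about anyone's real risk estimate.
population = 8_000_000_000

def expected_deaths(p_extinction: float) -> float:
    """Expected number of deaths if extinction occurs with probability p_extinction."""
    return p_extinction * population

print(f"At 10% risk:   {expected_deaths(0.10):,.0f} expected deaths")
print(f"At 0.01% risk: {expected_deaths(0.0001):,.0f} expected deaths")
# 10%    -> 800,000,000 expected deaths
# 0.01%  ->     800,000 expected deaths
# Both numbers are enormous in absolute terms, which is why even the
# "safety is easy" estimate can still look bad on naive expected-value grounds.
```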
“anything smart enough to be existentially dangerous is still a long way away”
I don’t think this is really a tenable position any more, post GPT-4 and AutoGPT. See e.g. Connor Leahy explaining that LLMs are basically “general cognition engines” and will scale to full AGI in a generation or two (especially with the addition of various plugins etc. to aid “System 2”-type thinking, which are now freely being offered by the AutoGPT enthusiasts and OpenAI). If this isn’t clear now, it will be in a few months once Google DeepMind releases the next version of its multimodal (text, images, video, robotics) AI.
Some experts still seem to hold it, e.g. Yann LeCun: https://twitter.com/ylecun/status/1621805604900585472 Whether or not they in fact have good reason to think this, it’s surely evidence that people at DeepMind could be thinking this way too.
I think multimodal models kind of make his points about text moot. GPT-4 is already text + images (making “LLM” a misnomer).