As an Australian, and therefore beholden to both China and the USA, the answer doesn’t seem so clear cut to me. China has what seems to be an aggressive green agenda and a focus on social cohesion/harmony that shades into oppression. They seem able to get massive engineering projects completed and don’t seem interested in getting involved in other countries’ politics via proxy wars. Apparently they’re also alright with harvesting the organs of political prisoners.
America puts itself forward as the bastion of freedom, but has massive inequality, a large prison population, and can’t figure out universal healthcare. Americans are creative, confident, and murder each other frequently. Their president is a Christian who loves to grab pussies and dreams of hereditary rule.
My personal preference is to take my chances with unaligned ASI, as the thought of either of these circuses being ringmaster for all eternity is terrifying. I’d much rather be a paperclip than a communist/corporate serf.
I don’t want to harp too much on “lived experiences”, but the stated and revealed preferences of people actually living in the US or China strongly suggest that most of them would not share that preference. It’s possible you’d have an unusual preference if you lived in those countries, but I currently suspect otherwise.
An average North Korean may well think that AGI based on their values would be a great thing to take over the universe, but most of us would disagree. The view from inside a system is very different from the view from the outside. Orwell spoke of a jackboot on the face of humanity, forever. I feel like the EA community is doing its best to avoid that outcome, but I’m not sure the major world powers are. Entrenching the power of current world governments is unlikely, in my view, to lead to great outcomes. Perhaps the wild card is a valid choice.

More than I want to be a paperclip, I want to live in a world where building a billion humanoid robots is not a legitimate business plan and where AGI development proceeds slowly, slowly. That doesn’t seem to be an option. So maybe no control of AGI is better than control by psychopaths?
I guess the crux of my snarky comment is that if your only choice for master of the universe is between two evil empires, you’re kinda screwed either way.
Yeah, kinda hoping that 1) there exists a sweet spot for alignment where AIs are just nice enough from, e.g., good values picked up during pre-training, but can’t be modified so much during post-training that they end up with worse values, and 2) given that this sweet spot does exist, we actually hit it with AGI / ASI.
I think there’s some evidence pointing to this happening with current models, but I’m not highly confident it means what I think it means. If this is the case, though, further technical alignment research might be bad and acceleration might be good.