Five types of people on AI risks:
1. Wants AGI as soon as possible, ignores safety.
2. Wants AGI, but primarily cares about alignment.
3. Doesn’t understand AGI/doesn’t think it’ll happen anytime during her lifetime; thinks about robots that might take people’s jobs.
4. Understands AGI, but thinks the timelines are long enough not to worry about it right now.
5. Doesn’t worry about AGI; being locked in to our choices and “normal accidents” are both more important/risky/scary.
Mmm, I’d quibble with the implication that believing robots could take people’s jobs means a person doesn’t understand AGI...