Great to see a nuanced, different perspective.
I'd be interested in how work on existing multi-agent problems could be translated into improving the value alignment of a potential singleton (reducing the risk of theoretical abstraction uncoupling from reality).
Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?
Don't think so. It's too broad and speculative, with ill-defined values. It just boils down to (a) whether my scenarios are more likely than the AI-Foom scenario, and (b) whether my scenarios are more neglected. There aren't many other factors that a complicated calculation could add.