I’d love to hear his thoughts on defensive measures for “fuzzier” threats from advanced AI, e.g. manipulation, persuasion, “distortion of epistemics”, etc. Since it seems difficult to delineate when these sorts of harms are occurring (as opposed to benign forms of advertising/rhetoric/expression), it seems hard to construct defenses.
A related concept is mechanisms for collective epistemics, like prediction markets or community notes, which Vitalik praises here. But the harms from manipulation are broader, and could route through “superstimuli”, addictive platforms, etc., beyond just the spread of falsehoods. See the manipulation section here for related thoughts.
And also: his thoughts on the “AI race” risk, a.k.a. Moloch, a.k.a. https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic