I’ve been thinking about transparent societies (democratic surveillance) for a while. I’m still concerned about free-thought effects: cultures living under radical transparency might develop a global preference-falsification monoculture, where everyone in the open world lies about what kind of world we want because of a repressive false consensus, crushing innovation, healthy criticism of mainstream ideas, and so on. But that concern is decreasing as I go; I think it’s going to turn out to be completely defeatable.
This will be approximate, I hope to do a full post about it eventually, but, a way of summing up my current view is...
Radical transparency is already steadily happening because it is incredibly useful (this surprises me too): celebrity, Twitter, disclosure movements, open-source intelligence.
Weird people will always exist, you will always have to look at them, no amount of social pressure will make them go away, and some of them are critical specialists whom we need and love. Most of the thinkers, doers, and processes of dialogue that I actually admire and respect are weird in a way that is resilient to those anti-weird, anti-free-thought effects we were worried about, and on most days I’m not really afraid of those effects at all.
People will start to exalt a new virtue of brazenness once they see that free thought is a hard dependency of original work. Everyone I know (including you) already sees that it is. Even transparency’s best critics are stridently admitting that it is. On the other side: the people who stop exploring when they’re being watched will also very visibly stop being able to produce any original thoughts at all. Communities of othering and repression of small differences will quickly become so insane and ineffective that they will alienate everyone who ever believed in them; even their own members will start to notice (this is already happening under the radical transparency of Twitter, which, note, interestingly, was completely voluntary, and mostly unremarked upon). And the people of brazenness will very visibly continue producing things, so I expect brazenness to become fashionable. Transparency will harm experimental work momentarily, if at all, before the great gardener sees in this new light that the pitiful things they’ve been treading on all this time were young flowers, and learns to be more careful with rough and burgeoning things. Then western culture will adapt to transparency, and we will fear it no more.
But the largest obstacle is that the technologies for fair transparency still don’t quite exist yet: consistent, reliable, convenient, and trustworthy recording systems, and methods for preventing harassment mobs (DDoS protection, better spam prevention). I’ve found, though, that the solutions to these issues (hardware, protocols, distributed storage, webs of trust) are not very complicated, and I think they’ll arrive without much deliberate effort.
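To make the webs-of-trust idea concrete, here is a minimal sketch of how trust propagation could gate incoming messages and so blunt harassment mobs. Everything here is hypothetical illustration, not a description of any existing system: the decay factor, the threshold, and the names in the endorsement graph are all made up.

```python
# Minimal web-of-trust sketch: trust decays along endorsement edges,
# and messages from senders whose score falls below a threshold are filtered.
from collections import deque

def trust_scores(edges, me, decay=0.5):
    """Breadth-first walk from `me`; each hop multiplies trust by `decay`,
    so people vouched for directly score higher than friends-of-friends."""
    scores = {me: 1.0}
    queue = deque([me])
    while queue:
        node = queue.popleft()
        for neighbor in edges.get(node, []):
            candidate = scores[node] * decay
            if candidate > scores.get(neighbor, 0.0):
                scores[neighbor] = candidate
                queue.append(neighbor)
    return scores

# Hypothetical endorsement graph: I vouch for alice, alice vouches for bob.
edges = {"me": ["alice"], "alice": ["bob"]}
scores = trust_scores(edges, "me")

def accept(sender, threshold=0.2):
    """A stranger with no path of endorsements scores 0.0 and is filtered."""
    return scores.get(sender, 0.0) >= threshold
```

Under these assumptions, alice (0.5) and bob (0.25) clear the threshold while an unconnected stranger does not; a mob of fresh accounts with no endorsement path simply never reaches you.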
The next largest obstacle is mass simultaneous adoption, which you rightly single out in the discussion of global democratic agreement. A transparent society is not interested in going halfway: in building a panopticon, or in building a transparent state that will simply crumble in the face of an opaque one. I’m not confident that a global order will manage to get over the hump.
I have some pretty big objections to some of the things you said on this, though. Mainly: the advantages of universal transparency for the majority of signatories are actually great.
Even just on the margin: Note that celebrity is a kind of radical transparency. Note that the best practitioners tend to want to publish their work because the esteem of releasing it outweighs whatever competitive advantage it might have won their company to not release it.
It would allow their field to progress faster as a result of more sharing, and of course it means that they can progress more safely. You assert that you consider it unlikely that you’ll live to see a catastrophe. I think that’s uninformed. Longtermist arguments work even if the chance is small and far off, but the chance actually isn’t small or far off. Ajeya Cotra found that biological anchors for intelligence set a conservative median estimate for the arrival of AGI at about 2050, and her personal median estimate is now 2040. Regardless, (afaict) most decisionmakers have kids and will care what happens to their grandkids.
It’s still going to be difficult to get every state that could harbor strong AI work to sign up for the unprecedented levels of reporting and oversight required to limit proliferation. I’m not hopeful that those talks will work out. I’ll become hopeful if we reach a point where the leaders in the field safely demonstrate the presence of danger beyond reasonable doubt (Demonstration of Cataclysmic Trajectory). At that point, it might be possible.