GenAI and a Skeptical Hypothesis World

Epistemology is the philosophical inquiry into knowledge. It addresses questions like “what are the necessary and sufficient conditions for a person to know a proposition P?” (an account of knowledge) and “what does it mean for a person to be justified in believing P?” A standard problem for an account of knowledge is a Skeptical Hypothesis. The most famous skeptical hypotheses are Descartes’ deceiving demon (DDD) and the brain in a vat (BIV). DDD is a scenario where all your sensory perceptions are just as they would be if you were a person with a body in the world and correctly functioning senses, but instead of those senses receiving information from the world, you are actually a disembodied mind receiving deceptive perceptions piped in directly by the demon. BIV is a similar scenario, except instead of a demon it is some unspecified source that pipes sensory input directly to your brain, which is not in a body but in a vat. These skeptical hypotheses are not meant to be live possibilities for how the world might be, but rather are meant to undermine accounts of knowledge. If I cannot know that I am NOT a BIV, and my being a BIV is inconsistent with my being a brain in a body with functioning eyes and ears perceiving the external world, then I cannot know that I am a brain in a body correctly perceiving the external world. So knowledge of the external world is not possible.

Epistemologists tend to respond to this problem with a position called Fallibilism: I can know P even if I haven’t ruled out every case in which not-P might hold (i.e., the Skeptical Hypotheses). We can justify this position in lots of different ways, including through pragmatic considerations or by properly ignoring hypotheses that are not relevant in the context. So we can retain knowledge.

However, we currently face a situation that is increasingly at risk of becoming a sort of actual Skeptical Hypothesis, with the proliferation of highly realistic generative AI. Far from the rarefied heights of academic epistemology, this technology, combined with the mediation of our experience of the world through screens, makes deep-fake-based skeptical hypotheses not only contextually relevant, but in many situations actually the case. Just as the Skeptical Hypotheses of epistemology threaten to undermine the possibility of empirical knowledge, Deep Fake Hypotheses seem to threaten even a Fallibilist account of knowledge. The result is a rapidly deteriorating epistemic situation for all of us. We cease to feel like we can know things, and we stop trusting information channels as, one by one, they get exploited by deep fakes and become susceptible to relevant skeptical hypotheses. I am not sure what the downstream consequences of this look like, other than power grabs by those who are able to successfully manipulate and maneuver in this environment. Things are not good, though.

A solution to this takes the form of a system. In this system, it is the norm for everyone to have a public/private key pair associated with each of their various identities, some of which are tied to them as individuals, some to a company, some to some other type of organization. When members of this system release information into the ether, they cryptographically sign it. The signature serves both to prove that they are the source and to prove that the content has not been manipulated.
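To make the mechanics concrete, here is a minimal sketch of what signing released content could look like, assuming Ed25519 keys and Python’s cryptography library; the library choice, key type, and example message are illustrative assumptions, not a description of any particular platform.

```python
# Minimal signing sketch, assuming Ed25519 keys and Python's "cryptography"
# library; illustrative only, not a specification of the proposed system.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# An identity (an individual, a company, an organization) holds a private key
# and publishes the corresponding public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The content being released into the ether, as bytes.
message = b"Statement released by Example Org on 2024-05-01"

# Signing binds the content to the identity: changing a single byte of the
# message, or signing with a different key, produces a signature that will
# not verify against the published public key.
signature = private_key.sign(message)
```

The useful property is that verification needs only the public key, so the check can live in whatever channel delivers the content rather than with the publisher.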

The problem is that generative AI capabilities are advancing faster than a Schelling-point norm around public key infrastructure can form. No single tool or platform for provisioning and managing these keys in a trusted way has reached that equilibrium, and people mostly do not want to bother learning about this stuff.

But if we had such a platform or system, and it was the default to use it, people would use it, and it would solve our impending problem. Early websites were vulnerable because they did not use encryption by default; when Google and Firefox and Microsoft made HTTPS the default and pushed for the widespread use of certificates, things became much more secure, although obviously not perfectly secure. Now we know not to enter any sensitive information on websites that only use HTTP, and we are cautioned against using them in general. The same sort of equilibrium will emerge once there is a default use of public keys for information broadcasting: items that lack a cryptographic signature from a known source will be untrusted, because they might be from malicious actors.
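On the consuming side, that default could look something like the sketch below: verify the signature against a publisher’s known public key, and treat anything unsigned or failing verification as untrusted. The helper function and its names are hypothetical, just to illustrate the trust rule.

```python
# Illustrative consumer-side trust rule: accept content only if it carries a
# valid signature from a key we already know. The function is hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_trusted(content: bytes, signature: bytes | None,
               known_keys: list[Ed25519PublicKey]) -> bool:
    """Return True only if a known identity signed exactly this content."""
    if signature is None:
        # Unsigned items are untrusted by default, much like plain HTTP today.
        return False
    for key in known_keys:
        try:
            key.verify(signature, content)  # raises InvalidSignature on failure
            return True
        except InvalidSignature:
            continue
    return False
```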

At the moment, it is a collective action problem. Who will stand this up and manage it? Who will coordinate with the technology companies at the backbone of the Internet so that it is easy to use by default? I think it’s a problem worth throwing a few EA Bucks at.
