From the perspective of a PhD student in quantum computing, I would say that one should not worry excessively about quantum computing breaking cryptography, mainly for two reasons:
1. As pointed out in other comments by RavenclawPrefect and beth, so-called “post-quantum” cryptographic algorithms are being developed that should not be vulnerable to quantum attacks (NIST is running a standardization process to select the future standard). I am not particularly knowledgeable about this topic, but hash-based and lattice-based approaches seem promising. These are still the usual kind of mathematical public-key cryptography, just built on harder problems.
2. Even in the highly unlikely situation where the above point fails, quantum mechanics itself offers a solution: quantum cryptography is theoretically invulnerable to almost any kind of attack. I say theoretically because quantum devices are not perfect, and an adversary may be able to exploit their imperfections. The most famous quantum key distribution protocols are BB84 (the first one to be discovered) and Artur Ekert’s E91, which is based on Bell inequalities. To the best of my knowledge, the research frontier is now “device-independent quantum cryptography”, where security is supposed to hold even if the device comes from a supplier you do not trust. This path to secure cryptography is a more physical one: it simply gives you a way to perform private key distribution safely.
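To give a flavor of how BB84 works, here is a minimal classical simulation of its sifting step (a sketch only: no real qubits, no eavesdropper, just the basis-matching logic that lets Alice and Bob end up with a shared key):

```python
import secrets

def bb84_sift(n=1000):
    """Simulate the sifting step of BB84 with no eavesdropper."""
    # Alice picks a random bit and a random basis (0 = rectilinear, 1 = diagonal)
    # for each qubit she sends.
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]
    # Bob measures each incoming qubit in his own randomly chosen basis.
    bob_bases = [secrets.randbelow(2) for _ in range(n)]
    # Quantum mechanics predicts: when bases agree, Bob recovers Alice's bit;
    # when they disagree, his outcome is a fair coin flip.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: they publicly compare bases (not bits!) and keep only the
    # positions where the bases matched.
    key_a = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_b = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_a, key_b
```

On average half the positions survive sifting. The security argument enters afterwards: an eavesdropper who measures in random bases would corrupt roughly 25% of the sifted bits, which Alice and Bob detect by sacrificing and comparing a random sample of their key.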
In conclusion, I do not expect QC to make cryptography unfeasible; it seems more likely that cryptography will become even harder to break.
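To make the hash-based approach from point 1 a bit more concrete, here is a toy Lamport one-time signature, whose security rests only on the hash function being hard to invert (a sketch; real post-quantum schemes like SPHINCS+ are far more involved, and a Lamport key must never sign more than one message):

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(n_bits=256):
    # Private key: two random secret preimages for every bit of the message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n_bits)]
    # Public key: the hashes of those preimages.
    pk = [(H(zero), H(one)) for zero, one in sk]
    return sk, pk

def _digest_bits(message: bytes, n_bits: int):
    digest = H(message)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]

def sign(message: bytes, sk):
    # Reveal exactly one preimage per digest bit; reuse would leak the key.
    bits = _digest_bits(message, len(sk))
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(message: bytes, sig, pk) -> bool:
    bits = _digest_bits(message, len(pk))
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, bits)))
```

Quantum computers give no known shortcut for inverting a well-designed hash function (Grover's algorithm only halves the effective security level), which is why constructions like this survive in the post-quantum setting.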
That said, in my PhD I am actually trying to figure out whether there are interesting areas of research where QC may be useful for AI Safety. An argument in favor is that there is a research topic called Quantum ML, still in its infancy. On the other hand, AI safety may not require especially compute-intensive algorithms so much as the right approaches (perhaps also the higher level of abstraction you would have in QC). I mention this because I would be very interested in hearing from anyone who would like to work on similar topics (because they have this particular background) and/or who has insights or ideas that could help.
Thanks!