Experts in cryptography remain unconvinced about the applications of quantum computers in their field. Opening for the RSA Conference 2021, a panel of academics, cryptographers, and security professionals called into question the feasibility of the technology.
They also touched on the implications of machine learning in a cybersecurity context, and the topic of high-profile supply chain hacks hitting the industry.
Technologists and even governments are animated about the potential for quantum computing to offer new levels of computational speed and cryptography capabilities.
But the science of these future generations of computers relies on the murky laws of quantum mechanics. Consequently, quantum computing has potentially huge though largely hypothetical implications for those with vested interests in encryption technology, such as major cybersecurity player and conference organiser RSA.
Adi Shamir, who is Borman Professor of Computer Science at the Weizmann Institute of Science in Israel and one of the founders of RSA, expressed scepticism about the technology.
“This year, progress in quantum computing has been two steps forward, one step back,” the cryptography veteran said. Just a short time ago, questions arose over the existence of the fundamental quantum object on which Microsoft based all its quantum computing research, he explained.
After spending the past ten years pursuing Majorana fermions as the fundamental qubit of its quantum computers, Microsoft hit a setback when the foundational physics paper had to be retracted. The qubit is to quantum computing what the bit is to conventional computing.
It is no longer clear, Shamir explained, whether Majorana fermions even exist, or whether Microsoft will be able to proceed in the way it has done for a decade.
“There are very strong claims being made, and then they are backpedalled or are diluted,” Shamir remarked of the technology generally.
Ron Rivest, Professor at the Massachusetts Institute of Technology and another of RSA’s founders, expressed amazement at the level of commitment to a technology that remains unproven.
“It is astonishing to me how much energy is going into the commercialisation of technology that doesn’t yet exist,” the MIT professor said, adding that the amount of money being invested in the nascent technology is incredible.
Rivest centred his position on two questions: can a quantum computer be built at scale that stays coherent long enough to complete a useful computation? And are there useful applications for this technology, even if it can be built?
“And I think the answers so far are ‘not clear,’ and ‘maybe’ — so we’ll see,” Rivest said.
Ross Anderson, Professor of Security Engineering at Cambridge University and Edinburgh University, remains similarly unconvinced.
“As far as quantum cryptography is concerned, I’m entirely unimpressed, because all you can do is rekey a line in crypto — and we’ve known how to do that for 40 years,” he said.
Proofs based on quantum entanglement are unconvincing because they only work in certain interpretations of quantum mechanics, he said.
Anderson is also sceptical of quantum computing itself.
“I’m not surprised that nobody has seen any real quantum speed-up yet,” he said, adding that he would be surprised if that happened, though “of course, it would be great if it does.”
Machine learning and adversarial environments
Discussion also moved to machine learning and the implications of neural networks for cybersecurity.
“At a high level, we have the maxim in security that complexity is the enemy of security,” explained Adi Shamir, adding that the more complicated you make a system, the more vulnerable it becomes to all kinds of faults and penetrations.
“Machine learning is nothing but complicated,” he said.
Carmela Troncoso, Assistant Professor at the Swiss Federal Institute of Technology Lausanne, focuses her research on machine learning, privacy evaluation, and privacy-preserving systems.
Her investigations point to an issue with the underlying constituents of so-called “trustworthy” machine learning: robustness, fairness, explainability, and privacy preservation.
“There are more and more results that indicate that these four dimensions may not be compatible. So, when you put more privacy you have less robustness, or you put more robustness, you end up losing privacy,” Troncoso explained.
A lot of companies are diving into privacy-preserving machine learning, Troncoso said, but the question then becomes: what are these companies going to do with the data?
“What happens when the business model is not aligned with societal interest?” the researcher remarked.
In addition to the privacy and explainability issues, a set of robustness problems emerges when people start using neural networks to do real work in safety-critical systems, such as health care or vital infrastructure, explained Ross Anderson. This is because neural networks are very highly optimised.
An adversary can try to undermine these optimised systems by crafting inputs that force machine learning models to take as long as possible, or to burn as much energy as possible.
“We found that, in particular, natural language processing systems, which are getting everywhere nowadays — are very, very fragile to this kind of attack,” Anderson said.
Natural language processing systems apply machine learning to human language; automated translation platforms are one example. Anderson and his colleagues found that if foreign characters and symbols are used as input to these systems, “it typically sends them haywire.” Machine learning is brittle when faced with unfamiliar or deliberately distorted input.
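The brittleness Anderson describes can be sketched with a toy example. The filter and the input strings below are hypothetical illustrations, not code from the panel or from Anderson’s research: a naive substring-based keyword filter that behaves correctly on clean text is silently bypassed when an invisible zero-width space (U+200B) is inserted mid-word, even though the two inputs look identical on screen.

```python
# Hypothetical sketch: a naive text filter defeated by an invisible
# Unicode character. Illustrative only — not the researchers' code.

def naive_keyword_filter(text: str, banned: tuple = ("attack",)) -> bool:
    """Toy content filter: flags text containing any banned word."""
    return any(word in text.lower() for word in banned)

plain = "launch the attack at dawn"
# Visually identical, but "attack" is split by U+200B (zero-width space).
perturbed = "launch the at\u200btack at dawn"

print(plain == perturbed)               # False: the strings differ
print(naive_keyword_filter(plain))      # True: the clean text is flagged
print(naive_keyword_filter(perturbed))  # False: the filter is bypassed
```

Real NLP pipelines tokenise input rather than matching substrings, but the same class of invisible-character and homoglyph perturbations can fragment tokens in an analogous way, which is one reason such systems can be sent “haywire” by unfamiliar input.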
The academic suggested that these issues might play out in very complicated ways if nation states start learning how to undermine algorithms that have control of armed drones or military systems.