The continuous development and global spread of digital technology have driven advances in science that can fairly be described as revolutionary.
One of the most recent of these developments concerns the way in which artificial intelligence “learns”. Within a few months it became clear that convolutional and deep neural networks – the algorithmic constructs used in machine learning – could drastically reduce error rates in speech and object recognition.
This was followed by a “revolution” in data volume, driven mainly by the enormous growth of social media (some 2 billion images are posted to Instagram and Facebook daily) and by the resulting ability to “train” the learning algorithms of artificial intelligence.
In recent years, another “revolution”, this time in computing power, has made it possible for almost anyone to manage large volumes of data, thanks to processors that perform a trillion calculations per second at a cost of under 1,000 euros each.
Most of the electronic components we use today were not designed to run the energy-hungry algorithms behind neural networks. Processing units – whether general-purpose or application-specific – are far more powerful than before and are widely used for machine learning. Nevertheless, the underlying electronic architecture remains the same.
The processing functions are still separate from the memory functions, and shuttling data between the two wastes energy. In fact, this energy loss will only grow as today's systems, with their millions of artificial neurons, keep evolving toward the roughly 100 billion neurons of the human brain!
This realization led to the idea of developing an artificial nano-neuron that mimics the function of the human brain, while being 10,000 times more energy efficient.
It is an idea based on the pioneering work of Nobel laureate Albert Fert in the field of spintronics (a technique that exploits the quantum-mechanical spin of the electron to store information).
Using the spin of the electron – a property analogous to magnetism – the research team combines data storage and processing, using electric currents instead of a magnetic field.
MELVIN: an example of combining AI with quantum experiments
One of the best-known examples of AI assisting quantum experiments comes from Krenn. He wrote a computer program that took an experimental setup as input and calculated its result. He then extended it to include the building blocks of the experiments in its calculations: lasers, nonlinear crystals, beam splitters, phase shifters, holograms, and so on. The program searched a wide range of layouts, randomly mixing the different blocks and combining them, running the calculations, and reading off the result. This was MELVIN. Within hours the program found a solution that the scientists – three experimenters and one theorist – had been unable to find for months.
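The search strategy described above can be illustrated with a toy sketch. This is not Krenn's actual code: the "building blocks" here are hypothetical functions acting on a plain number, standing in for optical elements acting on a quantum state, and the names and target are invented for illustration. Only the overall loop – randomly mix blocks, simulate, keep a setup that produces the desired result – mirrors the idea in the text.

```python
import random

# Hypothetical building blocks, each transforming a simple numeric "state"
# (a stand-in for the quantum state a real simulator would track).
BLOCKS = {
    "beam_splitter": lambda s: s * 2,
    "phase_shifter": lambda s: s + 1,
    "hologram":      lambda s: s * 3,
    "crystal":       lambda s: s - 1,
}

def simulate(setup, state=1):
    """Apply each block of the setup in order and return the final state."""
    for name in setup:
        state = BLOCKS[name](state)
    return state

def search(target, max_len=4, trials=10_000, seed=0):
    """Randomly mix blocks; return the first setup whose simulated
    result matches the target, or None if no trial succeeds."""
    rng = random.Random(seed)
    names = list(BLOCKS)
    for _ in range(trials):
        setup = [rng.choice(names) for _ in range(rng.randint(1, max_len))]
        if simulate(setup) == target:
            return setup
    return None

setup = search(target=12)
print(setup, "->", simulate(setup))
```

The real MELVIN, of course, simulates quantum optics rather than arithmetic, and checks whether the resulting state has the desired entanglement properties; the random mixing of a fixed toolbox of elements is the part this sketch retains.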
Krenn, collaborating with colleagues in Toronto, has refined these machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output. While it would take Krenn and his colleagues days or even weeks to understand MELVIN’s meanderings, they can almost immediately figure out what THESEUS is saying.
Quantum physicist Nora Tischler believes that such an experiment would not have been possible without the algorithm. Aephraim Steinberg, an experimentalist at the University of Toronto, agrees: it is a generalization that no human had come up with in decades and probably never would have.
In their first attempts to simplify and generalize what MELVIN had discovered, Krenn and his colleagues realized that the solution resembled mathematical structures called graphs. MELVIN first produced a complex graph of this kind and then performed mathematical operations on it, making the quantum state easier to calculate, although the method is hard for humans to understand. Its successor, THESEUS, therefore produces much simpler graphs, stripping out the non-essential vertices and edges (up to the point where further simplification would destroy the solution). This makes it far easier for scientists to grasp the meaning of a solution produced by the AI.
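The pruning step above – remove vertices and edges until any further removal would destroy the solution – can be sketched with an assumed toy criterion. This is not THESEUS's actual algorithm: here "the solution survives" simply means two marked vertices stay connected, whereas in the real system it means the graph still encodes the desired quantum state. The greedy delete-and-check loop is the part the sketch illustrates.

```python
def connected(edges, a, b):
    """Check whether vertices a and b are connected via the given edges."""
    seen, stack = {a}, [a]
    while stack:
        v = stack.pop()
        for x, y in edges:
            for u, w in ((x, y), (y, x)):
                if u == v and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return b in seen

def simplify(edges, a, b):
    """Greedily drop each edge whose removal keeps a and b connected,
    stopping only when every remaining edge is essential."""
    edges = list(edges)
    for e in list(edges):
        trial = [x for x in edges if x != e]
        if connected(trial, a, b):
            edges = trial
    return edges

# A small graph with redundant connections between vertices 0 and 3.
graph = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
core = simplify(graph, 0, 3)
print(core)
```

After the loop, every edge left in `core` is essential: deleting any one of them disconnects the marked vertices, which is the toy analogue of "further simplification would destroy the solution".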