Nobel Prize in Physics awarded for training artificial intelligence

Professor John Hopfield of Princeton University (USA) and Professor Geoffrey Hinton of the University of Toronto (Canada) have been awarded the 2024 Nobel Prize in Physics “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”

As you know, there is no Nobel Prize in mathematics, and research in artificial intelligence is, at its core, a mathematical field. The Nobel Committee nevertheless found a loophole to celebrate the achievements of those who, more than forty years ago, laid the foundations of one of the most dynamically developing research areas of our time (Hopfield is now 91 years old, Hinton 76). Both of this year’s laureates used physical laws to develop techniques that became the basis of today’s powerful machine learning, a key tool in the development of artificial intelligence.

John Hopfield, professor at Princeton University (USA) and creator of a neural network that works as an associative memory (photo: Princeton University).

Geoffrey Hinton, professor at the University of Toronto (Canada) and “godfather” of modern deep learning with large neural networks, speaking at the University of Toronto in 2023.

Schematic representation of a Hopfield network. Circles are neurons with values 0 (black) and 1 (white); lines show the connections between them.

The creation of artificial neural networks is an attempt to simulate the functioning of the human brain. A network consists of a large number of nodes, or neurons, each of which receives signals (values) from other neurons and passes them on along connections; every connection is characterized by a number that determines its strength. The goal of machine learning is to choose these numbers so that the network solves the desired problem.
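To make this concrete, here is a minimal sketch in Python of a single artificial neuron (the names and numbers are illustrative, not from the article):

```python
import numpy as np

# A toy artificial neuron: it sums the signals arriving from other
# neurons, each scaled by the strength (weight) of its connection,
# and outputs 1 if the total reaches a threshold, otherwise 0.
def neuron_output(signals, strengths, threshold=0.0):
    total = np.dot(signals, strengths)
    return 1 if total >= threshold else 0

signals = np.array([1.0, 0.0, 1.0])     # values from three other neurons
strengths = np.array([0.5, -1.0, 0.8])  # connection strengths ("weights")
print(neuron_output(signals, strengths))  # -> 1, since 0.5 + 0.8 >= 0
```

Training a network means adjusting numbers like `strengths` until the outputs match the desired answers.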

John Hopfield developed a structure that can act as an associative memory, capable of storing and retrieving images and other types of data patterns. This model is now better known as the Hopfield network. When the network analyzes images, its neurons can be thought of as pixels. Hopfield described the overall state of his network with an expression equivalent to the energy of a spin system studied in physics; this energy is computed from a formula that involves all the neuron values and all the connection strengths between them.
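Written out, the formula is the standard spin-system energy E = −½ Σ wᵢⱼ·sᵢ·sⱼ (summed over all pairs i, j), where the sᵢ are the neuron values and the wᵢⱼ the connection strengths. A minimal sketch, using the common convention of ±1 neuron values in place of the article’s 0/1:

```python
import numpy as np

# Energy of a network state s (a vector of +/-1 neuron values) with a
# symmetric weight matrix W: E = -1/2 * sum over i,j of W[i,j]*s[i]*s[j].
def hopfield_energy(s, W):
    return -0.5 * s @ W @ s

# Hebbian storage: strengthen the connection between every pair of
# neurons that are active together in a stored pattern, so that each
# stored pattern becomes a low-energy state of the network.
def store(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W
```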

The Hopfield network is programmed by feeding it an image whose pixels assign each neuron the value black (0) or white (1). The connection strengths are then adjusted, using the energy formula, so that the stored image has the lowest energy; the network thereby saves and remembers this image. When a different image is then fed into the network, a simple rule is applied: go through the neurons one by one and check whether the network’s energy would be lower if the value of the neuron in question were changed. If, for example, the energy decreases when a black pixel is replaced by a white one, the pixel changes color. This procedure continues until no further improvement can be found. In effect, this follows the familiar principle of physics that equilibrium corresponds to a minimum of energy, so the network settles into one of its equilibrium states. At that point the network often reproduces the original image it was trained on, or a close approximation to it. In this way the network can solve recognition problems, for example recognizing faces: it simply finds the stored image most similar to the one it was given.
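That retrieval procedure is easy to sketch in code, continuing the example above (it reuses numpy and the store function defined there; with ±1 values, setting a neuron to agree with the summed signal it receives is exactly the flip that lowers the energy):

```python
def recall(s, W, max_sweeps=100):
    # Visit the neurons one by one and flip any value whose flip lowers
    # the energy; stop once a full sweep changes nothing (equilibrium).
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            field = W[i] @ s           # summed input to neuron i
            new_val = 1 if field >= 0 else -1
            if new_val != s[i]:
                s[i] = new_val         # this flip lowers the energy
                changed = True
        if not changed:
            break
    return s

# Store one 4-pixel pattern, then feed the network a corrupted copy.
pattern = np.array([1, -1, 1, -1])
W = store([pattern])
noisy = np.array([1, 1, 1, -1])        # second pixel flipped
print(recall(noisy, W))                # -> [ 1 -1  1 -1 ], the original
```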

Geoffrey Hinton took the Hopfield network as the starting point for a new network that works by a different method: the Boltzmann machine. It can learn on its own to recognize characteristic elements in a given type of data. Hinton drew on the laws of statistical physics; the method takes its name from the Boltzmann distribution, which governs the probabilities of the states the network samples during training. A Boltzmann machine can be used to classify images, to pick out particular elements in them, or to generate new examples of the kind of patterns it was trained on. This approach laid the groundwork for deep learning, on which today’s large, multilayer artificial neural networks are built, and it is to this approach that we owe the remarkable successes artificial neural networks have achieved over the past decade. Hinton is sometimes even called the “godfather of deep learning.”
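The key difference from the Hopfield update can be sketched as follows (a minimal illustration with ±1 units and no hidden neurons; a real Boltzmann machine also has hidden units and a rule for learning the weights). Instead of deterministically choosing the lower-energy value, each neuron is set to +1 with a probability given by the Boltzmann distribution over its two possible states:

```python
import numpy as np

def boltzmann_step(s, W, T=1.0, rng=None):
    # One stochastic sweep. The two states of neuron i differ in energy
    # by 2*field, so the Boltzmann distribution assigns the state +1
    # the probability 1 / (1 + exp(-2*field/T)); at temperature T > 0
    # the network can climb out of shallow energy minima.
    rng = rng or np.random.default_rng()
    s = s.copy()
    for i in range(len(s)):
        field = W[i] @ s
        p_on = 1.0 / (1.0 + np.exp(-2.0 * field / T))
        s[i] = 1 if rng.random() < p_on else -1
    return s
```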

Let us note another important contribution of Hinton to the development of artificial neural networks. In the 1960s and 1970s, no one yet knew how to train multilayer networks, which put many problems out of reach, and for a time interest in the field even waned. Then, in the 1980s, the backpropagation algorithm for training multilayer neural networks was developed, which led to an explosion of activity in the field. Geoffrey Hinton was one of the developers of this method.
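For the curious, here is a minimal sketch of backpropagation: a toy two-layer network learning XOR, a problem a single-layer network famously cannot solve (all names and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: the target output is 1 exactly when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 0.5
for step in range(5000):
    # Forward pass: compute each layer's output in turn.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the
    # layers with the chain rule (squared-error loss, sigmoid units).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update of all weights and biases.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```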

Source: www.nkj.ru