Why Nobel winners are nervous about AI

The 2024 Nobel Prize in Physics has been awarded to two scientists for their groundbreaking contributions to artificial intelligence, the latest buzzword in computer technology.

When scientists were trying to work out how computers could learn from the data given to them and produce output the way a human brain would, they took inspiration from neural networks – the networks of nerve cells in the human brain. Thus came the idea of artificial neural networks. This, in turn, led to what we call machine learning – learning by computing machines.
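To make the analogy a little more concrete, here is a minimal sketch of a single artificial neuron in Python. The numbers are made up purely for illustration: the neuron weighs its inputs, adds them up, and "fires" more or less strongly depending on the total, loosely mirroring how a nerve cell responds to signals from other cells.

import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash the result into a "firing strength" between 0 and 1 (sigmoid).
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative, hand-picked numbers only.
print(artificial_neuron([0.5, 0.2, 0.9], [0.8, -0.4, 0.3], bias=0.1))

A real artificial neural network simply connects many such neurons in layers, and learning amounts to adjusting the weights.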

Only when machines are able to learn can they produce artificial intelligence. For example, when machines – that is, computers – were given a lot of data and prompted to learn how humans compose sentences out of words, that led to language processing. When machines learnt to find patterns in human faces – how faces differ from one another and how facial muscles, skin, eyes, nostrils and lips change with emotions – they could recognize faces and facial expressions. When machines were given enough data on what is found in images, such as hills, trees, humans, cats and fish, they learnt to recognize different types of images and even to compose new images from text inputs.

Machine learning has many aspects, but two of them are essential for any kind of machine learning: making sense of data, and learning from mistakes. The two Nobel laureates, John J. Hopfield and Geoffrey E. Hinton, are pioneers of these two pillars respectively.
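As a rough illustration of the second pillar, learning from mistakes, here is a toy Python sketch. It is my own simplification, not the laureates' actual methods: the program repeatedly compares its prediction with the right answer and nudges a single weight to shrink the error, which is the basic idea behind training artificial neural networks.

# Toy example: learn the rule y = 2 * x by correcting mistakes.
# All numbers here are hypothetical and chosen only for illustration.
weight = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs

for epoch in range(50):
    for x, target in data:
        prediction = weight * x
        error = target - prediction   # the "mistake"
        weight += 0.05 * error * x    # nudge the weight to reduce the mistake

print(round(weight, 3))  # ends up close to 2.0

Run it and the weight settles near 2.0: the machine has, in a very small way, learnt the rule from its own errors.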

It is interesting that, though the two scientists are pioneers of artificial intelligence, they are nervous about the possible misuse of the new technology, or about it getting out of hand. Let me end this report with two recent quotes from them:

“That's the question AI is pushing. Despite modern AI systems appearing to be absolute marvels, there is a lack of understanding about how they function, which is very, very unnerving,” Hopfield says.

Hinton is equally concerned. “I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control,” he told reporters after the Nobel announcement.