Your independent source for Harvard news since 1898

Science

Neural-Network Pioneer Yann LeCun on AI and Physics

9.18.19

Yann LeCun
Photograph by Marlene Awaad/Bloomberg via Getty Images

The machine-learning revolution has touched every area of science and engineering, transforming speech recognition, medical imaging, and even particle physics. Most of the recent interest comes from breakthroughs in artificial neural networks: mathematical frameworks inspired by the brain’s structure that enable computers to recognize complex patterns in data sets. Neural networks are used to automate many kinds of tasks, from finding a familiar face in a crowd of people to detecting malignant tumors in an MRI. Time-consuming work that once required a team of specialists can now be done instantaneously.

These revolutionary tools burst into prominence only in 2012, when Alex Krizhevsky, a then-unknown graduate student at the University of Toronto, beat researchers around the world in a competition to classify images. He used an unusually big neural network—something most poorly funded labs would shy away from, because such networks simply require too much time to train (on the scale of months or years). But Krizhevsky used a special kind of computer chip that reduced the learning process to less than a week. He arrived at his groundbreaking algorithm after some trial and error, and his paper now has more than 40,000 citations.

Fourteen years before Krizhevsky lit the powder keg, though, a scientist trained in the same Toronto laboratory laid the intellectual groundwork that made the revolution possible. Yann LeCun, then a researcher at Bell Labs, invented the modern “convolutional neural network” in 1998. He noticed that images contain a lot of redundant information: if one pixel shows part of a white shirt, for example, then its neighbors are also probably white. LeCun’s convolutional neural network takes advantage of this natural structure, combining it with insights from calculus to create an algorithm that can train itself to recognize any object in an image. During the next decade, his method gradually won acceptance. But it wasn’t until Krizhevsky’s algorithm shocked the world that LeCun became a legend: last year alone, he won the Turing Award (the Association for Computing Machinery’s top honor) and earned almost 30,000 citations.
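The core idea—that neighboring pixels are correlated, so one small pattern detector can be slid across the entire image with the same shared weights—can be sketched in a few lines. The toy below is illustrative only (plain NumPy, with a single hand-chosen edge-detecting filter); in a real convolutional network, the filter weights are learned from data rather than written by hand:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide one small filter across the image, reusing the same
    weights at every position (the 'shared weights' idea)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]  # look only at a local neighborhood
            out[i, j] = np.sum(patch * kernel)
    return out

# A tiny "image": a dark region next to a bright region (like a white shirt
# against a dark background).
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A hand-crafted vertical-edge filter; a trained network discovers
# filters like this one on its own.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

response = convolve2d(image, kernel)
print(response)  # the response is strongest where dark meets bright
```

Because every pixel's neighborhood is probed with the same few weights, the detector needs far fewer parameters than a network that treats each pixel independently—exactly the redundancy LeCun exploited.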

On Monday, LeCun appeared at Harvard to deliver the first of his Morris Loeb Lectures, sponsored by Harvard’s physics department. Most Loeb lecturers are important physicists (last semester, Nobel laureate Donna Strickland came to speak). LeCun is not, but no one seemed to care: this year the series had to be moved into the Science Center because of overwhelming interest—reflecting the breadth of faculty and student engagement in computer science and its increasing applications across a dizzying array of fields of inquiry. At LeCun’s first talk, every seat was filled and crowds packed into the back of the lecture hall.

LeCun moved quickly between sophisticated mathematics and gentle humor, usually at the expense of physicists (physics, he said, is just fiddling with fundamental constants until the universe pops out). He outlined a few breakthroughs in physics that have been made possible by his work, pointing to a few discoveries made at Harvard. For example, professor of physics Matthew Schwartz has used neural networks to identify subatomic particles in supercolliders from the fragments they leave behind; the computer learns high-energy physics on its own in a matter of days, leaving in its dust the graduate students who have suffered through Schwartz’s legendary quantum field theory class.

But many scientists have an uneasy relationship with artificial intelligence, or AI. A key problem is interpretability: neural networks make decisions in mysterious ways. They far outperform humans in games people have spent lifetimes mastering, like Go, so machines must “think” in alien ways. Unlike industry, which cares more about getting technology to market than understanding its subtleties (if the car drives itself without crashing, who cares how it works?), academic scientists are dedicated to decoding the universe. Perhaps a neural network could find a pattern in physical data no human ever noticed—but if researchers don’t understand the pattern, then no new physics can be learned. It’s one thing to notice that objects fall, but it still takes a Newton to discover gravity. LeCun downplayed this problem in his first lecture, arguing that technology usually runs ahead of science: Galileo, he pointed out, used the telescope long before scientists understood optics. Modern researchers seem to have taken this approach with neural networks, using them as a tool to process the massive amount of data that emerges from equipment like particle accelerators, even as their inner workings remain hidden.

LeCun focused his lecture not on the weaknesses of AI, but on how physicists could help computer scientists overcome current challenges in research. Neural networks, for example, are terrible at handling uncertainty. If asked to predict the next frame in a movie, a neural network tends to proffer a blurry image; an actor or camera could move in several different directions, but the computer doesn’t know which one, so the network smears together several possible futures to create a distorted picture. Physicists, on the other hand, have developed sophisticated methods for handling uncertainty. In the nineteenth century, a scientist named Ludwig Boltzmann realized that focusing on the energy of a system was enough to understand many of its statistical properties. Knowing the temperature of a gas, for example, tells you the odds that any atom has a given velocity: one number contains information about the uncertain motion of many particles. Perhaps neural networks could eventually generalize the idea of energy to solve a much wider set of problems.
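Boltzmann's idea—that energy alone fixes the odds of a state—can be sketched numerically. In the hedged toy below, a handful of hypothetical discrete energy levels are assigned probabilities proportional to exp(−E/T) (units chosen so Boltzmann's constant is 1); a single number, the temperature, then determines the entire distribution:

```python
import numpy as np

def boltzmann_probs(energies, temperature):
    """Probability of each state is proportional to exp(-E / T),
    with Boltzmann's constant set to 1. One number (the temperature)
    fixes the odds of every state."""
    weights = np.exp(-np.asarray(energies, dtype=float) / temperature)
    return weights / weights.sum()

# Hypothetical, evenly spaced energy levels for illustration.
energies = [0.0, 1.0, 2.0, 3.0]

cold = boltzmann_probs(energies, temperature=0.5)
hot = boltzmann_probs(energies, temperature=5.0)

# At low temperature nearly all probability sits in the lowest-energy
# state; at high temperature the states become almost equally likely.
print(cold.round(3))
print(hot.round(3))
```

This is the sense in which knowing a gas's temperature tells you the odds of any atom's motion: the energy function compresses the uncertainty of many particles into one distribution.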

Although neural networks have revolutionized science, they have a darker side as well. The same convolutional neural networks that speed up cancer diagnoses are used by the Chinese government to oppress its Uighur minority. LeCun did not wade into the ethical consequences of his work, even as public scrutiny of AI rises. A Loeb lecture that delves into morality would be unusual—but so is Yann LeCun, and the tool he created. 

As neural networks approach a decade in the spotlight, they occupy an increasingly strange place in intellectual life. They seem to have infinite promise in science, but researchers still don’t understand how they work. They create a wide array of interesting research questions, but it is unclear what the consequences of those questions will be. Will LeCun’s technology be remembered as a tool that helped cure cancer and crack the laws of the universe, or one that supercharged racial profiling? LeCun’s lecture left listeners with plenty of questions, many of which will take years to answer.
