Fei-Fei Li's memoir ponders artificial intelligence ethics : Short Wave AI is popping up everywhere these days, from medicine to science to the Hollywood strikes. Today, with computer scientist and AI pioneer Fei-Fei Li, we dig deeper into the history of the field, how machines really learn and how computer scientists take inspiration from the human brain in their work. Li's new memoir The Worlds I See traces her move to the U.S. from China as a high school student and her coming-of-age with AI.

Host Regina G. Barber talks to Li about her memoir, where the field may be going and the importance of centering humans in the development of new technology.

Got science to share? Email us at shortwave@npr.org.

Trailblazing computer scientist Fei-Fei Li on human-centered AI


The cover of Fei-Fei Li's new memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. (Fei-Fei Li)

What is the boundary of the universe? What is the beginning of time?

These are the questions that captivated computer scientist Fei-Fei Li as a budding physicist. As she moved through her studies, she began to ask new questions — ones about human and machine intelligence.

Now, Li is best known for her work in artificial intelligence. Her memoir, The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI, came out this week. It weaves together her personal narrative with the history and development of AI.

Throughout her career, Li has advocated for "human-centered" AI. To her, this means creating technology inspired by human intelligence and biology, using AI to enhance human capabilities rather than replace them, and considering the potential impact on humans when developing new technology.

From physics to vision

Li's journey as a scientist began with physics. She was captivated by the way physicists questioned everything.

While reading works by famous physicists, she saw them asking questions not just about the atomic world, but about life and intelligence. An internship at the University of California, Berkeley further ignited her interest in the brain. She was intrigued by how layers of connected neurons could give rise to complex, high-level awareness and perception.

In particular, Li was fascinated by vision.

"Rather than bury us in the innumerable details of light, color and form, vision turns our world into the kind of discrete concepts we can describe with words," she writes in her book.

Li later learned about a field of AI called computer vision, in which scientists train computers to recognize and respond to objects. It's used for things like self-driving cars and X-rays. Li says the process is inspired by the human visual system — but instead of eyes and retinas, computers use cameras and sensors to capture images and data. Then, they need to make sense of that data.

To achieve this goal, computer scientists use something called a neural network, which Li says is also inspired by the human brain. While the brain's fundamental unit is a neuron, neural networks are made of millions of "nodes" stacked together in layers. Like neurons in the brain, these layers of nodes take in and process that data.
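Li's description of "nodes" stacked in layers can be illustrated with a toy example. The sketch below is not from her book and uses made-up weights; a real network would have millions of nodes and would learn its weights from data. The idea is only that each node computes a weighted sum of its inputs and applies a simple nonlinearity, and that the outputs of one layer become the inputs to the next.

```python
import math

def node(inputs, weights, bias):
    """One 'node': a weighted sum of its inputs, squashed to (0, 1) by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """A layer is many nodes, each reading the same inputs with its own weights."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers: raw "pixel" values in, a single score out.
# All numbers here are arbitrary, for illustration only.
pixels = [0.0, 0.5, 1.0]                      # stand-in for image data
hidden = layer(pixels, [[0.2, -0.4, 0.6],
                        [0.9, 0.1, -0.3]], [0.0, 0.1])
output = layer(hidden, [[1.0, -1.0]], [0.0])
print(output)  # a single value between 0 and 1
```

In a trained network, those weights would be adjusted automatically so that, for example, the output score is high when the input pixels depict a cat — which is the step Li points to as still poorly understood.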

The mystery of machine intelligence

Despite advances in the field, Li says there are still mysteries about how AI learns.

"Now everybody uses powerful AI products like Chat GPT," she says. "But even there, how come it can talk to you in human-like language, but it does stupid errors in math?"

Li says this generation of AI models is trained on data from across the internet, but how all of that data is processed and how models make decisions is still unknown.

To illustrate this point, she rhetorically asks how computers see, "Because what you get in a photo are just lights and colors and shades — yet you read out a cat."

These questions will only continue to grow as the use of AI becomes more widespread and more researchers enter the field.

Keeping AI ethical

Mystery aside, Li says AI can be used for bad or good. In order to ensure it's used for good, she says scientists must commit to exploring potential problems with AI, like bias.

One solution, she thinks, is for society to start coming up with ways to regulate the technology.

"The biggest issue of today's AI is that the technology is developing really fast, but the governance model is still incomplete. And in a way, it's inevitable," she says. "I don't think we ever create governance models before a technology is ready to be governed. That's just not how our society works."

Another solution, she says, is to use AI to enhance human work rather than replace it. This is one reason why she founded the Stanford Institute for Human-Centered Artificial Intelligence and why she thinks the future of AI should include both scientists and non-scientists from all disciplines.

"We should put humans in the center of the development, as well as the deployment applications and governance of AI," Li says.

Got science to share? Email us at shortwave@npr.org.

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

Today's episode was produced by Rachel Carlson. It was edited by Berly McCoy. Brit Hanson checked the facts. Patrick Murray was the audio engineer.