Google's Go-playing software defeated a human champion.
An artificially intelligent computer system built by Google has just beaten the world's best human, Lee Sedol of South Korea, at an ancient strategy game called Go. Go originated in Asia about 2,500 years ago and is considered many, many times more complex than chess, which fell to AI back in 1997.
And here's what's really crazy. Google's programmers didn't explicitly teach AlphaGo to play the game. Instead, they built a sort of model brain called a neural network that learned how to play Go by itself.
The program, known as AlphaGo, learned the game with little human help. It started by studying a database of about 100,000 human matches, then continued by playing against itself millions of times, reprogramming itself and improving as it went. This kind of self-teaching neural network is loosely based on theories of how the human brain works.
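The self-play loop described above can be sketched in miniature. The toy below is a hypothetical illustration, nothing like Google's actual system: instead of Go and a deep neural network, an agent learns a one-pile subtraction game (players alternate taking 1 or 2 stones; whoever takes the last stone wins) using a simple lookup table of move values, improved purely by playing against itself.

```python
import random

random.seed(0)
Q = {}  # Q[(stones, move)] -> estimated value of that move for the player to act

def best_move(stones, epsilon=0.0):
    """Pick the highest-valued legal move, exploring randomly with prob. epsilon."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def train(episodes=5000, alpha=0.5, epsilon=0.2):
    """Self-play training: the agent plays both sides and learns from outcomes."""
    for _ in range(episodes):
        stones, history = 9, []  # record (state, move) for each turn
        while stones > 0:
            m = best_move(stones, epsilon)
            history.append((stones, m))
            stones -= m
        # The player who took the last stone wins (+1); walking backward
        # through the game, the sign flips because turns alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward

train()
```

After training, the table encodes the game's known strategy (leave your opponent a multiple of 3): from 4 stones the agent takes 1, and from 2 stones it takes both. AlphaGo's leap was doing something like this at vastly greater scale, with a neural network standing in for the lookup table.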
Here at NPR Ed, our mission is to cover how learning happens. The advent of computer systems that "learn" surely falls into this category.
And it poses some fascinating questions. First: What could AI technologies do for human education? Second: How should human education respond to the challenges posed by AI?
Those questions are taken up in a new Pearson pamphlet on artificial intelligence in education. I spoke to Laurie Forcier, one of its authors. She told me that existing computer systems can already provide some of the benefits of one-on-one tutoring. They can also facilitate and moderate group discussions and simulate complex environments for the purpose of learning.
The Pearson report predicts software that will bring helpful feedback in an instant about students' progress, their knowledge state and even their state of mind — eliminating the need to stop and give a standardized test.
On a more futuristic, somewhat creepier note, Forcier and her co-authors also suggest the development of something called a "lifelong learning companion." This is a concept first introduced by early AI researchers decades ago.
Like an imaginary friend, learning companions would accompany students — asking questions, providing encouragement, offering suggestions and connections to resources, helping you talk through difficulties. Over time, the companion would "learn" what you know, what interests you, and what kind of learner you are.
With all its data in the cloud, accessible by phone or laptop, it could follow you from school to soccer practice to internship to college and beyond, and be a valuable record of learning in all contexts. Maybe your learning companion could even write you a letter of recommendation that could serve as a credential.
"Why is the pamphlet called 'an argument for AI in education'?" I asked Forcier. "Who's arguing against it?"
She said that, of course, there are fears about AI being used to replace human teachers. Although the pamphlet states "Teachers — alongside learners and parents — should be central to the design of AIEd tools, and the ways that they are used," it also talks about using AI to address teacher shortages, especially where subject matter expertise is missing.
And here we slam right into the second question: How should human education best respond to the challenges, even the threats, posed by AI?
Because, when computer systems aren't winning at chess, Jeopardy or Go, they're working. They're booking appointments, preparing legal documents, helping you file your taxes — jobs that used to be done by humans with at least a little bit of education.
The World Economic Forum recently projected that automation will eliminate at least 5 million jobs worldwide by 2020.
The continued progress of AI thus calls for a new framework for thinking about the relevance of education. It would be a mistake to spend too much time focusing narrowly on skills and content areas that are quickly being sidelined by technology.
In other words, in a world where computers are taking more and more of the jobs, what is it that humans most need to learn? It probably isn't primarily memorizing facts or figures, or simple rules for problem solving.
An immediate answer is that more of us need to get better at building and interacting with software tools. That's why President Obama, for example, has called for all U.S. students to be exposed to computer science.
A second answer is complementary to the first. Responding to the AlphaGo victory, Geoff Colvin, an editor for Fortune magazine and the author of a book about human capabilities, wrote in The New York Times:
"Advancing technology will profoundly change the nature of high-value human skills and that is threatening, but we aren't doomed. The skills of deep human interaction, the abilities to manage the exchanges that occur only between people, will only become more valuable."
As examples, Colvin names empathy, managing collaboration in groups and storytelling — that is, creative communication.
This takes us back to the "argument" about AI in education, though. It's most likely that human teachers are best at teaching human students how to manage what Colvin called "the exchanges that occur only between people."
But to beat people out for jobs, computer systems don't necessarily have to be better at those jobs. (Have you had a great experience with an automated phone menu recently?) They just have to be good enough, because they work, more or less, for free.
So one great fear when it comes to the Pearson vision of AIEd is that we reproduce existing inequalities. Some students get individualized attention from highly skilled human teachers who use the best learning software available to inform their practice. Other students get less face time with lower-skilled teachers plus TutorBots that imperfectly simulate human interaction.
It's a genuine human challenge, one that's a lot more complex than any game.