On campuses today almost every educational interaction leaves digital traces. Assignments and feedback are given through online portals; debates and discussions happen via learning management systems as well as in classrooms, cafes and dorm rooms.
Those and other digital crumbs give technologists the opportunity to examine the processes, practices and goals of higher education in ways that were largely impossible a decade or so ago.
We've reported here and here on Stanford physicist and Nobel laureate Carl Wieman's "active learning" revolution.
Another physicist-turned-education-innovator (is there something in the physics lab water?) named Timothy McKay sees great promise in "learning analytics" — using big data and research to improve teaching and learning.
McKay, a professor of physics, astronomy and education at the University of Michigan, argues in a recent white paper that higher ed needs to "break down the perceived divide between research and practice."
There are privacy and ethical concerns, of course, which have prompted fledgling codes of conduct to spring up.
I reached out to Professor McKay, who also heads Michigan's Digital Innovation Greenhouse, to dig deeper into how learning analytics works in higher ed.
Give us an example of how new and better data is helping universities and professors understand students better.
I'll give you an example that's drawn from my own experience. I've been teaching here at the University of Michigan for more than 20 years. Most of my teaching has been large, introductory physics courses ... from 400 to 700 students. Now, the way universities have traditionally done this is to provide a kind of industrial approach, to go to that large group of people and to offer them the same materials, ask them to do the same kind of activities at the same pace, and evaluate all those people in exactly the same way. Everybody gets the same course.
If it's well-designed, it's pitched perhaps for the median student in that class. It kind of works well for that median student, but it doesn't work well for anybody else.
What I discovered when I began to look at data about my own classes is something that should have been obvious from the start but wasn't really until I examined the data. I came to understand just how different all the students in my class were, how broadly they were spread across a variety of spectra of difference, and that if I wanted to teach them all equally well, it doesn't work to deliver exactly the same thing to every student.
You're better able to personalize and narrow-cast for students who might need help, who might have a different background, who might have a different perspective?
Or different goals. A lot of times, the discussion will be about students who might be behind or at risk, but it's also true for students who are really excelling academically. They also need special kinds of attention. The first thing that happened for me was to open my eyes to the real challenge, the real importance of personalizing, even when we're teaching at scale.
Then what followed that was a realization that since we had, in fact, information about the backgrounds and interests and goals of every one of our students, if we could build tools, use information technology, we might be able to speak to every one of those students in different ways to provide them with different feedback and encouragement and advice.
We've built this tool here called ECoach, which is a computer-tailored communication system that allows us to speak to a student with detailed knowledge of their background, interests and goals, and be able to do that at scale.
Some of that is automated, but you can tailor it to each student?
It's interesting. It's automated in a way, but in another way, it's all generated by people. The content that we are going to provide, the way we create it, is to sit down together and look at the kinds of people who are present in our classes and think about how we would change the message if one of those students sat down in front of us.
We might be changing, of course, what we're saying. Some students are very well prepared to take a physics class and, in fact, might have studied it for two years in high school before they get to my class. There's one kind of message for them. There are other kinds of students who have never seen this subject before. And there, I might want to really focus on points like how taking a physics class is different from the other kinds of classes they have taken.
We sit down and think about what we would say to these people if they sat in front of us, and technology like ECoach just enables us to say it to all the students, instead of just the few who can get appointments in our office hours.
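To make the idea concrete, here is a minimal, hypothetical sketch of a tailored-messaging system in this spirit: a handful of human-written templates, one chosen per student from profile data. This is an illustration only, not the actual ECoach implementation; the `Student` fields and template logic are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    years_of_hs_physics: int
    goal: str  # e.g. "pre-med"; could drive further templates, unused here

def tailored_message(student: Student) -> str:
    """Pick a human-written template based on the student's background,
    mirroring the 'what would we say if they sat in front of us?' design."""
    if student.years_of_hs_physics >= 2:
        body = ("You've seen much of this material before; "
                "focus on the problem-solving style that's new here.")
    elif student.years_of_hs_physics == 0:
        body = ("Physics may feel different from your other classes; "
                "expect to spend time practicing problems, not memorizing.")
    else:
        body = ("You have some background; use the first weeks "
                "to shore up the fundamentals.")
    return f"Hi {student.name}: {body}"

# The same few templates scale to every student on the roster.
roster = [Student("Ana", 2, "physics major"), Student("Ben", 0, "pre-med")]
messages = [tailored_message(s) for s in roster]
```

The design point the sketch captures is that the content is authored by people; the software only does the matching and delivery at scale.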
OK, take a bunch of freshmen in a 20th-century American lit class. Is there relevant data in the papers they write for that class that could be useful in a learning analytics way?
Absolutely. That's a great example of the new kinds of data that are emerging, the new forms of data. It used to be, when you and I went to college, that you wrote that paper for that class and you handed it in, perhaps, on typing paper. Right? The instructor took it and marked it up with a pen and handed it back to you, and then it was gone from the system. It left no record. The only record that it left, in fact, was the grade that your instructor wrote in a column in a little accounting book.
Now, since those assignments are all turned in through online systems, it's actually possible to go back after the class is over and examine all the work that students did. You could even imagine, for example, if you'd taught a course like that year after year, being able to begin to understand whether student writing was changing in any significant way over the years, because that evidence remains. It exists, and it is possible to use it as input to the process of understanding and improving teaching and learning in a way that it didn't used to be. It just was inaccessible before.
What, if anything, changed in 2016 in higher ed learning analytics? Has it been more widely adopted? Has the kind of data you're going after changed?
One kind of tool that many, many institutions have adopted aims to make sure that they don't fail to recognize students who might be in trouble. I would say that the first big application of learning analytics systems has been to notice when a student, even in a large institution, is running into the kind of difficulty that might be crucial for them, that they might fail a class or that they might drop out of a semester or that they might not complete their degree.
A lot of institutions have done some really good work in using the data that they have to identify students who might be at risk, and then thinking carefully about how they might go to work to support those students to move them back to a track that's leading towards success. Most of the time, the actions that have been taken are actually human actions.
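A simple early-warning rule of the kind described above might look like the following sketch. The field names, thresholds and signals are hypothetical; real systems draw on richer data, and the point is that the output is a list handed to a human adviser, not an automated action.

```python
def flag_at_risk(records, grade_floor=2.0, min_logins_per_week=1):
    """Return (student, reasons) pairs for students whose grades or
    engagement suggest trouble, so a person can follow up."""
    flagged = []
    for r in records:
        reasons = []
        if r["current_grade"] < grade_floor:
            reasons.append("low grade")
        if r["logins_per_week"] < min_logins_per_week:
            reasons.append("low engagement")
        if reasons:
            flagged.append((r["student"], reasons))
    return flagged

# Toy roster: one student doing fine, one showing both warning signs.
records = [
    {"student": "Ana", "current_grade": 3.4, "logins_per_week": 5},
    {"student": "Ben", "current_grade": 1.7, "logins_per_week": 0},
]
flagged = flag_at_risk(records)
```

Here only the second student is flagged, with both reasons attached, which is the kind of signal an adviser would then act on.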
What we're beginning to see is people putting this kind of information to work in richer ways. One example of that is the kind of coaching technology we're building. It enables us to build on the experience of thousands of students who've taken these classes before, and share the lessons that are learned from that with each individual student.
In 2016, schools got better at using learning analytics for more things than, "Joe Smith is going to fail freshman physics. He might need tutoring. He might need an intervention." Some schools are now looking at a broader range of things — from the idea of the college transcript to the admissions process?
Yeah. We are asking questions about our own admissions criteria. It turns out that much of our sense of how we should do college admissions is grounded as much in tradition as it is in evidence.
We're having a big conversation about what we reflect in the transcript. You know, the transcript is the famous, official record of a student, the things that a university provides to the world to reflect on the nature of their experience while they were in college. The transcripts we use right now really were invented in the early 20th century and are stuck in a very industrial mode of education and even in some kind of prior technologies.
You know the way a transcript mostly lists one line for every one of your classes? That was done so that the transcript would fit on a few pages, so that it could be folded up and stuck in an envelope and mailed to somebody. There's really no reason in an information age to say that the record we keep that reflects on what you did in a class needs to be restricted to a single line on a page, right?
Imagine we could enrich that record: in that literature class you described, for example, instead of just assigning a grade to a student, we actually kept some of the student's work as the object that represents what you did in that class. In other words, in principle it could be made available to people who wanted to know, What did you do in that class? You would approach writing that paper as a student in a very different way from the way you do today.
The paper wouldn't just be for the instructor, or you wouldn't just be after the grade. You would actually be after producing a paper that you would be proud of showing to the world to represent what you did in this class.
We're really turning this kind of analytic approach to thinking about pretty existential questions about the nature of how we do our business on campus. I think in the next few years you'll see a lot of change as campuses take advantage of the opportunity that all this information provides to better understand what's going on.
What would you say to a more traditionalist professor who says, 'My teaching is more art than science, and you have to be open to serendipity and improvisation, and I don't want to be led around by the nose by big data?' Harrumph.
I totally understand this perspective. People who assert that are often quite correct. Another thing that has emerged in our understanding of the Michigan campus is that we teach an incredible variety of classes. We have 9,200 classes on this campus, and they range in enrollment from one to 2,000.
They're very different, one from the other. I would say that this kind of learning analytic approach is especially important in environments where we're teaching a lot of people, where we're teaching people who come from a wide variety of backgrounds or have a wide variety of interests and goals, and those environments are typically the places where we're teaching kind of foundational classes when students come into a campus like this.
Then there are a lot of teaching and learning environments where the best thing we can do is to put that expert faculty member in a room with 18 students. A really, truly great learning experience can happen there.
So for those in big college classes, don't fear the data?
I think they should not, because I think it has a lot to bring to help them. I do have colleagues who still are skeptical about this, and a part of the challenge for all applications of data across our lives is for us all to assess the way we feel about reducing experience to a limited number of data points and then trying to learn from that. I think in most of our lives, we've seen that data, perhaps in your Netflix recommender, is kind of useful. Right? It doesn't solve all your problems, but it has a role to play.
I think as we expose more and better ways to put data to work in support of students, we will see people get comfortable with the idea that, yeah, it does have something to bring to the table. It doesn't solve every problem, but it does have an important contribution to make.