
‘We Have to Really Rethink the Purpose of Education’

May 13, 2025

This is an edited transcript of an episode of “The Ezra Klein Show.” You can listen to the conversation by following or subscribing to the show on the NYT Audio App, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts.

Here’s a statistic I’ve been thinking about recently: In 1976, if you asked high school seniors whether they had read any books for fun in the last year, about 40 percent of them had read at least six, and only about 11 percent hadn’t read a single book for fun.

Today, those numbers are basically reversed: About 40 percent haven’t read a single book for fun.

You see it everywhere right now: There are all these headlines about how kids are not reading the way they once did. There are all these stories quoting professors, even at Ivy League universities, about how, when they have tried to assign the reading they’ve assigned their entire careers, their students just can’t do it anymore.

So the professors are adjusting. They’re changing the books, making them shorter, making them simpler, making the reading just less burdensome.

We’re losing something. We can see it on test scores. Over the last decade, we’ve seen the number of kids reading at grade level slipping. And then, of course, the pandemic accelerated all of this. So if you were simply asking: How are the kids doing with some of these intellectual faculties that we once thought were at the core of what education was trying to promote? They’re not doing well.

And then — as if we summoned it or wrote it into the script — here comes a technology, generative A.I., that can do it all: that will read the book and summarize it for you; write the essay for you; do the math problem for you, even showing its work.

We call using A.I. this way cheating. But to the students, why wouldn’t you do it? We know generative A.I. is being used at mass scale by students to cheat, but its challenge is more fundamental than that, of course.

If you have this technology that not only can, but will, be doing so much of this for you, for us, for the economy, why are we doing any of this work at all? Why are we reading these books ourselves when they can just be summarized for us? Why are we doing this math ourselves when a computer can just do it for us? Why am I writing this essay myself when I can get a first draft in a couple minutes from Claude or from ChatGPT?

I have a 3- and a 6-year-old. And one of the ways that my uncertainty about our A.I.-inflected future manifests is this deep uncertainty about how they should be educated. What are they going to need to know?

I don’t know what the economy or society is going to want from them in 16 or 20 years. And if I don’t know what it’s going to want from them, what it’s going to reward in them, how do I know how they should be educated? How do I know if the education I am creating for them is doing a good job? How do I know if I’m failing them? How do you prepare for the unpredictable?

My guest today is Rebecca Winthrop, the director of the Center for Universal Education at the Brookings Institution. Her latest book, coauthored with Jenny Anderson, is “The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.”

Ezra Klein: Rebecca Winthrop, welcome to the show.

Rebecca Winthrop: Lovely to be here, Ezra.

So I have a 3-year-old and a 6-year-old. I feel like I cannot predict with A.I. what it is that society will want from or reward in them in 15 or 16 years, which makes these questions in the interim — How should they be educated? What should they be educated toward? — feel really uncertain to me. My confidence is very, very low that schools are set up now for the world they’re going to graduate into.

You study education. You’ve been thinking a lot about education and A.I. What advice would you give me?

Approximately one-third of kids are deeply engaged — so two-thirds of the kids are not. We need to have learning experiences that motivate kids to dig in and engage and be excited to learn.

When friends or relatives ask me the same question, I usually say: Look, we have to think about three parts to the answer: Why do you want your kids to be educated? What is the purpose of education? Because actually, now that we have A.I. that can write essays and pass the bar exam and do Advanced Placement exams just as well as or better than kids can, we have to really rethink the purpose of education.

The second thing we have to think about is how kids learn. We know a lot about that.

And the third thing is: What should they learn? What’s the content? What are the skills?

People always think of education as a transactional transmission of knowledge, which is one important piece of it. But it is actually so much more than that: learning to live with other people, knowing yourself and developing the flexible competencies to be able to navigate a world of uncertainty. Those are the “whys” for me.

I might ask you: What are your hopes and dreams for your kids under the “why,” before we get to the details of the skills?

I have a lot of hopes and dreams for my kids. I would like them to live happy, fulfilling lives. I think I’m not naive.

Certainly in my lifetime, the implicit purpose of education — the way we say to ourselves: Did this kid’s education work out? — is, Do they get a good job?

Right.

That’s really what we’re pointing the arrow toward.

Right.

The fact that maybe they developed their faculties as a human being or learned things that were beautiful or fascinating, that’s all great. But if they do all that and they don’t get a good job, then we failed them. And if they do none of that, but they do get a good job, then we succeeded.

I think that’s been the reality of education, but I also think that reality relies a little bit on an economy in which we’ve asked people to act very often as machines of a kind. And now we’ve created these machines that can act as, or mimic, people of a kind, so now the whole transaction is being thrown into some chaos.

Exactly. The skills that I think are going to be most important are how motivated and engaged kids are to be able to learn new things. That is maybe one of the most important skills in a time of uncertainty. That they are go-getters. They’re going to be wayfinders. Things are going to shift and change, and they’re going to be able to navigate and constantly learn new things and be excited to learn new things. Because when kids are motivated, that’s actually a huge predictor of how they do.

And we’re going to want kids absolutely to know enough content so that they can be a judge of what is real and what is fake. We’re also going to want them to have experiences where they’re learning and testing how to come up with creative new solutions to things, which is not really what traditional public education has been about.

I think sometimes about this distinction between education as a virtue, and education as something that is instrumental, education as training.

Studying the classics was important, not because it made it likely that you got into law school but because it deepened your appreciation of beauty and your capacities as a human being.

I think for reasons that make a lot of sense, in many ways, we drifted away from that. I don’t know that you build a society off people just enjoying what they’re studying. At the same time, I worry now we have pulled people onto a conveyor belt, that when they get to the other side of it, there’s not going to be that much there.

I don’t even think you need to imagine A.I. for that — that’s already happening to a lot of people. I think one reason you see a lot of anger among young people today is that the deal often doesn’t come through. You do all the extracurriculars, you get your good grades, you show up on time — and then you graduate college and the good jobs and the interesting life you were promised just aren’t there.

There’s something there that feels like it is getting thrown into question. If we don’t know what the future is going to ask of us, how can we be instrumental in the way we train people for it?

We can’t be superinstrumental, so we have to come up with a new plan. We did not know — collectively, us, the world — that we would have generative A.I. that could basically write every seventh-grade essay or college essay to get into university. Or that it could pass the whole host of exams being administered just as well as or better than kids.

We have to come up with a new plan — that is not the plan for success.

I want to push back on something you said: that if kids just enjoy what they’re learning, you don’t know if it’s going to help or if people are really going to benefit from that.

Engagement is very powerful. It’s basically how motivated you are to really dig in and learn, and it relates to what you do: Do you show up? Do you participate? Do you do your homework?

It relates to how you feel: Do you find school interesting? Is it exciting? Do you feel you belong at school?

It relates to how you think: Are you cognitively engaged? Are you looking at what you learn in one class and applying it to what it might mean in your life outside or in other classes?

It’s also how proactive you are about your learning. All those dimensions really work together in education. It’s a very powerful construct to predict better achievement, better grades, better mental health, more enrollment in college, better understanding of content and lots of other benefits to boot.

We need to have kids build that muscle of doing hard things because I worry greatly that A.I. will basically make a frictionless world for young people.

It’s great for me. I’m loving generative A.I., but I have several decades of brain development where I know how to do hard things. But kids are developing their brains. They’re literally being neurobiologically wired for how to attend, how to focus, how to try, how to connect ideas, how to relate to other people — and all of those are not easy things.

You have in your book four modes of engagement. Do you want to talk through them?

Absolutely. We found after three years of research that kids engage in four different ways. There is passenger mode, in which kids are coasting; achiever mode, in which they’re trying to get perfect outcomes; resistor mode, where they’re avoiding and disrupting; and explorer mode, when they really love what they’re learning, and they dig in and they’re superproactive. So that’s the high-level framework. What part do you want to dig in on?

Well, why don’t you go through them? I think passenger mode is particularly interesting here. Why don’t we start there?

Passenger mode is difficult to spot, often for parents and sometimes teachers, because many kids in passenger mode get really good grades but are just bored to tears. They show up to school, they do the homework, but they have dropped out of learning.

Passenger mode is when kids are really coasting, doing the bare minimum. Some signs of this are when your kid comes home and they do their homework as fast as possible. Another sign is that they say: Oh, school’s boring, it’s just boring, I learned nothing. One reason kids are in passenger mode is that school is actually too easy for them.

We talked to so many kids who said: Look, I’m in class, and the teacher’s going over the math homework from yesterday, and I got every one right, and I know the answers — and it’s 45 minutes of that. I understand the kids who don’t get it — they need the help — but I’m going to shop online.

Or I have kids who say: I got the homework home, and I know how to do this stuff, so I just put it in ChatGPT, and it did my problem set for me, and then I turned it in. So that’s when it’s too easy.

Another reason kids get into passenger mode is that school is too hard. You could have a neurodivergent kid. Kids don’t feel they belong, so they’re not tuning in, and they’ve missed certain pieces of skill sets that they really need — knowledge and education are cumulative in many ways — and they get overwhelmed, and they need special attention.

That’s what’s going on in passenger mode.

One reason I wanted to start in passenger mode is that when I think about ways A.I. can be very harmful — and probably already is now — is the connection with that mode.

Many of us have done passenger mode at work or at school. In some ways passenger mode was what I aspired to at school — I just wasn’t able to achieve it. But in passenger mode, say you’re reading something you think is boring or that you don’t want to be reading, but you want to get a good grade.

Maybe at an earlier point you would buy the SparkNotes, but now you just have ChatGPT summarize it, and — more than that — you can have ChatGPT write the essay. Kids are getting better at telling ChatGPT: No, you actually wrote too good of an essay — like, dumb it down a little bit.

You’ve basically hired your own fill-in student who can help you coast. That will help you get — if you’re able to do it adroitly enough — decent grades. But whatever metaskills — forget the knowledge — are being taught — how to read a book, how to write an essay — you’re not actually learning them.

When people think educationally about A.I., that’s a bit of the fear — and I believe that’s something that everybody believes is happening now.

How do you think about that interaction?

I think you’re 100 percent right. I’ve talked to kids all over the country. I’ve seen lots of incidents or cases of highly motivated, highly engaged kids who are using A.I. really well. They’ll write the paper themselves. They’ll go in and use A.I. for research and help them copy-edit. They’re doing the thinking, they’ve lined up the evidence to create a thesis, and they’ve presented it in logical order on their own.

That is the art of thinking. That’s why we assign seventh graders or 10th graders to write essays. It’s not that they’re going to create incredible works of art. It’s to train them how to think logically and how to think in steps. That is a core component of critical thinking. As long as kids are mastering that and the A.I. is helping, that’s a good use.

But a lot of kids are using it to do exactly like you said: shortcut the assignments. An example, one high school kid I talked to said: For my essay, I break the prompt into three parts, run it through three different generative A.I. models, put it together, run it through three antiplagiarism checkers, and then I turn it in.

Another kid said: I run it through ChatGPT, then I run it through an A.I. humanizer, which goes in and puts typos in and makes it, you know —

These kids are getting good at something. I’m not sure I totally want them getting good at it, but they’re getting good at something.

Kids will find a way. No matter what, kids will find a way. We cannot outmaneuver them with technology.

The first response when generative A.I. came in was: Ban it, block it, get antiplagiarism checkers in — which are bad, by the way. I talked to one kid who showed me he had this essay, and the plagiarism checker flagged 40 percent of it. He changed two words, and then it went away.

We cannot out-technologize ourselves. What we need to do is shift what we’re doing in our teaching and learning experiences.

I have very personally complicated feelings on the question of A.I. in education — and just on the question of education in general.

I hated school. Hated it. Did terribly in it. Starting in middle school, going through high school, I failed classes and just found the whole thing impenetrable.

And not because I wasn’t smart. Not because I wasn’t interested even in things related to it. Just somehow the whole construct didn’t work for me. And I couldn’t make it work for me.

It wasn’t exactly that I was bored. I think today I probably could have muscled through it. But for whatever reason, I couldn’t back then.

But I was voracious outside of school. I spent three or four nights a week at Barnes & Noble. I loved reading deeply into things I was interested in.

I’ve related the story before, and one of the reactions I get has been: Well, you should really then recognize the way school fails kids.

And in a way I do. But it’s not obvious to me that schools should be tuned for me. One thing that I recognize, as someone who studies bureaucracies, is that if you just think of U.S. public education — to say nothing of private education or global education — it’s educating a lot of kids. And its ability to tune itself to every kid is going to be pretty modest.

What kids need is different, but somehow you have to be orienting toward something that works for most of them, even if you’re not sure how to make it work for all of them.

I’m curious how you think about that.

I am not sure I agree.

I agree with several things. One: You are not alone. There are many kids currently going through the system who feel like you.

Two: I agree with you that, as a bureaucratic system, it is actually quite miraculous if you think about it. In every community across our country, kids from ages 3 to 18, at the same time of day, are getting themselves to a place Monday through Friday for a certain number of days in the year.

That is an organizational feat. And the thing I don’t agree with is that once you’re there, you just have to design for the mean and the average.

I think there are lots of examples that are relatively big scale — or at least not just one little school in a corner by one fabulous homespun teacher — that do things differently. And I think it actually just gets down to how we orchestrate teaching and learning experiences.

Give me one of those examples — of a schooling system able to educate in a personalized way at scale that seems to you to be replicable.

I’ll give you a couple. There’s an example of schools in North Dakota that have created studios for their adolescents. And what are studios? They are self-created classes that a student can design. They have to tell the teacher what standards they’re meeting.

I’ll give you an example. We have a great character in “The Disengaged Teen,” the book I’ve done with Jenny Anderson, named Kia. She was totally disengaged, doomscrolling in middle school.

And then the studios showed up. She got super into it because she was learning history and science, and she decided to design an escape room. She had to list out for herself: These are the standards I’m meeting — for whatever grade she was in — 10th grade, I think — history and science.

And she did an escape room around the assassination of Abraham Lincoln and John F. Kennedy. She had to design this escape room. And that turned her on like nobody else. She got superexcited. And she did several of those. And then she actually said she was so motivated, she went back to sort of normal classes.

They’re doing that across the district. That’s one small example.

There are other examples of schools that — we’re talking about A.I. — do tech-based education on core subjects for a couple hours a day: math, science, reading, social studies. And then for the rest of the day, the students are doing projects together on whatever it may be that they so decide. And there’s a curriculum, there are things the teachers want them to learn. It’s not: Every kid do whatever you want.

But that’s supermotivating. There’s no reason that we couldn’t do that with the existing staff and people and school buildings and infrastructure. We just have to have the willpower to decide to do things differently.

I’m going to zoom in on something in that story: When the student you brought up found the thing that lit her up, she was then able to do better in all the other classes.

This was a little bit of my own experience of life. For me, it was political blogging, of all things, which I found as a freshman in college. And once I activated, then I became much better at doing things that I didn’t want to do, or didn’t exactly see the point of, even in unrelated fields.

I love that. What’s an example? You started political blogging, and then what happened?

What would have been the conventional line on me from the adults who knew me was: Smart kid, can’t get it together. Just can’t seem to get the homework in. Can’t seem to do things he’s not that interested in doing. And can’t even seem to do the things he is interested in doing in a way that fits what we want from him.

I read every book in English class. I enjoyed doing the essays. And I’m a good writer — I think I’m willing to say that at this point in my life. [Laughs.]

And I still did badly on the essays. Because it wasn’t what they wanted for me, in some way or another. That was the broad experience of my life — that I couldn’t fit what I did to what the world wanted for me. And now I’m just much better at doing that in ways that are not related to my core set of interests.

I’m not trying to overextrapolate my experience. It’s actually important to me not to overextrapolate my experience.

But something I’ve seen you talk about is this quality of: When students find the teacher, find the subject, find the approach that activates them, all of a sudden the things that are not that activating to them become easier. That there is a lock-and-key dynamic to learning.

This is something we talk about around finding your spark. Kids need to find their spark. They may have many sparks, and their sparks may change, but when kids find their spark — for Kia it was this idea of doing an escape room around historical presidential assassinations.

For other students, they find sparks in other places. One of the characters in our book, Samir, absolutely loved local politics and dove in — getting himself on the school board ultimately in high school. Another student, Mateo, was superexcited and turned on by robotics, and that’s what really turned him around.

When you’re motivated, this internal drive makes you engage more. You lean in more. You enjoy it more. There’s a virtuous upward cycle, and there’s lots of evidence to show that it often spills over.

Kia talks about doing these studios for a couple years, which really helped her re-engage and care about school. And then she went back and did some high school college credit courses, which were very traditionally structured. And she said she didn’t love the structure, but she had enough motivation to figure out how to bend the class to her interests. So that’s the best-case scenario.

It doesn’t always spill over automatically. What you talked about, when you said you loved English but didn’t give the teachers what they wanted, is probably because you were a total explorer. And in general, we do not reward engagement in school in a way that supports explorers. Some schools do. And that is what we have to change.

So this gets to the A.I. optimist case. I take the A.I. optimist case as something like this: It’s pretty hard to do personalized learning, even if you have examples that you’ve seen work, because you have one teacher for a classroom of 20, 30 kids oftentimes.

But A.I. makes this completely different. A.I. gives you more tutors than there are children. It allows you to have tutors who adapt to that kid’s individual learning style in any way you want it to, in any way they want it to.

If this kid is a visual learner, A.I. can do visual learning. If pop quizzes are helpful for them, it can do pop quizzes. A.I. can turn the material into a podcast they listen to if they are more audio focused. Everything can be turned into a poem if they absorb information better through the sonnet form.

As we get better at this and as we build these systems and tune them better — although they’re already pretty capable here — our ability to personalize education using artificial intelligence as tutors will be like nothing ever seen before in human history. It’s a complete quantum leap in educational possibility. And as such, it allows you to bring every child into their educational utopia, whatever that is, to spark them, to turn them on, to make them into an explorer.

How do you feel about that more utopic vision?

I think we’re on the same page. Schools exist. They’re important for many reasons. We need to change what we do inside of them, particularly because of generative A.I.

And we need to do it quickly, in addition to regulating generative A.I., so it isn’t so massively in students’ and young people’s hands without being designed for that purpose. I would say those are the two big things we need to do.

But I don’t think our goal inside schools, when we’re educating young people, is to have a 100 percent personalized learning journey for every kid.

What I think you’re talking about is actually the ability for generative A.I. to help teachers — which I think is very real. I think there’s a big difference, and we need to make a big distinction between A.I. supporting educators in doing what they do versus going direct to young people.

Let me push you on this for a second, because if I’m taking the position of the A.I. optimist, what I’d say is: No, I’m not saying that. I’m saying the A.I. will be better than the teachers.

Better at what?

If we are saying that A.I. is going to be better than the median for many people at many kinds of work, why would we not assume that the system we’ll be able to build in six years, given how fast these things are developing, will, per kid, be better than the teacher?

I’m not saying I believe this. But I want to make you argue with the A.I. optimist case.

Yes, you’re pushing on it. I get it. But the question is: Better at what?

Teachers do many, many things. Kids learn in relationships with other humans. We’ve evolved to do that. I do not think that we will go away from that. Or we may go away from it, and then we’ll be like: Oh, my God, that was a huge mistake. And 10 years later, go back.

So there’s a question around skill development and knowledge transmission. That is one thing a teacher does. And I think that’s what you’re talking about. That is an area where I think technology can be really good.

And actually we see it even without generative A.I. There’s adaptive learning software that helps kids learn to read that is incredibly helpful, especially if you have access gaps — you don’t have good teachers, you have large classes, you have substitute teachers that aren’t trained on how to teach kids to read.

So complemented with things that motivate kids and get them excited and help them see the relevance of what they’re doing — which is often in-person — adaptive learning could be a great thing to do inside the classroom.

We see private schools doing that. There’s a group of schools that I have not visited and I don’t know up close, but Alpha Schools are doing this. And they’ve been doing it for 10 years, actually, pre-generative A.I. They do a couple hours of adaptive learning on key academic subjects. And then the rest of the time kids are working together to build bridges or learn about financial literacy or play sports or identify a passion that they want to go learn about in their community. It’s together. It’s alone.

What we don’t want to do is bring A.I. in and have every kid sitting in front of an A.I. tutor alone at their desk for eight hours a day. That’s not the future that is going to help our kids.

Another way you might think about it is that this changes the job of the teacher quite substantially.

Absolutely.

And I’ll say — I don’t believe what I’m about to say, so I don’t want to get yelled at by everybody for every take —

No, no — oh, you’re not talking about me.

I’m not talking about you. I’m talking about my beloved audience. [Laughs.]

[Laughs.] Fair enough.

It seems to me that where A.I. is going to push is toward the skills of the manager, the editor, the supervisor — the fact checker, in a way — and often away from the skills, which right now are needed in much greater quantities, of the worker, the writer or, in this case, maybe the teacher.

So if you think about the world that you were just describing as the one we don’t want, where you have 25 kids in a class all staring at a screen, working with an individualized A.I. tutor, you could imagine a world where you think about every one of those screens as a junior teacher, as an individual tutor.

And there’s some master teacher in the room who the kids can go talk to, who can be pulled in to oversee the learning and reshape what’s happening. There is testing and other things to help us evaluate how the kids are doing. But the teacher, who’s already managing a classroom of students, is now also managing a classroom of helpers, of tutors.

I think that would be the kind of vision you would hear from the more A.I.-pilled among us.

The role of the teacher in traditional public schools is damn near impossible. Honestly. They have to master a certain subject. They have to get kids to grade level. And usually we have a wide range of grade levels in a single classroom — between three and four different grade levels. So they have to differentiate and figure out who needs what — the bored kid who’s the passenger, the struggling kid who’s also the passenger. Both of them silent and quiet — and you don’t even know.

They’ve got to manage classroom dynamics. Kids have to not hit each other or disrupt each other or ruin the furniture.

And teachers have to increasingly be social workers. Kids are not doing well. There are lots of mental health problems. They’ve got to spot that and help. They also have to be relationship managers. They have to work with parents, etc.

So it’s very hard for one teacher to do this all. Absolutely, I think the wave of the future is a different model where you have multiple people, and one of those could be an A.I. tutor helping support our kids’ growth and development.

The interaction with A.I. can help with skill development, knowledge acquisition. But that is one slice of what happens in a classroom. And it is one slice of what it really means for kids to be educated.

Kids are learning all sorts of things in a classroom. They’re learning how to self-regulate emotions in a group. They’re learning how to understand different perspectives from kids who are different from themselves. They’re learning how to ask for help when they need it.

There’s a whole bunch of things that kids are learning that are much more person-to-person that we want to maintain, I would argue.

Here’s where I actually am: I think we’ve just been going through a catastrophic experiment with screens and children.

And right now, we are starting to figure out that this was a bad idea. Schools are banning phones. My sense is that they are not relying very much on laptops and iPads. There was a big vogue for a while that every kid gets their own laptop or tablet. I think that’s beginning to go away, if I’m reading the tea leaves of this right. So I feel a bit better about that as a parent of young kids.

I really feel badly for the parents whose kids have been navigating this over the past 10 years or so. And right now I see A.I. coming, and I don’t think we understand it at all. I don’t think we understand how to teach with it. I don’t think the studies we’re doing right now are good yet — there are too many other effects we’re not going to be measuring.

There’s the narrow thing that a program does, and then there’s what it does for a kid to be staring at a screen all the time in a deeper way. I believe human beings are embodied. And if you made me choose between sending my kids to a school that has no screens at all and one that is trying the latest in A.I. technology, I would send them to the school with no screens at all in a second.

But we’re going to be working through this somehow. And what scares me, putting aside what world my kids graduate into, is their moving into schools at the exact time that educators don’t know what the hell to do with this technology. And they’re about to try a lot of things that don’t work and probably try them badly.

I wonder, as somebody who has tracked this, what you think the lessons of the screens and phones debacle of the 2000s or the 2010s have been?

I agree with you 100 percent. It was a massive, uncontrolled experiment, and our kids were the guinea pigs.

We just had a wait-and-see approach. We cannot take a wait-and-see approach again, and I think that there are lots of lessons.

First off, do not use generative A.I. unless you really know what you’re using it for. There is a real sense of FOMO among educators, parents — young people, even — that there’s this thing happening out there, and I should use it because it’s the newest thing.

I saw that with groups who were working on student well-being, and they had done teacher training around a well-being curriculum for teachers, and they said: Oh, we need to train parents how to do it.

So their idea was: Let’s use generative A.I. It will be great because parents also need to reinforce well-being messages that teachers are giving in school — which is true — and what we’ll do is we’ll create an app.

So this is what they had suggested: Imagine you’re sitting around the dinner table. You pull up your phone, you have an app, and your kids have their phones, and you say: OK, how are you feeling today?

You’re looking at your phone, and they’re telling you how they feel, and then you click through and ask: Why are you feeling that way? — mediated through a phone.

It’s crazy. We’ve lost our mind if we need A.I. to talk to our kids. So if there’s not a real problem you’re trying to solve, don’t use it.

No. 2, and I really do believe this: Any company that wants to work with kids in schools should be a benefit corporation. Because right now you have a lot of companies that are creating what could be really good stuff if used well, but they are legally bound to maximize profits. They can’t maximize social benefit and well-being.

One thing that worries me is the way in which this might widen, or already has widened, the inequality between parents who can pay for private schools and parents who can’t.

Private schools can adapt more quickly. They don’t have to go through legislatures or school boards. They’re just a little bit more independent. They can take the screens out, they can put them in, they can limit what comes in. Whereas the public school systems tend to be somewhat more slow moving.

Living out in the Bay Area, I knew a lot of tech people who were paying money to send their kids to private schools that had banned the products they made starting many years ago. The rest were sending their kids to public schools that had not done that.

And when things are very fast moving, being able to be fast moving is really important. So as somebody who cares a lot about public education, what should the orientation of the public schools be?

They’re going to need to attract parents who think: Don’t their kids need to know how to use A.I.? But also how do public schools not end up flat-footed if this turns out to be a disaster?

This is a really tricky question, and you point out something that is a real issue, which is around the deep equity issues that have already emerged.

Think about the schools that ban A.I. for a kid who has no access to A.I. at home versus a kid who goes home and has full access to all the A.I. tools. That right there is a huge cleavage in our country.

There’s also a huge equity gap in terms of language. Large language models work off language that is written down. There are a lot of languages that aren’t written down that much or have very little written down. So there you’re seeing a global gap between African and Indigenous languages and communities versus English-speaking or other large languages. So equity is a huge one.

On your question about public versus private, I would say to public education systems: Do not have FOMO. That is what the gut instinct is when a new technology comes: I’m missing out. I have a fear of missing out. And I need to adopt it.

I see this. So don’t have FOMO. Don’t use it unless it’s a real problem you want to solve. Do give it to the adults in the school building. Give it to teachers. Have them use it and figure out how it will help them today.

Then give it to novel school leaders to think about how they could maybe restructure the teaching and learning experiences. What are the things that A.I. can do?

There is so much that A.I. could actually do to help make public schools work better: bus schedules, calendaring, school meals, cafeteria, assessment input. There’s so much time that could be really freed up.

Let me try to sharpen the argument that will be used to give people FOMO. It goes something like this: If A.I. is a very potent technology that’s going to be integrated into virtually everything in the future — not literally everything but quite a lot — then not only your literacy but your competency in it becomes paramount.

You’re not going to be replaced by A.I. You’re going to be replaced by a person who knows how to use A.I. So what you need to learn is to use the A.I. — how to manage it, how to prompt it, a sense of what it can and can’t do. And there’s no way to do that other than relentless familiarity and experimentation and exposure.

So a kid who goes to some Luddite school, where the toys are made out of wood when they’re young and the books are all printed on paper when they’re older and there’s not a generative A.I. in sight, is going to lose out. And it will be like not having taught them mathematics or how to drive or type.

How do you take that argument?

I think it is 50 percent right. And I think the 50 percent depends on the age of the child. I absolutely 100 percent think you should send your kids to the Waldorf School with the woodblocks when they’re young.

We know that in early childhood, the more screen time kids have, the less language acquisition they have. We know that when infants are learning language, they learn a lot of language from human-to-human contact. And if you put the same sentences on a screen, they don’t learn it.

Our neurobiology is not going to change in five years. So those are the only confines I think we really have to work with. Everything else I think we can reimagine.

But it’s true that when kids get older, you do want to teach A.I. literacy. This is true for social media, too.

There’s been great research on this. When kids learn about: Oh, these big companies are trying to addict me. I’m doing it for free, but staying on it longer is how they make money — you tell that to teenagers, and they get [expletive] off.

I think we need to do the same with A.I. literacy. This is how it works. It’s not some magical thing. It’s not another human being.

So when kids get older we need to teach them about that. And when they get older, they need to start playing with it, using it. But my huge caveat is with A.I. that is designed for kids.

Right now there is a spring-fling race by the large A.I. labs to get students to sign up. You’ve got ChatGPT giving two months free of ChatGPT Plus. Then you’ve got xAI coming in with two months free for SuperGrok. And then Google, not to be outdone, is like: Well, you can get a year free, and I’ll give you two terabytes of storage.

And these are largely for college students. Google just made Gemini available for kids through parents with a family plan. And they are racing to get the allegiance of young kids.

This is terrible because those products are not designed for children and for learning.

To go back to your equity point, there’s the argument from the opposite direction about inequity: It is the kids with the least access to enrichment materials and tutors for whom a well-structured generative A.I. tutor might be able to make a difference really fast.

We know what rich kids in urban centers get compared with what you’re getting in parts of America that are rural and don’t have wide access to broadband — to say nothing of a kid in rural Nigeria.

You’ve talked a bit about a study in Nigeria. I never quite know how seriously to take these studies. But why don’t you say what it did and what it found?

I think that A.I. has real potential for very specific use cases, particularly around access gaps.

In Nigeria, what was done was: After school, twice a week, an A.I. tutor helped kids learn English. It was for six weeks, which is not long, in June and July, I think. And it was run as a randomized controlled trial. We’re still waiting for all the evidence to come through, but the effect was 0.3 standard deviations, which is pretty good: equivalent to maybe two years of average English learning.

And we see that difference with other technologies, too. It doesn’t have to be generative A.I. — it can be rule-based A.I. or predictive A.I.

We’ve seen similar benefits, for example, in Malawi teaching literacy and numeracy to kids with offline tablets, where teachers have maybe 80 to 100 kids in a class and each kid is having a personalized adaptive learning experience. That is hugely beneficial, as well. So that’s one use case.

Another use case that I think is really great is for neurodivergent kids. Superhelpful. There are all sorts of kids who have different learning differences, who struggle in school, who don’t have access to the specialists that they need — and who would benefit greatly from being in a classroom where they could have a little assistant to help them navigate.

My youngest son has dyslexia, and speech to text has been game changing for him.

There are also use cases here in the U.S. You see A.I. being used and experimented with to support wellness advisers, actual people who fill the gap in rural school districts that don’t have school counselors, for example. A.I. is boosting that person’s ability to have a helpful conversation with a kid. And it’s bringing, through tech, mental health resources into a community that didn’t have any.

So there are lots of use cases actually, if done well, contained well, designed well — and we humans have our hand on the steering wheel.

Ethan Mollick, who’s an A.I. expert, has this idea that has been influential for me about the best available human: In a certain circumstance, is A.I. better for you than — not the best human but the best human available to you at that given moment?

Exactly.

So yes, having a professional, excellent editor, like my editor at The New York Times would be better. But most people don’t have that available. So A.I. is better than the best available editor to them.

There’s a lot more demand for therapy than there are therapists. So A.I. — and particularly where it’s going, even for me sometimes — is a better therapist than the best therapist available to me at a given moment.

It certainly seems plausibly true in education, too. There are all kinds of times when you are confused by what you are reading or learning, and you’re in a big class, and it’s embarrassing to ask 55 questions. Or there isn’t even time to ask 55 questions. And you don’t want to seem stupid.

But if you could contain the system somehow — and that seems more plausible here, where there’s a fundamental prompt — if we got that right, it could be really powerful in a lot of these use cases.

Absolutely. And the key is what you said: “contain the system.” We can’t just bring commercial tech into our schools and hope it will solve these problems. It has to have guardrails. We have to make sure that the data it’s being trained on is legit and not going to create harmful prompts for kids.

We’ve seen terrible things with commercial A.I. companions, with young people developing relationships and being really manipulated emotionally.

But you can put guardrails. It’s totally possible. Frankly, it gets back to the incentives. It gets back to the business model, which is where regulation and government could and should step in.

So yes — “if contained” is the question.

So then let me ask you about the other impulse somebody might have, which is not that you’re going to be replaced by somebody who knows how to use A.I., but that in a world where we have A.I., the most important thing for human beings to be is as human as possible.

And what we need to do is return to more classical education. Reading the great books. Developing the attentional faculties that a lot of data and anecdata suggests that even very elite students are losing, to read a long book and think about it, to write a long essay, to be educated in the way that was considered high-civilization education 70 years ago — something you might get at a place like St. John’s College or University of Chicago or certain private schools today.

A.I. is going to be everywhere. School should be a place not where we learn how to partner with machines — because the rest of society is going to tell you how to do that. School should be a place where we develop specifically human faculties such that we are capable and flexible and attentive in moving through a world that we just cannot predict.

We 100 percent want kids to have the capacity for deep attention. You are thinking about your own kiddos, who are young, and I’m thinking about my own teenagers, who are 13 and 16.

And I see the undermining of attentive faculties from when my 16-year-old got his phone. For a long time, he didn’t want a phone because I’d been droning on and on for years — because he has me as a mother — about addiction and opportunity costs and that it’s OK to enjoy it a little bit, but you can’t sacrifice sleep and physical exercise and in-person communication.

And then he did get his phone. And he struggles with it. And he says: Mom, this is really hard.

It’s eroding his ability to do his homework or to follow something he wants to do. The only thing that it doesn’t seem to distract him from doing is playing the piano. Because he loves playing the piano.

So we should do anything that we can to actually ensure young people are developing that muscle. And it’s not just attention. Attention is the entry point, the doorway that gets you through. It’s actually reflection and meaning-making, which is what you get from deep reading and reading full books, which a lot of young people struggle to do today.

You also can get it from other means. You could get it from long Socratic dialogues in community with diverse people over time.

But it has to be an experience where you reflect. You think about meaning. You think about different perspectives. And it changes how you see the world.

But what do you think about this idea that school should be a rare screen-free oasis in a child’s life?

I sometimes imagine a school that I could send my kids to — I’m not saying it exists, just in my head — where what they do is they go in, and somebody is watching them and helping them read books and think through math. And there are long periods. And they have a certain amount of exploratory capacity in that — you can choose between different books.

But the idea is that maybe this one space in their life would just be a place that is trying to encourage in them that capacity for meaning-making, deep attention, deep contemplation.

Just having a teacher sit there and watch kids read for an hour and a half at a time, and then hold a discussion, seems more valuable to me than it seems to other people — more valuable than a lot of what we do in school.

This idea that schools should explicitly counter the trends of the moment because they need to develop things that the moment will not naturally develop.

How do you think about that?

I think that’s right. If I had to choose for my own kids — and I do — we would have a school that has no phones for all the reasons we know.

Jonathan Haidt has done a great job of catalyzing that movement here in the U.S. and bringing it from across the globe to our schools. We should have cellphone bans in school, bell to bell. Don’t allow phones even at recess, because that’s where kids start interacting and playing with one another.

And I think we should make school a place where kids can actually interact with each other, develop human-to-human socialization capacities, because there is massive commercial tech the minute they leave school that is vying for their attention.

And make sure to do some high-quality A.I. literacy. A.I. literacy is way different than using A.I. to learn. A.I. literacy is: What is this? How is it made? What are the risks? What are the benefits? And let’s talk about what are our ethics around this new tool and how to incorporate it into our lives with an adult instructor talking about how it works and what it is.

That’s A.I. literacy, and that’s important.

I hope you’re right. I’ve been, in general, very skeptical of how much literacy will do. But I guess this goes back to the point you were making about —

There’s a question about how much we will do. But your question is: Will it make a difference?

I’m as phone literate as I think you can almost be. I’ve been writing about this for years.

I’m functionally extremist on this issue, and still the only way for me to modulate my own use to the point I would like to is to use a device that hobbles my phone — the Brick — every time I touch it to the RFID chip.

I have known Jon Haidt for many years. He has been on this show. I’ve read “The Anxious Generation.” Literacy doesn’t do me that much good. Because that’s just not how the brain works, any more than knowing that I shouldn’t eat so many Oreos keeps me from eating them if they’re on the table in front of me.

Yes, and I think you bring something up that’s really important, which is: These things need to be regulated. It’s ridiculous that they’re out there being used by kids. And it’s ridiculous to say that it’s your willpower that should be the deciding factor. It’s ridiculous for adults. It’s ridiculous for kids.

These are incredibly seductive technologies. So this is a really tough one for me. Because you do want kids to be fluent in the new technology of the time, and you do want them to have an ethics and awareness about it. You don’t want them to be seduced by it.

The large A.I. labs are perfectly capable, if they wanted to, of creating a generative A.I. product that is designed for kids that will not be as seductive.

It’s interesting. I was just thinking about that. I think they are. But I also wouldn’t overstate that capability: How will they even understand what it is they are doing? They don’t fully understand the systems they’re making now.

Relentlessly, the kids are more capable and ingenious than the eight or 40 or 100 developers on any given project. When you’re building something that has only hundreds of people building it, and then it’s used by 40,000 kids, I think our experience is that kids are clever in ways, typically, that you are not.

I do think that over time we can create things that are curbed. It’s just that I’m not sure we even know exactly what we are targeting.

Well, I would say they have to change how they’re developing the products. You can’t create an A.I. that will be great for kids and teachers and teaching and learning without having teachers, kids, education experts and child development experts in the development process with you. And so few are.

I think about what the Dutch government is doing. They’re doing a partnership with the teacher unions and the academics and the tech companies. And they’re having a little lab to figure out: What would A.I. look like in schools?

But any of that bottom-up experimentation is the way to go before you roll it out. Because most A.I. developers, although they might be good people, are not child development specialists. But if they change the way they develop their products, they could.

I want to go back to where we began, which is: If you’ve got young kids now, and they’re going to be going into school in the age of generative A.I., how should you think about their schooling?

We can’t really predict the shape of society in 15 or 20 years. I don’t think that’s a question we could answer on the show. If we could, we should probably be investing, not podcasting.

But what we have in education now are constant markers that are supposed to tell us as parents how well our kids’ education is going. And that’s basically grades and maybe, to some degree, counselor reports.

And the idea is if they get good grades and they seem happy and well-adjusted, then at the end of that process, they’ll go to a good college or go to a trade school and get a good job. And it’s going to be a pretty straight line: All A’s equals good job.

The future is foggier. What they’ll need to know is maybe a little foggier. What then should a parent be trying to watch in the meantime? How do you think about whether or not your kid’s education is going well if you’re a little suspicious that the grades designed for — and maybe even not that well designed for — the society we have had are not going to correlate all that well to the society we will have?

And I think as a parent, you yourself, but also other parents out there, are right to be suspicious. Because I think that linear line is going to be much more complicated as the years go on with A.I. in our world.

So what I would think about is a couple of things. One, getting back to the research I’ve done with my co-author and colleague Jenny Anderson, grades don’t show you how much kids are engaged. Schools are not designed to give kids agency. Schools are designed to help kids comply.

And it’s actually not really the fault of the teacher. Teachers are squished from above with all sorts of standards and squished from below with parents putting a lot of pressure on teachers about their kids’ performance and outcome.

And what you really want are some feedback loops that are beyond just grades and behavior to know: Is my kid developing agency over their learning?

And what I mean by that is: Are they able to reflect and think about things they’re learning in a way that they can identify what’s interesting and they can have the skills to pursue new information?

That right there is, I think, going to be the core skill. It is the core skill for learning new things in an uncertain world, which I think is the No. 1 thing we should be thinking about.

In addition to that, I would say make sure kids are learning to interact with other human beings — look for any school that has them working with peers, or even connecting with community members.

Our social networks are getting smaller. There’s going to be a premium on human-to-human interaction as more and more of the knowledge and cognitive tasks get automated and done by A.I. The interpersonal caregiving and teaching skills are going to continue to be important for some time. I’m not sure for how long, but for some time.

And then the last thing, which may seem silly to you, but I increasingly keep thinking about: Think about listening and speaking as the missing piece of literacy alongside reading and writing. We are going to need to show our merit and our credentials more and more through what the British call oracy skills. I think we’ve lost the art of listening and speaking.

I think that’s a good place to end. Thank you for speaking and listening with me. Always our final question: What are three books you’d recommend to the audience?

The first one is “Democracy and Education” by John Dewey, which is over a hundred years old. We are now seeing, through lots of great neuroscience, that his observations around what makes for a good teaching and learning experience were right.

He has some great discussions around the importance of reflection. Not just ingesting knowledge but reflecting on it. Making meaning, figuring out how to do things with it.

And I love it because we didn’t talk about this much, but the role of schools in our society is more than just your and my kids’ education and getting a job — even though that’s what we care about most as parents. Schools are about creating a democratic society or not. So that’s an oldie but goody. I love it.

The second book is by Gaia Bernstein. It’s called “Unwired: Gaining Control Over Addictive Technologies.” She’s a law professor at Seton Hall University. I really enjoy this book because it gives a really good overview, particularly around kids and young people, of the incentives that commercial tech has and some strategies for resisting that and getting to a better place.

And the last one is called “Blueprint for Revolution: How to Use Rice Pudding, Lego Men, and Other Nonviolent Techniques to Galvanize Communities, Overthrow Dictators, or Simply Change the World” by Srdja Popovic and Matthew Miller. Popovic was the Serbian student leader who started a movement to overthrow Slobodan Milosevic and now is doing quite a bit of work on nonviolent protest against authoritarianism. And to me this book is the updated version of nonviolent activism. He really gets media. He really gets social media. And I just think it’s incredibly relevant today.

Rebecca Winthrop, thank you very much.

Thank you.

You can listen to this conversation by following “The Ezra Klein Show” on the NYT Audio App, Apple, Spotify, Amazon Music, YouTube, iHeartRadio or wherever you get your podcasts. View a list of book recommendations from our guests here.

This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our executive producer is Claire Gordon. The show’s production team also includes Marie Cascione, Rollin Hu, Elias Isquith, Marina King, Jan Kobal, Kristin Lin and Jack McCordick. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The director of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Switch and Board Podcast Studio.

Ezra Klein joined Opinion in 2021. Previously, he was the founder, editor in chief and then editor at large of Vox; the host of the podcast “The Ezra Klein Show”; and the author of “Why We’re Polarized.” Before that, he was a columnist and editor at The Washington Post, where he founded and led the Wonkblog vertical. He is on Threads. 

