When "Teaching Feels Like Policing": A PBS Report That Made Us Reflect

A few weeks ago, PBS NewsHour published a report that stopped us in our tracks. Not because it told us something new—but because it put into words what so many educators and students have been feeling.

📺 We encourage you to watch the full report:
How AI is reshaping college for students and professors
Source: PBS NewsHour, November 25, 2025 (~8 min)

What the Report Reveals

According to PBS NewsHour, an estimated 86% of college students now use AI tools like ChatGPT and Claude for their coursework. This year's senior class is the first to have spent nearly their entire college career in the age of generative AI.

But what struck us most wasn't the statistic. It was what Professor Megan Fritts, a philosophy instructor, shared about her experience:

"Teaching feels like policing." — Professor Megan Fritts, as reported by PBS NewsHour

She described using eight different detection tools to identify potential AI-generated work. Eight. The mental and emotional toll of that process—constantly suspecting rather than trusting your students—is something we think deserves more attention.

The Human Cost

The report also shared the story of Ashley Dunn, a Louisiana State senior who was initially flagged by a detection tool for an essay she wrote herself. She eventually received an A after discussing it with her professor, but the experience of being wrongly accused is not easily forgotten.

We're not sharing this to criticize detection tools. They exist because there's a real problem. But we wonder: is there a way to build trust without surveillance?

A Question Worth Sitting With

Provost Ravi Bellamkonda posed a question in the report that we haven't been able to stop thinking about:

"What if there exists a technology that lets students produce work of very high quality? How do we distinguish between authentic learning enhancement and academic dishonesty?" — Provost Ravi Bellamkonda, as reported by PBS NewsHour

We don't pretend to have all the answers. But this question is exactly why we started building UKEKA.

What We're Exploring

We've been asking ourselves: what if, instead of trying to detect AI after the fact, we could observe the learning process itself?

Not to police. Not to catch. But to understand.

Our early exploration focuses on tracking the journey of learning—the moments of struggle, the breakthroughs, the connections formed between concepts—rather than just evaluating the final output.

We're not claiming this solves everything. We're still early in this journey, still learning, still iterating. But we believe this direction is worth pursuing.

An Honest Admission

UKEKA isn't launched yet. We haven't reached the scale of mainstream educational platforms. We're still figuring things out.

But we wanted to share this report because it articulates the problem better than we ever could. If you're an educator exhausted by the detection arms race, or a student tired of being suspected, or a parent worried about what credentials will mean for your child—we see you.

We don't know if our approach will work for everyone. But we're committed to trying something different: building a system where trust comes from transparency, not surveillance.

We'd Love to Hear From You

If any of this resonates—or if you disagree—we genuinely want to hear your perspective. What would a better system look like to you?

The conversation is just beginning.


Source: This article references and reflects on reporting from PBS NewsHour. We encourage readers to watch the original report for the complete context. All quotes are attributed to PBS NewsHour's coverage.
