Published on February 12, 2025
AI's Ethical Crossroads in Journalism
You Wouldn't Download A Human

This episode examines the balance between human judgment and AI in journalism, with insights from the INMA Congress and the Hearst UK case. We discuss ethical dilemmas like bias in AI systems and audience skepticism, alongside the fallout from Apple's AI-generated news alert failure. Learn how AI is reshaping trust and integrity in the newsroom.

Chapter 1

Humans at the Helm?

Fiona Walsh

Liam, here's a question to kick us off—should we feel nervous about AI's growing role in journalism? I mean, the focus at INMA's 2024 panel was all about resilience. But there's this undercurrent, right? A kind of unease around how much we’re automating and whether it’s, you know, chipping away at the human core of investigative reporting.

Liam Murphy

Yeah, yeah, I've been thinking about that. Like, on one hand... AI’s great at handling the repetitive stuff—legwork, stats, maybe even sniffing out patterns no one expected. But let’s be real—imagine breaking a major corruption story and then trying to explain, “Oh, our bot found it.” It doesn’t sit well.

Fiona Walsh

Right? And the leaders at companies like Sky, Hearst UK—they get it. They’re not just diving headfirst into full automation. They’re careful, balancing AI’s capabilities without sidelining... the human instinct that drives nuanced storytelling.

Liam Murphy

Totally, but get this—one of the big debates from the Congress was whether to buy or build this AI tech. So, like, do you invest in stuff off the shelf and risk becoming dependent on someone else’s roadmap, or go full DIY, and throw a ton of resources—people, time, money—at developing your own system?

Fiona Walsh

Hmm. I can see that. Building in-house gives more control, sure, but the cost? Massive. And then you’ve got ethical considerations popping up left and right. I mean, algorithmic bias, data ownership—it’s not just a tech debate, is it?

Liam Murphy

Nope, it’s cultural, it’s ethical, it’s like an ecosystem tug-of-war. You’ve got transparency versus speed versus trust. Like, there’s this stat—it floored me—only 8% of Brits think AI’s a force for good. Eight percent! That tells you everything about how shaky the ground is for public perception.

Fiona Walsh

And without transparency, that perception won’t improve. If readers feel they’re being sold AI-driven stories without knowing, it chips away at trust. Companies like Hearst are trying to show more transparency... but is it enough?

Liam Murphy

And is it genuine? Hear this—they’re even weighing whether audiences still need to know if they’re interacting with human-made versus AI content. It’s insane! People crave stories that feel human-driven, but companies walk this line where AI feeds efficiency. It's almost like a paradox, y’know?

Fiona Walsh

It really is. You need that balance, though... keeping humanity at the wheel, steering the narrative, with AI as a co-pilot—not the driver.

Chapter 2

The AI Bias Dilemma

Fiona Walsh

Absolutely, Liam. That balance is critical, but it gets even murkier when you consider AI bias. Speaking of trust and transparency, I was looking at the study “Artificial Intelligence in Newsrooms,” and it’s unsettling. It revealed how these systems often mirror biases already present in the data they're fed. It’s like—if we can't even trust the dataset, how do we trust the output?

Liam Murphy

Yeah, yeah, exactly! Like, if the data’s skewed, the AI just inherits those flaws. It’s kinda wild—they’re supposed to make things better, but instead, they sometimes double down on the exact problems they’re meant to solve. That’s... well, ironic, isn’t it?

Fiona Walsh

It really is. The journalists at Al Mamlaka TV raised another concern—privacy violations. Apparently, AI tools can pull sensitive information without proper oversight. That’s a huge red flag. How can journalists maintain their credibility when the tools they’re using could compromise privacy?

Liam Murphy

Oh, big time. And let’s not forget accountability. Imagine running a story based on AI findings, and it turns out the data was wrong—or biased. Who’s to blame? The journalist? The tech company? It’s a mess, honestly. Without clear accountability, the risks to journalistic integrity are enormous.

Fiona Walsh

And it shakes audience trust. If readers can’t trust the sources or suspect AI is being used carelessly, it just erodes confidence in journalism altogether.

Liam Murphy

Totally! It’s why this gap in legislation is such a big deal. AI is racing ahead, but the rules and guidelines... yeah, they’re crawling behind. How do you even start to craft ethics for something that’s evolving so quickly? Should we hold AI to the same moral standards as human journalists? Or is that just wishful thinking?

Fiona Walsh

Maybe it is, to a degree. But there’s gotta be some standard—some baseline accountability. Otherwise, it’s a free-for-all. And it’s not just about what AI can do but what we let it do, ethically. To borrow a thought from the study, they said journalists have to own the process—beginning to end—if they want to protect transparency and trust.

Liam Murphy

Exactly, but here’s the kicker—AI doesn’t just challenge the process; it challenges the profession. I mean, who decides how this technology fits in? Is it the developers, the editors, or... no one? 'Cause right now, it feels like the Wild West, with everyone just making it up as they go.

Fiona Walsh

You’re right. Without legislative guardrails, it’s all reactive instead of proactive. And that’s a perilous way to handle tools this powerful.

Chapter 3

Pausing the Machine

Liam Murphy

You know, Fiona, after everything we just talked about regarding ethics and AI in journalism, it reminded me—did you hear about the Apple AI-generated news fiasco?

Fiona Walsh

Oh, absolutely, Liam. That was... unsettling, to say the least. When you think about it, a simple misstep like the Luigi Mangione headline can ripple out and completely shatter public trust in AI systems—and in journalism altogether.

Liam Murphy

Yeah, I mean, it’s one thing for AI to summarize stats or crunch numbers, but presenting outright false headlines dressed as credible? That’s a whole other level of... well, dangerous.

Fiona Walsh

Exactly. And what stood out to me is how quickly the National Union of Journalists jumped in. Their push to suspend the service wasn’t just about resolving glitches—it was about showing the public that accuracy and trust still matter. It's refreshing to see accountability prioritized, even in the face of innovation.

Liam Murphy

Totally. But it raises such a bigger question, doesn’t it? Like, what happens when errors go unchecked in systems that big? Think about how this could play out in more critical areas—environmental reporting, political coverage, even health news. The stakes are way too high to leave this stuff to chance.

Fiona Walsh

Definitely. And that’s why transparency is non-negotiable. If companies don’t disclose when and how AI plays a role, audiences are left guessing, and trust, again, starts to erode. It’s a slippery slope, Liam.

Liam Murphy

Right, right. And what’s wild is this paradox, yeah? AI was supposed to be the efficiency hero, making journalism sharper and more reliable. But here we are, circling back to good old human oversight to avoid disasters. It’s almost poetic... but also infuriating.

Fiona Walsh

It is! And if we want these tools to serve us and not the other way around, then human-led frameworks—be it ethical guidelines, legislation, or just plain ownership over the process—are crucial. We can’t just fling open the gates and hope for the best.

Liam Murphy

Couldn’t agree more. But look, if there’s one silver lining, it’s how moments like these force us to pause, right? To reflect before barreling ahead. Apple pulling the service back might’ve been a reaction, but it’s the kind of caution we need more of, across the board.

Fiona Walsh

Absolutely, Liam. And maybe that’s the real takeaway here. AI is a powerful tool, sure—but without humanity steering it, we risk losing not just accuracy, but trust, creativity, and everything that makes journalism meaningful. On that note, let’s keep asking the tough questions, yeah?

Liam Murphy

And let’s make sure we’re the ones leading—not lagging behind the machines. Great chat today, Fiona. And to our listeners, thanks for tuning in. We’ll see you next time!

About the podcast

You Wouldn't Download a Human is a playful podcast celebrating AI-generated content. Hosted by our charming Android Hosts, Liam Murphy and Fiona Walsh, we dive into intriguing topics and showcase the creativity of AI. Join us for an entertaining journey that celebrates the unique world of AI!

This podcast is brought to you by Jellypod, Inc.

© 2025 All rights reserved.