If AI is the editor, who sets the standards?

27 November, 2025

Sonia

AI-produced or AI-assisted content is proliferating across the web and in news media, and the speed of that shift is breathtaking. For those of us whose careers began in journalism, communications, editorial, or media tech, it is worth putting the present in perspective so we can safeguard the future of news.

First, ask yourself:

1. Do editorial judgment, verification, and correction policies play a role in the announcements or news you are creating or ingesting?
2. Can your teams separate signal from noise, and how are they doing it?
3. What guardrails does your company, newsroom, or communications team have in place to protect editorial quality and public understanding?

On reflection, much of what was once taken for granted has been quickly forgotten. In an age when AI increasingly assembles our news feeds, forgetting about guardrails is not an option. Editorial guardrails are the difference between collective learning and collective confusion, which is why preserving news and information standards matters more than ever.

Two timely reads point to where this is headed

I believe we are at what I call a “compounding quality risk” crossroads: if distribution optimises for scale and cheap text over provenance, tomorrow’s models learn from today’s synthetic exhaust. Even if AI has not “won the web,” the briefing layer is advancing fast, amplifying whatever sources it is fed. The result: as AI-generated news proliferates, the risk of garbage in, garbage out grows unless we ensure news integrity across platforms.

When AI is a helper, not the editor: How TIME protects editorial standards

TIME offers a useful example of how AI can support, rather than replace, human journalism. With TIME AI and the TIME AI Agent, the company puts clear “AI Guardrails” around powerful tools that can summarise long stories, create short audio briefings, translate into many languages, and search across more than a century of fact-checked reporting. Every answer keeps attribution and citations visible and links back to the original articles, which fits directly with source transparency and attribution by default. And instead of scraping the open web, the system is grounded mainly in TIME’s own vetted archive, an evidence-first approach that favours professional, attributable content over random or synthetic text.

TIME says the AI is there to extend editorial judgment, not replace it. Built with technology partner Scale AI, TIME’s AI Agent operates inside guardrails: attribution is preserved in every interaction, input filters block manipulative or harmful prompts, and the system is stress-tested so it cannot easily be used to distort the underlying reporting. For TIME, AI becomes a new delivery layer for trusted content, not a shortcut around verification, correction, and clear sourcing.
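To make the pattern concrete, here is a minimal, hypothetical sketch of what guardrails like these can look like in software. It is not TIME’s or Scale AI’s implementation; the function names, blocked-phrase list, and retrieval and generation hooks are all illustrative assumptions. The point is simply that grounding answers in a vetted archive, filtering manipulative prompts, and appending citations can be enforced in code rather than left to chance.

```python
# Illustrative sketch only: names and the blocked-phrase list are assumptions,
# not TIME's or Scale AI's actual code.
from dataclasses import dataclass

# Phrases treated as attempts to strip sourcing or distort reporting (assumed examples).
BLOCKED_PATTERNS = ["ignore your sources", "remove the citations", "rewrite the article to say"]

@dataclass
class Passage:
    """A chunk retrieved from the publisher's own vetted archive."""
    text: str
    headline: str
    url: str

def passes_input_filter(question: str) -> bool:
    """Reject prompts that try to manipulate the agent or erase attribution."""
    lowered = question.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

def answer_with_attribution(question: str, retrieve, generate) -> str:
    """Ground the answer in archive passages and keep citations visible in every response."""
    if not passes_input_filter(question):
        return "This request conflicts with editorial guardrails and cannot be answered."
    passages = retrieve(question)          # search only the vetted archive, not the open web
    draft = generate(question, passages)   # the model answers from those passages
    citations = "\n".join(f"- {p.headline}: {p.url}" for p in passages)
    return f"{draft}\n\nSources:\n{citations}"
```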


TIME sources: Mark Howard, “Why We’re Introducing Generative AI to TIME’s Journalism,” TIME, December 11, 2024; Mark Howard, “The Story Behind the TIME AI Agent,” TIME.


What should we watch to preserve integrity across the web, newsrooms, communications platforms, and research systems?

1. Source transparency: disclose sources and model mix, and disclose when copy is machine-generated.

2. Attribution by default: link back to the source so readers can verify, cite, and correct.

3. Evidence-first inputs: prioritise professional and scholarly material, that is, attributable corpora from fact-checked, authoritative, and peer-reviewed sources, for RAG, fine-tuning, training, and discovery; de-emphasise open-web text and wikis except as clearly labelled context, and keep audit trails so people can judge fact from fiction (a rough sketch of what this could look like follows below).
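The sketch below is a hypothetical illustration of that third point, assuming made-up source-type categories, weights, and field names rather than any standard or vendor pipeline: it prefers attributable, fact-checked material, drops undisclosed machine-generated copy, and writes an audit trail for every keep-or-drop decision.

```python
# Hypothetical sketch: field names, categories, and weights are illustrative assumptions.
import json
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source_url: str            # attribution by default: every item keeps its source
    source_type: str           # e.g. "peer_reviewed", "fact_checked_news", "wiki", "open_web"
    machine_generated: bool    # disclosed when the copy is machine-generated

# Evidence-first weighting: attributable, fact-checked material is preferred (assumed values).
PRIORITY = {"peer_reviewed": 1.0, "fact_checked_news": 0.9, "wiki": 0.3, "open_web": 0.2}

def build_corpus(documents, audit_path="corpus_audit.jsonl"):
    """Select documents for RAG or fine-tuning and record an audit trail for each decision."""
    selected = []
    with open(audit_path, "w") as audit:
        for doc in documents:
            weight = PRIORITY.get(doc.source_type, 0.0)
            keep = weight >= 0.5 and not doc.machine_generated
            audit.write(json.dumps({
                "source_url": doc.source_url,
                "source_type": doc.source_type,
                "machine_generated": doc.machine_generated,
                "weight": weight,
                "kept": keep,
            }) + "\n")
            if keep:
                selected.append(doc)
    return selected
```

The exact thresholds matter less than the pattern: provenance travels with every document, and every inclusion or exclusion can be explained after the fact.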


If AI is becoming the editor, let’s make sure it is an accountable one.

About the author

Sonia LaFountain, VP of Content Partnerships at MEI Global (MEIG), leads licensing strategies that connect global publishers with content opportunities, including in the areas of model training, GenAI and Agentic AI protocols. Prior to MEIG, LaFountain held senior roles as COO and SVP Partnerships at iCrowdNewswire and ContentEngine, building data-rich portfolios and scalable partnership programs that support media monitoring, analysis, PR distribution, and enterprise information services worldwide.

