Media Research Information and Insights

If AI is the editor, who sets the standards?

AI-produced or AI-supplemented content is proliferating on the web and in news media, and the speed of this shift is striking. For those of us whose careers began in journalism, communications, editorial, or media tech, it is important to put the present in perspective so we can safeguard the future of news.

First, ask yourself:

  1. Do editorial judgment, verification, and correction policies play a role in the announcements or news you are creating or ingesting?
  2. Can your teams separate signal from noise, and how do they do it?
  3. What guardrails does your company, newsroom, or communications team have in place to protect editorial quality and public understanding?

On reflection, much of what was once taken for granted has been quickly forgotten. In an age when AI increasingly assembles our news feeds, guardrails are not optional: they are the difference between collective learning and collective confusion. That is why preserving news and information standards matters more than ever.

Two timely reads point to where this is headed

I believe we are at what I call a “compounding quality risk” crossroads: if distribution optimises for scale, favouring cheap text over provenance, tomorrow’s models learn from today’s synthetic exhaust. Even if AI has not “won the web,” the briefing layer is advancing fast, amplifying whatever sources it is fed. The result: as AI-generated news proliferates, the garbage-in, garbage-out effect compounds unless we ensure news integrity across platforms.

What to watch to preserve integrity across the web, newsrooms, communications platforms, and research systems

  1. Source transparency: disclose sources and model mix, and when copy is machine-generated.

  2. Attribution by default: link back to the source so readers can verify, cite, and correct.

  3. Evidence-first inputs: prioritise professional and scholarly material, attributable corpora from fact-checked, authoritative, and peer-reviewed sources, for RAG, fine-tuning, training, and discovery; de-emphasise open-web and wiki content except as clearly labelled context; and keep audit trails so people can judge fact from fiction (a minimal sketch follows this list).
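
To make “evidence-first” concrete, here is a minimal sketch of provenance-aware source selection for a RAG pipeline, written in Python. Everything in it, the SourceRecord type, the TRUSTED_TIERS taxonomy, and the select_evidence helper, is a hypothetical illustration of the idea, not a reference to any particular product or API.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical provenance tiers; a real taxonomy will differ by newsroom.
TRUSTED_TIERS = {"peer_reviewed": 1.0, "fact_checked": 0.9, "wire_service": 0.8}
CONTEXT_ONLY_TIERS = {"open_web", "wiki"}

@dataclass
class SourceRecord:
    text: str    # the passage itself
    origin: str  # publisher or corpus name, kept for the audit trail
    tier: str    # provenance label attached at ingestion time
    url: str     # link back so readers can verify, cite, and correct

def select_evidence(records: List[SourceRecord]) -> dict:
    """Split a candidate pool into ranked evidence and clearly labelled context."""
    # Attributable, fact-checked material is ranked first, strongest provenance on top.
    evidence = sorted(
        (r for r in records if r.tier in TRUSTED_TIERS),
        key=lambda r: TRUSTED_TIERS[r.tier],
        reverse=True,
    )
    # Open-web and wiki material is kept, but only as clearly labelled context.
    context = [r for r in records if r.tier in CONTEXT_ONLY_TIERS]
    # The audit trail records exactly what the model was allowed to see, and from where.
    audit_trail = [(r.origin, r.tier, r.url) for r in evidence + context]
    return {"evidence": evidence, "context": context, "audit_trail": audit_trail}
```

In a real pipeline the audit trail would be persisted alongside the generated copy, so that attribution by default (point 2 above) is enforced automatically rather than left to the author.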


If AI is becoming the editor, let's make sure it is an accountable one.


About the author

Sonia LaFountain, VP of Content Partnerships at MEI Global (MEIG), leads licensing strategies that connect global publishers with content opportunities, including model training, GenAI, and Agentic AI protocols. Prior to MEIG, LaFountain held senior roles as COO and SVP Partnerships at iCrowdNewswire and ContentEngine, building data-rich portfolios and scalable partnership programs that support media monitoring, analysis, PR distribution, and enterprise information services worldwide.