AI-produced or AI-supplemented content is proliferating on the web and in news media, and the speed of that shift is striking. For those of us whose careers started in journalism, communications, editorial, or media tech, it is important to put the present in perspective so we can safeguard the future of news.
First, ask yourself: how quickly has what we once took for granted been forgotten? In an age when AI increasingly assembles our news feeds, editorial guardrails are not optional; they are the difference between collective learning and collective confusion. That is why preserving news and information standards matters more than ever.
I believe we are at what I call a “compounding quality risk” crossroads: if distribution optimises for scale and cheap text over provenance, tomorrow’s models learn from today’s synthetic exhaust. Even if AI has not “won the web,” the briefing layer is advancing fast, amplifying whatever sources it is fed. The result: as AI-generated news proliferates, the garbage-in, garbage-out effect compounds unless we ensure news integrity across platforms. Three guardrails would help:
Source transparency: disclose sources and model mix, and when copy is machine-generated.
Attribution by default: link back to the source so readers can verify, cite, and correct.
Evidence-first inputs: prioritise professional and scholarly material, meaning attributable corpora from fact-checked, authoritative, and peer-reviewed sources, for RAG, fine-tuning, training, and discovery; de-emphasise open-web and wiki material except as clearly labelled context; and maintain audit trails so readers can separate fact from fiction.
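To make the evidence-first idea concrete, here is a minimal sketch of provenance filtering before corpus ingestion. All names here are hypothetical, not from the article: the `source_type` and `url` fields and the `ALLOWED_SOURCE_TYPES` set stand in for whatever provenance metadata a real pipeline would carry, and a production system would use richer, verifiable attestations rather than a self-declared label.

```python
# Hypothetical provenance filter for an evidence-first RAG/training corpus.
# Assumed schema: each document is a dict with "text", "source_type", "url".

ALLOWED_SOURCE_TYPES = {"peer_reviewed", "fact_checked", "wire_service"}

def filter_evidence_first(documents):
    """Keep only documents from vetted source types and attach a short
    audit-trail note so downstream users can trace why each item was admitted."""
    kept = []
    for doc in documents:
        if doc.get("source_type") in ALLOWED_SOURCE_TYPES:
            doc = dict(doc)  # copy so the original corpus is untouched
            doc["audit"] = (
                f"admitted: {doc['source_type']} ({doc.get('url', 'no url')})"
            )
            kept.append(doc)
    return kept

corpus = [
    {"text": "Study finds ...", "source_type": "peer_reviewed",
     "url": "https://example.org/a"},
    {"text": "Forum post ...", "source_type": "open_web"},
]
vetted = filter_evidence_first(corpus)  # only the peer-reviewed item survives
```

The design choice worth noting: open-web material is excluded here entirely, but per the principle above it could instead be admitted with a `"context_only"` audit label rather than dropped, so readers still see it clearly flagged.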
If AI is becoming the editor, let's make sure it is an accountable one.
About the author