AI isn't just changing how media is distributed. It's changing where meaning is made and who gets to decide what the public understands.
In conversation with Sanjay Trehan, Advisor, PwC
For most of the digital era, people in the media looked at technology in fairly straightforward terms. It helped us distribute faster, publish more efficiently, reach larger audiences, and measure performance with much greater precision. Those changes were real, and they changed the business in lasting ways. But even through all that disruption, the basic structure of media still felt familiar. Journalists reported, editors exercised judgment, publishers shaped the final product, and audiences encountered information through institutions that were still expected to separate what mattered from what merely moved quickly.
Artificial intelligence is changing that arrangement far more deeply.
It is no longer just a useful layer beneath the system. It is moving closer to the center of how stories are discovered, ranked, framed, translated, summarized, recommended, and increasingly drafted. In many ways, it already behaves like an invisible editorial presence, shaping what gets seen, what gathers momentum, what sounds authoritative, and what quietly disappears before it ever reaches public attention. That is why the future of media now feels less like an extension of the digital age and more like the beginning of a different era altogether.
The challenge ahead is also changing. For years, the media worried about distribution because distribution was power. Today, distribution is abundant. Publishing tools are everywhere. Content creation has become radically easier. The deeper pressure now sits somewhere else. It sits in the struggle to preserve meaning, coherence, and trust in an environment where information is effectively endless, synthetic output is growing, and human attention remains limited. In that environment, the institutions that matter most will not simply be the ones with the best technology. They will be the ones that know how to keep judgment alive while everything around them speeds up.
From publishing more to meaning more
Every earlier media revolution widened access. The printing press expanded the circulation of text. Radio and television brought voice and image into people’s lives at scale. The internet removed geography as a barrier. Social platforms turned distribution into a live, continuous network effect. Each of those shifts changed speed and scale, but the deeper logic remained recognizable. Information still moved from identifiable sources towards audiences, and trust still depended, however imperfectly, on the perceived reliability of those sources.
AI changes the site of influence itself.
A search engine that ranks ten results has already made an editorial choice. A platform that determines what appears at the top of a feed is shaping the informational environment in which opinions are formed. A language model that summarizes a complex event is making decisions about what to emphasize, what to compress, and what to leave out. Those choices once sat more clearly in the hands of editors. Today, they increasingly happen inside systems that operate at extraordinary scale and with very little visible accountability.
That is why I believe the media is moving from the mechanics of distribution to the architecture of meaning. The real struggle ahead is not just over visibility. It is over interpretation. Once that becomes clear, many of the industry’s old assumptions begin to weaken.
Why volume is starting to lose its power
For a long time, scale looked like the obvious answer to sustainability. More stories meant more inventory. More inventory meant more traffic. More traffic meant more advertising. It was a model that made intuitive sense in a world where publishing capacity still carried some scarcity.
That logic weakens very quickly in an AI-driven environment.
When systems can generate competent text at negligible cost, summarize documents instantly, adapt content across formats, translate between languages, and support production at immense speed, generic output loses value fast. What once looked like a productive advantage starts looking like a commodity supply. The ability to produce more stops being especially meaningful when almost everyone can do the same.
I saw an earlier version of this before generative AI became the industry’s dominant theme. At Hindustan Times Digital, increasing output did not necessarily deepen the relationship with readers. In fact, the relationship improved when we balanced volume with value and invested more in depth and coherence. Engagement strengthened. Advertiser value improved. Readers responded to focus. That experience stayed with me because it revealed something simple and important. People do not build lasting relationships with institutions because those institutions publish endlessly. They build them when the institution makes better use of their attention.
That insight matters even more now. In a world where content can be generated almost infinitely, the premium shifts toward something harder to manufacture: originality, credibility, context, and the ability to turn information into understanding. This is why the idea of value density feels increasingly relevant. Media will be judged less by how much it produces and more by how much usefulness, interpretive depth, and trust it can pack into each unit of output.
The newsroom is changing, but its purpose is not
AI will not make the newsroom irrelevant. It will change the kind of work the newsroom does best.
The journalist of the next era will spend less time on repetitive production and more time on verification, contextualization, synthesis, interpretation, and editorial judgment. That shift matters because it pushes the role closer to what journalism was always meant to do at its highest level: not simply move information, but help society make sense of it.
That is also why I find the distinction between artificial intelligence and augmented intelligence useful. Artificial intelligence suggests autonomy. Augmented intelligence suggests extension. It treats technology as something that strengthens human capacity rather than replaces it. In journalism, that difference is fundamental because the most important work rarely sits in the sentence itself. It sits around the sentence. Is the framing fair? Is the source credible? Is the emphasis proportionate? What is missing? What may be factually correct yet still deeply misleading without context? Those questions still belong to human beings.
A machine can summarize a story. It cannot carry moral responsibility for the summary.
A machine can identify a pattern. It cannot decide how much social weight that pattern deserves.
A machine can surface what is likely to perform. It cannot decide what the public most needs to understand.
That is why the future of journalism depends on keeping the human role strong, visible, and deliberate inside machine-assisted systems.
Why oversight will matter more than ever
One election cycle made this issue uncomfortably clear. A ranking algorithm began surfacing entertainment content above public-affairs reporting during a sensitive political period. No ideological bias had been programmed into the system. It was simply optimizing around behavior and producing a result that was editorially unacceptable.
The fix required more than a technical adjustment. It required a principle.
From that moment onward, every automated decision of consequence remained subject to human validation. We called it the editorial override. The phrase may sound procedural, but its significance is larger than that. It reflects a simple institutional truth: progress without oversight becomes drift. Any serious media organization entering the AI era will need the ability to pause, review, and intervene.
That becomes even more important as AI spreads into moderation, translation, summarization, recommendation, and synthetic media. The more invisible these systems become, the more explicit governance has to be. The real question for media leaders is no longer how much can be automated. The harder and more important question concerns custody. Which decisions must remain human? Which outputs need review? Which systems require clear accountability before they touch public trust?
Those are not engineering details. They are leadership decisions.
The quieter danger: sameness
Much of the public discussion around AI still focuses on misinformation, deepfakes, and synthetic manipulation. Those concerns are serious. But another problem is gathering force beneath the surface, and over time it may prove just as damaging.
It is the problem of sameness.
When organizations train on similar datasets, optimize around similar signals, and rely on similar tools, the range of outputs begins to narrow. Content still appears diverse in form, yet its structure, emotional tone, framing habits, and implied assumptions start converging. Different publishers begin sounding strangely alike. Journalism becomes flatter, even when it remains technically competent.
That matters because trust depends partly on identity, and identity depends partly on distinction. When every institution starts sounding broadly similar, audiences lose one of the most important reasons they attach credibility to one source over another. Originality stops being a style choice and starts becoming a strategic necessity.
This is one reason India matters so much in any serious global discussion about media. India’s media ecosystem cannot rely on homogeneity. It operates across multiple languages, layered audiences, uneven connectivity, and strong regional identities. That complexity forces publishers to think contextually. It pushes them toward locality, language, and lived reality. In a world increasingly threatened by editorial flattening, that kind of diversity becomes a source of resilience.
Trust will have to be managed like infrastructure
All of this points to a larger conclusion. Trust can no longer be treated as something soft and ambient that somehow emerges if everything else works. It has to be treated as infrastructure.
Infrastructure is what carries weight. It is what allows the rest of the system to function. It belongs at the foundation. In the next era of media, credibility will work in exactly that way. It will need to be measured, protected, audited, and managed like any other strategic asset.
This is where the conversation becomes more serious. Once credibility becomes measurable, it becomes governable. Once it becomes governable, leadership can work on strengthening it intentionally rather than relying only on instinct or legacy reputation. Frameworks such as NewsGuard and The Trust Project matter because they signal the direction the industry is moving in. Correction discipline, sourcing clarity, labeling, disclosure, and editorial transparency are no longer peripheral concerns. They are becoming central to institutional strength.
The commercial implications are equally important. Trust lowers friction in audience acquisition. It improves retention. It gives advertisers more confidence. It attracts stronger partners and stronger talent. It creates the conditions for pricing power in a market flooded with low-cost content. In that sense, trust is not just ethically important. It is commercially decisive.
The product itself has to change
If AI changes production economics and editorial authority, it will also change the design of media products.
Much of digital media was built around speed of consumption. Infinite scroll, fragmented feeds, algorithmic sequencing, and constant interruption all emerged from a world in which grabbing attention was the central strategic battle. In a landscape of infinite content, that logic begins to weaken. The more information there is, the more valuable comprehension becomes.
That means the next generation of media products will need to help readers understand, not simply consume.
In practice, that points toward slower forms of journalism gaining fresh economic relevance, more thoughtful narrative design, stronger context layers, clearer disclosure cues, and interface choices that reduce cognitive overload. Product design will take on a larger editorial role. It will have to help readers stay oriented in complexity, understand why a story matters, see what remains uncertain, and move through information with greater clarity.
The institutions that lead this shift will stop treating product as a wrapper around content. They will treat it as part of the meaning-making system itself.
What changes, and what stays constant
AI and technology will reshape the future of media in several lasting ways. Newsroom labor will move toward supervision, verification, and sense-making. Generic output will lose value while trust, originality, and context command a premium. Governance and oversight will become central to institutional strength. Success metrics will move closer to comprehension, retention, and quality of relationship.
Yet something deeper will remain constant.
Media will still rise or fall on whether audiences believe it deserves their time. Institutions will still need editors in the deepest sense of that word, even if editorial power is now distributed differently across systems, products, and people. The organizations that endure will still be the ones that understand a basic truth: technology can extend the reach of information, but judgment is what preserves its meaning.
We are entering a world where content is infinite. Under those conditions, discernment becomes more valuable every year.
AI will reshape the future of media. That much is already clear. What remains open is how institutions respond. Some will use technology to accelerate volume and chase efficiency. Others will use it to deepen understanding, strengthen editorial seriousness, preserve plurality, and build trust into the system itself.
That choice will decide which organizations retain real authority in a world flooded with content.