When AI Plays God

Birdella INSIGHTS

What a Viral Image Tells Us About Where We Are

19th April 2026

An AI-generated image posted by Donald Trump, depicting himself as Jesus Christ — ethereal light, descending figures, a stricken man receiving what appeared to be divine intervention — briefly appeared on Truth Social last week before being quietly removed.

The President said he thought it showed him as a doctor.

Much of the world saw something rather different.

The incident sparked immediate condemnation, even from within Trump’s own Christian nationalist base. “Blasphemous,” said Douglas Wilson, a prominent Christian nationalist and confidant of Defence Secretary Pete Hegseth. “This should be deleted immediately. There’s no context where this is acceptable,” posted Sean Feucht, a Christian activist currently partnering with the Trump administration.

The furore didn’t last long — within hours, many of those voices had walked their criticism back — but the episode raises questions that go well beyond US politics. It sits at the intersection of AI-generated content, public trust, and the accelerating erosion of shared reality.

The Image as Weapon — and as Mirror

What makes this incident so instructive is not the political controversy it generated, but how effortlessly an AI tool produced something so potent. The image wasn’t crafted by a skilled illustrator over days. It was conjured in seconds, and it was good enough — believable enough, striking enough — to land on the feed of a sitting president and make it into the global news cycle.

This is where the real story lies. AI-generated imagery has crossed a threshold. It is no longer a curiosity or a novelty. It is a tool for narrative construction, for identity projection, for the manufacturing of symbolism at scale. The question for every institution — political, legal, corporate, religious — is no longer “can this happen?” but “what do we do when it does?”

The Legal and Policy Landscape Is Scrambling to Catch Up

From a policy standpoint, the Trump-as-Jesus image is a case study in the gap between what AI can produce and what existing frameworks can address. Who is liable when an AI tool generates blasphemous, defamatory, or politically incendiary content? The platform that hosted it? The user who posted it? The developer of the model that generated it?

These are not abstract questions. They are the questions that legal teams, regulators, and policymakers are actively wrestling with right now — and in most jurisdictions, the answers remain dangerously unclear. The EU AI Act, the UK’s evolving AI governance framework, and a patchwork of US state-level legislation are all attempting to catch a moving target.

Meanwhile, the technology moves faster than the rule-making. Every week brings new capability. Every new capability brings new risk vectors that existing law wasn’t written to handle.

What This Means for Leaders and Institutions

For anyone operating at the intersection of technology, law, and communications — which, increasingly, is everyone — the lesson from this week is simple: AI-generated content is now a reputational, legal, and political variable that cannot be ignored.

The organisations that will navigate this landscape most effectively are those that treat it as an intelligence problem. Not a PR problem to be managed reactively, but a landscape to be understood proactively — tracked, analysed, and anticipated before the image, the deepfake, or the synthetic narrative arrives on your doorstep.

The Trump incident will be forgotten by next week’s news cycle.

The underlying dynamic it exposed will not.


When AI can manufacture symbolism at scale, “trust” becomes your most vulnerable asset. Don’t wait for a synthetic narrative to arrive on your doorstep before defining your defence strategy.

Discover how Birdella’s Intelligence Frameworks protect institutional reputation, or Schedule a Consultation today.