
When Nothing Is True – Everything Is Permitted

There is a growing anxiety in the digital age that goes beyond partisanship, beyond specific leaders, beyond even artificial intelligence itself. It is the fear that we are entering an era in which truth is no longer socially enforceable.

If every video can be fabricated, every voice cloned, every image synthesized – then every piece of real evidence can be dismissed as fake.

And if real evidence can be dismissed at will, then accountability erodes.

This is no longer science fiction. It is a structural vulnerability in our emerging information ecosystem.


NOTHING IS TRUE – EVERYTHING IS PERMITTED (The Origins)

The phrase “Nothing is true; everything is permitted” entered popular culture most memorably through the Assassin’s Creed video game series, where it serves as the philosophical maxim of the Assassin Brotherhood. In that fictional universe, the statement is a meditation on epistemic humility and moral responsibility.

“Nothing is true” reflects the idea that rigid dogma and imposed narratives should be questioned.

“Everything is permitted” reminds us that, once illusions fall away, individuals bear full responsibility for their choices. It is a warning, not a license. Freedom without wisdom becomes recklessness.

But in the age of generative AI and synthetic media, the phrase risks acquiring a darker meaning. When audiovisual evidence itself becomes contestable at scale, “nothing is true” ceases to be philosophical skepticism and starts to resemble informational and epistemic collapse. And when every incriminating recording can be dismissed as fabricated, “everything is permitted” no longer signifies moral agency – it signals practical blanket impunity.

The phrase, therefore, deserves a modern reinterpretation: not as a creed of liberation, but as a WARNING about what happens when shared standards of verification erode. In a world of programmable reality, preserving truth is no longer merely a virtue – it has to be part of a mandatory infrastructure.


The Liar’s Dividend

We already see the early stages of what scholars call the liar’s dividend – the ability of bad actors to deny authentic evidence simply because convincing fakes exist in the same medium.

The mechanism is simple:

  1. AI media-generation tools become widely available.
  2. The public becomes aware that video and audio can be fabricated.
  3. A politician or powerful figure faces incriminating evidence.
  4. They claim: “That’s AI-generated.”
  5. A (large enough) portion of the public believes it.

No forensic report is needed for doubt to spread. The doubt itself is the shield.

This does not require sophisticated deception. It only requires enough epistemic fog to prevent consensus.

The danger is not that people will believe every fake.
The danger is that people will disbelieve every real thing.


“Technology Panic” with Precedent

Every major media innovation has triggered anxiety in the past.

But… society adapted. Courts developed authentication standards. Journalism evolved verification processes.

But generative AI represents something different in terms of scale and accessibility. It dramatically lowers the cost of impersonation. Anyone with some modest technical access can now fabricate plausible depictions of real individuals doing… well… anything.

That changes something fundamental in public discourse.


Synthetic Identity as Identity Abuse

When AI is used to generate depictions of real people without consent – especially in ways that portray them committing crimes, making statements they never made, or engaging in conduct they never engaged in – it becomes more than “creative expression.”

It becomes synthetic identity misuse.

Consider identity theft. It is treated as a severe crime because impersonating someone can cause real financial, legal, and reputational harm.

A malicious deepfake can do exactly the same thing.

The difference is merely technical implementation.

If impersonating someone with forged documents is criminal, why should fabricating audiovisual “evidence” of them be treated as ordinary speech?


The Regulatory Question

The issue is not whether AI should or should not exist. Whether we like it or not, Pandora’s box is already open and the tools are out there for anyone to use.

The issue is whether non-consensual synthetic depictions of real people should be lawful by default.

For your consideration, a balanced framework would include:

1. Criminalization of Malicious Synthetic Depiction

If someone generates content depicting a real person in criminal, fraudulent, or reputationally harmful contexts – without consent and presented as authentic – that should be treated as a serious offense.

Most likely equivalent to public defamation and/or false accusation of a crime.

2. Mandatory Provenance Systems

Commercial AI systems should embed cryptographic provenance (e.g. signed metadata within the media file) and watermarking mechanisms so that compliant outputs are traceable.

As a comparison, we already require disclaimers in dramatized reenactments. For example:

“The person depicted is an actor.”
“Not a real doctor.”
“This is a dramatization.”
“Not actual footage.”

AI-generated content depicting real individuals should meet at least that standard.
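
To make the provenance idea above a bit more concrete, here is a minimal sketch in Python (using the cryptography package) of what a signed provenance manifest could look like: the generating service hashes the media, attaches a disclosure flag and a generator identifier, and signs all of it, so anyone holding the service’s public key can check that a file is unmodified and was declared as AI-generated. The field names and helper functions (create_manifest, verify_manifest, "example-video-model") are illustrative assumptions, not any standard’s actual schema; real systems such as C2PA “Content Credentials” embed a richer manifest inside the media container itself.

# A minimal sketch of file-level cryptographic provenance, assuming a simple
# detached "manifest" (content hash + disclosure metadata + Ed25519 signature).
# Field names and helpers are illustrative, not an actual standard's schema.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_manifest(media_bytes: bytes, generator: str,
                    signing_key: Ed25519PrivateKey) -> dict:
    """Hash the media, record who generated it, and sign both."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,   # identifier of the generating model/service
        "ai_generated": True,     # explicit disclosure flag
    }
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = signing_key.sign(message).hex()
    return payload


def verify_manifest(media_bytes: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """Return True only if the file is unmodified and the signature checks out."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(media_bytes).hexdigest():
        return False  # the file was altered after it was signed
    message = json.dumps(claimed, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), message)
        return True
    except InvalidSignature:
        return False


# Usage: the generating service signs; anyone with its public key can verify.
key = Ed25519PrivateKey.generate()
media = b"...synthetic video bytes..."
manifest = create_manifest(media, generator="example-video-model", signing_key=key)
print(verify_manifest(media, manifest, key.public_key()))                # True
print(verify_manifest(media + b"tampered", manifest, key.public_key()))  # False

Watermarking plays a complementary role here: a signed manifest like this can simply be stripped from a file, whereas a watermark lives inside the pixels or audio samples themselves, which is why the two mechanisms are usually proposed together.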

3. Procedural Safeguards in Courts

If a public figure claims incriminating evidence is “AI-generated”, that claim should trigger formal evidentiary scrutiny – not casual dismissal.

The burden should not be rhetorical – it should be procedural.

As a result, bad-faith “it’s fake” defenses should carry consequences.

It shouldn’t be much different from accusing someone of murder.
If the accusation is true – that is immediate justification for a judicial process against the accused.
If the accusation is false – that is also immediate justification for a judicial process, because the other side of that coin is defamation.

The same logic should apply when someone claims “it’s fake, it’s AI-generated”.
If the claim is true – a judicial process should follow, because fabricating incriminating media is itself a severe digital crime.
If the claim is false – a judicial process should also follow, because falsely accusing someone of fabricating evidence is, once again, defamation (i.e. “you wrongly accused me of a crime I did not commit”).


The Institutional Reality

Yes, corruption exists. Yes, governments sometimes fail. Yes, institutions can be captured. Our current unfortunate reality is stark proof of that.

But corruption is not binary – it exists on a spectrum. Strengthening legal frameworks still increases the cost of abuse, even in partially functioning systems.

The alternative – doing nothing – guarantees that malicious synthetic identity attacks proliferate faster than the law can respond.

One does not simply refuse to criminalize fraud because fraud laws might be imperfectly enforced.
Instead, we must refine enforcement.


What Is at Stake

This is not merely about embarrassing deepfakes or viral misinformation.

It is about whether audiovisual evidence remains socially binding. The risks will only increase and compound as the quality of AI-generated content continues to improve.

Legal systems, financial markets, journalism, and democratic accountability all rely on a baseline assumption:

Some evidence can be authenticated.

If that assumption collapses, the result is not freedom – it is impunity for those most capable of exploiting ambiguity.


Technology Is Neutral. Power Is Not.

AI itself is not malevolent. It is a general-purpose tool with enormous benefits.

But when tools enable scalable impersonation of real people, the law must evolve accordingly.

Regulation does not mean banning creativity.
It means distinguishing between expression and harmful impersonation.

It means preserving the ability to tell the difference between authentic recordings and fabricated depictions.


When Nothing Is True

The phrase “When nothing is true, everything is permitted” captures a moral risk: when shared standards of verification erode, the constraints on bad actors weaken.

The answer must not be panic, nor should it be surrender to a total epistemic collapse. Otherwise, we’ll find ourselves in a dystopia not much different from George Orwell’s “1984” (or worse).

The answer must be structure: clear laws against malicious synthetic depiction, mandatory provenance, and procedural safeguards around claims of fabrication.

Truth does not defend itself automatically.

It is upheld by institutions, norms, and law.

If we fail to update those safeguards for the age of synthetic media, we should not be surprised when plausible deniability becomes the most powerful political weapon of all.

The question is not whether AI should exist.

The question is whether accountability will evolve alongside it.


This isn’t theoretical anymore – the policy terrain is already taking shape

If you’re looking for evidence that governments can move on synthetic identity abuse (and that the debate is now about scope, definitions, and enforcement), it’s already here.

In the US, Congress has enacted the TAKE IT DOWN Act (Public Law 119-12, signed May 19, 2025) targeting nonconsensual intimate imagery, explicitly encompassing AI-generated depictions and imposing notice-and-removal obligations on covered platforms.

In parallel, Congress has introduced bills that go beyond intimate imagery toward broader “digital replica” harms: the DEFIANCE Act focuses on relief for victims of nonconsensual intimate digital forgeries, while the NO FAKES Act proposes a federal framework for unauthorized “digital replicas” of voice/likeness with a notice-and-takedown approach and express attention to First Amendment constraints.

Outside the US, the regulatory direction is even more explicit on transparency. The EU AI Act (Regulation (EU) 2024/1689) creates a Europe-wide legal regime that includes deepfake transparency obligations (i.e., disclosure duties for certain synthetic or manipulated media). And in the UK, the government is moving toward aggressive 48-hour removal expectations for nonconsensual intimate imagery (including AI-enabled abuse), with enforcement teeth discussed in connection with Ofcom and major penalties.

Taken together, these initiatives point to a clear trajectory: policymakers are converging on (1) victim remedies, (2) platform duties, and (3) mandatory disclosure/provenance – exactly the multi-layered approach needed to prevent “AI” from becoming a universal alibi.

What to support

What we all need to do is support proposals that combine victim remedies, platform duties, and mandatory disclosure and provenance.

Because the core point remains:

The danger isn’t that people will believe every fake.

It’s that powerful people will be able to deny every real thing.
