
X Cracks Down on Monetized AI War Videos Without Disclosure

New policy targets misleading AI-generated conflict footage but leaves broader AI misinformation largely untouched


X is tightening its rules around AI-generated war content—but only when creators fail to disclose it.

The platform announced it will suspend creators from its Creator Revenue Sharing Program for 90 days if they post AI-generated videos depicting armed conflicts without labeling them as AI-made.

The decision reflects growing concern that generative AI can easily produce convincing fake war footage, potentially misleading millions during real-world crises.


Monetization Ban for Misleading AI War Content

The policy was announced by Nikita Bier, X’s head of product, who emphasized the risks of misinformation during conflicts.

“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote.

Under the new rule:

  • Creators posting AI-generated war videos without disclosure will face a 90-day suspension from the Creator Revenue Sharing Program
  • Repeated violations after the suspension ends will result in permanent removal from the program

Importantly, the policy targets monetization—not the posts themselves.

Creators can still post the content but won’t be able to earn advertising revenue from it.
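
To make the escalation concrete, here is a minimal Python sketch of the penalty ladder as described above. The data structure, field names, and timing mechanics are illustrative assumptions, not X's actual enforcement code.

```python
from dataclasses import dataclass

NINETY_DAYS = 90 * 24 * 60 * 60  # suspension length in seconds

@dataclass
class CreatorStatus:
    """Per-creator standing in the revenue-sharing program (illustrative)."""
    suspended_until: float | None = None  # Unix timestamp; None = no active suspension
    has_prior_violation: bool = False
    permanently_removed: bool = False

def record_violation(status: CreatorStatus, now: float) -> None:
    """First undisclosed AI war video: 90-day monetization suspension.
    A repeat violation after that: permanent removal from the program."""
    if status.permanently_removed:
        return
    if status.has_prior_violation:
        status.permanently_removed = True
    else:
        status.has_prior_violation = True
        status.suspended_until = now + NINETY_DAYS

def can_monetize(status: CreatorStatus, now: float) -> bool:
    """Posts stay up either way; only ad-revenue eligibility is affected."""
    if status.permanently_removed:
        return False
    return status.suspended_until is None or now >= status.suspended_until
```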


How X Plans to Detect the Content

X says it will identify misleading AI-generated posts using a hybrid moderation approach.

The system will combine:

  • AI detection tools designed to identify generative content
  • The platform’s crowdsourced Community Notes fact-checking program

Community Notes allows users to collaboratively flag misleading posts and add contextual information visible to other users.
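
X hasn't said how these two signals are weighed against each other. As a rough illustration only, the sketch below shows one way an automated detector score and a Community Notes flag could combine into a single review decision; the Post fields, the classifier stub, and the 0.8 threshold are hypothetical, not X's actual system.

```python
from dataclasses import dataclass

@dataclass
class Post:
    media_url: str
    has_ai_label: bool            # creator's own AI disclosure
    depicts_armed_conflict: bool  # assumed upstream topical classifier
    note_flags_ai: bool           # any Community Note flags it as AI-made

def classifier_score(media_url: str) -> float:
    """Stub for a generative-content detector; a real system would run
    a model over the media and return a probability in [0, 1]."""
    return 0.0  # placeholder

def needs_disclosure_review(post: Post, threshold: float = 0.8) -> bool:
    """Hybrid check: flag for review when either signal, automated
    detection or crowdsourced Community Notes, suggests the conflict
    footage is AI-generated but carries no AI label."""
    if not post.depicts_armed_conflict or post.has_ai_label:
        return False
    return classifier_score(post.media_url) >= threshold or post.note_flags_ai
```

Either signal alone can trigger review, which is presumably the point of a hybrid approach: automated detection scales, while Community Notes can catch what the models miss.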


Creator Revenue Program Under Scrutiny

X’s Creator Revenue Sharing Program allows users to earn income from advertising revenue tied to their posts.

The program was designed to encourage creators to publish engaging, high-performing content on the platform.

However, critics argue the system can encourage sensational or inflammatory posts.

Common criticisms include:

  • Incentives for clickbait and outrage-driven content
  • Weak enforcement against misleading media
  • The requirement that creators be paying X subscribers to participate

In other words, the same system meant to reward creativity can sometimes reward controversy instead.


A Narrow Policy in a Larger AI Misinformation Problem

While the new rule targets AI-generated war footage, it only addresses a small slice of the broader misinformation landscape.

AI-generated media is frequently used for:

  • Political misinformation
  • Deepfake videos
  • Misleading influencer promotions
  • Deceptive product advertising

Under X’s current policy, much of this content remains allowed, and potentially monetizable; only undisclosed AI-generated footage of armed conflicts triggers the new penalty.

The rule therefore acts more like a guardrail during wartime than a comprehensive approach to AI misinformation.


The Growing Challenge of AI Content Moderation

Generative AI has dramatically lowered the barrier to creating convincing fake videos and images.

A short prompt can now generate battle scenes, explosions, or military footage that look authentic enough to spread rapidly on social media.

That raises a difficult question for platforms:

How do you moderate synthetic media when the tools to create it are improving faster than detection systems?

X’s new policy represents one attempt to address that challenge—but it’s likely only the beginning.


TL;DR:
X will suspend creators from its revenue-sharing program for 90 days if they post AI-generated war videos without labeling them as AI-made. Repeat offenders face permanent removal from monetization. The policy targets misinformation during conflicts but does not address broader AI misinformation on the platform.
