
X suspends revenue sharing for unlabelled AI war videos — a creator’s crisis and the search for truth

On a crowded afternoon feed, a grainy clip shows missiles chasing a jet through an orange sky. The post, reshared dozens of times in minutes, promises frontline footage; some viewers react with alarm, others with disbelief. In the churn of outrage and clicks, X has issued a policy update that will touch the livelihoods of creators: a rule aiming to stem the flood of synthetic battle footage.

What X is changing and why

The platform announced the new enforcement measure in response to a surge of AI-generated war videos that have filled social feeds since the start of the Iran conflict. Creators who post AI-generated videos of an armed conflict without disclosing that the material was made with AI will be suspended from creator revenue sharing for 90 days; a second violation will lead to permanent removal from the programme. The action follows a wave of faked battle scenes that included altered footage and AI-generated imagery shared at scale.

How will the new rule work?

The policy targets posts that lack an AI disclosure and that depict armed conflict. Nikita Bier, head of product at the platform, framed the move in stark terms: “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people. Starting now, users who post AI-generated videos of an armed conflict — without adding a disclosure that it was made with AI — will be suspended from creator revenue sharing for 90 days. Subsequent violations will result in a permanent suspension from the programme.”

The change is explicitly economic: a suspension removes a creator’s ability to earn through the platform’s revenue-sharing programme. Under that model, creators who build followings approaching 100,000 people can make hundreds of dollars a month, and that monetary incentive can push some toward sensational content that attracts rapid engagement.

Can a revenue ban stop AI deepfakes?

Experts and fact-checkers are cautious. Steve Nowottny, editor at the fact-checking organisation Full Fact, described an escalating problem: “In the last few days we’ve seen lots of examples of AI images shared across different social media platforms as if they are real, including fake pictures of an aircraft carrier and the Burj Khalifa on fire, and an image supposedly showing the body of Ayatollah Khamenei.” He added that even low-quality AI images are shared at scale, and that sheer volume and ease of generation are central concerns.

Sam Stockwell, who researches AI in online information at the Centre for Emerging Technology and Security, noted a new behaviour among users: asking AI chatbots to verify whether videos are fake. “Unfortunately chatbots are not very good at assessing real-time events,” he said, and people then circulate chatbot responses as evidence of authenticity. That pattern complicates enforcement: detection signals such as community notes or generative-AI indicators in metadata can help, but human-driven amplification and manipulation remain powerful drivers.

Who is affected — and what are the social costs?

The policy affects creators who rely on platform income and audiences seeking news from feeds. For creators, the enforcement imposes a tangible risk to income streams tied to sensational posts. For audiences, misleading AI-generated footage can reshape perceptions of a conflict in minutes, spreading false impressions of events on the ground and inflaming public sentiment.

Beyond economics, the policy recognizes a human reality: during armed conflict, access to trustworthy information can have immediate consequences for safety, public opinion and political choices. The platform’s change aims to reduce incentive structures that reward incendiary content, but it does not erase the existing archive of viral fakes or the motivations behind them, including political manipulation and the pursuit of attention.

What is being done beyond suspensions?

The policy combines enforcement with detection: posts may be flagged through community annotations or through generative-AI indicators embedded in content metadata. The platform positions the suspension as part of a broader effort to protect information integrity during a major news event. Meanwhile, fact-checking organisations and researchers continue to catalogue examples of AI-augmented misinformation and call for improved verification tools and clearer disclosures.
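
The article does not detail how such metadata indicators are checked, but the idea can be illustrated. The following is a minimal sketch in Python, not X’s actual pipeline: it scans a file’s raw bytes for two real provenance markers, the “c2pa” label used by C2PA Content Credentials manifests and the IPTC digital source type “trainedAlgorithmicMedia” used to tag AI-generated media. The marker list is an illustrative assumption; a crude substring scan like this can produce false positives, and the absence of markers proves nothing, since metadata is easily stripped.

    from pathlib import Path

    # Byte patterns tied to real provenance standards (illustrative, not exhaustive):
    # - b"c2pa": label used by C2PA "Content Credentials" manifests
    # - b"trainedAlgorithmicMedia": IPTC digital source type for AI-generated media
    MARKERS = [b"c2pa", b"trainedAlgorithmicMedia"]

    def find_provenance_markers(path: str) -> list[str]:
        """Crude substring scan of a media file's raw bytes for known markers."""
        data = Path(path).read_bytes()
        return [m.decode() for m in MARKERS if m in data]

    if __name__ == "__main__":
        import sys
        hits = find_provenance_markers(sys.argv[1])  # e.g. python scan.py clip.mp4
        if hits:
            print("Possible generative-AI provenance markers:", ", ".join(hits))
        else:
            print("No known markers found; absence does not prove authenticity.")

Real detection systems lean on stronger signals, including cryptographically signed provenance manifests, but the sketch shows why metadata alone is a weak defence: the signal survives only as long as the uploader leaves it intact.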

For creators, the path forward involves adjusting to a new norm where disclosure is a requirement for certain kinds of synthetic content. For researchers and verifiers, the challenge is to build faster tools that can keep pace with the scale and sophistication of AI generation and the human practices that amplify it.

Back in the feed where the missile clip first circulated, the post now sits beside a short official note and a growing thread of corrections and questions. The policy change has given the clip a different context: what felt like an instant of raw reporting now reads as part of a contested information environment. The new rule is a blunt instrument, punishing the monetized dissemination of unlabelled AI conflict footage, but many observers say it is only one step in a longer effort to restore trust in a chaotic information landscape.

As creators and audiences navigate the new rule, the central tension remains: how to preserve reporting from the ground while removing the financial incentives that reward viral fakes. The answer, for now, is unsettled, and the feed keeps scrolling.
