Compliance with the AI Act's transparency obligations goes beyond simply adding a label; it demands that both providers and deployers build robust governance structures to ensure consistent marking and disclosure of AI-generated content across all content types and distribution channels. This guide sets out the concrete steps required to comply with these obligations from 2 August 2026.
In February 2026, German public broadcaster ZDF faced a major scandal after its flagship news program, heute journal, broadcast AI-generated video footage without any editorial labelling. The synthetic video depicted dramatic yet fabricated scenes of a woman and her children being led away by ICE officers – used to illustrate a sensitive political report on US immigration enforcement.
Such missteps highlight that synthetic content has become increasingly indistinguishable from authentic material to the human eye; without clear technical and visual marking, it can no longer be reliably identified. To address this new reality, the European legislator introduced the Artificial Intelligence Act (“AI Act”), establishing transparency obligations designed to ensure that synthetic content remains identifiable as such (Art. 50 AI Act).
While the AI Act’s transparency obligations set commendable goals, the practical steps required for their technical execution remain largely undefined. With the obligations becoming legally binding as early as 2 August 2026, organisations currently lack clarity on the concrete steps necessary for compliance.
To provide practical guidance, the European Commission is currently developing a Code of Practice, the latest draft of which was published on 5 March 2026 (“Draft CoP”). While voluntary, signing and complying with the CoP offers a presumption of conformity with Art. 50 AI Act vis-à-vis regulatory bodies, reducing legal uncertainty for signatories.
To navigate the transparency obligations, the first step is to determine which transparency rules apply to which actor in the AI value chain. Unlike the bulk of the AI Act, which targets high-risk systems, transparency duties apply horizontally to all providers and deployers, regardless of an AI system’s risk classification.
An organisation qualifies as a provider if it develops an AI system, or has one developed by a third party, and places it on the market or puts it into service under its own name or trademark. In practice, this covers companies offering AI-powered tools or services, whether as AI-as-a-Service platforms, enterprise software with integrated AI analytics, or proprietary chatbot solutions.
A deployer is any company using a third-party AI system under its own authority in a professional context. This captures an exceptionally broad range of organisations, from law firms using AI-powered legal research tools to marketing agencies generating campaign visuals and companies relying on AI-driven customer communications.
As established by the AI Act, providers must:
The Draft CoP makes clear that simply attaching a label will not suffice. Crucially, providers must ensure that the origin of AI-generated content remains detectable even after sharing, editing, or reuse. This requirement poses significant practical challenges: content routinely undergoes compression, format conversion, cropping, and redistribution across platforms, each of which can degrade or strip embedded markings. To address this, the Draft CoP requires providers to adopt a multi-layered marking strategy combining several complementary techniques.
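For illustration, here is a minimal Python sketch of the simplest such layer, embedded metadata. The `ai_provenance` key and record fields are invented for this example; a real implementation would follow an open standard such as C2PA rather than an ad-hoc schema:

```python
import json

from PIL import Image, PngImagePlugin  # Pillow

def embed_provenance(in_path: str, out_path: str, generator: str) -> None:
    """Embed a machine-readable provenance record as a PNG text chunk.

    This covers only the metadata layer; the Draft CoP expects it to be
    combined with watermarking and other techniques, because metadata is
    easily stripped (see the robustness test further below).
    Note: out_path must be a PNG for the pnginfo argument to take effect.
    """
    record = {
        "ai_generated": True,
        "generator": generator,        # e.g. model or tool identifier
        "schema": "illustrative-only", # stand-in for a real standard such as C2PA
    }
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))
    Image.open(in_path).save(out_path, pnginfo=meta)

def read_provenance(path: str) -> dict | None:
    """Pillow exposes PNG text chunks via the image's .info mapping."""
    raw = Image.open(path).info.get("ai_provenance")
    return json.loads(raw) if raw else None
```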
Because providers control the platforms where content is created, they must ensure it is “born” with a traceable digital identity:
This level of detail indicates that regulators are unlikely to accept generic claims such as “we use watermarks”. Providers may be expected to demonstrate where and how content is marked, that marking survives common transformations, and how these measures are tested and documented. In effect, providers must treat transparency as an auditable system capability.
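The following minimal harness sketches how such robustness testing might be documented (Python/Pillow; the `detect` callback and the chosen transformation set are assumptions made for this example):

```python
import io
from typing import Callable

from PIL import Image

def _redistribute_as_jpeg(img: Image.Image) -> Image.Image:
    """Simulate platform redistribution: re-encode as JPEG and reload."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=85)
    buf.seek(0)
    return Image.open(buf)

TRANSFORMATIONS = {
    "jpeg_reencode": _redistribute_as_jpeg,
    "resize_then_jpeg": lambda im: _redistribute_as_jpeg(
        im.resize((max(1, im.width // 2), max(1, im.height // 2)))),
    "crop_then_jpeg": lambda im: _redistribute_as_jpeg(
        im.crop((0, 0, max(1, im.width // 2), max(1, im.height // 2)))),
}

def marking_survival_report(
    path: str, detect: Callable[[Image.Image], bool]
) -> dict[str, bool]:
    """Apply each transformation and record whether `detect` still finds
    the marking -- the kind of documented evidence an audit would expect.
    For the pure metadata layer sketched above, every entry comes back
    False, which is precisely the argument for additional watermark layers.
    """
    original = Image.open(path)
    return {name: detect(t(original)) for name, t in TRANSFORMATIONS.items()}

# Example: check the metadata layer from the previous sketch.
# report = marking_survival_report(
#     "out.png", lambda im: "ai_provenance" in im.info)
```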
The practical effectiveness of these measures may depend on which standards major industry players adopt. The C2PA standard, already embraced by Microsoft, Adobe, and Google, is emerging as a leading candidate.
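To give a sense of what such a standard captures, here is a deliberately simplified rendering of a C2PA-style manifest as a Python dict. The structure is illustrative only and not the normative C2PA schema, although the assertion labels mirror real C2PA concepts:

```python
# Illustrative only: a simplified, C2PA-style provenance manifest.
# Real C2PA manifests are binary-encoded, cryptographically signed,
# and embedded in (or referenced by) the asset itself.
manifest = {
    "claim_generator": "ExampleGenAI/1.0",  # hypothetical generating tool
    "assertions": [
        {
            # "c2pa.actions" / "c2pa.created" mirror real C2PA assertion
            # labels; "trainedAlgorithmicMedia" is shorthand for the IPTC
            # digital source type used to flag AI-generated media.
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
    "signature": "<signature binding the claim to the asset's hash>",
}
```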
Based on the Draft CoP, providers should prepare to meet the following key obligations:
As established by the AI Act, deployers must:
The most stringent rules under the Draft CoP target deepfakes. While commonly associated with realistic face-swaps, the AI Act's definition is significantly broader: it encompasses any AI-generated or manipulated image, audio, or video creating a realistic but false impression of people, places, objects, or events. Even standard “photoshopped” modifications may therefore trigger transparency obligations.
According to the Draft CoP, deepfakes must be disclosed through a visible label. The requirements vary by format: non-real-time videos or images require a permanent, visible icon, while audio-only formats need a spoken disclaimer at the beginning. Even deepfakes in artistic, fictional, or satirical works require disclosure, albeit in a non-intrusive manner preserving creative expression while safeguarding third-party rights.
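In a deployer's content pipeline, this format-dependent logic could be routed roughly as follows. The sketch is a simplification under stated assumptions (the enum, function name, and returned descriptions are invented for this example) and should always be verified against the current draft text:

```python
from enum import Enum, auto

class MediaFormat(Enum):
    IMAGE = auto()
    VIDEO = auto()
    AUDIO = auto()

def required_deepfake_disclosure(fmt: MediaFormat,
                                 real_time: bool = False,
                                 artistic: bool = False) -> str:
    """Map a deepfake's format to the disclosure style described in the
    Draft CoP (simplified; not a substitute for legal review)."""
    if artistic:
        return "non-intrusive disclosure preserving the creative work"
    if fmt is MediaFormat.AUDIO:
        return "spoken disclaimer at the beginning of the audio"
    if fmt in (MediaFormat.IMAGE, MediaFormat.VIDEO) and not real_time:
        return "permanent, visible icon on the content"
    return "format-appropriate disclosure at first exposure"
```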
Transparency requirements also extend to AI-generated or manipulated text published to inform the public on matters of public interest, such as news articles, policy papers, or official corporate statements. Disclosure is required unless the deployer can rely on the exception for human review and editorial responsibility. The Draft CoP significantly narrows this exception by requiring documented internal procedures, identified responsible persons, and traceable approval processes – making clear that it cannot serve as a shortcut for superficial editorial oversight.
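A traceable approval process ultimately comes down to keeping structured records. As a sketch, one such record might look like the following (the field names are illustrative, not prescribed by the Draft CoP):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class EditorialApproval:
    """Traceable approval record for AI-assisted text published on
    matters of public interest."""
    content_id: str      # internal identifier of the text
    reviewer: str        # identified responsible person
    procedure_ref: str   # pointer to the documented review procedure
    approved: bool       # outcome of the human review
    approved_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```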
In practice, companies will need to use a standard icon to label content. Until a harmonised EU-wide interactive icon is developed, deployers can use a temporary visual label based on the acronym “AI” or a local-language equivalent (such as “KI” for German-speaking audiences), similar to these examples:
The icon must be clearly visible the first time a user encounters the content and placed in a consistent, appropriate location. In the longer term, the Draft CoP envisages a standardised interactive EU icon that users can click to access more detailed information about the content's origin, drawing on the machine-readable markings embedded by providers.
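Such a temporary label can be stamped onto an image in a few lines of image processing. The sketch below (Python/Pillow; sizing, colours, and placement are invented for this example and do not reflect the pending EU icon) overlays an “AI” marker in a fixed corner:

```python
from PIL import Image, ImageDraw

def overlay_ai_label(in_path: str, out_path: str, text: str = "AI") -> None:
    """Stamp a simple, clearly visible label in a consistent corner."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    margin, box_w, box_h = 12, 64, 32
    # Solid box in the top-left corner keeps the label legible on any image.
    draw.rectangle((margin, margin, margin + box_w, margin + box_h),
                   fill=(0, 0, 0))
    draw.text((margin + 20, margin + 8), text, fill=(255, 255, 255))
    img.save(out_path)
```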
Transparency must thus become integral to a deployer’s compliance management. Deployers must maintain documentation detailing their labelling practices, train personnel on disclosure requirements, and implement monitoring mechanisms allowing users and authorities to flag mislabelled or unlabelled synthetic content for swift correction.
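A monitoring mechanism of this kind could be as simple as an intake queue for flags, sketched below (the data model and in-memory queue are assumptions for illustration; a real system would persist flags and enforce response deadlines):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabelFlag:
    """Report that a piece of content is mislabelled or unlabelled."""
    content_url: str
    reporter: str   # user, auditor, or authority
    reason: str
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# In-memory queue standing in for a ticketing or moderation system.
flag_queue: list[LabelFlag] = []

def flag_content(content_url: str, reporter: str, reason: str) -> LabelFlag:
    """Accept a flag and queue it for review, enabling the swift
    correction loop the Draft CoP expects."""
    flag = LabelFlag(content_url, reporter, reason)
    flag_queue.append(flag)
    return flag
```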
Based on the Draft CoP, deployers should prepare to meet the following key obligations:
The Draft CoP makes clear that compliance with the AI Act’s transparency obligations goes well beyond adding a label. Both providers and deployers will need internal guidelines and governance structures to ensure consistent marking and disclosure across content types, distribution channels, and user journeys.
To ensure timely compliance, organisations should:
[1] As established by the Draft CoP, technical marking solutions must generally adhere to four qualitative principles: effectiveness (reliable detection), interoperability (cross-platform compatibility), robustness (resistance to tampering or removal), and reliability (minimisation of false positives). The Draft CoP further clarifies that while these must be met “as far as technically feasible”, providers should lean toward emerging open standards like C2PA to ensure future-proof compliance.