AI Labelling under the AI Act: An Operational Guide for Providers and Deployers of AI Systems
Compliance with the AI Act's transparency obligations goes beyond simply adding a label; it demands that both providers and deployers build robust governance structures to ensure consistent marking and disclosure of AI-generated content across all content types and distribution channels. This guide sets out the concrete steps required to comply with these obligations from 2 August 2026.
In February 2026, German public broadcaster ZDF faced a major scandal after its flagship news programme, heute journal, broadcast AI-generated video footage without any editorial labelling. The synthetic video depicted dramatic yet fake scenes of a woman and her children being led away by ICE officers, used to illustrate a sensitive political report on US immigration enforcement.
Such missteps highlight that, without clear technical and visual marking, synthetic content is becoming increasingly indistinguishable from authentic material to the human eye. To address this new reality, the European legislator introduced the Artificial Intelligence Act (“AI Act”), establishing transparency obligations designed to ensure that synthetic content remains identifiable as such (Art. 50 AI Act).
While the AI Act’s transparency obligations set commendable goals, the practical steps required for their technical execution remain largely undefined. With these obligations becoming legally binding as early as 2 August 2026, organisations currently lack clarity on the concrete steps necessary for compliance.
To provide practical guidance, the European Commission is currently developing a Code of Practice, the latest draft of which was published on 5 March 2026 (“Draft CoP”). While voluntary, signing and complying with the CoP offers a presumption of conformity with Art. 50 AI Act vis-à-vis regulatory bodies, reducing legal uncertainty for signatories.
Who is Affected?
To navigate the transparency obligations, the first step is to determine which transparency rules apply to which actor in the AI value chain. Unlike the bulk of the AI Act, which targets high-risk systems, transparency duties apply horizontally to all providers and deployers, regardless of an AI system’s risk classification.
An organisation qualifies as a provider if it develops an AI system, or has one developed by a third party, and places it on the market or puts it into service under its own name or trademark. In practice, this covers companies offering AI-powered tools or services, whether as AI-as-a-Service platforms, enterprise software with integrated AI analytics, or proprietary chatbot solutions.
A deployer is any company using a third-party AI system under its own authority in a professional context. This captures an exceptionally broad range of organisations, from law firms using AI-powered legal research tools and marketing agencies generating campaign visuals to companies relying on AI-driven customer communications.
Implementing Provider-Side Marking
As established by the AI Act, providers must
- design AI systems, such as customer service chatbots, in a way that ensures humans are informed they are interacting with an AI system (Art. 50 (1) AI Act). This disclosure is mandatory unless a reasonable person would recognise the communication’s artificial nature from the circumstances.
- for AI systems that generate synthetic audio, image, video, or text content, ensure that outputs are marked in a machine-readable format (Art. 50 (2) AI Act).[1]
Because providers control the platforms where content is created, the Draft CoP expects them to ensure that content is “born” with a traceable digital identity:
- For formats supporting metadata, such as images, videos, or documents, providers are expected to embed digitally signed provenance information (metadata) identifying the AI system and the type of operation performed (a minimal sketch of this approach follows this list).
- Since metadata can easily be removed, it must be backed up by invisible watermarks embedded directly into the content. These watermarks are designed to survive compression, cropping, and other alterations, enabling automated detection of synthetic origin.
- Where both techniques prove insufficient, particularly for text-based or transformed outputs, the Draft CoP envisages fingerprinting or logging mechanisms as a fallback: the provider’s system retains a mathematical fingerprint of each output, allowing users to upload content to a verification portal that checks against internal logs whether the system produced that specific content (see the fingerprint-registry sketch below). Beyond these techniques, emerging methods such as token-level watermarking (embedding provenance signals in individual text tokens during generation) and blockchain-based verification systems (creating immutable, decentralised records of content origin) illustrate the breadth of complementary measures providers may deploy to strengthen the traceability of AI-generated outputs.
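The following is a minimal sketch of the metadata approach in Python, using Pillow and the standard library. The field names, the HMAC-based signature, and the use of PNG text chunks are illustrative assumptions only: the C2PA standard discussed below instead uses JUMBF containers and X.509 certificate chains, which allow third parties, not just the provider, to verify the signature.

```python
import hashlib
import hmac
import json

from PIL import Image, PngImagePlugin

# Illustrative symmetric key; C2PA-style signing uses asymmetric X.509 keys.
SIGNING_KEY = b"replace-with-a-managed-secret"

def embed_provenance(in_path: str, out_path: str, system: str, operation: str) -> None:
    """Embed a digitally signed provenance record as PNG text chunks."""
    record = json.dumps(
        {"generator": system, "operation": operation},  # e.g. "text-to-image"
        sort_keys=True,
    )
    signature = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()

    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_provenance", record)
    meta.add_text("ai_provenance_sig", signature)
    Image.open(in_path).save(out_path, pnginfo=meta)

def verify_provenance(path: str) -> bool:
    """Confirm the record is present and its signature has not been tampered with."""
    chunks = Image.open(path).text
    record, sig = chunks.get("ai_provenance"), chunks.get("ai_provenance_sig")
    if record is None or sig is None:
        return False  # metadata stripped: fall back to watermark or fingerprint checks
    expected = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The sketch also shows why metadata alone is insufficient: re-encoding the image, or saving it in a format that discards text chunks, silently removes the record. That is precisely the gap the watermarking and fingerprinting layers are meant to close.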
This level of detail indicates that regulators are unlikely to accept generic claims such as “we use watermarks”. Providers may be expected to demonstrate where and how content is marked, that marking survives common transformations, and how these measures are tested and documented. In effect, providers must treat transparency as an auditable system capability.
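The fingerprinting-and-logging fallback can be pictured as a content-hash registry kept by the provider. Below is a minimal sketch under a simplifying assumption: exact-match SHA-256 hashing, whereas a production system would more plausibly use robust or perceptual hashes so that re-encoded or lightly edited copies still match.

```python
import hashlib

class FingerprintRegistry:
    """Toy verification log: the provider records a fingerprint of every output,
    and a verification portal later checks uploaded content against that log."""

    def __init__(self) -> None:
        self._log: dict[str, str] = {}  # fingerprint -> generation context

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Exact-match fingerprint; robust or perceptual hashing would be needed
        # for content that is recompressed or cropped after generation.
        return hashlib.sha256(content).hexdigest()

    def record_output(self, content: bytes, context: str) -> None:
        """Called by the generation pipeline for every output it produces."""
        self._log[self.fingerprint(content)] = context

    def verify_upload(self, content: bytes) -> str | None:
        """Called by the verification portal: returns the logged generation
        context if this exact content was produced by the system, else None."""
        return self._log.get(self.fingerprint(content))

# Usage: the portal answers "did our system produce this file?"
registry = FingerprintRegistry()
registry.record_output(b"synthetic article text ...", "model=v3, 2026-08-02")
print(registry.verify_upload(b"synthetic article text ..."))  # -> "model=v3, 2026-08-02"
print(registry.verify_upload(b"unrelated text"))              # -> None
```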
The practical effectiveness of these measures may depend on which standards major industry players adopt. The C2PA standard, already embraced by Microsoft, Adobe, and Google, is emerging as a leading candidate.
Provider Obligations – At a Glance
Based on the Draft CoP, providers should prepare to meet the following key obligations:
- Ensure that users of AI systems designed for direct interaction are clearly informed that they are interacting with an AI system.
- Implement machine-readable marking for all synthetic audio, image, video, and text outputs, combining (i) digitally signed provenance metadata, (ii) invisible watermarks designed to survive common transformations such as compression or cropping, and (iii) where metadata and watermarking prove insufficient, fingerprinting or logging mechanisms as a supplementary fallback.
- Evaluate and, where appropriate, adopt emerging technical standards such as the C2PA standard for content provenance and authenticity.
- Establish internal processes for testing, monitoring, and documenting the effectiveness of all marking techniques on an ongoing basis, ensuring that transparency measures remain auditable (a test sketch illustrating this follows this list).
- Continuously monitor the further development of the Code of Practice and assess whether updates require adjustments to existing compliance measures.
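As an illustration of what auditable, documented testing could look like, here is a pytest-style sketch that checks whether an invisible watermark survives JPEG re-encoding and cropping. The module `our_watermark_lib` and its functions `embed_watermark` and `detect_watermark` are hypothetical stand-ins for whatever watermarking library a provider actually uses.

```python
import io

import pytest
from PIL import Image

# Hypothetical provider functions -- stand-ins for a real watermarking library.
from our_watermark_lib import embed_watermark, detect_watermark  # assumed API

@pytest.fixture
def marked_image() -> Image.Image:
    img = Image.new("RGB", (512, 512), color="white")
    return embed_watermark(img)  # assumed to return a watermarked copy

def reencode_jpeg(img: Image.Image, quality: int) -> Image.Image:
    """Simulate lossy re-encoding, a common transformation in the wild."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

def test_watermark_survives_compression(marked_image):
    assert detect_watermark(reencode_jpeg(marked_image, quality=60))

def test_watermark_survives_cropping(marked_image):
    cropped = marked_image.crop((64, 64, 448, 448))  # remove 64 px from each edge
    assert detect_watermark(cropped)
```

Keeping such tests in a versioned suite, with recorded results, is one plausible way to produce the kind of documentation regulators may ask for.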
Setting Up Deployer-Side Disclosure
As established by the AI Act, deployers must
- visibly disclose when content constitutes a deepfake or when AI-generated text is published to inform the public on matters of public interest (Art. 50 (4)). An exception applies where the text has undergone meaningful human review and is published under editorial responsibility.
- inform individuals when they are exposed to systems designed to detect emotions or categorise them on the basis of biometric data (Art. 50 (3)). If a company uses, for example, an AI system to monitor employee fatigue, the affected individuals must be clearly notified that they are being “read” by an AI.
The most stringent rules under the Draft CoP target deepfakes. While commonly associated with realistic face-swaps, the AI Act's definition is significantly broader: it encompasses any AI-generated or manipulated image, audio, or video creating a realistic but false impression of people, places, objects, or events. Even standard “photoshopped” modifications may therefore trigger transparency obligations.
According to the Draft CoP, deepfakes must be disclosed through a visible label. The requirements vary by format: non-real-time videos or images require a permanent, visible icon, while audio-only formats need a spoken disclaimer at the beginning. Even deepfakes in artistic, fictional, or satirical works require disclosure, albeit in a non-intrusive manner preserving creative expression while safeguarding third-party rights.
Transparency requirements also extend to AI-generated or manipulated text published to inform the public on matters of public interest, such as news articles, policy papers, or official corporate statements. Disclosure is required unless the deployer can rely on the exception for human review and editorial responsibility. The Draft CoP significantly narrows this exception by requiring documented internal procedures, identified responsible persons, and traceable approval processes – making clear that it cannot serve as a shortcut for superficial editorial oversight.
In practice, companies will need to use a standard icon to label content. Until a harmonised EU-wide interactive icon is developed, deployers can use a temporary visual label based on the acronym “AI” or a local-language equivalent such as “KI” for German-speaking audiences.
The icon must be clearly visible the first time a user encounters the content and placed in a consistent, appropriate location. In the longer term, the Draft CoP envisages a standardised interactive EU icon that users can click to access more detailed information about the content's origin, drawing on the machine-readable markings embedded by providers.
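Pending the harmonised EU icon, a deployer can render a provisional visible label programmatically. A minimal Pillow sketch follows; the badge geometry, colours, and top-left placement are illustrative choices rather than requirements of the Draft CoP.

```python
from PIL import Image, ImageDraw, ImageFont

def add_ai_badge(in_path: str, out_path: str, label: str = "AI") -> None:
    """Overlay a visible label in the top-left corner of an image.

    Placement and styling are illustrative; the Draft CoP requires the label
    to be clearly visible on first exposure and consistently positioned."""
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    font = ImageFont.load_default()
    margin, pad = 12, 8
    left, top, right, bottom = draw.textbbox((margin + pad, margin + pad), label, font=font)

    # Semi-opaque dark badge behind light text for contrast on any background.
    draw.rectangle((margin, margin, right + pad, bottom + pad), fill=(0, 0, 0, 180))
    draw.text((margin + pad, margin + pad), label, font=font, fill=(255, 255, 255, 255))

    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

# Usage, e.g. for a German-language audience:
# add_ai_badge("campaign_visual.png", "campaign_visual_labelled.png", label="KI")
```

For audio-only formats, the analogous step would be to prepend the spoken disclaimer to each clip before distribution.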
Transparency must thus become integral to a deployer’s compliance management. Deployers must maintain documentation detailing their labelling practices, train personnel on disclosure requirements, and implement monitoring mechanisms allowing users and authorities to flag mislabelled or unlabelled synthetic content for swift correction.
Deployer Obligations at a Glance
Based on the Draft CoP, deployers should prepare to meet the following key obligations:
- Visibly label all deepfakes in the sense of the AI Act, using a standardised icon or, for audio-only formats, a spoken disclaimer at the beginning of each clip.
- Disclose AI-generated or manipulated text published to inform the public on matters of public interest, or, alternatively, establish sufficient and documented human review processes and publish content under identified editorial responsibility.
- Inform users when they are exposed to AI systems designed to detect emotions or categorise them based on biometric data.
- Establish internal compliance documentation, train personnel on disclosure requirements, and implement monitoring mechanisms for flagging and correcting mislabelled or unlabelled synthetic content.
- Continuously monitor the further development of the Code of Practice and assess whether updates require adjustments to existing compliance measures.
Conclusion
The Draft CoP makes clear that compliance with the AI Act’s transparency obligations goes well beyond adding a label. Both providers and deployers will need internal guidelines and governance structures to ensure consistent marking and disclosure across content types, distribution channels, and user journeys.
To ensure timely compliance, organisations should
- assess their role as provider or deployer under the AI Act,
- identify gaps in current practices against the transparency obligations under Art. 50 AI Act and the Draft CoP,
- where required, develop internal policies and governance structures for AI transparency, including labelling workflows, and train relevant personnel on applicable obligations, and
- to the extent necessary, implement technical marking and disclosure mechanisms, set up monitoring and complaint-handling procedures, and finalise compliance documentation ahead of enforcement.
[1] As established by the Draft CoP, technical marking solutions must generally adhere to four qualitative principles: effectiveness (reliable detection), interoperability (cross-platform compatibility), robustness (resistance to tampering or removal), and reliability (minimisation of false positives). The Draft CoP further clarifies that while these must be met “as far as technically feasible”, providers should lean toward emerging open standards like C2PA to ensure future-proof compliance.
About us
YPOG stands for You + Partners of Gamechangers – forward-thinking legal and tax advice. Supporting companies that are focused on emerging technologies, YPOG embraces change as an opportunity to develop cutting-edge solutions. The YPOG team offers comprehensive expertise in the areas of Funds, Tax, Transactions, Corporate, Banking, Regulatory + Finance, IP/IT/Data Protection, Litigation, and Corporate Crime + Compliance + Investigations. YPOG is one of the leading law firms in Germany for venture capital, private equity, fund structuring, and the implementation of distributed ledger technology (DLT) in financial services. Both the firm and its partners are regularly recognized by renowned national and international publications such as JUVE, Best Lawyers, Chambers and Partners, Leaders League, and Legal 500. YPOG is home to more than 180 experienced attorneys, tax advisors and tax specialists as well as a notary, working across offices in Berlin, Cologne, Hamburg, Munich, Cambridge and London.
Further information: www.ypog.law/en/ and www.linkedin.com/company/ypog

