EU AI Act Article 50(4) for content publishers.
A working reference for the regulation, the documentation we produce, and the limits of what we promise. Read it before your first scoping call.
Last reviewed 7 April 2026. Tracks the second draft Code of Practice published 5 March 2026.
The human review exemption.
Article 50 of Regulation (EU) 2024/1689 sits in Chapter IV of the EU AI Act and sets transparency obligations for providers and deployers of certain AI systems. The paragraph that matters for content publishers is Article 50(4). It targets deployers, the natural or legal persons using an AI system in the course of a business or professional activity, and applies whenever AI-generated text is published with the purpose of informing the public on matters of public interest.
The default obligation is a label. Where the obligation applies, the publisher must disclose that the text was artificially generated or manipulated. The exemption is the second sentence of the provision's second subparagraph: the labelling obligation does not apply where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content. That carve-out is the entire reason this product exists.
Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
Article 50 obligations become applicable on 2 August 2026. The Act itself entered into force on 1 August 2024, with obligations phased in over the following two years. There is no grace period after the August date. Publishers either have a documented review process and a named editorially responsible person on file, or they label every AI-assisted article.
Five documentation elements you must be able to produce.
The Code of Practice on Marking and Labelling of AI-generated content is the European Commission's operational guide to Article 50. The first draft was published 17 December 2025. The second draft followed on 5 March 2026. The Commission expects the final text in early June 2026, ahead of the August application date. The Code is technically voluntary, but in practice it is the benchmark national market surveillance authorities will use to assess compliance. It fleshes out what "human review" means in evidentiary terms.
The second draft sets out the documentation a signatory should be able to produce. There are five concrete elements.
A written review procedure
A document that demonstrates the AI-generated text underwent human review prior to publication. The procedure describes how content is received, who reviews it, what the review covers, how sign-off happens, and where the record is stored. We produce this as a versioned procedure document, customised to each client's workflow, and we update it whenever the workflow changes.
Identification of the editorially responsible person
The Code requires three sub-elements: name, role, and contact details. The editorially responsible party is the natural or legal person who holds editorial responsibility under Article 50(4). We capture this in a reviewer roster document, refreshed when staffing or designations change, and we keep every prior version for the audit trail.
An overview of organisational measures
The concrete organisational measures allocated to ensure compliance. This covers workflow controls, escalation routes, quality assurance steps, retention policy, and the procedure for responding to a regulator inquiry. We include this as a section of the procedure document so the regulator-facing record sits in one place.
An overview of human resources
The human resources allocated to ensure compliance. The Code expects an overview of the people doing the review work and the time allocated to it. We document the reviewer team in the roster, their qualifications, their tier assignments, and the volume each is rated for.
Demonstrability per article
The signatory should be able to show, for any specific piece of content, who reviewed it, what the review involved, and who held editorial responsibility. We satisfy this with a per-article attestation log, generated automatically from the reviewer's submission, archived in the customer's compliance vault, and indexed for quick retrieval.
Three ways to allocate responsibility under Article 50(4).
Article 50(4) requires that a natural or legal person holds editorial responsibility for the publication of the content. Somebody must be on the regulator-facing record. The Code permits three configurations of who that somebody is. The customer chooses one based on their corporate structure, their editorial leadership, and how much of the compliance burden they want to retain.
Customer holds responsibility
The customer's own employee or designated person remains the editorially responsible party. We provide review services and documentation evidence in support of their compliance posture. The named person on the regulator-facing record is the customer's. This is the default and the cleanest contractual arrangement, included in every standard subscription.
Best fit: publishers with an existing editor-in-chief, head of content, or compliance lead.
Sygil holds responsibility
A named Sygil reviewer is listed as the editorially responsible party under a Premium Editorial Responsibility Addendum. The customer's name is not on the regulator-facing record. The reviewer takes real authority, including authority to refuse publication. Available as a paid add-on with volume limits per responsible party.
Best fit: solo operators, small healthcare publishers, customers with no internal editorial leadership.
Joint or shared responsibility
Both a named person on the customer side and a Sygil reviewer are listed as jointly responsible. Used in regulated sectors where dual sign-off is the norm and in clinical content where both editorial control and subject-matter accuracy are part of the record. Custom contract, custom pricing, entered into only with explicit lawyer review.
Best fit: clinical publishers, regulated finance, pharma communications, multi-party publishing.
The regulation reaches outside the EU.
Article 2(1)(c) of the AI Act extends the regulation to providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union. That clause is the extraterritorial hook. It catches anyone, anywhere, whose AI output is used in the EU. The structure is identical to GDPR Article 3, which has been enforced against US companies since 2018, with multi-billion-euro fines as precedent.
The reasonable interpretation of "used in the Union," supported by GDPR analogy and the AI Act recitals, is that output is in scope when a natural person in the EU consumes the content, when an EU business relies on it for decision-making, when the content is targeted at an EU audience, or when a meaningful share of the audience is in the EU.
The practical scope test is four questions. Do you have EU users? Do you have EU subscribers? Do you have EU enterprise customers? Does your content reach EU readers? If any answer is yes, you are in scope. Most US digital publishers are in scope and most do not realise it.
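The four-question test above reduces to a trivial any-of check. A minimal sketch, assuming our own shorthand field names (they are illustrative, not terms from the Act or the Code):

```python
# Illustrative sketch of the four-question scope test. Any single
# "yes" is enough to put a publisher in scope under Article 2(1)(c).
from dataclasses import dataclass


@dataclass
class Publisher:
    has_eu_users: bool
    has_eu_subscribers: bool
    has_eu_enterprise_customers: bool
    content_reaches_eu_readers: bool


def likely_in_scope(p: Publisher) -> bool:
    """Return True if any of the four scope questions is answered yes."""
    return any([
        p.has_eu_users,
        p.has_eu_subscribers,
        p.has_eu_enterprise_customers,
        p.content_reaches_eu_readers,
    ])


# A US publisher whose content reaches EU readers is in scope,
# even with no EU users, subscribers, or enterprise customers.
print(likely_in_scope(Publisher(False, False, False, True)))  # True
```

The point of the sketch is the asymmetry: one yes puts you in; only four nos keep you out.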
Penalty tiers.
Article 99 establishes the penalty regime for non-compliance with the AI Act. There are three tiers. Article 50 transparency obligations sit in the middle tier under Article 99(4)(g). Each tier's cap is the higher of an absolute amount and a percentage of global annual turnover.
| Tier | Maximum penalty | What triggers it |
|---|---|---|
| Top tier, Article 99(3) | EUR 35,000,000 or 7% of global annual turnover, whichever is higher | Prohibited AI practices under Article 5: social scoring, untargeted facial scraping, emotion recognition in the workplace and education, and other Article 5 categories. |
| Middle tier, Article 99(4)(g) | EUR 15,000,000 or 3% of global annual turnover, whichever is higher | Most operator obligations, including the Article 50 transparency duties for providers and deployers. This is where Article 50(4) sits. |
| Information tier, Article 99(5) | EUR 7,500,000 or 1% of global annual turnover, whichever is higher | Supplying incorrect, incomplete, or misleading information to national market surveillance authorities or notified bodies in response to a request. |
Article 99(6) flips the cap for SMEs, including start-ups: the fine is the lower of the absolute amount and the percentage, not the higher. SME definition under EU Recommendation 2003/361/EC is fewer than 250 employees and either annual turnover up to EUR 50M or balance sheet up to EUR 43M.
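The cap arithmetic, including the Article 99(6) SME flip, reduces to a single max-or-min. A sketch for the middle tier, where Article 50(4) sits; this computes only the statutory cap, since the actual fine within the cap is set by the national authority case by case:

```python
# Article 99(4)(g) middle-tier cap: EUR 15M or 3% of global annual
# turnover, whichever is higher. Article 99(6) flips "higher" to
# "lower" for SMEs and start-ups. Illustrative only, not legal advice.
def middle_tier_cap(global_turnover_eur: float, is_sme: bool) -> float:
    absolute = 15_000_000                      # absolute amount, EUR
    percentage = 0.03 * global_turnover_eur    # 3% of global turnover
    if is_sme:
        return min(absolute, percentage)       # Article 99(6): lower of the two
    return max(absolute, percentage)           # Article 99(4): higher of the two


# A EUR 1B-turnover publisher: 3% (EUR 30M) exceeds the absolute amount.
print(middle_tier_cap(1_000_000_000, is_sme=False))  # 30000000.0
# A EUR 10M-turnover SME: 3% (EUR 300k) is the lower figure, so it caps.
print(middle_tier_cap(10_000_000, is_sme=True))      # 300000.0
```

Note how sharply the SME flip changes exposure: at EUR 10M turnover, the cap drops from EUR 15M to EUR 300,000.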
Article 99(7)(g) treats organisational measures as a mitigating factor in fine calculation. A documented human review procedure is the kind of organisational measure that reduces exposure under that subsection.
In scope, grey zone, out of scope.
Article 50(4) applies to text "published with the purpose of informing the public on matters of public interest." Neither the Act nor the Code defines that phrase. The reading below follows the conservative legal posture: if a reader could rely on the content for substantive decisions, treat it as in scope.
Likely covered
- News and journalism
- Health, medicine, life sciences
- Finance, investment, personal finance
- Legal and regulatory commentary
- Political, policy, public administration
- Climate, energy, environmental reporting
- Public safety, cybersecurity advisories
- Consumer protection content
Reasonable lawyers disagree
- B2B SaaS long-form guides on regulated topics
- Trade association and industry analysis
- Educational content for general readers
- Newsletter content on professional topics
- Product reviews of regulated products
- Pharma communications and medical marketing
- Thought leadership on company accounts
- Podcast transcripts on substantive topics
Probably not covered
- Pure marketing and ad copy
- Ecommerce product descriptions
- UI text and microcopy
- Customer support knowledge bases
- Internal corporate communications
- Personal non-professional messages
- Pure entertainment and lifestyle content
- Internal memos not published to the public
What you receive each month.
Every customer receives the same six-component package. The components map directly to the five documentation elements the Code requires, with a vault layer for retention and retrieval. Nothing in the package is optional. Premium tiers add framework mappings and additional report depth, not new component types.
Editorial review procedure
A versioned document describing the workflow from intake through attestation, the review tiers, escalation routes, retention policy, and the regulator inquiry procedure. Cites Article 50(4) and the current Code text. Updated whenever your workflow changes. Satisfies elements 01 and 03 of the Code.
Named reviewers and editorially responsible party
A document listing every reviewer authorised on your account with name, role, qualifications, contact, and tier. The editorially responsible party is named with role and contact details, per your chosen configuration. Updated when staffing or designations change. Satisfies elements 02 and 04 of the Code.
Configuration A, B, or C declaration
The signed document that fixes which editorial responsibility configuration applies to your account. For Configuration A, this names your designate. For Configuration B or C, this is the Premium Editorial Responsibility Addendum that names the Sygil reviewer who carries the responsibility, the scope, and the duration.
One record per reviewed article
An automatically generated record per article showing identifier, title, author, reviewer, review timestamp, methodology version, and an attestation hash. Satisfies element 05 of the Code, the demonstrability requirement. Available as PDF, JSON, or a structured database export.
Period-level summary for your file
A monthly PDF summarising articles reviewed, reviewer activity, escalations, and the procedure and roster versions active in the period. Designed to slot directly into the customer's compliance file with no further preparation. Five-year retention by default.
Per-customer archive and retrieval
A per-customer archive containing every version of every component, every per-article attestation, every monthly report, and the annual audit-ready bundle. Read access during the contract, read-only access for ninety days after, retrieval available thereafter. Designed to be handed to a regulator on first request.
A sample per-article attestation record:

ARTICLE_ID: 2026-04-A0481
TITLE: Statin therapy in patients over 75
AUTHOR: (in-house, name on file)
REVIEWER: Dr. M. Iqbal, MBBS, internal medicine
REVIEW_TIMESTAMP: 2026-04-07T11:42:18Z
METHODOLOGY_VERSION: ERP-1.2
ATTESTATION_HASH: 7a9f2e8b3c1d05f9
PROCEDURE_DOC_REF: Sygil-PROC-2026-Q2-v3
EDITORIAL_RESP: Configuration A, customer designate on file
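One way a record like the one above could be generated and sealed. This is a minimal sketch under stated assumptions: the field names mirror the sample record, but the hashing scheme (SHA-256 over a canonical JSON serialisation, truncated to 16 hex characters) is our illustrative choice, not a description of Sygil's production implementation:

```python
# Sketch: build a per-article attestation record with a deterministic
# hash. The hash covers the canonical serialisation of the fields, so
# any later edit to the record is detectable on verification.
# The 16-hex-char truncated SHA-256 is an assumption for illustration.
import hashlib
import json


def attestation_record(article_id: str, title: str, reviewer: str,
                       timestamp: str, methodology: str) -> dict:
    fields = {
        "ARTICLE_ID": article_id,
        "TITLE": title,
        "REVIEWER": reviewer,
        "REVIEW_TIMESTAMP": timestamp,
        "METHODOLOGY_VERSION": methodology,
    }
    # sort_keys gives a canonical byte string: same fields, same hash.
    canonical = json.dumps(fields, sort_keys=True).encode("utf-8")
    fields["ATTESTATION_HASH"] = hashlib.sha256(canonical).hexdigest()[:16]
    return fields


rec = attestation_record(
    "2026-04-A0481",
    "Statin therapy in patients over 75",
    "Dr. M. Iqbal, MBBS, internal medicine",
    "2026-04-07T11:42:18Z",
    "ERP-1.2",
)
print(rec["ATTESTATION_HASH"])  # 16 hex characters, stable for identical input
```

Because the hash is computed before it is added to the record, a verifier can strip the `ATTESTATION_HASH` field, re-serialise, and recompute to confirm the record has not been altered since review.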
Article 50(4) is the spine. We also map to:
The same human review work supports parallel evidence under several adjacent frameworks. We do not certify against any of them. We document where our procedure already satisfies the relevant clauses, so your compliance team can use the same evidence in more than one regulator-facing or procurement-facing file.
US risk management framework
Our procedure document and reviewer roster map to the Govern 1.4 and Manage 4.1 sub-functions, the human oversight and content review controls. Useful for US enterprise procurement reviews that ask for NIST alignment.
AI Management System Standard
Our procedure clauses map to ISO 42001 clauses 8.3 through 8.7, the operational planning and human oversight controls. Useful for enterprise customers preparing for ISO 42001 certification through an accredited body.
AI Transparency Act
Effective 1 January 2026. Our per-article attestation provides the named-human review record that complements the Act's disclosure requirements for covered providers and downstream publishers operating in California.
Consumer protections in high-risk AI
Effective 1 February 2026. Where AI-assisted content informs consequential consumer decisions, our human review evidence supports the reasonable care defence under the Act's developer and deployer duties.
US deceptive practices
FTC enforcement guidance treats undisclosed AI content with material claims as a potential Section 5 deceptive practice. Our review evidence supports the substantiation file your counsel maintains for claim review.
Meaningful human involvement
Where AI-assisted content informs automated decisions about individuals, our review evidence supports the meaningful human involvement standard that allows the Article 22 exception to apply.
What we do not promise.
This section is the most credible thing on this page. Read it before you decide we are the right vendor. If any of these statements is a problem for your use case, the answer is to talk to a law firm, not to subscribe to us.
- We are not a law firm. We do not provide legal advice. Nothing on this site, in our procedure documents, or in our attestations constitutes a legal opinion. For binding interpretation of the AI Act, consult qualified EU counsel.
- We do not guarantee any regulator's reaction. We document the human review and editorial control in line with the Code of Practice. Whether a particular national market surveillance authority accepts the documentation in any specific case is a legal evaluation that belongs to your counsel.
- We do not promise your content is accurate, fair, or free of bias. We document that a credentialed human reviewed it under our procedure. Substantive accuracy of claims is the publisher's responsibility, not ours.
- We do not certify content as "EU AI Act compliant." No vendor can lawfully promise that. Compliance is determined by national authorities, not by service providers. We provide the evidence file that supports your compliance posture.
- We are not a substitute for your editorial team. If you already have an editor-in-chief and a documented review process, we are the documentation layer that turns your existing work into regulator-ready evidence. If you do not, we can provide reviewers, but the content responsibility framework is still yours unless you opt into Configuration B or C.
- We do not handle deepfakes. Article 50(4) first paragraph applies to image, audio, and video deepfakes. The human review exemption does not apply to that paragraph. We serve text only.
- We do not provide certification under ISO 42001 or any other standard. Only accredited certification bodies can issue those. Our framework mapping documents are evidence you can present in your own certification or audit work, not substitutes for it.
- We do not retain content beyond contract terms. Customer content is held under the retention policy in your procedure document. We do not train models on it, license it, or use it for any purpose outside the review work itself.
Want to walk through this with us?
Fifteen minutes on a call. We will look at your content categories, your current review workflow if any, and the configuration that fits. No proposal, no slide deck, no obligation. Bring your compliance lead if you have one.