Provider
Develops, or commissions the development of, an AI system and places it on the market under its own brand.
The broadest scope of obligations and documentation.
We help you deploy AI in line with the law, protecting the business, the board, and the pace of product development.
The AI Act has extraterritorial effect: if an AI system operates on the EU market or its outputs are used in the EU, the regulation may apply regardless of where the entity is established.
Deployer
Uses an AI system in a professional capacity and is responsible for how it is operated.
Key duties: human oversight, monitoring, and a fundamental rights impact assessment (FRIA) where required.
Importer / Distributor
Brings AI systems from outside the EU into the Union or distributes them further.
Requires supply-chain controls and manufacturer documentation.
Modifier
Adapts an AI model to a specific business use case.
In practice, it may take on the status of provider of a new system.
If your AI system profiles natural persons and falls within Annex III, it is automatically high-risk. You cannot rely on the Article 6(3) exception, which softens the obligations for other Annex III systems.
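This classification rule can be sketched as a simple decision check. The sketch below is illustrative only, not legal advice; the function and field names are our own assumptions, and the real legal test requires a documented, case-by-case analysis.

```python
# Minimal sketch of the Annex III / profiling rule (illustrative only).
# Field names simplify the legal test; they are assumptions, not the statute.

def classify_risk(in_annex_iii: bool,
                  profiles_natural_persons: bool,
                  narrow_procedural_task: bool) -> str:
    """Return a coarse risk label for an AI system."""
    if in_annex_iii and profiles_natural_persons:
        # Profiling + Annex III: automatically high-risk;
        # the Article 6(3) carve-out is unavailable.
        return "high-risk"
    if in_annex_iii and narrow_procedural_task:
        # Article 6(3): a narrow, procedural role may lift the
        # high-risk presumption (documented justification required).
        return "not high-risk (Article 6(3))"
    if in_annex_iii:
        return "high-risk"
    return "outside Annex III (check Article 5 and transparency duties)"

# Profiling blocks the carve-out even for a narrow procedural task:
print(classify_risk(True, True, True))  # high-risk
```

The key point the sketch encodes: the Article 6(3) branch is only reachable when the system does not profile natural persons.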
Unacceptable risk
Prohibited practices (e.g. social scoring, subliminal manipulation, certain biometric uses).
Consequence: immediate withdrawal and the highest financial penalties.
High risk
Systems that affect fundamental rights, safety, or significant decisions about individuals.
Consequence: the full set of requirements under Articles 8-17 and a conformity assessment.
Limited risk (transparency)
Chatbots, deepfakes, generative systems, and other transparency-related areas.
Consequence: disclosure obligations and labelling of AI-generated content.
Minimal risk
No dedicated sectoral obligations beyond AI literacy and good governance practices.
Consequence: basic control rules and user training.
Legal status / content as of: 19 February 2026.
Classification drives the budget, the implementation pace, and the organisation's level of liability. The biggest cost is misclassifying a system.
We identify practices that must be discontinued to avoid a direct breach of Article 5.
We map the system to the requirements of Articles 8-17: conformity assessment, logging, human oversight, and a quality management system.
We design transparency for chatbots, synthetic content, and interfaces that interact with users.
We build a lightweight governance model: an AI register, usage rules, and a mechanism for reporting incidents and unauthorised use of AI.
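A lightweight AI register can start as a set of structured records. The sketch below shows one possible shape; the field names and categories are our own assumptions, not a schema prescribed by the regulation.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative AI register entry; fields are assumptions, not a mandated schema.
@dataclass
class AIRegisterEntry:
    system_name: str
    regulatory_role: str   # e.g. "provider", "deployer", "importer", "distributor"
    risk_category: str     # e.g. "prohibited", "high-risk", "transparency", "minimal"
    owner: str             # accountable business owner
    use_cases: list[str] = field(default_factory=list)
    last_reviewed: date = date(2026, 2, 19)

# A register is then simply a list of such entries, reviewed on a schedule.
register = [
    AIRegisterEntry("support-chatbot", "deployer", "transparency",
                    "Customer Service", ["first-line customer queries"]),
]
print(register[0].risk_category)  # transparency
```

Even this minimal structure supports the governance tasks above: every system has a named owner, an assigned regulatory role, and a risk category that drives its obligations.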
Users must know they are interacting with AI.
AI-generated content requires clear labelling and publication rules.
For AI-generated content, we prepare technical and operational labelling standards.
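An operational labelling standard can be as simple as attaching a machine-readable disclosure to each generated item. The sketch below is our own illustration; the envelope fields and the model name are assumptions, and a real deployment would follow the labelling specification chosen for the organisation.

```python
import json
from datetime import datetime, timezone

# Sketch: wrap AI-generated content in a machine-readable disclosure envelope.
# The label fields are illustrative assumptions, not a standardised format.
def label_ai_content(content: str, model_name: str) -> str:
    envelope = {
        "content": content,
        "ai_generated": True,                       # user-facing disclosure flag
        "generator": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)

labelled = label_ai_content("Draft product description", "example-model")
print(json.loads(labelled)["ai_generated"])  # True
```

The operational rule that matters is that the label travels with the content, so downstream publication channels can surface the disclosure automatically.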
We work in 10 stages. Each one ends with a document you can show to the regulator, the auditor, and the board.
We check whether the organisation falls under the AI Act and in what role — we inventory AI systems, qualify their use cases, and assign regulatory roles.
Deliverables: an AI systems register and an organisational qualification sheet.
We determine the risk level of each AI system and eliminate prohibited practices — we analyse Annex III, review Article 5, classify profiling, and map transparency obligations.
Deliverables: a classification opinion, a risk-category matrix, and a GPAI status report.
We build compliance for high-risk systems — implementing risk management, data governance, technical documentation, human oversight, cybersecurity, and conformity assessment.
Deliverables: a high-risk compliance pack and an EU declaration pathway.
We implement obligations for general-purpose AI models (GPAI) and their integrators: documentation under Annexes XI/XII, a copyright policy, a training-data summary, and, where there is systemic risk, also evaluations and incident reporting.
Deliverable: a GPAI compliance pack.
We implement the required disclosures for users and recipients of AI content — designing notices for chatbots and deepfakes, the AI content labelling standard, and publication procedures.
Deliverables: a transparency-notice pack and a content-labelling specification.
We set up internal accountability and the AI governance model — we appoint an AI officer or committee, draft usage and procurement policies, vet vendors, and, where required by law, prepare a fundamental rights impact assessment (FRIA).
Deliverables: an AI governance model and an impact assessment report (FRIA).
We link the AI Act with existing sectoral and cybersecurity obligations — connecting it with GDPR, DORA, NIS2, MiCA/PSD2/AML, and CRA/DSA, and consolidating everything into a single compliance task list.
Deliverables: a regulatory matrix and a compliance maintenance plan.
We build team competencies in line with Article 4 of the AI Act — preparing a training matrix for the board, compliance, the technical team, HR, sales, and the whole organisation.
Deliverables: an AI competency programme, a training register, and certificates.
We maintain compliance once the system is live in production — running continuous monitoring, an incident reporting procedure, quarterly reports, and change impact assessments.
Deliverables: a post-deployment monitoring plan and an incident reporting procedure.
We move from a list of obligations to a lasting advantage in sales and partner due diligence — preparing audit readiness, an audit plan, certification support, and materials that build trust in AI.
Deliverables: an annual AI audit report and a compliance evidence pack.
When planning the budget and priorities, specific dates matter. Below are the milestones that most often drive the order of the implementation project.
2 February 2025: start of the ban on prohibited practices (Article 5) and of the AI literacy obligation (Article 4).
2 August 2025: obligations for GPAI providers take effect, and supervisory mechanisms become operational.
2 August 2026 (key date): broad obligations for high-risk and transparency systems, plus full enforcement of the rules.
2 August 2027: further obligations for high-risk systems under Annex I and the close-out of transitional provisions for general-purpose AI models (GPAI).
31 December 2030: final deadline for selected large-scale AI systems listed in Annex X.
A single coherent implementation covering several regulations is cheaper and safer than running separate compliance projects.
We combine the AI risk assessment with the data protection impact assessment (DPIA), profiling, and Article 22 GDPR, so that privacy protection does not conflict with AI governance.
We plug AI systems into IT risk management, resilience testing, and operational continuity plans.
We synchronise cybersecurity, supply-chain management, and incident reporting.
For fintechs, we classify AI in customer risk assessment, anti-money laundering (AML/KYC), and fraud detection processes as potentially high-risk.
We organise product cybersecurity requirements and algorithmic transparency across platform channels.
Effective AI Act implementation requires a permanent decision-making structure, operational policies, and a training plan covering the entire organisation.
If you already have policies in place, we align them with a single operating model and organise the documentation for regulator review.
Book a project qualification call
In the first call we'll determine where you stand: diagnosis, implementation for high-risk systems, or post-launch compliance maintenance.
Tell us whether you need an AI Act diagnosis, an implementation project for high-risk systems, or a monitoring and annual-audit model.