AI Act for businesses

We help you deploy AI in line with the law, protecting the business, the board, and the pace of product development.


Brands we've worked with

1koszyk
Adresowo
AnyPark
Atomstore
Autopay
Baselinker
BMG Goworowski
Booksy
Booste
Bratna
Cashbene
Codility
DPay
EasySend
Fenalabs
Fenige
FiberPay
Happy Birds

AI Act scope: roles, risk, and penalties

The AI Act has extraterritorial effect: if an AI system operates on the EU market or its outputs are used in the EU, the regulation may apply regardless of where the entity is established.

Provider

Develops, or commissions the development of, an AI system and places it on the market under its own brand.

The broadest scope of obligations and documentation.

Deployer

Uses an AI system in a professional capacity and is responsible for how it is operated.

Key duties: human oversight, monitoring, and a fundamental rights impact assessment (FRIA) where required.

Importer / Distributor

Brings AI systems from outside the EU into the Union or distributes them further.

Requires supply-chain controls and manufacturer documentation.

Fine-tuner

Adapts an AI model to a specific business use case.

In practice, it may take on the status of provider of a new system.

If your AI system profiles natural persons and falls within Annex III, it is automatically high-risk: you cannot rely on the exception in Article 6(3), which softens the obligations for other Annex III systems.
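The rule above can be sketched as a small decision function. This is an illustrative simplification under our own assumptions — the parameter names (`annex_iii_area`, `uses_profiling`, `art_6_3_exception`) are ours, not the Act's, and a real classification always needs legal analysis.

```python
# Hypothetical sketch of the Annex III / profiling rule; not legal advice.

def classify(annex_iii_area: bool, uses_profiling: bool,
             art_6_3_exception: bool) -> str:
    """Return 'high-risk' or 'not high-risk' for a candidate use case."""
    if not annex_iii_area:
        return "not high-risk"   # outside Annex III, other rules apply
    if uses_profiling:
        return "high-risk"       # profiling blocks the Art. 6(3) carve-out
    if art_6_3_exception:
        return "not high-risk"   # e.g. a narrow preparatory or procedural task
    return "high-risk"

# A profiling system in an Annex III area stays high-risk even if it
# would otherwise qualify for the exception:
print(classify(annex_iii_area=True, uses_profiling=True,
               art_6_3_exception=True))
```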

Prohibited

Prohibited practices (e.g. social scoring, subliminal manipulation, certain biometric uses).

Consequence: immediate withdrawal and the highest financial penalties.

High-risk

Systems that affect fundamental rights, safety, or significant decisions about individuals.

Consequence: the full set of requirements under Articles 8-17 and a conformity assessment.

Limited-risk

Chatbots, deepfakes, generative systems, and other transparency-related areas.

Consequence: disclosure obligations and labelling of AI-generated content.

Minimal-risk

No dedicated sectoral obligations beyond AI competencies and good governance practices.

Consequence: basic control rules and user training.

Penalties: the risk borne by the board and the company

  • Up to EUR 35 million or 7% of global annual turnover (whichever is higher) - for prohibited practices.
  • Up to EUR 15 million or 3% of global annual turnover - for breaches of high-risk system requirements and of transparency obligations.
  • Up to EUR 7.5 million or 1% of global annual turnover - for supplying incorrect, incomplete, or misleading information to authorities.
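For undertakings, the Act caps each fine at the fixed amount or the turnover percentage, whichever is higher. A minimal sketch of that upper bound, with illustrative figures:

```python
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Upper bound of the fine: fixed amount or turnover share, whichever is higher."""
    return max(fixed_eur, pct * global_turnover_eur)

# Prohibited-practice tier for a hypothetical company with EUR 1bn turnover:
# 7% of 1bn (70m) exceeds the 35m floor, so the cap is EUR 70m.
print(max_fine(35_000_000, 0.07, 1_000_000_000))
```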

Legal status / content as of: 19 February 2026.

Risk classification and GPAI

Classification drives the budget, the implementation pace, and the organisation's level of liability. The biggest cost is misclassifying a system.

Prohibited

We identify practices that must be discontinued to avoid a direct breach of Article 5.

High-risk

We map the system to the requirements of Articles 8-17: conformity assessment, logging, human oversight, and quality control.

Limited-risk

We design transparency for chatbots, synthetic content, and interfaces that interact with users.

Minimal-risk

We build a lightweight governance model: an AI register, usage rules, and a mechanism for reporting incidents and unauthorised use of AI.

Transparency checklist

Chatbot

Users must know they are interacting with AI.

Deepfake / synthetic content

The content requires clear labelling and publication rules.

Labelling of AI-generated content

For AI-generated content, we prepare technical and operational labelling standards.

AI Act implementation: stages 0-9

We work in 10 stages. Each one ends with a document you can show to the regulator, the auditor, and the board.

Stage 0

Applicability and roles

We check whether the organisation falls under the AI Act and in what role — we inventory AI systems, qualify their use cases, and assign regulatory roles.

Deliverables: an AI systems register and an organisational qualification sheet.
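The AI systems register from Stage 0 can be as simple as one structured record per system. A minimal sketch, assuming our own field names (the AI Act prescribes no particular register format):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemEntry:
    """One row of an AI systems register; fields are illustrative assumptions."""
    name: str
    use_case: str
    role: str             # "provider", "deployer", "importer", "distributor", "fine-tuner"
    risk_category: str    # "prohibited", "high-risk", "limited-risk", "minimal-risk"
    vendor: str = ""
    notes: list[str] = field(default_factory=list)

register = [
    AISystemEntry("support-chatbot", "customer service", "deployer", "limited-risk"),
]
print(register[0].risk_category)
```

In practice the same schema feeds the Stage 1 risk-category matrix, so the register and the classification stay in sync.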

Stage 1

Classification and prohibited practices

We determine the risk level of each AI system and eliminate prohibited practices — we analyse Annex III, review Article 5, classify profiling, and map transparency obligations.

Deliverables: a classification opinion, a risk-category matrix, and a GPAI status report.

Stage 2

High-risk obligations

We build compliance for high-risk systems — implementing risk management, data governance, technical documentation, human oversight, cybersecurity, and conformity assessment.

Deliverables: a high-risk compliance pack and an EU declaration pathway.

Stage 3

GPAI

We implement obligations for general-purpose AI models (GPAI) and their integrators: documentation under Annexes XI/XII, a copyright policy, a training-data summary, and, where there is systemic risk, also evaluations and incident reporting.

Deliverable: a GPAI compliance pack.

Stage 4

Transparency

We implement the required disclosures for users and recipients of AI content — designing notices for chatbots and deepfakes, the AI content labelling standard, and publication procedures.

Deliverables: a transparency-notice pack and a content-labelling specification.

Stage 5

AI governance and impact assessment

We set up internal accountability and the AI governance model — we appoint an AI officer or committee, draft usage and procurement policies, vet vendors, and, where required by law, prepare a fundamental rights impact assessment (FRIA).

Deliverables: an AI governance model and an impact assessment report (FRIA).

Stage 6

Regulatory integration

We link the AI Act with existing sectoral and cybersecurity obligations — connecting it with GDPR, DORA, NIS2, MiCA/PSD2/AML, and CRA/DSA, and consolidating everything into a single compliance task list.

Deliverables: a regulatory matrix and a compliance maintenance plan.

Stage 7

AI literacy

We build team competencies in line with Article 4 of the AI Act — preparing a training matrix for the board, compliance, the technical team, HR, sales, and the whole organisation.

Deliverables: an AI competency programme, a training register, and certificates.

Stage 8

Post-deployment monitoring and incidents

We maintain compliance once the system is live in production — running continuous monitoring, an incident reporting procedure, quarterly reports, and change impact assessments.

Deliverables: a post-deployment monitoring plan and an incident reporting procedure.

Stage 9

Audit and market advantage

We move from a list of obligations to a lasting advantage in sales and partner due diligence — preparing audit readiness, an audit plan, certification support, and materials that build trust in AI.

Deliverables: an annual AI audit report and a compliance evidence pack.

AI Act timeline

When planning the budget and priorities, specific dates matter. Below are the milestones that most often drive the order of the implementation project.

2 February 2025

Start of the ban on prohibited practices (Article 5) and of the AI literacy obligation (Article 4).

2 August 2025

Obligations for GPAI providers take effect, and supervisory mechanisms become operational.

2 August 2026

Key date: broad obligations for high-risk systems and transparency requirements take effect, along with full enforcement of the rules.

2 August 2027

Further obligations for high-risk systems under Annex I and the close-out of transitional provisions for general-purpose AI models (GPAI).

31 December 2030

Final compliance deadline for AI systems that are components of the large-scale IT systems listed in Annex X.


Integrating the AI Act with other regulations

A single coherent implementation covering several regulations is cheaper and safer than running separate compliance projects.

AI Act + GDPR

We combine the AI risk assessment with the data protection impact assessment (DPIA), profiling, and Article 22 GDPR, so that privacy protection does not conflict with AI governance.

AI Act + DORA

We plug AI systems into IT risk management, resilience testing, and operational continuity plans.

AI Act + NIS2

We synchronise cybersecurity, supply-chain management, and incident reporting.

AI Act + MiCA / PSD2 / AML

For fintechs, we classify AI in customer risk assessment, anti-money laundering (AML/KYC), and fraud detection processes as potentially high-risk.

AI Act + CRA / DSA

We organise product cybersecurity requirements and algorithmic transparency across platform channels.

AI governance and team competencies

Effective AI Act implementation requires a permanent decision-making structure, operational policies, and a training plan covering the entire organisation.

Accountability structure

  • AI officer or committee at board level
  • Clear division of responsibility for product, legal, and security decisions
  • Regular AI risk reporting to the board

Policy package

  • AI governance policy
  • Acceptable AI use policy
  • AI procurement policy
  • AI vendor assessment model

AI vendor assessment

  1. Assessment of the vendor's role and AI Act responsibility split
  2. Verification of technical documentation, logging, and human oversight
  3. Contractual review (SLA, intellectual property, liability, incident clauses)
  4. Go/no-go decision and post-deployment monitoring

If you already have policies in place, we align them with a single operating model and organise the documentation for regulator review.

Book a project qualification call

Get in touch about the AI Act

In the first call we'll determine where you stand: diagnosis, implementation for high-risk systems, or post-launch compliance maintenance.

AI Act services are led by:

Tomasz Klecor


Managing Partner

FinTech navigator. Lawyer.

+48 797 711 924
info@legalgeek.pl

Describe your project stage

Tell us whether you need an AI Act diagnosis, an implementation project for high-risk systems, or a monitoring and annual-audit model.

Your data will be processed in accordance with our privacy policy.