
EU AI Act Compliance Checklist & FAQ

A comprehensive FAQ and checklist regarding the EU AI Act, high-risk systems, general-purpose models, and critical enforcement dates.

Practical Compliance for Enterprise AI

Navigating the EU AI Act requires an understanding of your organization's role (Provider vs. Deployer) and the classification of your AI systems. This FAQ breaks down the critical compliance mechanisms every European enterprise must implement.

Conclusion

The AI Act transforms AI from a purely technical engineering discipline into a highly regulated compliance operation. Partnering with a specialized execution platform streamlines the most difficult technical mandates of the Act.

Frequently asked questions

When is the final deadline for EU AI Act compliance?

The rules for General Purpose AI (GPAI) became enforceable in August 2025. The most critical deadline for enterprises operating 'High-Risk' AI systems (under [Annex III](https://artificialintelligenceact.eu/annex/3/)) is **August 2026**; high-risk systems embedded in products regulated under Annex I have until August 2027. Non-compliance after these dates carries severe financial penalties.

What are the penalties for violating the EU AI Act?

Fines can reach up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for prohibited AI practices. Non-compliance with obligations for high-risk AI systems can result in fines up to €15 million or 3% of global turnover.
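
To illustrate how the 'whichever is higher' cap works in practice, here is a minimal sketch (the turnover figure is hypothetical):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct_cap: float) -> float:
    """Return the fine ceiling: the higher of a fixed amount or a
    percentage of total worldwide annual turnover."""
    return max(fixed_cap, turnover_eur * pct_cap)

# Hypothetical enterprise with €2 billion in worldwide annual turnover
turnover = 2_000_000_000

# Prohibited-practice tier: up to €35M or 7% of turnover, whichever is higher
print(max_fine(turnover, 35_000_000, 0.07))  # 140000000.0 -> €140M ceiling

# High-risk-obligation tier: up to €15M or 3% of turnover
print(max_fine(turnover, 15_000_000, 0.03))  # 60000000.0 -> €60M ceiling
```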

Am I an AI 'Provider' or an AI 'Deployer'?

A Provider develops an AI system to place it on the market under its own name. A Deployer is an organization using an AI system under its authority in a professional context. Using an HR AI tool makes you a Deployer; building the HR tool to sell makes you a Provider.

What makes an AI system 'High-Risk'?

The Act classifies an AI system as high-risk when it determines significant outcomes in the domains listed in Annex III. These include critical infrastructure management (water, gas, electricity), employment and HR (CV screening), access to essential public services, and law enforcement.
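
Teams inventorying their AI estate can automate a first-pass triage. The sketch below is a simplified illustration: the domain set only paraphrases part of Annex III, and `likely_high_risk` is a hypothetical helper, not a legal determination.

```python
# Simplified triage, not a legal determination: the domain set below only
# paraphrases part of Annex III; confirm any classification with counsel.
ANNEX_III_DOMAINS = {
    "critical_infrastructure",  # water, gas, electricity management
    "employment",               # CV screening, promotion decisions
    "essential_services",       # credit scoring, public benefits
    "law_enforcement",
}

def likely_high_risk(domain: str, determines_outcome: bool) -> bool:
    """Flag a system for deeper legal review if it operates in an
    Annex III domain and materially determines the outcome."""
    return domain in ANNEX_III_DOMAINS and determines_outcome

print(likely_high_risk("employment", determines_outcome=True))  # True
print(likely_high_risk("marketing", determines_outcome=True))   # False
```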

What AI systems are entirely prohibited (Unacceptable Risk)?

The AI Act bans systems involving subliminal manipulation, social scoring, untargeted scraping of facial images from the internet or CCTV footage, and real-time remote biometric identification in publicly accessible spaces (permitted only under narrow law-enforcement exceptions).

Do 'Low-Risk' AI systems have any obligations?

AI systems posing limited risk (like general-purpose chatbots or deepfakes) primarily face transparency obligations. Users must be explicitly informed that they are interacting with an AI system or viewing AI-generated content.
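
In practice, the chatbot disclosure can be enforced at the application layer. Below is a minimal sketch, assuming a hypothetical `generate_answer` model call:

```python
AI_DISCLOSURE = "You are chatting with an AI system; responses are AI-generated."

def generate_answer(message: str) -> str:
    # Placeholder standing in for your actual model call
    return f"(model output for: {message})"

def respond(user_message: str, first_turn: bool) -> str:
    """Wrap the model call so the disclosure is always surfaced
    at the start of a conversation."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(respond("What are your opening hours?", first_turn=True))
```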

What is required for High-Risk system compliance?

Providers of high-risk systems must establish a Quality Management System (QMS), create comprehensive Technical Documentation, maintain [automated system event logs (Article 12)](https://artificialintelligenceact.eu/article/12/), ensure [Human Oversight (Article 14)](https://artificialintelligenceact.eu/article/14/), and pass a Conformity Assessment before placing the system on the market. Deployers must then operate the system according to its instructions for use, assign trained human overseers, and retain the logs it generates.
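
These duties lend themselves to programmatic tracking. The sketch below models them as a simple checklist; the field names are illustrative, not official Act terminology:

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Tracks the core provider duties named above.
    Field names are illustrative, not official Act terminology."""
    quality_management_system: bool = False
    technical_documentation: bool = False
    automated_event_logging: bool = False   # Article 12
    human_oversight_measures: bool = False  # Article 14
    conformity_assessment: bool = False

    def ready_for_market(self) -> bool:
        return all(vars(self).values())

checklist = HighRiskChecklist(quality_management_system=True)
print(checklist.ready_for_market())  # False: four items still outstanding
```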

What is a 'Conformity Assessment'?

A conformity assessment is a rigorous audit verifying that the AI system meets all the requirements of the AI Act; a successful assessment allows the provider to affix a CE marking. Most high-risk systems qualify for internal self-assessment, while certain biometric systems require assessment by an independent 'notified body'.

Do we need an AI-specific logging system?

Yes. High-risk systems must automatically record events throughout their lifecycle. This includes capturing the input data, the model version involved, the period of each use, and the resulting outputs, ensuring post-market traceability in the event of an algorithm-induced failure.
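
A minimal sketch of such an append-only audit log is shown below; the field names and JSON Lines format are our assumptions, not a schema mandated by Article 12:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log_path: str, model_version: str, inputs: dict, outputs: dict) -> None:
    """Append one timestamped record per inference so failures can be
    traced back after deployment (cf. Article 12)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "inputs": inputs,
        "outputs": outputs,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_inference("audit.jsonl", "cv-screener-1.4.2", {"cv_id": "12345"}, {"score": 0.82})
```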

How does the AI Act affect General Purpose AI (GPAI)?

Providers of GPAI models (like OpenAI or Mistral) must publish detailed technical documentation, respect EU copyright law during training, and publish a sufficiently detailed summary of the training data. GPAI models with 'systemic risk' face even stricter adversarial testing and incident-reporting mandates.

Are AI systems used purely for internal R&D exempt?

Yes, the AI Act generally exempts AI systems developed and put into service for the sole purpose of scientific research and development, provided they are not placed on the market or put into service for other purposes; testing in real-world conditions falls outside this exemption.

Is open-source AI exempt from the Act?

Partially. Many obligations do not apply to open-source models *unless* those models are placed on the market as high-risk systems, or they qualify as a GPAI model with systemic risk. The deployment context matters more than the licensing model.

Do we need a Fundamental Rights Impact Assessment (FRIA)?

Yes, deployers that are bodies governed by public law (e.g., municipalities, hospitals) or private entities providing essential public services must conduct a FRIA before deploying a high-risk AI system.

How does human oversight work in practice?

The system must be designed so that the natural persons overseeing it can properly understand its capabilities and limitations, intervene in its operation, and safely interrupt it via a 'stop' control if an anomaly is detected.
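
As a minimal sketch of what such an intervention point can look like in code (the gate and its operator commands are hypothetical):

```python
class OperatorStop(Exception):
    """Raised when the human operator uses the 'stop' control."""

def gated_decision(ai_recommendation: dict, operator_input: str) -> dict:
    """Require an explicit human action before an AI recommendation
    takes effect; 'stop' safely interrupts the pipeline."""
    if operator_input == "stop":
        raise OperatorStop("Operator interrupted the system")
    if operator_input == "approve":
        return ai_recommendation
    # Anything else routes the case to manual review instead of acting
    return {"decision": "escalated_for_manual_review"}

print(gated_decision({"decision": "reject_application", "score": 0.91}, "approve"))
```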

Can a cloud provider guarantee AI Act compliance?

A cloud provider alone cannot guarantee complete compliance, as much relies on your internal corporate processes (the Deployer). However, a platform like NeuroCluster structurally enforces the necessary logging, isolation, and human-in-the-loop technical capabilities required to pass the audit.

Still have questions?

Talk to our team — we work with regulated organisations daily.

Get in touch →