The EU AI Act entered into force in August 2024. For HealthTech startups building AI for clinical use, it is not an abstract regulatory development — it is a concrete set of obligations that will determine whether your product can reach the European market.
What Changed with the EU AI Act
The EU AI Act is the world's first comprehensive legal framework specifically governing artificial intelligence. Unlike the GDPR, which governs personal data, the AI Act regulates the systems themselves: their design, validation, deployment, and monitoring. It introduces a risk-based tier system and applies some of its most demanding requirements to AI used in healthcare.
The Act applies in stages: the bans on prohibited AI practices take effect from February 2025, and most high-risk obligations from August 2026. For startups building now, the clock is already running: CE marking for an AI system takes time, and investors and hospital procurement officers are already asking about AI Act readiness.
Risk Classification for Medical AI
The AI Act categorises AI systems into four risk tiers: unacceptable risk (banned), high risk, limited risk, and minimal risk. Most clinical AI falls into the high-risk tier by default. Between Annex III and the medical-device route described below, the high-risk category captures AI systems used for:
- Triage and clinical decision support that influences patient management decisions
- Prognosis and diagnosis of disease when the output is intended to guide treatment
- Monitoring of patient vital parameters where errors could cause physical harm
Importantly, if your AI system is regulated as a medical device under the EU MDR (Regulation 2017/745) and requires a notified body conformity assessment (as virtually all clinical software does under MDR's classification rules), it is classified as high-risk AI as well. The two frameworks interact but do not cancel each other out: you need to satisfy both.
"The AI Act doesn't replace MDR — it layers on top of it. Healthcare AI startups need a compliance strategy that addresses both simultaneously."
— Lorenzo Nitoglia, CFO, Bilobe
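To make the classification logic concrete, here is a minimal sketch in Python of how a team might encode it as a first-pass screen. The profile fields and rules are illustrative simplifications of the Annex III healthcare uses and the MDR route, not an official checklist; any real classification decision needs regulatory review.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Hypothetical profile of a clinical AI system; field names are
    illustrative, not taken from the AI Act itself."""
    is_mdr_device_with_notified_body: bool  # MDR 2017/745 route via Article 6(1)
    influences_patient_management: bool     # triage / clinical decision support
    guides_treatment_decisions: bool        # prognosis / diagnosis outputs
    monitors_vital_parameters: bool         # monitoring where errors cause harm


def is_high_risk(profile: AISystemProfile) -> bool:
    """Rough first-pass screen for high-risk status.

    An MDR device that requires a notified body assessment is high-risk
    automatically; otherwise we check the healthcare use cases above.
    """
    if profile.is_mdr_device_with_notified_body:
        return True
    return any([
        profile.influences_patient_management,
        profile.guides_treatment_decisions,
        profile.monitors_vital_parameters,
    ])


# Example: a standalone triage assistant still screens as high-risk.
print(is_high_risk(AISystemProfile(False, True, False, False)))  # True
```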
High-Risk AI Obligations
Being classified as high-risk triggers a substantial set of requirements. The key ones for healthcare AI are:
- Risk management system: A documented, iterative process covering the entire lifecycle — design, development, deployment, and post-market monitoring.
- Data governance: Training, validation, and test data must be subject to documented governance practices. Bias identification and mitigation must be evidenced (a minimal sketch of one such check follows this list).
- Technical documentation: Comprehensive docs covering system purpose, architecture, training methodology, performance metrics, and known limitations.
- Transparency and instructions for use: Users must receive clear information about the system's capabilities, limitations, and appropriate use cases.
- Human oversight: The system must be designed to allow effective human review, correction, and override of its outputs.
- Accuracy, robustness and cybersecurity: Systems must achieve appropriate levels of accuracy across the intended population, be robust to adversarial inputs, and meet cybersecurity standards.
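As an example of what "evidenced" bias identification and population-level accuracy can look like in an audit trail, here is a minimal sketch of a subgroup performance report. The column names (`y_true`, `y_pred`) and grouping variables (`age_band`, `sex`) are hypothetical placeholders for whatever schema your own evaluation data uses.

```python
import pandas as pd
from sklearn.metrics import precision_score, recall_score


def subgroup_report(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Report sensitivity and precision per demographic subgroup, so that
    performance gaps across the intended population are visible and
    documentable rather than hidden inside an aggregate metric."""
    rows = []
    for col in group_cols:
        for value, sub in df.groupby(col):
            rows.append({
                "group": f"{col}={value}",
                "n": len(sub),
                "sensitivity": recall_score(sub["y_true"], sub["y_pred"]),
                # zero_division=0 avoids errors for groups with no positives
                "precision": precision_score(
                    sub["y_true"], sub["y_pred"], zero_division=0
                ),
            })
    return pd.DataFrame(rows)


# Example usage: report = subgroup_report(eval_df, ["age_band", "sex"])
```

A table like this, versioned alongside each model release, is one concrete artefact that can support both the data governance and accuracy obligations.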
Technical Documentation Requirements
One of the most demanding obligations is maintaining a technical file equivalent to what MDR already requires, but extended to cover AI-specific aspects. For a clinical decision support system, this means documenting not just what the software does, but how the model was trained, on what data, with what evaluation methodology, and with what performance guarantees across relevant subgroups.
This is where many startups are caught underprepared. If your model was trained on a public dataset and fine-tuned internally without proper documentation of the data governance process, you will struggle to produce a compliant technical file. Starting this documentation from the first day of development is far less painful than reconstructing it post-hoc.
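One pragmatic way to start on day one is to keep a machine-readable provenance record for every dataset that touches the model. The schema below is a hypothetical illustration, not an official AI Act or MDR template:

```python
import json
from dataclasses import asdict, dataclass, field


@dataclass
class DatasetRecord:
    """Illustrative provenance entry for one dataset used in training."""
    name: str
    source: str              # e.g. public registry URL or internal study ID
    licence: str
    collection_period: str
    known_biases: list[str] = field(default_factory=list)
    preprocessing: list[str] = field(default_factory=list)


record = DatasetRecord(
    name="chest-xray-pretraining",
    source="public dataset (placeholder)",
    licence="CC-BY-4.0",
    collection_period="2015-2019",
    known_biases=["single-country cohort", "adult patients only"],
    preprocessing=["resize to 224x224", "exclude lateral views"],
)

# Append one JSON line per dataset as it enters the pipeline.
with open("data_provenance.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```

Appending one record per dataset as it enters the pipeline turns the eventual technical file into an exercise in assembly rather than archaeology.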
Conformity Assessment Pathways
High-risk AI systems must undergo a conformity assessment before being placed on the market. For most healthcare AI that is already subject to MDR, this assessment will be conducted by a notified body — the same body that reviews your MDR technical file. For standalone clinical AI that falls below the MDR device threshold, a self-assessment pathway exists, but only if you can demonstrate full compliance with all Annex IV documentation requirements.
After conformity assessment, you must affix the CE marking and register in the EU AI database, which will be publicly accessible. This introduces a degree of regulatory transparency that the sector is not yet accustomed to.
Key Takeaways
- Most clinical AI is classified as high-risk by default under the EU AI Act
- High-risk obligations include risk management, data governance, technical documentation, and human oversight
- If already under MDR, you face both frameworks simultaneously — plan accordingly
- Start documentation from day one — retroactive compliance is expensive and often incomplete
- Conformity assessment by a notified body is required for most clinical AI before EU market placement
- Post-market monitoring is a continuous obligation, not a one-off event
What This Means for Startups
The AI Act raises the compliance bar significantly, but it also creates competitive advantage for startups that take it seriously early. A hospital procurement committee in 2026 will ask for AI Act compliance evidence alongside MDR certification. Startups with well-documented, auditable development processes will have a credible answer. Those who built fast and hoped to patch compliance later will face difficult conversations.
At Bilobe, we have made compliance-by-design a core part of our development methodology. Our technical documentation practices, bias evaluation pipelines, and human oversight mechanisms were designed with both MDR and the AI Act in mind. We believe this is not just a regulatory necessity — it is the right way to build AI that actually works in healthcare.