
The EU AI Act: Navigating Compliance for High-Risk Businesses

Writer: Kristina Romanenko


The European Union's Artificial Intelligence Act (AI Act), effective since August 1, 2024, is the world’s first comprehensive legal framework for regulating artificial intelligence. It aims to ensure AI systems are safe, transparent, and respectful of fundamental rights while fostering innovation. For companies developing or using AI solutions across industries — from healthcare and finance to legaltech, digital services, and beyond — the AI Act introduces critical compliance requirements.


This article provides a clear and actionable guide to understanding the AI Act, its implications for businesses, and the steps needed to achieve compliance.



What is the AI Act?

The EU AI Act (Regulation 2024/1689) is the world’s first horizontal legal framework governing artificial intelligence — and it’s a game changer for companies operating in the EU market.


Key Features:


  • Risk-Based Classification: AI systems are divided into four categories: unacceptable risk, high risk, limited risk, and minimal or no risk.

  • Prohibited Practices: Certain AI applications, such as those manipulating human behavior, are banned.

  • High-Risk Requirements: Stringent obligations apply to high-risk systems, including those used in regulated products and critical sectors.

  • General-Purpose & Generative AI: Foundation models and generative AI (like large language models) face dedicated transparency and safety rules.


Whether you’re building AI for fraud detection, compliance automation, contract review, financial risk analysis, or patient engagement tools, the AI Act introduces new compliance duties, documentation standards, and cybersecurity requirements to ensure trust, safety, and ethical alignment. [1]



Who Needs to Comply?

The AI Act applies to a wide range of stakeholders involved in developing, distributing, or using AI systems within the EU market — even if the company is based outside the EU.


If your AI system is placed on the EU market or its outputs are used in the Union (for example, when an AI model hosted abroad delivers decisions, recommendations, or generated content to EU users), the Act applies. It primarily places obligations on:

  • Providers: companies developing or placing AI systems, or a general-purpose AI model, on the EU market (e.g., credit scoring tools, AI tools for loan approvals, medical image analysis).

  • Deployers: organizations using AI systems under their authority, such as banks, law firms, hospitals, or logistics operators.

  • Other parties: distributors, importers, and authorized representatives responsible for ensuring compliance before EU market entry.

The most significant regulatory burdens under the Act are placed on providers.


⚠️ The AI Act does not apply in certain situations, including:


  • Military, defense, and national security uses of AI.

  • Pure research and testing activities (before the system is placed on the market).

  • Personal, non-professional use of AI by individuals.

  • Open-source models released under free/open licenses — unless they are placed on the market as high-risk AI or considered general-purpose AI models with systemic risk.


AI is no longer experimental — it’s treated as a regulated product category, similar to medical devices, financial services, or other technologies that directly affect people’s lives.


If your AI system shapes decisions or access related to people’s rights, safety, or essential services — such as determining loan approvals or eligibility for social security benefits — it likely falls under the Act. [2]



What is considered an AI system?

The AI Act defines an AI system broadly to ensure wide coverage. According to Article 3(1), an AI system is:

A machine-based system designed to operate with varying levels of autonomy and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

An AI system includes several main elements:


  • Systems operating with varied levels of autonomy

  • Systems that can adapt as they learn

  • Systems that learn and generate outputs

  • Generated outputs that can influence physical or virtual environments


The last element is especially significant, as any system capable of influencing our surroundings can create societal risks — reinforcing the need for regulation to protect individuals from harmful consequences.



AI Risk Categories

The AI Act uses a risk-based approach: the higher the potential impact on people’s health, safety, or rights, the stricter the rules — from light oversight to strict controls, or even a full ban.

  • Unacceptable Risk: AI systems considered a clear violation of fundamental rights or EU values. Examples: social scoring, untargeted scraping of facial images, and emotion recognition in workplaces or schools. Requirements: prohibited in the EU; such systems cannot be placed on the market or used.

  • High Risk (HRAI): AI systems that significantly affect health, safety, or fundamental rights, often AI in regulated products or the critical use cases listed in Annex III. Examples: credit scoring, medical diagnostics, employee monitoring, AI in critical infrastructure, and law enforcement. Requirements: subject to the strictest controls, including risk management, high-quality datasets, conformity assessment, technical documentation, human oversight, registration in the EU database, and post-market monitoring.

  • Limited Risk (Transparency Requirements): AI systems that interact with humans or generate or modify content, where transparency is essential to prevent manipulation. Examples: chatbots, virtual assistants, contract drafting tools, and deepfakes. Requirements: providers must clearly disclose when users are interacting with AI or when content has been generated or manipulated by AI; these rules may also apply as additional requirements for certain HRAI or GPAI systems.

  • Minimal Risk: AI systems with little to no impact on rights or safety. Examples: wellness reminder apps, spam filters, and AI in video games. Requirements: no legal obligations, though providers may adopt voluntary codes of conduct for trustworthy AI.


⚠️ General-Purpose & Generative AI (GPAI): Large foundation and generative models (such as advanced language models) fall under separate rules. Due to their powerful, high-impact capabilities, they can create systemic risks — for example, by spreading bias, amplifying misinformation, or exposing systems to cyberattacks. These models face additional obligations for documentation, transparency, adversarial testing, incident reporting, and robust cybersecurity before they can be integrated into high-impact applications. (See the section on GPAI & Systemic Risk below for details.) [3]



Prohibited AI Systems

The AI Act bans certain AI practices deemed to pose unacceptable risks. These systems are considered fundamentally incompatible with EU values and are strictly prohibited, meaning they cannot be developed, marketed, or used within the Union.


Prohibited practices include:


  • Behavioral Manipulation: AI systems using subliminal techniques or manipulative prompts to distort human behavior in ways that may cause harm.

  • Exploitation of Vulnerabilities: AI systems that take advantage of people’s age, disability, or socio-economic situation to influence decisions or actions to their detriment.

  • Biometric Identification: the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with only narrow, strictly defined exceptions.

  • Social scoring evaluations: AI systems that evaluate or classify people’s trustworthiness, social behavior, or personal characteristics based on social scoring, particularly when this leads to unjustified or disproportionate treatment, exclusion, or disadvantage.


Regardless of the sector — finance, healthcare, legal, or beyond — providers must ensure their AI solutions are not designed in ways that could be interpreted as manipulative, exploitative, or intrusive.


📌 For the full list of prohibited activities, see the European Commission’s Guidelines on Prohibited Artificial Intelligence Practices. [4]



How Do I Know If An AI System Is High-Risk?

Not every AI tool is treated equally under the AI Act. Some are banned, while others operate with little oversight. The category that matters most is high-risk AI — where the strictest obligations apply and many critical applications fall by default.


Two main pathways for classifying an AI system as high-risk:


1️⃣ Product Safety Route


An AI system is considered high-risk if it is a safety component of a product, or the product itself, that:


  • falls under existing EU product safety law (e.g., medical devices, machinery, vehicles),

  • and requires a third-party conformity assessment before being placed on the EU market.


2️⃣ Annex III Route


Even if not linked to a regulated product, an AI system may still be high-risk if it falls into one of the use cases listed in Annex III. These include:


  • Biometric identification and categorisation

  • Critical infrastructure (transport, energy, water supply)

  • Education and vocational training

  • Employment and worker management

  • Access to essential services (such as banking, credit, insurance, and healthcare)

  • Law enforcement

  • Migration, asylum, and border control

  • Justice and democratic processes


Ask Yourself:

  • Is the AI a safety component of a product that requires EU conformity assessment?

  • Does it directly influence people’s rights, safety, or access to essential services?

  • Does it fall into one of the Annex III use cases such as employment, education, or biometric identification?


👉 If yes, your system likely qualifies as high-risk and will need to meet the strict compliance requirements of the AI Act. [5]
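To make the two pathways and the questions above concrete, here is a minimal Python sketch of a first-pass triage. The flag names and Annex III area labels are illustrative assumptions for this example; Article 6 and Annex III remain the legal test, and real classification needs legal review.

```python
# Hypothetical first-pass triage of the two high-risk pathways described above.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "essential services",
    "law enforcement",
    "migration and border control",
    "justice and democratic processes",
}

def is_high_risk(safety_component_of_regulated_product: bool,
                 needs_third_party_conformity_assessment: bool,
                 annex_iii_area: str = "") -> bool:
    """Rough screening only; not a substitute for legal analysis."""
    # Pathway 1: product safety route
    if safety_component_of_regulated_product and needs_third_party_conformity_assessment:
        return True
    # Pathway 2: Annex III route
    return annex_iii_area in ANNEX_III_AREAS

# Example: a credit-scoring model (an "essential services" use case under Annex III)
print(is_high_risk(False, False, "essential services"))  # True
```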



Conformity Assessment Process by Notified Body

Once an AI system is classified as high-risk, it cannot be placed on the EU market without passing a conformity assessment. This process verifies that the system meets the AI Act’s requirements before it can be used by customers or the public.


Depending on the category of high-risk AI, the assessment may involve either:


1️⃣ Notified Body assessment – for high-risk AI linked to regulated products (e.g., medical devices, machinery, vehicles), an independent Notified Body reviews the system.


2️⃣ Provider self-assessment – for most standalone high-risk AI systems under Annex III, providers may follow the self-assessment route, provided they operate under a compliant Quality Management System (QMS).


The assessment typically covers:


  • Quality Management System (QMS) review – evaluation of the provider’s governance, processes, and risk management.

  • Technical documentation review – assessment of data governance, transparency measures, and system performance.

  • Ongoing surveillance – audits and monitoring to ensure continued compliance.


[Figure: The EU AI Act compliance process]

After a successful assessment, the provider must sign an EU Declaration of Conformity, register the system in the EU database for high-risk AI systems, and affix the CE marking, confirming that the system meets all applicable obligations.


Conformity is not a one-time exercise — companies must maintain compliance and be ready for post-market monitoring and reassessment if the AI system undergoes significant changes. [6]



Essential Requirements for High-Risk AI Operators

Obligations of Providers

The strictest obligations for high-risk AI systems fall on providers. Once an AI system is classified as high-risk, providers must comply with a set of binding requirements before it can be placed on the EU market.

  • Risk Management System (Article 9): set up and maintain a process to identify, monitor, and reduce risks to health, safety, and fundamental rights.

  • Data & Data Governance (Article 10): use datasets that are relevant, representative, and well-managed, with steps to prevent bias and protect sensitive information.

  • Technical Documentation (Article 11 & Annex IV): keep detailed records on how the system was built, its purpose, design, risk controls, and cybersecurity, updated whenever changes are made.

  • Logging & Traceability (Articles 12 & 19): build systems that automatically record key events, helping with audits, monitoring, and accountability.

  • Provision of Information and Instructions (Article 13): give deployers accessible guidance covering system use, risks, oversight measures, and technical requirements.

  • Human Oversight (Article 14): design systems so people can supervise, understand outputs, intervene, and stop them when needed.

  • Accuracy, Robustness & Cybersecurity (Article 15): make systems reliable, resilient to errors, and protected against cyber threats.

  • Transparency & Accessibility (Articles 16 & 48): mark systems with provider details and CE conformity, and ensure accessibility for persons with disabilities.

  • Quality Management System (Article 17): maintain a structured compliance framework covering design, testing, monitoring, reporting, and accountability.

  • Record-Keeping (Article 18): retain all key compliance documentation for 10 years.

  • Corrective Actions (Article 20): take immediate steps to fix or withdraw non-compliant systems, including disabling or recalling them if needed.

  • Cooperation with Authorities (Article 21): work with regulators on request, providing documentation, information, and support during investigations.

  • EU Representatives (Article 22): non-EU providers must appoint an EU-based contact to handle compliance.

  • Conformity Assessment (Article 43): ensure the system undergoes the required conformity checks before being placed on the market.

  • EU Declaration of Conformity (Article 47): issue and sign a declaration confirming the system meets all requirements of the AI Act.

  • CE Marking (Article 48): affix the CE marking to show compliance with the AI Act and related EU laws.

  • Registration (Article 49): register the provider and system in the EU’s public database before market entry.

  • Post-Market Monitoring (Article 72): track system performance after release, collect data, and address new risks.

  • Incident Reporting (Article 73): report serious incidents quickly, investigate root causes, and take corrective action.
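As an illustration of the automatic event recording that the Logging & Traceability item (Articles 12 and 19) points to, here is a minimal sketch of structured decision logging. The Act does not prescribe a log format, so the field names and the toy prediction service are assumptions of this example.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch of structured event logging for traceability (illustrative only).
logger = logging.getLogger("ai_audit_trail")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_events.log"))

def log_decision_event(model_version: str, input_summary: dict, output: dict, operator_id: str) -> None:
    """Record one decision so it can be reconstructed later during an audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # log summaries or hashes, not raw personal data
        "output": output,
        "operator_id": operator_id,
    }
    logger.info(json.dumps(event))

log_decision_event("credit-scoring-v1.3", {"features_hash": "ab12f9"}, {"decision": "refer_to_human"}, "analyst-42")
```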

Providers must also verify that their AI systems can resist manipulation, model inversion, or data poisoning attacks. This often requires proactive testing of model inputs and outputs to detect vulnerabilities in real-world conditions. Learn more about AI and LLM Penetration Testing and how it supports compliance with Article 15 of the AI Act.
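As a minimal illustration of the kind of proactive input/output testing mentioned above, the sketch below probes a placeholder fraud-detection model with small perturbations and reports how often its decision flips. The `classify_transaction` function and the thresholds are invented for this example; real robustness testing would exercise the actual model interface and a far richer set of attacks.

```python
import random

def classify_transaction(features: dict) -> str:
    # Placeholder model standing in for whatever interface your system exposes.
    return "fraud" if features["amount"] > 10_000 else "legitimate"

def perturbation_test(features: dict, trials: int = 100, noise: float = 0.01) -> float:
    """Return the fraction of slightly perturbed inputs that change the decision."""
    baseline = classify_transaction(features)
    flips = 0
    for _ in range(trials):
        perturbed = dict(features)
        perturbed["amount"] *= 1 + random.uniform(-noise, noise)
        if classify_transaction(perturbed) != baseline:
            flips += 1
    return flips / trials

# A high flip rate near a decision boundary signals a brittle, easily manipulated model.
print(perturbation_test({"amount": 9_990}))
```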



Obligations of Deployers

While providers bear the strictest responsibilities, deployers — the organizations that actually use high-risk AI systems — also carry binding duties under the AI Act. Their role is to ensure that systems are used responsibly, in line with the provider’s instructions, and in a way that protects people affected by AI decisions.

  • General operations (Art. 26(1)-(3)): use AI systems only as instructed by the provider and assign oversight to trained staff with the right resources.

  • Input data quality (Art. 26(4)): ensure input data is relevant, representative, and suitable to avoid bias or misuse.

  • Post-market monitoring and incident reporting (Art. 26(5) & 72): monitor system performance, share data with providers, pause use if risks appear, and report incidents immediately.

  • Event log retention (Art. 26(6)): keep system logs for at least six months (or longer if required) to support audits and accountability.

  • Provision of information to workers (Art. 26(7)): notify employees and their representatives before introducing AI in the workplace, in line with labour laws.

  • Registration obligation for the public sector (Art. 26(8) & 49): public bodies must register the intended use of high-risk AI in the EU database before deployment; unregistered systems cannot be used.

  • Data protection impact assessment (DPIA) (Art. 26(9)): use the provider’s information to carry out a DPIA where required under the GDPR or other EU data laws.

  • Biometric identification restrictions (Art. 26(10)): remote biometric identification requires prior authorisation, must be strictly necessary, documented, and reported; untargeted or disproportionate use is banned, and data must be deleted if authorisation is denied.

  • Provision of information to individuals (Art. 26(11)): inform people when AI is used in Annex III decisions affecting them, unless EU law allows exceptions for law enforcement.

  • Cooperation with competent authorities (Art. 26(12)): provide full cooperation to regulators during checks and investigations.

Obligations of Distributors, Importers and Authorized Representatives

Beyond providers and deployers, the AI Act also sets responsibilities for distributors, importers, and authorised representatives. Their duties focus on ensuring that only compliant high-risk AI systems enter and circulate in the EU market. This includes verifying CE marking, checking conformity documentation, cooperating with authorities, and taking corrective actions when risks or non-compliance are identified. [7]



Fundamental Rights Impact Assessment (FRIA)

One of the entirely new obligations introduced by the AI Act is the Fundamental Rights Impact Assessment (FRIA). Unlike GDPR’s DPIA, which focuses on data protection, the FRIA requires certain deployers of high-risk AI systems to assess how their systems may affect fundamental rights — including privacy, non-discrimination, and equal access — before deployment.


When FRIA applies

FRIA applies only to high-risk AI systems listed in Annex III and under the following conditions:


1️⃣ Public service contexts – when deployed by public bodies or private entities delivering public services (e.g., public schools, hospitals, or employment agencies):


  • Biometric identification & categorisation systems

  • Education & vocational training systems

  • Employment & worker management systems

  • Access to essential services (e.g., healthcare, banking, social security)


2️⃣ Always required, regardless of deployer type, for:


  • Credit scoring AI systems

  • Insurance risk/eligibility AI systems

  • Law enforcement AI systems

  • Migration, asylum & border control AI systems

  • Judicial/justice & democratic process AI systems


⚠️ Exception:


  • High-risk AI for critical infrastructure (Annex III, point 2) is explicitly excluded from FRIA obligations.

  • If your AI system is high-risk because of product safety law (e.g., a medical device AI under MDR/IVDR), you do not need to conduct a FRIA — those are covered under conformity assessments.
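Taken together, the conditions and exceptions above form a small decision rule. The sketch below encodes that rule as described in this article; the flag and use-case names are illustrative assumptions, and Article 27 remains the authoritative test.

```python
# Illustrative FRIA-applicability check based on the conditions summarised above.
ALWAYS_FRIA_USE_CASES = {
    "credit scoring",
    "insurance risk or eligibility",
    "law enforcement",
    "migration, asylum and border control",
    "justice and democratic processes",
}

def fria_required(annex_iii_high_risk: bool,
                  high_risk_via_product_safety_only: bool,
                  critical_infrastructure: bool,
                  deployer_is_public_or_public_service: bool,
                  use_case: str) -> bool:
    """Rough screening of the FRIA conditions; Article 27 is the legal reference."""
    if not annex_iii_high_risk or high_risk_via_product_safety_only or critical_infrastructure:
        return False
    return deployer_is_public_or_public_service or use_case in ALWAYS_FRIA_USE_CASES

# Example: a private insurer deploying an eligibility-scoring system
print(fria_required(True, False, False, False, "insurance risk or eligibility"))  # True
```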


What FRIA Involves

Deployers must:


  • Identify and document potential impacts on fundamental rights.

  • Evaluate risks of discriminatory or unfair outcomes.

  • Define mitigation measures before the system goes live.


To ensure consistency, the AI Office will issue a template that deployers must use as the basis for their assessment.


The FRIA goes beyond technical safety. It forces organizations to consider the societal and rights-based impact of their AI systems, ensuring fairness and trust where AI decisions directly affect people’s lives. [8]



AI Regulatory Sandboxes

Alongside obligations, the AI Act also introduces AI regulatory sandboxes — supervised environments set up by national authorities where companies can develop, train, and test AI systems in cooperation with regulators.


These sandboxes are designed to:


  • Help providers, especially startups and SMEs, experiment with innovative AI under regulatory guidance.

  • Allow companies to test high-risk systems in real-world conditions while ensuring safeguards for users.

  • Provide early feedback on compliance, reducing uncertainty before entering the market.

  • Support the development of AI for the public interest, such as healthcare, education, or justice.


📅 Current status: Member States are required to establish national sandboxes by August 2026, with coordination from the European AI Office. Some pilot projects may launch earlier, but widespread availability will align with the rollout of high-risk AI obligations.

Participation in sandboxes is voluntary but can be especially valuable when companies work on novel, high-risk AI where requirements are complex or not yet fully standardised. They serve as a bridge between innovation and compliance, giving businesses a safe space to adapt their systems to EU rules before full deployment. [9]



General-Purpose AI & Systemic Risk

Not all AI systems are designed for a single use. Some — known as General-Purpose AI (GPAI) — are built as foundation models that can be adapted to many different applications. They can be integrated into downstream systems for tasks like analytics, automation, content generation, or decision support. According to Article 3 (63):

‘general-purpose AI model’ means an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and whether as a standalone or embedded in an AI system.

Obligations for GPAI Providers

Under Chapter V, Section 2 (Article 53), providers of General-Purpose AI (GPAI) models must meet baseline transparency and accountability requirements, even when no systemic risk is identified.


These include:


  • Technical Documentation (Annex XI): Prepare and maintain detailed documentation describing the model’s design, purpose, architecture, parameters, modalities (inputs/outputs), training/testing methods, and integration requirements.

  • Information Sharing (Annex XII): Provide downstream AI system providers with sufficient information to understand the model’s capabilities, limitations, and risks — enabling them to meet their own compliance obligations.

  • Training Transparency: Publish a public summary of the data sources used for training, following the template issued by the EU AI Office.

  • Copyright Compliance: Implement and enforce policies that ensure compliance with EU copyright law, including text and data mining restrictions under Directive (EU) 2019/790.


⚠️ If a provider of GPAI is based outside the EU, they must appoint an authorised EU representative to handle compliance and communication with the AI Office and national regulators.



Systemic Risk Designation

A general-purpose AI model is treated as having systemic risk — and therefore subject to stricter rules — if it demonstrates very powerful capabilities. This can happen in three ways:


1️⃣ If the model is judged to have high-impact capabilities (based on benchmarks and technical indicators).


2️⃣ If the European Commission designates it as systemic risk, either on its own initiative or following advice from the AI scientific panel.


3️⃣ If training the model required more than 10²⁵ FLOPs (floating-point operations), it is automatically presumed to have systemic risk.


‼️ If a provider’s model crosses this threshold, they must notify the European Commission within two weeks and provide evidence. Failing to do so can trigger enforcement and penalties.
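For a rough sense of scale, training compute is often estimated with the rule of thumb of roughly 6 floating-point operations per parameter per training token. That heuristic, and the model size and token count below, are assumptions of this sketch, not figures defined by the Act.

```python
# Back-of-the-envelope training-compute estimate using the common 6 * parameters * tokens heuristic.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold set by the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Hypothetical 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"{flops:.2e}")                          # ~6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False: below the presumption threshold
```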



Additional Obligations for GPAI with Systemic Risk

When systemic risk is confirmed, GPAI providers face stricter obligations under Chapter V, Section 3 (Article 55), including:


  • Model Evaluation & Adversarial Testing: Conduct independent testing to identify vulnerabilities, stress-test robustness, and evaluate resilience against malicious use or unintended harms.

  • Union-Level Risk Management: Carry out ongoing risk assessments and mitigation strategies that address potential cross-sectoral and cross-border impacts — not just individual applications.

  • Serious Incident Reporting: Notify the AI Office and relevant national authorities without delay of any serious incidents, malfunctions, or emerging risks linked to the model’s deployment.

  • Enhanced Cybersecurity: Implement technical and organisational measures to protect the model and its supporting infrastructure against AI-specific threats such as data poisoning, model manipulation, adversarial attacks, and confidentiality breaches.

  • EU Cooperation: Engage with the AI Office, scientific panels, and regulators to support monitoring, audits, and updates to systemic risk methodologies.


Most companies won’t build GPAI themselves but will rely on foundation models developed by others. This means companies must verify their suppliers’ compliance and adapt their own integration processes to meet EU requirements. [10]



What Are the Penalties for Infringement?

The AI Act doesn’t just set rules — it also establishes serious financial and reputational consequences for companies that fail to comply. The fines are designed to mirror GDPR-style enforcement, making non-compliance a significant business risk across all industries.


Penalties under the AI Act:


  • Prohibited AI Practices: Up to €35 million or 7% of global annual turnover (whichever is higher).

  • Non-Compliance with High-Risk Obligations: Up to €15 million or 3% of global annual turnover.

  • Supplying Incorrect, Incomplete, or Misleading Information: Up to €7.5 million or 1% of global annual turnover.
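As a quick worked example of the “whichever is higher” rule, consider a hypothetical company with €1 billion in global annual turnover facing the prohibited-practices tier; the figures below are illustrative only.

```python
# Illustrative only: the higher of the fixed cap and the turnover-based cap applies.
def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Prohibited-practices tier for a hypothetical €1 billion turnover company:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 -> the 7% cap applies
```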


Small and medium-sized enterprises (SMEs) and startups may face lower ceilings for fines, but the reputational and operational damage of non-compliance can still be severe.


❗For companies building or deploying AI in sensitive or high-impact areas, penalties could be existential. Beyond the financial cost, failing to comply may mean losing EU market access, eroding trust with partners, and damaging brand credibility in sectors where safety, fairness, and accountability are non-negotiable.



EU AI Act Compliance Timeline

The AI Act introduces obligations in stages, allowing companies time to adapt. Deadlines vary depending on the type of AI system, with the most stringent rules applying later to high-risk and general-purpose AI.

Key milestones:

  • August 1, 2024 (AI Act enters into force): the regulation became law across all EU Member States.

  • February 2025 (prohibited practices ban applies): any AI system falling under unacceptable risk (e.g., manipulative AI, exploitative systems, biometric surveillance) must be withdrawn or never placed on the EU market.

  • August 2025 (rules for general-purpose AI models): providers of foundation models and generative AI must comply with transparency, documentation, and testing requirements.

  • August 2026 (obligations for high-risk AI systems): high-risk AI systems must undergo conformity assessment and meet all technical, documentation, and oversight obligations.

  • August 2027 (full application): the remaining transition periods end, notably for high-risk AI embedded in regulated products (the product safety route) and for general-purpose AI models placed on the market before August 2025. By this date, every AI system in scope must fully comply.

📌 The critical milestone is August 2026, when high-risk AI rules take effect. Companies should use the lead time to:


✔️ Map whether their AI falls into the high-risk category.

✔️ Prepare technical documentation and risk management processes.

✔️ Engage early with a Notified Body (if required) for conformity assessment.



Harmonised Standards & Guidelines for the European AI Act

The AI Act sets the legal obligations, but the practical details of compliance are still being shaped. The European Commission, the new AI Office, and EU standardisation bodies are developing harmonised standards, guidelines, and codes of practice that will translate the law into concrete, testable requirements. [11]


In the meantime, the new ISO/IEC 42001:2023 offers a structured framework for governance, risk management, transparency, and accountability — helping companies prepare ahead of time.


Although designed for organisational AI governance, many of its principles map directly to the AI Act's essential requirements. Aligning with ISO/IEC 42001 now can help companies:


  • Bridge the gap until EU harmonised standards are finalised.

  • Reduce compliance risks by embedding verifiable processes for risk, quality, and security.

  • Demonstrate responsibility to regulators, partners, and customers by showing proactive adoption of international best practice.


The AI Act defines what must be achieved, but harmonised standards — likely building on ISO/IEC 42001 practices — will define how to achieve it in a verifiable, consistent way. Companies that adopt these frameworks early will be better positioned for smooth conformity assessments in 2026. [12]



How to get started

The AI Act is broad and complex, but companies can start preparing today by breaking compliance down into clear, practical steps. To get started, consider these essential steps:


1️⃣ Identify Your Role

Are you a provider (developing or placing AI on the market), a deployer (using AI in operations), or another party (importer, distributor, authorised representative)?

→ Your role determines which obligations apply.


2️⃣ Classify Your AI System

Determine your system’s risk category: unacceptable, high, limited, or minimal.

→ High-risk systems carry the heaviest obligations.


3️⃣ Check Annex III Applicability

Review Annex III to see if your AI system falls into one of the listed high-risk use cases (e.g., employment, essential services, law enforcement).

→ If yes, additional obligations such as a Fundamental Rights Impact Assessment (FRIA) may apply.


4️⃣ Determine Conformity Assessment Requirements

If your AI is high-risk:

  • Standalone high-risk AI (Annex III) → usually self-assessment if a compliant Quality Management System (QMS) is in place.

  • AI in regulated products (e.g., medical devices, vehicles) → requires assessment by a Notified Body.

→ Plan early, as conformity assessments can be resource-intensive.


5️⃣ Check for GPAI Dependencies

  • If you rely on general-purpose AI models (foundation or generative models), verify whether providers meet baseline obligations.

  • If the model is classified as systemic risk, prepare for stricter downstream integration requirements.


By answering these questions, you’ll cut through the complexity of the AI Act and know exactly which rules apply, what’s at stake, and how to prepare.
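One lightweight way to keep track of those answers is a simple internal triage record, sketched below. The field names and the helper method are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field

# Hypothetical internal triage record mirroring the five questions above.
@dataclass
class AIActTriage:
    system_name: str
    role: str                      # "provider", "deployer", "importer", ...
    risk_category: str             # "unacceptable", "high", "limited", "minimal"
    annex_iii_use_cases: list = field(default_factory=list)
    uses_gpai_model: bool = False
    gpai_systemic_risk: bool = False

    def needs_conformity_assessment(self) -> bool:
        return self.role == "provider" and self.risk_category == "high"

triage = AIActTriage(
    system_name="loan-approval-assistant",
    role="provider",
    risk_category="high",
    annex_iii_use_cases=["essential services"],
    uses_gpai_model=True,
)
print(triage.needs_conformity_assessment())  # True
```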


👉 Not sure where your AI system fits, or which obligations you can’t afford to miss? Sekurno helps you pinpoint your risk level, decode the AI Act, and implement frameworks like ISO 42001 — turning compliance into a competitive advantage while others struggle to catch up.



References

  1. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng

  2. EU AI Act, Chapter I, Article 2, Scope

  3. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  4. https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act

  5. EU AI Act, Chapter III, Article 6, Classification rules for high-risk AI systems

  6. EU AI Act, Chapter III, Section 5, Standards, conformity assessment, certificates, registration

  7. EU AI Act, Chapter III, Section 3, Obligations of providers and deployers of high-risk AI systems and other parties

  8. EU AI Act, Chapter III, Article 27, Fundamental rights impact assessment for high-risk AI systems

  9. EU AI Act, Chapter VI, Article 57, AI regulatory sandboxes

  10. EU AI Act, Chapter V, General-Purpose AI Models

  11. https://publications.jrc.ec.europa.eu/repository/handle/JRC139430

  12. https://www.iso.org/standard/42001
