2024 | Felix Busch, Jakob Nikolas Kather, Christian Johner, Marina Moser, Daniel Truhn, Lisa C. Adams & Keno K. Bressem
The European Union's Artificial Intelligence (AI) Act, adopted in March 2024, is the first comprehensive legal framework for AI. It aims to promote human-centered, trustworthy AI while protecting individuals' health, safety, and fundamental rights. The Act applies to all AI systems placed on the EU market, regardless of their origin, and extraterritorially where their output is used in the EU. It sets harmonized rules for the placing on the market and use of AI systems, with most provisions taking effect within 24 months and the prohibitions on unacceptable-risk AI practices within 6 months.
The AI Act follows a risk-based approach, prohibiting AI practices that pose unacceptable risks, such as manipulative or deceptive techniques, biometric categorization based on sensitive attributes, and untargeted scraping of facial images for facial recognition databases. It classifies AI systems as high risk if they serve as safety components of regulated products or pose significant risks to health, safety, or fundamental rights. High-risk AI systems must comply with additional requirements, including risk management, cybersecurity, and transparency.
General-purpose AI (GPAI) models, such as large language models, are also subject to the AI Act. GPAI models with high-impact capabilities, presumed where cumulative training compute exceeds 10^25 floating-point operations, may be classified as presenting systemic risk. These models must meet additional requirements, including model evaluation, systemic-risk mitigation, and cybersecurity protections.
The AI Act also imposes transparency obligations on AI systems that interact with individuals or generate content. These include providing transparency information to downstream providers and informing users about the use of AI. AI systems not classified as high risk or GPAI are not subject to strict requirements but are encouraged to voluntarily comply with some of the mandatory requirements.
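The tiered structure described above (prohibited practices, high-risk systems, GPAI with or without systemic risk, transparency-only obligations, and minimal-risk systems) can be sketched as a simple decision procedure. This is an illustrative simplification only: the field names, the order of checks, and the boolean attributes are assumptions for the sketch, not the Act's legal definitions, although the 10^25 FLOP presumption for systemic-risk GPAI does appear in the Act.

```python
from dataclasses import dataclass

# Illustrative sketch only: attribute names and the flat boolean model are
# simplifications, not the Act's legal tests.
@dataclass
class AISystem:
    uses_prohibited_practice: bool    # e.g. manipulative or deceptive techniques
    is_safety_component: bool         # safety component of a regulated product
    affects_fundamental_rights: bool  # significant risk to health/safety/rights
    is_gpai: bool                     # general-purpose AI model
    training_flops: float             # cumulative training compute
    interacts_with_humans: bool       # chatbots, content generators, etc.

def classify(system: AISystem) -> str:
    """Map a system onto the Act's broad risk tiers (simplified)."""
    if system.uses_prohibited_practice:
        return "prohibited"
    if system.is_safety_component or system.affects_fundamental_rights:
        return "high-risk"
    if system.is_gpai:
        # The Act presumes systemic risk above 10**25 training FLOPs.
        return "gpai-systemic" if system.training_flops >= 1e25 else "gpai"
    if system.interacts_with_humans:
        return "transparency-only"
    return "minimal"
```

For example, a large language model trained with more than 10^25 FLOPs would fall into the systemic-risk GPAI tier, while a simple recommendation widget with no human-facing interaction would land in the minimal tier.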
The AI Act has significant implications for the healthcare sector, because existing regulations such as the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Device Regulation (IVDR) do not explicitly address AI-specific risks. Most medical AI products, which already require third-party conformity assessment under the MDR, will be classified as high risk and must therefore also comply with the AI Act's requirements. By setting high standards for AI development and use, the Act may also shape the global market, as other regulators may adopt similar rules.
The AI Act introduces regulatory sandboxes to facilitate the development and testing of AI systems. However, how the Act will be implemented alongside existing vertical regulations, such as the MDR and IVDR, remains unclear. The Act may increase regulatory complexity and costs for medical AI products, particularly for small and medium-sized enterprises. Overall, the AI Act aims to ensure the safe and fair development and deployment of AI across industries, including healthcare.