What is the EU’s Artificial Intelligence Act – AI Act for short?

The Artificial Intelligence Act (AI Act) is the world’s first binding law regulating artificial intelligence (AI). The AI Act was adopted by the European Parliament on March 13, 2024, after the European Commission introduced it as a legislative proposal in April 2021.

The AI Act is part of a broader European Union (EU) AI strategy that aims to harness the potential of AI while minimizing risks, maximizing opportunities, combating discrimination and ensuring transparency in the use of AI technologies.

Objectives of the AI Act

The main objective of the AI Act is to create a harmonized legal framework for the development, use and distribution of AI systems within the EU. This legal framework pursues several objectives:

  • Risk mitigation: The AI Act adopts a risk-based approach where AI systems are subject to different levels of regulation depending on the potential risks they pose to society and individuals. By setting out specific requirements for AI systems that are classified as “high-risk”, the Act aims to minimize potential harm and improve the safety and reliability of AI applications.
  • Creating opportunities: By creating a clear and predictable regulatory environment, the AI Act promotes innovation and investment in AI technologies. Companies and developers benefit from clear guidelines that help them to develop compliant products and at the same time strengthen user confidence in AI-based solutions.
  • Combating discrimination: The AI Act sets out strict requirements for the transparency, data quality and monitoring of AI systems to ensure that they do not lead to discrimination or injustice. This includes bans on certain applications of AI that are considered unacceptably risky, such as technologies based on unethical or biased data.
  • Ensuring transparency: The Act requires transparency in the functioning of AI systems, especially when they interact with humans. This is to ensure that users understand when they interact with AI systems and promotes awareness and understanding of the decision-making processes behind AI applications.

Relevance of the AI Act

The Artificial Intelligence Act represents a significant step in the global regulation of AI technologies and positions the European Union as a pioneer in the creation of a regulated space for the development and use of artificial intelligence.

By setting ethical and legal standards in AI regulations, the AI Act aims to make Europe a leading global center for trustworthy and safe AI.

With its risk-based approach, the AI Act strikes a balance that takes into account both the protection of citizens’ fundamental rights and freedoms and the promotion of innovation and economic growth.

The AI Act underlines the EU’s commitment to a “human-centered” approach to AI, which aims to use technology for the benefit of society while preventing potential negative impacts on individuals and society.

Context and background of the Artificial Intelligence Act (AI Act)

Development of the AI Act

The development of the European Union’s Artificial Intelligence Act (AI Act) is a significant event in the regulation of digital technologies and marks a decisive step towards shaping the global AI landscape. The proposal for the AI Act was presented by the European Commission in April 2021 and is the result of a comprehensive, three-year consultation process involving various stakeholders from industry, academia, civil society and government institutions. This process aimed to create a balanced legal framework that promotes innovation while addressing the risks associated with the use and deployment of artificial intelligence.

The initiative for the AI Act arose from the growing realization that AI technologies have the potential to have a profound impact on society, the economy and people’s personal lives. With AI applications increasingly being used in critical and sensitive areas such as healthcare, education, justice and employment, the need for regulatory intervention to ensure that these technologies are developed and used in a way that is consistent with the EU’s fundamental values and rights has become clear.

Significance in the context of data protection

The AI Act does not stand alone, but complements existing EU regulations such as the General Data Protection Regulation (GDPR), which has governed the processing of personal data within the EU since May 2018. The GDPR focuses on protecting personal data and ensuring the privacy of citizens, while the AI Act is specifically designed to address the risks posed by the specific characteristics and applications of AI systems.

The importance of the AI Act in the context of data protection can be illustrated by several key aspects:

  • Supplement to the GDPR: The AI Act builds on the principles of data protection established by the GDPR and extends them by introducing specific requirements for AI systems that process personal data. This will further strengthen the protection of citizens in the digital world.
  • Strengthening protection against automated decision-making: The AI Act specifically addresses the challenges and risks associated with automated decision-making and profiling by AI systems by promoting transparency, accuracy and fairness in these processes.
  • Human oversight: The AI Act emphasizes the importance of human oversight in the use of AI systems, especially in sensitive areas, to ensure that decisions that have a significant impact on individuals are verifiable and responsible.

The AI Act is therefore a fundamental building block in the EU strategy for digital ethics and the protection of citizens’ rights. It represents a forward-looking approach to AI regulation that aims to maximize the benefits of these technologies while minimizing their risks, thus complementing existing data protection regimes with specific regulations for AI.

Key points of the Artificial Intelligence Act (AI Act)

The Artificial Intelligence Act (AI Act) represents an important milestone in the regulation of artificial intelligence (AI) within the European Union. It aims to shape the development and use of AI systems in such a way that they are safe, ethical and in line with the fundamental rights of citizens. The key points of the AI Act are explained in detail below.

Risk-based approach

The AI Act introduces a risk-based approach that classifies AI systems into different risk categories. This approach is central to understanding and applying the regulation, as it determines what type of regulation is applied to a particular AI system.

The 4 risk categories of the AI Act

The approach distinguishes between four risk categories:

  1. Unacceptable risk: AI systems that pose a clear threat to people’s safety, livelihoods and rights will be banned.
  2. High risk: AI systems that can have a significant impact on people’s health, safety or fundamental rights are subject to strict safety requirements. These requirements include aspects such as data quality, transparency, human monitoring and robustness.
  3. Limited risk: AI systems that must meet certain transparency requirements but are not subject to any other strict regulations. This could include chatbots, for example, where users have to be informed that they are interacting with an AI system.
  4. Minimal or no risk: Most AI applications fall into this category and can be used freely as long as they comply with applicable laws.
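The four-tier classification above can also be sketched in code. The following Python snippet is purely illustrative: the tier names follow the Act, but the example systems, the mapping and the obligation summaries are simplified assumptions, not the Act’s actual classification rules (which are set out in its annexes).

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"                  # strict safety requirements
    LIMITED = "limited risk"            # transparency obligations only
    MINIMAL = "minimal or no risk"      # freely usable under existing law

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "social scoring system": RiskCategory.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskCategory.HIGH,
    "customer-service chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}

def obligations(category: RiskCategory) -> str:
    """Summarize the regulatory consequence of each tier (simplified)."""
    return {
        RiskCategory.UNACCEPTABLE: "prohibited",
        RiskCategory.HIGH: "conformity assessment, data quality, human oversight",
        RiskCategory.LIMITED: "transparency duties (e.g. disclose AI interaction)",
        RiskCategory.MINIMAL: "no additional obligations",
    }[category]

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier.value} -> {obligations(tier)}")
```

The point of the sketch is the structure of the regulation, not the legal detail: each system falls into exactly one tier, and the tier alone determines which obligations apply.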

Prohibited practices

The AI Act sets out specific AI applications that are prohibited due to their potential threat to the fundamental rights and safety of citizens. These prohibited practices include:

  • Biometric categorization: The automated recognition of sensitive characteristics such as gender, race, political opinion or sexual orientation in a way that could promote discrimination.
  • Untargeted scraping of facial images: The collection of facial images from the internet or surveillance footage without specific targeting or consent.
  • Emotion recognition systems: Use in workplaces and educational institutions that could compromise the privacy and psychological integrity of the individuals concerned.
  • Social scoring: Systems that evaluate the behavior of people in different contexts and can lead to social or economic advantages or disadvantages.

Restrictions and exceptions

While the AI Act restricts the use of certain AI applications in law enforcement, such as automated facial recognition in public spaces, it also allows for certain exceptions. These exceptions are generally subject to strict conditions and must pursue clearly defined objectives, such as the prevention of serious crime or the search for missing persons. The use of such technologies must be proportionate, necessary and covered by appropriate legislation.

These key points of the AI Act reflect the European Union’s ambition to find a balanced approach that promotes innovation while protecting the security and fundamental rights of citizens. The implementation of the risk-based approach, the prohibition of certain practices and the consideration of exceptions together form a framework that is intended to guide the development and use of AI systems in an ethically responsible manner.

Statements and criticism of the Artificial Intelligence Act (AI Act)

Since its presentation by the European Commission in April 2021, the Artificial Intelligence Act (AI Act) has generated a wide range of comments and criticism from various sectors. The most important voices include EU Internal Market Commissioner Thierry Breton, the rapporteurs of the EU Parliament as well as data protection officers and consumer advocates.

Statements by EU Internal Market Commissioner Thierry Breton

Thierry Breton has described the AI Act as an important step towards the safe and ethical use of AI technologies in the EU. He stated that Europe is setting global standards for trustworthy AI with the AI Act, which creates the world’s first binding law on artificial intelligence to reduce risks, create opportunities, combat discrimination and ensure transparency. In his view, the AI Act takes a balanced approach that promotes innovation while protecting the security and fundamental rights of citizens.

Opinions of the rapporteurs of the EU Parliament

The rapporteurs of the EU Parliament, including Brando Benifei and Dragos Tudorache, also responded positively to the AI Act. They stressed that with the Act, the EU is taking a leading role in the global discussion on the regulation of AI. Benifei highlighted that the Act combats discrimination and promotes transparency, while Tudorache pointed out that the AI Act is only a first step and that further regulation will be necessary, as AI has a profound impact on the social contract.

Criticism from data protection officers and consumer advocates

Despite the positive reviews, there were also critical voices. Data protection officers such as the German Federal Data Protection Commissioner Ulrich Kelber praised the fact that the AI Act strengthens protection against automated decision-making, but criticized the lack of a clear ban on remote biometric identification in public spaces. Kelber recommended that the German government should use the opening clause for stricter national bans.

Consumer advocates, represented by organizations such as the German Federation of Consumer Organizations (vzbv), welcomed the improvements in terms of transparency and the ban on manipulative practices. However, they pointed out that the AI Act in its current form is not sufficient to effectively protect consumers from the risks of AI. One particular criticism was that some manipulative practices are only prohibited if they are intentional, which can be difficult to prove in practice.

Criticism of dystopian technologies and automated decision-making

A recurring point of criticism concerns the insufficiently addressed dystopian technologies and the large gaps in protection against automated decision-making. Critics argue that the AI Act does not sufficiently prohibit or restrict certain applications of AI, such as blanket surveillance systems and the use of “video lie detectors”. These technologies carry the risk of massively violating the privacy and fundamental rights of citizens without adequate safeguards.

In summary, it can be said that the AI Act is seen as a significant step towards the regulated use of AI technologies in the EU. However, there are clear calls for improvements, particularly with regard to the protection of privacy and the prevention of misuse by dystopian technologies. The debate surrounding the AI Act highlights the need to find a balance between promoting innovation and protecting fundamental rights.

Effects and future regulations of the Artificial Intelligence Act (AI Act)

Role of the AI Act as a supplement to the GDPR

The Artificial Intelligence Act (AI Act) is an important addition to the European Union’s existing General Data Protection Regulation (GDPR). While the GDPR focuses on the protection of personal data and strengthens the rights of individuals with regard to the processing of their data, the AI Act specifically addresses the challenges and risks associated with the development and use of artificial intelligence.

One of the most important aspects of the AI Act is the strengthening of protection against automated decision-making. The AI Act sets out detailed transparency and oversight requirements for AI systems that are capable of making decisions that can have a significant impact on individuals. This includes provisions to ensure that such systems are transparent, fair and non-discriminatory. These requirements supplement the rights that individuals already have under the GDPR, in particular the right not to be subject exclusively to automated decision-making that produces legal effects concerning them or similarly significantly affects them.

Proposals and recommendations for stricter national bans and extended protective measures

Data protection officers and consumer protection organizations have submitted a series of proposals and recommendations to further strengthen the protective measures offered by the AI Act:

  1. Clear ban on remote biometric identification in public spaces: Data protection officers have repeatedly called for a clear ban on the use of remote biometric identification technologies in public spaces. This would strengthen the protection of citizens’ privacy and fundamental rights against invasive surveillance.
  2. Stricter national bans: It was recommended that Member States should take the opportunity to introduce stricter national bans that go beyond the provisions of the AI Act. This could, for example, include stricter regulations for the use of AI in certain sensitive areas such as law enforcement or healthcare.
  3. Enhanced safeguards against discrimination: Consumer protection organizations are calling for stronger measures to ensure that AI systems do not make discriminatory decisions. This could include developing stricter guidelines for the design and review of algorithms to minimize bias.
  4. Increased transparency and accountability: It was proposed to increase the requirements for the transparency of AI systems and the accountability of their providers. This could include the obligation to subject algorithms to independent audits and to make detailed information on the functioning and decision-making processes of AI systems publicly available.
  5. Strengthening the rights of data subjects: In order to further improve the protection of individuals, additional measures could be taken to give those affected more control over the decisions made by AI. This includes the right to request a human review and improved opportunities to take action against decisions that are perceived as unfair or discriminatory.

The AI Act is therefore seen as an important step towards regulating the use of AI technologies, but further measures are needed at both EU and national level to ensure the protection of fundamental rights and the ethical use of AI.

The proposals and recommendations of data protection officers and consumer protection organizations underline the need to continuously work on improving and adapting the regulatory framework in order to meet the challenges of rapid technological development.

Preparation for the implementation of the Artificial Intelligence Act (AI Act)

The introduction of the Artificial Intelligence Act (AI Act) represents a significant regulatory change for players in the AI ecosystem. Early and thorough preparation is essential to ensure that companies, research institutions and other affected organizations can meet the new requirements. Here are detailed tips and recommendations for preparing for the implementation of the AI Act.

Importance of early preparation

  • Understanding the requirements: It is crucial that all players in the AI ecosystem understand the AI Act and its specific requirements. This includes knowing which AI systems are classified as high-risk and what specific obligations are associated with them.
  • Risk-based assessment: Organizations should begin to conduct a risk-based assessment of their AI systems to determine which of their applications may fall under the more stringent requirements of the AI Act.
  • Early adaptation: Early adaptation to the requirements of the AI Act enables organizations to minimize potential compliance risks and ensure the smooth deployment of their AI systems.

Integration into existing AI systems

  • Quality management: Organizations should examine how the requirements of the AI Act can be integrated into their existing quality management systems. This could include the development of new quality criteria for the development and use of AI systems to ensure their safety, transparency and fairness.
  • Risk management: The AI Act requires a risk-based approach to assessing and managing the risks associated with AI systems. Organizations should review and adapt their existing risk management processes to ensure that they can effectively identify, assess and manage the specific risks of AI systems.
  • Documentation and reporting: The AI Act places particular emphasis on documentation and reporting in order to improve the transparency and traceability of AI systems. Organizations should implement procedures to document in detail the development, deployment and performance of their AI systems, including information about the training data, algorithms and decision-making processes used.
  • Training and awareness: It is important that all employees involved in the development, deployment or monitoring of AI systems are trained in the requirements of the AI Act. This also includes raising awareness of ethical aspects and the potential risks of AI systems.

Recommendations for a successful AI implementation

  • Early involvement of legal and compliance departments: To ensure full compliance with the AI Act, organizations should involve their legal and compliance teams early in the process.
  • Collaboration with external experts: In some cases, it may make sense to bring in external experts or consultants to address specific aspects of compliance with the AI Act.
  • Ongoing monitoring and adaptation: As the regulatory framework for AI will continue to evolve, it is important that organizations regularly review their compliance measures and adapt them as necessary.

Preparing for the implementation of the AI Act requires a proactive and strategic approach. By integrating the requirements into existing quality and risk management systems, organizations can not only ensure compliance, but also strengthen trust in their AI systems and thus help to promote the ethical and responsible use of AI.


Rock the Prototype Podcast

The Rock the Prototype Podcast and the Rock the Prototype YouTube channel are the perfect place to go if you want to delve deeper into the world of web development, prototyping and technology.

🎧 Listen on Spotify: 👉 Spotify Podcast: https://bit.ly/41pm8rL

🍎 Enjoy on Apple Podcasts: 👉 https://bit.ly/4aiQf8t

In the podcast, you can expect exciting discussions and valuable insights into current trends, tools and best practices – ideal for staying on the ball and gaining fresh perspectives for your own projects. On the YouTube channel, you’ll find practical tutorials and step-by-step instructions that clearly explain technical concepts and help you get straight into implementation.

Rock the Prototype YouTube Channel

🚀 Rock the Prototype is 👉 Your format for exciting topics such as software development, prototyping, software architecture, cloud, DevOps & much more.

📺 👋 Rock the Prototype YouTube Channel 👈  👀 

✅ Software development & prototyping

✅ Learning to program

✅ Understanding software architecture

✅ Agile teamwork

✅ Test prototypes together

THINK PROTOTYPING – PROTOTYPE DESIGN – PROGRAM & GET STARTED – JOIN IN NOW!

Why is it worth checking back regularly?

Both formats complement each other perfectly: in the podcast, you can learn new things in a relaxed way and get inspiring food for thought, while on YouTube you can see what you have learned directly in action and receive valuable tips for practical application.

Whether you’re just starting out in software development or are passionate about prototyping, UX design or IT security, we cover the technology trends that are really relevant – and with the Rock the Prototype format, you’ll always find content to expand your knowledge and take your skills to the next level!