On December 9, 2023, the EU Commission announced that it welcomes the provisional agreement reached between the European Parliament and the Council on the Artificial Intelligence Act (AI Act) it proposed in April 2021.
However, the text of the AI Act on which the provisional agreement is based has not yet been published, as formal approval of the final version by the negotiating partners is still pending.
Definitions and scope
To ensure that the definition of an AI system (see blockquote below) provides sufficiently clear criteria for distinguishing AI from simpler software systems, the provisional agreement aligns the definition with the approach proposed by the OECD, although it does not repeat it word for word.
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
The provisional agreement also clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area. Furthermore, the AI Act will not apply to systems used exclusively for military or defence purposes. Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.
According to article 4a of the compromise text of the AI Act, all actors to whom the AI Act applies are bound by the following fundamental principles with regard to the development of AI systems:
Human agency and oversight
AI systems should serve humans, respect human dignity and personal autonomy, and function in such a way that they can be controlled and monitored by humans.
Technical robustness and safety
Unintended and unexpected harm should be minimized, and AI systems should be robust against unintended problems.
Data protection and data governance
AI systems should be developed and used in accordance with data protection regulations.
Transparency
AI systems must be traceable and explainable, and people must be made aware that they are interacting with an AI system.
Diversity, non-discrimination and fairness
AI systems should be developed and used in a way that involves different stakeholders and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory effects.
Social and environmental wellbeing
AI systems should be sustainable and environmentally friendly and developed and used for the benefit of all people.
New governance architecture
Following the new rules on GPAI models and the evident need for their enforcement at EU level, an AI Office will be set up within the Commission, tasked with overseeing these most advanced AI models, contributing to fostering standards and testing practices, and enforcing the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.
The AI Board, comprising member states’ representatives, will remain a coordination platform and an advisory body to the Commission and will give member states an important role in the implementation of the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders such as industry representatives, SMEs, start-ups, civil society and academia will be set up to provide technical expertise to the AI Board.
The fines for violations of the AI Act were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information.
However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in the event of infringements of the provisions of the AI Act.
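The “whichever is higher” mechanism described above can be sketched in a few lines. The tier values are taken from the provisional agreement; the function and dictionary names are illustrative, not part of the Act:

```python
def applicable_fine(global_turnover_eur: float, fixed_cap_eur: int, turnover_share: float) -> float:
    """Return the higher of the fixed amount and the turnover-based amount."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Tiers per the provisional agreement: (fixed amount in EUR, share of turnover)
TIERS = {
    "prohibited_ai_practices": (35_000_000, 0.07),
    "other_obligations":       (15_000_000, 0.03),
    "incorrect_information":    (7_500_000, 0.015),
}

fixed, share = TIERS["prohibited_ai_practices"]
# For €1 billion global turnover, 7% (€70m) exceeds the €35m fixed amount,
# so the turnover-based cap applies:
fine = applicable_fine(1_000_000_000, fixed, share)
```

Note that the SME and start-up caps mentioned above would lower these ceilings and are not reflected in this sketch.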
The compromise agreement also makes clear that any natural or legal person may complain to the relevant market surveillance authority about non-compliance with the AI Act and may expect such a complaint to be handled in line with that authority’s dedicated procedures.
The political agreement is now subject to formal approval by the European Parliament and the Council and will enter into force 20 days after publication in the Official Journal. The AI Act would then become applicable two years after its entry into force, except for some specific provisions: prohibitions will apply after only 6 months, while the rules on general-purpose AI will apply after 12 months.
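The staggered application dates amount to simple calendar arithmetic. A minimal sketch, assuming a placeholder publication date (the actual date depends on formal approval and publication in the Official Journal):

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    # Clamp to the last day of the target month (e.g. Jan 31 + 1 month).
    last_day = (date(year + month // 12, month % 12 + 1, 1) - timedelta(days=1)).day
    return date(year, month, min(d.day, last_day))

publication = date(2024, 6, 1)                  # placeholder, not the actual date
entry_into_force = publication + timedelta(days=20)

prohibitions_apply = add_months(entry_into_force, 6)     # banned practices
gpai_rules_apply = add_months(entry_into_force, 12)      # general-purpose AI
generally_applicable = add_months(entry_into_force, 24)  # remaining provisions
```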
To bridge the transitional period before the Regulation becomes generally applicable, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.
To promote rules on trustworthy AI at international level, the European Union will continue to work in fora such as the G7, the OECD, the Council of Europe, the G20 and the UN. Just recently, the EU supported the agreement by G7 leaders under the Hiroshima AI process on International Guiding Principles and a voluntary Code of Conduct for Advanced AI systems.
International scope of application: impact on Switzerland
Swiss providers who place AI systems on the market or put them into service in the EU also fall within the territorial scope of the AI Act. In addition, the AI Act will apply to Swiss providers and users of AI systems if the output produced by the AI system is used in the EU.
If Swiss AI providers develop their products not only for Switzerland, but also for marketing and use in the EU, the European AI standards of the AI Act will also de facto apply in Switzerland, regardless of whether and when the Swiss legislator enacts its own AI regulation (see “Regulatory approaches in Switzerland” below).
Regulatory approach in Switzerland
According to a press release dated November 22, 2023, “the Swiss Federal Council wants to harness the potential of artificial intelligence (AI) while minimizing the risks it poses for society”.
The Federal Council also announced that “after examining the developments, opportunities and challenges associated with AI, the Federal Council has instructed the Federal Department of the Environment, Transport, Energy and Communications (DETEC) to identify potential approaches to regulating it by the end of 2024, and to involve all federal agencies responsible in the legal areas affected. (…) With the analysis, the Swiss Federal Council wants to create the basis to issue a concrete mandate for an AI regulatory proposal in 2025 and clarify areas of responsibility.”