European regulation on artificial intelligence (AI)
The European Commission recently published its proposal for a regulation on artificial intelligence (AI). With this proposal, the Commission aims to protect the fundamental rights of citizens and companies while promoting the development of AI systems and innovation within this sector. AI is a broad concept and encompasses many applications, for example self-driving cars, facial recognition software, chatbots and software that can detect various diseases in the human body.
In its proposal, the European Commission distinguishes several AI systems involving different risk levels:
- AI systems that lead to an unacceptable risk;
- AI systems that are high risk;
- AI systems that are low (or minimal) risk.
For each category, the European Commission proposes certain rules. AI systems that lead to an unacceptable risk will be banned, for example systems that manipulate human behaviour or override the free will of the user. Similarly, AI systems used for social scoring, allowing governments to assign citizens a score based on their behaviour, will be forbidden.
One risk level lower are high-risk AI systems, for example systems that assist robots in performing surgery, systems that automatically screen job applicants and their resumes during an application procedure, and systems that assess whether someone is entitled to unemployment benefits. These high-risk systems are not banned, but they must comply with strict rules; among other things, high-quality data sets are required to train them. Finally, low-risk AI systems, such as chatbots or (elements of) video games, need to comply with less strict rules. The categorization of certain AI systems may be updated in the future, for example in response to evolving technological developments.
The regulation as proposed by the European Commission will apply to both public and private parties inside and outside the EU, whenever they place AI systems on the European market or the use of their AI systems affects citizens within the EU. Moreover, the regulation applies to both producers and users of AI systems. This means that, for example, both the producer of a screening system and the company that buys and uses that system must comply with the regulation.
National authorities will oversee compliance with the regulation. These authorities can impose fines for non-compliance of up to 30 million euros or, if higher, six percent of the worldwide annual turnover of the company that breaches the rules. The European Commission’s proposal still needs to be approved by the European Parliament and the Council. Once the regulation has been approved, it will apply directly within the EU.
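The fine cap described above is simply the higher of two amounts. A minimal sketch of that calculation (the function name and euro figures are illustrative; the thresholds follow the proposal as summarized here):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative cap on fines under the proposed regulation:
    the higher of 30 million euros or 6% of worldwide annual turnover."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# A company with 1 billion euros in turnover: 6% is 60 million, which
# exceeds the 30 million floor, so the turnover-based cap applies.
print(max_fine_eur(1_000_000_000))  # 60000000.0

# A company with 100 million euros in turnover: 6% is only 6 million,
# so the 30 million floor applies instead.
print(max_fine_eur(100_000_000))  # 30000000.0
```

The actual fine in a given case would of course be set by the national authority; this sketch only shows how the upper limit scales with company size.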
Many companies are involved in AI, and developments in this field follow one another rapidly. Legal counsels increasingly have to deal with these developments as well. Halsten will continue to share important developments within this field to help clients prepare in a timely manner.