Algorithms come in many forms, but what law governs them? In the past we have had a patchwork of laws across the EU, but on Wednesday last week the European Commission published a proposed legal framework on artificial intelligence, plus a coordinated plan with member states to implement it. After discussion and development since 2018, with a white paper on trustworthy AI released last year, this announcement marks a potential turning point in how AI-driven technologies will be legally governed, drawing lines of the law they cannot cross. Some even say it could be as big as the GDPR, which came into force in 2018 – but what are the details, and how will it affect businesses?
Okay, but what is AI?
That’s a great question. Under the framework, the definition is: “‘Artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
This definition covers a range of use cases and is framed around the techniques used to create AI (machine learning, logic-based approaches and statistical approaches are listed). But isn’t it the harmful outcomes of AI we are looking to protect against? Defining AI by technique could lead companies to circumvent the new framework by designing systems that avoid being classified as AI at all. Those who have helped shape this regulation have also highlighted that the definitions devised by the High-Level Expert Group on AI convened by the European Commission, and by the OECD, have not been fully considered in this definition. Nonetheless, at least we now know what we are dealing with when the EU Commission refers to ‘AI’.
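To see how broad that definition is in practice, here is a minimal sketch of the membership test it implies – a hypothetical illustration with simplified technique labels; the `is_ai_system` helper and its inputs are invented for this post, not part of the regulation:

```python
# Hypothetical sketch of the framework's definition of an 'AI system'.
# Annex I lists three families of techniques; the labels and helper
# below are simplified illustrations, not legal tests.

ANNEX_I_TECHNIQUES = {
    "machine learning",   # incl. supervised, unsupervised and reinforcement learning
    "logic-based",        # incl. knowledge- and rule-based approaches
    "statistical",        # incl. Bayesian estimation and optimisation methods
}

def is_ai_system(techniques: set, generates_outputs: bool) -> bool:
    """An 'AI system': software built with at least one Annex I
    technique that generates outputs (content, predictions,
    recommendations, decisions) for human-defined objectives."""
    return generates_outputs and bool(techniques & ANNEX_I_TECHNIQUES)

# Even a simple statistical spam filter meets the definition...
print(is_ai_system({"statistical"}, generates_outputs=True))     # True
# ...which hints at how a vendor might try to engineer around it.
print(is_ai_system({"something else"}, generates_outputs=True))  # False
```

The sketch makes the circumvention worry concrete: the test turns on how a system is built, not on what harm it can do.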
The Legal Framework
“The new AI regulation will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide”
The European Commission Press Release
The framework proposes a risk-based model, with the level of protection depending on the potential risk of an AI system’s usage. Under the regulation, AI falls into four categories: unacceptable, high, limited and minimal risk. At the top of the scale, any AI seen as a threat to human life, rights or wellbeing is considered an unacceptable risk and is therefore banned. This includes social scoring, where people in society are ranked according to predetermined parameters.
Unlike in Black Mirror, where people rated each other, using AI for social scoring (on a government level) is now considered an ‘unacceptable risk’ and is banned.
High-risk AI covers systems that are less dangerous but still vital. The list features industries where the safety of the software is paramount, including critical infrastructure and healthcare robots, and where an incorrect outcome or a lack of transparency would be unfair or unjust, including law enforcement, employment, migration, and private and public services. If a product utilises high-risk AI, it will be subject to risk assessments, documentation requirements and even human oversight before being released to the public. The unacceptable and high-risk categories in particular align with the European Commission’s previous guidelines on trustworthy AI, which proposed that AI needs to be lawful, ethical and robust.
AI which isn’t as hazardous faces a lower threshold of oversight. Limited risk covers AI used in products and services such as chatbots, where the obligation is transparency: users must know that AI is involved. Any other products fall into the minimal risk category, with AI-enabled video games and spam filters given as examples by the European Commission.
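To pull the four tiers together, here is a minimal sketch of the risk-based model – the `RISK_TIERS` table and `obligations` helper are hypothetical simplifications paraphrasing the Commission’s examples, not the regulation’s actual classification mechanics:

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "risk assessment, documentation and human oversight before release"
    LIMITED = "transparency: users must know AI is involved"
    MINIMAL = "no new obligations"

# Hypothetical lookup paraphrasing the Commission's examples;
# real classification follows the regulation's annexes, not a table.
RISK_TIERS = {
    "government social scoring": Risk.UNACCEPTABLE,
    "critical infrastructure control": Risk.HIGH,
    "CV-sorting recruitment software": Risk.HIGH,
    "customer service chatbot": Risk.LIMITED,
    "AI-enabled video game": Risk.MINIMAL,
    "spam filter": Risk.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Defaulting unknown cases to minimal risk is a simplification.
    tier = RISK_TIERS.get(use_case, Risk.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in RISK_TIERS:
    print(obligations(case))
```

The design point is that obligations scale with risk rather than applying uniformly – which is also why where a given use case lands in the tiers matters so much, as the next section discusses.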
Where does the framework fall short?
While the framework is a milestone moment, some of its provisions do not reflect the practical use of AI we see in society today.
AI Categorisation
Firstly, the AI society uses on a day-to-day basis is rarely life-endangering. The categorisation raises the question of where algorithmic use on social media and in product recommendations falls. While AI used in this way is not life-threatening on the face of it, its heavy presence in our online society can affect mental wellbeing and take away freedom of choice.
Facial Recognition Exceptions
Real-time remote biometric identification systems, which are utilised in facial recognition software and surveillance, are highlighted as high risk and as ‘particularly intrusive in the rights and freedoms of the concerned persons’. However, the framework carves out a number of exceptions for their usage which could be stretched without oversight. Considering the bias that is common in these technologies (you can read more about that here from the Algorithmic Justice League, or even watch Coded Bias), it is problematic that the law could allow their usage without oversight, under a caveat.
Supervisory Board
Finally, the framework has also been introduced with a supervisory board and a regime of fines similar to the approach taken in the GDPR. While good in theory, this model has drawn criticism under the GDPR, where enforcement activity has increased over time but the fines actually issued have remained significantly lower.
Do we now have an AI Law?
Technology companies and startups that develop AI will need to pay attention to how the regulation is implemented across the European Union. In particular, companies whose AI underpins safety mechanisms in products, or who operate in the healthcare sector, will be affected. However, this is only the beginning for the framework: it remains to be seen how industries such as advertising technology (AdTech) will respond, how local jurisdictions will react, and how the law will evolve alongside other areas such as data protection and competition law.
In the meantime, you can hear about how AI can affect the work and lives of lawyers in our latest episode. And as discussed, how AI in legal tech will be regulated all comes down to the use case and purpose of the technology!