The EU wants to regulate Artificial Intelligence
The desire to understand human intelligence and to create an intelligent being is ancient. Until now, art was the natural conduit for exploring it. Characters like Dr. Frankenstein’s creature or HAL, the intelligent computer from the movie 2001: A Space Odyssey, respond to this impulse.
With scientific and technical advances in the field of artificial intelligence (“AI”), engineers have gradually entered what was until then the exclusive domain of artists. Following in the engineers’ footsteps, and with artificial intelligence systems now on the market, a new wave of settlers is preparing to enter the field of AI: European legislators and regulators.
Implications of the Proposal for a regulation on artificial intelligence
In April last year, the European Commission published the Proposal for a regulation on artificial intelligence (the “Proposal”), which is expected to be discussed and approved shortly.
The Proposal bans certain allegedly harmful AI practices, regulates high-risk AI systems, and establishes harmonized standards for the development, placing on the market and use of AI systems in the Union. In addition, it proposes a robust regime of fines and sanctions, and the creation of a European Artificial Intelligence Board to monitor compliance.
The common thread of the Proposal is the classification of AI systems according to their level of risk: prohibited systems, high-risk systems, medium- or low-risk systems, and the rest.
Article 5 outlines four AI practices that the European Parliament and the Council believe present such a serious risk to the rights and freedoms of citizens of the Union that they are better prohibited altogether. These are:
- (I) AI that manipulates human behavior with subliminal techniques, causing physical or psychological damage
- (II) AI that exploits certain vulnerabilities of a group to manipulate the behavior of its members, causing physical or psychological damage
- (III) Social credit systems that lead to harmful or unfavorable treatment for a person or group, in a context other than the one in which the data was collected or that is disproportionate or unjustified in relation to their behavior
- (IV) Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes
These bans can be separated into two categories: those that protect an inner sphere of personal autonomy, and those that protect the individual in public spaces. Prohibitions (I) and (II) shield people from techniques such as nudging and from fraudulent attacks targeting the elderly. While phishing and similar attacks are clearly harmful, nudging will have to be evaluated on a case-by-case basis.
On the other hand, prohibitions (III) and (IV) seem designed to prevent the establishment in Europe of a system of social control similar to the one existing in China and other Asian countries.
The scope of the bans is considerable on its own, but even more so in combination with the definition of artificial intelligence proposed in article 3. Under that definition, AI is any software “that can, for a given set of human-defined goals, generate output information such as content, predictions, recommendations, or decisions that influence the environments with which it interacts”. The prohibitions therefore cover a very wide range of systems, from machine learning and probabilistic systems to logical reasoning systems.
The Proposal is only a first attempt by the European legislator to regulate the use of AI, and as such it still needs to be debated and voted on, but it is a step in the right direction. It provides clarity on the legislator’s thinking and contributes to the legal certainty needed to manage the expectations of users, programmers and investors. It reminds artists and inventors that they are not alone, and that their creations will be adopted by society. At Alice we are prepared for, and attentive to, legislative developments in this area, so that we can continue to provide the best service on the market.
Will the proposed regulation on artificial intelligence affect our startup?
At Alice, our facial recognition technology uses machine learning modules and neural networks to deliver the service. Both techniques fall under the definition of artificial intelligence contained in article 3 of the Proposal. However, our system does not fall into any of the prohibited categories, not even the prohibition on biometric identification: our solution authenticates a user’s identity at their request and with their consent, whereas article 5 prohibits the identification of individuals in public spaces, typically without their knowledge. Therefore, Alice’s facial recognition services will continue to operate as normal if the Proposal is approved.
There are many other behavioral and physical biometric authentication techniques on the market, and each has its own advantages and disadvantages. One thing, however, is certain: biometric methods are far more secure than regular passwords because they rely on unique human characteristics. With so many options now available, the future is definitely passwordless.