David BECK analyzes technological issues from a geopolitical, economic and social perspective.

The EU regulates artificial intelligence with the AI Act

David BECK, Academic - Geo-economics & Tech

Regulatory issues must be confronted with technical reality. AI is not a lawless area: existing regulations already apply to it, whether the GDPR for personal data or sector-specific rules in health (medical devices), finance (trading models, solvency), or automotive, for example.

The term Artificial Intelligence (AI) is used when a machine mimics the cognitive functions that humans associate with human minds. Unlike conventionally programmed software, AI can derive its own algorithms from data through the process of Machine Learning (ML).

The European AI Act adds a specific layer of regulation

AI software, and in particular Machine Learning (ML) software, poses new problems. "Traditional" software (symbolic AI, sometimes called "good old-fashioned AI") is developed from precise specifications, with certain and provable outputs. These are deterministic algorithms: input "a" plus input "b" will always lead to output "c". If this is not the case, there is a bug.

In ML, by contrast, algorithms are not written by hand but learned from large volumes of data, and they operate probabilistically: their results are accurate most of the time, but not always. Moreover, they can base their predictions on irrelevant correlations picked up from the training data. The risk of error is an unavoidable feature of probabilistic ML models, which raises new regulatory issues, especially for high-risk AI systems. Can a probabilistic algorithm be used in a critical system, such as image recognition in an autonomous car? All the more so since ML models are largely unintelligible.
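To make the contrast concrete, here is a minimal Python sketch (purely illustrative; the function names, inputs, and threshold are invented for this example). The first function is deterministic and can be checked against its specification; the second mimics a probabilistic classifier whose thresholded output is sometimes wrong:

```python
# Deterministic symbolic rule: the same inputs always produce the same output.
def speed_limit_exceeded(speed_kmh: float, limit_kmh: float) -> bool:
    return speed_kmh > limit_kmh  # provable against the specification

# Probabilistic ML-style decision: a confidence score from a model is turned
# into a yes/no answer by a threshold, so some answers are inevitably wrong.
def pedestrian_detected(confidence: float, threshold: float = 0.9) -> bool:
    # In a real system `confidence` would come from a trained model;
    # here it is just a number passed in for illustration.
    return confidence >= threshold

print(speed_limit_exceeded(62.0, 50.0))  # always True for these inputs
print(pedestrian_detected(0.87))         # False: a plausible miss below threshold
```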

The 2018 Arizona crash of an Uber autonomous car is a perfect illustration of the problem. The challenge for the future will be to surround these probabilistic systems, which are very efficient at tasks like image recognition, with safeguards. Hybrid systems, which combine ML and symbolic AI, are a promising avenue.
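One hedged sketch of such a safeguard, assuming a hypothetical perception model (`ml_detect_obstacle` below is a stand-in, not a real API): the probabilistic ML output is gated by a deterministic symbolic rule before it can affect the vehicle.

```python
import random

def ml_detect_obstacle() -> float:
    """Stand-in for a probabilistic perception model: returns a confidence
    score between 0 and 1 that an obstacle is ahead."""
    return random.random()

def symbolic_safeguard(confidence: float, speed_kmh: float) -> str:
    """Deterministic rule layered on top of the ML output: the faster the
    vehicle, the lower the confidence required to trigger braking."""
    required = 0.9 if speed_kmh < 30 else 0.5
    return "BRAKE" if confidence >= required else "CONTINUE"

print(symbolic_safeguard(ml_detect_obstacle(), speed_kmh=50))
```

The ML component keeps its statistical strength, while the symbolic layer guarantees a predictable, auditable worst-case behaviour.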

Regulation to address this problem

The draft EU AI regulation will require compliance testing and CE marking for any high-risk AI system placed on the market in Europe.
The first challenge is to define what is meant by a high-risk AI system! Currently, this would include software used by the police, for credit scoring, for screening applicants to universities or for jobs, software embedded in cars, and so on; the list will continue to grow. Real-time facial recognition used by the police for identification purposes will be subject to special constraints, including independent testing and confirmation by at least two human operators before a "match" is accepted, as sketched below.
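That last requirement is easy to picture in code. A minimal sketch, assuming the rule is simply "at least two independent human confirmations" (the function and its inputs are hypothetical, not taken from the draft text):

```python
def match_confirmed(operator_votes: list[bool]) -> bool:
    # A candidate "match" stands only if at least two human
    # operators have independently confirmed it.
    return sum(operator_votes) >= 2

print(match_confirmed([True, False]))        # False: only one confirmation
print(match_confirmed([True, True, False]))  # True: two operators agree
```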

For other high-risk systems, the draft regulation contemplates compliance testing by the company itself. Each system will have to undergo a risk assessment and be accompanied by documentation explaining any residual risks. The systems will have to ensure effective human control, and the operator must keep event logs that make the system auditable. For AI systems integrated into products already covered by regulation, the testing and compliance regime will be governed by the sectoral rules, which avoids creating regulatory duplication.
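As an illustration of what such auditability could involve, here is a minimal sketch of an append-only event log that records each automated decision together with any human override. The field names and file format are assumptions for the example, not requirements from the draft regulation:

```python
import json
import time
from typing import Optional

def log_decision(log_path: str, system_id: str, inputs: dict,
                 model_output: str, human_override: Optional[str] = None) -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs": inputs,
        "model_output": model_output,
        # None means the human operator accepted the model's output.
        "human_override": human_override,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a credit-scoring rejection overturned by a human reviewer.
log_decision("audit.jsonl", "credit-scoring-v2",
             {"applicant_id": "A123"}, "REJECT", human_override="APPROVE")
```

A log of this kind is what lets an auditor reconstruct, after the fact, both what the system decided and whether effective human control was actually exercised.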

Why is there so much distrust of ML algorithms when there are accepted risks in other areas?

The Tricot report of 1975 (the report that led to the adoption of the French Data Processing and Liberties Act of 1978) already voiced distrust of computer systems that reduce human beings to a series of statistical probabilities. By reducing us to numbers, such systems deny our individuality and our humanity. We are used to statistical profiling when it comes to receiving an ad or a music recommendation. But for more serious decisions (a hiring decision, admission to a university, the triggering of a tax audit, the granting of a loan), being judged solely on a statistical profile is problematic, especially when the algorithm that builds the profile is unintelligible!

The algorithm should therefore bring statistical insight to the question, but never replace the discernment and nuance of a human decision maker. Nor should human shortcomings be downplayed: in the U.S., data suggests that judges hand down harsher sentences just before lunch, when they are hungry. Algorithms can help compensate for such human biases.
