European AI Act – Possible risks for innovative European SMEs?

  • European AI regulation is on the way: the draft proposal for the European AI Act, published in April 2021, is now in the hands of the European Parliament, where deputies will discuss and vote on it.

  • While DIGITAL SME welcomes a harmonised European approach, we fear the current proposal could burden innovative European SMEs.

  • The complex and costly conformity assessment process described in the Act might push innovative solutions out of the market, while placing a lot of responsibilities in the hands of standardisation bodies in which SMEs are under-represented.

  • The proposal should reflect the very open, distributed and iterative nature of AI development when it comes to liability and go-to-market requirements.

Artificial intelligence (AI), its innovation potential and its impact on society have been on the radar of legislators, businesses and citizens in recent years. The European Union is now in the midst of setting a new law for AI. In April 2021, the Commission published its proposal for the European AI Act. The draft law is now in the hands of the European Parliament, where deputies will discuss and vote on it in the coming months. To fuel the discussion and highlight the perspective of European SMEs, our Task Force AI & Standards has developed a position paper that summarises the main points of concern regarding the draft legislative proposal.

What does the AI Act propose?

The new AI Act comes in the form of a regulation, which means that the new law will be directly and uniformly applicable in all 27 EU member states. This harmonised approach is good news for small companies, as it avoids different rules in different countries.

In terms of content, the new AI Act builds on a so-called “risk-based approach”. This means that AI applications are grouped into different categories: forbidden uses, high-risk AI, AI with limited risk (subject to information/transparency obligations) and AI with no or minimal risk. For instance, biometric mass surveillance and general-purpose social scoring will be forbidden, while chatbots will mainly need to fulfil information obligations (so you know you are talking to a chatbot).

However, the devil lies in the details of the high-risk categories. Here, the Act describes a complex system of conformity assessments, similar in nature to, and integrated with, the so-called New Legislative Framework (NLF), Europe’s current approach to product safety. This means that a product falling under the high-risk categories will have to undergo a conformity assessment in order to be sold in Europe (“placed on the market”). This applies, for instance, to a company developing medical devices with AI components, but also to machines (e.g. all machinery products falling under the so-called “Machinery Directive”). In addition, new (so-called “stand-alone”) categories will be addressed, such as AI usage in areas like employment or migration control.

What do we see as risks for SMEs?

Firstly, this complex set-up is precisely what we consider risky for SMEs. AI is still at an early stage of its development, and by proposing very complex rules and moving discussions away from the market to standardisation organisations and technical committees, the Act may push many innovative SME solutions out of the market. The draft law also features a very broad definition of AI, which may cover many existing applications of statistical models and therefore retroactively capture applications already on the market, e.g. in the insurance sector.

Secondly, the proposal places a strong emphasis on the role of standards. However, the development of standards is dominated by larger companies, and representation of SMEs is generally low. As a result, many standards are not very practical for SMEs. Small companies need to invest time and resources to understand and apply them – time and resources they will then lack for further product development or for selling their products.

Thirdly, the development of AI products and services is a highly iterative and open process. Many SMEs rely on openly available tools, algorithms and data, and build their solutions on top of those. Given the iterative nature of this process, questions of liability (the division of responsibility between the “producer”, “developer” and “user” of AI) arise: will a person who makes an algorithm available for everyone’s use already be considered the developer of AI, and thus be held responsible for it? What about an AI product still in the proof-of-concept stage?

How to move forward?

In order not to hamper innovation, Europe needs to continue fostering openness in the development of AI, and limit any impact that the AI Act may have on these open iterative innovation processes. Additionally, SMEs need to be adequately represented in standardisation bodies. Finally, compliance costs need to be kept to a minimum and/or SMEs should be supported financially to go through the assessments.

AI is a key enabling technology, and making it work for SMEs will be crucial for Europe’s competitiveness. Therefore, we stand ready to support the development of a well-balanced law that sets clear rules and enables innovation. Building on our position paper, DIGITAL SME aims to engage with Members of the European Parliament in the coming months to ensure that the voices of AI innovators will be heard and considered in the public debates.

If you want to support our cause, please consider membership options. For instance, if you are an SME, you can join the DIGITAL SME Focus Group AI with more than 150 AI innovators from across Europe!

By Annika Linck, 12 October 2021
