Legal and Ethical Framework for AI in Europe: Summary of Remarks
Published online by Cambridge University Press: 01 March 2021
Extract
In the European Union (EU) and Europe in general, there is currently no legally binding definition of artificial intelligence (AI), nor a common EU framework for the regulation of AI. The EU Independent High-Level Expert Group on Artificial Intelligence (AI HLEG) defines AI systems as “software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal.” The European Commission in its white paper emphasized that a definition of AI “will need to be sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty.” In particular, the terms “data” and “algorithm” need to be defined. With the exception of autonomous vehicles, few binding AI-specific rules exist so far. A common EU legal framework, as proposed in the European Commission's white paper, would build trust and provide legal certainty for consumers and businesses.
- Type: Contemporary Human Rights Research: Researching Human Rights and Artificial Intelligence
- Copyright: © The Author(s), 2021. Published by Cambridge University Press on behalf of The American Society of International Law.
References
1 Independent High-Level Expert Group on Artificial Intelligence, A Definition of AI: Main Capabilities and Disciplines 6 (Apr. 8, 2019), at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341.
2 European Commission, White Paper on Artificial Intelligence - A European Approach to Excellence and Trust 16 (Feb. 19, 2020), at https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
3 Independent High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI 15 (Apr. 8, 2019), at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419.
4 Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) (GDPR), Art. 22, 2016 OJ (L 119) 1, at http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN.
5 Id.
6 More guidance can be found in the guidelines from the Article 29 Working Party. See Article 29 Data Protection Working Party, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679 (Feb. 6, 2018), at https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826.
7 Independent High-Level Expert Group on Artificial Intelligence, supra note 3.
8 Independent High-Level Expert Group on Artificial Intelligence, Policy and Investment Recommendations for Trustworthy AI (June 26, 2019), at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60343.
9 European Commission, supra note 2.
10 Rechtbank Den Haag, Feb. 5, 2020, Docket No. C-09-550982-HA ZA 18-388, ECLI:NL:RBDHA:2020:865 (Neth.), at https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:865; Jenny Gesley, Netherlands: Court Prohibits Government's Use of AI Software to Detect Welfare Fraud, Glob. Legal Monitor (Mar. 13, 2020), at https://www.loc.gov/law/foreign-news/article/netherlands-court-prohibits-governments-use-of-ai-software-to-detect-welfare-fraud.