Marrone, Stefano (2020) Trustworthy AI: the Deep Learning Perspective. [PhD thesis]

Full text: stefano_marrone_32.pdf (12MB, preview and download available)
Document type: PhD thesis
Language: English
Title: Trustworthy AI: the Deep Learning Perspective
Authors: Marrone, Stefano (stefano.marrone@unina.it)
Date: 13 March 2020
Number of pages: 186
Institution: Università degli Studi di Napoli Federico II
Department: Ingegneria Elettrica e delle Tecnologie dell'Informazione
PhD programme: Information technology and electrical engineering
PhD cycle: 32
PhD programme coordinator: Riccio, Daniele (daniele.riccio@unina.it)
Tutor: Sansone, Carlo (email not defined)
Keywords: Ethics; Privacy; Fairness; Deep Learning; Artificial Intelligence; Adversarial Perturbations
MIUR scientific-disciplinary sectors: Area 09 - Ingegneria industriale e dell'informazione > ING-INF/05 - Sistemi di elaborazione delle informazioni
Deposited on: 05 Apr 2020 20:38
Last modified: 05 Nov 2021 11:40
URI: http://www.fedoa.unina.it/id/eprint/13209

Abstract

The impact of AI, and in particular of deep learning, on industry has been so disruptive that it gave rise to a new wave of research and applications that goes under the name of Industry 4.0. This expression refers to the application of AI and cognitive computing to enable effective data exchange and processing in manufacturing technologies, services and transport, laying the foundation of what is commonly known as the fourth industrial revolution. As a consequence, today's development trend is increasingly focused on AI-based, data-driven approaches, mainly because leveraging users' data (such as location, action patterns, social information, etc.) allows applications to adapt to them, enhancing the user experience. To this aim, tools like automatic image tagging (e.g. those based on face recognition), voice control and personalised advertising process enormous amounts of data (often remotely, due to the huge computational effort required) that are too often rich in sensitive information. Artificial intelligence has thus proved to be so effective that today it is increasingly being used also in critical domains such as facial recognition, biometric verification (e.g. fingerprints) and autonomous driving. Although this opens unprecedented scenarios, it is important to note that its misuse (malicious or not) can lead to unintended consequences, such as unethical or unfair use (e.g. discrimination on the basis of ethnicity or gender), or can harm people's privacy. Indeed, if on one hand the industry is pushing toward a massive use of AI-enhanced solutions, on the other it is not adequately supporting research on an end-to-end understanding of the capabilities and vulnerabilities of such systems. The consequences can attract very negative media attention, especially in borderline domains such as those related to subjects' privacy or to ethics and fairness, like user profiling, fake news generation, the reliability of autonomous driving systems, etc.

We strongly believe that, being just a (very powerful) tool, AI is not to blame for its misuse. Nonetheless, we claim that in order to develop a more ethical, fair and secure use of artificial intelligence, all the involved actors (in primis users, developers and legislators) must have a very clear idea about some critical questions, such as "what is AI?", "what are the ethical implications of its improper usage?", "what are its capabilities and limits?", "is it safe to use AI in critical domains?", and so on. Moreover, since AI is very likely to be an important part of our everyday life in the near future, it is crucial to build trustworthy AI systems. Therefore, the aim of this thesis is to take a first step towards addressing the crucial need to raise awareness about the reproducibility, security and fairness threats associated with AI systems, from a technical perspective as well as from the governance and ethical points of view. Among the several issues that should be faced, in this work we try to address three central points: understanding what "intelligence" means and implies within the context of artificial intelligence; analysing the limitations and weaknesses that might affect an AI-based system, independently of the particular adopted technology or technical solutions; and assessing system behaviour in the case of successful attacks and/or in the presence of degraded environmental conditions.
To this aim, the thesis is divided into three main parts: in the first part we introduce the concept of AI, focusing on deep learning and on some of its most crucial issues, before moving to the ethical implications associated with the notion of "intelligence"; in the second part we focus on the perils associated with the reproducibility of results in deep learning, also showing how a proper network design can be used to limit their effects; finally, in the third part we address the implications that AI misuse can have in a critical domain such as biometrics, proposing some attacks duly designed for the scope. The cornerstone of the whole thesis is adversarial perturbations, a term referring to the set of techniques intended to deceive AI systems by injecting a small perturbation (noise, often totally imperceptible to a human being) into the data. The key idea is that, although adversarial perturbations are a considerable concern to domain experts, they also open new possibilities both to favour a fair use of artificial intelligence systems and to better understand the "reasoning" such systems follow in order to reach the solution of a given problem. Results are presented for applications in critical domains such as medical imaging, facial recognition and biometric verification. However, the concepts and methodologies introduced in this thesis are intended to be general enough to be applied to different real-life applications.
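Purely as an illustration of the general idea of an adversarial perturbation described in the abstract, below is a minimal sketch of the best-known such attack, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), written in Python with PyTorch. It is not one of the attacks proposed in the thesis: the model, the input x, the label y and the budget epsilon are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Clone the input and track gradients with respect to it.
        x_adv = x.clone().detach().requires_grad_(True)
        # Loss of the model's prediction against the true label y.
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()  # gradient of the loss w.r.t. the input pixels
        # Step in the direction that increases the loss the most, bounded
        # by epsilon in the L-infinity norm: small, often imperceptible noise.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Here epsilon controls the trade-off between the strength of the attack and the visibility of the noise: the smaller it is, the harder the perturbation is for a human being to perceive.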
