Iengo, Salvatore (2014) Human Gesture Recognition and Robot Attentional Regulation for Human-Robot Interaction. [PhD thesis]

Full text: iengo_salvatore_26.pdf (Download, 5MB)
Document type: PhD thesis
Language: English
Title: Human Gesture Recognition and Robot Attentional Regulation for Human-Robot Interaction
Authors: Iengo, Salvatore (salvatore.iengo@unina.it)
Date: 31 March 2014
Number of pages: 107
Institution: Università degli Studi di Napoli Federico II
Department: Ingegneria Elettrica e delle Tecnologie dell'Informazione
Doctoral school: Ingegneria dell'informazione
PhD programme: Ingegneria informatica ed automatica
PhD cycle: 26
PhD programme coordinator: Garofalo, Francesco (grf.fnc@gmail.com)
Tutor: Villani, Luigi (email not specified)
Keywords: gesture recognition, deixis, attentional regulation
MIUR scientific-disciplinary sectors: Area 09 - Ingegneria industriale e dell'informazione > ING-INF/05 - Sistemi di elaborazione delle informazioni
Deposited on: 09 Apr 2014 15:54
Last modified: 27 Jan 2015 14:23
URI: http://www.fedoa.unina.it/id/eprint/9797

Abstract

Human-Robot Interaction (HRI) is defined as the study of interactions between humans and robots; it involves several disciplines, including computer science, engineering, social sciences, and psychology. For HRI, the perceptual challenges are particularly complex because of the need to perceive, understand, and react to human activity in real time. The two key aspects of perception are multimodality and attention. Multimodality allows humans to move seamlessly between different modes of interaction, from visual to voice to touch, according to changes in context or user preference, while attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring others. Multimodality and attention also play a fundamental role in HRI: multimodality allows a robot to interpret and react to various human stimuli (e.g. gesture, speech, eye gaze), while implementing attentional models in robot control allows a robot to save computational resources and react in real time by selectively processing the most salient perceived stimuli.

This thesis presents novel methods for human gesture recognition, including pointing gestures, which are fundamental when interacting with mobile robots, and a speech-driven robot attentional regulation mechanism. In the context of continuous gesture recognition, the aim is to provide a system that can be trained online with few samples and can cope with intra-user variability during gesture execution. The proposed approach relies on the generation of an ad-hoc Hidden Markov Model (HMM) for each gesture, exploiting a direct estimation of the parameters. Each model represents the best prototype candidate from the associated gesture training set. The generated models are then employed within a continuous recognition process that provides the probability of each gesture at each step. A computational method for pointing gesture recognition is also presented; it combines a geometrical solution with a machine learning solution.

Once the gesture recognition models are described, a human-robot interaction system that exploits emotion and attention to regulate and adapt the robotic interactive behavior is proposed. In particular, the system focuses on the relation between arousal, predictability, and attentional allocation, considering as a case study a robotic manipulator interacting with a human operator.
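The continuous recognition process described in the abstract lends itself to a short illustration. The Python sketch below keeps one left-to-right HMM per gesture and runs a scaled forward-algorithm step on each incoming frame, yielding a normalised probability for every gesture at every step. It is a minimal, hypothetical sketch, not the thesis implementation: the GestureHMM class, the continuous_recognition helper, the spherical-Gaussian emissions, and the state means sampled from a single prototype are all assumptions made for this example.

    import numpy as np

    class GestureHMM:
        # Left-to-right HMM with spherical-Gaussian emissions for one gesture.
        # State means are taken directly from a prototype sample, a crude
        # stand-in for the direct parameter estimation described in the thesis.
        def __init__(self, prototype, n_states=8, var=0.05):
            prototype = np.asarray(prototype, dtype=float)     # shape (T, d)
            idx = np.linspace(0, len(prototype) - 1, n_states).astype(int)
            self.means = prototype[idx]                        # one mean per state
            self.var = var
            A = np.zeros((n_states, n_states))
            for i in range(n_states - 1):                      # stay or advance
                A[i, i] = A[i, i + 1] = 0.5
            A[-1, -1] = 1.0
            self.A = A
            self.pi = np.zeros(n_states)
            self.pi[0] = 1.0
            self.alpha = self.pi.copy()

        def _emission(self, x):
            # Unnormalised Gaussian likelihood of frame x under each state.
            d2 = np.sum((self.means - x) ** 2, axis=1)
            return np.exp(-d2 / (2.0 * self.var))

        def reset(self):
            self.alpha = self.pi.copy()

        def step(self, x):
            # One forward-algorithm step; returns the per-frame scaling factor,
            # used here as the model's likelihood contribution for this frame.
            self.alpha = self._emission(x) * (self.alpha @ self.A)
            s = self.alpha.sum()
            if s > 0.0:
                self.alpha /= s          # rescale to avoid numerical underflow
            return s

    def continuous_recognition(models, stream):
        # models: dict mapping gesture name -> GestureHMM
        # stream: iterable of per-frame feature vectors
        # Yields, for every frame, a dict of normalised gesture probabilities.
        for m in models.values():
            m.reset()
        for x in stream:
            x = np.asarray(x, dtype=float)
            scores = {name: m.step(x) for name, m in models.items()}
            total = sum(scores.values())
            yield {name: (s / total if total > 0 else 1.0 / len(scores))
                   for name, s in scores.items()}

Because each forward step touches only the two allowed transitions per state, the per-frame cost grows linearly with the number of states and models, which is what makes frame-by-frame recognition plausible in real time.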

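For the pointing-gesture part, a common geometrical solution is to cast a ray through two body landmarks and intersect it with the ground plane. The snippet below uses a head-to-hand ray, which is an assumption made for this sketch; the abstract does not specify which landmarks the thesis uses, and the machine learning component of the combined approach is not covered here.

    import numpy as np

    def pointing_target(head, hand, plane_point=(0.0, 0.0, 0.0),
                        plane_normal=(0.0, 0.0, 1.0)):
        # Geometrical pointing estimate: intersect the head-to-hand ray
        # with the ground plane and return the pointed-at 3D location.
        head = np.asarray(head, dtype=float)
        hand = np.asarray(hand, dtype=float)
        p0 = np.asarray(plane_point, dtype=float)
        n = np.asarray(plane_normal, dtype=float)
        d = hand - head                      # ray direction
        denom = d @ n
        if abs(denom) < 1e-9:
            return None                      # ray parallel to the plane
        t = ((p0 - head) @ n) / denom
        if t < 0.0:
            return None                      # plane lies behind the pointer
        return head + t * d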