Iengo, Salvatore (2014) Human Gesture Recognition and Robot Attentional Regulation for Human-Robot Interaction. [Doctoral thesis]
Full text: | iengo_salvatore_26.pdf (5MB) |
Item Type: | Doctoral thesis |
---|---|
Resource language: | English |
Title: | Human Gesture Recognition and Robot Attentional Regulation for Human-Robot Interaction |
Creators: | Iengo, Salvatore (salvatore.iengo@unina.it) |
Date: | 31 March 2014 |
Number of Pages: | 107 |
Institution: | Università degli Studi di Napoli Federico II |
Department: | Ingegneria Elettrica e delle Tecnologie dell'Informazione |
Doctoral school: | Ingegneria dell'informazione |
Doctoral programme: | Ingegneria informatica ed automatica |
Doctoral cycle: | 26 |
Doctoral programme coordinator: | Garofalo, Francesco (grf.fnc@gmail.com) |
Tutor: | Villani, Luigi (email unspecified) |
Keywords: | gesture recognition, deixis, attentional regulation |
MIUR scientific-disciplinary sectors: | Area 09 - Ingegneria industriale e dell'informazione > ING-INF/05 - Sistemi di elaborazione delle informazioni |
Date Deposited: | 09 Apr 2014 15:54 |
Last Modified: | 27 Jan 2015 14:23 |
URI: | http://www.fedoa.unina.it/id/eprint/9797 |
Abstract
Human-Robot Interaction (HRI) is defined as the study of interactions between humans and robots: it involves several different disciplines, such as computer science, engineering, the social sciences, and psychology. In HRI, the perceptual challenges are particularly complex because of the need to perceive, understand, and react to human activity in real time. The key aspects of perception are multimodality and attention. Multimodality allows humans to move seamlessly between different modes of interaction, from visual to voice to touch, according to changes in context or user preference, while attention is the cognitive process of selectively concentrating on one aspect of the environment while ignoring others. Multimodality and attention also play a fundamental role in HRI: multimodality allows a robot to interpret and react to various human stimuli (e.g. gesture, speech, eye gaze), while implementing attentional models in the robot's control behavior allows it to save computational resources and react in real time by selectively processing the salient perceived stimuli. The aim of this thesis is to present novel methods for human gesture recognition, including pointing gestures, which are fundamental when interacting with mobile robots, and a speech-driven robot attentional regulation mechanism. In the context of continuous gesture recognition, the goal is to provide a system that can be trained online with few samples and can cope with intra-user variability in gesture execution. The proposed approach relies on the generation of an ad-hoc Hidden Markov Model (HMM) for each gesture, exploiting a direct estimation of its parameters. Each model represents the best prototype candidate from the associated gesture training set. The generated models are then employed within a continuous recognition process that provides the probability of each gesture at each step.
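The continuous recognition process described above can be sketched in miniature as follows. This is an illustrative toy, not the thesis implementation: each gesture gets a left-to-right HMM built directly from a single 1-D prototype sequence (one state per prototype sample, Gaussian emissions centred on it), and a scaled forward step run in parallel over all models yields a normalised probability per gesture at every new observation. All function names, the self-transition probability, and the emission width are assumptions chosen for the example.

```python
import numpy as np

def make_prototype_hmm(prototype, self_prob=0.5, sigma=0.3):
    """Build a left-to-right HMM directly from one prototype sequence:
    one state per sample, Gaussian emission centred on that sample.
    (Illustrative direct parameter estimation; values are placeholders.)"""
    n = len(prototype)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = self_prob
        A[i, min(i + 1, n - 1)] += 1.0 - self_prob  # stay or advance
    pi = np.zeros(n)
    pi[0] = 1.0  # always start in the first state
    return {"A": A, "pi": pi, "means": np.asarray(prototype, float), "sigma": sigma}

def step_forward(model, alpha, obs):
    """One scaled forward-algorithm step: update the state belief and
    return the likelihood of the new observation under this model."""
    b = np.exp(-0.5 * ((obs - model["means"]) / model["sigma"]) ** 2)
    alpha = (model["pi"] if alpha is None else alpha @ model["A"]) * b
    lik = alpha.sum()
    return (alpha / lik if lik > 0 else alpha), lik

def continuous_recognition(models, stream):
    """Feed each observation to all gesture models in parallel and report,
    at each step, the normalised probability of every gesture."""
    alphas = {g: None for g in models}
    history = []
    for obs in stream:
        liks = {}
        for g, m in models.items():
            alphas[g], liks[g] = step_forward(m, alphas[g], obs)
        total = sum(liks.values())
        history.append({g: liks[g] / total for g in liks})
    return history

models = {"up": make_prototype_hmm([0.0, 1.0, 2.0]),
          "down": make_prototype_hmm([2.0, 1.0, 0.0])}
history = continuous_recognition(models, [0.0, 1.0, 2.0])  # stream matches "up"
```

Running the stream `[0.0, 1.0, 2.0]`, which follows the "up" prototype, makes the "up" model's per-step probability dominate, illustrating how recognition can be read off at every step rather than only at gesture end.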
A computational method for pointing gesture recognition is also presented; it combines a geometrical solution with a machine learning solution. Once the gesture recognition models are described, a human-robot interaction system that exploits emotion and attention to regulate and adapt the robot's interactive behavior is proposed. In particular, the system focuses on the relation between arousal, predictability, and attentional allocation, considering as a case study a robotic manipulator interacting with a human operator.
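To make the geometrical half of such a combined pointing estimator concrete, here is a minimal sketch under assumed conventions (the function name, the head-through-hand ray model, and the flat-floor target plane are illustrative choices, not the method described in the thesis): the pointed location is taken as the intersection of the ray from the head through the hand with the floor plane.

```python
import numpy as np

def pointing_target_on_floor(head, hand, floor_z=0.0):
    """Illustrative geometric pointing estimate: cast the ray from the
    head through the hand and intersect it with the plane z = floor_z.
    Returns the 3-D target point, or None if no valid intersection exists."""
    head, hand = np.asarray(head, float), np.asarray(hand, float)
    d = hand - head                      # ray direction
    if abs(d[2]) < 1e-9:
        return None                      # ray parallel to the floor
    t = (floor_z - head[2]) / d[2]       # ray parameter at the plane
    if t <= 0:
        return None                      # intersection behind the pointer
    return head + t * d
```

For example, a head at (0, 0, 1.7) m and a hand at (0.3, 0, 1.4) m yield a floor target 1.7 m in front of the person; a learned component could then refine or validate such geometric estimates, in the spirit of the combined approach mentioned above.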