Research Topic: Body-Gesture (Body-Arm Command) Classification and Recognition – Building a Human-Robot Interface

The research objectives concern the development of advanced algorithms for the recognition of human body gestures and dual-arm commands, based on extensive motion-capture experiments with human subjects. The main goal of the R&D activities is the development of an advanced body-gesture interpreter that represents the functional core of a specialized human-robot interface. It can be implemented with stereo vision on different mobile service robots. The interface to be developed enables robots to understand human commands and gestures, to imitate human models, and to perform different tasks in an anthropomorphic way.

Human beings, as intelligent creatures, are able to produce different body and arm gestures that have symbolic meanings and can be understood by other people (Fig. 1). For the purpose of body-gesture recognition and human arm-command understanding, a specialized gesture interpreter has to be developed. To this end, a variety of motion-capture experiments with human subjects have been performed under laboratory conditions (Fig. 2). These measurements of a biological system represent a rich experimental foundation for building the interpreter. The body-gesture interpreter can be synthesized through several steps: (i) motion-capture experiments with human beings (Fig. 2), (ii) image and data processing of sensor (vision and range-finder) data, (iii) body-gesture feature extraction, (iv) learning and classification of body gestures, (v) modeling and graphical interpretation of body gestures, and (vi) reproduction (imitation) of human gestures by an appropriate robotic system (e.g., Fig. 5). The layout of building the body-gesture interpreter is presented in Fig. 5; a minimal code sketch of steps (ii) and (iii) is given below.
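As an illustration of how steps (ii) and (iii) might be organized in code, the following is a minimal sketch; the function names, the moving-average filter, and the choice of summary statistics are assumptions made for illustration, not the project's actual implementation.

```python
import numpy as np

def preprocess(raw_frames: np.ndarray) -> np.ndarray:
    """Step (ii): smooth raw sensor data, here with a 5-sample moving average."""
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, raw_frames)

def extract_features(joint_angles: np.ndarray, dt: float) -> np.ndarray:
    """Step (iii): derive velocities/accelerations and build a feature vector."""
    velocities = np.gradient(joint_angles, dt, axis=0)
    accelerations = np.gradient(velocities, dt, axis=0)
    # Per-joint summary statistics; the actual biomechanical criteria differ.
    return np.concatenate([joint_angles.mean(axis=0),
                           np.abs(velocities).max(axis=0),
                           accelerations.std(axis=0)])
```

Steps (iv)–(vi) would then classify the resulting feature vector against a learned gesture database and trigger the corresponding imitation behavior on the robot.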

Motion-capture experiments provide a broad experimental foundation for processing and classifying different types of body gestures and body-arm commands. As the result of body-gesture capturing, the corresponding data files of the body's generalized coordinates (and their derivatives) are obtained. In Figs. 3 and 4, an animation of the human body gesture of type ‘COME’ (Fig. 1), as well as the actual joint trajectories obtained using the corresponding kinematic model, are presented.
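A hedged sketch of how such a data file might be read and numerically differentiated follows; the plain-text file layout (time in the first column, joint coordinates in the remaining columns) is an assumption, since the report does not specify the recording format.

```python
import numpy as np

def load_capture_file(path: str):
    """Load one motion-capture trial and numerically differentiate it."""
    data = np.loadtxt(path)            # assumed: one row per sample
    t, q = data[:, 0], data[:, 1:]     # time [s], generalized coordinates [rad]
    dt = float(np.mean(np.diff(t)))    # average sampling interval
    q_dot = np.gradient(q, dt, axis=0)       # joint angular velocities
    q_ddot = np.gradient(q_dot, dt, axis=0)  # joint angular accelerations
    return t, q, q_dot, q_ddot
```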

Different body gestures are characterized by different sets of joint coordinates (e.g., Figs. 4a and 4b), joint angular velocities, and joint accelerations. Accordingly, several biomechanical criteria for gesture-feature extraction are adopted, making it possible to distinguish even similar limb movements. In this way, the features characteristic of any particular body gesture or arm command can be extracted and stored. It is also possible to group different gestures into categories with similar characteristics. For gesture classification and recognition, different learning structures and algorithms, as well as fuzzy sets, are used.
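To illustrate the fuzzy-set side of the classification, the sketch below scores how strongly a single biomechanical feature matches a gesture class; the triangular membership function and all numeric ranges are purely illustrative assumptions, not the criteria used in the project.

```python
def triangular(x: float, low: float, peak: float, high: float) -> float:
    """Degree (0..1) to which value x belongs to a triangular fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# Example: membership of a peak shoulder velocity of 2.1 rad/s in a
# hypothetical 'COME' velocity set centered at 2.0 rad/s.
score = triangular(2.1, low=1.0, peak=2.0, high=3.5)
print(f"membership in 'COME' velocity set: {score:.2f}")  # -> 0.93
```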

By comparing the features of an unknown gesture with those already stored in the database, it is possible to identify the type of the gesture under examination. When a newly captured gesture can be identified as one existing in the database, we speak of gesture recognition.
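A minimal sketch of this comparison step, assuming recognition is implemented as nearest-neighbor matching in feature space (the report does not name the exact matching rule), might look as follows; the distance threshold is an illustrative assumption.

```python
import numpy as np

def recognize(features, database, threshold=1.0):
    """Return the label of the closest stored gesture, or None if no stored
    gesture lies within the distance threshold (i.e., gesture is unknown)."""
    best_label, best_dist = None, np.inf
    for label, stored_features in database.items():
        dist = np.linalg.norm(features - stored_features)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None
```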

Using the human-robot interface for gesture recognition and interpretation, it is possible to develop a robot control system capable of imitating human gestures, as presented in Fig. 3. It is important to stress that the interface described in this report is designed for service robots, enabling accurate recognition of body gestures captured by different robotic vision systems (e.g., cameras) and range-finders. The interpreter was developed using accurate motion-capture experimental data, but it will also be used with other video sensors customized for robotic systems.


Fig. 1. Different human body gestures and body-arm commands used for building the body-gesture classifier and interpreter.


Fig. 2. Motion-capture experiments concerning different body gestures and body-arm commands (in cooperation with the University of Reunion, Faculty of Sports and Physical Activities (CURAPS), Le Tampon, France).


Fig. 3. Animation of the body-arm command ‘COME’ (Fig. 1), obtained by simulation of the kinematic model of the human body.


Fig. 4. Body gesture ‘COME’: a) identified joint angles of the body trunk; b) identified joint angles of the right and left arms (shoulder and elbow joints).


Fig. 5. Layout of body-gesture interpretation and human-gesture imitation using the considered human-robot interface.



Fig. 6. Body-gesture recognition with Kinect.