
A Hilbert Curve Based Representation of sEMG Signals for Gesture Recognition

Panagiotis Tsinganos 1,2, Bruno Cornelis 2,3, Jan Cornelis 2, Bart Jansen 2,3, Athanassios Skodras 1
1 University of Patras, Department of Electrical and Computer Engineering, 26504 Patras, Greece
2 Vrije Universiteit Brussel, Department of Electronics and Informatics, 1050 Brussels, Belgium
3 imec, 3001 Leuven, Belgium
{panagiotis.tsinganos, skodras}@ece.upatras.gr, {bcorneli, jpcornel, bjansen}@etrovub.be

Abstract - Deep learning (DL) has transformed the field of data analysis by dramatically improving the state of the art in various classification and prediction tasks, especially in the area of computer vision. In biomedical engineering, a lot of new work is directed towards surface electromyography (sEMG) based gesture recognition, often addressed as an image classification problem using Convolutional Neural Networks (CNN). In this paper, we utilize the Hilbert space-filling curve for the generation of image representations of sEMG signals that are then classified by CNN. The proposed method is evaluated on different network architectures and yields a classification improvement of more than 3%.

Keywords - classification, CNN, deep learning, electromyography, hand gesture recognition, Hilbert curve, sEMG

I. INTRODUCTION

The problem of gesture recognition is encountered in many applications, including human-computer interaction [1], sign language recognition [2], prosthesis control [3] and rehabilitation gaming [4, 5]. Signals generated from the electrical activity of the forearm muscles, which can be recorded with surface electromyography (sEMG) sensors, contain useful information for decoding muscle activity and hand motion [6].

Machine Learning (ML) classifiers have been used extensively for determining the type of hand motion from sEMG data. A complete pattern recognition system based on ML consists of data acquisition, feature extraction, classifier definition and inference from new data. For the classification of gestures from sEMG data, electrodes attached to the arm and/or forearm acquire the sEMG signals, and features such as Root Mean Square (RMS), variance, zero crossings and frequency coefficients are extracted and then fed as input to classifiers like k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP) or Random Forests [7].

Over the past years, Deep Learning (DL) models have shown great success on the problem of sEMG-based gesture recognition. In these approaches, sEMG data are represented as images and a Convolutional Neural Network (CNN) is used to determine the type of gesture. A typical CNN architecture consists of a stack of convolutional and pooling layers followed by fully connected (i.e. dense) layers and a softmax output. In this way, CNNs transform the input image layer by layer, from the pixel values to the final classification label.
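To make the classical feature-based pipeline described above concrete, the sketch below computes three of the time-domain features mentioned in the introduction (RMS, variance, zero crossings) for a single multi-channel sEMG window. The window length, channel count and feature set are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def extract_features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS, variance and zero-crossing count for one sEMG window.

    window: array of shape (n_samples, n_channels).
    Returns a flat feature vector of length 3 * n_channels.
    """
    rms = np.sqrt(np.mean(window ** 2, axis=0))         # Root Mean Square per channel
    var = np.var(window, axis=0)                        # variance per channel
    zc = np.sum(window[:-1] * window[1:] < 0, axis=0)   # sign changes between consecutive samples
    return np.concatenate([rms, var, zc.astype(float)])

# Example: a 200-sample window from a hypothetical 8-channel armband (random data)
window = np.random.randn(200, 8)
features = extract_features(window)   # shape (24,), ready for e.g. an SVM or k-NN classifier
```

Feature vectors of this kind would then be fed to one of the classifiers listed above (k-NN, SVM, MLP, Random Forests).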

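The central idea stated in the abstract is to turn a 1-D sEMG sequence into a 2-D image by following a Hilbert space-filling curve, so that samples close in time remain close in the image. The exact image construction used by the authors is not visible in this preview; the following is only a generic sketch under that assumption, where hilbert_d2xy is the standard index-to-coordinate conversion and signal_to_image is a hypothetical helper.

```python
import numpy as np

def hilbert_d2xy(order: int, d: int) -> tuple:
    """Convert the index d (0 .. 4**order - 1) along a Hilbert curve
    covering a (2**order x 2**order) grid into (x, y) coordinates."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:               # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def signal_to_image(signal: np.ndarray, order: int) -> np.ndarray:
    """Lay out the first (2**order)**2 samples of a 1-D signal on a 2-D grid
    by following the Hilbert curve, so temporal neighbours stay spatial neighbours."""
    side = 1 << order
    img = np.zeros((side, side), dtype=float)
    for d in range(side * side):
        x, y = hilbert_d2xy(order, d)
        img[y, x] = signal[d]
    return img

# Example: map 256 samples of one sEMG channel onto a 16 x 16 image (order = 4)
img = signal_to_image(np.random.randn(256), order=4)
```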
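The last paragraph of the introduction describes the generic CNN layout (a stack of convolutional and pooling layers, dense layers and a softmax output). A minimal Keras version of such a stack is sketched below; the layer sizes, input resolution and number of gesture classes are placeholders, not the architectures evaluated in the paper.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(16, 16, 1), n_gestures=8):
    """Generic conv/pool -> dense -> softmax classifier for sEMG 'images'."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),            # fully connected (dense) layer
        layers.Dense(n_gestures, activation="softmax"),  # one probability per gesture class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Trained on Hilbert-curve images and their gesture labels, such a network maps each input image, layer by layer, from pixel values to a class probability distribution.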