Gesture Recognition by Artificial Intelligence for Embedded Systems

Gesture recognition is one way of realizing a user interface for operating a technical system or device. Instead of typing on a keyboard or giving voice commands, the user performs certain movements with the arms, hands or head, which the system interprets as control commands. Gesture control works completely silently and without physical contact and can therefore be a suitable operating technology for special applications. Of course, the gestures used to control a system must be clearly distinguishable from one another, and the recognition algorithm must be tolerant enough to recognize and interpret the gestures reliably despite variations in how individual users execute them.

Gesture recognition is therefore well suited to artificial intelligence: the movements detected by suitable sensors are first trained as part of a learning process before the system is actually used. Suitable sensors include image sensors that capture the performed gestures and evaluate them via image or video analysis. Equally suitable are 3D MEMS sensors that measure accelerations and rotation rates while they are fixed to the relevant part of the body or simply held in the hand during the execution of the gestures.

Gesture recognition with AIfES

Fraunhofer IMS has developed the embedded AI library "AIfES" (Artificial Intelligence for Embedded Systems), which allows artificial neural networks (ANNs) to be trained and executed on microcontroller platforms with extremely limited resources (computing power, memory). Based on AIfES, a gesture recognition method was implemented and verified in the form of a demonstrator. The gesture recognition uses a 3D MEMS sensor and, with an innovative and patent-pending method, extracts from the recorded motion data the characteristic features that determine the weights of the neural network during the learning process. After this learning process, the trained neural network recognizes the learned gestures and identifies them uniquely within a few milliseconds, i.e. without noticeable delay for the user.
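The inference step of such a classifier can be sketched in a few lines of C: a dense layer maps the extracted feature vector to one score per gesture class, and the class with the highest score wins. This is a minimal illustration, not the AIfES API; a real network would typically have several layers and nonlinear activations, and the weights here are placeholders standing in for values learned during training.

```c
#include <stddef.h>

/* Minimal sketch of neural-network inference for gesture
 * classification: one dense layer (weights learned beforehand)
 * followed by an argmax over the class scores. */
int classify(const float *features, size_t n_in,
             const float *weights,  /* n_out x n_in, row-major */
             const float *bias, size_t n_out)
{
    int best = 0;
    float best_score = 0.0f;
    for (size_t o = 0; o < n_out; o++) {
        float score = bias[o];
        for (size_t i = 0; i < n_in; i++)
            score += weights[o * n_in + i] * features[i];
        if (o == 0 || score > best_score) {
            best_score = score;
            best = (int)o;
        }
    }
    return best;  /* index of the recognized gesture class */
}
```

Because inference is only multiply-accumulate operations over small arrays, it completes within milliseconds even on a low-power microcontroller, which is what makes the delay imperceptible to the user.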

Demonstrator for gesture control using the example of a robot arm (© Fraunhofer IMS)

On the demonstrator developed by Fraunhofer IMS, the digits 0–9 were trained as example gestures, which the user virtually writes in the air. The executed gestures are recorded by a sensor and then classified by AIfES on the microcontroller of a wireless remote control. The use of radio technology (LoRa) makes wireless, direction-independent gesture control possible over greater distances, ensuring a high degree of user-friendliness. In the Fraunhofer IMS demonstrator, gesture recognition is used to control a robot: if the gesture for one of the digits is recognized, the microcontroller sends the result wirelessly to the robot arm, which then picks up the cube assigned to that digit.
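The step from recognition to radio transmission could look like the following sketch, which packs the recognized digit into a tiny payload for the wireless link. The frame layout (device ID, digit, XOR checksum) is an assumption made purely for illustration; the actual LoRa protocol used in the demonstrator is not published here.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame format for sending a recognized digit to the
 * robot arm: [device_id | digit | checksum]. Returns the number of
 * bytes to transmit, or 0 if the digit is out of range. */
size_t pack_gesture_frame(uint8_t device_id, uint8_t digit,
                          uint8_t out[3])
{
    if (digit > 9) return 0;                /* only digits 0-9 are valid */
    out[0] = device_id;
    out[1] = digit;
    out[2] = (uint8_t)(device_id ^ digit);  /* simple integrity check */
    return 3;
}
```

Keeping the payload to a few bytes suits LoRa well, since its long range comes at the cost of very low data rates.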

Other individual and more complex gestures can also easily be trained, including on other microcontrollers. This is made possible by the high platform independence and the extremely resource-efficient implementation of embedded AI in AIfES.

In addition to device and machine operation, this method of gesture recognition is also suitable for gaming and smart home applications, as well as for evaluating motion sequences in the medical field, e.g. in rehabilitation. This opens up a whole range of new smart applications.


Embedded Gesture at »embedded world 2022«: Download info sheet

Embedded World 2020 – Gesture Recognition Demonstrator - YouTube

Our technologies - Innovations for your products

Handwriting Recognition

In handwriting recognition the embedded system uses suitable sensors to evaluate the numbers and letters written by the user by hand in order to use them as operator input.

Acoustical Gesture Control

With this combined type of operation, the embedded system gives instructions to the user by speech output while the user controls the system by using defined gestures.

Our technology areas - Our technologies for your development

Communication and Networking

Communication interfaces allow data exchange with other devices and connection to networks

User Interfaces

User interfaces, as the link between device and user, allow the configuration and operation of a product

Machine Learning for Embedded Systems

Artificial intelligence on resource-limited systems can be used to extract higher quality information from raw sensor data

Computer Vision

Computer Vision methods extract the maximum amount of information from image data
