Federated learning for resource-constrained systems

In the Fraunhofer-wide project SEC-Learn (Sensor Edge Cloud for Federated Learning), the main aim is to determine which developments are needed so that neural networks can be trained directly at the sensor while all other sensor nodes still benefit from what has been learned - so-called federated learning. Processing data locally makes privacy requirements easier to meet, which should also increase user acceptance. Privacy-enhancing technologies are used to secure the data transmission.

The energy consumed by the continuous AI-based analysis of the data is to be reduced with spiking neural networks (SNNs), which are executed close to the hardware and in an energy-efficient manner. For this, methods must be found that convert the models produced by federated learning into a form compatible with the target hardware.
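One common way to bridge conventional ANNs and SNNs is rate coding, where a neuron's activation is represented by its spike frequency. The following sketch only illustrates this general idea; it is not the conversion method developed in SEC-Learn:

```python
import numpy as np

def rate_code(activation, timesteps=1000, rng=None):
    """Encode a normalized activation in [0, 1] as a Bernoulli spike
    train: at each timestep the neuron fires with that probability."""
    rng = rng or np.random.default_rng(0)
    return rng.random(timesteps) < activation

spikes = rate_code(0.7)   # boolean spike train of length 1000
rate = spikes.mean()      # empirical firing rate, close to 0.7
```

Averaging the spike train recovers the original activation, which is why rate-coded SNNs can approximate the behavior of the ANN they were converted from.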

In this joint project, Fraunhofer IMS is developing federated learning for microcontrollers. The vision is that microcontroller-based microsystems jointly train an artificial neural network (ANN), with each device holding only part of the training data. The final trained model can then be distributed to all systems and used there. This approach has several advantages: the energy demand is spread across the participating systems and the available computing power is increased, enabling efficient computation of neural networks. Since no training data is exchanged, data protection is also preserved.
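The scheme described above can be sketched in a few lines. The example below is a minimal, illustrative federated-averaging (FedAvg) loop in NumPy, using a simple linear model in place of an ANN; the names and structure are assumptions for illustration, not the AIfES implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One node: a few gradient steps of linear regression on its
    own local data. No raw data ever leaves the node."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Aggregation step: average the node models, weighted by the
    size of each node's local dataset."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three nodes, each holding only a part of the training data
datasets = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    datasets.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_ws = [local_update(w, X, y) for X, y in datasets]
    w = federated_average(local_ws, [len(y) for _, y in datasets])
```

After the communication rounds, the shared model `w` approximates `true_w` even though no node ever saw the full dataset - only model parameters were exchanged.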

A further advantage is decentralized data acquisition. For example, several intelligent sensors at different points of a machine can record measurements and jointly train a higher-level AI model, or machines of the same design at different locations can be measured in order to train a generalized model.

Communication between the sensors can be realized in different ways: wireless communication over short distances, for example within a single machine, or a connection over the internet when devices at different locations are to be measured.
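Whatever the transport, the model updates have to be serialized compactly for constrained links. The sketch below shows a hypothetical wire format for such an update message; it is an assumption for illustration, not the protocol used in the project:

```python
import struct

# Hypothetical wire format: node id (u16), local sample count (u32),
# weight count (u16), followed by float32 weights (little-endian).
HEADER = '<HIH'

def pack_update(node_id, n_samples, weights):
    """Serialize one node's model update into a compact byte message."""
    msg = struct.pack(HEADER, node_id, n_samples, len(weights))
    return msg + struct.pack(f'<{len(weights)}f', *weights)

def unpack_update(msg):
    """Parse a message back into (node_id, n_samples, weights)."""
    node_id, n_samples, count = struct.unpack_from(HEADER, msg)
    offset = struct.calcsize(HEADER)
    weights = list(struct.unpack_from(f'<{count}f', msg, offset))
    return node_id, n_samples, weights

# Round trip: 3 float32 weights -> 8-byte header + 12-byte body
packed = pack_update(7, 120, [0.5, -1.25, 3.0])
```

Transmitting the sample count alongside the weights lets the aggregator weight each contribution by dataset size, as federated averaging requires.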

The implementation builds on AIfES, the AI software framework of Fraunhofer IMS, which makes it possible to train an ANN on microcontrollers. New, compatible algorithms that enable federated learning are being created here. To perform the training as efficiently as possible, the AIRI5C microcontroller core of Fraunhofer IMS is used, which is based on the open RISC-V instruction set architecture. Dedicated hardware accelerators for efficient training are being developed for it as part of the project.

Scope of Fraunhofer IMS

  • Development of federated learning methods especially for microcontrollers
  • Realization of federated learning based on the IMS AIRI5C and AIfES
  • Development of AIRI5C hardware accelerators for efficient execution and training of artificial neural networks
  • Integration of federated learning methods into AIfES
  • Construction of a working demonstrator showcasing federated learning


Fraunhofer Institutes EMFT, IAIS, IESE, IGD, IIS, IKS, IDMT, IPMS, ISIT and ITWM


Fraunhofer internal funding program
