Continuous integration of software and testing with Jenkins, Raspberry Pi and hardware peripherals

Author: Tjaž Vračko, student intern at IRNAS

Recently we worked on the E&P Used Cooking Oil recycling machine, where we tackled the issue of continuous integration (CI). In particular, we wanted to create a CI system that would allow us to test both the code and its interaction with the hardware at every step of the way. The need for such an automated testing solution arises both from constant testing during the development process and from stress testing the final product through a pre-defined scenario of test cases. Read more about our thinking on testing.

Quick device overview

The UCO-sammelautomat is a machine for separating and storing used cooking oil. The machines are in use throughout Austria and serve as an easy way for people to get rid of their used cooking oil, in return for which they receive a voucher for their local grocery store.

Used cooking oil separation machine

Hardware components

The code runs on a Raspberry Pi, which is operated via a touch screen mounted on one side of the machine. Custom electronics were designed, serving as a mounting point for all necessary peripheral devices:

  • motors for driving pumps
  • several sensors (for detection of liquid type, canister presence, stored liquid level etc.)
  • RFID reader (for maintenance functions)
  • thermal printer (for voucher printing)

Software components

In short, the GUI software (not built by IRNAS) sends UDP packets with command messages to our software, which then handles the machinery (operates motors and reads sensor values) and sends reports back to the GUI.
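To illustrate this command/report exchange, here is a minimal, self-contained sketch of both sides of such a UDP link. The message strings, port handling and reply format are purely illustrative assumptions; the article does not publish the actual protocol.

```python
import socket

# Stand-in for the machine-control software: a UDP socket waiting for commands.
# "START_PUMP" and the reply below are hypothetical message contents.
machine = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
machine.bind(("127.0.0.1", 0))      # let the OS pick a free port
port = machine.getsockname()[1]

# Stand-in for the GUI: send a command message as a UDP packet.
gui = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
gui.settimeout(1.0)
gui.sendto(b"START_PUMP", ("127.0.0.1", port))

# The control side receives the command and sends a report back to the GUI.
command, gui_addr = machine.recvfrom(1024)
machine.sendto(b"OK:PUMP_RUNNING", gui_addr)

report, _ = gui.recvfrom(1024)
print(command.decode(), "->", report.decode())
```

Because the protocol is plain UDP, it is exactly this boundary (messages in, reports out) that the CI system later exercises.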

Our software must thus handle everything from the socket connection to business logic, implement drivers for motors and sensors, and handle mechanical errors. Due to the system's complexity, it is important to validate the software's functionality as a separate unit, as well as its interaction with the hardware.

In addition, it must be ensured that the exchanged messages retain their structure and that command execution remains predictable for the GUI software.

Writing and automating code tests

For code testing, we used Python's unit testing framework, unittest. With it, unit tests and integration tests can be written easily, using assert* methods (assertEqual, assertTrue and the like) to verify that functions behave as intended.
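As a minimal sketch of what such a test looks like, consider a small piece of business logic; the function name and the 20-cents-per-litre rate are hypothetical, not taken from the actual machine:

```python
import unittest

def litres_to_voucher_cents(litres):
    """Hypothetical business-logic function: 20 voucher cents per litre."""
    if litres < 0:
        raise ValueError("volume cannot be negative")
    return int(litres * 20)

class TestVoucherLogic(unittest.TestCase):
    def test_whole_litres(self):
        self.assertEqual(litres_to_voucher_cents(5), 100)

    def test_rejects_negative_volume(self):
        with self.assertRaises(ValueError):
            litres_to_voucher_cents(-1)
```

A file like this can be discovered and run with `python -m unittest discover -s test`, which is how Jenkins later executes the whole suite.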

Tests were written in parallel with the code: whenever new modules or functions were added to the project, tests were written for them as well. The tests are organized in the same folder structure as the code itself, but under a /test directory. Test files also have the same name as the module under test, prefixed with test_. For example, tests for the module ./src/hardware/ are located in ./test/test_hardware/.

This allows an engineer to verify that implemented code changes do not alter the intended behaviour. To be on the safe side, the tests should run automatically whenever new code is pushed to the repository. This also gives an engineer the ability to test only the code, separated from the hardware.

Enter: Jenkins

Jenkins is an automation server. With it, we can specify a test pipeline that is triggered when new code is pushed to GitHub. Jenkins then pulls the changes, runs tests in the specified order, and reports back to GitHub whether the tests have passed. This is visible as a green checkmark or a red X next to the commit, with a link to Jenkins where a detailed report can be found.

Jenkins test report next to a GitHub commit

Wishing to test the whole system at once, we decided to migrate Jenkins to a Raspberry Pi and build a test rig that includes all hardware components, so that we can perform all levels of automated testing directly on the device.

Test rig

In addition to unit tests and integration tests, such a setup also allows us to write simulation tests, in which we simulate device operation by mocking sensor values and measuring the device's response.

Mocking is a way to specify function return values or change function implementations on the fly. This allows us to simulate the operation of the machine (states like "oil is being poured in", "canister is full", etc.).

For example:

1. mock: liquid_sensor_1 = 1
2. assert: pump_motor_spinning == true
3. mock: liquid_sensor_1 = 0
4. assert: pump_motor_spinning == false
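The steps above could be sketched with Python's unittest.mock module. The sensor and controller classes here are hypothetical stand-ins for the real drivers, which talk to GPIO; only the mock-then-assert pattern reflects the actual test flow.

```python
import unittest
from unittest import mock

class LiquidSensor:
    """Hypothetical sensor driver; the real one reads a GPIO pin."""
    def read(self):
        raise NotImplementedError("real driver talks to the hardware")

class PumpController:
    """Hypothetical controller: spins the pump while liquid is detected."""
    def __init__(self, sensor):
        self.sensor = sensor
        self.motor_spinning = False

    def update(self):
        self.motor_spinning = (self.sensor.read() == 1)

class TestPumpBehaviour(unittest.TestCase):
    def test_pump_follows_sensor(self):
        sensor = LiquidSensor()
        controller = PumpController(sensor)

        # 1. mock the sensor to report liquid; 2. assert the pump spins
        with mock.patch.object(sensor, "read", return_value=1):
            controller.update()
            self.assertTrue(controller.motor_spinning)

        # 3. mock the sensor to report no liquid; 4. assert the pump stops
        with mock.patch.object(sensor, "read", return_value=0):
            controller.update()
            self.assertFalse(controller.motor_spinning)
```

The context-manager form of mock.patch.object keeps each mocked sensor state scoped to one step, mirroring the numbered sequence above.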

We check the actual GPIO, motor and sensor values by using the drivers directly, instead of relying on what our program reports via its messages (the drivers are tested in an earlier step of the pipeline, so their correct functioning is already verified at this point).


This testing process has proven very effective during active development, largely eliminating the need for manual testing. By putting all the hardware in one place and keeping it nearby, we could also visually verify that the device was operating. While not the most accurate or reliable way of testing, pushing your code and watching the motors spin nevertheless gives the developer another method of verification.

In this way we've established infrastructure that enables us to perform system verification and validation even now, when we are only doing maintenance and not adding new features.

About the author