Developing robotics software is a unique endeavor, and so is writing tests for it. In addition to common programming concerns such as data sanity and computational cost, the developer is expected to handle a variety of hardware interfaces and to work with geometry and linear algebra more than in most other software engineering.
If you’re looking to write unit tests for ROS2 software, read ahead to find out how to write unit tests, how to write unit-testable code, and how to generate coverage for your codebase. I have created and open-sourced a simple line-following robot example to illustrate these ideas.
Our target system is a basic PID-controlled line-following robot. Assume we are given a straight line in a planar world, defined by two waypoints it passes through. An external localization system provides the robot’s current pose. Our immediate goal is to implement a line-following controller that drives the robot along the straight line defined by those waypoints. For simplicity, we will read the waypoints from a configuration file. This is a simplistic setup; real-world robotic systems deploy a range of software and algorithmic infrastructure to achieve autonomous navigation. However, it is a sufficient problem statement to present the unit-testing concepts at the core of this article.
There are multiple ways to go about writing our simple line-follower code. naive_line_follower.cpp contains one sample implementation. While it is perfectly valid, functional code, it leaves little room for unit testing: no function takes arguments, none returns a value, and everything is compressed into the goToGoal() monolith.
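For illustration, here is a condensed, hypothetical sketch of that monolithic shape (not the actual contents of naive_line_follower.cpp): everything from the pose lookup to the error math to publishing lives inside one void callback, so nothing can be exercised in isolation.

```cpp
#include <chrono>
#include <geometry_msgs/msg/twist.hpp>
#include <rclcpp/rclcpp.hpp>

// Hypothetical monolith, for illustration only.
class NaiveLineFollower : public rclcpp::Node {
public:
  NaiveLineFollower() : Node("naive_line_follower") {
    cmd_pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(100),
                               [this] { goToGoal(); });
  }

private:
  void goToGoal() {
    // Read waypoints, look up the current pose, compute the cross-track
    // error, apply the PD gains, and publish -- all inline, with no
    // arguments and no return value for a test to hook into.
    geometry_msgs::msg::Twist cmd;
    // ... error computation and gains applied here ...
    cmd_pub_->publish(cmd);
  }

  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr cmd_pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};
```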
Our inability to write meaningful tests also means it will be difficult to catch bugs we introduce as the scope expands. For example, if we were to move from the current PD implementation to a full PID implementation, our perfectly working code might fall apart and we’d have no idea what happened.
By contrast, consider the alternate implementation in line_follower.cpp. It breaks the logic down into simpler blocks such as calculateDistanceError(), calculateProportionalEffort(double error), calculateDerivativeEffort(double error), controlLoop(), and several more, which lets us test each module independently. Now imagine the PD control being replaced with PID or some other control algorithm: we can easily add a calculateIntegralEffort(double error) function and be better equipped to assess what went wrong if something breaks.
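Sketched as an interface, the decomposed version might look roughly like this; the exact layout in line_follower.cpp may differ, but the key point is that each piece takes inputs and returns a value a test can inspect.

```cpp
#include <rclcpp/rclcpp.hpp>

// Rough interface sketch of the decomposed follower (illustrative only).
class LineFollower : public rclcpp::Node {
public:
  LineFollower();

  // Signed perpendicular distance from the robot to the waypoint line.
  double calculateDistanceError();

  // P and D terms computed from that error; easy to test in isolation.
  double calculateProportionalEffort(double error);
  double calculateDerivativeEffort(double error);
  // A calculateIntegralEffort(double error) slots in naturally later.

  // Periodic callback that composes the pieces and publishes a Twist.
  void controlLoop();
};
```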
Writing testable code doesn’t automatically translate into a well-designed test suite. In test_line_follower_basic.cpp, a single test called TestControlLoop() yields high test coverage, statistically speaking, but provides no meaningful insight into the code’s correctness; it only ensures that the code generates a Twist command.
In other cases, tests check only for data-type sanity or non-null outputs. While these improve the much-coveted code coverage, they provide little value when developing new features. It isn’t uncommon to see such tests across several open-source ROS2 packages either.
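As a hedged sketch (lastCommand() is a hypothetical accessor, not from the repository, and rclcpp is assumed to be initialized in the test’s main()), such a superficial test typically looks like this: it runs controlLoop() once and only checks that something came out.

```cpp
#include <gtest/gtest.h>

// Illustrative only: lastCommand() is a hypothetical accessor.
TEST(LineFollowerBasic, TestControlLoop) {
  auto node = std::make_shared<LineFollower>();

  node->controlLoop();

  // Passes as long as *a* Twist was produced -- a flipped sign, a wrong
  // gain, or a swapped axis would all sail through this assertion.
  EXPECT_TRUE(node->lastCommand().has_value());
}
```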
For our sample use case, it makes sense to identify what failure scenarios look like for this line follower. Geometric quantities such as the distance to a line should be tested against a diverse set of inputs: a line along the x-axis, one along the y-axis, and a more general slanted case. TestCalculateDistanceError() exhibits these.
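With GoogleTest, such cases might look roughly like the following; setWaypoints() and setPose() are assumed helpers for injecting state (not necessarily present in the repository), rclcpp is assumed to be initialized in the test’s main(), and the expected magnitudes follow from the point-to-line distance formula.

```cpp
#include <cmath>
#include <gtest/gtest.h>

// Illustrative sketch; setWaypoints()/setPose() are assumed test helpers.
TEST(LineFollowerGood, TestCalculateDistanceError) {
  LineFollower follower;

  // Line along the x-axis: a robot at (0, 2) is 2 m off the line.
  follower.setWaypoints({0.0, 0.0}, {5.0, 0.0});
  follower.setPose(0.0, 2.0, 0.0);
  EXPECT_NEAR(std::abs(follower.calculateDistanceError()), 2.0, 1e-6);

  // Line along the y-axis: a robot at (-1.5, 1) is 1.5 m off the line.
  follower.setWaypoints({0.0, 0.0}, {0.0, 5.0});
  follower.setPose(-1.5, 1.0, 0.0);
  EXPECT_NEAR(std::abs(follower.calculateDistanceError()), 1.5, 1e-6);

  // Slanted line y = x: a robot at (1, 0) is 1/sqrt(2) m off the line.
  follower.setWaypoints({0.0, 0.0}, {1.0, 1.0});
  follower.setPose(1.0, 0.0, 0.0);
  EXPECT_NEAR(std::abs(follower.calculateDistanceError()),
              1.0 / std::sqrt(2.0), 1e-6);
}
```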
I’ve also run into scenarios where two wrongs made a right. For example, flipped signs in both the distance-to-line calculation and the proportional correction term might yield the correct net result for a proportional controller, but they would likely break a more general PID controller. test_line_follower_good.cpp provides several examples of unit tests that evaluate the overall correctness of the code.
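One way to catch such cancelling errors is to pin down the sign relationship between the error and each effort term separately, rather than only checking the end-to-end command. Continuing the sketch above (same hypothetical setters, and assuming the convention that the proportional effort opposes the signed error):

```cpp
// Illustrative sketch; assumes effort is meant to oppose the signed error.
TEST(LineFollowerGood, ProportionalEffortOpposesError) {
  LineFollower follower;
  follower.setWaypoints({0.0, 0.0}, {5.0, 0.0});
  follower.setPose(0.0, 2.0, 0.0);  // displaced to one side of the line

  const double error = follower.calculateDistanceError();
  const double p_effort = follower.calculateProportionalEffort(error);

  // If both the error and the effort had flipped signs, an end-to-end
  // check could still pass; this assertion catches each flip on its own.
  EXPECT_GT(std::abs(error), 0.0);
  EXPECT_LT(error * p_effort, 0.0);
}
```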
Oftentimes while developing robotics software, achieving a certain code coverage is mandatory. Gcov is a test coverage tool that lets developers analyze the effectiveness of their tests by showing exactly which parts of the code have been executed. Complementing it, Lcov presents Gcov’s coverage data graphically, in an intuitive HTML format with annotated source code for easier assessment of test coverage.
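As a rough sketch of the workflow (exact flags and paths depend on your workspace layout), coverage for a colcon-built package can be collected along these lines:

```bash
# Build with gcov instrumentation, run the tests, then collect and render coverage.
colcon build --cmake-args -DCMAKE_CXX_FLAGS="--coverage" -DCMAKE_C_FLAGS="--coverage"
colcon test
lcov --capture --directory build --output-file coverage.info
genhtml coverage.info --output-directory coverage_html  # browsable HTML report
```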
While it is useful to have a quantitative measure of code coverage that highlights which functions have (not) been tested, higher coverage does not imply that code is better tested than code with lower coverage. For example, naive_line_follower.cpp has 90%+ coverage while line_follower.cpp only has 85%, yet I’d argue the latter is better written and better tested than the former.
Unit tests are great for testing code at a modular level, and they pave the way for more advanced integration and system tests. Writing tests does come with its own development and maintenance effort, but the return can be substantial, especially on larger projects. It is important to note, however, that test coverage as a metric merely indicates that tests exist; it provides no quality assurance by itself. Designing tests, just like designing algorithms, is at the heart of robust robotics software. I’ll cover more testing methodologies focused on robotic systems in subsequent posts.
At Black Coffee Robotics we have developed a range of simulation-based integration testing frameworks for autonomous robots such as manipulators and mobile bases, and for general ROS/ROS2 applications. If that is something you could benefit from, reach out to us!