Localization is simply the ability of a robot to identify its position (more correctly its pose: three dimensions each for position and orientation) in a given world. What makes this problem non-trivial is that no physical sensor directly measures and reports a robot's pose. Depending on the application, a range of sensors is used to infer it from indirect measurements. The closest thing to a sensor reporting a robot's position is a GPS (Global Positioning System) receiver, but as we shall see in the following section, GPS by itself is insufficient for a robot to navigate autonomously. Another major limitation arises in fully or partly indoor environments such as a factory floor, a warehouse, a hotel lobby, or even the sidewalks alongside high-rise buildings. Robotics developers typically use a LiDAR, a camera, or both to create a map of the world, which the robot then uses as a reference to localize itself. You can read more about the map generation process here.
While some outdoor applications utilize a 3-D LiDAR for mapping and subsequent localization, several others do not require such high precision during operation. Besides, a localization stack centered around GPS is far more cost-effective than its LiDAR/camera counterparts. Here's the catch: GPS is notoriously poor in terms of accuracy. Standard GPS receivers have accuracy on the order of meters, which is perhaps only useful for a drone sprinkling pesticides without much precision.
Applications such as open-area lawn mowing, snow plowing, farm harvesting, open-area surveillance, and several others require accuracy of tens of centimeters or better. Now, this is definitely achievable. Let's find out how!
The Bayes filter is the foundation on which most (if not all!) probabilistic state estimation algorithms build. One branch of such algorithms assumes Gaussian distributions for modeling robot states, measurements, and motion. The Kalman Filter (KF) is an algorithm that lets us fuse measurements from multiple sensors and generate an estimate of the robot's state that is more accurate than any single measurement alone. The KF assumes linear system dynamics in both the motion and measurement models: the predicted state is a linear combination of the current state and control inputs, and the measurement is a linear function of the state. This assumption is seldom true for non-holonomic robots. An Extended Kalman Filter (EKF) does just what the name suggests: it extends the KF to accommodate non-linear systems by linearizing them around the current estimate. (For readers looking for a mathematical treatment of these concepts, I'd recommend the book Probabilistic Robotics by S. Thrun.)
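To make the predict/update structure concrete, here is a minimal EKF sketch for a unicycle-style robot with a GPS-like position measurement. This is an illustration of the linearization idea, not the robot_localization implementation; the state vector, motion model, and noise matrices are all simplified assumptions.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """EKF predict step for a unicycle model; state x = [px, py, theta].

    The motion model is non-linear in theta, so we linearize it with
    its Jacobian F evaluated at the current state -- this is the
    'Extended' part; a plain KF would require the model to be linear.
    """
    px, py, th = x
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q     # uncertainty grows with process noise Q
    return x_pred, P_pred

def ekf_update(x, P, z, R):
    """Update with a GPS-like measurement z = [px, py] (linear here)."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P    # update shrinks the uncertainty
    return x_new, P_new
```

Note that only the position is measured here; the heading is corrected indirectly through its correlation with position in the covariance matrix.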
Alright, with some necessary introductions out of the way, let's get into robot localization using GPS data! Before that, however, let's try to understand what sensors are available on a typical outdoor robot.
Commercial GPS sensors can provide anywhere between one and twenty meters of accuracy. RTK (Real-Time Kinematic) corrections drastically improve this to within the decimeter mark. However, setting up an RTK base station or providing continuous internet connectivity for NTRIP corrections (an over-the-web method of delivering RTK corrections) may not always be viable. The update rate of GPS, much like its accuracy, depends on the individual vendor, but it is typically in the range of two to ten Hz. Regardless, GPS is the closest thing to "ground truth": it is very affordable and an absolute essential for building a perception-less outdoor navigation robot.
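Before GPS fixes can be fused with odometry, they need to be expressed in local metric coordinates rather than degrees. A minimal sketch of that conversion, using an equirectangular (flat-earth) approximation that is adequate over the few-hundred-meter scales of a mowing or plowing robot; the meters-per-degree constant is a round illustrative value, not survey-grade:

```python
import math

# Approximate metres per degree of latitude; treats the local area as a
# flat tangent plane, which is fine for short-range outdoor robots.
M_PER_DEG_LAT = 111_320.0

def gps_to_local(lat, lon, lat0, lon0):
    """Project a GPS fix to (east, north) metres relative to a
    datum (lat0, lon0). Longitude degrees shrink with cos(latitude)."""
    north = (lat - lat0) * M_PER_DEG_LAT
    east = (lon - lon0) * M_PER_DEG_LAT * math.cos(math.radians(lat0))
    return east, north
```

In a ROS stack this job is typically done for you (e.g. by a UTM conversion node), but the idea is the same: pick a datum at startup and report positions in metres relative to it.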
An Inertial Measurement Unit (IMU) is an essential component of any robotic system and provides a whole range of information about the robot's motion. In addition to the heading data from the magnetic compass, the angular velocities from the gyroscope and the acceleration component along gravity are particularly useful for navigation. While a typical off-the-shelf IMU provides a high update rate (200 Hz+), the accelerometer is generally too noisy to be useful for position estimation.
Wheel encoders are another great source of information: the data is continuous and usually available at high frequency. The challenges, however, are manifold: wheel slip, quantization error, and drift that accumulates without bound, since there is no absolute reference to correct against.
Clearly, each sensor has its own core strengths, often complementary to the others. This is a perfect use case for EKFs. Let's check out some test cases to see how an EKF performs at global pose estimation for an outdoor robot!
We used a skid-steer Clearpath Husky as the test platform, introducing some extra Gaussian noise to the GPS sensor from the repository. The robot was driven in an approximately closed loop covering a total distance of over 50 meters. It took some basic configuration of the robot_localization package to run its EKF implementation. Here's how it went:
As is evident, the GPS reading (in yellow) is prone to discrete jumps within a roughly 5 m range here, and wheel odometry is largely off and gets worse over time. The fused pose (in green) does a good job of tracking the ground truth (in blue). Further tuning of the EKF parameters could potentially yield even better results, but that's a never-ending exercise.
Another common occurrence during GPS-based navigation is the intermittent failure of the sensor to report any values; this can be due to trees, high-rise buildings, or lack of a GPS fix. EKF-based estimates are surprisingly robust to such failures, at least over short durations. As showcased in the figure below, we paused GPS inputs to the EKF, but the fused pose was still able to track the ground truth with considerable accuracy, albeit with gradual degradation. Regardless, the fused estimate despite a sensor outage is still far better than wheel odometry alone.
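The mechanism behind this graceful degradation is visible even in a one-dimensional toy filter: during an outage only the predict step runs, so the covariance grows each cycle while the estimate coasts on the motion model, and once fixes resume the update step shrinks it again. All the noise values below are illustrative.

```python
# 1-D toy Kalman filter: position variance P under process noise q,
# with "GPS" fixes of variance r that drop out for cycles 5..14.
P, q, r = 1.0, 0.1, 0.5
trace = []
for k in range(20):
    P = P + q                       # predict: uncertainty grows
    if not (5 <= k < 15):           # GPS available outside the outage
        K = P / (P + r)             # scalar Kalman gain
        P = (1 - K) * P             # update: uncertainty shrinks
    trace.append(P)
```

Plotting `trace` shows a sawtooth: bounded variance while fixes arrive, a linear climb through the outage, and a sharp drop at the first fix afterwards. That climb is exactly the "gradual degradation" seen in the experiment.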
In addition to fusing several state estimates, an EKF can also estimate poses without ANY sensor directly reporting the target state variable. For example, it isn't unusual for the magnetic compass to report erratic data in the presence of heavy motors or batteries. This leaves the robot essentially without a sense of heading in the world; an appropriately applied EKF can still give us a good estimate of such non-measurable state variables.
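The intuition for why heading is recoverable without a compass: for a non-holonomic robot driving forward, the direction of travel between successive position fixes *is* the heading. An EKF exploits this implicitly through the position-heading correlations in its covariance; the sketch below just makes the idea explicit (the function name and threshold are illustrative, not from any library).

```python
import math

def heading_from_fixes(p0, p1, min_dist=0.05):
    """Infer heading from two successive position fixes in local metres.

    Only trustworthy while the robot is actually moving: below min_dist
    of travel, GPS noise dominates the bearing, so we return None
    rather than amplify it.
    """
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    if math.hypot(dx, dy) < min_dist:
        return None  # not enough motion to trust the bearing
    return math.atan2(dy, dx)
```

This is also why such schemes struggle when the robot is stationary or reversing; the filter's heading estimate simply stops being corrected until forward motion resumes.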
Another common variation of the Kalman Filter for non-linear systems is the UKF (Unscented Kalman Filter). The core difference is that the EKF relies on computing derivatives (Jacobians) of the state transition and measurement functions, whereas the UKF propagates a set of "sigma points" through the non-linear functions directly. The UKF is likely to work as well as or better than the EKF for almost all applications, but at the cost of added computation. Well, there's always a trade-off.
While the GPS-IMU-odometry stack provides localization ability in several applications, it doesn't enable a robot to perceive its environment at all. The ability to detect and possibly identify obstacles or other relevant objects necessitates the use of LiDAR(s), cameras, or both. Another limitation comes in the form of limited precision in the absence of an RTK fix. Accuracy of one meter, for example, is not enough for a fruit-picking robot in an orchard, nor is it enough to navigate on a road or even the roadside pavement.
In summary, the claim isn't that EKF-based sensor fusion of GPS and IMU can replace perception sensors, but simply that appropriate sensor fusion allows a robot to do much better than it would without it.
Looking for help in turning outdoor vehicles autonomous?
Reach out to us at Black Coffee Robotics!