Research and development in autonomous navigation, sensor fusion, and uncertainty quantification for mission-critical systems. We build estimators whose reported uncertainty actually means something.
Every autonomous vehicle—drone, spacecraft, ground robot—runs a state estimator that produces two outputs: a best guess of where it is, and a stated uncertainty about how wrong that guess could be. The position estimate gets all the attention. The uncertainty estimate does the harder job: it tells downstream systems how much to trust the next sensor reading, whether to abort a mission, and whether a 95% confidence corridor actually contains the truth 95% of the time.
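The interplay between the two outputs is easy to see in a single Kalman measurement update. The sketch below is a generic, minimal illustration (not the filter discussed on this page) with hypothetical numbers; the point is that the stated uncertainty directly sets the gain, so a wrong covariance corrupts every future fusion step.

```python
# Minimal sketch of one Kalman measurement update: the estimator's two
# outputs are a state estimate and a stated uncertainty (covariance).
# All numbers are hypothetical.
import numpy as np

x_prior = np.array([100.0])      # estimated position, meters
P_prior = np.array([[25.0]])     # prior variance: "could be off by ~5 m"

z = np.array([103.0])            # incoming GPS fix, meters
R = np.array([[9.0]])            # GPS variance: ~3 m standard deviation
H = np.array([[1.0]])            # we measure position directly

# Kalman gain: how much to trust the measurement versus the prior.
# This is where the covariance does its job: a wrong P gives a wrong gain.
S = H @ P_prior @ H.T + R
K = P_prior @ H.T @ np.linalg.inv(S)

x_post = x_prior + K @ (z - H @ x_prior)
P_post = (np.eye(1) - K @ H) @ P_prior

print(x_post[0])     # between 100 and 103, weighted toward the better source
print(P_post[0, 0])  # smaller than both the prior and measurement variance
```

Note that the position output depends on the covariance through the gain: if `P_prior` is overstated the filter over-trusts the measurement, and if it is understated the filter ignores it.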
In practice, the stated uncertainty drifts away from reality every time the filter processes a measurement. The standard fix—a mathematical correction applied after each update—is prescribed by the textbooks and implemented in every major open-source filter. The textbooks disagree on which correction convention to use. Both choices introduce a systematic distortion that grows with every update.
We developed a method that maintains the filter's covariance correctly through every measurement update—without applying any post-update correction. No convention choice, no tuning, no accumulated distortion.
We tested it on a 100-mile drone simulation: a square flight path with commercial-grade IMU dead reckoning and periodic GPS fixes. Four filter variants saw the same sensor data and produced the same position estimate. They differed only in how they maintained the covariance—the stated uncertainty.
Our method matched the unbiased baseline to floating-point precision at every one of 99 GPS events, across all flight phases: straight legs, sharp turns, long dead-reckoning intervals.
One standard correction convention produced numerically singular covariances by mid-flight. Earlier events showed uncertainty inflated to millions of meters for a drone that was within 7 m of its GPS fix.
The other standard correction stayed finite but grew steadily. By the end of the flight, its stated uncertainty was off by 21 km—for a drone that was 7 m from truth.
Not "close." Not "within tolerance." Bitwise identical to the unbiased reference, regardless of how far the linearization point had drifted from truth.
| Question | Standard reset methods | Our method |
|---|---|---|
| Does the filter locate the drone correctly? | Yes — position estimate is fine | Yes — identical estimate |
| Does it know how uncertain it is? | No — stated uncertainty diverges from reality after maneuvers | Yes — matches truth at every event |
| Will the next sensor reading be fused correctly? | No — wrong covariance → wrong Kalman gain → biased future estimates | Yes — correct gain at every update |
| Can autonomy trust the filter's "I'm lost" signal? | No — false alarms or missed warnings depending on convention | Yes — calibrated throughout |
| Does "95% confidence" actually mean 95%? | No — the guarantee degrades silently with each update | Yes — NEES-consistent across all phases |
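The last row of the table refers to the normalized estimation error squared (NEES), the standard consistency test for whether a filter's stated confidence matches reality. A hedged sketch of how such a check works, on synthetic numbers (not data from the flight test), is below; it requires ground truth, so it is run in simulation.

```python
# Hedged sketch of a NEES consistency check -- the standard test behind
# "does 95% confidence actually mean 95%?". Requires simulated ground
# truth. All numbers here are synthetic, not from the flight test.
import numpy as np

rng = np.random.default_rng(0)
dim = 2            # 2D position error, for illustration
n_events = 99      # e.g., one check per GPS event

nees = []
for _ in range(n_events):
    P = np.diag([4.0, 4.0])                           # stated covariance
    err = rng.multivariate_normal(np.zeros(dim), P)   # truth minus estimate
    nees.append(err @ np.linalg.solve(P, err))

avg_nees = np.mean(nees)

# For a consistent filter, each NEES sample is chi-square distributed
# with `dim` degrees of freedom, so the average should be close to `dim`.
# A crude three-sigma acceptance band (variance of the mean = 2*dim/N):
band = 3.0 * np.sqrt(2.0 * dim / n_events)
consistent = abs(avg_nees - dim) < band
print(round(float(avg_nees), 2), consistent)
```

An overconfident filter drives the average NEES well above the state dimension; an underconfident one drives it below.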
Standard navigation filters correct the covariance after every measurement update. The textbooks prescribe two conventions for this correction. Both introduce a systematic distortion that grows with the size of the update and the accumulated uncertainty—one makes the filter overconfident, the other underconfident. Neither is correct.
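In the error-state filter literature, the post-update correction is commonly framed as a covariance reset of the form P ← G P Gᵀ, and textbooks differ on the reset Jacobian G. The sketch below illustrates that convention gap for an attitude error-state, using two choices that appear in common ESKF formulations (identity versus a first-order Jacobian); it illustrates the disagreement the passage describes, not the proprietary fix.

```python
# Hedged illustration of two textbook reset conventions for an attitude
# error-state covariance after a measurement update. Generic ESKF-style
# math with synthetic numbers; not the method described on this page.
import numpy as np

def skew(v):
    """Cross-product matrix [v]x of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Post-update attitude-error covariance and the error correction that
# was just absorbed into the nominal state (synthetic values).
P = np.diag([1e-4, 1e-4, 4e-4])          # rad^2
dtheta = np.array([0.02, -0.01, 0.05])   # rad

# Convention A: identity reset Jacobian -- covariance left unchanged.
G_a = np.eye(3)
P_a = G_a @ P @ G_a.T

# Convention B: first-order reset Jacobian G = I - (1/2)[dtheta]x.
G_b = np.eye(3) - 0.5 * skew(dtheta)
P_b = G_b @ P @ G_b.T

# The two conventions disagree by terms proportional to |dtheta|:
# small per update, but the correction is applied at every update.
print(np.max(np.abs(P_a - P_b)))
```

The per-update disagreement scales with the size of the absorbed correction, which is why long dead-reckoning intervals and sharp maneuvers make the divergence between conventions visible.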
Maneuvers make it worse. Sharp turns couple rotational uncertainty into the position estimate, amplifying the distortion by orders of magnitude in a single event. The result is a filter that locates the vehicle correctly but no longer knows how uncertain it is, and the error compounds silently with every subsequent sensor reading.
Our approach eliminates the distortion at its source. The details are proprietary, but the result is measurable: calibrated uncertainty at every update, on every trajectory, with no post-hoc correction and no tuning parameters.
In practice, the distortion from standard correction methods is proportional to how far the state has drifted since the last sensor update. If you update at camera rate with a good GPS signal, each correction is tiny and the per-update error is negligible. This is why the published benchmarks never show the problem—they are run in sensor-rich lab environments where updates arrive every few milliseconds.
But the distortion accumulates across updates, and when updates stop arriving the workaround disappears. One missed GPS fix and the drift begins. Ten seconds without GPS on a commercial-grade IMU produces a measurable distortion. A few minutes produces a significant one. Five minutes and the stated uncertainty is no longer a meaningful probability statement.
When the covariance becomes clearly inconsistent—eigenvalues diverging, outputs oscillating, confidence intervals that no longer contain the truth—practitioners reinitialize: they discard the covariance, replace it with a conservative prior, and essentially restart the filter. This stops the bleeding, but it destroys the cross-correlations between states that took hundreds of updates to build. The filter loses continuity and enters a transient period where it is simultaneously overconfident in some directions and underconfident in others.
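What reinitialization destroys is easy to show in miniature. The sketch below uses synthetic numbers for a two-state (position, velocity) covariance: replacing a drifted matrix with a conservative diagonal prior discards exactly the off-diagonal cross-correlations the paragraph describes.

```python
# Sketch of the reinitialization workaround: swap a drifted covariance
# for a conservative diagonal prior. Synthetic numbers; note what is
# lost -- the off-diagonal cross-correlations built up over many updates.
import numpy as np

# Position/velocity covariance with strong cross-correlation.
P = np.array([[4.0, 1.8],
              [1.8, 1.0]])
corr = P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

# Reinitialize: conservative diagonal prior, correlations discarded.
P_reset = np.diag([25.0, 9.0])

print(corr)           # 0.9 before the reset
print(P_reset[0, 1])  # 0.0 after: the coupling is gone
```

With the coupling zeroed, the next measurement of one state can no longer correct the other, which is the source of the transient over/underconfidence described above.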
Some implementations use a softer version: periodically inflating the covariance diagonals or injecting artificial process noise. This prevents divergence but deliberately makes the filter less certain than it should be—degrading performance even when everything is working correctly.
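The cost of the softer workaround compounds too. A hedged sketch, with a hypothetical 2% inflation factor per cycle, shows how quickly deliberate underconfidence accumulates during a sensor gap:

```python
# Sketch of periodic covariance inflation: it prevents divergence but
# deliberately overstates uncertainty. The factor is a hypothetical
# tuning parameter, one of the knobs our method does not need.
import numpy as np

P = np.diag([1.0, 1.0])   # calibrated covariance at gap start
inflation = 1.02          # +2% per cycle, hypothetical

for _ in range(100):      # 100 cycles without a fix
    P = P * inflation     # some implementations add artificial Q instead

print(P[0, 0])            # roughly 7.2x the calibrated value
```

When the next fix finally arrives, that inflated covariance yields an oversized gain, so the filter over-trusts a single measurement instead of fusing it with its accumulated history.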
Our approach maintains calibrated uncertainty through arbitrarily long sensor gaps without reinitialization, without covariance inflation, and without tuning parameters. When the GPS comes back—or a camera, lidar, or magnetometer provides the next fix—the filter fuses it with the correct gain, as if the gap never happened. No restart. No transient. No lost state.
For GPS-rich environments with frequent updates, existing methods work well enough and the practical gap is small. For GPS-denied navigation, beyond-line-of-sight operations, contested electromagnetic environments, and long-duration autonomous missions, our method is the only one that stays calibrated through the gap.
Excel Solutions LLC is a technical R&D consultancy focused on autonomous navigation in challenging operational environments. We solve problems in perception, localization, and decision-making under uncertainty—particularly where standard methods quietly break down and the failure mode is a wrong confidence interval, not a wrong position.
Our work spans algorithm development and prototyping for inertial navigation, visual-inertial odometry, and SLAM; performance analysis and validation of estimation pipelines in simulation and field testing; sensor fusion architecture design combining IMU, GPS, lidar, camera, and radar; and research collaboration on novel problems in safety-critical autonomy. We operate at the intersection of differential geometry, applied statistics, and real hardware constraints.
Our technical white paper covers the operational gap, demonstrated results, and a roadmap for transition to fielded systems. Available on request.
Request White Paper