# Localization, State Estimation, and Frame Discipline
Source: ros2-copilot-skills catalog
## Why This Matters
Navigation quality is capped by state-estimation quality. If the `map`, `odom`, `base_link`, and sensor frames are conceptually mixed together, every downstream subsystem becomes harder to tune. Many robots fail not because AMCL, SLAM Toolbox, or robot_localization are bad, but because the frame responsibilities were never made explicit.
## Distilled Takeaways
- REP 105 is the baseline mental model: global positioning owns `map -> odom`, odometry owns `odom -> base_link`, and the rest of the robot should hang off `base_link` through static or controlled transforms.
- Wheel odometry is usually a local motion estimate, not a global truth source.
- `robot_localization` becomes much easier to configure when you think in terms of which physical dimensions each source should contribute, rather than turning booleans on until the output looks less wrong.
- Fusing the same physical quantity from two sources is often worse than leaving one measurement out.
- Bad sensor-frame mounting assumptions can look like filter instability, planner drift, or costmap corruption.
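The REP 105 chain described above can be sketched with plain 2D transform composition: `map -> base_link` is never broadcast directly, it falls out of composing the `map -> odom` correction (published by the global localizer) with the smooth `odom -> base_link` odometry. A minimal sketch, with made-up transform values:

```python
import math

def compose(parent, child):
    """Compose two 2D transforms (x, y, yaw): apply child in parent's frame."""
    px, py, pyaw = parent
    cx, cy, cyaw = child
    return (
        px + math.cos(pyaw) * cx - math.sin(pyaw) * cy,
        py + math.sin(pyaw) * cx + math.cos(pyaw) * cy,
        pyaw + cyaw,
    )

# map -> odom: the (possibly jumping) global correction owned by AMCL / SLAM
map_to_odom = (1.0, 0.5, math.pi / 2)

# odom -> base_link: continuous local odometry from wheels / IMU fusion
odom_to_base = (2.0, 0.0, 0.0)

# map -> base_link is derived, never published as its own transform
map_to_base = compose(map_to_odom, odom_to_base)
print(map_to_base)  # robot pose in the map frame
```

When the global localizer re-corrects, only `map -> odom` jumps; `odom -> base_link` stays smooth, which is exactly why local controllers plan against `odom` rather than `map`.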
## Practical Value
- Start with frame discipline before touching covariance matrices.
- Fuse only the dimensions each sensor genuinely measures well.
- Verify wheel geometry, IMU orientation, and sensor timestamps before tuning recovery behaviors or controllers.
- Use this page as a bridge between URDF work, hardware calibration, and navigation tuning.
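The "fuse only the dimensions each sensor genuinely measures well" advice maps directly onto `robot_localization`'s per-source 15-element boolean vectors (x, y, z, roll, pitch, yaw, then the velocities, then the linear accelerations). A minimal sketch for a differential-drive base, assuming placeholder topic names `/wheel/odometry` and `/imu/data`:

```yaml
# Sketch of an ekf.yaml fragment for robot_localization (Jazzy-era layout).
ekf_filter_node:
  ros__parameters:
    frequency: 30.0
    two_d_mode: true
    odom_frame: odom
    base_link_frame: base_link
    world_frame: odom            # this node owns odom -> base_link only

    # Wheel odometry: trust only the velocities it actually measures.
    odom0: /wheel/odometry
    odom0_config: [false, false, false,    # x, y, z
                   false, false, false,    # roll, pitch, yaw
                   true,  false, false,    # vx, vy, vz
                   false, false, true,     # vroll, vpitch, vyaw
                   false, false, false]    # ax, ay, az

    # IMU: yaw rate only here; leaving vyaw off in odom0 (or here) would
    # avoid double-fusing the same quantity if both sources report it.
    imu0: /imu/data
    imu0_config: [false, false, false,
                  false, false, false,
                  false, false, false,
                  false, false, true,
                  false, false, false]
```

The exact booleans depend on the robot; the point is that each `true` is a deliberate claim about what the sensor measures, not a knob turned until the output looks plausible.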
## When to Read the Original Source
Go to the original skills when you need detailed 15-element filter vectors, wheel-odometry math, AMCL tuning parameters, or explicit examples for combining encoders, IMU, and additional odometry sources in a Jazzy robot.