We are excited to announce our new paper accepted to IEEE Robotics and Automation Letters (RA-L):
ShelfAware: Real-Time Visual-Inertial Semantic Localization in Quasi-Static Environments with Low-Cost Sensors
by Shivendra Agrawal, Jake Brawer, Ashutosh Naik, Alessandro Roncone, and Bradley Hayes.
This work presents ShelfAware, a semantic particle filter for robust global localization that treats scene semantics as statistical evidence over object categories rather than as fixed-quantity landmarks. Designed for quasi-static environments that suffer from repetitive geometry, dynamic clutter, and perceptual noise, ShelfAware provides fast, targeted hypothesis generation on low-cost, vision-only hardware.
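To give a flavor of the distributional idea, here is a minimal sketch of how a particle's weight can depend on a category-to-category similarity matrix rather than exact landmark matches. All names (`semantic_likelihood`, `S`, the example categories) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def semantic_likelihood(detected_hist, predicted_hist, S):
    """Soft match between two category distributions.

    detected_hist : (K,) normalized histogram of detected object categories
    predicted_hist: (K,) histogram expected at the particle's hypothesized pose
    S             : (K, K) semantic similarity matrix over categories
    """
    # Distributional evidence: a detection of a *similar* category still
    # contributes weight, unlike brittle fixed-landmark matching.
    return float(detected_hist @ S @ predicted_hist)

# Toy example with 3 categories (cereal, soda, produce): cereal and soda
# shelves are semantically closer to each other than either is to produce.
S = np.array([[1.0, 0.6, 0.1],
              [0.6, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
detected = np.array([0.7, 0.3, 0.0])   # current detections
hyp_a = np.array([0.6, 0.4, 0.0])      # particle near the cereal aisle
hyp_b = np.array([0.0, 0.1, 0.9])      # particle near produce
w_a = semantic_likelihood(detected, hyp_a, S)
w_b = semantic_likelihood(detected, hyp_b, S)
assert w_a > w_b  # the cereal-aisle hypothesis receives more weight
```

In a full filter, a score like this would be fused multiplicatively with the depth likelihood before resampling.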
Key highlights include:
- A fusion of depth likelihood with a category-centric semantic similarity matrix.
- A precomputed bank of semantic viewpoints to perform inverse semantic proposals inside Monte Carlo Localization (MCL).
- Evaluation in an operational 3,500 sq. ft. grocery store using an open-vocabulary vision pipeline, significantly outperforming geometric and fixed-quantity semantic baselines.
- A 97% global localization success rate with high tracking success across dynamic occlusion conditions in a mock retail environment.
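The inverse-proposal idea in the second highlight can be sketched as a lookup over a precomputed bank of semantic viewpoints: given the current detections, retrieve stored poses whose semantic signatures match, and inject particles there instead of relying only on diffusion from the motion model. The class and method names (`ViewpointBank`, `propose_poses`) and the cosine-similarity scoring are assumptions for illustration, not the authors' code:

```python
import numpy as np

class ViewpointBank:
    """Precomputed (pose, semantic signature) pairs built offline from a map."""

    def __init__(self, poses, signatures):
        self.poses = np.asarray(poses, dtype=float)           # (N, 3): x, y, heading
        self.signatures = np.asarray(signatures, dtype=float) # (N, K) category histograms

    def propose_poses(self, detected_hist, top_k=5):
        """Return the top_k stored poses whose signature best matches the
        detected category histogram (cosine similarity), for particle injection."""
        d = detected_hist / (np.linalg.norm(detected_hist) + 1e-9)
        sig = self.signatures / (
            np.linalg.norm(self.signatures, axis=1, keepdims=True) + 1e-9)
        scores = sig @ d
        best = np.argsort(scores)[::-1][:top_k]
        return self.poses[best], scores[best]

# Toy bank with three viewpoints and three categories.
bank = ViewpointBank(
    poses=[(0.0, 0.0, 0.0), (5.0, 0.0, 1.6), (10.0, 2.0, 3.1)],
    signatures=[[0.8, 0.2, 0.0], [0.1, 0.1, 0.8], [0.5, 0.5, 0.0]],
)
poses, scores = bank.propose_poses(np.array([0.7, 0.3, 0.0]), top_k=2)
```

Because the bank is indexed by semantics rather than geometry, such proposals can disambiguate visually repetitive aisles that defeat purely geometric MCL.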
By modeling semantics distributionally and leveraging inverse proposals, ShelfAware resolves geometric aliasing, bringing us closer to robust, infrastructure-free deployment of mobile and assistive robots in dynamic real-world environments.