Published on October 12, 2017 by Microsoft Research

I will present the main algorithms for achieving robust, 6-DOF state estimation for mobile robots using passive sensing. Since cameras alone are not robust to high-speed motion and high-dynamic-range scenes, I will describe how IMUs and event-based cameras can be fused with visual information to achieve higher accuracy and robustness. I will therefore dig into the topic of event-based cameras, which are revolutionary sensors with microsecond latency, a very high dynamic range, and a measurement update rate almost a million times faster than that of standard cameras. Finally, I will show concrete applications of these methods in the autonomous navigation of vision-controlled drones.
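The abstract does not specify a fusion algorithm, but the reason an IMU complements a camera is that inertial measurements can propagate the 6-DOF state at high rate between (or during) camera frames. The sketch below is a minimal, illustrative IMU propagation step, not the method from the talk: it integrates one accelerometer/gyroscope sample to update position, velocity, and orientation, using a Rodrigues-formula rotation update. All function and variable names here are my own assumptions for illustration.

```python
import numpy as np

# Illustrative only: a single IMU propagation step for a 6-DOF state.
# Gravity vector and frame conventions are assumptions, not from the talk.
GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity (z up)

def propagate_imu(p, v, R, acc, gyro, dt):
    """Propagate position p, velocity v, and body-to-world rotation R
    by one IMU sample (body-frame accelerometer acc, gyroscope gyro)."""
    # Accelerometer measures specific force; rotate to the world frame
    # and add gravity to recover true acceleration.
    a_world = R @ acc + GRAVITY
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    # Rotation update via the exponential map (Rodrigues formula).
    theta = gyro * dt
    angle = np.linalg.norm(theta)
    if angle < 1e-12:
        dR = np.eye(3)
    else:
        k = theta / angle  # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
        dR = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    R_new = R @ dR
    return p_new, v_new, R_new
```

In a visual-inertial pipeline, steps like this run at IMU rate (hundreds of Hz), while camera or event measurements periodically correct the drift that pure integration accumulates.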
