Explore the publication “Visually Guided Model Predictive Robot Control via 6D Object Pose Localization and Tracking” by our partners LAAS-CNRS and the Czech Institute of Informatics, Robotics and Cybernetics.
This study focuses on enhancing the capabilities of robots to manipulate dynamically moving objects using camera-based systems. The goal is to address scenarios where robots need to interact with objects that are in motion, such as grasping items on a conveyor belt or collaborating with humans in dynamic environments.

We propose a novel visual perception module that integrates learning-based 6D object pose localization with a high-rate model-based 6D pose tracker. This enables rapid and accurate estimation of the 6D pose of moving objects from video input, which is crucial for ensuring smooth and stable robot control. Additionally, we introduce a visually guided robot arm controller that combines the visual perception module with a torque-based model predictive control algorithm. This allows for asynchronous integration of visual and proprioceptive signals, ensuring robust and precise control of robot arm movements in dynamic environments.

Experimental validation demonstrates the effectiveness of our approach, particularly in scenarios involving real-time interaction with dynamically moving objects using a 7 DoF robot arm.
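To illustrate the asynchronous perception-and-control pattern described above, here is a minimal, purely hypothetical 1-D sketch (not the authors' code): a slow but accurate "localizer" fires only every few control ticks, a fast "tracker" propagates the object pose in between, and a simple proportional controller (standing in for the torque-based MPC) always consumes the latest estimate, so the control loop never waits on the slow perception step. All names and numbers are illustrative assumptions.

```python
def run_visual_servoing(steps=200, dt=0.01, loc_period=20):
    """Toy stand-in for the pipeline: an object moves at constant
    velocity, a slow 'localizer' yields an exact pose every
    loc_period ticks, a fast 'tracker' propagates it in between with
    a velocity estimated from successive localizations, and a
    proportional controller drives the arm toward the latest estimate.
    Returns the final (object, arm) positions."""
    obj_vel = 0.5      # true object velocity (unknown to the controller)
    obj = 0.0          # true object position
    arm = -1.0         # arm end-effector position
    est = 0.0          # tracked object pose estimate
    est_vel = 0.0      # velocity inferred from two successive localizations
    last_fix = None
    for k in range(steps):
        obj += obj_vel * dt                 # object keeps moving
        if k % loc_period == 0:             # slow, accurate localization tick
            if last_fix is not None:
                est_vel = (obj - last_fix) / (loc_period * dt)
            last_fix = obj
            est = obj                       # re-initialize the tracker
        else:
            est += est_vel * dt             # high-rate tracking update
        arm += 4.0 * (est - arm) * dt       # controller tracks the estimate
    return obj, arm
```

Because the tracker keeps publishing pose updates between localizations, the arm follows the moving object with only a small steady-state lag instead of jumping each time the slow localizer fires; the paper's actual system does this with full 6D poses and a torque-level MPC rather than this toy proportional law.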
Read the publication here.
Find all AGIMUS publications here.