Recent advances in technology have enabled robotic platforms to carry out some tasks currently performed by humans. Of particular relevance is that this technology allows tasks to be performed in hazardous environments. Underwater environments fall into this category, especially at great depths, which are inherently risky for divers. One capability recently within reach of autonomous underwater robots is grasping objects from the sea floor. To accomplish this task, the target must be located in the scene and its pose estimated in order to guide the robot manipulator. In this work, we present an object detection and pose estimation pipeline that guides target grasping during an intervention operation. The proposed algorithm consists of two stages: detection and tracking. The detection stage estimates the object pose for the first time, or whenever the pose is lost by the second stage. The tracking stage, which is less computationally demanding, corrects small odometry inaccuracies and adjusts the current pose to better align with the object. Finally, we report the results of a number of experiments carried out by an underwater robot operating in two environments, a water tank and the sea, using a stereo camera as the input sensor.
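The two-stage structure described above (a slow, full-scene detection step that runs only when no pose is available, and a cheap tracking step that refines the previous pose otherwise) can be sketched as follows. The class name, callback signatures, and loss convention are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a detect/track pose-estimation pipeline, assuming:
#  - `detector(frame)` is an expensive full-scene pose estimator,
#  - `tracker(frame, pose)` cheaply refines the previous pose,
#  - both return None to signal that the pose could not be recovered.
class DetectTrackPipeline:
    def __init__(self, detector, tracker):
        self.detector = detector
        self.tracker = tracker
        self.pose = None  # current estimate; None means the object is lost

    def process(self, frame):
        if self.pose is None:
            # Detection stage: estimate the pose from scratch
            # (first frame, or re-detection after tracking loss).
            self.pose = self.detector(frame)
        else:
            # Tracking stage: refine the previous pose to correct
            # small odometry drift; None triggers re-detection next frame.
            self.pose = self.tracker(frame, self.pose)
        return self.pose
```

A hypothetical usage example, with stub callbacks standing in for the real stereo-based detector and tracker:

```python
pipeline = DetectTrackPipeline(
    detector=lambda frame: (0.0, 0.0),
    tracker=lambda frame, pose: None if frame == "lost" else (pose[0] + 0.1, pose[1]),
)
pipeline.process("frame1")  # detection runs: (0.0, 0.0)
pipeline.process("frame2")  # tracking refines: (0.1, 0.0)
pipeline.process("lost")    # tracking fails: None
pipeline.process("frame3")  # detection runs again: (0.0, 0.0)
```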