Suction-based Grasp Point Estimation in Cluttered Environment for Robotic Manipulator Using Deep Learning-based Affordance Map

Utomo, Tri Wahyu and Cahyadi, Adha Imam and Ardiyanto, Igi (2021) Suction-based Grasp Point Estimation in Cluttered Environment for Robotic Manipulator Using Deep Learning-based Affordance Map. International Journal of Automation and Computing, 18 (2). 277 – 287. ISSN 14768186

Full text not available from this repository.

Abstract

Perception and manipulation tasks involving highly cluttered objects have become increasingly in demand for robotic manipulators, as modern industrial environments call for more efficient problem-solving methods. However, most available methods for such cluttered tasks underperform, mainly because they cannot adapt to changes in the environment or in the handled objects. Here, we propose a new, near real-time approach to suction-based grasp point estimation in a highly cluttered environment by employing an affordance-based approach. Compared to the state-of-the-art, our proposed method offers two distinctive contributions. First, we use a modified deep neural network backbone for semantic segmentation to classify the pixels of the input red, green, blue and depth (RGBD) image, producing an affordance map: a pixel-wise probability map representing the probability of a successful grasping action at each pixel region. Second, we incorporate high-speed semantic segmentation into the system, which gives our solution a lower computational time. This approach needs no prior knowledge or models of the objects, since it entirely removes the pose estimation and object recognition steps used by most current approaches and instead adopts a grasp-first, recognize-later strategy, making it object-agnostic. The system was designed for household objects, but it can be easily extended to any kind of object provided the right dataset is used for training the models. Experimental results show the benefit of our approach, which achieves a precision of 88.83, compared to the 83.4 precision of the current state-of-the-art. © 2021, Institute of Automation, Chinese Academy of Sciences and Springer-Verlag GmbH Germany, part of Springer Nature.
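The grasp-point selection step described in the abstract can be sketched as follows: given the pixel-wise affordance map produced by the segmentation network, smooth it slightly (so that a coherent graspable region is preferred over an isolated noisy pixel) and return the coordinates of the highest-probability pixel. The smoothing kernel and its size are illustrative assumptions, not details from the paper.

```python
import numpy as np

def select_grasp_point(affordance_map, smooth_kernel=3):
    """Pick the pixel with the highest predicted grasp success probability.

    affordance_map: H x W array of per-pixel success probabilities
    (as produced by the affordance network described in the paper).
    The box-filter smoothing here is an illustrative assumption.
    Returns (row, col) of the selected grasp point.
    """
    k = smooth_kernel
    h, w = affordance_map.shape
    # Pad with edge values so border pixels get full k x k windows.
    padded = np.pad(affordance_map, k // 2, mode="edge")
    smoothed = np.empty_like(affordance_map, dtype=float)
    for i in range(h):
        for j in range(w):
            # Mean over the k x k neighbourhood centred at (i, j).
            smoothed[i, j] = padded[i:i + k, j:j + k].mean()
    # Grasp point = argmax of the smoothed affordance map.
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)
```

In a full pipeline this would run on the network's output for each RGBD frame; the suction end-effector is then driven to the returned pixel's 3D location recovered from the depth channel.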

Item Type: Article
Additional Information: Cited by: 12
Uncontrolled Keywords: Deep neural networks; Flexible manipulators; Image segmentation; Industrial manipulators; Industrial robots; Object recognition; Pixels; Problem solving; Robotics; Semantics; Cluttered environments; Computational time; Industrial environments; Manipulation task; Probability maps; Problem Solving methods; Robotic manipulators; Semantic segmentation; Deep learning
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Divisions: Faculty of Engineering > Electronics Engineering Department
Depositing User: Sri JUNANDI
Date Deposited: 25 Oct 2024 01:41
Last Modified: 25 Oct 2024 01:41
URI: https://ir.lib.ugm.ac.id/id/eprint/8606
