
Frontiers of Information Technology & Electronic Engineering, 2023, Volume 24, Issue 10. doi: 10.1631/FITEE.2200502

Attention-based efficient robot grasp detection network

Affiliation(s): School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China; College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China; Shanghai Key Laboratory of Modern Optical System, Shanghai 200093, China; Key Laboratory of Biomedical Optical Technology and Devices of Ministry of Education, Shanghai 200093, China; Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 201210, China

Received: 2022-10-23; Accepted: 2023-10-27; Available online: 2023-10-27


Abstract

To balance the inference speed and detection accuracy of a grasp detection algorithm, both of which are important for robot grasping tasks, we propose an encoder-decoder structured pixel-level grasp detection network named the attention-based efficient grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance detailed information, and three channel attention modules are introduced in the decoder stages to extract more semantic information. Several lightweight and efficient DenseBlocks are used to connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily mean a high-quality grasp configuration, and might even cause a collision. This is because traditional IoU loss calculation methods treat the center part of the predicted rectangle as having the same importance as the area around the grippers. We design a new IoU loss calculation method based on an hourglass box matching mechanism, which establishes a good correspondence between high IoUs and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. Its inference speed reaches 43.5 frames per second with only about 1.2×10^6 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system, where it performs grasping well. Code is available at https://github.com/robvincen/robot_gradet.
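The abstract does not specify the internal design of the spatial and channel attention modules, so the PyTorch sketch below uses common stand-ins (SE-style channel attention and a CBAM-style spatial gate) purely to illustrate where such gates sit in an encoder-decoder grasp network; it is not the authors' implementation.

```python
# Illustrative sketch only: the abstract names spatial attention (encoder)
# and channel attention (decoder) but not their designs. SE-style and
# CBAM-style gates are assumed here as plausible stand-ins.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (decoder stages)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                               # per-channel weight in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                         # reweight channels

class SpatialAttention(nn.Module):
    """CBAM-style spatial gate (encoder stages)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)           # channel-wise average
        max_map = x.max(dim=1, keepdim=True).values     # channel-wise max
        gate = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * gate                                 # reweight spatial positions
```

Both gates preserve the input shape, so they can be dropped into existing encoder or decoder stages without changing the surrounding layers. Likewise, the hourglass box matching mechanism is only described at a high level; one plausible reading is a weighted IoU in which pixels near the two gripper ends of the rectangle count more than pixels at its center. The helpers below (hourglass_weights and weighted_iou, both hypothetical names) sketch that idea on rasterized masks and should be read as an interpretation, not the paper's exact method.

```python
import numpy as np

def hourglass_weights(h: int, w: int) -> np.ndarray:
    # Weight rises from 0 at the rectangle's center line to 1 at the two
    # gripper ends, so overlap near the grippers dominates the score.
    x = np.abs(np.linspace(-1.0, 1.0, w, dtype=np.float32))
    return np.tile(x, (h, 1))

def weighted_iou(pred: np.ndarray, gt: np.ndarray, weights: np.ndarray) -> float:
    # pred and gt are {0, 1} masks rasterized in the ground-truth
    # rectangle's frame; each pixel contributes its weight instead of 1.
    inter = (np.minimum(pred, gt) * weights).sum()
    union = (np.maximum(pred, gt) * weights).sum()
    return float(inter / max(union, 1e-8))
```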
