New method developed by GIST researchers allows robots to detect visible, partially occluded, and hidden objects in cluttered scenes

Image: Just like human vision, the new method allows the detection of visible, partially occluded, and unseen objects in a single frame.

Credit: Gwangju Institute of Science and Technology (GIST)

Robotic vision has come a long way, reaching a level of sophistication that supports complex and demanding tasks such as autonomous driving and object manipulation. However, it still struggles to identify individual objects in cluttered scenes where some objects are partially or completely hidden behind others. Typically, when dealing with such scenes, robotic vision systems are trained to identify occluded objects based only on their visible parts. Such training, however, requires large datasets of objects and can be quite tedious.

Associate Professor Kyoobin Lee and Ph.D. student Seunghyeok Back from the Gwangju Institute of Science and Technology (GIST) in Korea encountered this problem while developing an artificial intelligence system to identify and sort objects in cluttered scenes. “We expect a robot to recognize and manipulate objects it has never encountered before or been trained to recognize. In reality, however, we have to manually collect and label the data one by one, because the generalization of deep neural networks highly depends on the quality and quantity of the training dataset,” says Mr. Back.

In a new study accepted at the 2022 IEEE International Conference on Robotics and Automation (ICRA), a research team led by Prof. Lee and Mr. Back developed a model called “unseen object amodal instance segmentation” (UOAIS) to detect occluded objects in cluttered scenes. To teach the model to infer object geometry, they built a dataset of 45,000 photorealistic synthetic images containing depth information. With this (relatively limited) training data, the model was able to detect a variety of occluded objects. When it encounters a cluttered scene, it first picks out the object of interest and then determines whether that object is occluded by segmenting it into a “visible mask” and an “amodal mask”.
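
The sketch below is a minimal illustration, not the authors' implementation, of how a per-object visible mask and amodal mask can be compared to decide whether the object is occluded. The function name, mask format, and occlusion threshold are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of how a visible mask and an
# amodal mask can be compared to decide whether an object is occluded.
import numpy as np

def occlusion_from_masks(visible_mask: np.ndarray,
                         amodal_mask: np.ndarray,
                         threshold: float = 0.05):
    """Return the hidden region, the occlusion rate, and an occluded flag.

    visible_mask, amodal_mask : boolean HxW arrays for one object instance.
    threshold : hypothetical minimum fraction of hidden area to call the
                object "occluded"; the paper's exact criterion may differ.
    """
    # The amodal mask covers the full object extent, including hidden parts,
    # so the occluded region is whatever the visible mask does not explain.
    hidden_region = np.logical_and(amodal_mask, np.logical_not(visible_mask))
    amodal_area = max(int(amodal_mask.sum()), 1)   # avoid division by zero
    occlusion_rate = hidden_region.sum() / amodal_area
    return hidden_region, occlusion_rate, occlusion_rate > threshold

# Toy example: a 3x3 object whose right column is hidden behind another object.
amodal = np.zeros((5, 5), dtype=bool)
amodal[1:4, 1:4] = True            # full (amodal) extent of the object
visible = amodal.copy()
visible[:, 3:] = False             # rightmost column is covered by an occluder

_, rate, occluded = occlusion_from_masks(visible, amodal)
print(f"occlusion rate = {rate:.2f}, occluded = {occluded}")  # ~0.33, True
```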

The researchers were excited about the results. “Previous methods were either limited to detecting only specific types of objects, or detected only the visible regions without explicitly reasoning over occluded areas. Our method, by contrast, can infer the hidden regions of occluded objects like a human vision system. This reduces the data collection effort while improving performance in complex environments,” commented Mr. Back.

To enable “occlusion reasoning” in their system, the researchers introduced a “hierarchical occlusion modeling” (HOM) scheme, which assigns a hierarchy to the fusion of multiple extracted features and the order in which they are predicted. By testing their model against three benchmarks, they validated the effectiveness of the HOM scheme, which achieved state-of-the-art performance.
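
The following PyTorch sketch conveys the general idea of hierarchically ordered predictions, under the assumption that later outputs (amodal mask, occlusion status) reuse the features computed for earlier ones (the visible mask). The layer sizes, branch structure, and fusion scheme are illustrative assumptions and are not taken from the paper.

```python
# Conceptual sketch of hierarchically ordered prediction heads: each later
# prediction is conditioned on the features of the earlier ones. This is an
# illustration of the idea, not the paper's architecture.
import torch
import torch.nn as nn

class HierarchicalHead(nn.Module):
    def __init__(self, in_ch: int = 256):
        super().__init__()
        self.visible_branch = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.amodal_branch = nn.Conv2d(2 * in_ch, in_ch, 3, padding=1)
        self.occ_branch = nn.Conv2d(3 * in_ch, in_ch, 3, padding=1)
        self.visible_mask = nn.Conv2d(in_ch, 1, 1)   # visible-region logits
        self.amodal_mask = nn.Conv2d(in_ch, 1, 1)    # full-extent logits
        self.occ_score = nn.Linear(in_ch, 1)         # occluded / not occluded

    def forward(self, roi_feat: torch.Tensor):
        # 1) Predict the visible region first.
        f_vis = torch.relu(self.visible_branch(roi_feat))
        visible = self.visible_mask(f_vis)
        # 2) The amodal branch sees both the region feature and the visible
        #    feature, so it can extrapolate the hidden extent of the object.
        f_amo = torch.relu(self.amodal_branch(torch.cat([roi_feat, f_vis], dim=1)))
        amodal = self.amodal_mask(f_amo)
        # 3) Occlusion classification fuses everything predicted so far.
        f_occ = torch.relu(self.occ_branch(torch.cat([roi_feat, f_vis, f_amo], dim=1)))
        occluded = self.occ_score(f_occ.mean(dim=(2, 3)))
        return visible, amodal, occluded

# Toy usage with a batch of 2 region features of size 256x14x14.
head = HierarchicalHead()
v, a, o = head(torch.randn(2, 256, 14, 14))
print(v.shape, a.shape, o.shape)  # (2,1,14,14) (2,1,14,14) (2,1)
```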

The researchers are optimistic about the future prospects of their method. “Perceiving unseen objects in a cluttered environment is essential for amodal robotic manipulation. Our UOAIS method could serve as a baseline on this front,” says Mr. Back.

This certainly looks like a giant leap for robotic vision!

***

Reference

Title of original paper: Unseen Object Amodal Instance Segmentation via Hierarchical Occlusion Modeling

DOI: https://doi.org/10.48550/arXiv.2109.11103

Authors: Seunghyeok Back, Joosoon Lee, Taewon Kim, Sangjun Noh, Raeyoung Kang, Seongho Bak, Kyoobin Lee

Affiliations: School of Integrated Technology, Gwangju Institute of Science and Technology, Republic of Korea

About Gwangju Institute of Science and Technology (GIST)

The Gwangju Institute of Science and Technology (GIST) was founded in 1993 by the Korean government as a research-oriented graduate school to help ensure Korea’s continued economic growth and prosperity by developing science and advanced technologies, with an emphasis on collaboration with the international community. In 2010, GIST launched a highly regarded undergraduate science program, which has since become a model for other science universities in Korea. To learn more about GIST and its exciting opportunities for researchers and students, please visit: http://www.gist.ac.kr/.

About the authors

Kyoobin Lee is Associate Professor and Director of the GIST AI Lab. His group is developing AI-based robotic vision and deep learning-based biomedical analysis methods. Before joining GIST, he obtained a Ph.D. in mechatronics from KAIST and completed a postgraduate training program at the Korea Institute of Science and Technology (KIST). The author can be contacted at [email protected]

Seunghyeok Back is a Ph.D. student at the GIST AI Lab. His research focuses on robotic vision for unseen object manipulation using deep neural networks and Sim2Real transfer. He earned a bachelor’s degree in mechanical engineering from GIST. The author can be contacted at [email protected]


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of press releases posted on EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.
