A robot learns to see water and pour it into a glass


A horse, a zebra and artificial intelligence helped researchers teach a robot to recognize water and pour it into a glass.

Water presents a tricky challenge for robots because it is clear. Robots have already learned how to pour water, but previous techniques like heating water and using a thermal camera or placing the glass in front of a checkerboard background don’t scale well to everyday life.

A simpler solution could allow robot servers to fill water glasses, robot pharmacists to measure and mix medicine, or robot gardeners to water plants.

Now, researchers have used AI and image translation to solve the problem.

Image translation algorithms use collections of images to train artificial intelligence to convert images from one style to another, such as turning a photo into a Monet-style painting or making a picture of a horse look like a zebra. For this research, the team used a method called contrastive learning for unpaired image-to-image translation, or CUT for short.
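For readers curious what the translation step looks like in practice, here is a minimal sketch in PyTorch. Everything in it is illustrative rather than the team's actual code: the tiny generator, the weight-file name and the image size are assumptions, and a real CUT generator is far deeper and is trained with an adversarial loss plus a patch-wise contrastive loss that preserves content without needing paired images.

```python
import torch
import torch.nn as nn

# Minimal stand-in for a CUT-style generator: an image-to-image network
# mapping one style (colored liquid) to another (transparent liquid).
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

generator = TinyGenerator()
# In practice you would load trained weights here, e.g.:
# generator.load_state_dict(torch.load("colored2clear.pt"))  # hypothetical file
generator.eval()

# A dummy 256x256 RGB frame standing in for a photo of colored liquid.
colored_frame = torch.rand(1, 3, 256, 256) * 2 - 1

with torch.no_grad():
    clear_frame = generator(colored_frame)  # transparent-liquid-style image
print(clear_frame.shape)  # torch.Size([1, 3, 256, 256])
```

The payoff of this setup is in the labels: colored liquid is trivial to segment by color, so translating those frames into transparent-looking ones yields training images whose water masks are already known, with no hand-labeling of droplets.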

“You need a way to tell the algorithm what the right and wrong answers are during the training phase of learning,” says David Held, an assistant professor at Carnegie Mellon University’s Robotics Institute. “However, data labeling can be time-consuming, especially when teaching a robot to pour water, for which the human may need to label individual water droplets in an image.”

Enter the horse and the zebra.

“Just as we can train a model to translate an image of a horse to look like a zebra, we can also train a model to translate a colored liquid image into a transparent liquid image,” Held explains. “We used this model to allow the robot to understand transparent liquids.”

A transparent liquid like water is difficult for a robot to see because the way it reflects, refracts and absorbs light varies with the background. To teach the computer to see different backgrounds through a glass of water, the team played YouTube videos behind a clear glass filled with water. Training the system this way allows the robot to pour water in a variety of real-world environments, regardless of where the robot is.
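Downstream, "understanding" the liquid amounts to producing a per-pixel water mask from a camera frame. Here is a rough sketch of what querying such a perception model might look like, again with a hypothetical tiny network standing in for the trained model and an assumed 0.5 decision threshold:

```python
import torch
import torch.nn as nn

# Hypothetical per-pixel liquid segmenter: given an RGB frame, predict
# the probability that each pixel shows (transparent) water. In the
# approach described above, such a model would be trained on translated
# images whose water masks are known from the colored-liquid footage.
class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # per-pixel water probability

segmenter = TinySegmenter().eval()
frame = torch.rand(1, 3, 256, 256)       # camera frame (dummy data)
with torch.no_grad():
    water_prob = segmenter(frame)
water_mask = water_prob > 0.5            # boolean mask: True where water
print(water_mask.float().mean().item())  # fraction of pixels labeled water
```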

“Even for humans, it is sometimes difficult to accurately identify the boundary between water and air,” says Gautham Narasimhan, who earned his master’s degree at the Robotics Institute in 2020 and worked with a team from the institute’s robot perception and action lab on the new work.

Using their method, the researchers had the robot pour water into a glass until it reached a specific height. They then repeated the experiment with glasses of different shapes and sizes.
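Pouring to a target height reduces to a simple stopping rule once a water mask is available. The sketch below, in plain NumPy, shows one way such a rule could work; the fill-level heuristic, the 70% target and the mask source are all assumptions for illustration, not the paper's controller:

```python
import numpy as np

def fill_height(mask: np.ndarray) -> float:
    """Estimated fill level in [0, 1] from a boolean HxW water mask
    cropped to the glass region, with row 0 at the top of the glass."""
    rows_with_water = mask.any(axis=1)
    if not rows_with_water.any():
        return 0.0
    top_row = np.argmax(rows_with_water)  # highest row containing water
    return 1.0 - top_row / mask.shape[0]

def should_stop(mask: np.ndarray, target: float = 0.7) -> bool:
    """Stop tilting the source container once the glass is ~70% full."""
    return fill_height(mask) >= target

# Example: a glass whose bottom 60% is water -> keep pouring toward 70%.
demo = np.zeros((100, 50), dtype=bool)
demo[40:, :] = True
print(fill_height(demo), should_stop(demo))  # 0.6 False
```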

Narasimhan says there’s room for future research to extend the method: testing different lighting conditions, challenging the robot to pour water from one container into another, and estimating not just the height of the water but also its volume.

The researchers presented their work at the IEEE International Conference on Robotics and Automation in May 2022.

“Robotics people really appreciate research working in the real world and not just in simulation,” says Narasimhan, who now works as a computer vision engineer at Path Robotics in Columbus, Ohio. “We wanted to do something quite simple but effective.”

Funding for the work came from LG Electronics and a grant from the National Science Foundation.

Source: Carnegie Mellon University
