Video

AssetID: 53073912

Headline: RAW VIDEO: AI Robot 'Ramp' Can Sort Through Shopping And Do Chores

Caption: In a groundbreaking development at The University of Texas at Dallas' Intelligent Robotics and Vision Lab, a robot named Ramp is reshaping the future of artificial intelligence. Ramp can now learn to recognise objects through a new system designed by a team of computer scientists at the university. Traditionally, robots have relied on a single push or grasp to "learn" about an object. The new system takes a different approach, allowing the robot to push objects multiple times until it has collected a sequence of images. That sequence, in turn, enables the system to segment and recognise all the objects involved.

The team behind the research recently presented its findings at the Robotics: Science and Systems conference in Daegu, South Korea, where the paper was selected for its novelty, technical excellence, significance, potential impact, and clarity. Dr. Yu Xiang, the paper's senior author and an assistant professor of computer science at UT Dallas' Erik Jonsson School of Engineering and Computer Science, explained the significance of the work. "If you ask a robot to pick up the mug or bring you a bottle of water, the robot needs to recognise those objects," he said. The new technology also allows robots to generalise, identifying similar versions of common items even when they come in different brands, shapes, or sizes.

In the lab, Dr. Xiang and his team use a storage bin filled with toy packages of everyday foods such as spaghetti, ketchup, and carrots to train Ramp, their mobile manipulator robot. Ramp stands about 4 feet tall and has a long mechanical arm with seven joints, ending in a square "hand" with two fingers for grasping objects. Dr. Xiang likened the learning process to how children interact with toys. He explained: "After pushing the object, the robot learns to recognise it. With that data, we train the AI model so the next time the robot sees the object, it does not need to push it again. By the second time it sees the object, it will just pick it up."

What sets this method apart is the number of pushes Ramp uses: between 15 and 20 for each object, compared with previous methods that relied on a single push. The multiple pushes let the robot capture more images with its RGB-D camera, which includes a depth sensor, giving it a more detailed understanding of each item and reducing the potential for errors. According to Dr. Xiang, the system is the first to leverage long-term robot interaction for object segmentation.

The researchers' next goal is to enhance other robot functions, including planning and control, which could pave the way for tasks such as sorting recycled materials. The research was supported by the Defense Advanced Research Projects Agency as part of its Perceptually-enabled Task Guidance program, which aims to develop AI technologies that help users perform complex physical tasks, expanding their skill sets and reducing errors through augmented reality. As the technology advances, the day when robots can perform household chores with ease may be closer than we think.

Keywords: AI,artificial intelligence,robots,University of Texas,Dallas,feature

PersonInImage: