Four-legged robots may now run freely in the wild, thanks to new algorithms.

Four-legged robots can now walk and run on difficult terrain while avoiding both stationary and moving obstacles, thanks to a new system of algorithms developed by a team led by researchers at the University of California, San Diego.

In tests, the system directed a robot to move quickly and autonomously over sand, gravel, grass, and uneven slopes of earth covered in branches and leaves, without running into poles, plants, people, benches, or other obstacles. The robot also crossed a crowded office without bumping into boxes, desks, or chairs.

The research moves scientists one step closer to creating robots that can conduct search and rescue operations or gather data in environments that are too hazardous or challenging for humans.

The team will present its work at the 2022 International Conference on Intelligent Robots and Systems (IROS), to be held in Kyoto, Japan, October 23–27.

The system gives a legged robot greater versatility because it combines the robot’s sense of sight with another type of sensing called proprioception, which covers the robot’s sense of movement, direction, speed, location, and touch, in this case the feel of the ground beneath its feet.

According to study senior author Xiaolong Wang, a professor of electrical and computer engineering at the University of California, San Diego Jacobs School of Engineering, most methods for teaching legged robots to walk and navigate currently rely either on proprioception or vision, but not both at once.

“It’s comparable, in one instance, to teaching a blind robot to walk by simply touching and feeling the ground. In the other, the robot relies solely on sight to plan its leg movements. It is not learning two things at once,” Wang said. “In our work, proprioception and computer vision are combined to allow a legged robot to move around efficiently and smoothly, while avoiding obstacles, in a variety of difficult settings, not just well-defined ones.”

The method Wang and his team created uses a special set of algorithms to fuse data from sensors on the robot’s legs with data from real-time images taken by a depth camera mounted on the robot’s head. This was not an easy task. The problem, Wang explained, is that during real-world operation there is sometimes a slight delay in receiving images from the camera, so the data from the two sensing modalities do not always arrive at the same time.
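To make the mismatch concrete, the sketch below shows how a current proprioceptive reading might be paired with the most recent, possibly stale, depth frame. It is only a minimal Python illustration; the sensor rates, shapes, and names are assumptions for this example, not details from the team’s code (which is linked at the end of this article).

```python
import numpy as np

# Hypothetical rates for illustration: proprioception (joint angles,
# velocities, IMU readings, foot contacts) updates every control step,
# while depth frames arrive several steps apart and may lag behind.
PROPRIO_HZ = 100
CAMERA_HZ = 10

def fuse_observation(proprio_state: np.ndarray,
                     latest_depth_frame: np.ndarray) -> dict:
    """Pair the current proprioceptive reading with the most recent
    available depth frame, which may be a few control steps stale."""
    return {"proprio": proprio_state, "vision": latest_depth_frame}

# Example: a 30-dim proprioceptive vector fused with a 64x64 depth image.
obs = fuse_observation(np.zeros(30), np.zeros((64, 64)))
```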

The team decided to replicate this mismatch between the two sets of inputs using a technique they call multi-modal delay randomization. An end-to-end reinforcement learning policy was then trained on the fused inputs with these randomized delays. This approach enabled the robot to anticipate changes in its surroundings and react quickly, so it could move and take evasive action across a range of terrains without the assistance of a human operator.
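As a rough picture of what delay randomization can look like in simulation, here is a hypothetical Python sketch: recent depth frames are buffered, and at each training step the policy receives a randomly stale frame alongside up-to-date proprioception. The class and method names are invented for this illustration; the team’s actual implementation is in the vision4leg repository linked below.

```python
import random
from collections import deque

class DelayRandomizedVision:
    """Buffer recent depth frames in simulation and return a randomly
    stale one at each step, so the policy learns not to rely on
    perfectly synchronized camera input. (Illustrative sketch only.)"""

    def __init__(self, max_delay_steps: int = 3):
        self.max_delay = max_delay_steps
        self.frames = deque(maxlen=max_delay_steps + 1)

    def push(self, frame):
        self.frames.append(frame)

    def sample(self):
        assert self.frames, "push at least one frame before sampling"
        # Random staleness, clipped to the number of buffered frames.
        delay = random.randint(0, min(self.max_delay, len(self.frames) - 1))
        return self.frames[-1 - delay]

# One simulated control step during training (pseudocode):
#   vision_buffer.push(env.render_depth())
#   obs = {"proprio": env.proprio(), "vision": vision_buffer.sample()}
#   action = policy(obs)   # end-to-end RL policy maps fused obs to joints
#   env.step(action)
```

Because the policy never learns to count on perfectly synchronized inputs, it is better prepared for real hardware, where camera latency varies from step to step.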

Wang and his colleagues are currently working to make legged robots more adaptable, so they can traverse even more difficult terrain. “We can already teach a robot to perform basic actions like walking, sprinting, and dodging obstacles. Walking on stones, climbing stairs, changing lanes, and jumping over barriers are among our next goals.”

Video: https://youtu.be/GKbTklHrq60

The team’s code is available online at: https://github.com/Mehooz/vision4leg.

Story Source: Materials provided by University of California – San Diego. Original written by Liezel Labios.

Note: Content may be edited for style and length.
