Using a Specialized Algorithm to Put Robotics Technology in More Agile Shoes

Human identity is built on a versatile set of factors, yet few of them define us better than our tendency to improve at a consistent clip. That drive to get better under all circumstances has delivered some enormous milestones, with technology standing out as a major member of the group. We hold technology in such high regard largely because of its capabilities, which have guided us toward a reality that would otherwise have been unimaginable. Look a little deeper, though, and it becomes clear that this journey was shaped just as much by how we applied those capabilities in real-world settings. It was that application, in fact, which gave the creation its spectrum-wide presence and set off a full-blown tech revolution. The revolution, in turn, scaled up the human experience through genuinely unique avenues, and yet technology continues to deliver. That has become increasingly evident in recent times, and if one new discovery achieves its intended impact, it will only push the trend further.

A research team at Stanford University has developed a new vision-based algorithm designed to help robodogs scale tall objects, leap across gaps, crawl under thresholds, and squeeze through crevices before moving on to the next challenge. The significance of the development becomes clear once you consider how most existing methods for teaching robots rely on complex reward systems that must be fine-tuned to specific physical obstacles, which means they largely fail to scale to new or unfamiliar environments. The remaining approaches depend on real-world data to imitate the agility of other animals, a setup that unsurprisingly runs into concrete problems during execution, given the physical differences between robodogs and the animals they imitate. Both methodologies are also quite slow in operation. The new development addresses all of these challenges at once, doing so with a simple reward system and no reliance on real-world reference data. To build the technology, the researchers first synthesized and refined the algorithm in computer simulation, then transferred it to two real robodogs. Once the transfer was complete, the robodogs used a well-established technique called reinforcement learning to work out which of their movements were being rewarded, giving them a clear picture of the ideal moves and maneuvers.
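The article does not spell out the reward itself, but a "simple" reward for this kind of simulated training typically amounts to little more than a forward-progress term plus a couple of penalties. The Python sketch below is a hypothetical illustration of that idea; the function name, terms, and weights are assumptions, not the researchers' actual formulation.

```python
def parkour_reward(forward_velocity: float,
                   target_velocity: float,
                   energy_used: float,
                   fell_over: bool) -> float:
    """Hypothetical 'simple' reward for simulated agility training.

    The terms and weights here are illustrative assumptions, not the
    formulation used in the Stanford paper.
    """
    progress = -abs(forward_velocity - target_velocity)  # track the commanded speed
    effort_penalty = 0.001 * energy_used                 # discourage wasteful motion
    fall_penalty = 10.0 if fell_over else 0.0            # heavily penalize falling
    return progress - effort_penalty - fall_penalty
```

In a standard reinforcement-learning setup, the simulator would query a reward like this at every step, and the policy would gradually drift toward the movements that score well, which is the behavior the article describes.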

“The autonomy and range of complex skills that our quadruped robot learned is quite impressive,” said Chelsea Finn, assistant professor of computer science and senior author of a new peer-reviewed paper announcing the team’s approach to the world. “And we have created it using low-cost, off-the-shelf robots—actually, two different off-the-shelf robots.”

Robodogs are hardly a novel concept at this point, but the ones used here set themselves apart through their autonomous capabilities. Thanks to those capabilities, each robodog can size up a physical challenge, imagine possible moves, and then execute a broad range of skills based on that assessment.
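That assess-then-act loop can be pictured, in very rough terms, as the robot classifying the obstacle in front of it and picking a maneuver that fits. The Python sketch below is purely illustrative: the real system learns this mapping end to end from depth images rather than through a hand-written lookup, and the obstacle categories and size thresholds shown here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str      # hypothetical categories: "climb", "gap", "crawl", "squeeze"
    size_m: float  # height, gap length, clearance, or slit width in meters

def choose_skill(obstacle: Obstacle) -> str:
    """Illustrative skill selector mapping a perceived obstacle to a maneuver.

    The actual robodog learns this decision from vision and reward signals;
    this lookup only sketches the assess-then-act structure.
    """
    if obstacle.kind == "climb":
        return "climb" if obstacle.size_m < 0.6 else "detour"
    if obstacle.kind == "gap":
        return "leap" if obstacle.size_m < 0.8 else "detour"
    if obstacle.kind == "crawl":
        return "crawl"
    return "tilt_and_squeeze"
```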

After installing their algorithm on the robodogs and completing training, the researchers put their work to the test, asking the robots to demonstrate the agile approach in especially challenging environments using only their off-the-shelf computers, visual sensors, and power systems. According to the available details, the new-and-improved robodogs were able to climb obstacles more than one-and-a-half times their height, leap gaps greater than one-and-a-half times their length, crawl beneath barriers three-quarters of their height, and tilt to squeeze through slits thinner than their width.

“What we’re doing is combining both perception and control, using images from a depth camera mounted on the robot and machine learning to process all those inputs and move the legs in order to get over, under, and around obstacles,” said Zipeng Fu, a doctoral candidate in Finn’s lab and first author of the study.
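One way to picture what Fu describes is a single neural network that takes a depth image together with the robot's joint state and outputs target positions for the legs. The PyTorch sketch below is an assumed stand-in for that idea; the layer sizes, the 12-joint output, and the proprioception layout are all hypothetical, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DepthToJointPolicy(nn.Module):
    """Hypothetical perception-plus-control policy: depth image in, leg commands out."""

    def __init__(self, num_joints: int = 12):
        super().__init__()
        # Small CNN encodes the depth-camera image into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP fuses image features with proprioception (joint angles and velocities)
        # and outputs target joint positions for the legs.
        self.head = nn.Sequential(
            nn.Linear(32 + 2 * num_joints, 128), nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, depth_image, proprioception):
        features = self.encoder(depth_image)              # (batch, 32)
        x = torch.cat([features, proprioception], dim=-1)
        return self.head(x)                               # target joint positions

# Example usage (shapes are assumptions): one depth frame and one joint-state vector.
policy = DepthToJointPolicy()
depth = torch.zeros(1, 1, 64, 64)        # batch of one 64x64 depth image
proprio = torch.zeros(1, 24)             # 12 joint angles + 12 joint velocities
joint_targets = policy(depth, proprio)   # tensor of shape (1, 12)
```

Run at every control step, a policy of this general shape would give exactly the perception-to-control pipeline the quote describes: camera input flows through learned layers and comes out the other end as leg movements.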

For the immediate future, the research team plans to leverage advances in 3D vision and graphics to make its simulated environments more realistic and to bring a new level of organic autonomy to the algorithm.
