A Breakthrough to Weave a New Trajectory for Robotics Technology

Over the years, many traits have been used to define human beings, but none captures us better than our tendency to improve at a continuous clip. That tendency has delivered some of the world's biggest milestones, with technology standing out as a major one. We hold technology in such high regard largely because of its skill-set, which guided us toward a reality nobody could have imagined otherwise. Yet if we look beyond the surface for a second, it becomes clear that this run was equally inspired by the way we applied those skills in real-world environments. That latter component gave the creation a spectrum-wide presence and, as a result, set off a full-blown tech revolution. This revolution would go on to scale up the human experience through some genuinely unique avenues, and even after a feat so notable, technology continues to deliver. The trend has only become more evident in recent times, and if one new discovery achieves its intended impact, it will push that trend onto an even higher pedestal going forward.
A research team at the University of California San Diego has developed a new technique that lets a robotic hand rotate objects through touch alone, without vision. To build the technology, the researchers attached 16 touch sensors to the palm and fingers of a four-fingered robotic hand. A notable feature of these sensors is their cost: each one costs no more than $12. This marks a step away from prevalent approaches that rely on a few high-cost, high-resolution touch sensors embedded in a small area of the robotic hand. Cost aside, several other limitations have kept such techniques from going mainstream. For starters, having only a few sensors on the robotic hand reduces their chances of coming into contact with the object, which ultimately hampers the system's sensing ability. Then there is the fact that the texture information provided by such high-resolution touch sensors is extremely hard to simulate, which makes applying them in more realistic settings challenging. Another limitation is that most of these techniques still rely on vision to at least some degree.
In contrast, the new approach uses simple binary contact signals to perform robotic in-hand rotation. Complementing this is a large sensor footprint, which gives the robotic hand enough information about the object's 3D structure and orientation to rotate it successfully without visual input.
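To make the idea concrete, here is a minimal sketch of turning raw readings from the 16 low-cost sensors into the binary touched/not-touched signals the article describes. The sensor count comes from the article; the function name, the normalized-force threshold, and the example readings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

NUM_SENSORS = 16           # 16 touch sensors on the palm and fingers (per the article)
CONTACT_THRESHOLD = 0.1    # assumed normalized-force threshold for "contact"

def binary_touch_observation(raw_readings):
    """Threshold raw sensor readings into 0/1 contact flags."""
    readings = np.asarray(raw_readings, dtype=float)
    assert readings.shape == (NUM_SENSORS,)
    return (readings > CONTACT_THRESHOLD).astype(np.int8)

# Hypothetical readings: only a few sensors are pressed against the object.
raw = [0.0, 0.4, 0.02, 0.0, 0.25, 0.0, 0.0, 0.09,
       0.3, 0.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.11]
obs = binary_touch_observation(raw)
```

The payoff of this representation is that a 0/1 contact flag is trivial to reproduce in simulation, whereas the rich texture signal of a high-resolution tactile sensor is not, which is what makes sim-to-real transfer tractable here.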
“Here, we use a very simple solution,” said Xiaolong Wang, a professor of electrical and computer engineering at UC San Diego, who led the current study. “We show that we don’t need details about an object’s texture to do this task. We just need simple binary signals of whether the sensors have touched the object or not, and these are much easier to simulate and transfer to the real world.”
Once the initial technology was developed, the researchers trained their system by running simulations of a virtual robotic hand rotating a variety of objects, including ones with irregular shapes. During this stage, the system assessed which sensors on the hand were being touched by the object at any given time, while also tracking the current positions of the hand's joints and their previous actions. What it learned from these assessments enabled the system to tell the robotic hand which joint needed to go where at the next time step. Next, the team tested the technology on a real robotic hand with objects the system had not yet encountered. According to the available details, the robotic hand succeeded in rotating a variety of objects, among them a tomato, a pepper, a can of peanut butter, and a toy rubber duck, without stalling or losing its hold. Although objects with complex shapes took longer to rotate, it was found that the robotic hand could also rotate objects around different axes.
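The control loop described above, observe binary contacts plus joint state and previous action, then output the next joint targets, can be sketched as follows. This is a stand-in linear map in place of the trained policy, and the joint count, dimensions, and function names are assumptions for illustration only; the article does not specify the authors' network or hand kinematics.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_SENSORS = 16   # binary touch sensors (per the article)
NUM_JOINTS = 16    # assumed: four fingers with four joints each

# Observation = binary touch flags + current joint positions + previous action.
OBS_DIM = NUM_SENSORS + NUM_JOINTS + NUM_JOINTS

# Placeholder for the learned policy: a small random linear map.
W = rng.normal(scale=0.01, size=(NUM_JOINTS, OBS_DIM))

def policy_step(touch_flags, joint_positions, prev_action):
    """Map the current observation to the next joint targets."""
    obs = np.concatenate([touch_flags, joint_positions, prev_action])
    delta = W @ obs                   # small corrective motion per joint
    return joint_positions + delta    # where each joint should go next

# One step of the loop: two sensors report contact, hand starts at rest.
touch = np.zeros(NUM_SENSORS)
touch[[2, 5]] = 1
joints = np.zeros(NUM_JOINTS)
next_targets = policy_step(touch, joints, prev_action=np.zeros(NUM_JOINTS))
```

At each time step the real system would feed the new contact flags and joint readings back in, so the hand continuously adjusts its grip as the object turns.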
Looking ahead, the team is working to extend the approach to more complex manipulation tasks. Specifically, the focus is on developing techniques that enable robotic hands to catch, throw, juggle, and perform other similar actions.
“In-hand manipulation is a very common skill that we humans have, but it is very complex for robots to master,” said Wang. “If we can give robots this skill, that will open the door to the kinds of tasks they can perform.”


Copyright © 2024. All Rights Reserved. Engineers Outlook.