Human identity is built upon many unique elements, yet few define us better than our tendency to improve at a continuous clip. This drive to grow, no matter the circumstances, has delivered some huge milestones, with technology emerging as a major member of the group. The reason we hold technology in such high regard is largely its skill-set, which has guided us toward a reality nobody could have imagined otherwise. Look beyond the surface for a second, though, and it becomes clear that this run was equally inspired by how we applied those skills in real-world environments. That latter component, in fact, did a lot to give the creation a spectrum-wide presence and kick off a full-blown tech revolution. The revolution went on to scale up the human experience through some genuinely unique avenues, and even after a feat so notable, technology continues to deliver the right goods. That has only grown more evident in recent times, and if one new discovery achieves its desired impact, it will make the trend bigger and better moving forward.
A research team at the University of Michigan has developed a new sensing system called SAWSense, which can turn everyday objects like couches, tables, and sleeves into high-fidelity input devices for computers. According to reports, the system works by repurposing technology from a new class of bone-conduction microphones known as Voice Pickup Units (VPUs). So how does the idea look in practice? Because VPUs detect only the acoustic waves that travel along the surface of an object, the method relies on inputs like taps, scratches, swipes, and other gestures to send those waves along a material's surface. SAWSense then classifies the resulting waves into a comprehensive, compatible set of inputs through machine learning. Going by the demonstrations realized so far, the system was able to recognize different inputs with 97% accuracy. As for potential use cases, one exhibition saw the researchers use an ordinary table to replace a laptop's trackpad.
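The article doesn't publish the team's actual pipeline, but the flow it describes — gesture events produce surface-acoustic waves, software turns each recording into features, and a learned model maps features to input labels — can be sketched in miniature. Everything below is an illustrative assumption, not the U-M implementation: synthetic signals stand in for VPU recordings, two hand-picked features stand in for the real feature set, and a simple nearest-centroid rule stands in for the team's machine learning model.

```python
import numpy as np

def extract_features(signal, sample_rate=48_000):
    """Reduce a surface-wave snippet to (log-energy, spectral centroid)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    log_energy = np.log(np.sum(signal ** 2) + 1e-12)          # overall intensity
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)  # "brightness"
    return np.array([log_energy, centroid])

# Synthetic stand-ins: a "tap" is a short decaying tone burst,
# a "scratch" is broadband noise; real VPU data would replace these.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.05, 2400)  # 50 ms at 48 kHz
taps = [np.sin(2 * np.pi * 200 * t) * np.exp(-80 * t)
        + 0.01 * rng.standard_normal(t.size) for _ in range(20)]
scratches = [0.5 * rng.standard_normal(t.size) * np.hanning(t.size)
             for _ in range(20)]

# "Training": store one mean feature vector per gesture class.
centroids = {
    "tap": np.mean([extract_features(s) for s in taps], axis=0),
    "scratch": np.mean([extract_features(s) for s in scratches], axis=0),
}

def classify(signal):
    """Label a new surface-acoustic-wave event by its nearest class centroid."""
    f = extract_features(signal)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

print(classify(taps[0]), classify(scratches[0]))  # -> tap scratch
```

A production system distinguishing many gestures at 97% accuracy would need far richer spectral features and a proper classifier, but the division of labor — signal in, feature vector out, label via a trained model — is the same.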
“This technology will enable you to treat, for example, the whole surface of your body like an interactive surface,” said Yasha Iravantchi, U-M doctoral candidate in computer science and engineering. “If you put the device on your wrist, you can do gestures on your own skin. We have preliminary findings that demonstrate this is entirely feasible.”
To understand the importance of such a development, we must acknowledge that, with the growing prevalence of connected devices, the challenge of giving them intuitive input mechanisms has become more and more daunting. Other researchers have certainly tried to solve similar problems, but they have struggled to strike a balance between audio and gesture-based inputs.
“When there’s a lot of background noise, or something comes between the user and the camera, audio and visual gesture inputs don’t work well,” Iravantchi said.
The latest study, however, stands out because it houses all the required sensors in a hermetically sealed chamber that blocks even loud ambient noise. The only way into this chamber is through a mass-spring system, which conducts surface-acoustic waves into the housing without ever coming into contact with sounds in the surrounding environment. The researchers paired this hardware with their own signal-processing software, which generates features from the data before feeding it into a machine learning model. The result, as you know by now, was a system that reliably records and classifies events along an object's surface.
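The article doesn't give the design parameters of that mass-spring coupling, but the classic second-order transmissibility formula from vibration theory offers the intuition: a spring-mounted mass passes base vibrations at and below its natural frequency while strongly attenuating much higher frequencies. A minimal sketch, where the damping value and frequency ratios are illustrative assumptions rather than figures from the study:

```python
import numpy as np

def transmissibility(freq_ratio, damping_ratio):
    """Base-excitation transmissibility of a mass-spring-damper:
    |X/Y| = sqrt((1 + (2*z*r)^2) / ((1 - r^2)^2 + (2*z*r)^2)),
    where r = forcing frequency / natural frequency and z is the damping ratio."""
    r, z = np.asarray(freq_ratio, dtype=float), damping_ratio
    num = 1 + (2 * z * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * z * r) ** 2
    return np.sqrt(num / den)

z = 0.05  # light damping, purely illustrative
print(transmissibility(0.1, z))   # well below resonance: ~1, vibration passes through
print(transmissibility(10.0, z))  # well above resonance: << 1, vibration is rejected
```

Tuning where that roll-off sits is one way a mechanical link can favor the wave content a sensor cares about while shrugging off other excitation, which is the general spirit of isolating the VPU from airborne sound.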
“There are other ways you could detect vibrations or surface-acoustic waves, like piezo-electric sensors or accelerometers,” said Alanson Sample, U-M associate professor of electrical engineering and computer science, “but they can’t capture the broad range of frequencies that we need to tell the difference between a swipe and a scratch, for instance.” Asked about the technology’s focus over the near future, though, the team pointed to applications in medical sensing, including picking up delicate noises such as the sounds of joints and connective tissues as they move.
Copyright © 2024. All Rights Reserved. Engineers Outlook.