- July 3, 2017
What Happens When You Give Human Senses to a Machine?
Across the globe, in independent efforts, scientists are developing artificial adaptations of every human sense with the intention of bestowing their creations upon the machine world.
In the world of catering, we’ve already got “digital noses” capable of detecting minute changes in the chemical signature of our edibles. These devices give you real-time analysis of the health and shelf life of any food stock, letting you pinpoint the precise moment when things start going off and offering valuable insight into the best storage conditions and practices for avoiding wastage.
With a massive global push to get autonomous vehicles on our roads and in our fields sooner rather than later, the need for cutting-edge visual capabilities is pressing. The current generation of autonomous machines is equipped with radar or LiDAR systems, GPS and multiple cameras to create a 3D map of the surrounding environment. The next generation, however, is likely to be endowed with a system modelled on our own biology, yet far more advanced.
Human-like vision for machines
The Parisian company Chronocam is developing a unique, self-adapting vision system inspired by our own biology. Traditionally, when a machine analyses images, it does so frame by frame, giving equal weight to every minute detail. While this exhaustive approach offers the precision essential in certain analytical applications, that precision comes at a cost in bandwidth, power and speed.
Chronocam’s vision sensors have been developed to take machines beyond this kind of limiting perfectionism, enabling them to replicate the useful functions of the human eye without any of the drawbacks of our biology. These advanced vision systems mimic the way humans interpret visual input. Our brains don’t process and re-process every detail of our environment every second. That would be an intense sensory overload and, in a survival setting, more of a hindrance than a help. Rather, we adjust to our environment and then only notice change and movement. This gives us the ability to focus on certain tasks without distraction while also staying alert for any disturbances we need to be aware of. And it frees our brains up to spend their processing power on other tasks.
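To make the contrast concrete, here is a minimal software sketch of the change-only idea. Real event-based sensors such as Chronocam’s work asynchronously, pixel by pixel, in hardware; this illustrative Python function (the name `detect_events` and the threshold value are assumptions for the example) simply compares two successive grayscale frames and reports only the pixels that changed, ignoring everything static.

```python
# Illustrative sketch only: hardware event sensors emit per-pixel events
# asynchronously; here we simulate the principle with two full frames.

def detect_events(prev_frame, curr_frame, threshold=10):
    """Return (row, col, delta) 'events' for pixels whose brightness
    changed by more than `threshold` between the two frames."""
    events = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            delta = q - p
            if abs(delta) > threshold:
                events.append((r, c, delta))
    return events

# Two 3x3 grayscale frames: the scene is static except one pixel brightening.
frame_a = [[50, 50, 50], [50, 50, 50], [50, 50, 50]]
frame_b = [[50, 50, 50], [50, 90, 50], [50, 50, 50]]

print(detect_events(frame_a, frame_b))  # → [(1, 1, 40)]
```

Instead of re-transmitting nine pixel values per frame, only the single changed pixel is reported, which is why the approach scales so well in bandwidth and power as resolution grows.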
“Humans capture the stuff of interest—spatial and temporal changes—and send that information to the brain very efficiently. If the strategy is good enough for humans, it should be good enough for a new generation of bio-inspired vision sensors, and the related artificial intelligence algorithms that underpin computer vision.”
Chronocam’s vision solution operates from this same perceptual framework, giving computers dynamic range and power efficiency, along with unprecedented speed. The technology is intended for autonomous vehicles, IoT connected devices, and surveillance systems. Which is great in terms of the benefits but slightly creepy if you think about it too long.
A security system that can conserve power while nothing is happening but become instantly alert to the slightest change would be a phenomenal asset, especially if equipped with the ability to relay that information to humans, or other computers, for action. But the idea of machines watching us with human-like attention is somehow disconcerting. No longer just sweeping and recording automatically but noticing you and then watching what you’re up to.
Not only is this “short-latency event detection” marvelously creepy, its low power and bandwidth requirements open it up for use in a far broader range of products and applications than previously imagined. Which brings us back to the digital nose. As yet, there’s no pressing need to pair these senses together. But an autonomous vehicle that can smell a spill or a gas leak would be mighty useful. Surely it’s only a matter of time before the noses, eyes and other senses are unified within an intelligent machine capable of sensing the world just like you. Only better.