A big reason the electronic devices around us are getting smarter, and hopefully more useful, is machine learning. By building complex models and training them on data, systems can accomplish tasks as diverse as facial recognition, language translation, and autonomous driving. Beyond the huge amounts of compute needed for training, these systems also tend to be both power- and processor-intensive to run. That has kept most of them tethered to plug-in devices, like the Kinect, or dependent on large batteries, like those found in a car. Nvidia’s Drive PX 2 trunk-mountable car computer, for example, will require liquid cooling. For mobile devices, that has meant a constant connection to the cloud, with raw data sent up, analyzed in the data center, and the results returned.
Google has been trying to change this dynamic with Project Tango, a mobile device that can do real-time mapping and some object tracking while running off only a small battery. To accomplish that, it tapped a new kind of processor, the Myriad 1 Vision Processing Unit (VPU) from startup Movidius. By moving the processor-intensive tasks associated with computer vision onto a specially designed chip, the Myriad increased the performance of, and decreased the power requirements for, the vision-related functions of the Tango device. Movidius claims at least a tenfold savings in power, along with an 80% reduction in both space and cost compared with competing technologies, all compelling stats when it comes to mobile device design.
Beyond Project Tango: Using Movidius for mobile machine intelligence
Now, Google has broadened its relationship with Movidius, announcing that it will use the company’s newest and most powerful VPU, the Myriad 2 MA2450, to help bring more intelligence to a wider array of mobile devices. The Myriad isn’t limited to running vision-related applications, either. Google will use Movidius’s software development environment to port its advanced neural computation engine to the chip, so that a wide variety of deep-learning-based algorithms can run in real time.
Being able to run deep-learning-enabled tasks locally will reduce dependence on the cloud, cutting both latency and privacy concerns. For example, your phone could recognize your friends in a photograph without you needing to upload it to the cloud. Remi El-Ouazzane, Movidius CEO, explains, “The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in.”
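To see why this matters, it helps to remember what on-device inference actually is: once a network is trained, classifying an image is just a series of matrix multiplies and nonlinearities, with no network round trip required. Here is a minimal conceptual sketch in Python with NumPy; the layer sizes, random weights, and class labels are invented for illustration and have nothing to do with Movidius’s actual hardware or APIs:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity used between layers
    return np.maximum(x, 0.0)

def forward(features, layers):
    """Run a tiny feed-forward net entirely on-device.

    `layers` is a list of (weight_matrix, bias_vector) pairs.
    Everything happens locally; the raw data never leaves the device.
    """
    a = features
    for W, b in layers[:-1]:
        a = relu(a @ W + b)
    W, b = layers[-1]
    logits = a @ W + b
    # Softmax turns logits into class probabilities
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical shapes: a 128-d image feature vector -> 64 hidden units -> 3 classes
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((128, 64)) * 0.1, np.zeros(64)),
    (rng.standard_normal((64, 3)) * 0.1, np.zeros(3)),
]
probs = forward(rng.standard_normal(128), layers)
print(probs)  # three class probabilities that sum to 1
```

The point of a chip like the Myriad is to run exactly this kind of arithmetic (at far larger scale, and on real image data) within a mobile power budget, rather than shipping the photo to a data center and waiting for the answer.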
Unfortunately, there are no details yet on new Google products that will use the Movidius chips (and there was no mention of them at the Lenovo and Google Project Tango phone announcement), but given the importance of computer vision and machine learning to the future of mobile devices, I’m sure we’ll be hearing more soon.