Alphabet's DeepMind says its new system can infer a three-dimensional layout of a scene from two-dimensional images. For now, though, the algorithms and hardware cannot handle real, natural surroundings and learn from them. Writing in Science, the team says the deep neural network is currently confined to virtual environments but will eventually be deployed in the outside world.
The method could help surveillance systems reconstruct a crime scene from just a few snapshots. Robotics is another likely beneficiary: household robots and self-driving cars could use it to perceive their environment in three dimensions, giving them a more real and accurate graphical representation of their surroundings.
The DeepMind researchers say the system requires no human intervention and very little information to start working. It combines two neural networks: a representation network and a generation network. The representation network reduces the objects in an input image to a simple abstraction, which the generation network then develops and fills in with detail.
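As a rough illustration of that two-network split, the sketch below encodes a handful of "snapshots" into one compact scene vector, then renders a prediction for a new viewpoint. All class names, dimensions, and the simple linear-plus-tanh layers are hypothetical stand-ins chosen for brevity; the actual system uses far deeper networks trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

class RepresentationNetwork:
    """Reduces each observed image (plus its camera viewpoint) to a
    compact abstract vector; per-image vectors are summed, so any
    number of snapshots yields one fixed-size scene representation."""
    def __init__(self, image_dim, view_dim, repr_dim):
        self.W = rng.standard_normal((repr_dim, image_dim + view_dim)) * 0.1

    def encode(self, images, viewpoints):
        r = np.zeros(self.W.shape[0])
        for img, view in zip(images, viewpoints):
            x = np.concatenate([img, view])
            r += np.tanh(self.W @ x)  # one abstract vector per snapshot
        return r

class GenerationNetwork:
    """Develops the abstract scene vector into a rendered image for a
    previously unseen query viewpoint."""
    def __init__(self, repr_dim, view_dim, image_dim):
        self.W = rng.standard_normal((image_dim, repr_dim + view_dim)) * 0.1

    def render(self, scene_repr, query_viewpoint):
        x = np.concatenate([scene_repr, query_viewpoint])
        return np.tanh(self.W @ x)  # predicted pixel values in [-1, 1]

# Toy sizes: 16-pixel "images", 3-number viewpoints, an 8-dim representation.
enc = RepresentationNetwork(image_dim=16, view_dim=3, repr_dim=8)
gen = GenerationNetwork(repr_dim=8, view_dim=3, image_dim=16)

snapshots = [rng.standard_normal(16) for _ in range(3)]
views = [rng.standard_normal(3) for _ in range(3)]

scene = enc.encode(snapshots, views)            # a few snapshots -> one scene vector
prediction = gen.render(scene, rng.standard_normal(3))
print(prediction.shape)
```

The key design point the article describes survives even in this toy version: the representation stays the same small size no matter how many snapshots are supplied, and the generator alone is responsible for turning that abstraction back into detailed imagery.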
Alphabet, the umbrella company behind DeepMind, earns most of its revenue from Google, and extending this machine-vision achievement to real-world optical feeds could help the company bring a wave of new applications to market.