Google, like its rivals, keeps improving both the software and the hardware of its Pixel phones, but Google's case is different: the Mountain View team takes the time to explain to us what it is doing and why. It did so with the astrophotography mode, with the dual-camera Portrait mode and with the AI behind audio transcription, among other things.
Now we get a new explanation, this time of the Pixel 4's infrared face unlock: a system that uses dedicated hardware and software, similar to the one Apple introduced in its day with Face ID, but with a Google twist, along with the reasons for choosing this approach.
Depth maps from parallax
Google explains that it initially considered a facial recognition system based on RGB cameras, but processing those images for this purpose would have been "computationally expensive". That is, it would have demanded a lot from the phone's processor to obtain a result that other approaches reach more simply, which is why the infrared system was chosen.
So Google built a "stereo" infrared system based on two front-facing sensors in the Pixel 4, one that would operate more efficiently and quickly in the dark. Thus was born uDepth, a system that not only enables facial recognition but also prevents identity spoofing, since it performs a three-dimensional reading combined with various algorithms driven by artificial intelligence.
This system has in fact recently been opened up to other apps through the Camera2 API, so its readings can be used to improve the background blur in front-camera selfies, among other accessory functions. Very briefly, Google explains that uDepth is based on what is known as parallax. Parallax is nothing more than the apparent shift in an object's position depending on the point from which it is observed.
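The effect parallax describes is easy to quantify: the closer an object is, the more its apparent direction shifts between two viewpoints. A minimal sketch of that relationship, using hypothetical figures for the camera separation and distances (the article gives none):

```python
import math

def parallax_angle_deg(depth_m: float, baseline_m: float) -> float:
    """Angle by which an object's apparent direction shifts between two
    viewpoints separated by baseline_m, for an object straight ahead."""
    return math.degrees(2 * math.atan((baseline_m / 2) / depth_m))

# A face 40 cm away vs. a wall 2 m away, viewpoints ~2.5 cm apart
# (illustrative numbers, not the Pixel 4's real geometry):
near = parallax_angle_deg(0.40, 0.025)
far = parallax_angle_deg(2.00, 0.025)
print(f"{near:.2f} deg vs {far:.2f} deg")  # the nearer object shifts more
```

Because the shift depends on distance, measuring it from two fixed viewpoints lets you invert the relationship and recover depth, which is exactly what the next section describes.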
One infrared light source, two "eyes"
With this concept in mind, Google mounts a single infrared emitter that projects onto the surfaces in front of the phone. This creates a dot pattern similar to the one produced by the time-of-flight (ToF) sensors we have described many times, used both for surface recognition and for photography. The difference is that alongside this single emitter sit two cameras that capture the infrared pattern, one on each side of the phone's front.
The infrared map is therefore read from two points, which, with the help of mathematical algorithms, makes it possible to build a more precise three-dimensional map, in the same way that humans (and many other species) perceive the depth of a scene thanks to the position of each eye. As Google explains, when a system like uDepth is properly calibrated, "the generated reconstructions are metric, meaning that they express real physical distances".
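The core of that two-camera reading is classic stereo triangulation: the same projected dot lands at slightly different horizontal positions in each camera, and that pixel offset (the disparity) converts directly into metric depth. A minimal sketch, with made-up calibration values since uDepth's real parameters are not public:

```python
# Hypothetical calibration values, for illustration only:
FOCAL_PX = 1000.0   # focal length expressed in pixels
BASELINE_M = 0.025  # distance between the two infrared cameras, in metres

def depth_from_disparity(disparity_px: float) -> float:
    """Triangulate metric depth from the pixel offset of the same dot
    as seen by the two cameras; closer surfaces produce bigger offsets."""
    return FOCAL_PX * BASELINE_M / disparity_px

# Disparities measured for three dots of the projected pattern:
disparities = [83.3, 62.5, 50.0]
print([round(depth_from_disparity(d), 2) for d in disparities])
```

This is why a calibrated system yields "metric" reconstructions: once focal length and baseline are known, every disparity maps to a real physical distance.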
Example of a three-dimensional image composed from a single uDepth reading, thanks to its dual infrared cameras and its deep learning and artificial intelligence algorithms.
Google also explains that uDepth self-calibrates if it detects a severe drop, through the resulting deviations in the infrared images captured in front of the device. The system is additionally connected to a deep learning architecture responsible for refining the raw data. It thus manages to build a very detailed three-dimensional map at high speed, one that is just as effective with good lighting as without it.
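One way to picture how a drop can be detected from the images themselves: in a well-calibrated stereo pair, a matched point should sit on (almost) the same image row in both cameras, so a persistent vertical offset signals that the cameras have shifted. This is only a loose sketch of the idea, not Google's actual method:

```python
def needs_recalibration(matches, threshold_px=2.0):
    """matches is a list of ((x1, y1), (x2, y2)) point pairs matched
    between the left and right infrared images. Returns True when the
    average row disagreement suggests the calibration has drifted."""
    if not matches:
        return False
    mean_dy = sum(abs(y1 - y2) for (_, y1), (_, y2) in matches) / len(matches)
    return mean_dy > threshold_px

# Healthy pair: matched points land on nearly the same rows.
ok = [((100, 50), (92, 50.3)), ((200, 80), (188, 79.6))]
# After a knock: rows systematically disagree.
bad = [((100, 50), (92, 56.1)), ((200, 80), (188, 85.9))]
print(needs_recalibration(ok), needs_recalibration(bad))  # False True
```

When drift is detected, the system would re-estimate its calibration so that disparities again translate into correct metric distances.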
Google concludes by explaining that uDepth produces a smooth real-time depth map at 30Hz, valid for both photography and video. This allows depth effects such as blur to be applied after the fact as well, not only during the capture of the photograph or during facial recognition at unlock time. A demo application to try out uDepth is available at this link, although it only works on the Google Pixel 4, since it relies on the phone's hardware.
More information | Google