Google Meet gained the ability to blur the background in real time back in September, a feature that, beyond its privacy implications, is quite striking. Google has now explained how this blur works: it uses machine learning to segment the image, in a way quite similar to what we saw on the Google Pixel.
We are going to explain, simply and without too many technicalities, how it is possible to blur the background of the image in real time, as well as how the background-replacement feature works, something that became very popular in Google Meet alternatives such as Zoom.
As with practically everything related to image processing, Google relies on machine learning. Without going into the complexity of the different models involved, the key is that the process is very similar to the one used on Google's phones.
Google Meet uses MediaPipe technology on the web, so no installed application is needed for the blur effect to work. Google processes each frame of the video in real time and uses the resulting data to create a segmentation mask. That mask is used to render a video output with the background blurred or replaced; in other words, the mechanism for blurring the image and for changing the background is the same.
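Meet runs MediaPipe in the browser via WebAssembly, so the following is only a rough sketch of the per-frame idea, using MediaPipe's Python API instead; the webcam index, model choice and blur strength are illustrative assumptions, not Meet's actual pipeline.

```python
import cv2
import mediapipe as mp
import numpy as np

# Illustrative sketch: Meet runs MediaPipe in the browser, but the
# per-frame masking idea is the same with the Python API.
mp_selfie = mp.solutions.selfie_segmentation

cap = cv2.VideoCapture(0)  # assumed webcam index
with mp_selfie.SelfieSegmentation(model_selection=1) as segmenter:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        mask = results.segmentation_mask  # float32 in [0, 1], ~1 on the person
        # Blur the whole frame, then keep the person from the original.
        blurred = cv2.GaussianBlur(frame, (55, 55), 0)
        alpha = np.dstack([mask] * 3)
        output = (alpha * frame + (1.0 - alpha) * blurred).astype(np.uint8)
        cv2.imshow("Blurred background", output)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

Swapping the blurred frame for any other image in the final blend gives the background-replacement variant, which is why the article can say the mechanism is the same for both effects.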
One of the great challenges is making the image segmentation model less demanding in terms of resources. To achieve this, Google reduces the resolution of each frame before sending it to the model, so the segmentation mask is created from a low-resolution image.
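That downscale-then-segment trick can be sketched as below, reusing the segmenter from the previous snippet; the working resolution here is an assumption for illustration, not a figure Google has documented for Meet.

```python
import cv2

def segment_low_res(frame, segmenter, seg_size=(256, 144)):
    """Run segmentation on a downscaled copy, then upsample the mask.
    seg_size (width, height) is an assumed working resolution."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, seg_size, interpolation=cv2.INTER_AREA)
    results = segmenter.process(cv2.cvtColor(small, cv2.COLOR_BGR2RGB))
    mask_small = results.segmentation_mask  # low-res float mask
    # Bilinear upsampling back to frame size keeps the mask edges smooth.
    return cv2.resize(mask_small, (w, h), interpolation=cv2.INTER_LINEAR)
```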
Once that segmentation is complete (the creation of the mask that separates background from figure), several passes run through OpenGL to process the video and render the effects.
A gradual bokeh effect is then created based on the segmentation mask. In other words, the background is not simply “cut and pasted” behind the subject: the bokeh is adjusted according to the person's position, and effects such as shading are applied to make the result as natural as possible.
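Meet does this rendering on the GPU with OpenGL shaders; a CPU approximation of the “gradual, not cut-and-pasted” idea might look like the sketch below, where the raw mask values drive a soft blend and the kernel sizes are assumptions.

```python
import cv2
import numpy as np

def gradual_bokeh(frame, mask, max_kernel=61):
    """Blend the sharp frame with a blurred copy using the raw
    (non-binarized) mask, so the edge around the person is gradual.
    max_kernel is an assumed blur strength."""
    blurred = cv2.GaussianBlur(frame, (max_kernel, max_kernel), 0)
    # Feather the mask itself so edges fade instead of cutting hard.
    soft = cv2.GaussianBlur(mask, (21, 21), 0)
    soft = np.clip(soft, 0.0, 1.0)
    # Smoothstep gives a gentler ramp than a linear blend.
    soft = soft * soft * (3.0 - 2.0 * soft)
    alpha = np.dstack([soft] * 3)
    return (alpha * frame + (1.0 - alpha) * blurred).astype(np.uint8)
```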
The blur, created as we said from a low-resolution image, is combined with the original full-resolution input from the camera, so there is no loss of quality when using these effects. The same applies when the subject is separated to add a virtual background.
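The key point is that only the background is computed cheaply at low resolution, while the person always comes from the untouched camera frame. A minimal sketch of that composite, with the downscale factor and kernel size again assumed:

```python
import cv2
import numpy as np

def composite_full_res(frame, mask_full, bg_scale=0.25):
    """Blur a cheap downscaled copy, upsample it, and composite it with
    the full-resolution frame: the person keeps full camera quality.
    bg_scale and the kernel size are illustrative assumptions."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (int(w * bg_scale), int(h * bg_scale)),
                       interpolation=cv2.INTER_AREA)
    bg = cv2.resize(cv2.GaussianBlur(small, (21, 21), 0), (w, h),
                    interpolation=cv2.INTER_LINEAR)
    alpha = np.dstack([mask_full] * 3)
    # The person is taken from the original frame, so no quality is lost.
    return (alpha * frame + (1.0 - alpha) * bg).astype(np.uint8)
```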
Meet is not the only application that offers this feature, but it is interesting to see how Google applies the method through its machine-learning tools. It is a similar, although much more basic, approach compared to what the Google Camera app does.
More information | Google
The news "Google explains how to blur and change the background of video calls on Google Meet" was originally published in Xataka Android by Ricardo Aguilar.