There’s no denying that the Google Pixel 2 has a great camera. Although the camera UI does not offer manual controls, Google’s HDR+ algorithm exposes most scenes very evenly, and it manages this fairly well even in low-light conditions.

HDR+ aside, Google published a Research Blog post about the Google Camera’s portrait mode. The Pixel’s portrait mode needs only a single camera lens, unlike many other OEMs’ phones, which require a second camera to map out depth and synthesize the bokeh (blur) effect.

We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.

Google’s camera uses “semantic image segmentation” to make this happen: the model maps out which pixels belong to the subject and which belong to the background. Google has released this technology as open source, so any phone maker can implement it in its own smartphones, and app developers can include it in their own apps.
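To illustrate the idea, here is a minimal sketch of how a subject/background mask can drive a portrait-mode effect. This is a toy illustration, not Google’s actual pipeline: the `box_blur` stand-in and the `portrait_composite` helper are hypothetical names, and a real implementation would get the mask from the segmentation network and render a proper lens blur.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur (a toy stand-in for real lens-blur rendering).
    Averages each pixel over a k x k neighborhood with edge padding."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_composite(img, mask, k=9):
    """Keep subject pixels sharp and blur the background.
    `mask` is 1.0 where segmentation labels a pixel as the subject,
    0.0 where it labels it as background."""
    blurred = box_blur(img, k)
    m = mask[..., None]  # broadcast the mask over the color channels
    return m * img + (1.0 - m) * blurred
```

In practice the mask would come from the per-pixel predictions of the segmentation model; everything labeled “person” stays sharp while the rest is blurred.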

The semantic image segmentation model can also label pixels with categories such as road, person, dog, or sky, so portrait mode is only one of the model’s possible applications. Check out the blog post in the source link for a better idea of how DeepLab accurately predicts what is actually in an image.
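The per-pixel labeling step can be sketched as taking, for each pixel, the highest-scoring class from the network’s output. A minimal illustration, assuming a hypothetical class list (real DeepLab models use the label sets of their training datasets, e.g. PASCAL VOC or Cityscapes):

```python
import numpy as np

# Hypothetical class list for illustration only.
CLASSES = ["background", "road", "person", "dog", "sky"]

def label_map(logits):
    """Turn per-pixel class scores of shape (H, W, num_classes) into
    one class index per pixel by taking the argmax over the class axis."""
    return np.argmax(logits, axis=-1)

# Toy 2x2 "image" with scores for the five classes above.
logits = np.zeros((2, 2, len(CLASSES)))
logits[0, 0, CLASSES.index("person")] = 1.0
logits[0, 1, CLASSES.index("sky")] = 1.0
labels = label_map(logits)  # (2, 2) array of class indices
```

The resulting label map is exactly what portrait mode consumes: pixels labeled “person” form the subject mask, and everything else is treated as background.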


