Dual cameras have become widely available in the smartphone world, and with them came a new technique for simulating a shallow depth-of-field, the feature popularly known as Portrait mode. Because a smartphone's sensor is so small and its field-of-view so wide, the camera captures everything in view with all objects in the image in focus.
Portrait mode simulates a shallow depth-of-field by using edge-detection mapping to differentiate the foreground from the background; it then blurs the background, making the foreground pop. While the iPhone X and the Note 8 implement depth mapping to identify what's in the foreground, the Pixel 2 does it with machine learning and a built-in pixel-splitting feature.
Marques Brownlee, an American YouTuber, demonstrates in his video how the iPhone X and the Note 8 use data from the wide-angle and telephoto lenses to create a depth map, then artificially blur objects in the background depending on how far they are from the in-focus subject. The Pixel 2's front-facing camera, by contrast, has no second lens for capturing depth data in detail; it uses pixel splitting to create a depth map and machine learning to identify and mask the subject.
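The core idea, a depth map driving a distance-dependent blur, can be sketched in a few lines. This is a minimal illustration, not any phone's actual pipeline: it assumes a grayscale image and a precomputed depth map as NumPy arrays, and the function name, the box-blur choice, and the linear radius-from-depth rule are all illustrative assumptions.

```python
import numpy as np

def portrait_blur(image, depth, focus_depth, max_radius=1):
    """Simulate a shallow depth-of-field (illustrative sketch).

    Each pixel is replaced by a box-blur average whose radius grows
    with the pixel's distance from the in-focus depth plane, so the
    subject stays sharp while the background smears out.
    """
    h, w = depth.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            # Blur radius proportional to distance from the focus plane
            # (hypothetical linear scheme; real pipelines are far subtler).
            r = int(round(abs(depth[y, x] - focus_depth) * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

With a depth map that marks the right-hand columns as background, pixels at the focus depth come back unchanged (radius 0) while background pixels are averaged with their neighbors, which is exactly the foreground-pop effect the phones approximate.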
The resulting images differ between the phones, and differ greatly from those taken with dedicated photography equipment, the 'real' cameras. The capabilities of smartphone camera software and hardware, he says, are improving at a much faster rate than those of traditional cameras, and soon it will be hard even for trained eyes to pick out the 'real' photographs.