During the unveiling of the iPhone 11 range in September, Apple introduced a potentially game-changing new camera feature it dubbed Deep Fusion. Now it’s available to test as part of the latest iOS 13 developer beta.
The Deep Fusion feature uses machine learning techniques to combine the same exposure captured by the telephoto and standard wide-angle lenses on the iPhone 11, 11 Pro and 11 Pro Max.
The feature is exclusive to the iPhone 11 range because it relies on the A13 Bionic SoC, but it signals a major shift in the role machine learning will play in iPhone photography going forward.
Apple says the end result will be optimised texture and detail, along with reduced noise, in photos taken using Deep Fusion.
It’s billed as especially effective in indoor situations, where smartphone cameras haven’t traditionally performed as well as we’d hope. Others have dubbed it ‘sweater mode’ because the pixel-by-pixel processing is particularly good at rendering finely textured subjects, such as sweaters with detailed patterns.
During its reveal last month, Apple senior VP of marketing Phil Schiller called it “computational photography mad science.”
In the iPhone 11 press release, Apple went into a little more detail: “Deep Fusion, coming later this fall, is a new image processing system enabled by the Neural Engine of A13 Bionic. Deep Fusion uses advanced machine learning to do pixel-by-pixel processing of photos, optimising for texture, details and noise in every part of the photo.”
Apple isn’t using an on-screen indicator to show when Deep Fusion is in effect, and it isn’t clear whether the mode can be turned off.
The feature comes as part of a new iOS 13 beta that is currently limited to developers. However, it’s likely to hit a public version of the beta in the near future. Have you taken Deep Fusion for a spin yet? Let us know @TrustedReviews on Twitter.