
Seeing in the dark: Google researchers use AI to create novel HDR views from noisy raw images


We’ve seen Google researchers accomplish amazing things with artificial intelligence, including remarkable upscaling. Now Google has set its sights on noise reduction with MultiNeRF, an open-source project that uses AI to improve image quality. One of its components, RawNeRF, takes a collection of noisy images and uses AI to recover detail in scenes captured in low-light and dark conditions.

In a research paper, ‘NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images,’ the team shows how it used Neural Radiance Fields (NeRF) to perform high-quality novel view synthesis from a collection of input images. The NeRF is trained to preserve the scene’s full dynamic range, which makes it possible to manipulate focus, exposure and tone mapping after capture. When optimized over many noisy raw inputs, the NeRF produces a scene reconstruction that outperforms single-image and multi-image raw denoisers. Further, the team claims that RawNeRF can reconstruct extremely noisy scenes captured in near-total darkness.
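Because the recovered scene representation stores linear, scene-referred color, changing exposure after the fact amounts to applying a gain before tone mapping. The sketch below is only an illustration of that idea, not code from the MultiNeRF repository; the sRGB tone curve, array shapes and function names are assumptions.

```python
import jax.numpy as jnp

def tonemap_srgb(linear):
    """Apply the standard sRGB transfer curve to linear values clipped to [0, 1]."""
    linear = jnp.clip(linear, 0.0, 1.0)
    return jnp.where(linear <= 0.0031308,
                     12.92 * linear,
                     1.055 * jnp.power(linear, 1.0 / 2.4) - 0.055)

def expose_and_tonemap(hdr_render, exposure_stops=0.0):
    """Re-expose a linear HDR render after the fact, then tone map it for display.

    hdr_render: linear, scene-referred RGB values (any shape ending in 3).
    exposure_stops: positive values brighten, negative values darken.
    """
    # In linear space, changing exposure is just a gain applied before tone mapping.
    return tonemap_srgb(hdr_render * (2.0 ** exposure_stops))

# Example: one render, two different exposures, no re-rendering needed.
hdr = jnp.full((480, 640, 3), 0.2)   # stand-in for a linear HDR render
bright = expose_and_tonemap(hdr, exposure_stops=2.0)
dark = expose_and_tonemap(hdr, exposure_stops=-1.0)
```

The same principle is what allows refocusing and different tone-mapping choices to be applied long after the original photos were taken.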

While a standard NeRF is trained on low dynamic range images that have already been processed into the sRGB color space, RawNeRF works directly on linear raw input data, preserving the scene’s full high dynamic range (HDR). Reconstructing the NeRF in raw space produces better results and enables novel HDR view synthesis. The researchers show that RawNeRF is ‘surprisingly robust to high levels of noise, to the extent that it can act as a competitive multi-image denoiser when applied to wide-baseline images of a static scene.’ Further, the team demonstrates the ‘HDR view synthesis applications enabled by recovering a scene representation that preserves high dynamic range color values.’
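Training on linear raw values raises a practical issue: a plain L2 loss is dominated by bright pixels and effectively ignores the dark regions where the noise lives. The paper addresses this by weighting the reconstruction error so it behaves more like a relative (log-space) error. The snippet below is a minimal sketch of that kind of weighted loss, assuming JAX tensors; the epsilon value and helper names are illustrative assumptions, not the authors’ exact implementation.

```python
import jax
import jax.numpy as jnp

def rawnerf_style_loss(rendered, noisy_raw, eps=1e-3):
    """Weighted L2 loss on linear raw values.

    Dividing the residual by a stop-gradient copy of the rendered value makes the
    penalty roughly relative, so errors in dark pixels are not drowned out by
    errors in bright ones. The eps value is an assumption for illustration.
    """
    weight = 1.0 / (jax.lax.stop_gradient(rendered) + eps)
    return jnp.mean(((rendered - noisy_raw) * weight) ** 2)

# Example: score a slightly-off render against noisy raw observations.
key = jax.random.PRNGKey(0)
raw = jax.random.uniform(key, (1024, 3), minval=0.0, maxval=1.0)   # stand-in noisy raw pixels
render = raw + 0.01 * jax.random.normal(jax.random.PRNGKey(1), (1024, 3))
print(rawnerf_style_loss(render, raw))
```

Averaging this loss over many noisy raw views of the same static scene is what lets the optimized NeRF act as an aggressive multi-image denoiser.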

Figure 6 – ‘Example postprocessed and color-aligned patches from our real denoising dataset. RawNeRF produces the most detailed output in each case. All deep denoising methods (columns 2-5) receive the noisy test image as input, whereas NeRF variants (columns 6-8) perform both novel view synthesis and denoising.’

The results are extremely impressive. Working with linear raw HDR input data opens up many new possibilities for computational photography, including postprocessing novel HDR views, such as adjusting focus and exposure after the fact.

The research paper was written by Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan and Jonathan T. Barron.


