Alejandro Sztrajman¹, Alexandros Neophytou², Tim Weyrich¹, Eric Sommerlade²
¹ University College London
² Microsoft, Reading, United Kingdom
We present a CNN-based method for outdoor high-dynamic-range (HDR) environment map prediction from low-dynamic-range (LDR) portrait images. Our method relies on two different CNN architectures, one for light encoding and another for face-to-light prediction. Outdoor lighting is characterised by an extremely high dynamic range, and thus our encoding splits the environment map data between low- and high-intensity components and encodes them using tailored representations. The combination of both network architectures constitutes an end-to-end method for accurate HDR light prediction from faces at real-time rates, a capability inaccessible to previous methods, which focused on low-dynamic-range lighting or relied on non-linear optimisation schemes. We train our networks using both real and synthetic images, compare our light encoding with other methods for light representation, and analyse our results for light prediction on real images. We show that our predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.
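To make the low/high-intensity split concrete, below is a minimal sketch of a threshold-based decomposition of an HDR environment map, with logarithmic compression of the high-intensity residual. The threshold value, the log encoding, and the function names are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def split_hdr_envmap(envmap, threshold=1.0):
    """Split an HDR environment map into low- and high-intensity parts.

    envmap: float array (H, W, 3) of linear radiance values.
    threshold: assumed cut-off separating the bounded, LDR-like component
               (sky, ambient) from high-intensity sources such as the sun.
    """
    low = np.clip(envmap, 0.0, threshold)       # bounded component
    high = np.maximum(envmap - threshold, 0.0)  # unbounded residual (sun peaks)
    high_log = np.log1p(high)                   # compress the extreme dynamic range
    return low, high_log

def merge_hdr_envmap(low, high_log):
    """Invert the split: recover linear radiance from the two encodings."""
    return low + np.expm1(high_log)

# Usage: round-trip a synthetic map containing a bright 'sun' pixel.
env = np.random.rand(64, 128, 3).astype(np.float32)
env[10, 40] = 5000.0  # outdoor sun radiance can exceed the sky by orders of magnitude
low, high_log = split_hdr_envmap(env)
recon = merge_hdr_envmap(low, high_log)
assert np.allclose(recon, env, rtol=1e-4)
```

Handling the two components with tailored representations lets a network regress the bounded part directly while the compressed residual keeps sun-scale intensities within a trainable numeric range.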
Alejandro Sztrajman, Alexandros Neophytou, Tim Weyrich, and Eric Sommerlade. High-dynamic-range lighting estimation from face portraits. In International Virtual Conference on 3D Vision (3DV), pages 355–363, Fukuoka, Japan (virtual), November 2020.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 642841.