What is Learned in Deep Uncalibrated Photometric Stereo?
Guanying Chen1     Michael Waechter2     Boxin Shi3,4
Kwan-Yee K. Wong1     Yasuyuki Matsushita2
1The University of Hong Kong     2Osaka University
3Peking University   4Peng Cheng Laboratory

Code [PyTorch]     Paper [ECCV 2020]     Supplementary [PDF]    



Abstract

This paper aims to discover what a deep uncalibrated photometric stereo network learns in order to resolve the problem’s inherent ambiguity, and to design a more effective network architecture based on this insight. The recently proposed deep uncalibrated photometric stereo method achieved promising results in estimating directional lightings. However, what specifically inside the network contributes to its success remains a mystery. In this paper, we analyze the features learned by this method and find that they strikingly resemble attached shadows, shadings, and specular highlights, which are known to provide useful clues for resolving the generalized bas-relief (GBR) ambiguity. Based on this insight, we propose a guided calibration network, named GCNet, that explicitly leverages object shape and shading information for improved lighting estimation. Experiments on synthetic and real datasets show that GCNet achieves improved results in lighting estimation for photometric stereo, which echoes the findings of our analysis. We further demonstrate that GCNet can be directly integrated with existing calibrated methods to achieve improved results on surface normal estimation.
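The GBR ambiguity mentioned in the abstract can be illustrated numerically: for a GBR matrix G parameterized by (μ, ν, λ), transforming surface normals by G⁻ᵀ and light directions by G leaves the rendered Lambertian intensity unchanged, so images alone cannot distinguish the two solutions. A minimal sketch (not part of the paper's released code; the parameter values are arbitrary):

```python
import numpy as np

# GBR matrix parameterized by (mu, nu, lam); values chosen arbitrarily
mu, nu, lam = 0.3, -0.2, 1.5
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [mu,  nu,  lam]])

rng = np.random.default_rng(0)
n = rng.normal(size=3); n /= np.linalg.norm(n)   # surface normal
l = rng.normal(size=3); l /= np.linalg.norm(l)   # light direction
rho = 0.8                                        # albedo

# Lambertian intensity before the transform
I_orig = rho * n @ l

# Apply the GBR transform: normals by G^{-T}, lights by G.
# (The transformed normal is no longer unit length; in the full GBR
# formulation that scale change is absorbed into the albedo.)
n_t = np.linalg.inv(G).T @ n
l_t = G @ l
I_gbr = rho * n_t @ l_t

print(np.isclose(I_orig, I_gbr))  # True: the rendered intensity is identical
```

Since (G⁻ᵀn)ᵀ(Gl) = nᵀG⁻¹Gl = nᵀl, the intensity is invariant for any choice of (μ, ν, λ), which is why additional cues such as attached shadows and specular highlights are needed to resolve the ambiguity.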


Introduction Talk (Video)



Feature Visualization of LCNet (Video)



Method


Structure of (a) the lighting estimation sub-network L-Net, (b) the normal estimation sub-network N-Net, and (c) the entire GCNet. Values in layers indicate the output channel number.


Lighting Estimation Results on Real Datasets

1. DiLiGenT Main Dataset

2. DiLiGenT Test Dataset

3. Light Stage Data Gallery



Code and Model

Code and models are available on GitHub!


Acknowledgments

Michael Waechter was supported through a JSPS Postdoctoral Fellowship (JP17F17350). Boxin Shi was supported by the National Natural Science Foundation of China under Grant No. 61872012, the National Key R&D Program of China (2019YFF0302902), and the Beijing Academy of Artificial Intelligence (BAAI). Kwan-Yee K. Wong was supported by the Research Grants Council of Hong Kong (SAR), China, under project HKU 17203119. Yasuyuki Matsushita was supported by JSPS KAKENHI Grant Number JP19H01123.

Webpage template borrowed from Split-Brain Autoencoders, CVPR 2017.