TOM-Net: Learning Transparent Object Matting from a Single Image
Guanying Chen*
Kai Han*
Kwan-Yee K. Wong
Department of Computer Science, The University of Hong Kong

Code [Torch]    Paper [CVPR 2018 Spotlight]    Journal Extension [IJCV]
Supplementary [PDF]       Poster [LaTeX]


Abstract

This paper addresses the problem of transparent object matting. Existing image matting approaches for transparent objects often require tedious capturing procedures and long processing time, which limit their practical use. In this paper, we first formulate transparent object matting as a refractive flow estimation problem. We then propose a deep learning framework, called TOM-Net, for learning the refractive flow. Our framework comprises two parts, namely a multi-scale encoder-decoder network for producing a coarse prediction, and a residual network for refinement. At test time, TOM-Net takes a single image as input, and outputs a matte (consisting of an object mask, an attenuation mask, and a refractive flow field) in a fast feed-forward pass. As no off-the-shelf dataset is available for transparent object matting, we create a large-scale synthetic dataset consisting of 158K images of transparent objects rendered in front of images sampled from the Microsoft COCO dataset. We also collect a real dataset consisting of 876 samples using 14 transparent objects and 60 background images. Promising experimental results have been achieved on both synthetic and real data, which clearly demonstrate the effectiveness of our approach.
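To make the formulation concrete: a matte (m, ρ, F) of object mask m, attenuation mask ρ, and refractive flow F specifies how a transparent object maps a background image to the observation, so compositing a matte over a known background should reproduce the input image. Below is a minimal NumPy sketch of this compositing step; the function name, variable names, and nearest-neighbor sampling are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def composite(background, mask, attenuation, flow):
    """Composite a matte over a known background.

    background  : (H, W, 3) float image.
    mask        : (H, W) object mask, 1 inside the transparent object.
    attenuation : (H, W) attenuation mask in [0, 1].
    flow        : (H, W, 2) refractive flow; (dx, dy) offsets telling each
                  foreground pixel where to sample the background.
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Coordinates after refraction (nearest-neighbor sampling for brevity;
    # a real implementation would typically use bilinear interpolation).
    xr = np.clip(np.round(xs + flow[..., 0]), 0, W - 1).astype(int)
    yr = np.clip(np.round(ys + flow[..., 1]), 0, H - 1).astype(int)
    refracted = background[yr, xr]
    m = mask[..., None]          # broadcast over color channels
    rho = attenuation[..., None]
    # Outside the object: the background itself; inside: the attenuated,
    # refracted background.
    return (1 - m) * background + m * rho * refracted
```

In this view, transparent object matting reduces to estimating m, ρ, and F from a single image such that this composite matches the observation.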


Video


Method
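As described in the abstract, TOM-Net is a two-stage, coarse-to-fine network: a multi-scale encoder-decoder predicts a coarse matte, and a residual network refines it in the same feed-forward pass. The PyTorch-style sketch below shows only this two-stage wiring; the module bodies, channel layout, and the way the residual is applied are placeholder assumptions, not the released Torch implementation.

```python
import torch
import torch.nn as nn

class CoarseStage(nn.Module):
    """Stand-in for the multi-scale encoder-decoder (a single conv here)."""
    def __init__(self):
        super().__init__()
        # 5 output channels: 2 mask logits + 1 attenuation + 2 flow (dx, dy).
        self.net = nn.Conv2d(3, 5, kernel_size=3, padding=1)

    def forward(self, image):
        out = self.net(image)
        return out[:, :2], out[:, 2:3], out[:, 3:5]  # mask, attenuation, flow

class RefineStage(nn.Module):
    """Stand-in for the residual refinement network."""
    def __init__(self):
        super().__init__()
        # Input: image (3 ch) + coarse matte (5 ch); output: matte residual.
        self.net = nn.Conv2d(8, 5, kernel_size=3, padding=1)

    def forward(self, image, mask, attenuation, flow):
        x = torch.cat([image, mask, attenuation, flow], dim=1)
        return self.net(x)

# One fast feed-forward pass: coarse prediction, then residual refinement.
image = torch.randn(1, 3, 64, 64)            # a single input image
coarse, refine = CoarseStage(), RefineStage()
mask, attenuation, flow = coarse(image)
matte = torch.cat([mask, attenuation, flow], dim=1)
matte = matte + refine(image, mask, attenuation, flow)  # refined matte
```

Conditioning the refinement stage on both the input image and the coarse matte lets it correct errors in the coarse prediction rather than re-estimating the matte from scratch.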


Qualitative Results

1. Results on synthetic dataset.

2. Results on real dataset.


Journal Extension (IJCV)

1. Comparison between TOM-Net, TOM-Net+Trimap, and TOM-Net+Bg

2. Limitations of TOM-Net: colored objects and objects under natural illumination


Code and Dataset

We make our code, trained model, and dataset publicly available.


[Code & Model]
[Datasets]
[Rendering Code]

Acknowledgments

This project is supported by a grant from the Research Grants Council of Hong Kong (SAR), China, under the project HKU 718113E. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research. We thank Yiming Qian for help with the synthetic data rendering.

Webpage template borrowed from Split-Brain Autoencoders, CVPR 2017.