BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization

Nanjing University of Science and Technology, Communication University of China

Abstract

Effectively exploring the colors of reference exemplars and propagating them to colorize each frame is vital for exemplar-based video colorization. In this paper, we present an effective BiSTNet that explores the colors of reference exemplars and utilizes them for video colorization through bidirectional temporal feature fusion guided by a semantic image prior. We first establish a semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from the exemplars. Then, to better propagate the colors of reference exemplars into each frame and avoid inaccurately matched colors from the exemplars, we develop a simple yet effective bidirectional temporal feature fusion module to better colorize each frame. We note that color-bleeding artifacts usually appear around the boundaries of important objects in videos. To overcome this problem, we further develop a mixed expert block to extract semantic information for modeling the object boundaries of frames, so that the semantic image prior can better guide the colorization process. In addition, we develop a multi-scale recurrent block to progressively colorize frames in a coarse-to-fine manner. Extensive experimental results demonstrate that the proposed BiSTNet performs favorably against state-of-the-art methods on benchmark datasets.
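For intuition, the sketch below shows the non-local feature matching commonly used to establish such semantic correspondences between a frame and an exemplar in deep feature space. The tensor shapes, the softmax temperature, and the confidence map are illustrative assumptions, not the exact implementation in BiSTNet.

```python
import torch
import torch.nn.functional as F

def semantic_correspondence(frame_feat, exemplar_feat, exemplar_ab):
    """Warp exemplar chrominance to a frame via deep-feature correlation.

    frame_feat:    (B, C, H, W) features of the grayscale frame
    exemplar_feat: (B, C, H, W) features of the reference exemplar
    exemplar_ab:   (B, 2, H, W) ab color channels of the exemplar
    This is a generic non-local matching sketch, not the paper's exact design.
    """
    B, C, H, W = frame_feat.shape
    # L2-normalize channels so the correlation behaves like cosine similarity.
    f = F.normalize(frame_feat.flatten(2), dim=1)     # (B, C, HW)
    e = F.normalize(exemplar_feat.flatten(2), dim=1)  # (B, C, HW)
    corr = torch.bmm(f.transpose(1, 2), e)            # (B, HW_frame, HW_ex)
    attn = F.softmax(corr / 0.01, dim=-1)             # soft matches (assumed temperature)
    ab = exemplar_ab.flatten(2)                       # (B, 2, HW_ex)
    warped = torch.bmm(ab, attn.transpose(1, 2))      # (B, 2, HW_frame)
    # Confidence of each frame location: its best match score in the exemplar.
    conf = corr.max(dim=-1).values.view(B, 1, H, W)
    return warped.view(B, 2, H, W), conf
```

Locations with low confidence are exactly where inaccurate exemplar matches arise, which motivates the temporal fusion described next.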

Framework

We present an effective BiSTNet to better explore and propagate colors from reference exemplars for video colorization. We first establish the semantic correspondence between each frame and the reference exemplars in a deep feature space, and develop a simple yet effective bidirectional temporal fusion block (BTFB) to better propagate the colors of reference exemplars and avoid inaccurately matched colors from the exemplars. Then, we develop a mixed expert block (MEB) to guide the colorization of the regions around object boundaries. Finally, we formulate the proposed method into a multi-scale recurrent block (MSRB) to progressively colorize video frames in a coarse-to-fine manner.
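To make the bidirectional propagation concrete, here is a minimal PyTorch sketch of fusing per-frame features along both temporal directions. The recurrent convolutions, zero initial states, and channel sizes are our assumptions for illustration, not the exact BTFB design.

```python
import torch
import torch.nn as nn

class BidirectionalTemporalFusion(nn.Module):
    """Sketch of bidirectional temporal feature fusion over a frame sequence.

    Assumes per-frame features are already extracted; the gating convolutions
    and zero-initialized states are illustrative, not the paper's exact BTFB.
    """
    def __init__(self, channels):
        super().__init__()
        self.forward_rnn = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.backward_rnn = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats):
        # feats: list of T tensors, each (B, C, H, W)
        T = len(feats)
        fwd, state = [], torch.zeros_like(feats[0])
        for t in range(T):  # forward pass: information flows first -> last
            state = self.forward_rnn(torch.cat([feats[t], state], dim=1))
            fwd.append(state)
        bwd, state = [None] * T, torch.zeros_like(feats[0])
        for t in reversed(range(T)):  # backward pass: last -> first
            state = self.backward_rnn(torch.cat([feats[t], state], dim=1))
            bwd[t] = state
        # Fuse both temporal directions per frame.
        return [self.fuse(torch.cat([fwd[t], bwd[t]], dim=1)) for t in range(T)]
```

Fusing both directions lets each frame draw on whichever exemplar (start or end of the clip) matches it more reliably.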

The architecture of BiSTNet for exemplar-based video colorization. The core components of our method are (a) the bidirectional temporal fusion block (BTFB), (b) the mixed expert block (MEB), and (c) the multi-scale recurrent block (MSRB).
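The coarse-to-fine recurrence of the MSRB can be sketched as below. Here `colorize_net` is a hypothetical per-scale colorization network, and the scale schedule and zero-initialized chrominance at the coarsest scale are assumptions, not the exact MSRB.

```python
import torch
import torch.nn.functional as F

def multiscale_recurrent_colorize(frame_l, colorize_net, scales=(0.25, 0.5, 1.0)):
    """Coarse-to-fine colorization sketch: refine ab channels scale by scale.

    frame_l:      (B, 1, H, W) luminance of the input frame
    colorize_net: hypothetical network mapping (luminance, coarse ab) -> refined ab
    """
    ab = None
    for s in scales:
        l_s = F.interpolate(frame_l, scale_factor=s, mode='bilinear',
                            align_corners=False)
        if ab is None:
            # Coarsest scale starts from zero chrominance (an assumption).
            ab = torch.zeros(l_s.shape[0], 2, l_s.shape[2], l_s.shape[3],
                             device=l_s.device)
        else:
            # Upsample the coarse estimate to the current scale before refining.
            ab = F.interpolate(ab, size=l_s.shape[-2:], mode='bilinear',
                               align_corners=False)
        ab = colorize_net(l_s, ab)
    return ab
```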

Input Videos (left column) and Colorized Videos (right column)

More Results on Synthetic Datasets

More Results on Real-World Videos