Temporal Denoising / Olaf Shot 3
We show denoising results for varying numbers of temporal frames and compare them to a high-quality reference and several baselines.
We compare our proposed denoiser to two state-of-the-art denoisers: NFOR (Bitterli et al. 2016) and a variant of the recurrent approach proposed by Chaitanya et al. (2017). To ensure a fair comparison to the latter, we pre-trained a single-frame, direct-predicting network with the same dimensions as our proposed network. We then added recurrent connections to provide temporal context and trained it on sequences to directly predict the denoised color of the center frame. In this way, the recurrent network has an equivalent number of parameters and access to the same amount of training data as our proposed denoiser. We refer to this approach as R-DP. We also considered a direct reimplementation of the method by Chaitanya et al. (2017), but it never yielded better results than R-DP. We thus focus on comparing our method and R-DP, which places the emphasis on the main concepts rather than on particular implementation details.
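To make the R-DP structure concrete, the following pure-Python sketch shows the control flow of a recurrent, direct-predicting denoiser: a hidden state is carried across the frames of a temporal window, and the denoised color of the center frame is predicted directly from that state. The `step` and `predict` functions are hypothetical placeholders standing in for the learned network layers, not the paper's actual architecture.

```python
# Hypothetical sketch of a recurrent, direct-predicting (R-DP) denoiser.
# Real implementations use learned convolutional layers; here `step` and
# `predict` are simple placeholders so the recurrent control flow is runnable.

def step(hidden, frame):
    """Fold one noisy frame into the recurrent hidden state (placeholder)."""
    return [0.5 * h + 0.5 * f for h, f in zip(hidden, frame)]

def predict(hidden, center_frame):
    """Directly predict the denoised center frame (placeholder)."""
    return [0.5 * (h + c) for h, c in zip(hidden, center_frame)]

def rdp_denoise(sequence):
    """Run the recurrent pass over a temporal window, denoise its center."""
    center = len(sequence) // 2
    hidden = [0.0] * len(sequence[0])  # initial recurrent state
    for frame in sequence:             # temporal context via recurrence
        hidden = step(hidden, frame)
    return predict(hidden, sequence[center])

# Example: a 3-frame window of tiny 4-pixel "frames".
window = [[0.9, 0.1, 0.5, 0.3],
          [1.1, 0.0, 0.4, 0.5],
          [0.8, 0.2, 0.6, 0.4]]
denoised = rdp_denoise(window)
```

The point of the sketch is the structure the comparison relies on: the recurrence supplies temporal context, while the prediction itself remains direct (outputting color rather than filter kernels).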
Numbers in parentheses indicate the size of the temporal window (in frames) used by each denoiser.
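For concreteness, a temporal window of odd size centered on frame `t` could be gathered as in the sketch below. Clamping indices at the ends of the sequence is one common convention and is an assumption here, not necessarily the convention used in the paper.

```python
def temporal_window(sequence, t, size):
    """Collect `size` frames centered on frame `t`, clamping at the ends.

    `size` corresponds to the temporal window reported in parentheses.
    Boundary clamping is an assumed convention for illustration.
    """
    assert size % 2 == 1, "window size must be odd so a center frame exists"
    k = size // 2
    last = len(sequence) - 1
    return [sequence[min(max(t + d, 0), last)] for d in range(-k, k + 1)]

# Example: a 5-frame sequence labeled by index.
frames = ["f0", "f1", "f2", "f3", "f4"]
window = temporal_window(frames, t=1, size=3)  # interior: no clamping
edge = temporal_window(frames, t=0, size=5)    # clamps at the start
```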