Denoising with Kernel Prediction and Asymmetric Loss Functions
We show denoising results with various numbers of temporal frames, and compare the results to a high-quality reference and several baselines.
Source-aware encoders enable straightforward adaptation of a trained model to new content. We train new source-aware encoders for the Tungsten Renderer on a training set built upon publicly available scenes.
We propose asymmetric loss functions that control the trade-off between over-blurring and leaving residual noise in cases where the network cannot perform well. We show denoised images at multiple settings of the run-time slope parameter λ.
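To make the idea concrete, here is a minimal NumPy sketch of an asymmetric L1 loss. It assumes the per-pixel error is amplified by λ when the denoised value lies on the same side of the reference as the noisy input (i.e. when residual noise is left in); the exact sign convention and base loss in the paper may differ, and `asymmetric_l1` is a hypothetical name. With λ = 1 it reduces to plain L1.

```python
import numpy as np

def asymmetric_l1(denoised, reference, noisy, lam):
    """Sketch of an asymmetric L1 loss (sign convention assumed).

    Errors where the denoised value deviates from the reference in the
    same direction as the noisy input (residual noise) are scaled by
    `lam`; lam = 1 recovers the ordinary L1 loss.
    """
    err = np.abs(denoised - reference)
    same_side = (denoised - reference) * (noisy - reference) > 0
    weight = np.where(same_side, lam, 1.0)
    return float(np.mean(weight * err))
```

Varying λ at run time then shifts the optimum between a smoother (more blurred) and a noisier (more input-faithful) result.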
Network Size Experiments
We use a deep network (48 layers) with residual blocks. Among the configurations we tested (12, 24, or 48 layers, each with and without residual blocks), this combination gives the best results, as the following convergence plots illustrate.
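As a toy illustration of the residual-block idea (a fully connected sketch, not the paper's convolutional architecture; the weight shapes and names here are hypothetical):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # skip connection: the block's output is the input plus a learned
    # transformation of it, y = x + W2 * relu(W1 * x)
    return x + w2 @ relu(w1 @ x)

# With zero-initialized weights the block is exactly the identity map,
# one common intuition for why very deep residual stacks stay trainable.
x = np.arange(4.0)
z = np.zeros((4, 4))
y = residual_block(x, z, z)
```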
Source-aware Encoders and a Comparison to the NFOR Denoiser
This is an extension of Fig. 8 in the paper. We plot the DSSIM error of our network relative to NFOR when: 1) training a Tungsten-aware encoder for a pre-trained network with frozen weights (orange line; the original network was trained on Moana and Cars data), and 2) training a Tungsten-specific network from scratch with freshly initialized weights (blue line). We used varying training-set sizes (indicated on the horizontal axis) and averaged the error metrics over sampling rates of 32 to 256 spp. For smaller training sets, training a Tungsten-aware encoder for an existing network gives better results than training from scratch. In both cases, our results are more robust than NFOR's.
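The frozen-weights adaptation in case 1) can be sketched as follows. This is a deliberately tiny toy (linear layers, one SGD step, made-up names like `W_core` and `W_enc`), standing in for the real setup where only the new source-aware encoder receives gradient updates while the shared denoiser core stays fixed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "core" (stands in for the pre-trained denoiser) and a
# trainable source-aware encoder for the new renderer's inputs.
W_core = rng.standard_normal((4, 4)) * 0.5   # frozen, never updated
W_enc = rng.standard_normal((4, 6)) * 0.1    # trained for the new source

x = rng.standard_normal(6)   # renderer-specific input features
t = rng.standard_normal(4)   # target output

def loss(w_enc):
    y = W_core @ (w_enc @ x)
    return float(np.sum((y - t) ** 2))

# One gradient-descent step that touches only the encoder weights;
# the gradient is backpropagated through the frozen core.
y = W_core @ (W_enc @ x)
grad_enc = np.outer(W_core.T @ (2.0 * (y - t)), x)
W_enc_new = W_enc - 1e-3 * grad_enc  # W_core is left unmodified
```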
- Trained from scratch
- Trained a source-aware encoder only
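The DSSIM metric used in the plots above is conventionally defined as (1 − SSIM) / 2. As a rough, hedged sketch, the version below uses global image statistics rather than the windowed SSIM typically used for evaluation (constants are the usual defaults for images in [0, 1]; the paper's exact metric settings may differ):

```python
import numpy as np

def dssim(a, b, c1=1e-4, c2=9e-4):
    # Simplified global-statistics SSIM (no sliding window), turned
    # into the DSSIM distance: (1 - SSIM) / 2; 0 for identical images.
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (va + vb + c2)
    )
    return (1.0 - ssim) / 2.0
```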
Images are © Disney, Disney / Pixar.