RaFE: Generative Radiance Fields Restoration

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15125)


Abstract

NeRF (Neural Radiance Fields) has demonstrated tremendous potential in novel view synthesis and 3D reconstruction, but its performance is sensitive to the quality of the input images, and it struggles to achieve high-fidelity rendering when provided with low-quality, sparse input viewpoints. Previous methods for NeRF restoration are tailored to a specific degradation type and ignore the generality of restoration. To overcome this limitation, we propose a generic radiance fields restoration pipeline, named RaFE, which applies to various types of degradation, such as low resolution, blurriness, noise, compression artifacts, or their combinations. Our approach leverages the success of off-the-shelf 2D restoration methods to recover the multi-view images individually. Instead of reconstructing a blurry NeRF by averaging out the resulting inconsistencies, we introduce a novel approach that uses Generative Adversarial Networks (GANs) for NeRF generation, better accommodating the geometric and appearance inconsistencies present in the restored multi-view images. Specifically, we adopt a two-level tri-plane architecture: the coarse level remains fixed to represent the low-quality NeRF, while a fine-level residual tri-plane, added to the coarse level, is modeled as a distribution with a GAN to capture potential variations in restoration. We validate RaFE on both synthetic and real cases for various restoration tasks, demonstrating superior performance in both quantitative and qualitative evaluations and surpassing other 3D restoration methods specific to a single task. Please see our project website: zkaiwu.github.io/RaFE.
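To make the two-level design concrete, below is a minimal PyTorch sketch of how a frozen coarse tri-plane and a GAN-generated fine residual tri-plane could be composed and sampled. Everything here (the sample_triplane helper, the toy ResidualTriplaneGenerator, and all shapes) is an illustrative assumption based on the abstract, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the two-level tri-plane idea described in the abstract.
# All names and shapes are assumptions, not the paper's actual code.

def sample_triplane(planes, xyz):
    """Bilinearly sample tri-plane features at 3D points.

    planes: (3, C, H, W) feature planes for the XY, XZ, and YZ planes.
    xyz:    (N, 3) points in [-1, 1]^3.
    Returns (N, 3*C) concatenated plane features.
    """
    coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # project onto each plane
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                                # (1, N, 1, 2) for grid_sample
        f = F.grid_sample(plane[None], grid, align_corners=True)   # (1, C, N, 1)
        feats.append(f[0, :, :, 0].t())                            # (N, C)
    return torch.cat(feats, dim=-1)

# Coarse level: a tri-plane fitted to the low-quality NeRF, then frozen.
coarse = torch.randn(3, 32, 64, 64)

# Fine level: a GAN generator maps a latent code to a residual tri-plane,
# modeling the distribution of plausible restorations (toy MLP for brevity).
class ResidualTriplaneGenerator(torch.nn.Module):
    def __init__(self, z_dim=64, c=32, res=64):
        super().__init__()
        self.c, self.res = c, res
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 3 * c * res * res),
        )

    def forward(self, z):
        return self.net(z).view(3, self.c, self.res, self.res)

gen = ResidualTriplaneGenerator()
z = torch.randn(64)
fine_residual = gen(z)

# Final radiance-field features: coarse level plus generated residual,
# sampled at query points and then fed to a small MLP for density/color.
xyz = torch.rand(1024, 3) * 2 - 1
features = sample_triplane(coarse + fine_residual, xyz)
```

Under this sketch, sampling different latent codes z yields different residual tri-planes, i.e., different plausible restorations of the same scene, which mirrors the distributional modeling of the fine level that the abstract describes.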



Acknowledgments

This work was supported in part by the Hong Kong Research Grants Council General Research Fund (17203023), in part by The Hong Kong Jockey Club Charities Trust under Grant 2022-0174, in part by the Startup Fund and the Seed Fund for Basic Research for New Staff from The University of Hong Kong, in part by the funding from UBTECH Robotics, in part by a GRF grant from the Research Grants Council (RGC) of the Hong Kong Special Administrative Region, China [Project No. CityU 11208123], and in part by the National Natural Science Foundation of China (62132001).

Author information


Corresponding author

Correspondence to Jing Zhang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 4963 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, Z., Wan, Z., Zhang, J., Liao, J., Xu, D. (2025). RaFE: Generative Radiance Fields Restoration. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15125. Springer, Cham. https://doi.org/10.1007/978-3-031-72855-6_10


  • DOI: https://doi.org/10.1007/978-3-031-72855-6_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72854-9

  • Online ISBN: 978-3-031-72855-6

  • eBook Packages: Computer Science, Computer Science (R0)
