**LABORATORY CLASS 14: Neural Radiance Fields (NeRF)**

In this class, we will learn about Neural Radiance Fields (NeRF), a technique that synthesizes novel views of a scene from a sparse set of input views. The algorithm represents a scene as an implicit function encoded in a fully connected deep network, which takes a 5D coordinate (3D position plus 2D viewing direction) as input and outputs the volume density and the view-dependent radiance at that point. We’ll learn how to use PyTorch3D to represent and render implicit surfaces, and how to reproduce results similar to NeRF’s on Google Colab.
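As an illustration of this representation, the sketch below shows, in plain PyTorch, what such a network can look like: a positional encoding followed by an MLP with a density head and a view-dependent color head. This is a heavily simplified, illustrative sketch, not PyTorch3D’s or the paper’s implementation; all names, layer sizes, and frequency counts here are our own assumptions.

```python
import math
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] features."""
    freqs = (2.0 ** torch.arange(num_freqs, dtype=torch.float32,
                                 device=x.device)) * math.pi
    angles = x[..., None] * freqs                 # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)              # (..., dim * 2 * num_freqs)


class TinyNeRF(nn.Module):
    """Toy MLP taking the encoded 5D coordinate (3D position + viewing
    direction, here given as a 3D unit vector) and returning the volume
    density sigma and the RGB radiance."""

    def __init__(self, num_freqs_xyz=6, num_freqs_dir=4, hidden=128):
        super().__init__()
        self.num_freqs_xyz = num_freqs_xyz
        self.num_freqs_dir = num_freqs_dir
        in_xyz = 3 * 2 * num_freqs_xyz
        in_dir = 3 * 2 * num_freqs_dir
        self.trunk = nn.Sequential(
            nn.Linear(in_xyz, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Color depends on both position features and the viewing direction.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + in_dir, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3),
        )

    def forward(self, xyz, directions):
        h = self.trunk(positional_encoding(xyz, self.num_freqs_xyz))
        sigma = torch.relu(self.density_head(h))       # density is non-negative
        d = positional_encoding(directions, self.num_freqs_dir)
        rgb = torch.sigmoid(self.color_head(torch.cat([h, d], dim=-1)))
        return sigma, rgb


# Query the field at 1024 random points and directions.
model = TinyNeRF()
xyz = torch.rand(1024, 3)
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
sigma, rgb = model(xyz, dirs)                          # (1024, 1) and (1024, 3)
```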

The goals of this lab are the following:

  • Understand how to represent implicit surfaces using deep neural networks.
  • Understand some techniques to render implicit surfaces (see the volume rendering sketch after this list).
  • Learn how to implement a “NeRF-like” pipeline using PyTorch3D.
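
Rendering such an implicit representation, the second goal above, boils down to emission-absorption volume rendering: sampling points along each camera ray and compositing their densities and colors into a pixel color. The sketch below implements that quadrature in plain PyTorch; the function name and tensor shapes are our own illustrative assumptions, not a PyTorch3D API.

```python
import torch


def composite_rays(sigmas, rgbs, depths):
    """Emission-absorption compositing along camera rays.

    sigmas: (num_rays, num_samples)     non-negative densities
    rgbs:   (num_rays, num_samples, 3)  radiance at each sample
    depths: (num_rays, num_samples)     increasing sample depths per ray
    Returns (num_rays, 3) composited pixel colors.
    """
    # Distance between consecutive samples; the last interval is open-ended.
    deltas = depths[:, 1:] - depths[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)

    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i).
    alphas = 1.0 - torch.exp(-sigmas * deltas)

    # Transmittance T_i = prod_{j<i} (1 - alpha_j): how much light survives
    # up to sample i (an exclusive cumulative product).
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)

    # Expected color along the ray: C = sum_i T_i * alpha_i * c_i.
    weights = trans * alphas                           # (num_rays, num_samples)
    return (weights[..., None] * rgbs).sum(dim=-2)
```

In PyTorch3D, this same logic comes packaged in ready-made components such as EmissionAbsorptionRaymarcher, which is combined with a ray sampler inside an ImplicitRenderer to form a NeRF-like rendering pipeline.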

Instructions and submission:

You are not required to submit anything as an assignment this time. We recommend that you study the references below and ask us if you have any questions.

Class Slides

Open In Colab

References

  1. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, & Ren Ng (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV.
  2. Frank Dellaert (2020). NeRF Explosion 2020. Personal blog post.
  3. Volumetric Reconstruction in PyTorch3D. In An Introduction to PyTorch3D, SIGGRAPH Asia 2020 course.
  4. Hallison Paz & Luiz Velho (2023). Machine Learning for New Media. Technical Report TR-03-2021, VISGRAF Lab, IMPA.
  5. Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, & Gordon Wetzstein (2020). Implicit Neural Representations with Periodic Activation Functions. In NeurIPS.
  6. Matthew Tancik. Neural Radiance Fields for View Synthesis.