When I saw this article about Neural Radiance Fields, I couldn't resist sharing it on my blog.

NeRF – Representing Scenes as Neural Radiance Fields for View Synthesis,

It presents a learning-based method for synthesizing novel views of complex outdoor scenes using only unstructured collections of in-the-wild photographs. From the abstract: "We build on neural radiance fields (NeRF), which uses the weights of a
multilayer perceptron to implicitly model the volumetric density and color of a scene. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or
transient occluders. In this work, we introduce a series of extensions to NeRF to address these issues, thereby allowing for accurate reconstructions from unstructured image collections taken from the internet. We apply our system, which we dub NeRF-W, to internet photo collections of famous landmarks, thereby producing photorealistic, spatially consistent scene representations despite unknown and confounding factors, resulting in significant improvement over the state of the art."
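The core idea, an MLP whose weights implicitly encode a scene's color and volumetric density, can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the network sizes, the positional-encoding depth, and all names here are my own assumptions.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # NeRF encodes inputs with sinusoids at increasing frequencies
    # so the MLP can represent high-frequency scene detail.
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

class TinyNeRF:
    """Toy NeRF-style MLP: 3D point -> (RGB color, volume density)."""

    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))  # 3 color channels + 1 density
        self.b2 = np.zeros(4)

    def forward(self, pts):
        h = np.maximum(positional_encoding(pts) @ self.w1 + self.b1, 0.0)  # ReLU
        out = h @ self.w2 + self.b2
        rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))  # sigmoid keeps colors in [0, 1]
        sigma = np.maximum(out[:, 3], 0.0)       # ReLU keeps density non-negative
        return rgb, sigma

# Query the "scene" at 5 random 3D points.
pts = np.random.default_rng(1).uniform(-1.0, 1.0, (5, 3))
model = TinyNeRF(in_dim=3 + 2 * 4 * 3)  # raw coords + sin/cos at 4 frequencies
rgb, sigma = model.forward(pts)
print(rgb.shape, sigma.shape)  # (5, 3) (5,)
```

Rendering an image then amounts to sampling such points along camera rays and alpha-compositing the predicted colors weighted by density; NeRF-W additionally conditions this network on per-image appearance and transient embeddings to handle lighting changes and occluders.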

Code | Paper →

For the full blog post, please visit:
https://wandb.ai/sweep/nerf/reports/NeRF-Representing-Scenes-as-Neural-Radiance-Fields-for-View-Synthesis–Vmlldzo3ODIzMA
