Pseudo-Generalized Dynamic View Synthesis from a Video
Authors: Xiaoming Zhao, Alex Colburn, Fangchang Ma, Miguel Angel Bautista Martin, Josh Susskind, Alex Schwing
Rendering scenes observed in a monocular video from novel viewpoints is a challenging problem. For static scenes, the community has studied both scene-specific optimization techniques, which optimize on every test scene, and generalized techniques, which only run a deep net forward pass on a test scene. In contrast, for dynamic scenes, scene-specific optimization techniques exist, but, to the best of our knowledge, there is currently no generalized method for dynamic novel view synthesis from a given monocular video. To explore whether generalized dynamic novel view synthesis from monocular videos is possible today, we establish an analysis framework based on existing techniques and work toward the generalized approach. We find that a pseudo-generalized process without scene-specific appearance optimization is possible, but geometrically and temporally consistent depth estimates are needed. Despite requiring no scene-specific appearance optimization, the pseudo-generalized approach improves upon some scene-specific methods. For more information, see the project page at https://xiaoming-zhao.github.io/projects/pgdvs.
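The abstract's point that consistent depth estimates are a prerequisite can be made concrete with depth-based reprojection: pixels are lifted into 3D with a depth map and projected into the novel camera, so any depth error moves the pixel in the rendered view. The sketch below is a minimal, hypothetical NumPy illustration of that step, not the authors' pipeline; `K` and `T_src2tgt` are assumed pinhole intrinsics and a source-to-target rigid transform.

```python
import numpy as np

def reproject(depth, K, T_src2tgt):
    """Map each source pixel to its location in a target view.

    depth:     (H, W) per-pixel depth for the source frame.
    K:         (3, 3) camera intrinsics (shared by both views here).
    T_src2tgt: (4, 4) rigid transform from source to target camera.
    Returns (H, W, 2) target-image coordinates for every source pixel.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates, shape (3, N).
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Back-project to 3D camera coordinates using the depth map.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
    # Transform into the target camera and project.
    tgt = K @ (T_src2tgt @ cam_h)[:3]
    uv = tgt[:2] / np.clip(tgt[2:3], 1e-6, None)  # perspective divide
    return uv.T.reshape(h, w, 2)
```

With an identity transform, every pixel maps back to itself; an inconsistent depth map would instead scatter pixels, which is exactly the failure mode the paper's requirement guards against.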
February 2, 2022 · Research areas: Computer Vision; Methods and Algorithms · Conference: WACV
We study the problem of novel view synthesis from sparse source observations of a scene comprised of 3D objects. We propose a simple yet effective approach that is neither continuous nor implicit, challenging recent trends on view synthesis. Our approach explicitly encodes observations into a volumetric representation that enables amortized rendering. We demonstrate that although continuous radiance field representations have gained a lot of...
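The explicit, non-implicit lookup that such a volumetric representation enables can be sketched as trilinear sampling from a dense grid: once observations are encoded into the volume, rendering queries become cheap interpolation rather than deep-net evaluations. This is a toy illustration under that assumption, not the paper's actual encoder or renderer.

```python
import numpy as np

def trilinear_sample(grid, pts):
    """Trilinearly interpolate a dense scalar volume at continuous points.

    grid: (D, H, W) volume in voxel coordinates.
    pts:  (N, 3) query points as (z, y, x); clamped to the grid.
    Returns (N,) interpolated values.
    """
    D, H, W = grid.shape
    p = np.clip(pts, 0, [D - 1, H - 1, W - 1]).astype(float)
    # Lower corner of each point's surrounding voxel cell.
    lo = np.floor(np.clip(p, 0, [D - 2, H - 2, W - 2])).astype(int)
    f = p - lo  # fractional offsets within the cell
    z0, y0, x0 = lo[:, 0], lo[:, 1], lo[:, 2]
    fz, fy, fx = f[:, 0], f[:, 1], f[:, 2]
    out = 0.0
    # Blend the 8 surrounding voxels with trilinear weights.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((fz if dz else 1 - fz)
                     * (fy if dy else 1 - fy)
                     * (fx if dx else 1 - fx))
                out = out + w * grid[z0 + dz, y0 + dy, x0 + dx]
    return out
```

Amortization comes from the fact that encoding the volume is done once per scene, after which every rendered ray reduces to lookups of this form.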
July 29, 2021 · Research areas: Computer Vision; Methods and Algorithms · Conference: ICCV
3D reconstruction of large scenes is a challenging problem due to the high complexity of the solution space, in particular for generative neural networks. In contrast to traditional learned generative models, which encode the full generative process into a neural network and can struggle to maintain local details at the scene level, we introduce a new method that directly leverages scene geometry from the training database. First, we...