DeepSurfels is a novel 3D representation of geometry and appearance that combines planar surface primitives with a voxel grid for improved scalability and rendering quality.
We approach the problem of online appearance reconstruction from RGB-D images with our novel DeepSurfels representation, which leverages an implicit grid representation to support flexible topologies and 2D surface-aligned feature maps to encode high-frequency appearance information and to scale to large scenes.
We further present an end-to-end trainable online appearance fusion pipeline that fuses information from RGB images into the proposed scene representation and is trained with self-supervision via the reprojection error with respect to the input images.
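The self-supervision described above amounts to a photometric reprojection loss: a view is rendered from the fused appearance and compared against the input RGB image. A minimal sketch of such a loss follows; the function name, the L1 penalty, and the optional validity mask are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reprojection_loss(rendered, target, mask=None):
    """Mean L1 photometric error between a rendered view and the input
    RGB image. `mask` (optional boolean array over pixels) restricts the
    loss to observed/valid pixels. Illustrative sketch, not the paper's API."""
    diff = np.abs(rendered.astype(np.float64) - target.astype(np.float64))
    if mask is not None:
        diff = diff[mask]  # keep only valid pixels
    return diff.mean()
```

In practice such a loss is minimized over many posed input views, so gradients flow back through the differentiable renderer into the surface-aligned appearance features.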
The DeepSurfels representation is well-suited for online updates of appearance information and is compatible with classical texture mapping as well as with learning-based approaches.
@InProceedings{DeepSurfels:CVPR:21,
  title     = {{DeepSurfels}: Learning Online Appearance Fusion},
  author    = {Mihajlovic, Marko and Weder, Silvan and Pollefeys, Marc and Oswald, Martin R.},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month     = jun,
  year      = {2021},
}