MM-3DScene: 3D Scene Understanding by Customizing Masked Modeling with Informative-Preserved Reconstruction and Self-Distilled Consistency

Mingye Xu1,3,4*, Mutian Xu2,5*, Tong He4, Wanli Ouyang4, Yali Wang1,4†, Xiaoguang Han2,5, Yu Qiao1,4†
1 Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; 2 SSE, CUHKSZ; 3 University of Chinese Academy of Sciences; 4 Shanghai Artificial Intelligence Laboratory; 5 FNii, CUHKSZ
* Indicates Equal Contribution
† Indicates Corresponding Authors

How can masked modeling be applied to large-scale 3D scenes? (a) Conventional random masked modeling on 3D scenes carries a high risk of uncertainty. In this figure, a chair and a TV are entirely masked, and they are extremely difficult to recover without any context guidance. (b) Our MM-3DScene exploits local statistics to discover and preserve representative structured points, effectively simplifying the pretext task. At each learning step, our method focuses on restoring regional geometry and enjoys less ambiguity. Moreover, since unmasked areas are underexplored during reconstruction, the model is encouraged to maintain the intrinsic spatial consistency of unmasked points across different masking ratios, which requires a consistent understanding of the unmasked areas.
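For intuition only, the toy sketch below implements one plausible version of this masking strategy in NumPy. The distance-spread score, the k = 16 neighborhood size, and the nested progressive schedule are our illustrative assumptions, not the paper's exact local statistics.

import numpy as np
from scipy.spatial import cKDTree

def informative_preserved_mask(points, mask_ratio, k=16):
    # Score each point by the spread of distances to its k nearest
    # neighbors -- a crude local statistic standing in for the paper's
    # measure of how "structured" (informative) a point is.
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # column 0 is the point itself
    score = dists[:, 1:].std(axis=1)

    # Mask the least informative points first, so that representative
    # structured points are preserved as context for reconstruction.
    n_mask = int(mask_ratio * len(points))
    order = np.argsort(score)                # ascending: least informative first
    masked = np.zeros(len(points), dtype=bool)
    masked[order[:n_mask]] = True
    return masked                            # True = masked, False = preserved

# Because the ranking is fixed, the masked sets are nested across
# ratios, which yields a progressive masking schedule for free.
points = np.random.rand(2048, 3).astype(np.float32)
masks = [informative_preserved_mask(points, r) for r in (0.3, 0.5, 0.7)]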

Abstract

Masked Modeling (MM) has demonstrated widespread success in various vision challenges by reconstructing masked visual patches. Yet, applying MM to large-scale 3D scenes remains an open problem due to data sparsity and scene complexity. The conventional random masking paradigm used on 2D images often incurs a high risk of ambiguity when recovering the masked regions of 3D scenes.

To this end, we propose a novel informative-preserved reconstruction, which exploits local statistics to discover and preserve representative structured points, effectively enhancing the pretext masking task for 3D scene understanding. Integrated with a progressive reconstruction scheme, our method can concentrate on modeling regional geometry and suffers less ambiguity during masked reconstruction. Besides, scenes under progressive masking ratios can also serve to self-distill their intrinsic spatial consistency, requiring the model to learn consistent representations from unmasked areas. By elegantly combining informative-preserved reconstruction on masked areas with consistency self-distillation from unmasked areas, we obtain a unified framework called MM-3DScene.
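As a rough illustration of the self-distilled consistency term, the PyTorch sketch below aligns per-point features of the same scene encoded under two masking ratios on their shared unmasked points. The function name, the cosine-based objective, and the tensor shapes are our assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def consistency_loss(feat_low, feat_high, visible_low, visible_high):
    # feat_*:    (N, C) per-point features of the same scene encoded
    #            under a lower / higher masking ratio.
    # visible_*: (N,) boolean flags of points left unmasked at each ratio.
    shared = visible_low & visible_high      # points visible in both views
    f_low = F.normalize(feat_low[shared], dim=-1)
    f_high = F.normalize(feat_high[shared], dim=-1)
    # Penalize any drift in a point's representation as more of the
    # surrounding scene gets masked (1 - cosine similarity).
    return (1.0 - (f_low * f_high).sum(dim=-1)).mean()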

We conduct comprehensive experiments on a host of downstream tasks. The consistent improvements (e.g., +6.1% mAP@0.5 on object detection and +2.2% mIoU on semantic segmentation) demonstrate the superiority of our approach.

BibTeX


        @inproceedings{xu2022mm,
          title={MM-3DScene: 3D Scene Understanding by Customizing Masked Modeling with Informative-Preserved Reconstruction and Self-Distilled Consistency},
          author={Xu, Mingye and Xu, Mutian and He, Tong and Ouyang, Wanli and Wang, Yali and Han, Xiaoguang and Qiao, Yu},
          booktitle={CVPR},
          year={2023}
        }