SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks
2017 IEEE International Conference on Robotics and Automation (ICRA), 2017 • ieeexplore.ieee.org
Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance: they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) and a state-of-the-art dense Simultaneous Localization and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple viewpoints to be probabilistically fused into a map. This not only produces a useful semantic 3D map, but we also show on the NYUv2 dataset that fusing multiple predictions leads to an improvement even in the 2D semantic labelling over baseline single-frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single-frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame-rates of ≈25 Hz.
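The probabilistic fusion the abstract refers to can be illustrated with a minimal sketch: each map element keeps a class-probability distribution, and every new CNN prediction that projects onto it is folded in by a recursive Bayesian update (multiply and renormalize). This toy example is an assumption-laden illustration of the general idea, not the authors' actual implementation; the array shapes, function name, and class counts are invented for demonstration.

```python
import numpy as np

def fuse_predictions(surfel_probs, new_probs):
    """Recursive Bayesian update: multiply each stored class
    distribution by the new per-view CNN prediction, then
    renormalize so each row sums to one.
    (Hypothetical helper; shapes are (n_elements, n_classes).)"""
    fused = surfel_probs * new_probs
    return fused / fused.sum(axis=-1, keepdims=True)

# Toy example: 2 map elements, 3 classes, uniform prior.
prior = np.full((2, 3), 1.0 / 3.0)

# Two (invented) CNN predictions of the same elements
# seen from different viewpoints.
pred_view1 = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1]])
pred_view2 = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1]])

probs = fuse_predictions(prior, pred_view1)
probs = fuse_predictions(probs, pred_view2)
labels = probs.argmax(axis=-1)  # most likely class per element
```

Fusing the two agreeing views sharpens the distributions relative to either single prediction, which mirrors the paper's finding that multi-view fusion improves over single-frame labelling.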