LEAP: LIBERATE SPARSE-VIEW 3D MODELING FROM CAMERA POSES

This post introduces LEAP, a novel method that addresses the challenges of sparse-view 3D modeling without relying on camera poses. By learning geometric knowledge from data, LEAP uses a neural volume to sidestep pose-estimation errors, achieving high-quality 3D reconstruction while running far faster than methods that depend on predicted poses.

Date

2023-10-02

Paper Title

LEAP: LIBERATE SPARSE-VIEW 3D MODELING FROM CAMERA POSES

Abstract

Are camera poses necessary for multi-view 3D modeling? Existing approaches predominantly assume access to accurate camera poses. While this assumption might hold for dense views, accurately estimating camera poses for sparse views is often elusive. Our analysis reveals that noisy estimated poses lead to degraded performance for existing sparse-view 3D modeling methods. To address this issue, we present LEAP, a novel pose-free approach, therefore challenging the prevailing notion that camera poses are indispensable. LEAP discards pose-based operations and learns geometric knowledge from data. LEAP is equipped with a neural volume, which is shared across scenes and is parameterized to encode geometry and texture priors. For each incoming scene, we update the neural volume by aggregating 2D image features in a feature-similarity-driven manner. The updated neural volume is decoded into the radiance field, enabling novel view synthesis from any viewpoint. On both object-centric and scene-level datasets, we show that LEAP significantly outperforms prior methods when they employ predicted poses from state-of-the-art pose estimators. Notably, LEAP performs on par with prior approaches that use ground-truth poses while running 400× faster than PixelNeRF. We show LEAP generalizes to novel object categories and scenes, and learns knowledge closely resembles epipolar geometry.
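The abstract's core mechanism, updating a scene-shared neural volume by aggregating 2D image features in a feature-similarity-driven manner and decoding it into a radiance field, can be sketched as cross-attention from learnable voxel embeddings to image feature tokens. This is a minimal illustrative sketch, not the paper's actual implementation; all class names, dimensions, and architectural details are assumptions:

```python
import torch
import torch.nn as nn

class LEAPStyleAggregator(nn.Module):
    """Hypothetical sketch of LEAP's pipeline: a scene-agnostic, learnable
    neural volume whose voxels attend to multi-view 2D image features
    (feature-similarity-driven aggregation), then decode to a radiance field."""

    def __init__(self, num_voxels=512, dim=64):
        super().__init__()
        # Shared-across-scenes neural volume: one learnable embedding per voxel,
        # parameterized to encode geometry and texture priors.
        self.volume = nn.Parameter(torch.randn(num_voxels, dim) * 0.02)
        # Attention weights are dot-product feature similarities, so this
        # aggregation is driven by feature similarity, not camera poses.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Decode each updated voxel feature into density (1) + RGB (3).
        self.decoder = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4)
        )

    def forward(self, image_feats):
        # image_feats: (B, N_tokens, dim) 2D features from the sparse views.
        B = image_feats.shape[0]
        queries = self.volume.unsqueeze(0).expand(B, -1, -1)
        # Voxels query the image tokens; similar features get higher weight.
        updated, _ = self.attn(queries, image_feats, image_feats)
        out = self.decoder(updated)           # (B, num_voxels, 4)
        density = torch.relu(out[..., :1])    # non-negative density
        rgb = torch.sigmoid(out[..., 1:])     # colors in [0, 1]
        return density, rgb

model = LEAPStyleAggregator()
feats = torch.randn(2, 100, 64)  # 2 scenes, 100 image feature tokens each
density, rgb = model(feats)
print(density.shape, rgb.shape)  # torch.Size([2, 512, 1]) torch.Size([2, 512, 3])
```

In the actual method the decoded volume is rendered into novel views via volume rendering; the sketch stops at per-voxel density and color to keep the aggregation step isolated.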

Introduction

In 3D vision, camera poses provide geometric priors that associate 3D points with 2D pixels. Their effectiveness has been validated across many 3D vision tasks, enabling high-quality reconstructions. In the real world, however, accurate camera poses are not always available, and inaccurate poses degrade performance. One way to obtain accurate poses is to capture dense views and apply SfM techniques. Yet in real-world scenarios, such as product images in online stores, we typically observe only sparse images captured by wide-baseline cameras. For sparse views, estimating accurate camera poses remains challenging, which raises a question: is using noisily estimated camera poses still the best option for 3D modeling from sparse, unposed views?

Citation (BibTeX)

@article{jiang2023LEAP,
title={LEAP: Liberate Sparse-view 3D Modeling from Camera Poses},
author={Jiang, Hanwen and Jiang, Zhenyu and Zhao, Yue and Huang, Qixing},
journal={ArXiv},
year={2023},
volume={2310.01410}
}

What Problem Does This Paper Solve

Existing sparse-view 3D modeling methods assume access to accurate camera poses, but pose estimation from sparse, wide-baseline views is unreliable, and the resulting noisy poses degrade reconstruction quality. This paper asks whether camera poses are necessary at all and proposes a pose-free alternative.

Strengths and Weaknesses of Existing Methods

Pose-based methods achieve high-quality reconstruction when dense views allow accurate pose estimation (e.g., via SfM). For sparse views, however, the authors' analysis shows their performance drops sharply once they must rely on noisy estimated poses.

Method Used in This Paper, and Its Strengths and Weaknesses

LEAP discards all pose-based operations and learns geometric knowledge from data. A neural volume, shared across scenes and parameterized to encode geometry and texture priors, is updated per scene by aggregating 2D image features in a feature-similarity-driven manner; the updated volume is decoded into a radiance field, enabling novel view synthesis from any viewpoint. LEAP performs on par with prior approaches that use ground-truth poses, runs about 400× faster than PixelNeRF, generalizes to novel object categories and scenes, and learns knowledge closely resembling epipolar geometry.

Datasets and Performance Metrics

The paper evaluates on both object-centric and scene-level datasets, comparing against prior methods using either predicted poses from state-of-the-art pose estimators or ground-truth poses; reported results cover novel view synthesis quality and inference speed (400× faster than PixelNeRF).

Relevance to Our Work

xxxxxxxxxxxx.

English Summary

LEAP is a pose-free approach to sparse-view 3D modeling. It replaces pose-based operations with a scene-shared, learnable neural volume that aggregates 2D image features by feature similarity and is decoded into a radiance field. LEAP significantly outperforms prior methods paired with predicted poses, matches them when they use ground-truth poses, and runs 400× faster than PixelNeRF.
