<img src='imgs/output6_120.gif' align="right" height="120px">
<br><br><br><br>
# Local Light Field Fusion
### [Project](https://bmild.github.io/llff) | [Video](https://youtu.be/LY6MgDUzS3M) | [Paper](https://arxiv.org/abs/1905.00889)
Tensorflow implementation for novel view synthesis from sparse input images.<br><br>
[Local Light Field Fusion: Practical View Synthesis
with Prescriptive Sampling Guidelines](https://bmild.github.io/llff)
[Ben Mildenhall](https://people.eecs.berkeley.edu/~bmild/)\*<sup>1</sup>,
[Pratul Srinivasan](https://people.eecs.berkeley.edu/~pratul/)\*<sup>1</sup>,
[Rodrigo Ortiz-Cayon](https://scholar.google.com/citations?user=yZMAlU4AAAAJ)<sup>2</sup>,
[Nima Khademi Kalantari](http://faculty.cs.tamu.edu/nimak/)<sup>3</sup>,
[Ravi Ramamoorthi](http://cseweb.ucsd.edu/~ravir/)<sup>4</sup>,
[Ren Ng](https://www2.eecs.berkeley.edu/Faculty/Homepages/yirenng.html)<sup>1</sup>,
[Abhishek Kar](https://abhishekkar.info/)<sup>2</sup>
<sup>1</sup>UC Berkeley, <sup>2</sup>Fyusion Inc, <sup>3</sup>Texas A&M, <sup>4</sup>UC San Diego
\*denotes equal contribution
In SIGGRAPH 2019
<img src='imgs/teaser.jpg'/>
## Table of Contents
* [Installation TL;DR: Setup and render a demo scene](#installation-tldr-setup-and-render-a-demo-scene)
* [Full Installation Details](#full-installation-details)
* [Manual installation](#manual-installation)
* [Docker installation](#docker-installation)
* [Using your own input images for view synthesis](#using-your-own-input-images-for-view-synthesis)
* [Quickstart: rendering a video from a zip file of your images](#quickstart-rendering-a-video-from-a-zip-file-of-your-images)
* [General step-by-step usage](#general-step-by-step-usage)
* [1. Recover camera poses](#1-recover-camera-poses)
* [2. Generate MPIs](#2-generate-mpis)
* [3. Render novel views](#3-render-novel-views)
* [Using your own poses without running COLMAP](#using-your-own-poses-without-running-colmap)
* [Troubleshooting](#troubleshooting)
* [Citation](#citation)
## Installation TL;DR: Setup and render a demo scene
First install `docker` ([instructions](https://docs.docker.com/install/linux/docker-ce/ubuntu/)) and `nvidia-docker` ([instructions](https://github.com/NVIDIA/nvidia-docker)).
Run this in the base directory to download a pretrained checkpoint, download a Docker image, and run code to generate MPIs and a rendered output video on an example input dataset:
```
bash download_data.sh
sudo docker pull bmild/tf_colmap
sudo docker tag bmild/tf_colmap tf_colmap
sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap bash demo.sh
```
A video like this should be output to `data/testscene/outputs/test_vid.mp4`:
<br/>
<img src='imgs/fern.gif'/>
If this works, then you are ready to start processing your own images! Run
```
sudo nvidia-docker run -it --rm --volume /:/host --workdir /host$PWD tf_colmap
```
to enter a shell inside the Docker container, and [skip ahead](#using-your-own-input-images-for-view-synthesis) to the section on using your own input images for view synthesis.
## Full Installation Details
You can either install the prerequisites by hand or use our provided Dockerfile to build a Docker image.
In either case, start by downloading this repository, then running the `download_data.sh` script to download a pretrained model and example input dataset:
```
bash download_data.sh
```
After installing dependencies, try running `bash demo.sh` from the base directory. (If using Docker, run this inside the container.) This should generate the video shown in the *Installation TL;DR* section at `data/testscene/outputs/test_vid.mp4`.
### Manual installation
- Install CUDA, Tensorflow, COLMAP, and ffmpeg
- Install the required Python packages:
```
pip install -r requirements.txt
```
- Optional: run `make` in `cuda_renderer/` directory.
- Optional: run `make` in `opengl_viewer/` directory. You may need to install GLFW or some other OpenGL libraries. For GLFW:
```
sudo apt-get install libglfw3-dev
```
### Docker installation
To build the docker image on your own machine, which may take 15-30 mins:
```
sudo docker build -t tf_colmap:latest .
```
To download the image (~6GB) instead:
```
sudo docker pull bmild/tf_colmap
sudo docker tag bmild/tf_colmap tf_colmap
```
Afterwards, you can launch an interactive shell inside the container:
```
sudo nvidia-docker run -it --rm --volume /:/host --workdir /host$PWD tf_colmap
```
From this shell, all the code in the repo should work (except `opengl_viewer`).
To run any single command `<command...>` inside the docker container:
```
sudo nvidia-docker run --rm --volume /:/host --workdir /host$PWD tf_colmap <command...>
```
## Using your own input images for view synthesis
<img src='imgs/capture.gif'/>
Our method takes in a set of images of a static scene, promotes each image to a local layered representation (MPI), and blends local light fields rendered from these MPIs to render novel views. Please see our paper for more details.
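For intuition, a single MPI is rendered by back-to-front alpha ("over") compositing of its RGBA planes. Below is a minimal per-pixel sketch of that compositing step only; it is not the repository's renderer, which also reprojects each plane into the novel view:

```python
def composite_mpi(planes):
    """Back-to-front 'over' compositing of MPI planes for one pixel.
    Each plane is an (rgb, alpha) pair, ordered from farthest to nearest."""
    out = 0.0
    for rgb, alpha in planes:
        # Standard 'over' operator: new color over accumulated background.
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Two planes: opaque gray background, half-transparent white foreground:
print(composite_mpi([(0.5, 1.0), (1.0, 0.5)]))  # → 0.75
```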
As a rule of thumb, you should use images where the maximum disparity between views is no more than about 64 pixels (watch the closest thing to the camera and don't let it move more than ~1/8 the horizontal field of view between images). Our datasets usually consist of 20-30 images captured handheld in a rough grid pattern.
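The 64-pixel bound can be turned into a rough capture-spacing check. This back-of-the-envelope sketch assumes a simple pinhole model (disparity in pixels = focal length in pixels × baseline ÷ depth) and is not part of the released code:

```python
def max_baseline(focal_px, nearest_depth, max_disparity_px=64.0):
    """Largest camera translation (same units as nearest_depth) that keeps
    the closest scene point within max_disparity_px between adjacent views.
    Derived from the pinhole relation: disparity_px = focal_px * baseline / depth."""
    return max_disparity_px * nearest_depth / focal_px

# e.g. a capture with a 900-pixel focal length and the nearest object at 1.5 m:
print(round(max_baseline(900.0, 1.5), 3))  # → 0.107 (meters between adjacent views)
```

Halving the focal length (or downsampling the images 2x) doubles the allowed spacing, which is why lower-resolution captures are more forgiving.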
#### Quickstart: rendering a video from a zip file of your images
You can quickly render novel view frames and a .mp4 video from a zip file of your captured input images with the `zip2mpis.sh` bash script.
```
bash zip2mpis.sh <zipfile> <your_outdir> [--height HEIGHT]
```
`height` is the output height in pixels. We recommend using a height of 360 pixels for generating results quickly.
## General step-by-step usage
Begin by creating a base scene directory (e.g., `scenedir/`), and copying your images into a subdirectory called `images/` (e.g., `scenedir/images`).
#### 1. Recover camera poses
This script calls COLMAP to run structure-from-motion, recovering 6-DoF camera poses and near/far depth bounds for the scene.
```
python imgs2poses.py <your_scenedir>
```
#### 2. Generate MPIs
This script uses our pretrained Tensorflow graph (make sure it exists in `checkpoints/papermodel`) to generate MPIs from the posed images. They will be saved in `<your_mpidir>`, a directory that the script will create.
```
python imgs2mpis.py <your_scenedir> <your_mpidir> \
[--checkpoint CHECKPOINT] \
[--factor FACTOR] [--width WIDTH] [--height HEIGHT] [--numplanes NUMPLANES] \
[--disps] [--psvs]
```
You should set at most one of `factor`, `width`, or `height` to determine the output MPI resolution: `factor` downsamples the input images by an integer factor (e.g., 2, 4, or 8), while `width`/`height` rescale the input images to the specified width or height. `numplanes` defaults to 32, and `checkpoint` defaults to the downloaded checkpoint.
Example usage:
```
python imgs2mpis.py scenedir scenedir/mpis --height 360
```
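The resolution selection described above can be sketched as follows. This is only an illustration of the documented behavior (at most one of the three options, aspect ratio preserved), not the script's actual implementation:

```python
def output_resolution(in_width, in_height, factor=None, width=None, height=None):
    """Resolve the output MPI resolution from at most one of factor/width/height,
    preserving the input aspect ratio."""
    if sum(arg is not None for arg in (factor, width, height)) > 1:
        raise ValueError("set at most one of factor, width, height")
    if factor is not None:
        # Integer downsampling factor, e.g. 2, 4, 8.
        return in_width // factor, in_height // factor
    if width is not None:
        return width, round(in_height * width / in_width)
    if height is not None:
        return round(in_width * height / in_height), height
    return in_width, in_height

print(output_resolution(4032, 3024, height=360))  # → (480, 360)
```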
#### 3. Render novel views
You can either generate a list of novel view camera poses and render out a video, or you can load the saved MPIs in our interactive OpenGL viewer.
#### Generate poses for new view path
First, generate a smooth new view path by calling
```
python imgs2renderpath.py <your_scenedir> <your_posefile> \
[--x_axis] [--y_axis] [--z_axis] [--circle] [--spiral]
```
`<your_posefile>` is the path of an output .txt file that the script will create, containing camera poses for the rendered novel views. The five optional flags specify the camera trajectory: each axis option translates the camera along a straight line on that camera axis, `--circle` traces a circle in the camera plane, and `--spiral` combines that circle with movement along the z-axis.
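For intuition, the circle and spiral trajectories can be sketched as camera-center offsets in the mean camera's coordinate frame. This is an illustration only; the real script generates full camera poses, not just positions:

```python
import math

def path_offsets(n_frames, radius, kind="circle", z_amplitude=0.0):
    """Camera-center offsets for a ring of n_frames views.
    'circle' stays in the camera plane (z = 0); 'spiral' adds
    sinusoidal motion along the z-axis on top of the same circle."""
    offsets = []
    for i in range(n_frames):
        t = 2.0 * math.pi * i / n_frames
        x, y = radius * math.cos(t), radius * math.sin(t)
        z = z_amplitude * math.sin(t) if kind == "spiral" else 0.0
        offsets.append((x, y, z))
    return offsets

# 120 frames on a circle of radius 0.1 in camera-plane units:
ring = path_offsets(120, 0.1)
```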
Example usage:
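A plausible invocation, following the pattern of the earlier script examples (the output filename `spiral_path.txt` is an assumption, not a name fixed by the repo):

```shell
python imgs2renderpath.py scenedir scenedir/spiral_path.txt --spiral
```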