# nyu_depth_v2
- **Description**:

The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.

- **Additional Documentation**:
  [Explore on Papers With Code](https://paperswithcode.com/dataset/nyuv2)

- **Homepage**:
  [https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html](https://cs.nyu.edu/%7Esilberman/datasets/nyu_depth_v2.html)

- **Source code**:
  [`tfds.datasets.nyu_depth_v2.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)

- **Versions**:

  - **`0.0.1`** (default): No release notes.

- **Download size**: `31.92 GiB`

- **Dataset size**: `74.03 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:
| Split          | Examples |
|----------------|----------|
| `'train'`      | 47,584   |
| `'validation'` | 654      |
- **Feature structure**:

      FeaturesDict({
          'depth': Tensor(shape=(480, 640), dtype=float16),
          'image': Image(shape=(480, 640, 3), dtype=uint8),
      })
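To make the schema concrete, the sketch below constructs one synthetic example with NumPy matching the shapes and dtypes above (the arrays are random placeholders, not real dataset samples):

```python
import numpy as np

# Synthetic example matching the NYU-Depth V2 feature structure.
# Placeholder data only; real samples come from the TFDS builder.
example = {
    # Per-pixel depth map, float16, shape (480, 640).
    "depth": np.random.rand(480, 640).astype(np.float16),
    # RGB frame, uint8, shape (480, 640, 3).
    "image": np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8),
}

print(example["depth"].shape, example["depth"].dtype)  # (480, 640) float16
print(example["image"].shape, example["image"].dtype)  # (480, 640, 3) uint8
```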
- **Feature documentation**:

| Feature | Class        | Shape         | Dtype   | Description |
|---------|--------------|---------------|---------|-------------|
|         | FeaturesDict |               |         |             |
| depth   | Tensor       | (480, 640)    | float16 |             |
| image   | Image        | (480, 640, 3) | uint8   |             |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `('image', 'depth')`
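Since depth is stored as float16, a common preprocessing step before visualization or loss computation is to normalize it into [0, 1]. A minimal NumPy sketch (the min-max scaling choice is an illustration, not prescribed by the dataset):

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Min-max scale a float16 depth map into [0, 1] as float32."""
    d = depth.astype(np.float32)  # promote before arithmetic for precision
    d_min, d_max = d.min(), d.max()
    if d_max == d_min:  # constant map: avoid division by zero
        return np.zeros_like(d)
    return (d - d_min) / (d_max - d_min)

# Placeholder depth map standing in for one (480, 640) dataset example.
depth = np.linspace(0.5, 10.0, 480 * 640, dtype=np.float16).reshape(480, 640)
norm = normalize_depth(depth)
print(norm.min(), norm.max())  # 0.0 1.0
```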
- **Citation**:

      @inproceedings{Silberman:ECCV12,
        author = {Nathan Silberman, Derek Hoiem, Pushmeet Kohli and Rob Fergus},
        title = {Indoor Segmentation and Support Inference from RGBD Images},
        booktitle = {ECCV},
        year = {2012}
      }
      @inproceedings{icra_2019_fastdepth,
        author = {Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne},
        title = {FastDepth: Fast Monocular Depth Estimation on Embedded Systems},
        booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
        year = {2019}
      }
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2022-11-23 UTC.