# nyu_depth_v2
**Description**:

The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect.
**Splits**:

| Split          | Examples |
|----------------|----------|
| `'train'`      | 47,584   |
| `'validation'` | 654      |
**Feature structure**:

    FeaturesDict({
        'depth': Tensor(shape=(480, 640), dtype=float16),
        'image': Image(shape=(480, 640, 3), dtype=uint8),
    })
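A minimal sketch of how this dataset is typically consumed, assuming the `tensorflow-datasets` package is installed. The import is deferred inside the function so that nothing heavy runs (and the roughly 32 GiB download is not triggered) until the loader is actually called; the supervised-keys tuple matches the `('image', 'depth')` pairing documented below.

```python
# Supervised keys for this dataset: as_supervised=True yields
# (image, depth) tuples instead of a feature dict.
SUPERVISED_KEYS = ("image", "depth")


def load_nyu_depth_v2(split="train"):
    """Load a split of nyu_depth_v2 as (image, depth) pairs.

    Deferred import: tensorflow-datasets is an optional, heavy
    dependency, and calling this triggers the ~32 GiB download
    on first use.
    """
    import tensorflow_datasets as tfds

    return tfds.load("nyu_depth_v2", split=split, as_supervised=True)
```

The `split` argument accepts the names from the splits table above (`'train'`, `'validation'`).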
**Feature documentation**:

| Feature | Class        | Shape         | Dtype   | Description |
|---------|--------------|---------------|---------|-------------|
|         | FeaturesDict |               |         |             |
| depth   | Tensor       | (480, 640)    | float16 |             |
| image   | Image        | (480, 640, 3) | uint8   |             |
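As a sanity check on the feature spec above, a NumPy-only sketch (no download; the array contents are synthetic) of the documented shapes and dtypes, plus a common preprocessing step: min-max normalizing the `float16` depth map to `uint8` for visualization. The cast to `float32` before normalizing is an assumption made here to avoid `float16` precision issues, not something the dataset itself prescribes.

```python
import numpy as np

# Synthetic example matching the documented feature spec:
#   'image': uint8 RGB frame of shape (480, 640, 3)
#   'depth': float16 depth map of shape (480, 640)
image = np.zeros((480, 640, 3), dtype=np.uint8)
rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 10.0, size=(480, 640)).astype(np.float16)


def depth_to_uint8(d):
    """Min-max normalize a depth map to uint8 for visualization."""
    d = d.astype(np.float32)  # widen from float16 before arithmetic
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return (d * 255).astype(np.uint8)


vis = depth_to_uint8(depth)
```

The same function applies unchanged to real `depth` arrays pulled from the dataset.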

**Citation**:

    @inproceedings{Silberman:ECCV12,
      author = {Nathan Silberman, Derek Hoiem, Pushmeet Kohli and Rob Fergus},
      title = {Indoor Segmentation and Support Inference from RGBD Images},
      booktitle = {ECCV},
      year = {2012}
    }
    @inproceedings{icra_2019_fastdepth,
      author = {Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne},
      title = {FastDepth: Fast Monocular Depth Estimation on Embedded Systems},
      booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
      year = {2019}
    }
**Additional documentation**: [Explore on Papers With Code](https://paperswithcode.com/dataset/nyuv2)

**Homepage**: [https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html](https://cs.nyu.edu/%7Esilberman/datasets/nyu_depth_v2.html)

**Source code**: [`tfds.datasets.nyu_depth_v2.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)

**Versions**: `0.0.1` (default): No release notes.

**Download size**: `31.92 GiB`

**Dataset size**: `74.03 GiB`

**Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No

**Supervised keys** (see [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `('image', 'depth')`

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2022-11-23 UTC.