3D Multi Object GAN
Fully Convolutional Refined Auto-Encoding Generative Adversarial
Networks for 3D Multi Object Scenes
8/31/2017
[Architecture diagram: x → Encoder (fully convolutional) → zenc → Generator → xrec → Refiner; z ~ normal distribution → reshape → Generator → xgen → Refiner; a Discriminator scores scenes real/fake; a Code Discriminator scores latent codes real/fake]
Agenda
• Introduction
• Dataset
• Network Architecture
• Loss Functions
• Experiments
• Evaluations
• Suggestions of Future Work
Source Code:
https://github.com/yunishi3/3D-FCR-alphaGAN
Introduction
3D multi-object generative models are an extremely important task
for the AR/VR and graphics fields.
- Synthesize a variety of novel 3D multi-object scenes
- Recognize objects, including their shapes and layouts
So far, only single objects have been generated:
Simple 3D-GANs[1]
Simple 3D-VAE[2]
Multi Object Scenes
Dataset
• SUNCG dataset
Extracted only the voxel models from the SUNCG dataset
- Voxel size: 80 x 48 x 80 (downsized from 240 x 144 x 240)
- 12 classes: [empty, ceiling, floor, wall, window, chair, bed, sofa, table, tvs, furn, objs]
- Amount: around 185,000 scenes
- Removed the trimming by camera angles
- Kept only scenes with more than 10,000 occupied voxels
- No labels
From Princeton [3]
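The 10,000-voxel filter above can be sketched as follows (a minimal illustration; `keep_scene` and the exact threshold semantics are assumptions, not the repository's actual preprocessing code):

```python
import numpy as np

def keep_scene(voxels, min_occupied=10000):
    """Keep a scene only if it has enough occupied (non-'empty') voxels.

    voxels: integer array of shape (80, 48, 80) holding class indices,
    where class 0 is 'empty'.
    """
    return int(np.count_nonzero(voxels)) > min_occupied
```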
Challenges, Difficulties
Sparse, with huge variety

Average occupancy ratio of each class in the dataset [%]:

| empty  | ceiling | floor | wall  | window | chair | bed   | sofa  | table | tv    | furniture | objects |
|--------|---------|-------|-------|--------|-------|-------|-------|-------|-------|-----------|---------|
| 92.466 | 0.945   | 1.103 | 1.963 | 0.337  | 0.070 | 0.368 | 0.378 | 0.107 | 0.009 | 1.406     | 0.846   |

[Bar chart: average occupancy ratio [%] per class, broken down by scene type: dining room, bedroom, garage, living room]
Network Architecture
Fully Convolutional Refined Auto-Encoding Generative Adversarial Networks
- Architecture similar to 3D-GAN [1]
- Encoder and discriminator nearly mirror the generator
- Latent space is a fully convolutional layer (5 x 3 x 5 x 16)
- Full convolution lets zenc represent more features
- Last activation of the generator is a softmax over the 12 classes
- Code discriminator is fully connected (2 hidden layers) [4]
- Refiner architecture is similar to SimGAN [5]
Novel contributions: multi-class activation for multi-object scenes, and the fully convolutional latent space
Network Architecture
Generator network, inspired by [1] [Wu et al. 2016, MIT]:
z (5x3x5x16) → reshape & FC → 5x3x5x512 → 10x6x10x256 → 20x12x20x128 → 40x24x40x64 → 80x48x80x12
Each layer: 3D deconv (5x3x5, stride 2), batch norm, ReLU (encoder, generator) / LReLU (discriminator)
Last activation: softmax
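The spatial dimensions above can be sanity-checked with the standard transposed-convolution size formula. This sketch assumes kernel 4, stride 2, padding 1 (which exactly doubles each dimension per layer); the actual layers may use different kernel/padding choices:

```python
def deconv_out(size, kernel, stride=2, padding=0, output_padding=0):
    """Output size of one transposed-convolution dimension."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

def generator_shapes(start=(5, 3, 5), layers=4):
    """Trace the spatial shape through stride-2 3D deconvs: 5x3x5 -> 80x48x80."""
    shapes = [tuple(start)]
    for _ in range(layers):
        shapes.append(tuple(deconv_out(s, kernel=4, stride=2, padding=1)
                            for s in shapes[-1]))
    return shapes
```

Paired with the channel schedule 512 → 256 → 128 → 64 → 12, this reproduces the 80 x 48 x 80 x 12 output.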
Network Architecture
Refiner network, inspired by [5]:
80x48x80x12 → 3x3x3 conv + ReLU → 80x48x80x32 → ResNet block x4 → 80x48x80x32 → 3x3x3 conv → 80x48x80x12
Each layer: 3D deconv (stride 1), ReLU; the ResNet block (two 3x3x3 convolutions with ReLU) loops 4 times with different weights
Last activation: softmax
[Pasted from SimGAN [5]: Figure 5, refined UnityEyes images compared with real MPIIGaze images; Figure 6, a ResNet block with two n x n convolutional layers]
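The refiner's residual block can be sketched in NumPy (a deliberately naive, loop-based 'same' convolution for illustration only; the real refiner uses 32 learned feature maps and a deep-learning framework):

```python
import numpy as np

def conv3d_same(x, w):
    """Naive 'same'-padded 3D convolution.

    x: (D, H, W, Cin) feature volume; w: (3, 3, 3, Cin, Cout) kernel.
    """
    D, H, W, _ = x.shape
    cout = w.shape[-1]
    xp = np.pad(x, ((1, 1), (1, 1), (1, 1), (0, 0)))  # zero-pad spatial dims
    out = np.zeros((D, H, W, cout))
    for d in range(D):
        for h in range(H):
            for k in range(W):
                patch = xp[d:d + 3, h:h + 3, k:k + 3, :]  # (3, 3, 3, Cin)
                out[d, h, k] = np.tensordot(patch, w,
                                            axes=([0, 1, 2, 3], [0, 1, 2, 3]))
    return out

def resnet_block(x, w1, w2):
    """SimGAN-style residual block: conv-ReLU-conv, add input, ReLU."""
    h = np.maximum(conv3d_same(x, w1), 0.0)
    h = conv3d_same(h, w2)
    return np.maximum(x + h, 0.0)
```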
Loss / Training

Reconstruction Loss
$\mathcal{L}_{rec} = \sum_{n}^{class} w_n \left[ -\gamma\, x \log x_{rec} - (1-\gamma)(1-x)\log(1-x_{rec}) \right]$
Encourages high reconstruction accuracy; $w$ is the occupancy-normalized weight, recomputed for every batch.

GAN Loss
The discriminator learns to discriminate real and fake scenes accurately; the generator learns to fool the discriminator.
$\mathcal{L}_{GAN}^{D} = -\log D(x) - \log(1 - D(x_{rec})) - \log(1 - D(x_{gen}))$
$\mathcal{L}_{GAN}^{G} = -\log D(x_{rec}) - \log D(x_{gen})$

Distribution GAN Loss
The code discriminator learns to discriminate the prior and encoded distributions accurately; the encoder learns to fool the code discriminator.
$\mathcal{L}_{cGAN}^{D} = -\log D_{code}(z) - \log(1 - D_{code}(z_{enc}))$
$\mathcal{L}_{cGAN}^{E} = -\log D_{code}(z_{enc})$

Training Loss
Encoder: $\min_{E} \mathcal{L} = \mathcal{L}_{cGAN}^{E} + \lambda \mathcal{L}_{rec}$
Generator (with refiner): $\min_{G} \mathcal{L} = \lambda \mathcal{L}_{rec} + \mathcal{L}_{GAN}^{G}$
Discriminator: $\min_{D} \mathcal{L} = \mathcal{L}_{GAN}^{D}$
Code discriminator: $\min_{C} \mathcal{L} = \mathcal{L}_{cGAN}^{D}$

Learning rate: 0.0001; batch size: 20 (base), 8 (refiner); iterations: 100,000 (75,000 base + 25,000 refiner). The refiner is trained after the first 75,000 iterations.
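The occupancy-weighted reconstruction loss can be sketched as below. The weight normalization and the default γ here are illustrative assumptions; the repository's exact weighting may differ:

```python
import numpy as np

def reconstruction_loss(x, x_rec, gamma=0.97, eps=1e-8):
    """Occupancy-weighted voxel cross-entropy.

    x:     (D, H, W, C) one-hot ground-truth voxel labels
    x_rec: (D, H, W, C) softmax outputs of the generator
    gamma weights the occupied-voxel terms; w_n downweights frequent
    classes (such as 'empty') using per-batch occupancy counts.
    """
    occupancy = x.sum(axis=(0, 1, 2))            # voxel count per class
    w = occupancy.sum() / (occupancy + 1.0)      # rarer class -> larger weight
    w = w / w.sum()                              # normalize the weights
    ce = (-gamma * x * np.log(x_rec + eps)
          - (1.0 - gamma) * (1.0 - x) * np.log(1.0 - x_rec + eps))
    return float((w * ce.sum(axis=(0, 1, 2))).sum())
```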
Experiments
The refiner visually smooths and refines shapes.

Generated from random distribution (FC-VAE vs. 3D FCR-alphaGAN, before/after refine):
Scenes generated from the random distribution were not realistic. This architecture worked better than a plain VAE, but not well enough, likely because the encoder did not generalize to the prior distribution.

Reconstruction (real vs. reconstruction, before/after refine):
Scenes are almost reconstructed, but small shapes disappear.
Result
Numerical evaluation of reconstruction by Intersection-over-Union (IoU) [6]
Reconstruction accuracy improved thanks to the fully convolutional latent space and the alphaGAN framework.
IoU for every class, and IoU over all classes, compared at the same number of latent-space dimensions.
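IoU on voxel grids, the evaluation metric above, can be computed per class as:

```python
import numpy as np

def class_iou(pred, target, cls):
    """Intersection-over-Union of one class between two voxel label grids."""
    p = (pred == cls)
    t = (target == cls)
    union = np.logical_or(p, t).sum()
    if union == 0:
        return float('nan')   # class absent from both grids
    return float(np.logical_and(p, t).sum()) / float(union)
```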
Evaluations
Interpolation
Smooth transitions between scenes are produced.
Evaluations
Latent Space Evaluation
2D mapping by SVD of 200 encoded samples.
Color: 1D embedding by SVD of the centroid coordinates of each scene.
Fully convolutional vs. standard VAE: full convolution ties the latent space to spatial context. The fully convolutional mapping follows the 1D centroid embedding from lower right to upper left; the standard VAE's mapping does not.
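The 2D SVD mapping described above can be sketched as follows (flattening each 5x3x5x16 code to a row vector is an assumption about the preprocessing):

```python
import numpy as np

def svd_2d_projection(Z):
    """Project N latent codes onto their top-2 singular directions.

    Z: (N, D) matrix of flattened latent codes, e.g. D = 5 * 3 * 5 * 16.
    Returns an (N, 2) embedding, as in the scatter plot of 200 samples.
    """
    Zc = Z - Z.mean(axis=0)                          # center the codes
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:2].T                             # coordinates on top-2 axes
```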
Evaluations
Latent-space evaluation by added noise
Effect of the individual spatial dimensions of the 5 x 3 x 5 latent space. Red marks the magnitude of change caused by normal-distribution noise added to a single dimension.
・Dimension (2,0,4) changes objects in the right-back area.
・Dimension (4,0,1) changes objects in the left-front area.
・Dimension (1,0,0) changes objects in the left-back area.
・Dimension (4,0,4) changes objects in the right-front area.
Full convolution ties the latent space to spatial context.
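The per-dimension noise test can be reproduced by perturbing a single spatial cell of the latent code (the function name and the channel-wise perturbation are assumptions for illustration):

```python
import numpy as np

def perturb_latent(z, pos, sigma=1.0, seed=None):
    """Add Gaussian noise at one spatial cell of a (5, 3, 5, 16) latent code.

    pos: (i, j, k) spatial index such as (2, 0, 4); all 16 channels at
    that cell are perturbed, and the rest of the code is left untouched.
    """
    rng = np.random.default_rng(seed)
    z_noisy = z.copy()
    z_noisy[pos] = z_noisy[pos] + rng.normal(0.0, sigma, size=z.shape[-1])
    return z_noisy
```

Decoding the perturbed code and diffing against the original reconstruction localizes the affected region, as in the red maps above.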
Suggestions of Future Work
・Revise the dataset
The dataset is extremely sparse and highly varied. Floors and small objects appear at a huge variety of positions, and some small parts, such as chair legs, broke apart during downsizing. This makes the latent space very hard to predict. Revising the dataset, for example by limiting the variety or adjusting object positions, is therefore important future work.
・Redefine the latent space
In this work, the latent space is a single space that encodes all information, such as the shapes and positions of every object. As a result, some small objects disappeared from the generated models and many unrealistic objects were generated. Redefining the latent space, for example by separating it into per-object and layout components, is important future work, although that would require handling a larger variety of objects and explicitly accounting for multiple objects.