RENDERING BATTLEFIELD 4
WITH MANTLE
Johan Andersson – Electronic Arts
2
3
DX11:   Avg: 78 fps,  Min: 42 fps
Mantle: Avg: 120 fps, Min: 94 fps (+58%!)
Core i7-3970X, AMD Radeon R9 290X, 1080p ULTRA
4
BF4 MANTLE GOALS
Goals:
– Significantly improve CPU performance
– More consistent & stable performance
– Improve GPU performance where possible
– Add support for a new Mantle rendering
backend in a live game
 Minimize changes to engine interfaces
 Compatible with built PC content
– Work on wide set of hardware
 APU to quad-GPU
 But x64 only (32-bit Windows needs to die)
Non-goals:
– Design new renderer from scratch for Mantle
– Take advantage of asymmetric MGPU
(APU+discrete)
– Optimize video memory consumption
5
BF4 MANTLE STRATEGIC GOALS
 Prove that low-level graphics APIs work outside of consoles
 Push the industry towards low-level graphics APIs everywhere
 Build a foundation for the future that we can build great games on
6
SHADERS
7
SHADERS
 Shader resource bind points replaced with a resource table object - descriptor set
– This is how the hardware accesses the shader resources
– Flat list of images, buffers and samplers used by any of the shader stages
– Vertex shader streams converted to vertex shader buffer loads
 Engine assigns each shader resource to a specific slot in the descriptor set(s)
– Can share slots between shader stages = smaller descriptor sets
– The mapping takes a while to wrap one’s head around
8
SHADER CONVERSION
 DX11 bytecode shaders get converted to AMDIL & the mapping is applied using the ILC tool
– Done at load time
– Don’t have to change our shaders!
 Have full source & control over the process
 Could write AMDIL directly or use other frontends if wanted
9
DESCRIPTOR SETS
 Very simple usage in BF4: for each draw call write flat list of resources
– Essentially a direct replacement of SetTexture/SetConstantBuffer/SetInputStream
 Single dynamic descriptor set object per frame
 Sub-allocate for each draw call and write list of resources
 ~15000 resource slots written per frame in BF4, still very fast
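A minimal sketch of the per-draw sub-allocation described above: one large dynamic descriptor pool per frame, where each draw call grabs a contiguous range of slots and writes its flat resource list. All names (FrameDescriptorPool, ResourceSlot) are hypothetical illustrations, not the actual Frostbite or Mantle API.

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical flat resource slot, mirroring the "flat list of images,
// buffers and samplers" described above.
struct ResourceSlot { uint64_t gpuHandle; uint32_t offset; uint32_t type; };

// One large dynamic descriptor set per frame; each draw call sub-allocates a
// contiguous range of slots and writes its resources into it.
class FrameDescriptorPool {
public:
    explicit FrameDescriptorPool(size_t slotCount)
        : slots_(slotCount), cursor_(0) {}

    // Called once per frame before command buffer generation starts.
    void reset() { cursor_.store(0, std::memory_order_relaxed); }

    // Sub-allocate `count` slots for one draw call; thread-safe bump allocation.
    // Returns the base slot index the draw call binds, or -1 if the pool is full.
    int64_t allocate(const ResourceSlot* resources, uint32_t count) {
        size_t base = cursor_.fetch_add(count, std::memory_order_relaxed);
        if (base + count > slots_.size())
            return -1;  // caller falls back to a separately allocated set
        std::memcpy(&slots_[base], resources, count * sizeof(ResourceSlot));
        return static_cast<int64_t>(base);
    }

private:
    std::vector<ResourceSlot> slots_;  // backed by CPU-visible GPU memory in practice
    std::atomic<size_t> cursor_;
};
```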
10
DESCRIPTOR SETS
11
DESCRIPTOR SETS – FUTURE OPTIMIZATIONS
 Use static descriptor sets when possible
 Reduce resource duplication by reusing & sharing more across shader stages
 Nested descriptor sets
12
COMPUTE PIPELINES
 1:1 mapping between pipeline & shader
 No state built into pipeline
 Can execute in parallel with rendering
 ~100 compute pipelines in BF4
13
GRAPHICS PIPELINES
 All graphics shader stages are combined into a single pipeline object together with important graphics state
 ~10000 graphics pipelines in BF4 on a single level, ~25 MB of video memory
 Could use smaller working pool of active state objects to keep reasonable amount in memory
– Have not been required for us
14
PRE-BUILDING PIPELINES
 Graphics pipeline creation is an expensive operation, do it at load time instead of at runtime!
– Creating one of our graphics pipelines takes ~10-60 ms each
– Pre-build using N parallel low-priority jobs
– Avoid 99.9% of runtime stalls caused by pipeline creation!
 Requires knowing the graphics pipeline state that will be used with the shaders
– Primitive type
– Render target formats
– Render target write masks
– Blend modes
 Not fully trivial to know all state, may require engine changes / pre-defining use cases
– Important to design for!
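A sketch of the pre-building step: capture the state listed above in a description and build all pipelines with N parallel jobs at load time. PipelineDesc, Pipeline and buildPipeline are hypothetical stand-ins for engine/driver calls, and std::async stands in for a real low-priority job system.

```cpp
#include <cstdint>
#include <functional>
#include <future>
#include <vector>

// Hypothetical description of the state that has to be known up front,
// matching the list above (primitive type, RT formats, write masks, blend modes).
struct PipelineDesc {
    uint64_t shaderHash;
    uint32_t primitiveType;
    uint32_t renderTargetFormats[8];
    uint8_t  renderTargetWriteMasks[8];
    uint32_t blendModes[8];
};

struct Pipeline;   // opaque driver-side object

// Assumed engine wrapper around driver pipeline creation (the expensive
// ~10-60 ms call); stubbed here so the sketch is self-contained.
Pipeline* buildPipeline(const PipelineDesc& /*desc*/) { return nullptr; }

// Pre-build all pipelines known for a level using N parallel jobs.
std::vector<Pipeline*> prebuildPipelines(const std::vector<PipelineDesc>& descs) {
    std::vector<std::future<Pipeline*>> jobs;
    jobs.reserve(descs.size());
    for (const PipelineDesc& d : descs)
        jobs.emplace_back(std::async(std::launch::async, buildPipeline, std::cref(d)));

    std::vector<Pipeline*> pipelines;
    pipelines.reserve(descs.size());
    for (auto& j : jobs)
        pipelines.push_back(j.get());   // blocking here only happens at load time
    return pipelines;
}
```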
15
PIPELINE CACHE
 Cache built pipelines both in memory cache and disk cache
– Improved loading times
– Max 300 MB
– Simple LRU policy
– LZ4 compressed (free)
 Database signature:
– Driver version
– Vendor ID
– Device ID
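A minimal sketch of the cache signature described above: the driver version, vendor ID and device ID invalidate the whole cache, while individual entries are keyed by a hash of the pipeline description. The hashing scheme is illustrative, not the engine's actual one.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical cache signature: if any of these change, the whole
// on-disk pipeline cache is considered invalid.
struct PipelineCacheSignature {
    std::string driverVersion;
    uint32_t    vendorId;
    uint32_t    deviceId;

    bool operator==(const PipelineCacheSignature& o) const {
        return driverVersion == o.driverVersion &&
               vendorId == o.vendorId && deviceId == o.deviceId;
    }
};

// Per-entry key: the signature-validated cache is indexed by a hash of the
// pipeline description (shaders + state).
inline uint64_t pipelineCacheKey(uint64_t pipelineDescHash,
                                 const PipelineCacheSignature& sig) {
    uint64_t h = pipelineDescHash;
    h ^= std::hash<std::string>{}(sig.driverVersion) + 0x9e3779b97f4a7c15ull +
         (h << 6) + (h >> 2);
    h ^= ((uint64_t(sig.vendorId) << 32) | sig.deviceId) + 0x9e3779b97f4a7c15ull +
         (h << 6) + (h >> 2);
    return h;
}
```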
16
MEMORY
17
MEMORY MANAGEMENT
 Mantle devices expose multiple memory heaps with different characteristics
– These can differ between devices, drivers and OSes
 User explicitly places resources in the wanted heaps
– Driver suggests preferred heaps when creating objects, not a requirement
Type    Size      Page   CPU access                                               GPU Read  GPU Write  CPU Read  CPU Write
Local     256 MB  65535  CpuVisible|CpuGpuCoherent|CpuUncached|CpuWriteCombined   130       170        0.0058    2.8
Local    4096 MB  65535  –                                                        130       180        0         0
Remote  16106 MB  65535  CpuVisible|CpuGpuCoherent|CpuUncached|CpuWriteCombined   2.6       2.6        0.1       3.3
Remote  16106 MB  65535  CpuVisible|CpuGpuCoherent                                2.6       2.6        3.2       2.9
18
FROSTBITE MEMORY HEAPS
 System Shared Mapped
– CPU memory that is GPU visible.
– Write combined & persistently mapped = easy
& fast to write to in parallel at any time
 System Shared Pinned
– CPU cached for readback.
– Not used much
 Video Shared
– GPU memory accessible by CPU. Used for
descriptor sets and dynamic buffers
– Max 256 MB (legacy constraint)
– Avoid keeping persistently mapped as WDDM doesn’t like this and can decide to move it back to CPU memory
 Video Private
– GPU private memory.
– Used for render targets, textures and other
resources CPU does not need to access
19
MEMORY REFERENCES
 WDDM needs to know which memory allocations are referenced for each command buffer
– In order to make sure they are resident and not paged out
– Max ~1700 memory references are supported
– Overhead with having lots of references
 Engine needs to keep track of what memory is referenced while building the command buffers
– Easy & fast to do
– Each reference is either read-only or read/write
– We use a simple global list of references shared for all command buffers.
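A sketch of the simple global reference list mentioned above: handles are deduplicated and read-only references are upgraded to read/write when needed, which keeps the total count well under the ~1700 limit. Types and names are hypothetical.

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>
#include <utility>
#include <vector>

using GpuMemoryHandle = uint64_t;   // hypothetical handle to a memory allocation

enum class MemAccess : uint8_t { ReadOnly, ReadWrite };

// Simple global list of memory references shared for all command buffers in a
// frame, as described above.
class MemoryReferenceList {
public:
    void add(GpuMemoryHandle mem, MemAccess access) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = refs_.find(mem);
        if (it == refs_.end())
            refs_.emplace(mem, access);
        else if (access == MemAccess::ReadWrite)
            it->second = MemAccess::ReadWrite;   // upgrade, never downgrade
    }

    // Flattened list handed to the submit call alongside the command buffers.
    std::vector<std::pair<GpuMemoryHandle, MemAccess>> snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return {refs_.begin(), refs_.end()};
    }

    void clear() {
        std::lock_guard<std::mutex> lock(mutex_);
        refs_.clear();
    }

private:
    mutable std::mutex mutex_;
    std::unordered_map<GpuMemoryHandle, MemAccess> refs_;
};
```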
20
MEMORY POOLING
 Pooling memory allocations was required for us
– Sub-allocate within larger 1 – 32 MB chunks
– All resources store a memory handle + offset
– Not as elegant as just void* on consoles
– Fragmentation can be a concern, not too many issues for us in practice
 GPU virtual memory mapping is fully supported, can simplify & optimize management
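A sketch of the handle + offset scheme described above, assuming a trivial bump sub-allocator over 1-32 MB chunks; a real pool also handles frees and fragmentation. allocateGpuChunk is an assumed engine/driver wrapper (declaration only) and all names are illustrative.

```cpp
#include <cstdint>
#include <vector>

using GpuMemoryHandle = uint64_t;                 // hypothetical allocation handle
GpuMemoryHandle allocateGpuChunk(uint64_t size);  // assumed engine/driver wrapper

// Every resource stores a (handle, offset) pair instead of a raw pointer.
struct SubAllocation { GpuMemoryHandle memory; uint64_t offset; };

// Trivial bump sub-allocator over large chunks.
class MemoryPool {
public:
    explicit MemoryPool(uint64_t chunkSize = 32ull << 20) : chunkSize_(chunkSize) {}

    SubAllocation allocate(uint64_t size, uint64_t alignment) {
        uint64_t offset = (cursor_ + alignment - 1) & ~(alignment - 1);
        if (chunks_.empty() || offset + size > chunkSize_) {
            chunks_.push_back(allocateGpuChunk(chunkSize_));  // start a new chunk
            offset = 0;
        }
        cursor_ = offset + size;
        return { chunks_.back(), offset };
    }

private:
    uint64_t chunkSize_;
    uint64_t cursor_ = 0;
    std::vector<GpuMemoryHandle> chunks_;
};
```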
21
OVERCOMMITTING VIDEO MEMORY
 Avoid overcommitting video memory!
– Will lead to severe stalls as VidMM moves memory blocks back and forth
– VidMM is a black box
– One of the biggest issues we ran into during development
 Recommendations
– Balance memory pools
– Make sure to use read-only memory references
– Use memory priorities
22
MEMORY PRIORITIES
 Setting priorities on the memory allocations helps VidMM choose what to page out when it has to
 5 priority levels
– Very high = Render targets with MSAA
– High = Render targets and UAVs
– Normal = Textures
– Low = Shader & constant buffers
– Very low = vertex & index buffers
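A direct translation of the mapping above into code; the enum names and values are illustrative, not the actual Mantle priority enum.

```cpp
#include <cstdint>

// The five priority levels listed above (illustrative names).
enum class MemoryPriority : uint8_t { VeryLow, Low, Normal, High, VeryHigh };

enum class ResourceKind : uint8_t {
    RenderTargetMsaa, RenderTarget, Uav, Texture,
    ShaderBuffer, ConstantBuffer, VertexBuffer, IndexBuffer
};

// Maps a resource category to the priority VidMM uses when it has to page out.
inline MemoryPriority priorityFor(ResourceKind kind) {
    switch (kind) {
        case ResourceKind::RenderTargetMsaa:
            return MemoryPriority::VeryHigh;
        case ResourceKind::RenderTarget:
        case ResourceKind::Uav:
            return MemoryPriority::High;
        case ResourceKind::Texture:
            return MemoryPriority::Normal;
        case ResourceKind::ShaderBuffer:
        case ResourceKind::ConstantBuffer:
            return MemoryPriority::Low;
        default:  // vertex & index buffers
            return MemoryPriority::VeryLow;
    }
}
```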
23
MEMORY RESIDENCY FUTURE
 For best results manage which resources are in video memory yourself & keep only ~80% used
– Avoid all stalls
– Can async DMA in and out
 We are thinking of redesigning to fully avoid possibility of overcommitting
 Hoping WDDM’s memory residency management can be simplified & improved in the future
24
RESOURCE MANAGEMENT
25
RESOURCE LIFETIMES
 App manages lifetime of all resources
– Have to make sure GPU is not using an object or memory while we are freeing it on the CPU
– How we’ve always worked with GPUs on the consoles
– Multi-GPU adds some additional complexity that consoles do not have
 We keep track of lifetimes on a per frame granularity
– Queues for object destruction & free memory operations
– Add to queue at any time on the CPU
– Process queues when GPU command buffers for the frame are done executing
– Tracked with command buffer fences
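A sketch of per-frame destruction queues gated by command buffer fences, as described above. GpuFence and isFenceSignaled are assumed engine/driver wrappers; the frames-in-flight count is an assumption, and a production version would handle the case where an old frame's fence has not yet signaled when its slot is reused.

```cpp
#include <cstdint>
#include <functional>
#include <mutex>
#include <vector>

using GpuFence = uint64_t;              // hypothetical fence handle
bool isFenceSignaled(GpuFence fence);   // assumed engine/driver query

// Per-frame queues of destruction & free-memory operations, processed only
// once the command buffers for that frame have finished executing on the GPU.
class DeferredDeleteQueue {
public:
    // Add to queue at any time, from any CPU thread.
    void push(std::function<void()> destroyOp) {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_[currentFrame_ % kFrameCount].ops.push_back(std::move(destroyOp));
    }

    // Called once per frame after submission; `frameFence` signals when this
    // frame's command buffers are done.
    void endFrame(GpuFence frameFence) {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_[currentFrame_ % kFrameCount].fence = frameFence;
        ++currentFrame_;
        for (Frame& f : frames_) {                    // flush finished frames
            if (!f.ops.empty() && isFenceSignaled(f.fence)) {
                for (auto& op : f.ops) op();
                f.ops.clear();
            }
        }
    }

private:
    static constexpr int kFrameCount = 3;             // frames in flight (assumption)
    struct Frame { GpuFence fence = 0; std::vector<std::function<void()>> ops; };
    std::mutex mutex_;
    Frame frames_[kFrameCount];
    uint64_t currentFrame_ = 0;
};
```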
26
LINEAR FRAME ALLOCATOR
 We use multiple linear allocators with Mantle for both transient buffers & images
– Used for huge amounts of small constant data and other GPU frame data that the CPU writes
– Easy to use and very low overhead
– Don’t have to care about lifetimes or state
 Fixed memory buffers for each frame
– Super cheap sub-allocation from any thread
– If full, use heap allocation (also fast due to pooling)
 Alternative: ring buffers
– Requires being able to stall & drain pipeline at any allocation if full, additional complexity for us
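A minimal sketch of the fixed-size linear frame allocator described above: sub-allocation is a single atomic add, so it is cheap and safe from any thread, and the caller falls back to the pooled heap path when the buffer is full. Names are illustrative.

```cpp
#include <atomic>
#include <cstdint>

// Fixed CPU-writable buffer per frame; reset once per frame.
class LinearFrameAllocator {
public:
    LinearFrameAllocator(uint8_t* base, uint64_t size) : base_(base), size_(size) {}

    void reset() { cursor_.store(0, std::memory_order_relaxed); }

    // Returns a CPU pointer + GPU offset for transient data, or nullptr when
    // full (caller then falls back to a heap allocation).
    uint8_t* allocate(uint64_t size, uint64_t alignment, uint64_t* outGpuOffset) {
        // Reserve enough for worst-case alignment padding with one atomic add.
        uint64_t start = cursor_.fetch_add(size + alignment, std::memory_order_relaxed);
        uint64_t offset = (start + alignment - 1) & ~(alignment - 1);
        if (offset + size > size_)
            return nullptr;
        *outGpuOffset = offset;
        return base_ + offset;
    }

private:
    uint8_t* base_;                  // persistently mapped System Shared memory
    uint64_t size_;
    std::atomic<uint64_t> cursor_{0};
};
```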
27
TILING
 Textures should be tiled for performance
– Explicitly handled in Mantle, user selects linear or tiled
– Some formats (BC) can’t be accessed as linear by the GPU
 On consoles we handle tiling offline as part of our data processing pipeline
– We know the exact tiling formats and have separate resources per platform
 For Mantle
– Tiling formats are opaque, can be different between GPU architectures and image types
– Tile textures with DMA image upload from SystemShared to VideoPrivate
 Linear source, tiled destination
 Free
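A sketch of that upload path: the CPU fills a linear staging image in System Shared memory and the DMA queue copies it into a tiled Video Private image, so the tiling happens as part of the copy. All functions here are assumed engine wrappers (declarations only); the real Mantle entry points differ.

```cpp
#include <cstdint>

using ImageHandle = uint64_t;   // hypothetical
struct CommandBuffer;           // hypothetical DMA-queue command buffer

// Assumed engine wrappers (declarations only).
ImageHandle createLinearImage(uint32_t w, uint32_t h, uint32_t format);  // System Shared
ImageHandle createTiledImage(uint32_t w, uint32_t h, uint32_t format);   // Video Private
void writeLinearImage(ImageHandle dst, const void* texels);              // CPU memcpy into mapped memory
void cmdCopyLinearToTiled(CommandBuffer& dmaCb, ImageHandle src, ImageHandle dst);

// Upload-path sketch: linear source, tiled destination; the copy on the DMA
// queue does the tiling without occupying the universal graphics queue.
ImageHandle uploadTexture(CommandBuffer& dmaCb, const void* texels,
                          uint32_t w, uint32_t h, uint32_t format) {
    ImageHandle staging = createLinearImage(w, h, format);
    writeLinearImage(staging, texels);
    ImageHandle gpuImage = createTiledImage(w, h, format);
    cmdCopyLinearToTiled(dmaCb, staging, gpuImage);
    return gpuImage;   // staging is freed once the DMA fence signals
}
```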
28
COMMAND BUFFERS
29
COMMAND BUFFERS
 Command buffers are the atomic unit of work dispatched to the GPU
– Separate creation from execution
– No “immediate context” a la DX11 that can execute work at any call
– Makes resource synchronization and setup significantly easier & faster
 Typical BF4 scenes have around ~50 command buffers per frame
– Reasonable tradeoff for us with submission overhead vs CPU load-balancing
30
COMMAND BUFFER SOURCES
 Frostbite has 2 separate sources of command buffers
– World rendering
 Rendering the world with tons of objects, lots of draw calls. Have all frame data up front
 All resources except for render targets are read-only
 Generated in parallel up front each frame
– Immediate rendering (“the rest”)
 Setting up rendering and doing lighting, post-fx, virtual texturing, compute, etc
 Managing resource state, memory and running on different queues (graphics, compute, DMA)
 Sequentially generated in a single job, simulate an immediate context by splitting the command buffer
 Both are very important and have different requirements
31
RESOURCE TRANSITIONS
 Key design in Mantle to significantly lower driver overhead & complexity
– Explicit hazard tracking by the app/engine
– Drives architecture-specific caches & compression
– AMD: FMASK, CMASK, HTILE
– Enables explicit memory management
 Examples:
– Optimal render target writes → Graphics shader read-only
– Compute shader write-only → DrawIndirect arguments
 Mantle has a strong validation layer that tracks transitions which is a major help
32
MANAGING RESOURCE TRANSITIONS
 Engines need a clear design on how to handle state transitions
 Multiple approaches possible:
– Sequential in-order command buffers
 Generate one command buffer at a time, in order
 Transition resources on-demand when doing operation on them, very simple
 Recommendation: start with this
– Out-of-order multiple command buffers
 Track state per command buffer, fix up transitions when order of command buffers is known
– Hybrid approaches & more
33
MANAGING RESOURCE TRANSITIONS IN FROSTBITE
 Current approach in Frostbite is quite basic:
– We keep track of a single state for each resource (not subresource)
– The “immediate rendering” path transitions resources as needed depending on the operation
– The out-of-order “world rendering” command buffers don’t need to transition states
 Already have write access to MRTs and read-access to all resources set up outside them
 Avoids the problem of them not knowing the state during generation
 Works now but as we do more general parallel rendering it will have to change
– Track resource state for each command buffer & fixup between command buffers
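A sketch of the on-demand approach described above: a single tracked state per resource (not per subresource), transitioned only when the requested state differs from the current one. cmdTransition is an assumed wrapper around the API's transition command; all names are illustrative.

```cpp
#include <cstdint>
#include <unordered_map>

enum class ResourceState : uint8_t {
    Undefined, RenderTargetWrite, ShaderReadOnly, UnorderedAccess,
    IndirectArgs, CopySource, CopyDest
};

using ResourceHandle = uint64_t;   // hypothetical
struct CommandBuffer;              // hypothetical

// Assumed wrapper around the API's state-transition command (declaration only).
void cmdTransition(CommandBuffer& cb, ResourceHandle res,
                   ResourceState oldState, ResourceState newState);

// Single tracked state per resource, transitioned on demand by the
// "immediate rendering" path before each operation.
class ResourceStateTracker {
public:
    void require(CommandBuffer& cb, ResourceHandle res, ResourceState newState) {
        ResourceState& current = states_[res];   // defaults to Undefined
        if (current != newState) {
            cmdTransition(cb, res, current, newState);
            current = newState;
        }
    }

private:
    std::unordered_map<ResourceHandle, ResourceState> states_;
};
```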
34
DYNAMIC STATE OBJECTS
 Graphics state is only set with the pipeline object and 5 dynamic state objects
– State objects: color blend, raster, viewport, depth-stencil, MSAA
– No other parameters such as in DX11 with stencil ref or SetViewport functions
 Frostbite use case:
– Pre-create when possible
– Otherwise on-demand creation (hash map)
– Only ~100 state objects!
 Still possible to end up with lots of state objects
– Esp. with state object float & integer values (depth bounds, depth bias, viewport)
– But no need to store all permutations in memory, objects are fast to create & app manages lifetimes
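A sketch of the on-demand creation with a hash map mentioned above, using a raster state as the example; the description struct, hash and createRasterState wrapper are illustrative assumptions, not the actual Mantle state objects.

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative raster-state description; the real dynamic state objects are
// color blend, raster, viewport, depth-stencil and MSAA.
struct RasterStateDesc {
    uint32_t cullMode;
    uint32_t fillMode;
    float    depthBias;
    float    slopeScaledDepthBias;
};

struct RasterStateObject;                                      // opaque driver object
RasterStateObject* createRasterState(const RasterStateDesc&);  // assumed wrapper

// Pre-create common states, hash-and-create the rest when first seen.
class RasterStateCache {
public:
    RasterStateObject* get(const RasterStateDesc& desc) {
        uint64_t key = hash(desc);
        auto it = cache_.find(key);
        if (it == cache_.end())
            it = cache_.emplace(key, createRasterState(desc)).first;
        return it->second;
    }

private:
    static uint64_t hash(const RasterStateDesc& d) {
        uint64_t h = 1469598103934665603ull;                   // FNV-1a over the bytes
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&d);
        for (size_t i = 0; i < sizeof(d); ++i) { h ^= p[i]; h *= 1099511628211ull; }
        return h;
    }
    std::unordered_map<uint64_t, RasterStateObject*> cache_;
};
```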
35
QUEUES
36
QUEUES
 The universal queue can do graphics, compute and presents
 We also use additional queues to parallelize GPU operations:
– DMA queue – Improves perf with faster transfers & avoids idling graphics while transferring
– Compute queue – Improves perf by utilizing idle ALU and updating resources simultaneously with gfx
 More GPUs = more queues!
37
QUEUES SYNCHRONIZATION
 Order of execution within a queue is sequential
 Synchronize multiple queues with GPU semaphores (signal & wait)
 Also works across multiple GPUs
[Diagram: graphics and compute queues synchronized via semaphore signal (S) and wait (W)]
38
QUEUES SYNCHRONIZATION CONT
 Started out with explicit semaphores
– Error prone to handle when having lots of different semaphores & queues
– Difficult to visualize & debug
 Switched to a representation more similar to a job graph
 Just a model on top of the semaphores
39
GPU JOB GRAPH
 Each GPU job has a list of dependencies (other command buffers)
 Dependencies have to finish before the job can run on its queue
 The dependencies can be from any queue
 Was easier to work with, debug and visualize
 Really extendable going forward
[Diagram: example GPU job graph – Graphics 1, DMA and Compute jobs feeding into Graphics 2]
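A sketch of lowering such a job graph onto the underlying semaphore model: wait on each dependency's semaphore, submit, then signal the job's own semaphore. The queue/semaphore functions are assumed engine wrappers (declarations only) and the jobs are assumed to be topologically ordered.

```cpp
#include <cstdint>
#include <vector>

enum class QueueType : uint8_t { Universal, Compute, Dma };

using CommandBufferHandle = uint64_t;   // hypothetical
using Semaphore = uint64_t;             // hypothetical

// Each GPU job is a command buffer on some queue plus a list of jobs that
// must finish first; the dependencies may live on any queue.
struct GpuJob {
    CommandBufferHandle commandBuffer;
    QueueType           queue;
    std::vector<int>    dependencies;   // indices of other jobs
};

// Assumed engine wrappers over semaphore signal/wait and submission.
Semaphore createSemaphore();
void queueWait(QueueType queue, Semaphore sem);
void queueSubmit(QueueType queue, CommandBufferHandle cb);
void queueSignal(QueueType queue, Semaphore sem);

// Lower the job graph onto semaphores; `jobs` must be topologically ordered.
inline void submitJobGraph(const std::vector<GpuJob>& jobs) {
    std::vector<Semaphore> done(jobs.size());
    for (size_t i = 0; i < jobs.size(); ++i) {
        for (int dep : jobs[i].dependencies)
            queueWait(jobs[i].queue, done[dep]);       // wait on dependencies (any queue)
        queueSubmit(jobs[i].queue, jobs[i].commandBuffer);
        done[i] = createSemaphore();
        queueSignal(jobs[i].queue, done[i]);           // let later jobs depend on this one
    }
}
```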
40
ASYNC DMA
 AMD GPUs have dedicated hardware DMA engines, let’s use them!
– Uploading through DMA is faster than on universal queue, even if blocking
– DMA has alignment restrictions, have to support falling back to copies on the universal queue
 Use case: Frame buffer & texture uploads
– Used by resource initial data uploads and our UpdateSubresource
– Guaranteed to be finished before the GPU universal queue starts rendering the frame
 Use case: Multi-GPU frame buffer copy
– Peer-to-peer copy of the frame buffer to the GPU that will present it
41
ASYNC COMPUTE
 Frostbite has lots of compute shader passes that could run in parallel with graphics work
– HBAO, blurring, classification, tile-based lighting, etc
 Running as async compute can improve GPU performance by utilizing “free” ALU
– For example while doing shadowmap rendering (ROP bound)
42
ASYNC COMPUTE – TILE-BASED LIGHTING
 3 sequential compute shaders
– Input: zbuffer & gbuffer
– Output: HDR texture/UAV
 Runs in parallel with graphics pipeline that renders to other targets
[Diagram: compute queue (Cull lights, Lighting) running in parallel with the graphics queue (TileZ, Gbuffer, Shadowmaps, Reflection, Distort, Transp), synchronized with semaphore signal (S) and wait (W)]
43
ASYNC COMPUTE – TILE-BASED LIGHTING
 We manually prepare the resources for the async compute
– Important to not access the resources on other queues at the same time (unless read-only state)
– Have to transition resources on the queue that last used it
 Up to 80% faster in our initial tests, but not fully reliable
– But is a pretty small part of the frame time
– Not in BF4 yet
[Diagram: same tile-based lighting timeline as on the previous slide]
44
MULTI-GPU
45
MULTI-GPU
 Multi-GPU alternatives:
– AFR – Alternate Frame Rendering (1-4 GPUs of the same power)
– Heterogeneous AFR – 1 small + 1 big GPU (APU + Discrete)
– SFR – Split Frame Rendering
– Multi-GPU Job Graph – Primary strong GPU + slave GPUs helping
 Frostbite supports AFR natively
– No synchronization points within the frame
– For resources that are not rendered every frame: re-render resources for each GPU
 Example: sky envmap update on weather change
 With Mantle multi-GPU is explicit and we have to build support for it ourselves
46
MULTI-GPU AFR WITH MANTLE
 All resources explicitly duplicated on each GPU with async DMA
– Hidden internally in our rendering abstraction
 Every frame we alternate which GPU we build command buffers for and use resources from
 Our UpdateSubresource has to make sure it updates resources on all GPUs
 Presenting to the screen has to, in some modes, copy the frame buffer to the GPU that owns the display
 Bonus:
– Can simulate multi-GPU mode even with single GPU!
– Multi-GPU works in windowed mode!
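A sketch of the per-GPU resource duplication described above: each logical resource holds one copy per GPU, the frame index selects which copy the current frame's command buffers use, and UpdateSubresource-style writes are broadcast to every copy. Names are hypothetical.

```cpp
#include <cstdint>
#include <vector>

using ResourceHandle = uint64_t;   // hypothetical per-GPU resource handle

// One copy of each resource per GPU, alternated every frame (AFR).
class AfrResource {
public:
    explicit AfrResource(std::vector<ResourceHandle> perGpuCopies)
        : copies_(std::move(perGpuCopies)) {}

    // Which copy the current frame's command buffers reference.
    ResourceHandle forFrame(uint64_t frameIndex) const {
        return copies_[frameIndex % copies_.size()];
    }

    // Updates have to touch the copy on every GPU.
    template <typename UploadFn>   // UploadFn(ResourceHandle copy, int gpuIndex)
    void updateAllGpus(UploadFn&& upload) const {
        for (size_t gpu = 0; gpu < copies_.size(); ++gpu)
            upload(copies_[gpu], static_cast<int>(gpu));
    }

private:
    std::vector<ResourceHandle> copies_;   // a single copy also works: simulates multi-GPU on one GPU
};
```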
47
MULTI-GPU ISSUES
 GPUs are independently rendering & presenting to the screen – can cause micro-stuttering
– Frames are not presented at regular intervals
– Frame rate can be high but presentation & gameplay is not smooth
– FCAT is a good tool to analyse this
[Diagram: GPU0 and GPU1 presenting frames 0-3 independently, resulting in irregular presentation intervals]
48
MULTI-GPU ISSUES
 GPUs are independently rendering & presenting to the screen – can cause micro-stuttering
– Frames are not presented at regular intervals
– Frame rate can be high but presentation & gameplay is not smooth
– FCAT is a good tool to analyse this
 We need to introduce dependency & dampening between the GPUs to alleviate this – frame pacing
[Diagram: the same frame timeline, with presents spaced at the ideal presentation interval]
49
FRAME PACING
 Measure average frame rate on each GPU
– Short history (10-30 frames)
– Filter out spikes
 Insert delay on the GPU before each present
– Force the frame times to become more regular and GPUs to align
– Delay value is based on the calculated avg frame rate
[Diagram: a delay (D) inserted on GPU0 and GPU1 before each present, aligning frames 0-3 to regular presentation intervals]
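A sketch of the pacing heuristic outlined above: average the recent frame times with simple spike rejection and derive a delay to insert before each present. The history length matches the slide; the spike threshold and the half-frame target offset between GPUs are assumptions for illustration, not the shipped tuning.

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>

class FramePacer {
public:
    void addFrameTime(double frameMs) {
        history_.push_back(frameMs);
        if (history_.size() > kHistorySize)
            history_.pop_front();
    }

    // Average frame time, ignoring samples more than 2x the rough average
    // (simple spike rejection; threshold is an assumption).
    double averageFrameMs() const {
        if (history_.empty()) return 0.0;
        double rough = 0.0;
        for (double t : history_) rough += t;
        rough /= static_cast<double>(history_.size());

        double sum = 0.0; int count = 0;
        for (double t : history_)
            if (t < 2.0 * rough) { sum += t; ++count; }
        return count ? sum / count : rough;
    }

    // Delay inserted on the GPU before present so presents from the two GPUs
    // land roughly half an average frame apart (assumed target).
    double presentDelayMs(double msSinceOtherGpuPresent) const {
        double target = 0.5 * averageFrameMs();
        return std::max(0.0, target - msSinceOtherGpuPresent);
    }

private:
    static constexpr size_t kHistorySize = 30;   // "short history (10-30 frames)"
    std::deque<double> history_;
};
```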
50
CONCLUSION
51
MANTLE DEV RECOMMENDATIONS
 The validation layer is a critical friend!
 You’ll end up with a lot of object & memory management code, try to share it with console code
 Make sure you have control over memory usage and can avoid overcommitting video memory
 Build a robust solution for resource state management early
 Figure out how to pre-create your graphics pipelines, can require engine design changes
 Build for multi-GPU support from the start, easier than to retrofit
52
FUTURE
 Second wave of Frostbite Mantle titles
 Adapt Frostbite core rendering layer based on learnings from Mantle
– Refine binding & buffer updates to further reduce overhead
– Virtual memory management
– More async compute & async DMAs
– Multi-GPU job graph R&D
 Linux
– Would like to see how our Mantle renderer behaves with different memory management & driver model
53
QUESTIONS?
Email: johan@frostbite.com
Web: http://frostbite.com
Twitter: @repi