The document discusses Nervana's approach to building hardware optimized for deep learning. It describes Nervana's tensor processing unit (TPU), which provides unprecedented compute density, a scalable distributed architecture, memory placed close to the compute, and high power efficiency. The TPU is designed around the characteristics of deep learning workloads, which are dominated by dense linear algebra, have predictable data-access patterns, and tolerate reduced numerical precision, and it is claimed to deliver 10-100x performance gains over GPUs. Nervana is also developing software, such as its Neon deep learning library, and cloud services to make deep learning more accessible and efficient.
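To make the software side concrete, the sketch below shows what training a small network with Neon looks like. It is a minimal example, not the document's own code: it assumes the API of neon ~2.x (module paths such as neon.backends.gen_backend and classes such as Affine and GeneralizedCost follow the library's published MNIST example), and the dataset path, layer sizes, and hyperparameters are illustrative choices.

```python
# Minimal sketch of training an MLP with Nervana's neon library.
# Assumes neon ~2.x; module paths may differ across versions, and
# the hyperparameters below are illustrative, not tuned.
from neon.backends import gen_backend
from neon.callbacks.callbacks import Callbacks
from neon.data import MNIST
from neon.initializers import Gaussian
from neon.layers import Affine, GeneralizedCost
from neon.models import Model
from neon.optimizers import GradientDescentMomentum
from neon.transforms import CrossEntropyMulti, Rectlin, Softmax

# Select a backend; 'gpu' would exercise neon's hand-tuned kernels
# if a supported GPU is available.
be = gen_backend(backend='cpu', batch_size=128)

# MNIST iterators shipped with neon (download path is illustrative).
mnist = MNIST(path='data')
train_set = mnist.train_iter
valid_set = mnist.valid_iter

# Two fully connected layers: a ReLU hidden layer and a softmax output.
layers = [
    Affine(nout=100, init=Gaussian(scale=0.01), activation=Rectlin()),
    Affine(nout=10, init=Gaussian(scale=0.01), activation=Softmax()),
]
model = Model(layers=layers)

cost = GeneralizedCost(costfunc=CrossEntropyMulti())
optimizer = GradientDescentMomentum(learning_rate=0.1, momentum_coef=0.9)
callbacks = Callbacks(model, eval_set=valid_set)

# Train for a few epochs, evaluating on the validation set via callbacks.
model.fit(train_set, optimizer=optimizer, num_epochs=5,
          cost=cost, callbacks=callbacks)
```

The backend selection in gen_backend is the point of contact with the hardware story above: the same model description runs on CPU, GPU, or, in principle, custom silicon, with the backend supplying the optimized kernels.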