This document summarizes a research paper that analyzes deep networks using kernel methods. The paper advances two hypotheses: (1) representations in higher layers of a deep network are simpler and more accurate than those in lower layers, and (2) the network architecture controls how quickly these representations are formed from layer to layer. To test this, the researchers applied kernel principal component analysis (kernel PCA) to measure the simplicity and accuracy of the representation at each layer of deep networks trained on MNIST and CIFAR. Their experiments support both hypotheses and show that convolutional and pretrained networks build their representations more systematically than standard multilayer perceptrons.
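
As a rough illustration of the kind of layer-wise analysis described above, the sketch below projects one layer's activations onto its leading kernel principal components and checks how well a simple linear readout predicts the labels from only a few of them; a layer whose error drops quickly with few components holds a simpler, more accurate representation of the task. This is a minimal sketch assuming scikit-learn's KernelPCA and LogisticRegression; the function name layer_error_curve, the RBF kernel width, and the component counts are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of a layer-wise kernel PCA analysis (illustrative, not the
# paper's exact procedure). Assumes scikit-learn is available.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

def layer_error_curve(activations, labels, dims=(1, 2, 4, 8, 16, 32)):
    """Project one layer's activations onto its leading kernel principal
    components and report classification error as a function of the number
    of components kept."""
    errors = {}
    for d in dims:
        # Top-d kernel PCA directions of this layer's representation.
        kpca = KernelPCA(n_components=d, kernel="rbf", gamma=1e-3)
        z = kpca.fit_transform(activations)
        # Linear readout on the projected representation.
        clf = LogisticRegression(max_iter=1000).fit(z, labels)
        errors[d] = 1.0 - clf.score(z, labels)
    return errors

# Usage (hypothetical): compare error curves across layers of a trained network,
# where `layer_activations` maps layer names to activation matrices.
# for name, acts in layer_activations.items():
#     print(name, layer_error_curve(acts, labels))
```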