Regularization in deep learning refers to any modification of the learning algorithm intended to reduce generalization error, typically without reducing training error. Common regularization techniques for deep neural networks include:
1) Parameter norm penalties such as L2 (weight decay) and L1 regularization, which add a penalty on the magnitude of the network's weights to the loss function. This discourages large weights and encourages simpler models that tend to generalize better (see the L2 sketch after this list).
2) Early stopping, which returns the model parameters from the point of lowest validation error observed during training, rather than the parameters from the final iteration (a minimal loop is sketched below).
3) Data augmentation, which generates additional synthetic training examples through transformations such as translation, rotation, or flipping, improving robustness to input variation (an example pipeline follows).
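
As a minimal sketch of a parameter norm penalty, the following PyTorch snippet adds an L2 term to the task loss by hand. The model, data, and the coefficient `l2_lambda` are illustrative assumptions, not prescriptions:

```python
import torch
import torch.nn as nn

# Hypothetical model and data, purely for illustration.
model = nn.Linear(10, 1)
inputs, targets = torch.randn(32, 10), torch.randn(32, 1)
criterion = nn.MSELoss()

# L2 penalty: add lambda * (sum of squared weights) to the task loss.
l2_lambda = 1e-4  # assumed strength; tuned per problem in practice
task_loss = criterion(model(inputs), targets)
l2_penalty = sum((p ** 2).sum() for p in model.parameters())
loss = task_loss + l2_lambda * l2_penalty
loss.backward()
```

Equivalently, the `weight_decay` argument of optimizers such as `torch.optim.SGD` applies the same L2 penalty implicitly, which is the more common choice in practice.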
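
A minimal early-stopping loop might look like the following. Here `train_step`, `validate`, `patience`, and `max_epochs` are assumed placeholders standing in for a real training setup, not part of any library API:

```python
import copy

def train_with_early_stopping(model, train_step, validate,
                              max_epochs=100, patience=10):
    """Keep the parameters from the epoch with the lowest validation loss.

    `train_step(model)` runs one training epoch; `validate(model)`
    returns the current validation loss (both assumed callables).
    """
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(model)
        val_loss = validate(model)
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop
    model.load_state_dict(best_state)  # restore the best parameters
    return model
```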
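
For data augmentation on images, torchvision's transform pipeline is one common way to apply random translations and flips on the fly. The specific transforms and their parameters below are assumed choices for the sake of the example:

```python
from torchvision import transforms

# Each epoch, the loader sees a freshly translated/flipped variant
# of every image, effectively enlarging the training set.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # random shift up to 10%
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Typical usage: pass the pipeline as the dataset's transform, e.g.
# train_set = torchvision.datasets.CIFAR10(root="data", train=True,
#                                          transform=augment)
```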