Neural networks can serve as language models by assigning probabilities to sequences of words. Language models are evaluated by how well they predict upcoming words, typically measured with perplexity, where lower is better. Recurrent neural networks such as LSTMs are widely used for language modeling and outperform traditional n-gram models; state-of-the-art LSTMs have reached a perplexity of 23, compared to 41 for standard RNNs. Convolutional networks can also be applied to language modeling by operating on sequences of word embeddings. A minimal code sketch of an LSTM language model follows.
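The sketch below illustrates the idea in PyTorch: embed word ids, run them through an LSTM, project each hidden state onto the vocabulary to score the next word, and report perplexity as the exponential of the average cross-entropy loss. The vocabulary size, hyperparameters, and random token ids are hypothetical placeholders for a real corpus, and this is an illustrative sketch rather than any particular published model.

```python
import math
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embeddings
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)       # scores over the vocabulary

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        hidden_states, _ = self.lstm(self.embed(tokens))
        return self.proj(hidden_states)                      # (batch, seq_len, vocab_size)

vocab_size = 1000                                            # hypothetical vocabulary size
model = LSTMLanguageModel(vocab_size)
tokens = torch.randint(0, vocab_size, (8, 20))               # random ids standing in for real text

# Predict each next word from the preceding context.
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
print(f"perplexity: {math.exp(loss.item()):.1f}")            # near vocab_size for an untrained model
```

After training on a large corpus, the same perplexity computation on held-out text is what the figures quoted above (23 for strong LSTMs versus 41 for standard RNNs) refer to.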