Building Language Models in NLP
Last Updated: 10 May, 2024
Building language models is a fundamental task in natural language processing (NLP) that involves creating computational models capable of predicting the next word in a sequence of words. These models are essential for various NLP applications, such as machine translation, speech recognition, and text generation.
In this article, we will build a simple word-level language model using an LSTM network with TensorFlow and Keras.
What is a Language Model?
- A language model is a statistical model that is used to predict the probability of a sequence of words (see the toy sketch after this list).
- It learns the structure and patterns of a language from a given text corpus and can be used to generate new text that is similar to the original text.
- Language models are a fundamental component of many natural language processing (NLP) tasks, such as machine translation, speech recognition, and text generation.
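To make this idea concrete, here is a minimal bigram sketch (using a hypothetical toy corpus, separate from the LSTM model built below) that estimates the probability of a word given the previous word from raw counts:
# Toy bigram model: P(next word | previous word) estimated from counts
from collections import Counter, defaultdict

corpus = "i am doing well i am doing fine".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def bigram_prob(prev, nxt):
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][nxt] / total if total else 0.0

print(bigram_prob("i", "am"))        # 1.0 -- "i" is always followed by "am" in this corpus
print(bigram_prob("doing", "well"))  # 0.5 -- "doing" is followed by "well" or "fine"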
Steps to Build a Language Model in NLP
Below, we walk through the steps needed to build a simple LSTM-based language model.
Step 1: Importing Necessary Libraries
First, we import the libraries required for building the model.
import tensorflow as tf
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.models import Sequential
Step 2: Generate Sample Data
Next, we define a small sample text to train on.
text_data = "Hello, how are you? I am doing well. Thank you for asking."
Step 3: Preprocessing the Data
Preprocessing involves tokenizing the input text, creating n-gram input sequences, and padding the sequences so they all have the same length.
# Tokenize the text
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts([text_data])
total_words = len(tokenizer.word_index) + 1
# Create input sequences and labels
input_sequences = []
for line in text_data.split('.'):
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)
# Pad sequences for equal length
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = tf.keras.preprocessing.sequence.pad_sequences(input_sequences, maxlen=max_sequence_len, padding='pre')
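As a quick optional sanity check, you can inspect the vocabulary the tokenizer built and the first few padded n-gram sequences:
# Inspect the vocabulary and a few left-padded n-gram sequences
print(total_words)              # vocabulary size + 1 (index 0 is reserved for padding)
print(tokenizer.word_index)     # mapping from word to integer index
print(input_sequences[:3])      # first three padded sequences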
Step 4: One hot encoding
The input sequences are split into predictors (xs) and labels (ys). The labels are converted to one-hot encoding.
# Create predictors and label
xs, labels = input_sequences[:,:-1],input_sequences[:,-1]
# Convert labels to one-hot encoding
ys = tf.keras.utils.to_categorical(labels, num_classes=total_words)
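A quick shape check confirms that each row of xs is a left-padded context of max_sequence_len - 1 tokens and each row of ys is a one-hot vector over the vocabulary:
# Verify predictor and label shapes before training
print(xs.shape)   # (num_sequences, max_sequence_len - 1)
print(ys.shape)   # (num_sequences, total_words)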
Step 5: Defining and Compiling the Model
The following code defines and compiles a simple LSTM-based language model in Keras: an Embedding layer, an LSTM layer with 100 units, and a Dense output layer with a softmax over the vocabulary.
# Define the model
model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
model.add(LSTM(100))
model.add(Dense(total_words, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
history = model.fit(xs, ys, epochs=100, verbose=1)
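Optionally, you can print the layer stack and parameter counts, and check the final training accuracy recorded in the history object:
# Inspect the architecture and the final training accuracy
model.summary()
print(history.history['accuracy'][-1])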
Step 6: Generating Text
This generate_text function takes a seed_text as input and generates next_words additional words using the trained model and max_sequence_len.
def generate_text(seed_text, next_words, model, max_sequence_len):
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        token_list = tf.keras.preprocessing.sequence.pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
        predicted_probs = model.predict(token_list, verbose=0)[0]
        predicted_index = tf.argmax(predicted_probs, axis=-1).numpy()
        output_word = ""
        for word, index in tokenizer.word_index.items():
            if index == predicted_index:
                output_word = word
                break
        seed_text += " " + output_word
    return seed_text
# Generate text
print(generate_text("how", 5, model, max_sequence_len))
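The function above always picks the single most likely word (greedy decoding), which tends to produce repetitive output. A common alternative, sketched below as an optional extension rather than part of the original code, is to sample the next word from the predicted distribution using a temperature parameter:
import numpy as np

def sample_next_index(probs, temperature=1.0):
    # Rescale the predicted probabilities and sample instead of taking the argmax
    probs = np.asarray(probs, dtype='float64')
    logits = np.log(probs + 1e-9) / temperature
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return np.random.choice(len(probs), p=probs)

# Inside generate_text, replace the tf.argmax line with:
# predicted_index = sample_next_index(predicted_probs, temperature=0.8)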
Below is the complete implementation:
Python
import tensorflow as tf
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.models import Sequential
# Sample text data
text_data = "Hello, how are you? I am doing well. Thank you for asking."
# Tokenize the text
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts([text_data])
total_words = len(tokenizer.word_index) + 1
# Create input sequences and labels
input_sequences = []
for line in text_data.split('.'):
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
        n_gram_sequence = token_list[:i+1]
        input_sequences.append(n_gram_sequence)
# Pad sequences for equal length
max_sequence_len = max([len(x) for x in input_sequences])
input_sequences = tf.keras.preprocessing.sequence.pad_sequences(
    input_sequences, maxlen=max_sequence_len, padding='pre')
# Create predictors and label
xs, labels = input_sequences[:,:-1],input_sequences[:,-1]
# Convert labels to one-hot encoding
ys = tf.keras.utils.to_categorical(labels, num_classes=total_words)
# Define the model
model = Sequential()
model.add(Embedding(total_words, 64, input_length=max_sequence_len-1))
model.add(LSTM(100))
model.add(Dense(total_words, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
# Fit the model
history = model.fit(xs, ys, epochs=100, verbose=1)
def generate_text(seed_text, next_words, model, max_sequence_len):
    for _ in range(next_words):
        token_list = tokenizer.texts_to_sequences([seed_text])[0]
        token_list = tf.keras.preprocessing.sequence.pad_sequences([token_list],
                                                                   maxlen=max_sequence_len-1,
                                                                   padding='pre')
        predicted_probs = model.predict(token_list, verbose=0)[0]
        predicted_index = tf.argmax(predicted_probs, axis=-1).numpy()
        output_word = ""
        for word, index in tokenizer.word_index.items():
            if index == predicted_index:
                output_word = word
                break
        seed_text += " " + output_word
    return seed_text
# Generate text
print(generate_text("how", 5, model, max_sequence_len))
Output:
how are you i am doing
In summary, constructing a language model for natural language processing (NLP) involves several stages: tokenization, sequence creation, model construction, training, and text generation. Tokenization transforms textual data into numerical representations, while sequence creation generates input-output pairs for model training. The model typically comprises layers such as Embedding and LSTM, followed by a Dense layer for predictions. Training fits the model to the input sequences and their labels, and text generation uses the trained model to produce new text from a provided seed text. Overall, language models are vital for NLP tasks such as text generation, machine translation, and sentiment analysis, among others.