Music Genre Classification using Transformers
Last Updated :
23 Jul, 2025
Music has a direct effect on our brains. Humans have created many types of music, such as pop, hip-hop, rap, classical, and rock, and a piece of music can be classified by its genre. Our brains distinguish genres almost by default, but computers have no such built-in mechanism, even though music genre classification has wide use in recommendation systems, content organization, and the music industry. To analyze music genres, we can use machine learning. In this article, we will discuss how to utilize transformer-based models to perform music genre classification.
Why use Transformers?
Music genre classification is a challenging task that involves several steps, such as analyzing the audio, embedding it, and categorizing tracks into distinct genres. These steps involve large computations that are both time- and memory-consuming. Many other approaches to music genre classification have been tried, but recent studies show that transformer models can effectively handle the complexity of the task. Their strength lies in capturing intricate patterns, dependencies, and temporal relationships within music data. Unlike methods that often struggle to represent the rich and complex structure of music, transformers excel at modelling sequences of data: they can analyze raw audio, symbolic notation, or even textual descriptions to identify the underlying genre with remarkable accuracy.
Step-by-step implementation
Installing required module
First, we need to install the transformers, accelerate, datasets, and evaluate modules in our runtime.
!pip install transformers
!pip install accelerate
!pip install datasets
!pip install evaluate
Importing required libraries
Now we will import all the required Python libraries, such as NumPy, transformers, datasets, and evaluate.
Python3
from datasets import load_dataset, Audio
import numpy as np
from transformers import pipeline, AutoFeatureExtractor, AutoModelForAudioClassification, TrainingArguments, Trainer
import evaluate
Loading dataset and Splitting
Now we will load the GTZAN dataset, which contains a total of 10 music genres. Then we will split it into training and testing sets (90:10).
Python3
gtzan = load_dataset("marsyas/gtzan", "all")
gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1)
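The `train_test_split` call above shuffles the data with a fixed seed and holds out 10% for testing, so the split is reproducible across runs. A toy, pure-Python stand-in (not the `datasets` implementation) illustrates the idea:

```python
import random

def seeded_split(items, test_size=0.1, seed=42):
    """Toy stand-in for train_test_split: shuffle deterministically
    with a fixed seed, then slice off the last fraction as the test set."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_test = max(1, round(len(items) * test_size))
    return items[:-n_test], items[-n_test:]

train, test = seeded_split(range(100))
print(len(train), len(test))  # 90 10
```

Because the seed is fixed, running the split twice yields the same partition, which is what makes the experiment repeatable.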
Data pre-processing
Now we will extract features from the audio files (.wav) using transformers' AutoFeatureExtractor, and define a preprocessing function that iterates over them.
- Model and feature initialization
- Used a pretrained model checkpoint from the Hugging Face model hub.
- Initialized the feature extractor from that checkpoint.
- Loading data and audio preprocessing
- Preprocessed the audio data in the GTZAN dataset using the feature extractor: preprocess_function applies the feature extractor to a list of audio arrays, setting options such as 'max_length' and 'truncation'.
Python3
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(
    model_id, do_normalize=True, return_attention_mask=True
)
sampling_rate = feature_extractor.sampling_rate

# Resample every clip to the sampling rate the extractor expects
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=sampling_rate))

# Sanity-check the extractor on a single example
sample = gtzan["train"][0]["audio"]
inputs = feature_extractor(
    sample["array"], sampling_rate=sample["sampling_rate"]
)

max_duration = 20.0

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=int(feature_extractor.sampling_rate * max_duration),
        truncation=True,
        return_attention_mask=True,
    )
    return inputs

gtzan_encoded = gtzan.map(
    preprocess_function,
    remove_columns=["audio", "file"],
    batched=True,
    batch_size=25,
    num_proc=1,
)
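The `max_length` and `truncation` options cap each clip at `sampling_rate * max_duration` samples. A minimal sketch of that truncation logic, assuming the extractor's usual 16 kHz sampling rate:

```python
def truncate_or_keep(audio, sampling_rate=16_000, max_duration=20.0):
    """Mirror the extractor's truncation: cap each clip at
    sampling_rate * max_duration samples (20 s at 16 kHz = 320_000)."""
    max_length = int(sampling_rate * max_duration)
    return audio[:max_length]

thirty_seconds = [0.0] * (16_000 * 30)   # a GTZAN-length 30 s clip
clipped = truncate_or_keep(thirty_seconds)
print(len(clipped))  # 320000
```

So each 30-second GTZAN clip is reduced to its first 20 seconds, which keeps batches a uniform, manageable size.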
Encoding dataset
To feed the dataset to the model, we need to encode its labels.
- Renamed the 'genre' column to 'label'.
- Created id-to-label and label-to-id mapping dictionaries.
Python3
gtzan_encoded = gtzan_encoded.rename_column("genre", "label")

id2label_fn = gtzan["train"].features["genre"].int2str
id2label = {
    str(i): id2label_fn(i)
    for i in range(len(gtzan_encoded["train"].features["label"].names))
}
label2id = {v: k for k, v in id2label.items()}
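A standalone sketch of the resulting round trip between ids and names. The ten genre names below are GTZAN's genres in their usual alphabetical order, but the real mapping comes from the dataset's ClassLabel feature, so treat this ordering as an assumption:

```python
# Hypothetical id2label / label2id mapping for GTZAN's ten genres
genres = ["blues", "classical", "country", "disco", "hiphop",
          "jazz", "metal", "pop", "reggae", "rock"]

id2label = {str(i): name for i, name in enumerate(genres)}
label2id = {v: k for k, v in id2label.items()}

print(id2label["0"], label2id["rock"])  # blues 9
```

Note that the keys are strings rather than ints; this matches what `from_pretrained` expects for its `id2label` and `label2id` arguments.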
Classification model
Now we will use 'AutoModelForAudioClassification' for the music genre classification. We will specify various training arguments for the model as per our choice and our machine's capability.
- First, we initialized a pretrained audio model for fine-tuning.
- Then we created a TrainingArguments object containing various training configuration settings, such as the evaluation strategy, learning rate, batch sizes, and logging settings. These settings are used during the model training process.
Python3
num_labels = len(id2label)

model = AutoModelForAudioClassification.from_pretrained(
    model_id,
    num_labels=num_labels,
    label2id=label2id,
    id2label=id2label,
)

model_name = model_id.split("/")[-1]
batch_size = 2
gradient_accumulation_steps = 1
num_train_epochs = 5

training_args = TrainingArguments(
    f"{model_name}-Music classification Finetuned",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_train_epochs,
    warmup_ratio=0.1,
    logging_steps=5,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    fp16=True,
)
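A back-of-envelope sketch of the optimiser schedule these arguments imply, assuming the 90:10 split leaves roughly 899 training clips (the exact count depends on the dataset version):

```python
import math

train_examples = 899          # assumed size of the training split
per_device_batch = 2          # per_device_train_batch_size
grad_accum = 1                # gradient_accumulation_steps
epochs = 5                    # num_train_epochs

# One optimiser step consumes per_device_batch * grad_accum examples
steps_per_epoch = math.ceil(train_examples / (per_device_batch * grad_accum))
total_steps = steps_per_epoch * epochs
warmup_steps = int(0.1 * total_steps)     # warmup_ratio=0.1

print(steps_per_epoch, total_steps, warmup_steps)  # 450 2250 225
```

With a tiny batch size of 2 this gives many steps per epoch; raising `gradient_accumulation_steps` is a common way to simulate a larger effective batch when GPU memory is tight.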
Model evaluation
Now we will evaluate our model in terms of accuracy.
- We loaded the accuracy metric from the Hugging Face evaluate module.
- compute_metrics computes the evaluation metric from the model predictions and the reference labels; here it uses the loaded accuracy metric.
- Then we initialized the Trainer and trained the model.
Python3
metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions, references=eval_pred.label_ids)

trainer = Trainer(
    model,
    training_args,
    train_dataset=gtzan_encoded["train"],
    eval_dataset=gtzan_encoded["test"],
    tokenizer=feature_extractor,
    compute_metrics=compute_metrics,
)
trainer.train()
Output:
Epoch Training Loss Validation Loss Accuracy
1 1.180900 1.429399 0.610000
TrainOutput(global_step=450, training_loss=1.8381363932291668,
metrics= {'train_runtime': 493.46, 'train_samples_per_second': 1.822,
'train_steps_per_second': 0.912, 'total_flos': 4.089325516416e+16, 'train_loss': 1.8381363932291668, 'epoch': 1.0})
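The compute_metrics function above does essentially the following: take the argmax of each row of logits and compare against the reference labels. A pure-Python sketch with made-up logits:

```python
def accuracy_from_logits(logits, labels):
    """Argmax each row of logits, then score against reference labels."""
    preds = [row.index(max(row)) for row in logits]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

logits = [[0.1, 2.3, 0.4],   # -> class 1
          [1.9, 0.2, 0.1],   # -> class 0
          [0.3, 0.1, 2.2]]   # -> class 2
print(accuracy_from_logits(logits, [1, 0, 1]))  # 2 of 3 correct -> 0.666...
```

In the real pipeline, `np.argmax(..., axis=1)` does the row-wise argmax over the model's output logits and `evaluate`'s accuracy metric performs the comparison.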
Saving and Loading the Model in a "Saved Model" Folder
Code for Saving the model
Python3
# Save the model and feature extractor
model.save_pretrained("/content/Saved Model")
feature_extractor.save_pretrained("/content/Saved Model")
Code for loading the model
Python3
# Load the model and feature extractor
loaded_model = AutoModelForAudioClassification.from_pretrained("/content/Saved Model")
loaded_feature_extractor = AutoFeatureExtractor.from_pretrained("/content/Saved Model")
Pipeline
Using this pipeline, you will be able to input an audio file and obtain the predicted genre along with its probability score. For the following code, we have used a file of the blues genre. The file can be downloaded from here.
Python3
from transformers import pipeline

pipe = pipeline("audio-classification", model=loaded_model,
                feature_extractor=loaded_feature_extractor)

def classify_audio(filepath):
    """Run the pipeline on a file and return a {label: score} dict."""
    preds = pipe(filepath)
    outputs = {}
    for p in preds:
        outputs[p["label"]] = p["score"]
    return outputs

# Provide the input file path
input_file_path = input('Input:')

# Classify the audio file
output = classify_audio(input_file_path)

# Print the output genre
print("Predicted Genre:")
max_key = max(output, key=output.get)
print("The predicted genre is:", max_key)
print("The prediction score is:", output[max_key])
Output:
Input:/content/sound-genre-blue.wav
Predicted Genre:
The predicted genre is: blues
The prediction score is: 0.9631124138832092
Conclusion
We can conclude that music genre classification is a complex and computationally costly task, but one that is required in many industries. Our model achieved a good accuracy of 82%. However, training on a larger dataset could yield better accuracy still.