Music Genre Classification using Transformers
Music has a direct effect on our brains, and humans have created many distinct styles of it: pop, hip-hop, rap, classical, rock, and more. These styles, or genres, are something our brains recognize almost automatically, but computers have no such built-in mechanism. Music genre classification nevertheless has wide applications in recommendation systems, content organization, and the music industry. Machine learning lets us automate the task, and in this article we will discuss how transformer-based models can be used to perform music genre classification.
Why use Transformers?
Music genre classification is a challenging task that involves several steps, such as feature extraction, embedding, and categorization of music tracks into distinct genres. These steps require large computations that are both time- and memory-consuming. Many other approaches to music genre classification have been tried, but recent studies have shown that transformer models can handle all of these steps effectively. The strength of transformers lies in their ability to capture intricate patterns, dependencies, and temporal relationships within music data. Unlike methods that struggle to represent the rich, complex structure of music, transformers excel at modelling sequences of data: they can analyze raw audio, symbolic notation, or even textual descriptions to identify the underlying genre with remarkable accuracy.
Step-by-step implementation
Installing required module
First, we need to install the transformers, accelerate, datasets, and evaluate modules in our runtime.
!pip install transformers
!pip install accelerate
!pip install datasets
!pip install evaluate
Importing required libraries
Now we will import all the required Python libraries, such as NumPy and the Hugging Face transformers, datasets, and evaluate libraries.
Python3
from datasets import load_dataset, Audio
import numpy as np
from transformers import pipeline, AutoFeatureExtractor, AutoModelForAudioClassification, TrainingArguments, Trainer
import evaluate
Loading dataset and Splitting
Now we will load the GTZAN dataset, which contains a total of 10 music genres. Then we will split it into training and testing sets (90:10).
Python3
gtzan = load_dataset("marsyas/gtzan", "all")
gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1)
Data pre-processing
Now we will extract features from the audio files (.wav) using transformers' AutoFeatureExtractor, and define a preprocessing function to iterate over them.
- Model and feature initialization
- We use a pretrained model from the Hugging Face model hub.
- We initialize its feature extractor.
- Loading data and audio preprocessing
- The audio column is cast to the feature extractor's sampling rate, and preprocess_function applies the feature extractor to a list of audio arrays, setting options such as max_length and truncation.
Python3
model_id = "ntu-spml/distilhubert"

# Load the pretrained feature extractor (normalizes the audio and
# returns attention masks alongside the input values)
feature_extractor = AutoFeatureExtractor.from_pretrained(
    model_id, do_normalize=True, return_attention_mask=True
)

# Resample every clip to the rate the model was trained on (16 kHz)
sampling_rate = feature_extractor.sampling_rate
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=sampling_rate))

# Run the extractor on a single sample to see what it produces
sample = gtzan["train"][0]["audio"]
inputs = feature_extractor(
    sample["array"], sampling_rate=sample["sampling_rate"])

# Truncate every clip to at most 20 seconds
max_duration = 20.0

def preprocess_function(examples):
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=int(feature_extractor.sampling_rate * max_duration),
        truncation=True,
        return_attention_mask=True,
    )
    return inputs

# Apply the preprocessing in batches and drop the raw audio columns
gtzan_encoded = gtzan.map(
    preprocess_function,
    remove_columns=["audio", "file"],
    batched=True,
    batch_size=25,
    num_proc=1,
)
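Before moving on, it can help to verify what the map produced. Assuming DistilHuBERT's default 16 kHz sampling rate, each encoded example should hold at most 16,000 × 20 = 320,000 input values. This check is our addition:
Python3
# Optional: confirm the remaining columns and the truncated input length
print(gtzan_encoded["train"].column_names)  # e.g. ['genre', 'input_values', 'attention_mask']
print(len(gtzan_encoded["train"][0]["input_values"]))  # at most 320,000 samples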
Encoding dataset
To feed the dataset to the model, we need to encode its labels.
- Rename the 'genre' column to 'label'.
- Create mappings between integer ids and genre names.
Python3
gtzan_encoded = gtzan_encoded.rename_column("genre", "label")

id2label_fn = gtzan["train"].features["genre"].int2str
id2label = {
    str(i): id2label_fn(i)
    for i in range(len(gtzan_encoded["train"].features["label"].names))
}
label2id = {v: k for k, v in id2label.items()}
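A quick, optional spot check of the mappings (our addition): the string ids '0' through '9' should map to the ten GTZAN genre names and back.
Python3
# Optional: verify the id <-> label round trip
print(id2label["0"])            # a genre name, e.g. 'blues'
print(label2id[id2label["0"]])  # '0'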
Classification model
Now we will use AutoModelForAudioClassification for the music genre classification, specifying training arguments that suit our choices and our machine's capability.
- First, we initialize the pretrained audio model for fine-tuning.
- Then we create a TrainingArguments object containing the training configuration settings, such as the evaluation strategy, learning rate, batch sizes, and logging settings. These settings are used during the model training process.
Python3
num_labels = len(id2label)

# Initialize the pretrained checkpoint with a fresh classification head
model = AutoModelForAudioClassification.from_pretrained(
    model_id,
    num_labels=num_labels,
    label2id=label2id,
    id2label=id2label,
)

model_name = model_id.split("/")[-1]
batch_size = 2
gradient_accumulation_steps = 1
num_train_epochs = 5

training_args = TrainingArguments(
    f"{model_name}-music-classification-finetuned",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=num_train_epochs,
    warmup_ratio=0.1,
    logging_steps=5,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    fp16=True,
)
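One caveat with these settings: fp16=True enables mixed-precision training, which requires a CUDA GPU, so a CPU-only runtime will fail. A minimal guard you could add after constructing the arguments (our addition, not part of the original walkthrough):
Python3
import torch

# Only keep mixed precision if a CUDA GPU is actually available
training_args.fp16 = torch.cuda.is_available()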
Model evaluation
Now we will evaluate our model in terms of accuracy.
- We load the accuracy metric from the Hugging Face evaluate library.
- compute_metrics computes the evaluation metric from the model's predictions and the reference labels; in this case it uses the loaded accuracy metric.
- Then we initialize the Trainer and train the model.
Python3
metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The predicted class is the index of the largest logit
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return metric.compute(predictions=predictions,
                          references=eval_pred.label_ids)

trainer = Trainer(
    model,
    training_args,
    train_dataset=gtzan_encoded["train"],
    eval_dataset=gtzan_encoded["test"],
    tokenizer=feature_extractor,
    compute_metrics=compute_metrics,
)

trainer.train()
Output:
Epoch    Training Loss    Validation Loss    Accuracy
1        1.180900         1.429399           0.610000

TrainOutput(global_step=450, training_loss=1.8381363932291668,
metrics={'train_runtime': 493.46, 'train_samples_per_second': 1.822,
'train_steps_per_second': 0.912, 'total_flos': 4.089325516416e+16,
'train_loss': 1.8381363932291668, 'epoch': 1.0})
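Once training finishes, you can also run a standalone evaluation pass over the test split. Trainer.evaluate() returns the metrics produced by compute_metrics (the exact numbers will vary from run to run):
Python3
# Optional: evaluate the (best) model on the held-out split after training
eval_results = trainer.evaluate()
print(eval_results["eval_accuracy"])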
Saving and Loading the Model in a "Saved Model" Folder
Code for Saving the model
Python3
# Save the model and feature extractor
model.save_pretrained("/content/Saved Model")
feature_extractor.save_pretrained("/content/Saved Model")
Code for loading the model
Python3
# Load the model and feature extractor
loaded_model = AutoModelForAudioClassification.from_pretrained("/content/Saved Model")
loaded_feature_extractor = AutoFeatureExtractor.from_pretrained("/content/Saved Model")
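As a quick check that the reload worked (our addition), the restored config should still carry the genre label mappings we set before fine-tuning:
Python3
# Optional: the label mappings survive the save/load round trip
print(loaded_model.config.id2label)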
Pipeline
Using this pipeline, you can input an audio file and obtain the predicted genre along with its probability score. For the following code, we have used a file of the blues genre. The file can be downloaded from here.
Python3
# pipeline was already imported above; build it from the reloaded model
pipe = pipeline("audio-classification", model=loaded_model,
                feature_extractor=loaded_feature_extractor)

def classify_audio(filepath):
    # Run the pipeline and collect each label with its score
    preds = pipe(filepath)
    outputs = {}
    for p in preds:
        outputs[p["label"]] = p["score"]
    return outputs

# Provide the input file path
input_file_path = input('Input:')

# Classify the audio file
output = classify_audio(input_file_path)

# Print the genre with the highest score
print("Predicted Genre:")
max_key = max(output, key=output.get)
print("The predicted genre is:", max_key)
print("The prediction score is:", output[max_key])
Output:
Input:/content/sound-genre-blue.wav
Predicted Genre:
The predicted genre is: blues
The prediction score is: 0.9631124138832092
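The pipeline returns more than just the top label: by default the audio-classification pipeline yields the top five labels with their scores. A small optional snippet (our addition) prints the whole returned distribution, sorted by score:
Python3
# Optional: print every label/score pair returned by the pipeline,
# from most to least likely
for label, score in sorted(output.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{label}: {score:.4f}")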
Conclusion
We can conclude that music genre classification is a complex and computationally costly task, yet one that many industries need. Our model achieved a good accuracy of 82%; using a larger dataset could improve it further.