This document summarizes a study on classifying heart sounds as normal or abnormal using deep learning. Phonocardiogram (PCG) signals from publicly available online datasets were used to train convolutional neural network (CNN) and recurrent neural network (RNN) models. Before training, mel-frequency cepstral coefficients (MFCCs) were extracted from the PCG signals as input features. In the experiments, the CNN achieved higher accuracy (90.6%) and lower loss than the RNN, suggesting that a CNN is better suited to this heart sound classification task. The trained CNN can classify new heart sound recordings, outputting a confidence value for the likelihood that a recording is normal or abnormal.
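The MFCC feature-extraction step mentioned above can be sketched as follows. This is a minimal NumPy/SciPy illustration of how MFCCs are typically computed from an audio signal, not the study's actual pipeline; all parameter values (sample rate, frame size, filterbank size, number of coefficients) are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    # Standard mel-scale conversion
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=2000, n_fft=256, hop=128, n_mels=26, n_ceps=13):
    """Return an (n_frames, n_ceps) MFCC matrix for a 1-D signal."""
    # Slice the signal into overlapping Hann-windowed frames
    window = np.hanning(n_fft)
    frames = np.array([signal[s:s + n_fft] * window
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft
    # Triangular filterbank with centers spaced evenly on the mel scale
    hz_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0),
                                   n_mels + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):
            fbank[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fbank[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    # Log mel energies, then a DCT to decorrelate them into cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: one second of a synthetic 50 Hz tone standing in for a heart sound
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
features = mfcc(np.sin(2 * np.pi * 50 * t))
print(features.shape)  # (14, 13): 14 frames, 13 coefficients each
```

The resulting MFCC matrix (frames x coefficients) is the kind of 2-D representation that can then be fed to a CNN as an image-like input, or to an RNN as a sequence of per-frame feature vectors.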