This document summarizes a student's research on using parallel computing to improve the efficiency of the K-nearest neighbors (K-NN) machine learning algorithm. It discusses how the Message Passing Interface (MPI) can be used to partition the training data across multiple processors. The proposed approach first pre-processes the training data with clustering, then routes each test point to the processor whose cluster representative it most resembles. This parallel approach reduces the running time of K-NN relative to a serial implementation. The student's work focused on applying competence enhancement and competence preservation rules to iteratively update the cluster centroids.
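The cluster-based routing described above can be sketched in plain Python. This is a minimal single-process illustration of the idea, not the student's MPI implementation: the `partitions` list stands in for the per-processor training subsets, the centroids are assumed fixed (the iterative centroid updates are omitted), and all names (`nearest_cluster`, `knn_predict`) are hypothetical.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_cluster(point, centroids):
    """Index of the cluster representative closest to the point."""
    return min(range(len(centroids)), key=lambda i: euclidean(point, centroids[i]))

def knn_predict(query, partition, k):
    """Majority label among the k nearest training samples in one partition."""
    neighbors = sorted(partition, key=lambda sample: euclidean(query, sample[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# Toy training set: (features, label) pairs forming two obvious clusters.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((5.1, 4.9), "B")]
centroids = [(0.05, 0.1), (5.05, 4.95)]

# Pre-processing step: split the training data by nearest centroid.
# In the MPI setting each partition would live on a separate processor.
partitions = [[] for _ in centroids]
for sample in train:
    partitions[nearest_cluster(sample[0], centroids)].append(sample)

# Test phase: route the query to the partition whose representative it
# most resembles, then run K-NN only against that subset.
query = (4.8, 5.2)
pid = nearest_cluster(query, centroids)
print(knn_predict(query, partitions[pid], k=2))  # prints "B"
```

Because each query is compared only against one partition rather than the full training set, the per-query distance computations shrink roughly in proportion to the number of clusters, which is the source of the speedup the summary refers to.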