This document discusses the k-nearest neighbors (k-NN) algorithm. It begins with the basic principle behind k-NN: records that lie close to each other in the data space are likely to belong to the same class. It then discusses drawbacks of k-NN, such as computational expense, storage requirements, and degraded performance on high-dimensional data. Finally, it covers techniques to improve k-NN, including condensing the training set to remove redundant points while preserving the decision boundary, and using proximity graphs and editing algorithms to further refine the training set.
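As a concrete illustration of the basic principle, the following is a minimal sketch (not taken from the document itself) of a brute-force k-NN classifier: a query point is assigned the majority class among its k nearest training points under Euclidean distance. The function and variable names here are illustrative assumptions.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (point, label) pairs; distance is Euclidean.
    This brute-force scan is O(n) per query, which is the computational
    cost the document's condensing techniques aim to reduce.
    """
    dists = sorted((math.dist(point, query), label) for point, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.0), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]

print(knn_classify(train, (0.1, 0.1)))  # query near cluster A -> "A"
```

Condensing would shrink `train` to a subset of points near the class boundary, cutting both the storage and the per-query distance computations while leaving the predicted labels unchanged.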