This paper proposes a multi-penalty regularization framework for improving the convergence rates of learning algorithms in the manifold learning setting. The algorithms are analyzed theoretically in reproducing kernel Hilbert spaces, where optimal convergence rates are established under suitable smoothness assumptions on the target function. In addition, the paper introduces a new parameter-choice rule, the 'penalty balancing principle', for selecting the regularization parameters in practice.
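To make the setup concrete, below is a minimal sketch of a two-penalty regularized least-squares estimator of the kind such frameworks study: a squared-loss data term, an RKHS-norm penalty, and a graph-Laplacian penalty encoding the manifold assumption. This is an illustrative instance under assumed choices (Gaussian kernel, dense RBF affinity graph, the helper names `rbf_kernel` and `multi_penalty_fit`, and the parameter values), not the paper's exact estimator or its parameter rule.

```python
# Sketch: multi-penalty regularized least squares in an RKHS.
# Objective (m samples, kernel matrix K, graph Laplacian L):
#   min_f (1/m) * sum_i (f(x_i) - y_i)^2 + lam1 * ||f||_K^2 + lam2 * f^T L f
# By the representer theorem f = sum_i alpha_i * K(x_i, .), so setting the
# gradient in alpha to zero gives the linear system
#   (K + m*lam1*I + m*lam2*L@K) alpha = y.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def multi_penalty_fit(X, y, lam1, lam2, gamma=1.0):
    """Fit the two-penalty estimator; returns expansion coefficients and K."""
    m = len(y)
    K = rbf_kernel(X, X, gamma)
    # Graph Laplacian L = D - W from an RBF affinity matrix (a common,
    # assumed choice; k-NN graphs are equally standard).
    W = rbf_kernel(X, X, gamma)
    L = np.diag(W.sum(1)) - W
    alpha = np.linalg.solve(K + m * lam1 * np.eye(m) + m * lam2 * L @ K, y)
    return alpha, K

# Usage: noisy samples lying near a 1-D manifold (an arc) in the plane.
rng = np.random.default_rng(0)
t = rng.uniform(0, 3, 60)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((60, 2))
y = np.sin(2 * t) + 0.1 * rng.standard_normal(60)
alpha, K = multi_penalty_fit(X, y, lam1=1e-3, lam2=1e-3)
print("train RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))
```

In this sketch the two parameters `lam1` and `lam2` are fixed by hand; the paper's penalty balancing principle instead selects them adaptively, which is precisely the kind of tuning problem a multi-penalty scheme introduces.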