This document presents Additive Parameter Decomposition (APD), a continual learning framework that addresses scalability and catastrophic forgetting by decomposing model parameters into task-shared and task-adaptive components. Experimental results show that APD outperforms existing continual learning methods, remaining accurate and efficient as the number of tasks grows. The decomposition also enables selective forgetting of individual tasks and keeps performance fair with respect to task order, making APD a practical approach for lifelong learning scenarios.
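To make the decomposition idea concrete, here is a minimal PyTorch sketch of a linear layer whose effective weight for task t is a masked shared matrix plus a task-specific additive term. This is an illustration under assumptions, not the authors' implementation: the class name `APDLinear`, the sigmoid mask, the per-task parameter lists, and the initialization choices are all hypothetical, and the sparsity regularization on the task-adaptive terms is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class APDLinear(nn.Module):
    """Illustrative sketch of an additively decomposed linear layer.

    The effective weight for task t is a task-specific mask applied to
    task-shared parameters, plus a task-adaptive additive term:
        theta_t = mask_t * shared + tau_t
    (Names and mask form are assumptions for illustration.)
    """

    def __init__(self, in_features: int, out_features: int, num_tasks: int):
        super().__init__()
        # Task-shared parameters, reused by every task.
        self.shared = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.shared)
        # Task-adaptive additive terms (one per task), initialized to zero;
        # in practice these would be kept small/sparse by regularization.
        self.task_adaptive = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, in_features))
             for _ in range(num_tasks)]
        )
        # Per-task masks on the shared parameters (broadcast over columns).
        self.task_mask = nn.ParameterList(
            [nn.Parameter(torch.ones(out_features, 1))
             for _ in range(num_tasks)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Compose the task-specific weight from shared and adaptive parts.
        weight = (torch.sigmoid(self.task_mask[task_id]) * self.shared
                  + self.task_adaptive[task_id])
        return F.linear(x, weight)

# Usage: route each batch through the parameters of its task.
layer = APDLinear(in_features=16, out_features=8, num_tasks=5)
y = layer(torch.randn(4, 16), task_id=2)
```

In this sketch, selectively forgetting task t would amount to discarding `task_adaptive[t]` and `task_mask[t]`, which leaves the shared parameters and every other task's terms untouched; this mirrors the selective-forgetting property described above.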