This document presents a comparative study of question difficulty, focusing on the application of Item Response Theory (IRT) to estimate difficulty from learner proficiency. It reviews existing methods for estimating question difficulty and introduces a new model that incorporates semantic analysis for automated question generation, with the aim of making test construction more efficient. The study evaluates the model's performance through a comprehensive analysis and outlines future research directions for refining the methodology.
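
As a point of reference for how IRT relates difficulty to learner proficiency, the following is a minimal sketch assuming the common two-parameter logistic (2PL) model; the abstract does not specify which IRT variant the study uses, and the function and parameter names here are illustrative rather than taken from the paper.

    import numpy as np

    def p_correct(theta, a, b):
        """Two-parameter logistic (2PL) IRT model: probability that a
        learner with proficiency theta answers an item with
        discrimination a and difficulty b correctly."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    # Example: three learners of increasing proficiency facing one item
    thetas = np.array([-1.0, 0.0, 1.5])  # learner proficiencies (logit scale)
    a, b = 1.2, 0.5                      # item discrimination and difficulty
    print(p_correct(thetas, a, b))       # probabilities rise with proficiency

Under this model, an item's difficulty b is the proficiency level at which a learner has a 50% chance of answering correctly, which is the sense in which difficulty is assessed "based on learner proficiency" above.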