Research on the Impact of Large Language Models on Computer Science Students: A Multi-dimensional Analysis of Education, Practice, and Employment


Research Questions:

1. How does large language model technology transform the teaching methods and learning resources of computer science courses, and what impact does this transformation have on students' learning outcomes and the cultivation of their interests?

2. In practical operations and technological development processes, how do computer science students leverage large language model technology to enhance project development efficiency or innovation capabilities, and what specific challenges and opportunities do these technologies present for students' practical skills?

3. How do computer science students believe large language models will impact employment?

Introduction:

In recent years, groundbreaking developments in artificial intelligence have reshaped the global technology and education landscape, particularly with the emergence of Large Language Models (LLMs) such as the Generative Pre-trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT), which mark a new era for the field. In educational settings, these large models have not only transformed traditional methods of knowledge acquisition and dissemination but have also deeply permeated core activities such as academic research, course learning, and technical development. Postgraduate students occupy a pivotal, bridging position in the chain of scientific and technological innovation, and their dual role in technological research and development on the one hand and ethical practice on the other makes the innovation of their education model a key lever for balancing technical efficiency with academic ethics. As advanced learners in higher education, graduate students place stronger demands on professionalism, research orientation, and innovation in their academic lives. In cutting-edge fields such as cybersecurity, computer science, and artificial intelligence in particular, the application of large models has delivered efficiency gains and paradigm shifts while also sparking debates about academic ethics, skill alienation, and cognitive dependence.

Graduate education serves as a crucial hub for scientific and technological innovation and talent cultivation. For postgraduate students majoring in Computer Science and Technology, this pivotal role is doubly distinctive. On the one hand, this group constitutes the core talent reserve for the research and development of artificial intelligence technologies; on the other hand, as direct builders and in-depth users of Large Language Model (LLM) technologies, they confront practical choices about technical ethics earlier than researchers in other fields. According to statistics from the United Nations Educational, Scientific and Cultural Organization (UNESCO), the number of doctoral students in artificial intelligence worldwide has grown by nearly 300% over the past decade; among them, computer science majors account for 78%, and 45% of the research focuses on algorithm optimization and model application [1]. This disciplinary profile places computer science postgraduates in the triple roles of developer, tester, and disseminator during each technological iteration. As tools, large language models can help postgraduate students quickly complete tasks such as code generation, yet their data security and ethical risks also impose higher requirements on students' academic norms. A systematic analysis of 177 academic papers worldwide [2] found that more than 27% of the research treats AI as a defensive measure, while 32% of the papers explore the potential of LLMs as attack tools. The complexity of such applications requires postgraduate students not only to master traditional network security skills but also to understand the technical characteristics of LLMs in depth.

Moreover, the widespread application of LLMs is reshaping the academic training paradigm for graduate students. The traditional research model, which relied primarily on manual coding and experimental validation, is gradually shifting toward a "human-machine collaboration" model, for example using LLMs to assist with literature reviews [3]. This transition, however, risks "hollowing out" academic capabilities: excessive reliance on LLMs may lead graduate students to neglect in-depth exploration of underlying technical principles, for instance attending only to the detection results generated by models in malware analysis while neglecting practical training in binary code reverse engineering [4]. How to strike a balance between the empowerment provided by LLMs and the cultivation of academic abilities has therefore become a core issue that graduate education urgently needs to address.

Therefore, this study focuses on the multidimensional impact of large models on the academic life of contemporary graduate students, aiming to answer the following core questions:

  1. How does large model technology transform the teaching methods and learning resources of computer science courses, and what impact does this transformation have on students' learning outcomes and the cultivation of their interests?
  2. In practical operations and technological development processes, how can computer science students leverage large model technology to enhance project development efficiency or innovation capabilities? What specific challenges and opportunities do these technologies present for students' practical skills?
  3. How do computer science students believe large models will impact employment?

Existing research on the application of large models in the field of education has mostly focused on basic education or undergraduate student populations. For example, Zawacki-Richter et al. [3] pointed out through a meta-analysis that AI tools demonstrate significant advantages in writing tutoring and personalized learning for undergraduate students, but their effectiveness in supporting higher-order research skills remains uncertain. Research on graduate education, however, is more fragmented and often concentrates on single technical scenarios. In terms of technology acceptance, the existing literature generally acknowledges a high level of acceptance of large models among graduate students. Nevertheless, this high acceptance has not fully translated into a comprehensive understanding of the technological risks. This disconnect between cognition and practice underscores the imbalance between technical literacy and ethical education in graduate education. In terms of research methods, the existing literature predominantly employs quantitative analysis (such as surveys) or case studies, as exemplified by references [5] and [6], lacking qualitative exploration of the full spectrum of graduate students' academic life. Furthermore, interdisciplinary comparative studies are particularly scarce.

This study aims to bridge the aforementioned gaps by employing a mixed-methods approach to comprehensively analyze the complex impacts of large models on graduate students' academic life. The objectives of this study include:

- revealing the multidimensional mechanisms through which large models influence graduate students' academic life;

- constructing a discipline-specific risk assessment framework and proposing targeted educational strategies;

- promoting adaptive reforms in technology ethics and academic norms to facilitate a positive interaction between artificial intelligence and higher education.

This study will provide empirical evidence for optimizing cybersecurity graduate education, aiding in the cultivation of composite talents with both technical acumen and ethical judgment. By systematically exploring these issues, this study not only offers theoretical foundations for optimizing graduate education models but also provides practical references for constructing technology ethics frameworks and formulating higher education policies.

References

[1] UNESCO. (2023). *Global Education Monitoring Report: Technology in Education*. Paris: UNESCO Publishing.

[2] Hasanov, I., Virtanen, S., Hakkala, A., & Isoaho, J. (2024). Application of Large Language Models in Cybersecurity: A Systematic Literature Review. *IEEE Access*, 12, 176751-176778.

[3] Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education. *International Journal of Educational Technology in Higher Education*, 16(1), 1-27.

[4] Shandilya, S. K., Piharsha, G., & Choudhary, G. (2023). GPT-based malware: Unveiling vulnerabilities and creating a way forward in digital space. *Proceedings of the International Conference on Data Security and Privacy Protection*, 164-173.

[5] Li, Y., Xu, J., Jia, C., et al. (2024). Current applications of generative artificial intelligence among university students and reflections [in Chinese]. *Open Education Research*, 30(1), 89-98. DOI: 10.13966/j.cnki.kfjyyj.2024.01.010.

[6] Ran, J., Hua, L., Xiao, C., et al. (2025). A questionnaire-based investigation of university students' perceptions of artificial intelligence [in Chinese]. *Journal of Higher Education*, 11(5), 8-12, 16. DOI: 10.19980/j.CN23-1593/G4.2025.05.002.
