ChatGLM2-6B is the second-generation Chinese-English conversational LLM from Zhipu AI and Tsinghua. It upgrades the base model with GLM's hybrid pretraining objective, 1.4T tokens of bilingual pretraining data, and human preference alignment, delivering large gains on MMLU, C-Eval, GSM8K, and BBH. The context window extends up to 32K (trained with FlashAttention), and Multi-Query Attention improves inference speed and memory use. The repo includes Python APIs, CLI and web demos, FastAPI and OpenAI-compatible API servers, and quantized checkpoints for lightweight local deployment on GPUs, CPU, or Apple MPS.
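
As a concrete starting point, here is a minimal chat sketch following the loading pattern in the repo's documentation (the THUDM/chatglm2-6b checkpoint name and trust_remote_code flag are the published defaults; the English prompt is illustrative):

    # Minimal FP16 chat session; assumes the transformers package and a CUDA GPU.
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
    model = model.eval()

    # chat() returns the reply plus the running history, which can be
    # passed back in for multi-turn conversation.
    response, history = model.chat(tokenizer, "Hello", history=[])
    print(response)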

Features

  • Stronger base model: large-scale bilingual pretraining + human preference alignment; substantial benchmark gains
  • Long-context variants: 8K default, 32K model (competitive on LongBench)
  • Faster, lighter inference: Multi-Query Attention + FlashAttention
  • Low-cost deployment: FP16/BF16, INT8/INT4 (≈5.5 GB), CPU & Apple MPS (see the sketch after this list)
  • Demos & APIs: CLI, Gradio/Streamlit, FastAPI and OpenAI-format servers
  • Finetuning & tooling: P-Tuning v2, full-parameter scripts, multi-GPU utilities
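
A sketch of the loading variants behind the low-cost deployment bullet: the chatglm2-6b-int4 checkpoint, the float() CPU path, and the MPS call follow the repo's documented patterns, while the quantize(4) variant carries over from the first-generation API and should be treated as an assumption here.

    from transformers import AutoModel

    # Documented route: load the pre-quantized INT4 checkpoint directly.
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4", trust_remote_code=True).cuda()

    # Assumed alternative (v1-style API): quantize the FP16 weights at load time.
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()

    # CPU inference in float32; the repo notes this needs roughly 32 GB of RAM.
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).float()

    # Apple Silicon via the MPS backend; the repo recommends pointing
    # from_pretrained at a local copy of the weights for this path.
    model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().to("mps")

Each line stands alone; pick the one variant that matches your hardware rather than running them all.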

License

Apache License V2.0

Additional Project Details

Operating Systems

Linux, Mac, Windows

Programming Language

Python, Unix Shell

Related Categories

Unix Shell Large Language Models (LLM), Python Large Language Models (LLM)

Registered

2025-10-04