GLM-130B


Last updated: December 7, 2025


What is GLM-130B?

GLM-130B is a 130-billion-parameter bilingual (Chinese–English) transformer-based General Language Model from THUDM/THUKEG, released as an open model for academic research and certain commercial use. Trained on 400B tokens with the GLM library, it delivers strong results on a range of NLP benchmarks and ships with downloadable checkpoints and inference/deployment code for efficient multi-GPU and mixed-precision serving.
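As a quick orientation, the sketch below shows prompt-based inference in the style of the GLM model cards on Hugging Face. It loads the smaller open GLM-10B sibling so it fits on one GPU; the full 130B checkpoints are served through the repository's own multi-GPU scripts. The model ID and the helpers exposed via trust_remote_code (build_inputs_for_generation, eop_token_id) are assumptions drawn from those model cards, not a verified API.

```python
# A minimal sketch, not the official serving path: GLM-130B itself ships
# with multi-GPU inference scripts in its repository. This uses the smaller
# open GLM-10B sibling; the model ID and the trust_remote_code helpers
# (build_inputs_for_generation, eop_token_id) are assumptions taken from
# the GLM model cards.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
model = model.half().cuda().eval()  # fp16 roughly halves memory at inference time

# GLM is trained with blank infilling: it fills the [MASK] span
# conditioned on the text around it.
inputs = tokenizer("GLM-130B is an open bilingual model for [MASK].",
                   return_tensors="pt")
inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=64)
inputs = inputs.to("cuda")

outputs = model.generate(**inputs, max_length=64,
                         eos_token_id=tokenizer.eop_token_id)
print(tokenizer.decode(outputs[0].tolist()))
```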


GLM-130B's Top Features

Open bilingual pre-trained model supporting Chinese and English

130B-parameter transformer-based large language model

Openly published model and code licenses permitting research and certain commercial uses, as specified in the repository

Trained on 400B text tokens with the GLM training framework

Optimized large-scale training using the GLM library (parallelism and efficiency)

Strong reported performance across multiple NLP benchmarks

Inference and deployment scripts, including multi-GPU and mixed-precision serving

Downloadable checkpoints/weights for immediate use

Instruction-style and few-shot prompting capabilities (see the prompt-construction sketch after this list)

Associated ICLR 2023 paper describing model design and training
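The few-shot prompting item above refers to plain prompt construction rather than fine-tuning. Below is a minimal, dependency-free sketch of how such a prompt might be assembled; the [gMASK] token string and the Q/A template are assumptions modeled on GLM's blank-infilling conventions, not a format mandated by the repository.

```python
# Hedged sketch of few-shot prompt construction for a GLM-style model.
# The [gMASK] token marks where open-ended generation begins in GLM's
# blank-infilling scheme; treat the exact token string and the Q/A
# template as assumptions rather than a required format.
def build_few_shot_prompt(examples, query, gen_token="[gMASK]"):
    """Concatenate labeled examples, then the query with a generation mask."""
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {query}\nA: {gen_token}"

prompt = build_few_shot_prompt(
    examples=[
        ("Translate to English: 你好", "Hello"),
        ("Translate to English: 谢谢", "Thank you"),
    ],
    query="Translate to English: 早上好",
)
print(prompt)  # feed this string to the model's tokenizer as-is
```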





Use Cases

Academic researchers

Benchmarking bilingual (Chinese–English) NLU and generation tasks with a large, open model.

NLP engineers

Building question answering and dialogue systems via prompt-based inference.

Data scientists

Rapid prototyping of domain prompts and few-shot workflows without fine-tuning.

Enterprise R&D teams

Evaluating 130B-scale model performance for internal pilots under the repository license.

Product teams

Drafting bilingual content and templates in Chinese or English for user-facing features.

Researchers in efficiency

Studying large-model inference on multi-GPU and mixed-precision setups using the provided scripts (a generic loading sketch follows this list).

Educators

Demonstrating state-of-the-art transformer behavior in classroom or lab settings.

Benchmark maintainers

Comparing strong baseline results across standard NLP leaderboards.

Applied scientists

Exploring instruction-style prompting and few-shot examples for task completion.

Open-source contributors

Extending the GLM library or improving deployment tooling for large models.
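For the efficiency-focused use case above, the generic sketch below shows the common Hugging Face pattern for sharding a large checkpoint across several GPUs in half precision. It is illustrative only: GLM-130B's repository provides its own tensor-parallel serving scripts, the checkpoint name is a smaller stand-in, and device_map="auto" assumes the accelerate package is installed.

```python
# Hedged sketch: multi-GPU, mixed-precision loading via Hugging Face
# transformers + accelerate. GLM-130B's own repo uses dedicated
# tensor-parallel scripts instead; the checkpoint below is a smaller
# stand-in used purely for illustration.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "THUDM/glm-10b",            # stand-in; the 130B weights need repo scripts
    trust_remote_code=True,     # GLM checkpoints ship custom modeling code
    torch_dtype=torch.float16,  # mixed precision: half-size weights
    device_map="auto",          # shard layers across all visible GPUs
)
print(model.hf_device_map)      # inspect which layers landed on which device
```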