chatglm-cpp

Name: chatglm-cpp
Version: 0.3.2
Summary: C++ implementation of ChatGLM family models and more LLMs
Upload time: 2024-04-23 15:35:04
Requires Python: >=3.7
License: MIT License
Keywords: chatglm, chatglm2, chatglm3, large language model
# ChatGLM.cpp

[![CMake](https://github.com/li-plus/chatglm.cpp/actions/workflows/cmake.yml/badge.svg)](https://github.com/li-plus/chatglm.cpp/actions/workflows/cmake.yml)
[![Python package](https://github.com/li-plus/chatglm.cpp/actions/workflows/python-package.yml/badge.svg)](https://github.com/li-plus/chatglm.cpp/actions/workflows/python-package.yml)
[![PyPI](https://img.shields.io/pypi/v/chatglm-cpp)](https://pypi.org/project/chatglm-cpp/)
![Python](https://img.shields.io/pypi/pyversions/chatglm-cpp)
[![License: MIT](https://img.shields.io/badge/license-MIT-blue)](LICENSE)

C++ implementation of [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B), [ChatGLM3-6B](https://github.com/THUDM/ChatGLM3) and more LLMs for real-time chatting on your MacBook.

![demo](docs/demo.gif)

## Features

Highlights:
* Pure C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp).
* Accelerated memory-efficient CPU inference with int4/int8 quantization, optimized KV cache and parallel computing.
* Support for P-Tuning v2 and LoRA finetuned models.
* Streaming generation with typewriter effect.
* Python bindings, web demo, API servers and more possibilities.

Support Matrix:
* Hardware: x86/ARM CPU, NVIDIA GPU, Apple Silicon GPU
* Platforms: Linux, macOS, Windows
* Models: [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B), [ChatGLM3-6B](https://github.com/THUDM/ChatGLM3), [CodeGeeX2](https://github.com/THUDM/CodeGeeX2), [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B), [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B), [Baichuan2](https://github.com/baichuan-inc/Baichuan2), [InternLM](https://github.com/InternLM/InternLM)

**NOTE**: Baichuan & InternLM model series are deprecated in favor of [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Getting Started

**Preparation**

Clone the ChatGLM.cpp repository to your local machine:
```sh
git clone --recursive https://github.com/li-plus/chatglm.cpp.git && cd chatglm.cpp
```

If you forgot the `--recursive` flag when cloning the repository, run the following command in the `chatglm.cpp` folder:
```sh
git submodule update --init --recursive
```

**Quantize Model**

Install necessary packages for loading and quantizing Hugging Face models:
```sh
python3 -m pip install -U pip
python3 -m pip install torch tabulate tqdm transformers accelerate sentencepiece
```

Use `convert.py` to transform ChatGLM-6B into quantized GGML format. For example, to convert the fp16 original model to a q4_0 (int4-quantized) GGML model, run:
```sh
python3 chatglm_cpp/convert.py -i THUDM/chatglm-6b -t q4_0 -o chatglm-ggml.bin
```

The original model (`-i <model_name_or_path>`) can be a Hugging Face model name or a local path to your pre-downloaded model. Currently supported models are:
* ChatGLM-6B: `THUDM/chatglm-6b`, `THUDM/chatglm-6b-int8`, `THUDM/chatglm-6b-int4`
* ChatGLM2-6B: `THUDM/chatglm2-6b`, `THUDM/chatglm2-6b-int4`
* ChatGLM3-6B: `THUDM/chatglm3-6b`
* CodeGeeX2: `THUDM/codegeex2-6b`, `THUDM/codegeex2-6b-int4`
* Baichuan & Baichuan2: `baichuan-inc/Baichuan-13B-Chat`, `baichuan-inc/Baichuan2-7B-Chat`, `baichuan-inc/Baichuan2-13B-Chat`

You are free to try any of the quantization types below by specifying `-t <type>` (a simplified sketch of how scale-based quantization works follows the list):
* `q4_0`: 4-bit integer quantization with fp16 scales.
* `q4_1`: 4-bit integer quantization with fp16 scales and minimum values.
* `q5_0`: 5-bit integer quantization with fp16 scales.
* `q5_1`: 5-bit integer quantization with fp16 scales and minimum values.
* `q8_0`: 8-bit integer quantization with fp16 scales.
* `f16`: half precision floating point weights without quantization.
* `f32`: single precision floating point weights without quantization.
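
For intuition, here is a simplified NumPy sketch of scale-based 4-bit quantization (symmetric, one fp16 scale per block). It is only a conceptual illustration and does not reproduce the exact GGML `q4_0` block layout or bit packing:
```python
import numpy as np

def quantize_q4_like(block: np.ndarray):
    """Quantize a 1-D block of fp32 weights to 4-bit integers with an fp16 scale.

    Conceptual only: real GGML q4_0 groups 32 weights per block and packs two
    4-bit values into each byte, which this sketch does not attempt to mimic.
    """
    scale = np.float16(np.abs(block).max() / 7.0)  # map the largest magnitude into the int4 range
    q = np.clip(np.round(block / np.float32(scale)), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.float16) -> np.ndarray:
    return q.astype(np.float32) * np.float32(scale)

block = np.random.randn(32).astype(np.float32)
q, scale = quantize_q4_like(block)
print("max abs error:", float(np.abs(dequantize(q, scale) - block).max()))
```
Types such as `q4_1` and `q5_1` additionally store a per-block minimum, so asymmetric value ranges waste fewer quantization levels.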

For LoRA models, add the `-l <lora_model_name_or_path>` flag to merge your LoRA weights into the base model. For example, run `python3 chatglm_cpp/convert.py -i THUDM/chatglm3-6b -t q4_0 -o chatglm3-ggml-lora.bin -l shibing624/chatglm3-6b-csc-chinese-lora` to merge public LoRA weights from Hugging Face.

For P-Tuning v2 models finetuned with the [official finetuning script](https://github.com/THUDM/ChatGLM3/tree/main/finetune_demo), the additional weights are automatically detected by `convert.py`. If `past_key_values` appears in the output weight list, the P-Tuning checkpoint has been converted successfully.

**Build & Run**

Compile the project using CMake:
```sh
cmake -B build
cmake --build build -j --config Release
```

Now you may chat with the quantized ChatGLM-6B model by running:
```sh
./build/bin/main -m chatglm-ggml.bin -p 你好
# 你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。
```

To run the model in interactive mode, add the `-i` flag. For example:
```sh
./build/bin/main -m chatglm-ggml.bin -i
```
In interactive mode, your chat history serves as the context for the next round of conversation.

Run `./build/bin/main -h` to explore more options!

**Try Other Models**

<details open>
<summary>ChatGLM2-6B</summary>

```sh
python3 chatglm_cpp/convert.py -i THUDM/chatglm2-6b -t q4_0 -o chatglm2-ggml.bin
./build/bin/main -m chatglm2-ggml.bin -p 你好 --top_p 0.8 --temp 0.8
# 你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。
```
</details>

<details open>
<summary>ChatGLM3-6B</summary>

In addition to chat mode, ChatGLM3-6B also supports function calling and a code interpreter.

Chat mode:
```sh
python3 chatglm_cpp/convert.py -i THUDM/chatglm3-6b -t q4_0 -o chatglm3-ggml.bin
./build/bin/main -m chatglm3-ggml.bin -p 你好 --top_p 0.8 --temp 0.8
# 你好👋!我是人工智能助手 ChatGLM3-6B,很高兴见到你,欢迎问我任何问题。
```

Setting system prompt:
```sh
./build/bin/main -m chatglm3-ggml.bin -p 你好 -s "You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown."
# 你好👋!我是 ChatGLM3,有什么问题可以帮您解答吗?
```

Function call:
~~~
$ ./build/bin/main -m chatglm3-ggml.bin --top_p 0.8 --temp 0.8 --sp examples/system/function_call.txt -i
System   > Answer the following questions as best as you can. You have access to the following tools: ...
Prompt   > 生成一个随机数
ChatGLM3 > random_number_generator
```python
tool_call(seed=42, range=(0, 100))
```
Tool Call   > Please manually call function `random_number_generator` with args `tool_call(seed=42, range=(0, 100))` and provide the results below.
Observation > 23
ChatGLM3 > 根据您的要求,我使用随机数生成器API生成了一个随机数。根据API返回结果,生成的随机数为23。
~~~

Code interpreter:
~~~
$ ./build/bin/main -m chatglm3-ggml.bin --top_p 0.8 --temp 0.8 --sp examples/system/code_interpreter.txt -i
System   > 你是一位智能AI助手,你叫ChatGLM,你连接着一台电脑,但请注意不能联网。在使用Python解决任务时,你可以运行代码并得到结果,如果运行结果有错误,你需要尽可能对代码进行改进。你可以处理用户上传到电脑上的文件,文件默认存储路径是/mnt/data/。
Prompt   > 列出100以内的所有质数
ChatGLM3 > 好的,我会为您列出100以内的所有质数。
```python
def is_prime(n):
   """Check if a number is prime."""
   if n <= 1:
       return False
   if n <= 3:
       return True
   if n % 2 == 0 or n % 3 == 0:
       return False
   i = 5
   while i * i <= n:
       if n % i == 0 or n % (i + 2) == 0:
           return False
       i += 6
   return True

primes_upto_100 = [i for i in range(2, 101) if is_prime(i)]
primes_upto_100
```

Code Interpreter > Please manually run the code and provide the results below.
Observation      > [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]
ChatGLM3 > 100以内的所有质数为:

$$
2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97 
$$
~~~

</details>

<details>
<summary>CodeGeeX2</summary>

```sh
$ python3 chatglm_cpp/convert.py -i THUDM/codegeex2-6b -t q4_0 -o codegeex2-ggml.bin
$ ./build/bin/main -m codegeex2-ggml.bin --temp 0 --mode generate -p "\
# language: Python
# write a bubble sort function
"


def bubble_sort(list):
    for i in range(len(list) - 1):
        for j in range(len(list) - 1):
            if list[j] > list[j + 1]:
                list[j], list[j + 1] = list[j + 1], list[j]
    return list


print(bubble_sort([5, 4, 3, 2, 1]))
```
</details>

<details>
<summary>Baichuan-13B-Chat</summary>

```sh
python3 chatglm_cpp/convert.py -i baichuan-inc/Baichuan-13B-Chat -t q4_0 -o baichuan-13b-chat-ggml.bin
./build/bin/main -m baichuan-13b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.1
# 你好!有什么我可以帮助你的吗?
```
</details>

<details>
<summary>Baichuan2-7B-Chat</summary>

```sh
python3 chatglm_cpp/convert.py -i baichuan-inc/Baichuan2-7B-Chat -t q4_0 -o baichuan2-7b-chat-ggml.bin
./build/bin/main -m baichuan2-7b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05
# 你好!很高兴为您提供帮助。请问有什么问题我可以帮您解答?
```
</details>

<details>
<summary>Baichuan2-13B-Chat</summary>

```sh
python3 chatglm_cpp/convert.py -i baichuan-inc/Baichuan2-13B-Chat -t q4_0 -o baichuan2-13b-chat-ggml.bin
./build/bin/main -m baichuan2-13b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05
# 你好!今天我能为您提供什么帮助?
```
</details>

<details>
<summary>InternLM-Chat-7B</summary>

```sh
python3 chatglm_cpp/convert.py -i internlm/internlm-chat-7b-v1_1 -t q4_0 -o internlm-chat-7b-ggml.bin
./build/bin/main -m internlm-chat-7b-ggml.bin -p 你好 --top_p 0.8 --temp 0.8
# 你好,我是书生·浦语,有什么可以帮助你的吗?
```
</details>

<details>
<summary>InternLM-Chat-20B</summary>

```sh
python3 chatglm_cpp/convert.py -i internlm/internlm-chat-20b -t q4_0 -o internlm-chat-20b-ggml.bin
./build/bin/main -m internlm-chat-20b-ggml.bin -p 你好 --top_p 0.8 --temp 0.8
# 你好!有什么我可以帮到你的吗?
```
</details>

## Using BLAS

A BLAS library can be integrated to further accelerate matrix multiplication. However, in some cases using BLAS may actually degrade performance, so decide whether to enable it based on your own benchmark results.

**Accelerate Framework**

The Accelerate framework is automatically enabled on macOS. To disable it, add the CMake flag `-DGGML_NO_ACCELERATE=ON`.

**OpenBLAS**

OpenBLAS provides acceleration on CPU. Add the CMake flag `-DGGML_OPENBLAS=ON` to enable it.
```sh
cmake -B build -DGGML_OPENBLAS=ON && cmake --build build -j
```

**cuBLAS**

cuBLAS uses NVIDIA GPU to accelerate BLAS. Add the CMake flag `-DGGML_CUBLAS=ON` to enable it.
```sh
cmake -B build -DGGML_CUBLAS=ON && cmake --build build -j
```

By default, kernels are compiled for all possible CUDA architectures, which takes some time. To run on a specific type of device, you may set `CUDA_ARCHITECTURES` to speed up nvcc compilation. For example:
```sh
cmake -B build -DGGML_CUBLAS=ON -DCUDA_ARCHITECTURES="80"       # for A100
cmake -B build -DGGML_CUBLAS=ON -DCUDA_ARCHITECTURES="70;75"    # compatible with both V100 and T4
```

To find out the CUDA architecture of your GPU device, see [Your GPU Compute Capability](https://developer.nvidia.com/cuda-gpus).

**Metal**

MPS (Metal Performance Shaders) allows computation to run on powerful Apple Silicon GPU. Add the CMake flag `-DGGML_METAL=ON` to enable it.
```sh
cmake -B build -DGGML_METAL=ON && cmake --build build -j
```

## Python Binding

The Python binding provides high-level `chat` and `stream_chat` interfaces similar to those of the original Hugging Face ChatGLM(2)-6B.

**Installation**

Install from PyPI (recommended); this will trigger compilation on your platform.
```sh
pip install -U chatglm-cpp
```

To enable cuBLAS acceleration on NVIDIA GPU:
```sh
CMAKE_ARGS="-DGGML_CUBLAS=ON" pip install -U chatglm-cpp
```

To enable Metal on Apple silicon devices:
```sh
CMAKE_ARGS="-DGGML_METAL=ON" pip install -U chatglm-cpp
```

You may also install from source. Add the corresponding `CMAKE_ARGS` for acceleration.
```sh
# install from the latest source hosted on GitHub
pip install git+https://github.com/li-plus/chatglm.cpp.git@main
# or install from your local source after git cloning the repo
pip install .
```

Pre-built wheels for the CPU backend on Linux / macOS / Windows are published on the [release](https://github.com/li-plus/chatglm.cpp/releases) page. For CUDA / Metal backends, please compile from source or from the source distribution.

**Using Pre-converted GGML Models**

Here is a simple demo that uses `chatglm_cpp.Pipeline` to load the GGML model and chat with it. First enter the examples folder (`cd examples`) and launch a Python interactive shell:
```python
>>> import chatglm_cpp
>>> 
>>> pipeline = chatglm_cpp.Pipeline("../chatglm-ggml.bin")
>>> pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
ChatMessage(role="assistant", content="你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。", tool_calls=[])
```
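
Multi-turn conversation works the same way: append the returned assistant message and your next user message to the history, then call `chat` again. A minimal sketch (the follow-up prompt is just an illustration):
```python
import chatglm_cpp

pipeline = chatglm_cpp.Pipeline("../chatglm-ggml.bin")

# The conversation history is simply a growing list of ChatMessage objects.
history = [chatglm_cpp.ChatMessage(role="user", content="你好")]
reply = pipeline.chat(history)                       # first round
history.append(reply)                                # keep the assistant reply in context
history.append(chatglm_cpp.ChatMessage(role="user", content="请用一句话介绍你自己"))
reply = pipeline.chat(history)                       # second round sees the full history
print(reply.content)
```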

To chat with streaming output, run the Python example below:
```sh
python3 cli_demo.py -m ../chatglm-ggml.bin -i
```

Launch a web demo to chat in your browser:
```sh
python3 web_demo.py -m ../chatglm-ggml.bin
```

![web_demo](docs/web_demo.jpg)

For other models:

<details open>
<summary>ChatGLM2-6B</summary>

```sh
python3 cli_demo.py -m ../chatglm2-ggml.bin -p 你好 --temp 0.8 --top_p 0.8  # CLI demo
python3 web_demo.py -m ../chatglm2-ggml.bin --temp 0.8 --top_p 0.8  # web demo
```
</details>

<details open>
<summary>ChatGLM3-6B</summary>

**CLI Demo**

Chat mode:
```sh
python3 cli_demo.py -m ../chatglm3-ggml.bin -p 你好 --temp 0.8 --top_p 0.8
```

Function call:
```sh
python3 cli_demo.py -m ../chatglm3-ggml.bin --temp 0.8 --top_p 0.8 --sp system/function_call.txt -i
```

Code interpreter:
```sh
python3 cli_demo.py -m ../chatglm3-ggml.bin --temp 0.8 --top_p 0.8 --sp system/code_interpreter.txt -i
```

**Web Demo**

Install Python dependencies and the IPython kernel for code interpreter.
```sh
pip install streamlit jupyter_client ipython ipykernel
ipython kernel install --name chatglm3-demo --user
```

Launch the web demo:
```sh
streamlit run chatglm3_demo.py
```

| Function Call               | Code Interpreter               |
|-----------------------------|--------------------------------|
| ![](docs/function_call.png) | ![](docs/code_interpreter.png) |

</details>

<details>
<summary>CodeGeeX2</summary>

```sh
# CLI demo
python3 cli_demo.py -m ../codegeex2-ggml.bin --temp 0 --mode generate -p "\
# language: Python
# write a bubble sort function
"
# web demo
python3 web_demo.py -m ../codegeex2-ggml.bin --temp 0 --max_length 512 --mode generate --plain
```
</details>

<details>
<summary>Baichuan-13B-Chat</summary>

```sh
python3 cli_demo.py -m ../baichuan-13b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.1 # CLI demo
python3 web_demo.py -m ../baichuan-13b-chat-ggml.bin --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.1   # web demo
```
</details>

<details>
<summary>Baichuan2-7B-Chat</summary>

```sh
python3 cli_demo.py -m ../baichuan2-7b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05 # CLI demo
python3 web_demo.py -m ../baichuan2-7b-chat-ggml.bin --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05   # web demo
```
</details>

<details>
<summary>Baichuan2-13B-Chat</summary>

```sh
python3 cli_demo.py -m ../baichuan2-13b-chat-ggml.bin -p 你好 --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05 # CLI demo
python3 web_demo.py -m ../baichuan2-13b-chat-ggml.bin --top_k 5 --top_p 0.85 --temp 0.3 --repeat_penalty 1.05   # web demo
```
</details>

<details>
<summary>InternLM-Chat-7B</summary>

```sh
python3 cli_demo.py -m ../internlm-chat-7b-ggml.bin -p 你好 --top_p 0.8 --temp 0.8  # CLI demo
python3 web_demo.py -m ../internlm-chat-7b-ggml.bin --top_p 0.8 --temp 0.8  # web demo
```
</details>

<details>
<summary>InternLM-Chat-20B</summary>

```sh
python3 cli_demo.py -m ../internlm-chat-20b-ggml.bin -p 你好 --top_p 0.8 --temp 0.8 # CLI demo
python3 web_demo.py -m ../internlm-chat-20b-ggml.bin --top_p 0.8 --temp 0.8 # web demo
```
</details>

**Converting Hugging Face LLMs at Runtime**

Sometimes it might be inconvenient to convert and save the intermediate GGML models beforehand. Here is an option to load the original Hugging Face model directly, quantize it into GGML format on the fly, and start serving, all within about a minute. All you need to do is replace the GGML model path with the Hugging Face model name or path.
```python
>>> import chatglm_cpp
>>> 
>>> pipeline = chatglm_cpp.Pipeline("THUDM/chatglm-6b", dtype="q4_0")
Loading checkpoint shards: 100%|██████████████████████████████████| 8/8 [00:10<00:00,  1.27s/it]
Processing model states: 100%|████████████████████████████████| 339/339 [00:23<00:00, 14.73it/s]
...
>>> pipeline.chat([chatglm_cpp.ChatMessage(role="user", content="你好")])
ChatMessage(role="assistant", content="你好👋!我是人工智能助手 ChatGLM-6B,很高兴见到你,欢迎问我任何问题。", tool_calls=[])
```

Likewise, replace the GGML model path with the Hugging Face model name in any example script, and it just works. For example:
```sh
python3 cli_demo.py -m THUDM/chatglm-6b -p 你好 -i
```

## API Server

We support various API servers to integrate with popular frontends. Extra dependencies can be installed with:
```sh
pip install 'chatglm-cpp[api]'
```
Remember to add the corresponding `CMAKE_ARGS` to enable acceleration.

**LangChain API**

Start the API server for LangChain:
```sh
MODEL=./chatglm2-ggml.bin uvicorn chatglm_cpp.langchain_api:app --host 127.0.0.1 --port 8000
```

Test the API endpoint with `curl`:
```sh
curl http://127.0.0.1:8000 -H 'Content-Type: application/json' -d '{"prompt": "你好"}'
```
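
The same endpoint can also be called from Python; a minimal sketch using `requests`, mirroring the curl command above:
```python
import requests

# POST a JSON body with a "prompt" field, exactly as in the curl example above.
response = requests.post("http://127.0.0.1:8000", json={"prompt": "你好"})
response.raise_for_status()
print(response.json())
```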

Run with LangChain:
```python
>>> from langchain.llms import ChatGLM
>>> 
>>> llm = ChatGLM(endpoint_url="http://127.0.0.1:8000")
>>> llm.predict("你好")
'你好👋!我是人工智能助手 ChatGLM2-6B,很高兴见到你,欢迎问我任何问题。'
```

For more options, please refer to [examples/langchain_client.py](examples/langchain_client.py) and [LangChain ChatGLM Integration](https://python.langchain.com/docs/integrations/llms/chatglm).

**OpenAI API**

Start an API server compatible with [OpenAI chat completions protocol](https://platform.openai.com/docs/api-reference/chat):
```sh
MODEL=./chatglm3-ggml.bin uvicorn chatglm_cpp.openai_api:app --host 127.0.0.1 --port 8000
```

Test your endpoint with `curl`:
```sh
curl http://127.0.0.1:8000/v1/chat/completions -H 'Content-Type: application/json' \
    -d '{"messages": [{"role": "user", "content": "你好"}]}'
```

Use the OpenAI client to chat with your model:
```python
>>> from openai import OpenAI
>>> 
>>> client = OpenAI(base_url="http://127.0.0.1:8000/v1")
>>> response = client.chat.completions.create(model="default-model", messages=[{"role": "user", "content": "你好"}])
>>> response.choices[0].message.content
'你好👋!我是人工智能助手 ChatGLM3-6B,很高兴见到你,欢迎问我任何问题。'
```

For stream response, check out the example client script:
```sh
OPENAI_BASE_URL=http://127.0.0.1:8000/v1 python3 examples/openai_client.py --stream --prompt 你好
```
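
You can also stream directly from Python with the official `openai` client. A minimal sketch, assuming the server follows the standard OpenAI streaming format (the API key is a placeholder, which a local server is not expected to validate):
```python
from openai import OpenAI

# The api_key is a placeholder; a local server is not expected to check it.
client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="unused")
stream = client.chat.completions.create(
    model="default-model",
    messages=[{"role": "user", "content": "你好"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental delta of the assistant reply.
    print(chunk.choices[0].delta.content or "", end="", flush=True)
print()
```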

Tool calling is also supported:
```sh
OPENAI_BASE_URL=http://127.0.0.1:8000/v1 python3 examples/openai_client.py --tool_call --prompt 上海天气怎么样
```
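
If you would rather issue the tool call from your own code than through the example script, here is a hedged sketch with the `openai` client. The `get_weather` tool and its schema are made up purely for illustration, and the sketch assumes the server accepts the standard OpenAI `tools` field:
```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="unused")

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Query the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="default-model",
    messages=[{"role": "user", "content": "上海天气怎么样"}],
    tools=tools,
)
message = response.choices[0].message
if message.tool_calls:
    # The model chose to call a tool; run it yourself and send the result back.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```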

With this API server as backend, ChatGLM.cpp models can be seamlessly integrated into any frontend that uses OpenAI-style API, including [mckaywrigley/chatbot-ui](https://github.com/mckaywrigley/chatbot-ui), [fuergaosi233/wechat-chatgpt](https://github.com/fuergaosi233/wechat-chatgpt), [Yidadaa/ChatGPT-Next-Web](https://github.com/Yidadaa/ChatGPT-Next-Web), and more.

## Using Docker

**Option 1: Building Locally**

Build the Docker image locally and start a container to run inference on CPU:
```sh
docker build . --network=host -t chatglm.cpp
# cpp demo
docker run -it --rm -v $PWD:/opt chatglm.cpp ./build/bin/main -m /opt/chatglm-ggml.bin -p "你好"
# python demo
docker run -it --rm -v $PWD:/opt chatglm.cpp python3 examples/cli_demo.py -m /opt/chatglm-ggml.bin -p "你好"
# langchain api server
docker run -it --rm -v $PWD:/opt -p 8000:8000 -e MODEL=/opt/chatglm-ggml.bin chatglm.cpp \
    uvicorn chatglm_cpp.langchain_api:app --host 0.0.0.0 --port 8000
# openai api server
docker run -it --rm -v $PWD:/opt -p 8000:8000 -e MODEL=/opt/chatglm-ggml.bin chatglm.cpp \
    uvicorn chatglm_cpp.openai_api:app --host 0.0.0.0 --port 8000
```

For CUDA support, make sure [nvidia-docker](https://github.com/NVIDIA/nvidia-docker) is installed. Then run:
```sh
docker build . --network=host -t chatglm.cpp-cuda \
    --build-arg BASE_IMAGE=nvidia/cuda:12.2.0-devel-ubuntu20.04 \
    --build-arg CMAKE_ARGS="-DGGML_CUBLAS=ON"
docker run -it --rm --gpus all -v $PWD:/chatglm.cpp/models chatglm.cpp-cuda ./build/bin/main -m models/chatglm-ggml.bin -p "你好"
```

**Option 2: Using Pre-built Image**

The pre-built image for CPU inference is published on both [Docker Hub](https://hub.docker.com/repository/docker/liplusx/chatglm.cpp) and [GitHub Container Registry (GHCR)](https://github.com/li-plus/chatglm.cpp/pkgs/container/chatglm.cpp).

To pull from Docker Hub and run the demo:
```sh
docker run -it --rm -v $PWD:/opt liplusx/chatglm.cpp:main \
    ./build/bin/main -m /opt/chatglm-ggml.bin -p "你好"
```

To pull from GHCR and run the demo:
```sh
docker run -it --rm -v $PWD:/opt ghcr.io/li-plus/chatglm.cpp:main \
    ./build/bin/main -m /opt/chatglm-ggml.bin -p "你好"
```

The Python demo and API servers are also supported in the pre-built image. Use them in the same way as in **Option 1**.

## Performance

Environment:
* CPU backend performance is measured on a Linux server with Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz using 16 threads.
* CUDA backend is measured on a V100-SXM2-32GB GPU using 1 thread.
* MPS backend is measured on an Apple M2 Ultra device using 1 thread.

ChatGLM-6B:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 74    | 77    | 86    | 89    | 114   | 189   |
| ms/token (CUDA @ V100 SXM2)    | 8.1   | 8.7   | 9.4   | 9.5   | 12.0  | 19.1  |
| ms/token (MPS @ M2 Ultra)      | 11.5  | 12.3  | N/A   | N/A   | 16.1  | 24.4  |
| file size                      | 3.3G  | 3.7G  | 4.0G  | 4.4G  | 6.2G  | 12G   |
| mem usage                      | 4.0G  | 4.4G  | 4.7G  | 5.1G  | 6.9G  | 13G   |

ChatGLM2-6B / ChatGLM3-6B / CodeGeeX2:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 64    | 71    | 79    | 83    | 106   | 189   |
| ms/token (CUDA @ V100 SXM2)    | 7.9   | 8.3   | 9.2   | 9.2   | 11.7  | 18.5  |
| ms/token (MPS @ M2 Ultra)      | 10.0  | 10.8  | N/A   | N/A   | 14.5  | 22.2  |
| file size                      | 3.3G  | 3.7G  | 4.0G  | 4.4G  | 6.2G  | 12G   |
| mem usage                      | 3.4G  | 3.8G  | 4.1G  | 4.5G  | 6.2G  | 12G   |

Baichuan-7B / Baichuan2-7B:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 85.3  | 94.8  | 103.4 | 109.6 | 136.8 | 248.5 |
| ms/token (CUDA @ V100 SXM2)    | 8.7   | 9.2   | 10.2  | 10.3  | 13.2  | 21.0  |
| ms/token (MPS @ M2 Ultra)      | 11.3  | 12.0  | N/A   | N/A   | 16.4  | 25.6  |
| file size                      | 4.0G  | 4.4G  | 4.9G  | 5.3G  | 7.5G  | 14G   |
| mem usage                      | 4.5G  | 4.9G  | 5.3G  | 5.7G  | 7.8G  | 14G   |

Baichuan-13B / Baichuan2-13B:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 161.7 | 175.8 | 189.9 | 192.3 | 255.6 | 459.6 |
| ms/token (CUDA @ V100 SXM2)    | 13.7  | 15.1  | 16.3  | 16.9  | 21.9  | 36.8  |
| ms/token (MPS @ M2 Ultra)      | 18.2  | 18.8  | N/A   | N/A   | 27.2  | 44.4  |
| file size                      | 7.0G  | 7.8G  | 8.5G  | 9.3G  | 14G   | 25G   |
| mem usage                      | 7.8G  | 8.8G  | 9.5G  | 10G   | 14G   | 25G   |

InternLM-7B:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 85.3  | 90.1  | 103.5 | 112.5 | 137.3 | 232.2 |
| ms/token (CUDA @ V100 SXM2)    | 9.1   | 9.4   | 10.5  | 10.5  | 13.3  | 21.1  |

InternLM-20B:

|                                | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|--------------------------------|-------|-------|-------|-------|-------|-------|
| ms/token (CPU @ Platinum 8260) | 230.0 | 236.7 | 276.6 | 290.6 | 357.1 | N/A   |
| ms/token (CUDA @ V100 SXM2)    | 21.6  | 23.2  | 25.0  | 25.9  | 33.4  | N/A   |

## Model Quality

We measure model quality by evaluating the perplexity over the WikiText-2 test dataset, following the strided sliding window strategy in https://huggingface.co/docs/transformers/perplexity. Lower perplexity usually indicates a better model.
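
For reference, the perplexity of a tokenized sequence $x_1, \dots, x_N$ is the exponentiated average negative log-likelihood; the strided sliding window approximates the full context $x_{<i}$ with at most the previous 2048 tokens, advancing 512 tokens per step:

$$
\mathrm{PPL}(X) = \exp\left( -\frac{1}{N} \sum_{i=1}^{N} \log p_\theta(x_i \mid x_{<i}) \right)
$$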

Download and unzip the dataset from [link](https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip). Measure the perplexity with a stride of 512 and max input length of 2048:
```sh
./build/bin/perplexity -m <model_path> -f wikitext-2-raw/wiki.test.raw -s 512 -l 2048
```

|                         | Q4_0  | Q4_1  | Q5_0  | Q5_1  | Q8_0  | F16   |
|-------------------------|-------|-------|-------|-------|-------|-------|
| [ChatGLM3-6B-Base][1]   | 6.215 | 6.184 | 5.997 | 6.015 | 5.965 | 5.971 |

[1]: https://huggingface.co/THUDM/chatglm3-6b-base
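
To cross-check the evaluation procedure against a Hugging Face checkpoint, here is a condensed Python sketch of the same strided sliding-window evaluation, adapted from the Hugging Face perplexity guide. GPT-2 is used as a stand-in model only so the snippet runs out of the box; its numbers are unrelated to the table above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # stand-in checkpoint; substitute the model you want to evaluate
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).eval()

text = open("wikitext-2-raw/wiki.test.raw").read()
input_ids = tokenizer(text, return_tensors="pt").input_ids  # shape [1, seq_len]

stride = 512
max_length = min(2048, model.config.max_position_embeddings)

nll_sum, token_count, prev_end = 0.0, 0, 0
for begin in range(0, input_ids.size(1), stride):
    end = min(begin + max_length, input_ids.size(1))
    target_len = end - prev_end                 # only score tokens not scored in a previous window
    ids = input_ids[:, begin:end]
    labels = ids.clone()
    labels[:, :-target_len] = -100              # mask the overlapping context from the loss
    with torch.no_grad():
        loss = model(ids, labels=labels).loss   # mean NLL over the scored tokens
    nll_sum += loss.item() * target_len
    token_count += target_len
    prev_end = end
    if end == input_ids.size(1):
        break

print("perplexity:", torch.exp(torch.tensor(nll_sum / token_count)).item())
```
Note that this evaluates the full-precision Hugging Face weights rather than a quantized GGML file, so it only sanity-checks the evaluation procedure itself.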

## Development

**Unit Test & Benchmark**

To run the unit tests, add the CMake flag `-DCHATGLM_ENABLE_TESTING=ON`, then recompile and run the test binary (benchmarks included).
```sh
mkdir -p build && cd build
cmake .. -DCHATGLM_ENABLE_TESTING=ON && make -j
./bin/chatglm_test
```

For benchmark only:
```sh
./bin/chatglm_test --gtest_filter='Benchmark.*'
```

**Lint**

To format the code, run `make lint` inside the `build` folder. You should have `clang-format`, `black` and `isort` pre-installed.

**Performance**

To identify performance bottlenecks, add the CMake flag `-DGGML_PERF=ON`:
```sh
cmake .. -DGGML_PERF=ON && make -j
```
This will print timing for each graph operation when running the model.

## Acknowledgements

* This project is greatly inspired by [@ggerganov](https://github.com/ggerganov)'s [llama.cpp](https://github.com/ggerganov/llama.cpp) and is based on his NN library [ggml](https://github.com/ggerganov/ggml).
* Thanks to [@THUDM](https://github.com/THUDM) for the amazing [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) and [ChatGLM3-6B](https://github.com/THUDM/ChatGLM3), and for releasing the model sources and checkpoints.

            
