huixiangdou

Name: huixiangdou
Version: 0.1.0rc1
Home page: https://github.com/InternLM/huixiangdou
Summary: Overcoming Group Chat Scenarios with LLM-based Technical Assistance
Author: OpenMMLab
Upload time: 2024-01-14 13:10:18
            <div align="center">
  <img src="resource/logo_blue.svg" width="550px"/>

<small> [įŽ€äŊ“中文](README_zh.md) | English </small>

[![GitHub license](https://img.shields.io/badge/license-BSD--3--Clause-brightgreen.svg?style=plastic)](./LICENSE)
![CI](https://img.shields.io/github/actions/workflow/status/internlm/huixiangdou/lint.yml?branch=master&style=plastic)

</div>

"HuixiangDou" is a domain-specific knowledge assistant based on the LLM. Features:

1. Deal with complex scenarios like group chats, answer user questions without causing message flooding.
2. Propose an algorithm pipeline for answering technical questions.
3. Low deployment cost, only need the LLM model to meet 4 traits can answer most of the user's questions, see [technical report](./resource/HuixiangDou.pdf).

View [HuixiangDou inside](./huixiangdou-inside.md).

# đŸ“Ļ Hardware Requirements

The hardware requirements for running HuixiangDou are listed below. We suggest following this document in order, starting with the basic version and gradually trying the advanced features.

|     Version      | GPU Memory Requirements |                      Features                      |                                Tested on Linux                                |
| :--------------: | :---------------------: | :------------------------------------------------: | :---------------------------------------------------------------------------: |
|  Basic Version   |          20GB           | Answer basic domain knowledge questions, zero cost | ![](https://img.shields.io/badge/3090%2024G-passed-blue?style=for-the-badge)  |
| Advanced Version |          40GB           |   Answer source code level questions, zero cost    | ![](https://img.shields.io/badge/A100%2080G-passed-blue?style=for-the-badge)  |
| Modified Version |           4GB           |      Uses the openai API; operation incurs cost      | ![](https://img.shields.io/badge/1660ti%206G-passed-blue?style=for-the-badge) |

# đŸ”Ĩ Run

We take lmdeploy & mmpose as examples to show how to deploy the knowledge assistant to a Feishu group chat.

## STEP1. Establish Topic Feature Repository

Execute all the commands below (the lines starting with '#' are comments and can be pasted along with the rest).

```shell
# Download the repo
git clone https://github.com/internlm/huixiangdou --depth=1 && cd huixiangdou

# Download chatting topics
mkdir repodir
git clone https://github.com/open-mmlab/mmpose --depth=1 repodir/mmpose
git clone https://github.com/internlm/lmdeploy --depth=1 repodir/lmdeploy

# Build a feature store
mkdir workdir # create a working directory
python3 -m pip install -r requirements.txt # install dependencies; python3.11 additionally needs `conda install conda-forge::faiss-gpu`
python3 -m huixiangdou.service.feature_store # save the features of repodir to workdir
```

The first run automatically downloads the [text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese) model. You can also download it manually and update the model path in `config.ini`.
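
If you download the model manually, pointing HuixiangDou at the local copy is a one-line change in `config.ini`. Below is a sketch, assuming the embedding model path lives under a `model_path`-style key; verify the section and key names against the `config.ini` template shipped with the repo.

```shell
# config.ini (sketch; section/key names may differ in the shipped template)
[feature_store]
..
model_path = "/path/to/text2vec-large-chinese"
```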

After running, HuixiangDou can distinguish which user topics should be handled and which chitchat should be rejected. Please edit [good_questions](./resource/good_questions.json) and [bad_questions](./resource/bad_questions.json), and try your own domain knowledge (medical, finance, electricity, etc.).

```shell
# Accept technical topics
process query: Does mmdeploy support mmtrack model conversion now?
process query: Are there any Chinese text to speech models?
# Reject chitchat
reject query: What to eat for lunch today?
reject query: How to make HuixiangDou?
```
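
Assuming the two files follow a simple list-of-strings format like the shipped examples, extending them for your own domain might look like the following; the entries below are hypothetical:

```json
[
  "How to install mmpose on Windows?",
  "Does lmdeploy support kv cache int8 quantization?"
]
```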

## STEP2. Run Basic Technical Assistant

**Configure free TOKEN**

HuixiangDou uses a search engine. Click [Serper](https://serper.dev/api-key) to obtain a free, quota-limited token and fill it into `config.ini`.

```shell
# config.ini
..
[web_search]
x_api_key = "${YOUR-X-API-KEY}"
..
```
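
For reference, the token authenticates requests to Serper's search endpoint. Below is a minimal sketch of such a call, separate from HuixiangDou's own client code; the endpoint and `X-API-KEY` header follow Serper's public docs.

```python
# Minimal sketch of a Serper web-search call; HuixiangDou's internal
# client may differ.
import requests

resp = requests.post(
    "https://google.serper.dev/search",
    headers={"X-API-KEY": "${YOUR-X-API-KEY}"},
    json={"q": "how to install mmpose"},
    timeout=10,
)
for hit in resp.json().get("organic", [])[:3]:  # top organic results
    print(hit.get("title"), hit.get("link"))
```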

**Test Q&A Effect**

Please ensure that GPU memory exceeds 20GB (e.g., a 3090 or better). If memory is insufficient, adjust the configuration according to the FAQ.

The first run automatically downloads internlm2-7B.

- **Non-docker users**. If you **don't** use a docker environment, you can start all services at once.

  ```shell
  # standalone
  python3 -m huixiangdou.main --standalone
  ..
  ErrorCode.SUCCESS,
  Query: Could you please advise if there is any good optimization method for video stream detection flickering caused by frame skipping?
  Reply:
  1. Frame rate control and frame skipping strategy are key to optimizing video stream detection performance, but you need to pay attention to the impact of frame skipping on detection results.
  2. Multithreading processing and caching mechanism can improve detection efficiency, but you need to pay attention to the stability of detection results.
  3. The use of sliding window method can reduce the impact of frame skipping and caching on detection results.
  ```

- **Docker users**. If you are using docker, HuixiangDou's Hybrid LLM Service needs to be deployed separately.

  ```shell
  # Start LLM service
  python3 -m huixiangdou.service.llm_server_hybrid
  ```

  Open a new terminal, configure the host IP (**not** the container IP) in `config.ini`, then run:

  ```shell
  # config.ini
  [llm]
  ..
  client_url = "http://10.140.24.142:8888/inference" # example

  python3 -m huixiangdou.main
  ```
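
  For a quick smoke test of the separated service, the sketch below posts to the configured endpoint. The `prompt` field is an assumption, not HuixiangDou's actual schema; check `llm_server_hybrid.py` for the real request format.

  ```python
  # Hypothetical smoke test against the hybrid LLM service endpoint;
  # the payload schema ("prompt") is assumed -- see llm_server_hybrid.py
  # for the actual request format.
  import requests

  resp = requests.post(
      "http://10.140.24.142:8888/inference",  # client_url from config.ini
      json={"prompt": "Hello"},
      timeout=30,
  )
  print(resp.status_code, resp.text)
  ```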

## STEP3. Integrate into Feishu \[Optional\]

Click [Create a Feishu Custom Robot](https://open.feishu.cn/document/client-docs/bot-v3/add-custom-bot) to get the WEBHOOK_URL callback, and fill it into `config.ini`.

```shell
# config.ini
..
[frontend]
type = "lark"
webhook_url = "${YOUR-LARK-WEBHOOK-URL}"
```

Run the assistant. When it finishes, the technical assistant's reply will be sent to the Feishu group chat.

```shell
python3 -m huixiangdou.main --standalone # for non-docker users
python3 -m huixiangdou.main # for docker users
```

<img src="./resource/figures/lark-example.png" width="400">

If you also need to read Feishu group messages, see [Feishu Developer Square - Add Application Capabilities - Robots](https://open.feishu.cn/app?lang=zh-CN).
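
For reference, the webhook itself is a plain HTTP endpoint, and HuixiangDou's `lark` frontend handles the posting for you. A minimal sketch of sending a text message directly, with the message shape following Feishu's custom-bot docs:

```python
# Minimal sketch of posting text to a Feishu custom-bot webhook;
# the msg_type/content shape follows Feishu's custom bot docs.
import requests

requests.post(
    "${YOUR-LARK-WEBHOOK-URL}",
    json={"msg_type": "text", "content": {"text": "HuixiangDou is online"}},
    timeout=10,
)
```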

## STEP4. Advanced Version \[Optional\]

The basic version may not perform well enough. You can enable the following features to improve performance; the more of them you enable, the better the assistant performs.

1. Use a higher-accuracy local LLM

   Adjust the `llm.local` model in `config.ini` to `internlm2-20B`.
   This option has a significant effect but requires more GPU memory.
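
   A sketch of the change, assuming the `llm.local` notation above denotes a `local` key under the `[llm]` section; verify against the shipped `config.ini`:

   ```shell
   # config.ini (sketch; key name inferred from the `llm.local` notation)
   [llm]
   ..
   local = "internlm2-20B"
   ```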

2. Hybrid LLM Service

   For LLM services that support the openai interface, HuixiangDou can leverage their long-context ability.
   Taking [kimi](https://platform.moonshot.cn/) as an example, here is a sample `config.ini` configuration:

   ```shell
   # config.ini
   [llm]
   enable_local = 1
   enable_remote = 1
   ..
   [llm.server]
   ..
   # open https://platform.moonshot.cn/
   remote_type = "kimi"
   remote_api_key = "YOUR-KIMI-API-KEY"
   remote_llm_max_text_length = 128000
   remote_llm_model = "moonshot-v1-128k"
   ```

   We also support the ChatGPT API. Note that this feature increases response time and operating costs.

3. Repo search enhancement

   This feature is suitable for handling difficult questions and requires basic development capabilities to adjust the prompt.

   - Click [sourcegraph-account-access](https://sourcegraph.com/users/tpoisonooo/settings/tokens) to get a token

     ```shell
     # open https://github.com/sourcegraph/src-cli#installation
     sudo curl -L https://sourcegraph.com/.api/src-cli/src_linux_amd64 -o /usr/local/bin/src && chmod +x /usr/local/bin/src

     # Enable search and fill the token
     [worker]
     enable_sg_search = 1
     ..
     [sg_search]
     ..
     src_access_token = "${YOUR_ACCESS_TOKEN}"
     ```

   - Edit the name and introduction of the repo; we take opencompass as an example:

     ```shell
     # config.ini
     # add your repo here; we just take opencompass and lmdeploy as examples
     [sg_search.opencompass]
     github_repo_id = "open-compass/opencompass"
     introduction = "Used for evaluating large language models (LLM) .."
     ```

   - Run `python3 -m huixiangdou.service.sg_search` as a unit test; the returned content should include opencompass source code and documentation:

     ```shell
     python3 -m huixiangdou.service.sg_search
     ..
     "filepath": "opencompass/datasets/longbench/longbench_trivia_qa.py",
     "content": "from datasets import Dataset..
     ```

   Run `main.py`; HuixiangDou will enable search enhancement when appropriate.

4. Tune Parameters

   Adjusting parameters for your business scenario is often unavoidable. A minimal sketch of the accept/reject check these thresholds govern appears after this list.

   - Refer to [data.json](./tests/data.json) to add real data, run [test_intention_prompt.py](./tests/test_intention_prompt.py) to get suitable prompts and thresholds, and update them into [worker](./huixiangdou/service/worker.py).
   - Adjust the [number of search results](./huixiangdou/service/worker.py) based on the maximum length supported by the model.
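
Below is that sketch: a thresholded similarity check in the spirit of HuixiangDou's intention filtering. All names are illustrative, not the project's actual API.

```python
# Minimal sketch of threshold-based query acceptance: a query is kept
# if its embedding is close enough to any known good question.
import numpy as np

def is_domain_question(query_vec: np.ndarray,
                       good_vecs: np.ndarray,
                       threshold: float = 0.5) -> bool:
    """Accept a query whose embedding is near any good-question embedding."""
    # cosine similarity between the query and each good-question embedding
    q = query_vec / np.linalg.norm(query_vec)
    g = good_vecs / np.linalg.norm(good_vecs, axis=1, keepdims=True)
    return float(np.max(g @ q)) >= threshold

# usage sketch: embed()/embed_many() stand in for the text2vec model from STEP1
# accept = is_domain_question(embed("How to install mmpose?"), embed_many(good_questions))
```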

# 🛠ī¸ FAQ

1. How to access other IMs?

   - WeChat. For Enterprise WeChat, see the [Enterprise WeChat Application Development Guide](https://developer.work.weixin.qq.com/document/path/90594); for personal WeChat, we have confirmed with the WeChat team that there is currently no API, so you will need to investigate on your own.
   - DingTalk. Refer to [DingTalk Open Platform - Custom Robot Access](https://open.dingtalk.com/document/robots/custom-robot-access).

2. What if the robot is too cold/too chatty?

   - Fill in the questions that should be answered in the real scenario into `resource/good_questions.json`, and fill the ones that should be rejected into `resource/bad_questions.json`.
   - Adjust the theme content in `repodir` to ensure that the markdown documents in the main library do not contain irrelevant content.

   Re-run `service/feature_store.py` to update thresholds and feature libraries.

3. Launch succeeds, but it runs out of memory at runtime?

   Long-text inference with transformers-based LLMs requires more memory. In this case, apply kv cache quantization to the model, e.g., per the [lmdeploy quantization description](https://github.com/InternLM/lmdeploy/blob/main/docs/en/kv_int8.md). Then use docker to deploy the Hybrid LLM Service independently.

4. How to integrate another local LLM / the results are not ideal after integration?

   - Open [hybrid llm service](./huixiangdou/service/llm_server_hybrid.py) and add a new LLM inference implementation.
   - Refer to [test_intention_prompt and test data](./tests/test_intention_prompt.py), adjust prompt and threshold for the new model, and update them into [worker.py](./huixiangdou/service/worker.py).

5. What if the response is too slow or requests always fail?

   - Refer to [hybrid llm service](./huixiangdou/service/llm_server_hybrid.py) and add exponential backoff with retries; a minimal sketch follows.
   - Replace the local LLM with an inference framework such as [lmdeploy](https://github.com/internlm/lmdeploy) instead of native huggingface/transformers.
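
   The sketch below shows one way to wrap a flaky remote call with exponential backoff; the names are illustrative, and you would adapt this inside `llm_server_hybrid.py`'s request path.

   ```python
   # Hypothetical exponential-backoff wrapper for flaky remote LLM calls.
   import random
   import time

   def call_with_backoff(fn, max_retries: int = 5):
       for attempt in range(max_retries):
           try:
               return fn()
           except Exception:
               if attempt == max_retries - 1:
                   raise
               # sleep 1s, 2s, 4s, ... plus jitter to avoid thundering herd
               time.sleep(2 ** attempt + random.random())
   ```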

6. What if GPU memory is too low?

   In this case, running a local LLM is impossible; only a remote LLM combined with text2vec can execute the pipeline. Make sure `config.ini` uses only the remote LLM and that the local LLM is turned off.

# 📝 Citation

```bibtex
@misc{2023HuixiangDou,
    title={HuixiangDou: Overcoming Group Chat Scenarios with LLM-based Technical Assistance},
    author={HuixiangDou Contributors},
    howpublished = {\url{https://github.com/internlm/huixiangdou}},
    year={2023}
}
```

            
