pyllama

Name: pyllama
Version: 0.0.8
Home page: https://github.com/juncongmoo/pyllama
Summary: 🦙 LLaMA: Open and Efficient Foundation Language Models in A Single GPU
Author: Juncong Moo; Meta AI
Upload time: 2023-03-17 22:56:10
Keywords: llama
Requirements: torch>=1.12.0, fairscale>=0.4.13, fire~=0.5.0, hiq-python>=1.1.9, sentencepiece==0.1.97
# 🦙 LLaMA - Run LLM in A Single 4GB GPU


> 📢 `pyllama` is a hacked version of `LLaMA`, based on the original Facebook implementation but more convenient to run on a single consumer-grade GPU.

> The Hugging Face LLaMA implementation is available at `pyllama.hf`.
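A minimal sketch of how that interface might be used is below. The import path and class names (`LLaMATokenizer`, `LLaMAForCausalLM`) are assumptions about the module layout, so check `llama/hf` in the repository for the exact exports; the model directory is only an example and is assumed to contain weights already converted to the HF format.

```python
# Minimal sketch, assuming pyllama.hf exposes HF-style classes named
# LLaMATokenizer and LLaMAForCausalLM (verify the exact names in llama/hf),
# and that the weights have already been converted to the HF format.
import torch
from pyllama.hf import LLaMATokenizer, LLaMAForCausalLM  # assumed import path

model_dir = "/llama_data/7B_hf"  # example path to converted weights
tokenizer = LLaMATokenizer.from_pretrained(model_dir)
model = LLaMAForCausalLM.from_pretrained(model_dir, torch_dtype=torch.float16)

inputs = tokenizer("The capital of France is", return_tensors="pt")
output = model.generate(inputs.input_ids, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```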

## 📥 Installation

In a conda env with PyTorch / CUDA available, run:
```
pip install pyllama -U
```

> 🐏 If you have installed a llama library from other sources, please uninstall it first, then run `pip install pyllama -U` to install the latest version.


## 📦 Download Model Files

### 🧘‍♀️ Official Way

In order to download the checkpoints and tokenizer, fill out this [Google form](https://forms.gle/jk851eBVbX1m5TAv5).

Once your request is approved, you will receive links to download the tokenizer and model files.
Edit the `download.sh` script with the signed url provided in the email to download the model weights and tokenizer.

### 🐒 Community Way

- 1. pyllama

There is another, high-speed way to download the checkpoints and tokenizer. Four models (7B, 13B, 30B, 65B) are available. To download all of them, run the following command (a small Python wrapper around this CLI is also sketched at the end of this section):

```bash
python -m llama.download
```

To download only the 7B model files to your current directory, run:

```bash
python -m llama.download --model_size 7B
```

To download only the 7B and 30B model files to folder `/tmp/pyllama_data`, run:

```bash
python -m llama.download --model_size 7B,30B --folder /tmp/pyllama_data
```

The help doc is:
```bash
$ python -m llama.download --help
usage: download.py [-h] [--model_size MODEL_SIZE] [--folder FOLDER]

optional arguments:
  -h, --help            show this help message and exit
  --model_size MODEL_SIZE
                        The size of the models that you want to download. A comma separated
                        string of any of "7B", "13B", "30B", "65B". Totally 219G disk space
                        is needed to download them all. If you only want to download the 7B
                        model, just put "7B" here.
  --folder FOLDER       The target folder for the download files
```

- Sample Screenshot

![](docs/download.png)

- 2. BitTorrent

🔥 In order to download the checkpoints and tokenizer, use this BitTorrent link: "[magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA](magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA)".
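For scripted setups, the same download CLI can be driven from Python. This is a small sketch using only the standard library and the documented `--model_size` and `--folder` flags; the target folder is just an example.

```python
# Sketch: drive the documented `python -m llama.download` CLI from Python,
# using only the --model_size and --folder flags shown in the help text above.
import subprocess
import sys

def download_llama(model_sizes=("7B",), folder="/tmp/pyllama_data"):
    """Download the given model sizes into `folder` via llama.download."""
    cmd = [
        sys.executable, "-m", "llama.download",
        "--model_size", ",".join(model_sizes),
        "--folder", folder,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    download_llama(("7B", "30B"), "/tmp/pyllama_data")
```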


## 💎 Quantize LLaMA to run in a 4GB GPU

`pyllama` supports 2/3/4/8/16-bit quantization, so you can run the model on a GPU with 4GB of memory.

> You need to run `export HUGGING_FACE_HUB_TOKEN=XXX` to be able to access Hugging Face data. You also need to install [gptq](https://pypi.org/project/gptq/) with `pip install gptq`.
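Before starting a long quantization run, a quick pre-flight check (standard library only) that the token is set and `gptq` is importable can save time:

```python
# Pre-flight check before quantization: HF token exported and gptq installed.
import importlib.util
import os
import sys

if not os.environ.get("HUGGING_FACE_HUB_TOKEN"):
    sys.exit("Set HUGGING_FACE_HUB_TOKEN before running llama.llama_quant.")

if importlib.util.find_spec("gptq") is None:
    sys.exit("gptq is not installed; run `pip install gptq` first.")

print("Environment looks ready for quantization.")
```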

```bash
python -m llama.llama_quant --help
usage: llama_quant.py [-h] [--ckpt_dir CKPT_DIR] [--tokenizer_path TOKENIZER_PATH] 
                      [--seed SEED] [--nsamples NSAMPLES] [--percdamp PERCDAMP]
                      [--nearest] [--wbits {2,3,4,8,16}] [--groupsize GROUPSIZE]
                      [--save SAVE] [--load LOAD] [--benchmark BENCHMARK] [--check]
                      [--cuda CUDA] [--eval]
                      {wikitext2,ptb,c4}

positional arguments:
  {wikitext2,ptb,c4}    Where to extract calibration data from.

optional arguments:
  -h, --help            show this help message and exit
  --ckpt_dir CKPT_DIR
  --tokenizer_path TOKENIZER_PATH
  --seed SEED           Seed for sampling the calibration data.
  --nsamples NSAMPLES   Number of calibration data samples.
  --percdamp PERCDAMP   Percent of the average Hessian diagonal to use for dampening.
  --nearest             Whether to run the RTN baseline.
  --wbits {2,3,4,8,16}  bits for quantization
  --groupsize GROUPSIZE
                        Groupsize to use for quantization; default uses full row.
  --save SAVE           Save quantized checkpoint under this name, eg pyllama-7B4b.pt.
  --load LOAD           Load quantized model.
  --benchmark BENCHMARK
                        Number of tokens to use for benchmarking.
  --check               Whether to compute perplexity during benchmarking for verification.
  --cuda CUDA           GPU device string, 'cuda:0' by default.
  --eval                Evaluate the model with dataset wikitext2, ptb and c4
```

- Quantize 7B model to 8-bit

```bash
python -m llama.llama_quant decapoda-research/llama-7b-hf c4 --wbits 8 --save pyllama-7B8b.pt
```

- Quantize 7B model to 2-bit

```bash
python -m llama.llama_quant decapoda-research/llama-7b-hf c4 --wbits 2 --save pyllama-7B2b.pt
```
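The saved checkpoint can later be passed back with `--load` (together with, for example, `--benchmark`) as shown in the help text above. As a quick sanity check on the artifact itself, here is a sketch that assumes the file was written with `torch.save`:

```python
# Quick sanity check on a saved quantized checkpoint. This assumes the file
# produced by --save was written with torch.save, which is an assumption.
import os
import torch

path = "pyllama-7B8b.pt"  # example name from the command above
print(f"{path}: {os.path.getsize(path) / 1e9:.2f} GB on disk")

state = torch.load(path, map_location="cpu")
print("loaded object of type:", type(state).__name__)
```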

The download links for quantized LLaMA files are below:

- 7B

| Quant Type   |  Size (bytes)  |  Link | MD5 | Loss | Password |
|----------|:-------------:|------:|------:|------:|--:|
| 2-bit |  2160484475 | [🔗](https://pan.baidu.com/s/1zOdKOHnSCsz6TFix2NTFtg) | 4c7215d28c1f650218c43fc46402cec5|- | 8g9d |
| 3-bit |  - | - | -|- |-|
| 4-bit |  3779485819 | - | cce9a3b522ddf5c011ee0174b2ff3dfb|- |-|
| 8-bit |  7017493231 | - | 2648b09597cf8f9e0d1a04cb70b71cab|- |-|
| 16-bit |  - | - | -|- |-|
| 32-bit |  - | - | -|- |-|
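To confirm a downloaded file matches the table above, a short standard-library check (the file name is an example):

```python
# Verify the size and MD5 of a downloaded quantized checkpoint against the table.
import hashlib
import os

def md5sum(path, chunk_size=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "pyllama-7B2b.pt"  # example: the 2-bit 7B file
print("size:", os.path.getsize(path))  # expected 2160484475 for 2-bit
print("md5 :", md5sum(path))           # expected 4c7215d28c1f650218c43fc46402cec5
```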

It took me 2 hours 40 minutes to quantize the 65B model to 4-bit; the file size is reduced from 122GB to 32GB.

## 🔮 Single GPU Inference

### 🥥 Without Quantization

Set the environment variable `CKPT_DIR` to your LLaMA model folder, for example `/llama_data/7B`, and `TOKENIZER_PATH` to your tokenizer's path, such as `/llama_data/tokenizer.model`.

And then run the following command:

```bash
python inference.py --ckpt_dir $CKPT_DIR --tokenizer_path $TOKENIZER_PATH
```

The following is an example of LLaMA running on a single GPU with 8GB of memory.

![LLaMA Inference](https://raw.githubusercontent.com/juncongmoo/pyllama/main/docs/llama_inference.png)

### 🥝 With Quantization

With quantization, you can run LLaMA on a GPU with only 4GB of memory.

- pyllama can run the 7B model with 6GB of GPU memory.

![4bit-quant-6GB](https://github.com/juncongmoo/pyllama/blob/main/docs/pyllama_7B_6GB.png)

- pyllama can run the 7B model with 3.2GB of GPU memory.

![2bit-quant-6GB](https://github.com/juncongmoo/pyllama/blob/main/docs/pyllama_7B_3GB.png)

### 💡 Tips

- To keep the KV cache in CPU memory, run `export KV_CAHCHE_IN_GPU=0` in the shell (see the sketch after these tips).

- To profile CPU/GPU/Latency, run:

```bash
python inference_driver.py --ckpt_dir $CKPT_DIR --tokenizer_path $TOKENIZER_PATH
```

A sample result looks like this:

![LLaMA Inference](https://raw.githubusercontent.com/juncongmoo/pyllama/main/docs/llama_profiling.png)

- Tune `max_seq_len` and `max_batch_size` to reduce memory consumption so the model fits on your GPU. See [this post](https://github.com/juncongmoo/pyllama/issues/9) for details.
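Putting the tips together, here is a small sketch (standard library only) that sets the documented environment variables and launches `inference.py`; the paths are examples.

```python
# Sketch: set the documented environment variables and launch inference.py.
# The checkpoint and tokenizer paths are examples; adjust them to your setup.
import os
import subprocess
import sys

env = dict(os.environ)
env["KV_CAHCHE_IN_GPU"] = "0"  # keep the KV cache in CPU memory (spelling as in the README)
ckpt_dir = env.get("CKPT_DIR", "/llama_data/7B")
tokenizer_path = env.get("TOKENIZER_PATH", "/llama_data/tokenizer.model")

subprocess.run(
    [sys.executable, "inference.py",
     "--ckpt_dir", ckpt_dir,
     "--tokenizer_path", tokenizer_path],
    env=env,
    check=True,
)
```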

### 🍉 Start a Gradio web UI


```bash
$ cd apps/gradio
$ python webapp_single.py  --ckpt_dir $CKPT_DIR --tokenizer_path $TOKENIZER_PATH
```

You should see something like this in your browser:

![LLaMA Inference](https://raw.githubusercontent.com/juncongmoo/pyllama/main/docs/llama_webui.png)

### 🍓 Start a web server

The following command starts a Flask web server:

```bash
$ cd apps/flask
$ python web_server_single.py  --ckpt_dir $CKPT_DIR --tokenizer_path $TOKENIZER_PATH
```

## 🍒 Multiple GPU Inference

### 🧘‍♀️ Official Way

To use Meta's original model parallelism, set the environment variable `PYLLAMA_META_MP`:

```bash
export PYLLAMA_META_MP=1
```

With this environment variable set, `import llama` will import Meta's original version of llama.
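The same selection can be made from Python, as long as the variable is set before the import:

```python
# Select Meta's original model-parallel implementation before importing llama.
import os

os.environ["PYLLAMA_META_MP"] = "1"

import llama  # with the flag set, this resolves to the original Meta version
```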

The provided `example.py` can be run on a single- or multi-GPU node with `torchrun` and will output completions for two pre-defined prompts. Using `TARGET_FOLDER` as defined in `download.sh`:

```bash
torchrun --nproc_per_node MP example.py --ckpt_dir $TARGET_FOLDER/model_size \
  --tokenizer_path $TARGET_FOLDER/tokenizer.model
```

Different models require different MP values:

|  Model | MP |
|--------|----|
| 7B     | 1  |
| 13B    | 2  |
| 30B    | 4  |
| 65B    | 8  |
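The sketch below builds the documented `torchrun` command from this table; the `TARGET_FOLDER` layout follows `download.sh`, and the default path is only an example.

```python
# Build and run the documented torchrun command, picking MP from the table above.
import os
import subprocess

MP = {"7B": 1, "13B": 2, "30B": 4, "65B": 8}

def run_example(model_size="7B",
                target_folder=os.environ.get("TARGET_FOLDER", "/llama_data")):
    cmd = [
        "torchrun", "--nproc_per_node", str(MP[model_size]),
        "example.py",
        "--ckpt_dir", os.path.join(target_folder, model_size),
        "--tokenizer_path", os.path.join(target_folder, "tokenizer.model"),
    ]
    subprocess.run(cmd, check=True)

run_example("13B")  # launches example.py across 2 processes
```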

### 🐒 Community Way

There are two steps to run LLaMA in a multi-GPU environment; a sketch combining both steps follows the help output below.

- Convert original LLaMA model

```bash
$ python -m llama.convert_llama --help
usage: convert_llama.py [-h] [--ckpt_dir CKPT_DIR] [--tokenizer_path TOKENIZER_PATH]
                        [--model_size {7B,13B,30B,65B}] [--output_dir OUTPUT_DIR]
                        [--max_batch_size MAX_BATCH_SIZE] [--to {hf,fb}]

optional arguments:
  -h, --help            show this help message and exit
  --ckpt_dir CKPT_DIR
  --tokenizer_path TOKENIZER_PATH
  --model_size {7B,13B,30B,65B}
  --output_dir OUTPUT_DIR
                        Location to write HF model and tokenizer
  --max_batch_size MAX_BATCH_SIZE
  --to {hf,fb}
```

- Run with HF Accelerate on multiple GPUs

```bash
$ python -m llama.llama_multigpu --help
usage: llama_multigpu.py [-h] [--state_dict_dir STATE_DICT_DIR] [--model_size {7B,13B,30B,65B}]

optional arguments:
  -h, --help            show this help message and exit
  --state_dict_dir STATE_DICT_DIR
  --model_size {7B,13B,30B,65B}
```

![](https://github.com/juncongmoo/pyllama/blob/main/docs/llama_multigpu.png)
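As referenced above, here is a sketch chaining the two steps from Python, using only the flags shown in the help output. The paths are examples, and whether `--state_dict_dir` expects the conversion output directory is an assumption to verify against your setup.

```python
# Sketch of the two community-way steps, using only flags from the help output
# above. Paths are examples; whether llama_multigpu's --state_dict_dir should
# point at the conversion output directory is an assumption to verify.
import subprocess
import sys

def convert(ckpt_dir, tokenizer_path, model_size="7B", output_dir="/tmp/llama_hf"):
    """Step 1: convert the original LLaMA checkpoint (here, to HF format)."""
    subprocess.run([
        sys.executable, "-m", "llama.convert_llama",
        "--ckpt_dir", ckpt_dir,
        "--tokenizer_path", tokenizer_path,
        "--model_size", model_size,
        "--output_dir", output_dir,
        "--to", "hf",
    ], check=True)

def run_multigpu(state_dict_dir, model_size="7B"):
    """Step 2: run inference across GPUs with llama.llama_multigpu."""
    subprocess.run([
        sys.executable, "-m", "llama.llama_multigpu",
        "--state_dict_dir", state_dict_dir,
        "--model_size", model_size,
    ], check=True)

convert("/llama_data/7B", "/llama_data/tokenizer.model")
run_multigpu("/tmp/llama_hf")
```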

## 🔬 Model Fine Tuning

### With the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) Instruction-Following Dataset

- Tokenization
- Finetuning
- Efficient FT
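For the tokenization step, Alpaca records are usually rendered into a fixed instruction-following prompt before tokenizing. The sketch below uses the prompt template published in the Stanford Alpaca repository; the sample record is made up for illustration.

```python
# Render one Alpaca-style record into the instruction-following prompt format
# published by Stanford Alpaca; the resulting string is what gets tokenized.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

record = {  # illustrative example record
    "instruction": "Summarize the following sentence.",
    "input": "LLaMA is a collection of open and efficient foundation language models.",
    "output": "LLaMA is a set of efficient, open foundation language models.",
}

print(ALPACA_TEMPLATE.format(**record))
```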

## 🧬 LLaMA model structure

- Meta
- Hugging Face

<https://github.com/facebookresearch/llama/blob/main/llama/model.py#LL127C27-L127C27>
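As a schematic of what that `model.py` implements, below is a compact PyTorch sketch of the LLaMA decoder block layout (pre-norm RMSNorm, attention, and a SwiGLU feed-forward). It is illustrative only: the real implementation also uses rotary position embeddings, a causal mask, a KV cache, and fairscale model-parallel layers, and its attention module is hand-written rather than `nn.MultiheadAttention`.

```python
# Simplified sketch of the LLaMA decoder block layout (illustrative only):
# pre-normalization with RMSNorm, self-attention, then a SwiGLU feed-forward.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SwiGLU(nn.Module):
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim, bias=False)  # gate projection
        self.w3 = nn.Linear(dim, hidden_dim, bias=False)  # up projection
        self.w2 = nn.Linear(hidden_dim, dim, bias=False)  # down projection

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

class LLaMABlock(nn.Module):
    def __init__(self, dim=4096, n_heads=32, hidden_dim=11008):  # 7B-sized defaults
        super().__init__()
        self.attention_norm = RMSNorm(dim)
        self.attention = nn.MultiheadAttention(dim, n_heads, bias=False, batch_first=True)
        self.ffn_norm = RMSNorm(dim)
        self.feed_forward = SwiGLU(dim, hidden_dim)

    def forward(self, x):
        h = self.attention_norm(x)
        h, _ = self.attention(h, h, h, need_weights=False)
        x = x + h
        return x + self.feed_forward(self.ffn_norm(x))

x = torch.randn(1, 8, 4096)
print(LLaMABlock()(x).shape)  # torch.Size([1, 8, 4096])
```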

### Model Card

See [MODEL_CARD.md](https://github.com/juncongmoo/pyllama/blob/main/MODEL_CARD.md)

### License

See the [LICENSE](https://github.com/juncongmoo/pyllama/blob/main/LICENSE) file.



            
