grazier


Name: grazier
Version: 0.1.1
Summary: A tool for calling (and calling out to) large language models.
Author: David Chan <davidchan@berkeley.edu>
Home page: https://github.com/DavidMChan/grazier
Upload time: 2023-12-06 07:20:09
Requires Python: >=3.8.0
License: Copyright 2023, Regents of the University of California. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. THIS SOFTWARE AND/OR DATA WAS DEPOSITED IN THE BAIR OPEN RESEARCH COMMONS REPOSITORY ON 02/28/2023
Keywords: language-models, api, transformers, huggingface, openai, anthropic, gpt3, gpt-j, gpt-neo, gpt4, vertex, palm, bard
# Grazier: Easily call Large Language Models from a unified API

Grazier is a Python library for easily calling large language models from a unified API.

## Supported Large Models

From OpenAI:
- GPT-4 (Base, 32K) (Chat and Completion Engines)
- GPT-3.5 (ChatGPT) (Chat and Completion Engines)
- GPT-3 (Davinci (v2,v3), Ada, Babbage, Curie) (Completion Engine)

From Anthropic:
- Claude 2 (Base) (Chat and Completion Engines)
- Claude (Base, 100K) (Chat and Completion Engines)
- Claude Instant (Base, 100K) (Chat and Completion Engines)

From Google/GCP:
- PaLM (Chat and Completion Engines)

From Huggingface:
- GPT-2 (Base, Medium, Large, XL) (Completion Engine)
- GPT-Neo (125M, 1.3B, 2.7B) (Completion Engine)
- GPT-J (6B) (Completion Engine)
- Falcon (7B, 40B, rw-1B, rw-7B) (Completion Engine)
- Dolly (v1 - 6B, v2 - 3B, 7B, 12B) (Chat and Completion Engines)
- MPT (Instruct - 7B, 30B) (Chat and Completion Engines)

From Facebook (via Huggingface):
- Llama (7B, 13B, 30B, 65B) (Completion Engine)
- Llama 2 (7B, 13B, 70B) (Completion Engine)
- OPT (125M, 350M, 1.3B, 2.7B, 6.7B, 13B, 30B, 66B) (Completion Engine)

From Stanford (via Huggingface):
- Alpaca (7B) (Chat and Completion Engines)

From Berkeley (via Huggingface):
- Koala (7B, 13B_v1, 13B_v2) (Chat and Completion Engines)
- Vicuna (7B, 13B) (Chat and Completion Engines)

From StabilityAI (via Huggingface):
- StableLM (7B, 13B) (Chat and Completion Engines)

From AllenAI (via Huggingface):
- Tulu (7B, 13B, 30B, 65B) (Chat and Completion Engines)
- Open Instruct (ShareGPT) (7B, 13B, 30B, 65B) (Chat and Completion Engines)

From AI21:
- Jurassic 2 (Light, Mid, Ultra) (Completion Engines)

Image/Language Models:
- Blip2 (2.7B, 6.7B) (Image + Text Engine)
- Open Flamingo (3B, 4B, 9B) (Image + Text Engine)


## Installation

Grazier can easily be installed using pip:
```bash
pip install grazier
```

Each of the LLMs may need additional setup, which you can find in the engine setup section below.


## Usage

For completion engines, it's as simple as:
```python
import grazier

print(grazier.list_models())
# ['gptj-6B', 'gpt2', 'gpt2-med', 'gpt2-lg', 'gpt2-xl', 'distilgpt2', 'gptneo-125M', 'gptneo-1.3B', 'gptneo-2.7B', 'stablelm-3B', 'stablelm-7B', 'opt-125M', 'opt-350M', 'opt-1.3b', 'opt-2.7b', 'opt-6.7b', 'opt-13b', 'opt-30b', 'opt-66b', 'llama-7B', 'llama-13B', 'llama-30B', 'llama-65B', 'gpt3-davinci3', 'gpt3-davinci2', 'gpt3-curie', 'gpt3-babbage', 'gpt3-ada', 'palm']
gpt2 = grazier.get("gpt2")
completion = gpt2("I enjoy walking with my cute dog, but sometimes he gets scared and")
print(completion)
```

For chat engines, all you need to do is add the `type="chat"` parameter:
```python
from grazier import Conversation, Speaker, get, list_models

conversation = Conversation()
conversation.add_turn("You are a funny person.", speaker=Speaker.SYSTEM)
conversation.add_turn("Hi, how are you?", speaker=Speaker.USER)
conversation.add_turn("I am doing well, how about you?", speaker=Speaker.AI)
conversation.add_turn("What are you planning to do today?", speaker=Speaker.USER)

print(list_models(type="chat"))
# ['claude', 'claude-100k', 'claude-instant', 'claude-instant-100k', 'bard', 'koala-7b', 'koala-13b-v1', 'koala-13b-v2', 'vicuna-7b', 'vicuna-13b', 'alpaca-13b', 'chat-gpt', 'gpt4', 'gpt4-32k', 'stablelm-3b', 'stablelm-7b', 'palm']
gpt4 = get("gpt4", type="chat")
next_turn = gpt4(conversation)
print(next_turn)
```

For vision-augmented (image) engines, use `type="image"`:
```python
import grazier
from PIL import Image

print(grazier.list_models(type="image"))
# ['blip2-opt-2.7b', 'blip2-opt-6.7b', 'blip2-opt-2.7b-coco', 'blip2-opt-6.7b-coco', 'blip2-flan-t5-xl', 'blip2-flan-t5-xxl', 'blip2-flan-t5-xl-coco', 'openflamingo-3b-vitl-mpt1b', 'openflamingo-3b-vitl-mpt1b-dolly', 'openflamingo-9b-vitl-mpt7b', 'openflamingo-4b-vitl-rpj3b']
blip2 = grazier.get("blip2-opt-2.7b", type="image")

image = Image.open('test_data/dog.jpg')
completion = blip2(image, "A photo of")

print(completion)
```

## Individual Engine Setup

Each engine may require some engine-specific configuration; for example, OpenAI engines require an API key. These
details are generally provided via environment variables.
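Packages such as python-dotenv can load a `.env` file into the environment for you. As an illustration of what that loading amounts to, here is a minimal, stdlib-only sketch (the `load_dotenv` function below is a hypothetical helper for this document, not part of grazier):

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal `.env` loader: read KEY=value lines into os.environ.

    A sketch of what packages like python-dotenv do for you; blank
    lines and '#' comments are skipped, and variables that are
    already set in the environment are not overwritten.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

In practice you would call a loader like this once at startup, before constructing any grazier engines.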

### OpenAI Engines

For OpenAI engines, you will need to set the `OPENAI_API_KEY` and `OPENAI_API_ORG` environment variables. You can find
your API key and organization ID on the [OpenAI dashboard](https://platform.openai.com/). You can set these environment
variables in your shell or in a `.env` file in the root of your project. For example, in a `.env` file, you would have:
```bash
OPENAI_API_KEY=<your key>
OPENAI_API_ORG=<your org id>
```
or on the command line:
```bash
export OPENAI_API_KEY=<your key>
export OPENAI_API_ORG=<your org id>
```
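A missing variable otherwise surfaces only as an authentication error at call time, so a quick preflight check can save some debugging. The `check_openai_env` helper below is a hypothetical convenience for this document, not part of grazier:

```python
import os

def check_openai_env():
    """Return the names of any required OpenAI variables that are unset."""
    required = ("OPENAI_API_KEY", "OPENAI_API_ORG")
    return [name for name in required if not os.environ.get(name)]

# Run the check before constructing an engine.
missing = check_openai_env()
if missing:
    print("Missing environment variables:", ", ".join(missing))
```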

### Anthropic Engines
For Anthropic engines, you will need to set the `ANTHROPIC_API_KEY` environment variable. You can find your API key at
the [Anthropic dashboard](https://console.anthropic.com/account/keys). You can set this environment variable in
your shell or in a `.env` file in the root of your project. For example, in a `.env` file, you would have:
```bash
ANTHROPIC_API_KEY=<your key>
```
or on the command line:
```bash
export ANTHROPIC_API_KEY=<your key>
```

### Vertex Engines (PaLM)
For Google engines, we use the Vertex cloud API, which requires a Google Cloud Platform (GCP) project. You can create a
GCP project at the [GCP console](https://console.cloud.google.com/). You will also need to enable the Vertex AI API for
your project, set up a service account, and download the account's JSON credentials. You can find instructions for
these steps in steps 1 to 6 of [this tutorial](https://cloud.google.com/vertex-ai/docs/tutorials/image-recognition-automl).
Finally, you will need to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of the JSON file.
You can set this environment variable in your shell or in a `.env` file in the root of your project. For example, in a
`.env` file, you would have:
```bash
GOOGLE_APPLICATION_CREDENTIALS=<path to your JSON file>
```
or on the command line:
```bash
export GOOGLE_APPLICATION_CREDENTIALS=<path to your JSON file>
```

### Bard
For the Bard engine, you will need your Bard `__Secure-1PSID` and `__Secure-1PSIDTS` cookie values. To obtain them, go
to https://bard.google.com/ and log in, open the developer tools (F12), navigate to the "Application" tab, then
"Cookies", and copy the values of the `__Secure-1PSID` and `__Secure-1PSIDTS` cookies. You can then set the
environment variables:
```bash
BARD__Secure_1PSID=<your session id>
BARD__Secure_1PSIDTS=<your session id timestamp>
```

### Huggingface Engines
Most of the Huggingface engines require no additional setup. However, some of the larger models require a GPU to run
efficiently (and some require multiple GPUs with large amounts of memory). You can find more details
about the requirements for each model on the [Huggingface model hub](https://huggingface.co/models).

### Llama, Alpaca, Koala, Vicuna and AllenAI Engines
For these engines, you will need to obtain and postprocess the weights yourself (due to Facebook's licensing). You can
find the instructions for doing so on each model page:
- Llama: https://huggingface.co/docs/transformers/main/model_doc/llama
- Alpaca: https://github.com/tatsu-lab/stanford_alpaca#recovering-alpaca-weights
- Koala: https://github.com/young-geng/EasyLM/blob/main/docs/koala.md
- Vicuna: https://github.com/lm-sys/FastChat#vicuna-weights
- AllenAI: https://huggingface.co/allenai/tulu-65b

Once the weights have been downloaded and processed, set the following environment variables to the root directory
containing a folder for each variant. The expected layout is `{root_dir}/{model-prefix}/weights.bin`, where `root_dir`
is the root directory and `model-prefix` is the name of the model variant (e.g. `tulu-65b`):
```bash
LLAMA_WEIGHTS_ROOT=<path to the llama weights>
ALPACA_WEIGHTS_ROOT=<path to the alpaca weights>
KOALA_WEIGHTS_ROOT=<path to the koala weights>
VICUNA_WEIGHTS_ROOT=<path to the vicuna weights>
ALLENAI_WEIGHTS_ROOT=<path to the allenai weights>
```
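For example, with a hypothetical `weights/llama` directory holding a `llama-7B` variant, the layout and environment variable would look like this (the paths below are purely illustrative):

```shell
# Illustrative layout: one folder per model variant under the root,
# each containing the processed weight files.
mkdir -p weights/llama/llama-7B
touch weights/llama/llama-7B/weights.bin

# Point grazier at the root directory, not at a variant folder.
export LLAMA_WEIGHTS_ROOT="$PWD/weights/llama"
ls "$LLAMA_WEIGHTS_ROOT/llama-7B"
```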

### AI21 Models (Jurassic)
For AI21 models, you will need to set the `AI21_API_KEY` environment variable. You can find your API key at
the [AI21 Studio Dashboard](https://studio.ai21.com/account/api-key). You can set this environment variable in
your shell or in a `.env` file in the root of your project. For example, in a `.env` file, you would have:
```bash
AI21_API_KEY=<your key>
```
or on the command line:
```bash
export AI21_API_KEY=<your key>
```

## Citation

If you use grazier in your work, please cite:

```
@misc{grazier,
  author = {David Chan},
  title = {grazier: Easily call Large Language Models from a unified API},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/DavidMChan/grazier}}
}
```

## License

grazier is licensed under the terms of the MIT license. See [LICENSE](LICENSE) for more information.

            
