universalmodels

Name: universalmodels
Version: 0.0.6
Summary: A series of wrappers that allow multiple AI model sources to behave as Huggingface transformers models
Author: Matthew Pisano
Upload time: 2024-02-17 20:15:27
Requires Python: >=3.10
License: MIT (Copyright (c) 2023 Matthew Pisano)
Keywords: ai, transformers, openai, huggingface, adapter

# The Universal Model Adapter

This package acts as an adapter between [Huggingface Transformers](https://github.com/huggingface/transformers) and several inference APIs.  Currently, these are the [Huggingface Inference API](https://huggingface.co/inference-api) and the [OpenAI Inference API](https://platform.openai.com/docs/api-reference).

This works through mock `transformers.PreTrainedModel` classes that expose the same `generate()` method as a local model but make API calls on the backend.  Several `dev` models are also available for mocking generation or performing debugging tasks.
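
Conceptually, each wrapper behaves like the following minimal sketch (the `api_call` hook is a hypothetical stand-in for a Huggingface or OpenAI request; the package's real wrapper classes differ in detail):

```python
import torch


class MockAPIModel:
    """Mimic the `generate()` interface of a transformers model while
    delegating generation to a remote API instead of a forward pass."""

    def __init__(self, tokenizer, api_call):
        self.tokenizer = tokenizer
        self.api_call = api_call  # hypothetical hook: prompt str -> response str

    def generate(self, input_ids: torch.Tensor, **kwargs) -> torch.Tensor:
        # Decode the prompt tokens, query the API, and re-encode the response
        prompt = self.tokenizer.decode(input_ids[0].tolist())
        response = self.api_call(prompt)
        return torch.tensor([self.tokenizer.encode(response)])
```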

## Use Case

This package is best used in projects that draw on multiple model sources interchangeably.  In these projects, a unified generation interface greatly simplifies code: for example, a project that generates text with both local Huggingface models and GPT models from OpenAI's API.

### Fine-Grained Source Control

An advantage of this package is that it can either resolve a model's source automatically from its name or accept the source (OpenAI, Huggingface, etc.) explicitly, via the `model_src` parameter of the `pretrained_from_...()` methods.  For example:

```python
from universalmodels.interface import pretrained_from_name, ModelSrc

model_name = "mistralai/Mistral-7B-v0.1"
# This will automatically resolve the model's source to 
# a local Huggingface transformers model (ModelSrc.HF_LOCAL)
local_model, tokenizer = pretrained_from_name(model_name, model_src=ModelSrc.AUTO)

# This will attempt to start the FastChat service and run 
# a local instance of the OpenAI API to run optimized generation
fschat_model, tokenizer = pretrained_from_name(model_name, model_src=ModelSrc.OPENAI_API)

# This will create a mock model without any generation logic attached.
# This is useful for when the shell of a model is needed as a reference.
# This option does not load any local models into memory or activate FastChat.
mock_model, tokenizer = pretrained_from_name(model_name, model_src=ModelSrc.NO_LOAD)
```
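
Because the source is just a parameter, switching it from configuration is straightforward.  A minimal sketch (the `DEBUG_MODELS` flag and `load_model()` helper are illustrative, not part of the package):

```python
import os

from universalmodels.interface import pretrained_from_name, ModelSrc


def load_model(name: str):
    # Use generation-free mock models while debugging; resolve normally otherwise
    src = ModelSrc.NO_LOAD if os.getenv("DEBUG_MODELS") else ModelSrc.AUTO
    return pretrained_from_name(name, model_src=src)


model, tokenizer = load_model("mistralai/Mistral-7B-v0.1")
```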

## Quick Start

### Installing from PyPI

```bash
pip3 install "universalmodels[fastchat]"
```

### Installing from Source

```bash
git clone https://github.com/matthew-pisano/UniversalModels
cd UniversalModels
pip3 install -e ".[fastchat]"
```

Installing the `fastchat` extra enables FastChat support for compatible, locally installed Huggingface models.  See [FastChat supported models](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md) for more information on which models are supported.
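
If in doubt, a quick way to confirm the extra installed correctly is to import FastChat directly (assuming a recent release, which exposes `__version__`):

```bash
python3 -c "import fastchat; print(fastchat.__version__)"
```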

### Example Usage

In the following example, note that the interfaces for the Huggingface and OpenAI models are identical.  This is the primary benefit of using this package.

```python
import torch
from universalmodels import pretrained_from_name
from universalmodels.constants import set_seed

# Set the global seed to encourage deterministic generation 
# NOTE: DOES NOT affect OpenAI API models
set_seed(42)

# Huggingface model example
hf_model_name = "mistralai/Mixtral-8x7B-Instruct-v0.1"
hf_model, hf_tokenizer = pretrained_from_name(hf_model_name)

hf_tokens = hf_tokenizer.encode("Repeat the following: 'Hello there from a huggingface model'")
hf_resp_tokens = hf_model.generate(torch.tensor([hf_tokens], dtype=torch.int))[0]
hf_response = hf_tokenizer.decode(hf_resp_tokens)
print(hf_response)

# OpenAI model example
oai_model_name = "openai/gpt-3.5"
oai_model, oai_tokenizer = pretrained_from_name(oai_model_name)

oai_tokens = oai_tokenizer.encode("Repeat the following: 'Hello there from an openai model'")
oai_resp_tokens = oai_model.generate(torch.tensor([oai_tokens], dtype=torch.int))[0]
oai_response = oai_tokenizer.decode(oai_resp_tokens)
print(oai_response)
```

> [!IMPORTANT]
> Make sure your API keys are set for OpenAI and Huggingface before using models that require them!
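
For example, keys are typically provided through environment variables (the exact variable names the package reads are an assumption; `OPENAI_API_KEY` is the standard name for OpenAI clients, and `HF_API_KEY` is illustrative):

```bash
# Hypothetical variable names; check the package docs for the exact ones
export OPENAI_API_KEY="sk-..."
export HF_API_KEY="hf_..."
```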

            
