vellum-ai

Name: vellum-ai
Version: 0.8.25
Upload time: 2024-10-20 23:07:06
Requires Python: <4.0,>=3.8
License: MIT
# Vellum Python Library

[![pypi](https://img.shields.io/pypi/v/vellum-ai.svg)](https://pypi.python.org/pypi/vellum-ai)
![license badge](https://img.shields.io/github/license/vellum-ai/vellum-client-python)
[![fern shield](https://img.shields.io/badge/%F0%9F%8C%BF-SDK%20generated%20by%20Fern-brightgreen)](https://buildwithfern.com/?utm_source=vellum-ai/vellum-client-python/readme)

The Vellum Python SDK provides access to the Vellum API from Python.


## API Docs
You can find Vellum's complete API docs at [docs.vellum.ai](https://docs.vellum.ai/api-reference/introduction/getting-started).

## Installation

```sh
pip install --upgrade vellum-ai
```

## Usage
Below is how you would invoke a deployed Prompt from the Vellum API. For a complete list of all APIs
that Vellum supports, check out our [API Reference](https://docs.vellum.ai/api-reference/introduction/getting-started).

```python
from vellum import (
    StringInputRequest,
)
from vellum.client import Vellum

client = Vellum(
    api_key="YOUR_API_KEY",
)

def execute() -> str:
    result = client.execute_prompt(
        prompt_deployment_name="<example-deployment-name>",
        release_tag="LATEST",
        inputs=[
            StringInputRequest(
                name="input_a",
                type="STRING",
                value="Hello, world!",
            )
        ],
    )
    
    if result.state == "REJECTED":
        raise Exception(result.error.message)

    return result.outputs[0].value

if __name__ == "__main__":
    print(execute())
```

> [!TIP]
> You can set the environment variable `VELLUM_API_KEY` to avoid hard-coding your API key in source. To do so, add `export VELLUM_API_KEY=<your-api-token>`
> to your `~/.zshrc` or `~/.bashrc`, open a new terminal, and any code calling `vellum.Vellum()` will read this key.
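As a minimal sketch of that lookup pattern (the `resolve_api_key` helper below is hypothetical, not part of the SDK):

```python
import os
from typing import Optional


def resolve_api_key(explicit: Optional[str] = None) -> str:
    """Prefer an explicitly passed key; otherwise fall back to VELLUM_API_KEY."""
    key = explicit or os.environ.get("VELLUM_API_KEY")
    if not key:
        raise RuntimeError("Set VELLUM_API_KEY or pass api_key explicitly")
    return key
```

You could then construct the client with `Vellum(api_key=resolve_api_key())` instead of a hard-coded string.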

## Async Client
This SDK has an async version. Here's how to use it:



```python
import asyncio

import vellum
from vellum.client import AsyncVellum

client = AsyncVellum(api_key="YOUR_API_KEY")

async def execute() -> str:
    result = await client.execute_prompt(
        prompt_deployment_name="<example-deployment-name>",
        release_tag="LATEST",
        inputs=[
            vellum.StringInputRequest(
                name="input_a",
                value="Hello, world!",
            )
        ],
    )

    if result.state == "REJECTED":
        raise Exception(result.error.message)
    
    return result.outputs[0].value

if __name__ == "__main__":
    print(asyncio.run(execute()))
```
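One reason to reach for the async client is issuing several prompt executions concurrently. The sketch below uses a stand-in coroutine in place of `client.execute_prompt` (it only simulates an I/O-bound call) to illustrate the `asyncio.gather` pattern:

```python
import asyncio
from typing import List


async def execute_one(name: str) -> str:
    # Stand-in for an awaited client.execute_prompt(...) call;
    # asyncio.sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"result for {name}"


async def main() -> List[str]:
    # gather awaits all coroutines concurrently and preserves input order.
    return await asyncio.gather(*(execute_one(n) for n in ["a", "b", "c"]))


if __name__ == "__main__":
    print(asyncio.run(main()))
```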

## Contributing

While we value open-source contributions to this SDK, most of this library is generated programmatically.

Please feel free to make contributions to any of the directories or files below:
```plaintext
examples/*
src/vellum/lib/*
tests/*
README.md
```

Any additions made to files outside the directories and files listed above would need to be moved into our generation code
(found in the separate [vellum-client-generator](https://github.com/vellum-ai/vellum-client-generator) repo);
otherwise they would be overwritten by the next generated release. Feel free to open a PR as a proof of concept,
but know that we will not be able to merge it as-is. We suggest opening an issue first to discuss it with us!

            
