# Spirit-GPU
- [Spirit-GPU](#spirit-gpu)
  - [Install](#install)
  - [Usage example](#usage-example)
  - [Logging](#logging)
  - [API](#api)
  - [Builder](#builder)
## Install
```
pip install spirit-gpu
```
## Usage example
```python
from spirit_gpu import start
from spirit_gpu.env import Env
from typing import Dict, Any
def handler(request: Dict[str, Any], env: Env):
    """
    request: Dict[str, Any], from the client HTTP request body.
    request["input"]: Required.
    request["webhook"]: Optional string for asynchronous requests.

    The returned object is serialized into JSON and sent to the client,
    in this case: '{"output": "hello"}'.
    """
    return {"output": "hello"}


def gen_handler(request: Dict[str, Any], env: Env):
    """
    Each yielded value is appended to an array, which is serialized into
    JSON and sent to the client. In this case: [0, 1, 2, 3, 4].
    """
    for i in range(5):
        yield i


async def async_handler(request: Dict[str, Any], env: Env):
    """
    The returned object is serialized into JSON and sent to the client.
    """
    return {"output": "hello"}


async def async_gen_handler(request: Dict[str, Any], env: Env):
    """
    Each yielded value is appended to an array, which is serialized into
    JSON and sent to the client.
    """
    for i in range(10):
        yield i


def concurrency_modifier(current_allowed_concurrency: int) -> int:
    """
    Adjusts the allowed concurrency level based on the current state.
    For example, if the current allowed concurrency is 3 and resources are
    sufficient, it can be increased to 5, allowing 5 tasks to run concurrently.
    """
    allowed_concurrency = ...
    return allowed_concurrency


"""
Register the handler with serverless.start().
Handlers can be synchronous, asynchronous, generators, or asynchronous generators.
"""
start({
    "handler": async_handler, "concurrency_modifier": concurrency_modifier
})
```
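For illustration, here is a minimal sketch of a concrete `concurrency_modifier`. The resource probe `free_gpu_memory_gb()` and the thresholds are hypothetical stand-ins, not part of spirit-gpu; a real implementation might query something like pynvml instead.

```python
MIN_CONCURRENCY = 1
MAX_CONCURRENCY = 8
GB_PER_TASK = 4.0  # assumed memory budget per concurrent task


def free_gpu_memory_gb() -> float:
    """Hypothetical stand-in for a real resource probe."""
    return 16.0


def concurrency_modifier(current_allowed_concurrency: int) -> int:
    # Allow as many tasks as the assumed per-task memory budget permits,
    # clamped to a sane range.
    fits = int(free_gpu_memory_gb() // GB_PER_TASK)
    return max(MIN_CONCURRENCY, min(MAX_CONCURRENCY, fits))
```

With the stubbed probe above, a current allowance of 3 would be raised to 4, matching the scaling behavior described in the docstring.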
## Logging
We provide a logging tool. The default logging level is "INFO"; call `logger.set_level(logging.DEBUG)` to change it.
> Please make sure you update to the `latest` version to use this feature.
```python
from spirit_gpu import start, logger
from spirit_gpu.env import Env
from typing import Dict, Any
def handler(request: Dict[str, Any], env: Env):
    """
    request: Dict[str, Any], from the client HTTP request body.
    request["input"]: Required.
    request["webhook"]: Optional string for asynchronous requests.

    request["meta"]["requestID"] is added only if it does not already
    exist in your request.
    """
    request_id = request["meta"]["requestID"]
    logger.info("start to handle", request_id=request_id, caller=True)
    return {"output": "hello"}
start({"handler": handler})
```
## API
Please read [API](https://github.com/datastone-spirit/spirit-gpu/blob/main/API.md) or [中文 API](https://github.com/datastone-spirit/spirit-gpu/blob/main/API.zh.md) for how to use the spirit-gpu serverless APIs and other important policies.
## Builder
The `spirit-gpu-builder` allows you to quickly generate templates and skeleton code for `spirit-gpu` using OpenAPI or JSON schema definitions. Built on `datamodel-code-generator`, this tool simplifies the setup for serverless functions.
> `spirit-gpu-builder` is installed when you install `spirit-gpu >= 0.0.6`.
```
usage: spirit-gpu-builder [-h] [-i INPUT_FILE]
                          [--input-type {auto,openapi,jsonschema,json,yaml,dict,csv,graphql}]
                          [-o OUTPUT_DIR]
                          [--data-type {pydantic_v2.BaseModel,dataclasses.dataclass}]
                          [--handler-type {sync,async,sync_generator,async_generator}]
                          [--model-only]

Generate spirit-gpu skeleton code from an OpenAPI or JSON schema, built on top of `datamodel-code-generator`.
```
Options:
- `-h, --help`: show this help message and exit
- `-i INPUT_FILE, --input-file INPUT_FILE`: Path to the input file. Supported types: ['auto', 'openapi', 'jsonschema', 'json', 'yaml', 'dict', 'csv', 'graphql']. If not provided, the tool looks for a default file in the current directory: ['api.yaml', 'api.yml', 'api.json'].
- `--input-type {auto,openapi,jsonschema,json,yaml,dict,csv,graphql}`: Specify the type of the input file. Default: 'auto'.
- `-o OUTPUT_DIR, --output-dir OUTPUT_DIR`: Path to the output directory. Default is the current directory.
- `--data-type {pydantic_v2.BaseModel,dataclasses.dataclass}`: Type of data model to generate. Default: 'pydantic_v2.BaseModel'.
- `--handler-type {sync,async,sync_generator,async_generator}`: Type of handler to generate. Default: 'sync'.
- `--model-only`: Only generate the model file, skipping the template repo and main file generation. Useful when updating the API file.
**Examples**
The input file should define the expected `input` part of the request body for your serverless spirit-gpu function. Supported formats include a JSON example, a JSON schema, or an OpenAPI file.
```yaml
openapi: 3.1.0
components:
  schemas:
    RequestInput:
      type: object
      required:
        - audio
      properties:
        audio:
          type: string
          description: URL to the audio file.
          nullable: false
        model:
          type: string
          description: Identifier for the model to be used.
          default: null
          nullable: true
```
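Equivalently, the same `RequestInput` definition could be supplied as a standalone JSON schema; the following is a sketch mirroring the YAML above (the exact schema you pass may differ):

```json
{
    "title": "RequestInput",
    "type": "object",
    "required": ["audio"],
    "properties": {
        "audio": {
            "type": "string",
            "description": "URL to the audio file."
        },
        "model": {
            "type": ["string", "null"],
            "description": "Identifier for the model to be used.",
            "default": null
        }
    }
}
```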
Your request body to `spirit-gpu`:
```json
{
    "input": {
        "audio": "http://your-audio.wav",
        "model": "base"
    },
    "webhook": "xxx"
}
```
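A client call could then look roughly like the following standard-library sketch. The `ENDPOINT` URL is a placeholder; the real endpoint and any authentication headers depend on your spirit-gpu deployment and are not specified here.

```python
import json
from urllib import request as urlrequest

# Placeholder endpoint; substitute the real URL for your deployment.
ENDPOINT = "https://example.com/your-serverless-endpoint"

payload = {
    "input": {
        "audio": "http://your-audio.wav",
        "model": "base",
    },
    "webhook": "xxx",
}

body = json.dumps(payload).encode("utf-8")
req = urlrequest.Request(
    ENDPOINT,
    data=body,
    headers={"Content-Type": "application/json"},
)
# urlrequest.urlopen(req)  # uncomment to actually send the request
```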
Generated Python model file:
```python
class RequestInput(BaseModel):
    audio: str = Field(..., description='URL to the audio file.')
    model: Optional[str] = Field(
        None, description='Identifier for the model to be used.'
    )
```
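With `--data-type dataclasses.dataclass`, the generated model would instead be a plain dataclass, roughly like this sketch (the exact generated output may differ):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RequestInput:
    audio: str  # URL to the audio file.
    model: Optional[str] = None  # Identifier for the model to be used.
```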
If using OpenAPI, ensure the main object in your YAML file is named `RequestInput` so that code can be generated automatically.
```python
def get_request_input(request: Dict[str, Any]) -> RequestInput:
    return RequestInput(**request["input"])


def handler_impl(request_input: RequestInput, request: Dict[str, Any], env: Env):
    """
    Your handler implementation goes here.
    """
    pass


def handler(request: Dict[str, Any], env: Env):
    request_input = get_request_input(request)
    return handler_impl(request_input, request, env)
```
The generated project is laid out like this:
```
├── Dockerfile
├── LICENSE
├── README.md
├── api.json
├── requirements.txt
├── scripts
│   ├── build.sh
│   └── start.sh
└── src
    ├── build.py
    ├── main.py
    └── spirit_generated_model.py
```