| Field | Value |
| --- | --- |
| Name | atoma-sdk |
| Version | 0.1.1 |
| Summary | Python Client SDK Generated by Speakeasy. |
| author | Speakeasy |
| maintainer | None |
| home_page | None |
| docs_url | None |
| license | None |
| keywords | |
| requires_python | !=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8 |
| upload_time | 2025-02-08 13:36:45 |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Atoma's Python SDK
<img src="https://github.com/atoma-network/atoma-node/blob/main/atoma-assets/atoma-banner.png" alt="Logo"/>
[X (Twitter)](https://x.com/Atoma_Network) · [Documentation](https://docs.atoma.network) · [License](LICENSE)
<div align="left">
<a href="https://www.speakeasy.com/?utm_source=atoma-sdk&utm_campaign=python"><img src="https://custom-icon-badges.demolab.com/badge/-Built%20By%20Speakeasy-212015?style=for-the-badge&logoColor=FBE331&logo=speakeasy&labelColor=545454" /></a>
</div>
<!-- Start Summary [summary] -->
## Summary
<!-- End Summary [summary] -->
<!-- Start Table of Contents [toc] -->
## Table of Contents
<!-- $toc-max-depth=2 -->
* [Atoma's Python SDK](https://github.com/atoma-network/atoma-sdk-python/blob/master/#atomas-python-sdk)
  * [SDK Installation](https://github.com/atoma-network/atoma-sdk-python/blob/master/#sdk-installation)
  * [IDE Support](https://github.com/atoma-network/atoma-sdk-python/blob/master/#ide-support)
  * [SDK Example Usage](https://github.com/atoma-network/atoma-sdk-python/blob/master/#sdk-example-usage)
  * [Authentication](https://github.com/atoma-network/atoma-sdk-python/blob/master/#authentication)
  * [Available Resources and Operations](https://github.com/atoma-network/atoma-sdk-python/blob/master/#available-resources-and-operations)
  * [Server-sent event streaming](https://github.com/atoma-network/atoma-sdk-python/blob/master/#server-sent-event-streaming)
  * [Retries](https://github.com/atoma-network/atoma-sdk-python/blob/master/#retries)
  * [Error Handling](https://github.com/atoma-network/atoma-sdk-python/blob/master/#error-handling)
  * [Server Selection](https://github.com/atoma-network/atoma-sdk-python/blob/master/#server-selection)
  * [Custom HTTP Client](https://github.com/atoma-network/atoma-sdk-python/blob/master/#custom-http-client)
  * [Debugging](https://github.com/atoma-network/atoma-sdk-python/blob/master/#debugging)
* [Development](https://github.com/atoma-network/atoma-sdk-python/blob/master/#development)
  * [Maturity](https://github.com/atoma-network/atoma-sdk-python/blob/master/#maturity)
  * [Contributions](https://github.com/atoma-network/atoma-sdk-python/blob/master/#contributions)
<!-- End Table of Contents [toc] -->
<!-- Start SDK Installation [installation] -->
## SDK Installation
> [!TIP]
> To finish publishing your SDK to PyPI you must [run your first generation action](https://www.speakeasy.com/docs/github-setup#step-by-step-guide).
The SDK can be installed with either *pip* or *poetry* package managers.
### PIP
*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.
```bash
pip install git+https://github.com/atoma-network/atoma-sdk-python.git
```
### Poetry
*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.
```bash
poetry add git+https://github.com/atoma-network/atoma-sdk-python.git
```
<!-- End SDK Installation [installation] -->
<!-- Start IDE Support [idesupport] -->
## IDE Support
### PyCharm
Generally, the SDK will work well with most IDEs out of the box. However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.
- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)
<!-- End IDE Support [idesupport] -->
<!-- Start SDK Example Usage [usage] -->
## SDK Example Usage
### Example
```python
# Synchronous Example
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234")

    # Handle response
    print(res)
```
<br/>

The same SDK client can also be used to make asynchronous requests with `asyncio`:
```python
# Asynchronous Example
import asyncio
from atoma_sdk import AtomaSDK
import os

async def main():
    async with AtomaSDK(
        bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
    ) as atoma_sdk:

        res = await atoma_sdk.chat.create_async(messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
            "json([\"stop\", \"halt\"])",
        ], temperature=0.7, top_p=1, user="user-1234")

        # Handle response
        print(res)

asyncio.run(main())
```
<!-- End SDK Example Usage [usage] -->
<!-- Start Authentication [security] -->
## Authentication
### Per-Client Security Schemes
This SDK supports the following security scheme globally:
| Name | Type | Scheme | Environment Variable |
| ------------- | ---- | ----------- | ---------------------- |
| `bearer_auth` | http | HTTP Bearer | `ATOMASDK_BEARER_AUTH` |
To authenticate with the API, the `bearer_auth` parameter must be set when initializing the SDK client instance. For example:
```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234")

    # Handle response
    print(res)
```
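Rather than hard-coding the token, you can supply it through the `ATOMASDK_BEARER_AUTH` environment variable listed in the table above. For example, in a shell you might export it before running your program (the value below is a placeholder):

```bash
# Placeholder value; substitute your real Atoma API token
export ATOMASDK_BEARER_AUTH="<your-api-token>"
```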
<!-- End Authentication [security] -->
<!-- Start Available Resources and Operations [operations] -->
## Available Resources and Operations
<details open>
<summary>Available methods</summary>
### [chat](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md)
* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md#create) - Create chat completion
* [create_stream](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md#create_stream)
### [confidential_chat](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md)
* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md#create) - Create confidential chat completion
* [create_stream](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md#create_stream)
### [confidential_embeddings](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialembeddings/README.md)
* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialembeddings/README.md#create) - Create confidential embeddings
### [confidential_images](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialimages/README.md)
* [generate](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialimages/README.md#generate) - Create confidential image
### [embeddings](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/embeddings/README.md)
* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/embeddings/README.md#create) - Create embeddings
### [health](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/health/README.md)
* [health](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/health/README.md#health) - Health
### [images](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/images/README.md)
* [generate](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/images/README.md#generate) - Create image
### [models](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/models/README.md)
* [models_list](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/models/README.md#models_list) - List models
### [nodes](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md)
* [nodes_create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md#nodes_create) - Create node
* [nodes_create_lock](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md#nodes_create_lock) - Create a node lock for confidential compute
</details>
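As a quick illustration of one of the operations listed above, the sketch below calls `models_list` to list available models. It assumes the resource is exposed as `atoma_sdk.models` and that the operation takes no required arguments; check the linked `models` README for the exact signature.

```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    # List the models available on the Atoma network
    # (sketch; see docs/sdks/models/README.md for details)
    res = atoma_sdk.models.models_list()
    print(res)
```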
<!-- End Available Resources and Operations [operations] -->
<!-- Start Server-sent event streaming [eventstream] -->
## Server-sent event streaming
[Server-sent events][mdn-sse] are used to stream content from certain
operations. These operations expose the stream as a [Generator][generator] that
can be consumed with a simple `for` loop. The loop terminates when the server
has no more events to send and closes the underlying connection.

The stream is also a [Context Manager][context-manager]: it can be used with the `with` statement, and the
underlying connection is closed when the context is exited.
```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create_stream(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
            "name": "john_doe",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234")

    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)
```
[mdn-sse]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
[generator]: https://book.pythontips.com/en/latest/generators.html
[context-manager]: https://book.pythontips.com/en/latest/context_managers.html
<!-- End Server-sent event streaming [eventstream] -->
<!-- Start Retries [retries] -->
## Retries
Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.
To change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:
```python
from atoma_sdk import AtomaSDK
from atoma_sdk.utils import BackoffStrategy, RetryConfig
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234",
        retries=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False))

    # Handle response
    print(res)
```
If you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:
```python
from atoma_sdk import AtomaSDK
from atoma_sdk.utils import BackoffStrategy, RetryConfig
import os

with AtomaSDK(
    retry_config=RetryConfig("backoff", BackoffStrategy(1, 50, 1.1, 100), False),
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234")

    # Handle response
    print(res)
```
<!-- End Retries [retries] -->
<!-- Start Error Handling [errors] -->
## Error Handling
Handling errors in this SDK should largely match your expectations. All operations return a response object or raise an exception.
By default, an API error will raise a `models.APIError` exception, which has the following properties:
| Property | Type | Description |
|-----------------|------------------|-----------------------|
| `.status_code` | *int* | The HTTP status code |
| `.message` | *str* | The error message |
| `.raw_response` | *httpx.Response* | The raw HTTP response |
| `.body` | *str* | The response content |
When custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to the respective *Errors* tables in the SDK docs for more details on the possible exception types for each operation. For example, the `create_async` method may raise the following exceptions:
| Error Type | Status Code | Content Type |
| --------------- | ----------- | ------------ |
| models.APIError | 4XX, 5XX | \*/\* |
### Example
```python
from atoma_sdk import AtomaSDK, models
import os

with AtomaSDK(
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:
    res = None
    try:

        res = atoma_sdk.chat.create(messages=[
            {
                "content": "Hello! How can you help me today?",
                "role": "user",
            },
        ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
            "json([\"stop\", \"halt\"])",
        ], temperature=0.7, top_p=1, user="user-1234")

        # Handle response
        print(res)

    except models.APIError as e:
        # handle exception
        raise e
```
<!-- End Error Handling [errors] -->
<!-- Start Server Selection [server] -->
## Server Selection
### Override Server URL Per-Client
The default server can be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:
```python
from atoma_sdk import AtomaSDK
import os

with AtomaSDK(
    server_url="https://api.atoma.network",
    bearer_auth=os.getenv("ATOMASDK_BEARER_AUTH", ""),
) as atoma_sdk:

    res = atoma_sdk.chat.create(messages=[
        {
            "content": "Hello! How can you help me today?",
            "role": "user",
        },
    ], model="meta-llama/Llama-3.3-70B-Instruct", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[
        "json([\"stop\", \"halt\"])",
    ], temperature=0.7, top_p=1, user="user-1234")

    # Handle response
    print(res)
```
<!-- End Server Selection [server] -->
<!-- Start Custom HTTP Client [http-client] -->
## Custom HTTP Client
The Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.
Depending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively. These are Protocols that ensure the client has the necessary methods to make API calls.
This allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can simply pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.
For example, you could specify a header for every request that this SDK makes as follows:
```python
from atoma_sdk import AtomaSDK
import httpx
http_client = httpx.Client(headers={"x-custom-header": "someValue"})
s = AtomaSDK(client=http_client)
```
or you could wrap the client with your own custom logic:
```python
from typing import Any, Optional, Union

from atoma_sdk import AtomaSDK
from atoma_sdk.httpclient import AsyncHttpClient
import httpx

class CustomClient(AsyncHttpClient):
    client: AsyncHttpClient

    def __init__(self, client: AsyncHttpClient):
        self.client = client

    async def send(
        self,
        request: httpx.Request,
        *,
        stream: bool = False,
        auth: Union[
            httpx._types.AuthTypes, httpx._client.UseClientDefault, None
        ] = httpx.USE_CLIENT_DEFAULT,
        follow_redirects: Union[
            bool, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
    ) -> httpx.Response:
        # Add a header to every outgoing request before delegating to the wrapped client
        request.headers["Client-Level-Header"] = "added by client"

        return await self.client.send(
            request, stream=stream, auth=auth, follow_redirects=follow_redirects
        )

    def build_request(
        self,
        method: str,
        url: httpx._types.URLTypes,
        *,
        content: Optional[httpx._types.RequestContent] = None,
        data: Optional[httpx._types.RequestData] = None,
        files: Optional[httpx._types.RequestFiles] = None,
        json: Optional[Any] = None,
        params: Optional[httpx._types.QueryParamTypes] = None,
        headers: Optional[httpx._types.HeaderTypes] = None,
        cookies: Optional[httpx._types.CookieTypes] = None,
        timeout: Union[
            httpx._types.TimeoutTypes, httpx._client.UseClientDefault
        ] = httpx.USE_CLIENT_DEFAULT,
        extensions: Optional[httpx._types.RequestExtensions] = None,
    ) -> httpx.Request:
        return self.client.build_request(
            method,
            url,
            content=content,
            data=data,
            files=files,
            json=json,
            params=params,
            headers=headers,
            cookies=cookies,
            timeout=timeout,
            extensions=extensions,
        )

s = AtomaSDK(async_client=CustomClient(httpx.AsyncClient()))
```
<!-- End Custom HTTP Client [http-client] -->
<!-- Start Debugging [debug] -->
## Debugging
You can set up your SDK to emit debug logs for SDK requests and responses.

You can pass your own logger class directly into your SDK:
```python
from atoma_sdk import AtomaSDK
import logging
logging.basicConfig(level=logging.DEBUG)
s = AtomaSDK(debug_logger=logging.getLogger("atoma_sdk"))
```
You can also enable a default debug logger by setting the environment variable `ATOMASDK_DEBUG` to true.
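For example, in a shell environment you might enable it before starting your application (shell syntax shown for illustration; the variable name and value come from the note above):

```bash
export ATOMASDK_DEBUG=true
```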
<!-- End Debugging [debug] -->
<!-- Placeholder for Future Speakeasy SDK Sections -->
# Development
## Maturity
This SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage
to a specific package version. This way, you can install the same version each time without breaking changes unless you are intentionally
looking for the latest version.
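For example, you might pin the dependency to an exact published version, or pin a Git install to a specific tag or commit (the `<tag-or-commit>` below is a placeholder):

```bash
# Pin to an exact published version
pip install "atoma-sdk==0.1.1"

# Or pin a Git install to a specific ref
pip install "git+https://github.com/atoma-network/atoma-sdk-python.git@<tag-or-commit>"
```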
## Contributions
While we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation.
We look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.
### SDK Created by [Speakeasy](https://www.speakeasy.com/?utm_source=atoma-sdk&utm_campaign=python)
## Raw data
{
"_id": null,
"home_page": null,
"name": "atoma-sdk",
"maintainer": null,
"docs_url": null,
"requires_python": "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8",
"maintainer_email": null,
"keywords": null,
"author": "Speakeasy",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/c6/e8/614c9b84e148be8e761940ed8da82d195a5583dc225ada08bdca42e1696b/atoma_sdk-0.1.1.tar.gz",
"platform": null,
"description": "# Atoma's Python SDK\n\n<img src=\"https://github.com/atoma-network/atoma-node/blob/main/atoma-assets/atoma-banner.png\" alt=\"Logo\"/>\n\n[]\n[](https://x.com/Atoma_Network)\n[](https://docs.atoma.network)\n[](LICENSE)\n\n<div align=\"left\">\n <a href=\"https://www.speakeasy.com/?utm_source=atoma-sdk&utm_campaign=python\"><img src=\"https://custom-icon-badges.demolab.com/badge/-Built%20By%20Speakeasy-212015?style=for-the-badge&logoColor=FBE331&logo=speakeasy&labelColor=545454\" /></a>\n</div>\n\n<!-- Start Summary [summary] -->\n## Summary\n\n\n<!-- End Summary [summary] -->\n\n<!-- Start Table of Contents [toc] -->\n## Table of Contents\n<!-- $toc-max-depth=2 -->\n* [Atoma's Python SDK](https://github.com/atoma-network/atoma-sdk-python/blob/master/#atomas-python-sdk)\n * [SDK Installation](https://github.com/atoma-network/atoma-sdk-python/blob/master/#sdk-installation)\n * [IDE Support](https://github.com/atoma-network/atoma-sdk-python/blob/master/#ide-support)\n * [SDK Example Usage](https://github.com/atoma-network/atoma-sdk-python/blob/master/#sdk-example-usage)\n * [Authentication](https://github.com/atoma-network/atoma-sdk-python/blob/master/#authentication)\n * [Available Resources and Operations](https://github.com/atoma-network/atoma-sdk-python/blob/master/#available-resources-and-operations)\n * [Server-sent event streaming](https://github.com/atoma-network/atoma-sdk-python/blob/master/#server-sent-event-streaming)\n * [Retries](https://github.com/atoma-network/atoma-sdk-python/blob/master/#retries)\n * [Error Handling](https://github.com/atoma-network/atoma-sdk-python/blob/master/#error-handling)\n * [Server Selection](https://github.com/atoma-network/atoma-sdk-python/blob/master/#server-selection)\n * [Custom HTTP Client](https://github.com/atoma-network/atoma-sdk-python/blob/master/#custom-http-client)\n * [Debugging](https://github.com/atoma-network/atoma-sdk-python/blob/master/#debugging)\n* [Development](https://github.com/atoma-network/atoma-sdk-python/blob/master/#development)\n * [Maturity](https://github.com/atoma-network/atoma-sdk-python/blob/master/#maturity)\n * [Contributions](https://github.com/atoma-network/atoma-sdk-python/blob/master/#contributions)\n\n<!-- End Table of Contents [toc] -->\n\n<!-- Start SDK Installation [installation] -->\n## SDK Installation\n\n> [!TIP]\n> To finish publishing your SDK to PyPI you must [run your first generation action](https://www.speakeasy.com/docs/github-setup#step-by-step-guide).\n\n\nThe SDK can be installed with either *pip* or *poetry* package managers.\n\n### PIP\n\n*PIP* is the default package installer for Python, enabling easy installation and management of packages from PyPI via the command line.\n\n```bash\npip install git+https://github.com/atoma-network/atoma-sdk-python.git\n```\n\n### Poetry\n\n*Poetry* is a modern tool that simplifies dependency management and package publishing by using a single `pyproject.toml` file to handle project metadata and dependencies.\n\n```bash\npoetry add git+https://github.com/atoma-network/atoma-sdk-python.git\n```\n<!-- End SDK Installation [installation] -->\n\n<!-- Start IDE Support [idesupport] -->\n## IDE Support\n\n### PyCharm\n\nGenerally, the SDK will work well with most IDEs out of the box. 
However, when using PyCharm, you can enjoy much better integration with Pydantic by installing an additional plugin.\n\n- [PyCharm Pydantic Plugin](https://docs.pydantic.dev/latest/integrations/pycharm/)\n<!-- End IDE Support [idesupport] -->\n\n<!-- Start SDK Example Usage [usage] -->\n## SDK Example Usage\n\n### Example\n\n```python\n# Synchronous Example\nfrom atoma_sdk import AtomaSDK\nimport os\n\nwith AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n```\n\n</br>\n\nThe same SDK client can also be used to make asychronous requests by importing asyncio.\n```python\n# Asynchronous Example\nimport asyncio\nfrom atoma_sdk import AtomaSDK\nimport os\n\nasync def main():\n async with AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n ) as atoma_sdk:\n\n res = await atoma_sdk.chat.create_async(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n\nasyncio.run(main())\n```\n<!-- End SDK Example Usage [usage] -->\n\n<!-- Start Authentication [security] -->\n## Authentication\n\n### Per-Client Security Schemes\n\nThis SDK supports the following security scheme globally:\n\n| Name | Type | Scheme | Environment Variable |\n| ------------- | ---- | ----------- | ---------------------- |\n| `bearer_auth` | http | HTTP Bearer | `ATOMASDK_BEARER_AUTH` |\n\nTo authenticate with the API the `bearer_auth` parameter must be set when initializing the SDK client instance. For example:\n```python\nfrom atoma_sdk import AtomaSDK\nimport os\n\nwith AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! 
How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n\n```\n<!-- End Authentication [security] -->\n\n<!-- Start Available Resources and Operations [operations] -->\n## Available Resources and Operations\n\n<details open>\n<summary>Available methods</summary>\n\n\n### [chat](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md)\n\n* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md#create) - Create chat completion\n* [create_stream](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/chat/README.md#create_stream)\n\n### [confidential_chat](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md)\n\n* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md#create) - Create confidential chat completion\n* [create_stream](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialchat/README.md#create_stream)\n\n### [confidential_embeddings](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialembeddings/README.md)\n\n* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialembeddings/README.md#create) - Create confidential embeddings\n\n### [confidential_images](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialimages/README.md)\n\n* [generate](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/confidentialimages/README.md#generate) - Create confidential image\n\n### [embeddings](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/embeddings/README.md)\n\n* [create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/embeddings/README.md#create) - Create embeddings\n\n### [health](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/health/README.md)\n\n* [health](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/health/README.md#health) - Health\n\n### [images](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/images/README.md)\n\n* [generate](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/images/README.md#generate) - Create image\n\n### [models](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/models/README.md)\n\n* [models_list](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/models/README.md#models_list) - List models\n\n### [nodes](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md)\n\n* [nodes_create](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md#nodes_create) - Create node\n* [nodes_create_lock](https://github.com/atoma-network/atoma-sdk-python/blob/master/docs/sdks/nodes/README.md#nodes_create_lock) - Create a node lock for confidential compute\n\n</details>\n<!-- End Available Resources and Operations [operations] -->\n\n<!-- Start Server-sent event streaming [eventstream] -->\n## Server-sent event streaming\n\n[Server-sent events][mdn-sse] are used to stream content from certain\noperations. 
These operations will expose the stream as [Generator][generator] that\ncan be consumed using a simple `for` loop. The loop will\nterminate when the server no longer has any events to send and closes the\nunderlying connection. \n\nThe stream is also a [Context Manager][context-manager] and can be used with the `with` statement and will close the\nunderlying connection when the context is exited.\n\n```python\nfrom atoma_sdk import AtomaSDK\nimport os\n\nwith AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create_stream(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n \"name\": \"john_doe\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n with res as event_stream:\n for event in event_stream:\n # handle event\n print(event, flush=True)\n\n```\n\n[mdn-sse]: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events\n[generator]: https://book.pythontips.com/en/latest/generators.html\n[context-manager]: https://book.pythontips.com/en/latest/context_managers.html\n<!-- End Server-sent event streaming [eventstream] -->\n\n<!-- Start Retries [retries] -->\n## Retries\n\nSome of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK.\n\nTo change the default retry strategy for a single API call, simply provide a `RetryConfig` object to the call:\n```python\nfrom atoma_sdk import AtomaSDK\nfrom atoma_sdk.utils import BackoffStrategy, RetryConfig\nimport os\n\nwith AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\",\n RetryConfig(\"backoff\", BackoffStrategy(1, 50, 1.1, 100), False))\n\n # Handle response\n print(res)\n\n```\n\nIf you'd like to override the default retry strategy for all operations that support retries, you can use the `retry_config` optional parameter when initializing the SDK:\n```python\nfrom atoma_sdk import AtomaSDK\nfrom atoma_sdk.utils import BackoffStrategy, RetryConfig\nimport os\n\nwith AtomaSDK(\n retry_config=RetryConfig(\"backoff\", BackoffStrategy(1, 50, 1.1, 100), False),\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n\n```\n<!-- End Retries [retries] -->\n\n<!-- Start Error Handling [errors] -->\n## Error Handling\n\nHandling errors in this SDK should largely match your expectations. 
All operations return a response object or raise an exception.\n\nBy default, an API error will raise a models.APIError exception, which has the following properties:\n\n| Property | Type | Description |\n|-----------------|------------------|-----------------------|\n| `.status_code` | *int* | The HTTP status code |\n| `.message` | *str* | The error message |\n| `.raw_response` | *httpx.Response* | The raw HTTP response |\n| `.body` | *str* | The response content |\n\nWhen custom error responses are specified for an operation, the SDK may also raise their associated exceptions. You can refer to respective *Errors* tables in SDK docs for more details on possible exception types for each operation. For example, the `create_async` method may raise the following exceptions:\n\n| Error Type | Status Code | Content Type |\n| --------------- | ----------- | ------------ |\n| models.APIError | 4XX, 5XX | \\*/\\* |\n\n### Example\n\n```python\nfrom atoma_sdk import AtomaSDK, models\nimport os\n\nwith AtomaSDK(\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n res = None\n try:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n\n except models.APIError as e:\n # handle exception\n raise(e)\n```\n<!-- End Error Handling [errors] -->\n\n<!-- Start Server Selection [server] -->\n## Server Selection\n\n### Override Server URL Per-Client\n\nThe default server can also be overridden globally by passing a URL to the `server_url: str` optional parameter when initializing the SDK client instance. For example:\n```python\nfrom atoma_sdk import AtomaSDK\nimport os\n\nwith AtomaSDK(\n server_url=\"https://api.atoma.network\",\n bearer_auth=os.getenv(\"ATOMASDK_BEARER_AUTH\", \"\"),\n) as atoma_sdk:\n\n res = atoma_sdk.chat.create(messages=[\n {\n \"content\": \"Hello! How can you help me today?\",\n \"role\": \"user\",\n },\n ], model=\"meta-llama/Llama-3.3-70B-Instruct\", frequency_penalty=0, max_tokens=2048, n=1, presence_penalty=0, seed=123, stop=[\n \"json([\\\"stop\\\", \\\"halt\\\"])\",\n ], temperature=0.7, top_p=1, user=\"user-1234\")\n\n # Handle response\n print(res)\n\n```\n<!-- End Server Selection [server] -->\n\n<!-- Start Custom HTTP Client [http-client] -->\n## Custom HTTP Client\n\nThe Python SDK makes API calls using the [httpx](https://www.python-httpx.org/) HTTP library. 
In order to provide a convenient way to configure timeouts, cookies, proxies, custom headers, and other low-level configuration, you can initialize the SDK client with your own HTTP client instance.\nDepending on whether you are using the sync or async version of the SDK, you can pass an instance of `HttpClient` or `AsyncHttpClient` respectively, which are Protocol's ensuring that the client has the necessary methods to make API calls.\nThis allows you to wrap the client with your own custom logic, such as adding custom headers, logging, or error handling, or you can just pass an instance of `httpx.Client` or `httpx.AsyncClient` directly.\n\nFor example, you could specify a header for every request that this sdk makes as follows:\n```python\nfrom atoma_sdk import AtomaSDK\nimport httpx\n\nhttp_client = httpx.Client(headers={\"x-custom-header\": \"someValue\"})\ns = AtomaSDK(client=http_client)\n```\n\nor you could wrap the client with your own custom logic:\n```python\nfrom atoma_sdk import AtomaSDK\nfrom atoma_sdk.httpclient import AsyncHttpClient\nimport httpx\n\nclass CustomClient(AsyncHttpClient):\n client: AsyncHttpClient\n\n def __init__(self, client: AsyncHttpClient):\n self.client = client\n\n async def send(\n self,\n request: httpx.Request,\n *,\n stream: bool = False,\n auth: Union[\n httpx._types.AuthTypes, httpx._client.UseClientDefault, None\n ] = httpx.USE_CLIENT_DEFAULT,\n follow_redirects: Union[\n bool, httpx._client.UseClientDefault\n ] = httpx.USE_CLIENT_DEFAULT,\n ) -> httpx.Response:\n request.headers[\"Client-Level-Header\"] = \"added by client\"\n\n return await self.client.send(\n request, stream=stream, auth=auth, follow_redirects=follow_redirects\n )\n\n def build_request(\n self,\n method: str,\n url: httpx._types.URLTypes,\n *,\n content: Optional[httpx._types.RequestContent] = None,\n data: Optional[httpx._types.RequestData] = None,\n files: Optional[httpx._types.RequestFiles] = None,\n json: Optional[Any] = None,\n params: Optional[httpx._types.QueryParamTypes] = None,\n headers: Optional[httpx._types.HeaderTypes] = None,\n cookies: Optional[httpx._types.CookieTypes] = None,\n timeout: Union[\n httpx._types.TimeoutTypes, httpx._client.UseClientDefault\n ] = httpx.USE_CLIENT_DEFAULT,\n extensions: Optional[httpx._types.RequestExtensions] = None,\n ) -> httpx.Request:\n return self.client.build_request(\n method,\n url,\n content=content,\n data=data,\n files=files,\n json=json,\n params=params,\n headers=headers,\n cookies=cookies,\n timeout=timeout,\n extensions=extensions,\n )\n\ns = AtomaSDK(async_client=CustomClient(httpx.AsyncClient()))\n```\n<!-- End Custom HTTP Client [http-client] -->\n\n<!-- Start Debugging [debug] -->\n## Debugging\n\nYou can setup your SDK to emit debug logs for SDK requests and responses.\n\nYou can pass your own logger class directly into your SDK.\n```python\nfrom atoma_sdk import AtomaSDK\nimport logging\n\nlogging.basicConfig(level=logging.DEBUG)\ns = AtomaSDK(debug_logger=logging.getLogger(\"atoma_sdk\"))\n```\n\nYou can also enable a default debug logger by setting an environment variable `ATOMASDK_DEBUG` to true.\n<!-- End Debugging [debug] -->\n\n<!-- Placeholder for Future Speakeasy SDK Sections -->\n\n# Development\n\n## Maturity\n\nThis SDK is in beta, and there may be breaking changes between versions without a major version update. Therefore, we recommend pinning usage\nto a specific package version. 
This way, you can install the same version each time without breaking changes unless you are intentionally\nlooking for the latest version.\n\n## Contributions\n\nWhile we value open-source contributions to this SDK, this library is generated programmatically. Any manual changes added to internal files will be overwritten on the next generation.\nWe look forward to hearing your feedback. Feel free to open a PR or an issue with a proof of concept and we'll do our best to include it in a future release.\n\n### SDK Created by [Speakeasy](https://www.speakeasy.com/?utm_source=atoma-sdk&utm_campaign=python)\n\n",
"bugtrack_url": null,
"license": null,
"summary": "Python Client SDK Generated by Speakeasy.",
"version": "0.1.1",
"project_urls": {
"Repository": "https://github.com/atoma-network/atoma-sdk-python.git"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "b5cb0855699adb22c37f4f44161c09b7dc32538dc3bb2518f1ad49246af84460",
"md5": "3e15a971ad887f5aea3c660379c6e555",
"sha256": "689a3aeb3fc559fdb37049ac9979ac314529b4ed95a578b7cad4d389fa624031"
},
"downloads": -1,
"filename": "atoma_sdk-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "3e15a971ad887f5aea3c660379c6e555",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8",
"size": 85600,
"upload_time": "2025-02-08T13:36:44",
"upload_time_iso_8601": "2025-02-08T13:36:44.488463Z",
"url": "https://files.pythonhosted.org/packages/b5/cb/0855699adb22c37f4f44161c09b7dc32538dc3bb2518f1ad49246af84460/atoma_sdk-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "c6e8614c9b84e148be8e761940ed8da82d195a5583dc225ada08bdca42e1696b",
"md5": "ace8928d7cd5562ffbfb74aa9d7afc57",
"sha256": "218b7aedcd7aa6f95efb9433f6ba9572de99278248bebaa8917b931c60a2a5a7"
},
"downloads": -1,
"filename": "atoma_sdk-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "ace8928d7cd5562ffbfb74aa9d7afc57",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "!=2.7.*,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,!=3.7.*,>=3.8",
"size": 49616,
"upload_time": "2025-02-08T13:36:45",
"upload_time_iso_8601": "2025-02-08T13:36:45.824117Z",
"url": "https://files.pythonhosted.org/packages/c6/e8/614c9b84e148be8e761940ed8da82d195a5583dc225ada08bdca42e1696b/atoma_sdk-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-08 13:36:45",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "atoma-network",
"github_project": "atoma-sdk-python",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "atoma-sdk"
}