cspark

Name: cspark
Version: 0.1.12
Summary: A Python SDK for interacting with Coherent Spark APIs
Upload time: 2025-01-24 00:40:22
Requires Python: >=3.7
Keywords: api, coherent, restful, sdk, spark

# Coherent Spark Python SDK

[![PyPI version][version-img]][version-url]
[![CI build][ci-img]][ci-url]
[![License][license-img]][license-url]

The Coherent Spark Python SDK is designed to elevate the developer experience and
provide convenient access to Coherent Spark APIs.

👋 **Just a heads-up:**
This SDK is supported by the community. If you encounter any bumps while using it,
please report them [here](https://github.com/Coherent-Partners/spark-python-sdk/issues)
by creating a new issue.

## Installation

```bash
pip install cspark # or pip install 'cspark[cli]' for CLI support.
```

> 🫣 This Python library requires [Python 3.7+](https://www.python.org/downloads/).

## Usage

To use the SDK, you need a Coherent Spark account that lets you access the following:

- User authentication ([API key][api-key-docs], [bearer token][bearer-token-docs],
  or [OAuth2.0 client credentials][oauth2-docs] details)
- Base URL (including the environment and tenant name)
- Spark service URI (to locate a specific resource):
  - `folder` - the folder name (where the service is located)
  - `service` - the service name
  - `version` - the semantic version, a.k.a. the revision number (e.g., 0.4.2)

A `folder` contains one or more `service`s, and each `service` can have multiple
`version`s. Technically speaking, when you operate on a service, you're actually
interacting with a specific version of that service (the latest version by
default, unless specified otherwise).

Hence, there are various ways to indicate a Spark service URI:

- `{folder}/{service}[{version}]` - _version_ is optional.
- `service/{service_id}`
- `version/{version_id}`
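
For instance, any of the following strings is a valid service URI locator (the IDs
below are placeholders):

```py
'my-folder/my-service'     # by folder and service name (latest version by default)
'service/my-service-id'    # by service ID
'version/my-version-id'    # by version ID
```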

> **IMPORTANT:** Avoid using URL-encoded characters in the service URI.

Here's an example of how to execute a Spark service:

```py
import cspark.sdk as Spark

spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='my-api-key')
with spark.services as services:
    response = services.execute('my-folder/my-service', inputs={'value': 42})
    print(response.data)
```

Explore the [examples] and [docs] folders to find out more about the SDK's capabilities.

> **PRO TIP:**
> A service URI locator can also be provided as a `UriParams` object instead of a
> string, which lets you combine parameters to locate a specific service (or
> version of it). For example, you may execute a public service by specifying the
> `folder`, `service`, and `public` properties.

```py
import cspark.sdk as Spark

spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='open')

with spark.services as services:
    uri = Spark.UriParams(folder='my-folder', service='my-service', public=True)
    response = services.execute(uri, inputs={'value': 42})
    print(response.data)

# The final URI in this case is:
#    'my-tenant/api/v3/public/folders/my-folder/services/my-service/execute'
```

See the [Uri and UriParams][uri-url] classes for more details.

## Client Options

As shown in the examples above, the `Spark.Client` is your entry point to the SDK.
It is quite flexible and can be configured with the following options:

### Base URL

`base_url` (default: `os.getenv('CSPARK_BASE_URL')`) indicates the base URL of
Coherent Spark APIs. It should include the tenant and environment information.

```py
spark = Spark.Client(base_url='https://excel.my-env.coherent.global/my-tenant')
```

Alternatively, a combination of `env` and `tenant` options can be used to construct
the base URL.

```py
spark = Spark.Client(env='my-env', tenant='my-tenant')
```
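
Since `base_url` falls back to the `CSPARK_BASE_URL` environment variable, the
client can also be created without explicit URL arguments. A minimal sketch,
assuming the variables are set before the client is constructed:

```py
import os

import cspark.sdk as Spark

os.environ['CSPARK_BASE_URL'] = 'https://excel.my-env.coherent.global/my-tenant'
os.environ['CSPARK_API_KEY'] = 'my-api-key'

spark = Spark.Client()  # picks up the base URL and API key from the environment
```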

### Authentication

The SDK supports three authentication schemes:

- `api_key` (default: `os.getenv('CSPARK_API_KEY')`) indicates the API key
  (also known as a synthetic key), which is sensitive and should be kept secure.

```py
spark = Spark.Client(api_key='my-api-key')
```

> **PRO TIP:**
> The Spark platform supports public APIs that can be accessed without any form
> of authentication. In that case, you need to set `api_key` to `open` in order to
> create a `Spark.Client`.
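
For example, a client for public services can be created like this (same as the
public-service example shown earlier):

```py
spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='open')
```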

- `token` (default: `os.getenv('CSPARK_BEARER_TOKEN')`) indicates the bearer token.
  It may or may not be prefixed with 'Bearer'. A bearer token is usually valid for a
  limited time and should be refreshed periodically.

```py
spark = Spark.Client(token='Bearer my-access-token') # with prefix
# or
spark = Spark.Client(token='my-access-token') # without prefix
```

- `oauth` (default: `os.getenv('CSPARK_CLIENT_ID')` and `os.getenv('CSPARK_CLIENT_SECRET')`, or
  `os.getenv('CSPARK_OAUTH_PATH')`) indicates the OAuth2.0 client credentials.
  You can provide either the client ID and secret directly or the path to a JSON
  file containing the credentials.

```py
spark = Spark.Client(oauth={'client_id': 'my-client-id', 'client_secret': 'my-client-secret'})
# or
spark = Spark.Client(oauth='path/to/oauth/credentials.json')
```

### Additional Settings

- `timeout` (default: `60000` ms) indicates the maximum amount of time that the
  client should wait for a response from Spark servers before timing out a request.

- `max_retries` (default: `2`) indicates the maximum number of times that the client
  will retry a request in case of a temporary failure, such as an unauthorized
  response or a status code greater than 400.

- `retry_interval` (default: `1` second) indicates the delay between each retry.
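
For instance, a client tuned for slower services might combine these settings as
keyword arguments, following the same pattern as the other options (values are
illustrative):

```py
spark = Spark.Client(
    env='my-env',
    tenant='my-tenant',
    api_key='my-api-key',
    timeout=90_000,     # 90 seconds, expressed in milliseconds
    max_retries=3,      # retry up to 3 times on transient failures
    retry_interval=2,   # wait 2 seconds between retries
)
```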

- `logger` (default: `True`) enables or disables the logger for the SDK.
  - If `bool`, determines whether or not the SDK should print logs.
  - If `dict`, the SDK will print logs in accordance with the specified keyword arguments.
  - If `LoggerOptions`, the SDK will print messages based on the specified options:
    - `context` (default: `CSPARK v{version}`) defines the context of the logs (e.g., `CSPARK v0.1.6`);
    - `disabled` (default: `False`) determines whether the logger should be disabled;
    - `colorful` (default: `True`) determines whether the logs should be colorful;
    - `timestamp` (default: `True`) determines whether the logs should include timestamps;
    - `datefmt` (default: `'%m/%d/%Y, %I:%M:%S %p'`) defines the date format for the logs;
    - `level` (default: `DEBUG`) defines the [logging level][logging-level] for the logs.

```py
spark = Spark.Client(logger=False)
# or
spark = Spark.Client(logger={'colorful': False})
```
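
If the `LoggerOptions` class is exported by the SDK (an assumption here, including
its import location and constructor signature), it might be configured along these
lines:

```py
import logging

import cspark.sdk as Spark

# Hypothetical usage: field names follow the LoggerOptions list above.
options = Spark.LoggerOptions(colorful=False, timestamp=False, level=logging.INFO)
spark = Spark.Client(logger=options)
```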

## Client Errors

`SparkError` is the base class for all custom errors thrown by the SDK. It has
two main subtypes:

- `SparkSdkError`: usually thrown when an argument (user entry) fails to comply
  with the expected format. Because it's a client-side error, it will include the invalid
  entry as the `cause` in most cases.
- `SparkApiError`: when attempting to communicate with the API, the SDK will wrap
  any sort of failure (any error during the roundtrip) into `SparkApiError`, which
  includes the HTTP `status` code of the response and the `request_id`, a unique
  identifier of the request.
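
A minimal sketch of handling these errors, assuming the error classes are exported
from `cspark.sdk` and that the attribute names follow the description above:

```py
import cspark.sdk as Spark

spark = Spark.Client(env='my-env', tenant='my-tenant', api_key='my-api-key')

try:
    with spark.services as services:
        response = services.execute('my-folder/my-service', inputs={'value': 42})
        print(response.data)
except Spark.SparkApiError as err:
    # API-side failure: inspect the HTTP status code and the request identifier.
    print(f'API error: status={err.status}, request_id={err.request_id}')
except Spark.SparkSdkError as err:
    # Client-side failure, e.g., a malformed argument.
    print(f'SDK error: {err}')
```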

Some of the errors derived from `SparkApiError` are:

| Type                      | Status | When                           |
| ------------------------- | ------ | ------------------------------ |
| `BadRequestError`         | 400    | invalid request                |
| `UnauthorizedError`       | 401    | missing or invalid credentials |
| `ForbiddenError`          | 403    | insufficient permissions       |
| `NotFoundError`           | 404    | resource not found             |
| `ConflictError`           | 409    | resource already exists        |
| `RateLimitError`          | 429    | too many requests              |
| `InternalServerError`     | 500    | server-side error              |
| `ServiceUnavailableError` | 503    | server is down                 |
| `UnknownApiError`         | `None` | unknown error                  |

## API Parity

The SDK aims to provide full parity with the Spark APIs over time. Below is a list
of the currently supported APIs.

[Authentication API](./docs/authentication.md) - manages access tokens using
OAuth2.0 Client Credentials flow:

- `Authorization.oauth.retrieve_token(config)` generates new access tokens.

[Services API](./docs/services.md) - manages Spark services:

- `Spark.services.create(data)` creates a new Spark service.
- `Spark.services.execute(uri, inputs)` executes a Spark service.
- `Spark.services.transform(uri, inputs)` executes a Spark service using `Transforms`.
- `Spark.services.get_versions(uri)` lists all the versions of a service.
- `Spark.services.get_schema(uri)` gets the schema of a service.
- `Spark.services.get_metadata(uri)` gets the metadata of a service.
- `Spark.services.download(uri)` downloads the Excel file of a service.
- `Spark.services.recompile(uri)` recompiles a service using specific compiler versions.
- `Spark.services.validate(uri, data)` validates input data using static or dynamic validations.
- `Spark.services.delete(uri)` deletes an existing service, including all its versions.

[Batches API](./docs/batches.md) - manages asynchronous batch processing:

- `Spark.batches.describe()` describes the batch pipelines across a tenant.
- `Spark.batches.create(params, [options])` creates a new batch pipeline.
- `Spark.batches.of(id)` defines a client-side batch pipeline by ID.
- `Spark.batches.of(id).get_info()` gets the details of a batch pipeline.
- `Spark.batches.of(id).get_status()` gets the status of a batch pipeline.
- `Spark.batches.of(id).push(data, [options])` adds input data to a batch pipeline.
- `Spark.batches.of(id).pull([options])` retrieves the output data from a batch pipeline.
- `Spark.batches.of(id).dispose()` closes a batch pipeline.
- `Spark.batches.of(id).cancel()` cancels a batch pipeline.

[Log History API](./docs/history.md) - manages service execution logs:

- `Spark.logs.rehydrate(uri, call_id)` rehydrates the executed model into the original Excel file.
- `Spark.logs.download(data)` downloads service execution logs as a `csv` or `json` file.

[ImpEx API](./docs/impex.md) - imports and exports Spark services:

- `Spark.impex.exp(data)` exports Spark entities (versions, services, or folders).
- `Spark.impex.imp(data)` imports previously exported Spark entities into the platform.

[Other APIs](./docs/misc.md) - for other functionality:

- `Spark.wasm.download(uri)` downloads a service's WebAssembly module.
- `Spark.files.download(url)` downloads temporary files issued by the Spark platform.

## Contributing

Feeling motivated enough to contribute? Great! Your help is always appreciated.

Please read [CONTRIBUTING.md][contributing-url] for details on the code of
conduct and the process for submitting pull requests.

## Copyright and License

[Apache-2.0][license-url]

<!-- References -->
[version-img]: https://img.shields.io/pypi/v/cspark
[version-url]: https://pypi.python.org/pypi/cspark
[license-img]: https://img.shields.io/pypi/l/cspark
[license-url]: https://github.com/Coherent-Partners/spark-python-sdk/blob/main/LICENSE
[ci-img]: https://github.com/Coherent-Partners/spark-python-sdk/workflows/Build/badge.svg
[ci-url]: https://github.com/Coherent-Partners/spark-python-sdk/actions/workflows/build.yml

[api-key-docs]: https://docs.coherent.global/spark-apis/authorization-api-keys
[bearer-token-docs]: https://docs.coherent.global/spark-apis/authorization-bearer-token
[oauth2-docs]: https://docs.coherent.global/spark-apis/authorization-client-credentials
[contributing-url]: https://github.com/Coherent-Partners/spark-python-sdk/blob/main/CONTRIBUTING.md
[examples]: https://github.com/Coherent-Partners/spark-python-sdk/tree/main/examples
[docs]: https://github.com/Coherent-Partners/spark-python-sdk/tree/main/docs
[uri-url]: https://github.com/Coherent-Partners/spark-python-sdk/blob/main/src/cspark/sdk/resources/_base.py
[logging-level]: https://docs.python.org/3/library/logging.html#logging-levels

            
