neptune-scale

Name: neptune-scale
Version: 0.6.0
Home page: https://github.com/neptune-ai/neptune-client-scale
Summary: A minimal client library
Upload time: 2024-09-09 14:01:32
Author: neptune.ai
Requires Python: <4.0,>=3.8
License: Apache-2.0
Keywords: MLOps, ML experiment tracking, ML model registry, ML model store, ML metadata store
# Neptune Scale client

> [!NOTE]
> This package only works with the `3.0` version of neptune.ai called Neptune Scale, which is in beta.
>
> You can't use the Scale client with the stable Neptune `2.x` versions currently available to SaaS and self-hosting customers. For the Python client corresponding to Neptune `2.x`, see https://github.com/neptune-ai/neptune-client.

**What is Neptune?**

Neptune is an experiment tracker. It enables researchers to monitor their model training, visualize and compare model metadata, and collaborate on AI/ML projects within a team.

**What's different about Neptune Scale?**

Neptune Scale is the next major version of Neptune. It's built on an entirely new architecture for ingesting and rendering data, with a focus on responsiveness and accuracy at scale.

Neptune Scale supports forked experiments, with built-in mechanics for retaining run ancestry. This way, you can focus on analyzing the latest runs while still being able to visualize the full history of your experiments.

## Installation

```bash
pip install neptune-scale
```

### Configure API token and project

1. Log in to your Neptune Scale workspace.
1. Create a project, or find an existing project you want to send the run metadata to.
1. Get your API token from your user menu in the bottom left corner.

    > If you're a workspace admin, you can also set up a service account. This way, multiple people or machines can share the same API token. To get started, go to the workspace settings in the top right corner.

1. In the environment where neptune-scale is installed, set the following environment variables to the API token and project name:

    ```bash
    export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
    ```

    ```bash
    export NEPTUNE_PROJECT="team-alpha/project-x"
    ```

You're ready to start using Neptune Scale.
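
If you prefer to set these values from Python (for example, in a notebook), you can export the same environment variables with `os.environ` before creating a run. This is a minimal sketch; the token and project name are placeholders.

```python
import os

# Placeholders only: substitute your own API token and project name.
os.environ["NEPTUNE_API_TOKEN"] = "h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
os.environ["NEPTUNE_PROJECT"] = "team-alpha/project-x"
```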

## Example usage

```python
from neptune_scale import Run

run = Run(
    family="RunFamilyName",
    run_id="SomeUniqueRunIdentifier",
)

run.log_configs(
    data={"learning_rate": 0.001, "batch_size": 64},
)

# inside a training loop
for step in range(100):
    run.log_metrics(
        step=step,
        data={"acc": 0.89, "loss": 0.17},
    )

run.add_tags(tags=["tag1", "tag2"])

run.close()
```

## API reference

### `Run`

Representation of experiment tracking metadata logged with Neptune Scale.

#### Initialization

Initialize with the class constructor:

```python
from neptune_scale import Run

run = Run(...)
```

or using a context manager:

```python
from neptune_scale import Run

with Run(...) as run:
    ...
```

__Parameters__

| Name             | Type             | Default | Description                                                               |
|------------------|------------------|---------|---------------------------------------------------------------------------|
| `family`         | `str`            | -       | Identifies related runs. All runs of the same lineage must have the same `family` value. That is, forking is only possible within the same family. Max length: 128 bytes. |
| `run_id`         | `str`            | -       | Identifier of the run. Must be unique within the project. Max length: 128 bytes. |
| `project`        | `str`, optional  | `None`  | Name of a project in the form `workspace-name/project-name`. If `None`, the value of the `NEPTUNE_PROJECT` environment variable is used. |
| `api_token`      | `str`, optional  | `None`  | Your Neptune API token or a service account's API token. If `None`, the value of the `NEPTUNE_API_TOKEN` environment variable is used. To keep your token secure, don't place it in source code. Instead, save it as an environment variable. |
| `resume`         | `bool`, optional | `False` | If `False` (default), creates a new run. To continue an existing run, set to `True` and pass the ID of an existing run to the `run_id` argument. To fork a run, use `fork_run_id` and `fork_step` instead. |
| `mode`           | `"async"` or `"disabled"` | `"async"` | Mode of operation. If set to `"disabled"`, the run doesn't log any metadata. |
| `experiment_name`  | `str`, optional  | `None` | Name of the experiment to associate the run with. Learn more about [experiments](https://docs-beta.neptune.ai/experiments) in the Neptune documentation. |
| `creation_time`  | `datetime`, optional | `None` | Custom creation time of the run. |
| `fork_run_id`    | `str`, optional  | `None` | The ID of the run to fork from. |
| `fork_step`      | `int`, optional  | `None` | The step number to fork from. |
| `max_queue_size` | `int`, optional  | `1000000` | Maximum number of operations queued for processing. If you see the `on_queue_full_callback` function being called, raise this value. |
| `on_queue_full_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when the queue is full. The function must take as an argument the exception that made the queue full and, as an optional argument, a timestamp of when the exception was last raised. |
| `on_network_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a network error occurs. |
| `on_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | The default callback function triggered when an unrecoverable error occurs and wasn't caught by other callbacks. In this callback, you can perform cleanup operations and stop the training script. To end the run in this case, use [`terminate()`](#terminate). |
| `on_warning_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a warning occurs. |

__Examples__

Create a new run:

```python
from neptune_scale import Run

with Run(
    project="team-alpha/project-x",
    api_token="h0dHBzOi8aHR0cHM6...Y2MifQ==",
    family="aquarium",
    run_id="likable-barracuda",
) as run:
    ...
```

> [!TIP]
> Find your API token in your user menu, in the bottom-left corner of the Neptune app.
>
> Or, to use shared API tokens for multiple users or non-human accounts, create a service account in your workspace settings.

Create a forked run and mark it as an experiment:

```python
with Run(
    family="aquarium",
    run_id="adventurous-barracuda",
    experiment_name="swim-further",
    fork_run_id="likable-barracuda",
    fork_step=102,
) as run:
    ...
```

Continue a run:

```python
with Run(
    family="aquarium",
    run_id="likable-barracuda",  # a Neptune run with this ID already exists
    resume=True,
) as run:
    ...
```
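
The following sketch shows one way to wire up the optional callback parameters from the table above. The handler names, bodies, and run ID are illustrative assumptions, not part of the library; only the parameter names and callback signatures come from the table.

```python
from typing import Optional

from neptune_scale import Run


def handle_queue_full(exc: BaseException, last_raised_at: Optional[float] = None) -> None:
    # Illustrative handler: if this fires often, consider raising max_queue_size.
    print(f"Operation queue is full: {exc}")


def handle_error(exc: BaseException, last_raised_at: Optional[float] = None) -> None:
    # Illustrative handler for unrecoverable errors; see terminate() below.
    print(f"Unrecoverable error: {exc}")


run = Run(
    family="aquarium",
    run_id="curious-barracuda",
    max_queue_size=1_000_000,
    on_queue_full_callback=handle_queue_full,
    on_error_callback=handle_error,
)
```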

### `close()`

The regular way to end a run. Waits for all locally queued data to be processed by Neptune (see [`wait_for_processing()`](#wait_for_processing)) and closes the run.

This is a blocking operation. Call the function at the end of your script, after your model training is completed.

__Examples__

```python
from neptune_scale import Run

run = Run(...)

# logging and training code

run.close()
```

If using a context manager, Neptune automatically closes the run upon exiting the context:

```python
with Run(...) as run:
    ...

# run is closed at the end of the context
```

### `log_configs()`

Logs the specified metadata to a Neptune run.

You can log configurations or other single values. Pass the metadata as a dictionary `{key: value}` with

- `key`: path to where the metadata should be stored in the run.
- `value`: the piece of metadata to log.

For example, `{"parameters/learning_rate": 0.001}`. In the field path, each forward slash `/` nests the field under a namespace. Use namespaces to structure the metadata into meaningful categories.

__Parameters__

| Name          | Type                                               | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `data`      | `Dict[str, Union[float, bool, int, str, datetime]]`, optional  | `None` | Dictionary of configs or other values to log. Available types: float, integer, Boolean, string, and datetime. |

__Examples__

Create a run and log metadata:

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(
        data={
            "parameters/learning_rate": 0.001,
            "parameters/batch_size": 64,
        },
    )
```

### `log_metrics()`

Logs the specified metrics to a Neptune run.

You can log metrics representing a series of numeric values. Pass the metadata as a dictionary `{key: value}` with

- `key`: path to where the metadata should be stored in the run.
- `value`: the piece of metadata to log.

For example, `{"metrics/accuracy": 0.89}`. In the field path, each forward slash `/` nests the field under a namespace. Use namespaces to structure the metadata into meaningful categories.

__Parameters__

| Name        | Type                                     | Default | Description                                                                                                                                                                                                                                                          |
|-------------|------------------------------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `step`      | `Union[float, int]`, optional            | `None`  | Index of the log entry. Must be increasing. If not specified, the `log_metrics()` call increments the step starting from the highest already logged value. **Tip:** Using float rather than int values can be useful, for example, when logging substeps in a batch. |
| `timestamp` | `datetime`, optional                     | `None`  | Time of logging the metadata.                                                                                                                                                                                                                                        |
| `data`      | `Dict[str, Union[float, int]]`, optional | `None`  | Dictionary of metrics to log. Each metric value is associated with a step. To log multiple metrics at once, pass multiple key-value pairs.                                                                                                                           |

__Examples__

Create a run and log metrics:

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_metrics(
        step=1.2,
        data={"loss": 0.14, "acc": 0.78},
    )
```

**Note:** To correlate logged values, make sure to send all metadata related to a step in a single `log_metrics()` call, or specify the step explicitly.

When the run is forked off an existing one, the step can't be smaller than the step value of the fork point.
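
As a sketch of the float-step tip above, substeps within a batch can be logged by computing a fractional step value. The loop structure, field names, and metric values here are illustrative assumptions.

```python
from neptune_scale import Run

with Run(...) as run:
    for epoch in range(10):
        for batch in range(100):
            # Fractional step: epoch 3, batch 25 -> step 3.25 (strictly increasing)
            step = epoch + batch / 100
            run.log_metrics(
                step=step,
                data={"metrics/batch/loss": 0.14, "metrics/batch/acc": 0.78},
            )
```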

### `add_tags()`

Adds the specified tags to the run.

__Parameters__

| Name          | Type                                         | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `tags`        | `Union[List[str], Set[str]]`                 | - | List or set of tags to add to the run. |
| `group_tags`  | `bool`, optional                             | `False`  | Add group tags instead of regular tags. |

__Example__

```python
with Run(...) as run:
    run.add_tags(tags=["tag1", "tag2", "tag3"])
```
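
To add group tags instead of regular tags, use the `group_tags` flag documented in the table above. A minimal sketch; the tag name is a placeholder:

```python
with Run(...) as run:
    # Adds a group tag rather than a regular tag
    run.add_tags(tags=["ablation-study"], group_tags=True)
```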

### `remove_tags()`

Removes the specified tags from the run.

__Parameters__

| Name          | Type                                         | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `tags`        | `Union[List[str], Set[str]]`                 | - | List or set of tags to remove from the run. |
| `group_tags`  | `bool`, optional                             | `False`  | Remove group tags instead of regular tags. |

__Example__

```python
with Run(...) as run:
    run.remove_tags(tags=["tag2", "tag3"])
```

### `wait_for_submission()`

Waits until all metadata is submitted to Neptune for processing.

Note that submitted data is not yet saved in Neptune. To wait until the data is saved, see [`wait_for_processing()`](#wait_for_processing).

__Parameters__

| Name      | Type              | Default | Description                                                               |
|-----------|-------------------|---------|---------------------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | Maximum time to wait for submission, in seconds.                          |
| `verbose` | `bool`, optional  | `True`  | If `True` (default), prints messages about the waiting process.           |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_submission()
    run.log_metrics(...)  # called once queued Neptune operations have been submitted
```

### `wait_for_processing()`

Waits until all metadata is processed by Neptune.

Once the call is complete, the data is saved in Neptune.

__Parameters__

| Name      | Type              | Default | Description                                                               |
|-----------|-------------------|---------|---------------------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | Maximum time to wait for processing, in seconds.                          |
| `verbose` | `bool`, optional  | `True`  | If `True` (default), prints messages about the waiting process.           |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_processing()
    run.log_metrics(...)  # called once submitted data has been processed
```
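
If you need a bounded wait, combine the `timeout` and `verbose` parameters from the table above. A minimal sketch; the 60-second timeout is an arbitrary illustrative value:

```python
with Run(...) as run:
    run.log_metrics(step=1, data={"loss": 0.5})
    # Wait at most 60 seconds for the data to be saved, without progress messages
    run.wait_for_processing(timeout=60, verbose=False)
```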

### `terminate()`

In case an unrecoverable error is encountered, you can terminate the failed run in your error callback.

**Note:** This stops the processing of in-flight operations and disables the logging of new data. However,
the training process itself isn't interrupted.

__Example__

```python
from neptune_scale import Run


def my_error_callback(exc, last_raised_at=None):
    # The documented callback signature passes the exception and an optional
    # timestamp; terminate the run without interrupting the training process.
    run.terminate()


run = Run(..., on_error_callback=my_error_callback)
```

---

## Getting help

For help, contact support@neptune.ai.


            
