neptune-scale

- **Name:** neptune-scale
- **Version:** 0.7.1
- **Home page:** https://github.com/neptune-ai/neptune-client-scale
- **Summary:** A minimal client library
- **Upload time:** 2024-10-28 14:03:53
- **Author:** neptune.ai
- **Requires Python:** <4.0,>=3.8
- **License:** Apache-2.0
- **Keywords:** MLOps, ML experiment tracking, ML model registry, ML model store, ML metadata store
# Neptune Scale client

> [!NOTE]
> This package only works with Neptune `3.0`, called Neptune Scale, which is in beta.
>
> You can't use the Scale client with the stable Neptune `2.x` versions currently available to SaaS and self-hosting customers. For the Python client corresponding to Neptune `2.x`, see https://github.com/neptune-ai/neptune-client.

**What is Neptune?**

Neptune is an experiment tracker. It enables researchers to monitor their model training, visualize and compare model metadata, and collaborate on AI/ML projects within a team.

**What's different about Neptune Scale?**

Neptune Scale is the next major version of Neptune. It's built on an entirely new architecture for ingesting and rendering data, with a focus on responsiveness and accuracy at scale.

Neptune Scale supports forked experiments, with built-in mechanics for retaining run ancestry. This way, you can focus on analyzing the latest runs, but also visualize the full history of your experiments.

## Installation

```bash
pip install neptune-scale
```

### Configure API token and project

1. Log in to your Neptune Scale workspace.
1. Create a project, or find an existing project you want to send the run metadata to.
1. Get your API token from your user menu in the bottom left corner.

    > If you're a workspace admin, you can also set up a service account. This way, multiple people or machines can share the same API token. To get started, go to the workspace settings in the top right corner.

1. In the environment where neptune-scale is installed, set the following environment variables to the API token and project name:

    ```bash
    export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ=="
    ```

    ```bash
    export NEPTUNE_PROJECT="team-alpha/project-x"
    ```

You're ready to start using Neptune Scale.

For more help with setup, see [Get started][scale-docs] in the Neptune documentation.
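
As a quick sanity check before creating a run, you can verify that both variables are set. The helper below is illustrative, not part of neptune-scale:

```python
import os

def missing_neptune_vars(env=os.environ):
    """Return the names of required Neptune variables that aren't set."""
    required = ("NEPTUNE_API_TOKEN", "NEPTUNE_PROJECT")
    return [name for name in required if not env.get(name)]

# fail early with a clear message instead of failing inside the client
missing = missing_neptune_vars()
if missing:
    print(f"Set these environment variables first: {', '.join(missing)}")
```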

## Example usage

Create an experiment:

```python
from neptune_scale import Run

run = Run(
    experiment_name="ExperimentName",
    run_id="SomeUniqueRunIdentifier",
)
```

Then, call logging methods on the run and pass the metadata as a dictionary.

Log configuration or other simple values with [`log_configs()`](#log_configs):

```python
run.log_configs(
    {
        "learning_rate": 0.001,
        "batch_size": 64,
    }
)
```

Inside a training loop or other iteration, use [`log_metrics()`](#log_metrics) to append metric values:

```python
# inside a loop
for step in range(100):
    run.log_metrics(
        data={"acc": 0.89, "loss": 0.17},
        step=step,
    )
```

To help identify and group runs, you can apply tags:

```python
run.add_tags(tags=["tag1", "tag2"])
```

The run is stopped when the context is exited or the script finishes, but you can call [`close()`](#close) to stop it explicitly once logging is no longer needed:

```python
run.close()
```

To explore your experiment, open the project in Neptune and navigate to **Runs**. For an example, [see the demo project &rarr;][demo-project]

For more instructions, see the Neptune documentation:

- [Quickstart][quickstart]
- [Create an experiment][new-experiment]
- [Log metadata][log-metadata]

## API reference

### `Run`

Representation of experiment tracking metadata logged with Neptune Scale.

#### Initialization

Initialize with the class constructor:

```python
from neptune_scale import Run

run = Run(...)
```

or using a context manager:

```python
from neptune_scale import Run

with Run(...) as run:
    ...
```

__Parameters__

| Name             | Type             | Default | Description                                                               |
|------------------|------------------|---------|---------------------------------------------------------------------------|
| `run_id`         | `str`            | -       | Identifier of the run. Must be unique within the project. Max length: 128 bytes. |
| `project`        | `str`, optional  | `None`  | Name of a project in the form `workspace-name/project-name`. If `None`, the value of the `NEPTUNE_PROJECT` environment variable is used. |
| `api_token`      | `str`, optional  | `None`  | Your Neptune API token or a service account's API token. If `None`, the value of the `NEPTUNE_API_TOKEN` environment variable is used. To keep your token secure, don't place it in source code. Instead, save it as an environment variable. |
| `resume`         | `bool`, optional | `False` | If `False` (default), creates a new run. To continue an existing run, set to `True` and pass the ID of an existing run to the `run_id` argument. To fork a run, use `fork_run_id` and `fork_step` instead. |
| `mode`           | `"async"` or `"disabled"` | `"async"` | Mode of operation. If set to `"disabled"`, the run doesn't log any metadata. |
| `experiment_name`  | `str`, optional  | `None` | Name of the experiment to associate the run with. Learn more about [experiments][experiments] in the Neptune documentation. |
| `creation_time`  | `datetime`, optional | `None` | Custom creation time of the run. |
| `fork_run_id`    | `str`, optional  | `None` | The ID of the run to fork from. |
| `fork_step`      | `int`, optional  | `None` | The step number to fork from. |
| `max_queue_size` | `int`, optional  | `1000000` | Maximum number of operations queued for processing. Raise this value if you see the `on_queue_full_callback` function being called. |
| `on_queue_full_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when the queue is full. The function must take as an argument the exception that made the queue full and, as an optional argument, a timestamp of when the exception was last raised. |
| `on_network_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a network error occurs. |
| `on_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | The default callback function triggered when an unrecoverable error occurs. Applies if an error wasn't caught by other callbacks. In this callback you can choose to perform your cleanup operations and close the training script. For how to end the run in this case, use [`terminate()`](#terminate). |
| `on_warning_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a warning occurs. |
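
All four callbacks share the signature shown in the table. A minimal sketch of a queue-full handler (the function name and message here are illustrative):

```python
from typing import Optional

def my_queue_full_callback(exc: BaseException, last_raised_at: Optional[float] = None) -> None:
    # illustrative handler: surface the event so you know to raise
    # max_queue_size or reduce logging frequency
    print(f"Neptune queue full: {exc!r} (last raised at: {last_raised_at})")
```

You would then pass it to the constructor, for example `Run(..., on_queue_full_callback=my_queue_full_callback)`.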

__Examples__

Create a new run:

```python
from neptune_scale import Run

with Run(
    project="team-alpha/project-x",
    api_token="h0dHBzOi8aHR0cHM6...Y2MifQ==",
    run_id="likable-barracuda",
) as run:
    ...
```

For help, see [Create an experiment][new-experiment] in the Neptune docs.

> [!TIP]
> Find your API token in your user menu, in the bottom-left corner of the Neptune app.
>
> Or, to use shared API tokens for multiple users or non-human accounts, create a service account in your workspace settings.

To restart an experiment, create a forked run:

```python
with Run(
    run_id="adventurous-barracuda",
    experiment_name="swim-further",
    fork_run_id="likable-barracuda",
    fork_step=102,
) as run:
    ...
```

Continue a run:

```python
with Run(
    run_id="likable-barracuda",  # a Neptune run with this ID already exists
    resume=True,
) as run:
    ...
```

### `close()`

The regular way to end a run. Waits for all locally queued data to be processed by Neptune (see [`wait_for_processing()`](#wait_for_processing)) and closes the run.

This is a blocking operation. Call the function at the end of your script, after your model training is completed.

__Examples__

```python
from neptune_scale import Run

run = Run(...)

# logging and training code

run.close()
```

If using a context manager, Neptune automatically closes the run upon exiting the context:

```python
with Run(...) as run:
    ...

# run is closed at the end of the context
```

### `log_configs()`

Logs the specified metadata to a Neptune run.

You can log configurations or other single values. Pass the metadata as a dictionary `{key: value}` with

- `key`: path to where the metadata should be stored in the run.
- `value`: the piece of metadata to log.

For example, `{"parameters/learning_rate": 0.001}`. In the field path, each forward slash `/` nests the field under a namespace. Use namespaces to structure the metadata into meaningful categories.
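
Conceptually, slash-separated paths behave like keys in a nested dictionary. The sketch below only illustrates the grouping; it isn't how Neptune stores data internally:

```python
def to_nested(flat):
    """Group flat 'a/b/c' keys into nested dicts, mirroring namespace structure."""
    nested = {}
    for path, value in flat.items():
        *namespaces, field = path.split("/")
        node = nested
        for ns in namespaces:
            node = node.setdefault(ns, {})
        node[field] = value
    return nested
```

For example, `to_nested({"parameters/learning_rate": 0.001})` groups the field under a `parameters` namespace.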

__Parameters__

| Name          | Type                                               | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `data`      | `Dict[str, Union[float, bool, int, str, datetime]]`, optional  | `None` | Dictionary of configs or other values to log. Available types: float, integer, Boolean, string, and datetime. |

__Examples__

Create a run and log metadata:

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(
        data={
            "parameters/learning_rate": 0.001,
            "parameters/batch_size": 64,
        },
    )
```

### `log_metrics()`

Logs the specified metrics to a Neptune run.

You can log metrics representing a series of numeric values. Pass the metadata as a dictionary `{key: value}` with

- `key`: path to where the metadata should be stored in the run.
- `value`: the piece of metadata to log.

For example, `{"metrics/accuracy": 0.89}`. In the field path, each forward slash `/` nests the field under a namespace. Use namespaces to structure the metadata into meaningful categories.

__Parameters__

| Name        | Type                                     | Default | Description                                                                                                                                                                                                                                                          |
|-------------|------------------------------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data`      | `Dict[str, Union[float, int]]` | `None`  | Dictionary of metrics to log. Each metric value is associated with a step. To log multiple metrics at once, pass multiple key-value pairs.                                                                                                                           |
| `step`      | `Union[float, int]`           | `None`  | Index of the log entry. Must be increasing. <br> **Tip:** Using float rather than int values can be useful, for example, when logging substeps in a batch. |
| `timestamp` | `datetime`, optional                     | `None`  | Time of logging the metadata.                                                                                                                                                                                                                                        |
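
One way to use float steps, as the tip suggests, is to encode batch progress within an epoch as the fractional part. This scheme is a convention of this example, not something Neptune requires:

```python
def fractional_step(epoch: int, batch_idx: int, num_batches: int) -> float:
    """Map (epoch, batch) to a monotonically increasing float step."""
    # e.g. epoch 1, batch 20 of 100 gives step 1.2
    return epoch + batch_idx / num_batches
```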

__Examples__

Create a run and log metrics:

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_metrics(
        data={"loss": 0.14, "acc": 0.78},
        step=1.2,
    )
```

**Note:** To correlate logged values, make sure to send all metadata related to a step in a single `log_metrics()` call, or specify the step explicitly.

When the run is forked off an existing one, the step can't be smaller than the step value of the fork point.

### `add_tags()`

Adds the list of tags to the run.

__Parameters__

| Name          | Type                                         | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `tags`        | `Union[List[str], Set[str]]`                 | - | List or set of tags to add to the run. |
| `group_tags`  | `bool`, optional                             | `False`  | Add group tags instead of regular tags. |

__Example__

```python
with Run(...) as run:
    run.add_tags(tags=["tag1", "tag2", "tag3"])
```

### `remove_tags()`

Removes the specified tags from the run.

__Parameters__

| Name          | Type                                         | Default | Description                                                               |
|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|
| `tags`        | `Union[List[str], Set[str]]`                 | - | List or set of tags to remove from the run. |
| `group_tags`  | `bool`, optional                             | `False`  | Remove group tags instead of regular tags. |

__Example__

```python
with Run(...) as run:
    run.remove_tags(tags=["tag2", "tag3"])
```

### `wait_for_submission()`

Waits until all metadata is submitted to Neptune for processing.

Note that submitted data is not yet saved in Neptune; for that, use [`wait_for_processing()`](#wait_for_processing).
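
The difference between the two wait calls can be pictured as a two-stage pipeline. The class below is a toy model for illustration, not the client's internals:

```python
class LoggingPipelineModel:
    """Toy model: operations move from queued -> submitted -> processed."""

    def __init__(self):
        self.queued, self.submitted, self.processed = [], [], []

    def log(self, op):
        self.queued.append(op)

    def wait_for_submission(self):
        # data has left the local queue, but isn't saved server-side yet
        self.submitted += self.queued
        self.queued.clear()

    def wait_for_processing(self):
        # implies submission; once this returns, the data is saved in Neptune
        self.wait_for_submission()
        self.processed += self.submitted
        self.submitted.clear()
```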

__Parameters__

| Name      | Type              | Default | Description                                                               |
|-----------|-------------------|---------|---------------------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | In seconds, the maximum time to wait for submission.                      |
| `verbose` | `bool`, optional  | `True`  | If True (default), prints messages about the waiting process.             |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_submission()
    run.log_metrics(...)  # called once queued Neptune operations have been submitted
```

### `wait_for_processing()`

Waits until all metadata is processed by Neptune.

Once the call is complete, the data is saved in Neptune.

__Parameters__

| Name      | Type              | Default | Description                                                               |
|-----------|-------------------|---------|---------------------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | In seconds, the maximum time to wait for processing.                      |
| `verbose` | `bool`, optional  | `True`  | If True (default), prints messages about the waiting process.             |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_processing()
    run.log_metrics(...)  # called once submitted data has been processed
```

### `terminate()`

If an unrecoverable error is encountered, you can terminate the failed run in your error callback.

**Note:** This stops the processing of in-flight operations and disables logging of new data. The training process itself isn't interrupted.

__Example__

```python
from neptune_scale import Run

def my_error_callback(exc, timestamp):
    run.terminate()


run = Run(..., on_error_callback=my_error_callback)
```

---

## Getting help

For help, contact support@neptune.ai.


[scale-docs]: https://docs-beta.neptune.ai/setup
[experiments]: https://docs-beta.neptune.ai/experiments
[log-metadata]: https://docs-beta.neptune.ai/log_metadata
[new-experiment]: https://docs-beta.neptune.ai/new_experiment
[quickstart]: https://docs-beta.neptune.ai/quickstart
[demo-project]: https://scale.neptune.ai/o/neptune/org/LLM-training-example/runs/compare?viewId=9d0e03d5-d0e9-4c0a-a546-f065181de1d2&dash=charts&compare=uItSQytpSbTH0c84P6iKGycQhv1rZr-qt4Z-CzEVBwD0


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/neptune-ai/neptune-client-scale",
    "name": "neptune-scale",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.8",
    "maintainer_email": null,
    "keywords": "MLOps, ML Experiment Tracking, ML Model Registry, ML Model Store, ML Metadata Store",
    "author": "neptune.ai",
    "author_email": "contact@neptune.ai",
    "download_url": "https://files.pythonhosted.org/packages/e8/dd/058e7038ee803b189003d075d417e7a73af4c87c69108323b5882fd3e7e8/neptune_scale-0.7.1.tar.gz",
    "platform": null,
    "description": "# Neptune Scale client\n\n> [!NOTE]\n> This package only works with the `3.0` version of neptune.ai called Neptune Scale, which is in beta.\n>\n> You can't use the Scale client with the stable Neptune `2.x` versions currently available to SaaS and self-hosting customers. For the Python client corresponding to Neptune `2.x`, see https://github.com/neptune-ai/neptune-client.\n\n**What is Neptune?**\n\nNeptune is an experiment tracker. It enables researchers to monitor their model training, visualize and compare model metadata, and collaborate on AI/ML projects within a team.\n\n**What's different about Neptune Scale?**\n\nNeptune Scale is the next major version of Neptune. It's built on an entirely new architecture for ingesting and rendering data, with a focus on responsiveness and accuracy at scale.\n\nNeptune Scale supports forked experiments, with built-in mechanics for retaining run ancestry. This way, you can focus on analyzing the latest runs, but also visualize the full history of your experiments.\n\n## Installation\n\n```bash\npip install neptune-scale\n```\n\n### Configure API token and project\n\n1. Log in to your Neptune Scale workspace.\n1. Create a project, or find an existing project you want to send the run metadata to.\n1. Get your API token from your user menu in the bottom left corner.\n\n    > If you're a workspace admin, you can also set up a service account. This way, multiple people or machines can share the same API token. To get started, go to the workspace settings in the top right corner.\n\n1. 
In the environment where neptune-scale is installed, set the following environment variables to the API token and project name:\n\n    ```\n    export NEPTUNE_API_TOKEN=\"h0dHBzOi8aHR0cHM.4kl0jvYh3Kb8...ifQ==\"\n    ```\n\n    ```\n    export NEPTUNE_PROJECT=\"team-alpha/project-x\"\n    ```\n\nYou're ready to start using Neptune Scale.\n\nFor more help with setup, see [Get started][scale-docs] in the Neptune documentation.\n\n## Example usage\n\nCreate an experiment:\n\n```python\nfrom neptune_scale import Run\n\nrun = Run(\n    experiment_name=\"ExperimentName\",\n    run_id=\"SomeUniqueRunIdentifier\",\n)\n```\n\nThen, call logging methods on the run and pass the metadata as a dictionary.\n\nLog configuration or other simple values with [`log_configs()`](#log_configs):\n\n```python\nrun.log_configs(\n    {\n        \"learning_rate\": 0.001,\n        \"batch_size\": 64,\n    }\n)\n```\n\nInside a training loop or other iteration, use [`log_metrics()`](#log_metrics) to append metric values:\n\n```python\n# inside a loop\nfor step in range(100):\n    run.log_metrics(\n        data={\"acc\": 0.89, \"loss\": 0.17},\n        step=step,\n    )\n```\n\nTo help identify and group runs, you can apply tags:\n\n```python\nrun.add_tags(tags=[\"tag1\", \"tag2\"])\n```\n\nThe run is stopped when exiting the context or the script finishes execution, but you can use [`close()`](#close) to stop it once logging is no longer needed:\n\n```python\nrun.close()\n```\n\nTo explore your experiment, open the project in Neptune and navigate to **Runs**. 
For an example, [see the demo project &rarr;][demo-project]\n\nFor more instructions, see the Neptune documentation:\n\n- [Quickstart][quickstart]\n- [Create an experiment][new-experiment]\n- [Log metadata][log-metadata]\n\n## API reference\n\n### `Run`\n\nRepresentation of experiment tracking metadata logged with Neptune Scale.\n\n#### Initialization\n\nInitialize with the class constructor:\n\n```python\nfrom neptune_scale import Run\n\nrun = Run(...)\n```\n\nor using a context manager:\n\n```python\nfrom neptune_scale import Run\n\nwith Run(...) as run:\n    ...\n```\n\n__Parameters__\n\n| Name             | Type             | Default | Description                                                               |\n|------------------|------------------|---------|---------------------------------------------------------------------------|\n| `run_id`         | `str`            | -       | Identifier of the run. Must be unique within the project. Max length: 128 bytes. |\n| `project`        | `str`, optional  | `None`  | Name of a project in the form `workspace-name/project-name`. If `None`, the value of the `NEPTUNE_PROJECT` environment variable is used. |\n| `api_token`      | `str`, optional  | `None`  | Your Neptune API token or a service account's API token. If `None`, the value of the `NEPTUNE_API_TOKEN` environment variable is used. To keep your token secure, don't place it in source code. Instead, save it as an environment variable. |\n| `resume`         | `bool`, optional | `False` | If `False` (default), creates a new run. To continue an existing run, set to `True` and pass the ID of an existing run to the `run_id` argument. To fork a run, use `fork_run_id` and `fork_step` instead. |\n| `mode`           | `\"async\"` or `\"disabled\"` | `\"async\"` | Mode of operation. If set to `\"disabled\"`, the run doesn't log any metadata. |\n| `experiment_name`  | `str`, optional  | `None` | Name of the experiment to associate the run with. 
Learn more about [experiments][experiments] in the Neptune documentation. |\n| `creation_time`  | `datetime`, optional | `None` | Custom creation time of the run. |\n| `fork_run_id`    | `str`, optional  | `None` | The ID of the run to fork from. |\n| `fork_step`      | `int`, optional  | `None` | The step number to fork from. |\n| `max_queue_size` | `int`, optional  | 1M | Maximum number of operations queued for processing. 1 000 000 by default. You should raise this value if you see the `on_queue_full_callback` function being called. |\n| `on_queue_full_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when the queue is full. The function must take as an argument the exception that made the queue full and, as an optional argument, a timestamp of when the exception was last raised. |\n| `on_network_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a network error occurs. |\n| `on_error_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | The default callback function triggered when an unrecoverable error occurs. Applies if an error wasn't caught by other callbacks. In this callback you can choose to perform your cleanup operations and close the training script. For how to end the run in this case, use [`terminate()`](#terminate). |\n| `on_warning_callback` | `Callable[[BaseException, Optional[float]], None]`, optional | `None` | Callback function triggered when a warning occurs. 
|\n\n__Examples__\n\nCreate a new run:\n\n```python\nfrom neptune_scale import Run\n\nwith Run(\n    project=\"team-alpha/project-x\",\n    api_token=\"h0dHBzOi8aHR0cHM6...Y2MifQ==\",\n    run_id=\"likable-barracuda\",\n) as run:\n    ...\n```\n\nFor help, see [Create an experiment][new-experiment] in the Neptune docs.\n\n> [!TIP]\n> Find your API token in your user menu, in the bottom-left corner of the Neptune app.\n>\n> Or, to use shared API tokens for multiple users or non-human accounts, create a service account in your workspace settings.\n\nTo restart an experiment, create a forked run:\n\n```python\nwith Run(\n    run_id=\"adventurous-barracuda\",\n    experiment_name=\"swim-further\",\n    fork_run_id=\"likable-barracuda\",\n    fork_step=102,\n) as run:\n    ...\n```\n\nContinue a run:\n\n```python\nwith Run(\n    run_id=\"likable-barracuda\",  # a Neptune run with this ID already exists\n    resume=True,\n) as run:\n    ...\n```\n\n### `close()`\n\nThe regular way to end a run. Waits for all locally queued data to be processed by Neptune (see [`wait_for_processing()`](#wait_for_processing)) and closes the run.\n\nThis is a blocking operation. Call the function at the end of your script, after your model training is completed.\n\n__Examples__\n\n```python\nfrom neptune_scale import Run\n\nrun = Run(...)\n\n# logging and training code\n\nrun.close()\n```\n\nIf using a context manager, Neptune automatically closes the run upon exiting the context:\n\n```python\nwith Run(...) as run:\n    ...\n\n# run is closed at the end of the context\n```\n\n### `log_configs()`\n\nLogs the specified metadata to a Neptune run.\n\nYou can log configurations or other single values. Pass the metadata as a dictionary `{key: value}` with\n\n- `key`: path to where the metadata should be stored in the run.\n- `value`: the piece of metadata to log.\n\nFor example, `{\"parameters/learning_rate\": 0.001}`. In the field path, each forward slash `/` nests the field under a namespace. 
Use namespaces to structure the metadata into meaningful categories.\n\n__Parameters__\n\n| Name          | Type                                               | Default | Description                                                               |\n|---------------|----------------------------------------------------|---------|---------------------------------------------------------------------------|\n| `data`      | `Dict[str, Union[float, bool, int, str, datetime]]`, optional  | `None` | Dictionary of configs or other values to log. Available types: float, integer, Boolean, string, and datetime. |\n\n__Examples__\n\nCreate a run and log metadata:\n\n```python\nfrom neptune_scale import Run\n\nwith Run(...) as run:\n    run.log_configs(\n        data={\n            \"parameters/learning_rate\": 0.001,\n            \"parameters/batch_size\": 64,\n        },\n    )\n```\n\n### `log_metrics()`\n\nLogs the specified metrics to a Neptune run.\n\nYou can log metrics representing a series of numeric values. Pass the metadata as a dictionary `{key: value}` with\n\n- `key`: path to where the metadata should be stored in the run.\n- `value`: the piece of metadata to log.\n\nFor example, `{\"metrics/accuracy\": 0.89}`. In the field path, each forward slash `/` nests the field under a namespace. 
Use namespaces to structure the metadata into meaningful categories.\n\n__Parameters__\n\n| Name        | Type                                     | Default | Description                                                                                                                                                                                                                                                          |\n|-------------|------------------------------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `data`      | `Dict[str, Union[float, int]]` | `None`  | Dictionary of metrics to log. Each metric value is associated with a step. To log multiple metrics at once, pass multiple key-value pairs.                                                                                                                           |\n| `step`      | `Union[float, int]`           | `None`  | Index of the log entry. Must be increasing. <br> **Tip:** Using float rather than int values can be useful, for example, when logging substeps in a batch. |\n| `timestamp` | `datetime`, optional                     | `None`  | Time of logging the metadata.                                                                                                                                                                                                                                        |\n\n__Examples__\n\nCreate a run and log metrics:\n\n```python\nfrom neptune_scale import Run\n\nwith Run(...) 
as run:
    run.log_metrics(
        data={"loss": 0.14, "acc": 0.78},
        step=1.2,
    )
```

**Note:** To correlate logged values, make sure to send all metadata related to a given step in a single `log_metrics()` call, or specify the step explicitly.

If the run was forked off an existing one, the step can't be smaller than the step value of the fork point.

### `add_tags()`

Adds the specified tags to the run.

__Parameters__

| Name         | Type                         | Default | Description                             |
|--------------|------------------------------|---------|-----------------------------------------|
| `tags`       | `Union[List[str], Set[str]]` | -       | List or set of tags to add to the run.  |
| `group_tags` | `bool`, optional             | `False` | Add group tags instead of regular tags. |

__Example__

```python
with Run(...) as run:
    run.add_tags(tags=["tag1", "tag2", "tag3"])
```

### `remove_tags()`

Removes the specified tags from the run.

__Parameters__

| Name         | Type                         | Default | Description                                 |
|--------------|------------------------------|---------|----------------------------------------------|
| `tags`       | `Union[List[str], Set[str]]` | -       | List or set of tags to remove from the run.  |
| `group_tags` | `bool`, optional             | `False` | Remove group tags instead of regular tags.   |

__Example__

```python
with Run(...) as run:
    run.remove_tags(tags=["tag2", "tag3"])
```

### `wait_for_submission()`

Waits until all metadata is submitted to Neptune for processing.

Note that submitted data is not yet saved in Neptune (see [`wait_for_processing()`](#wait_for_processing)).

__Parameters__

| Name      | Type              | Default | Description                                                     |
|-----------|-------------------|---------|-----------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | Maximum time to wait for submission, in seconds.                |
| `verbose` | `bool`, optional  | `True`  | If `True` (default), prints messages about the waiting process. |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_submission()
    run.log_metrics(...)  # called once queued Neptune operations have been submitted
```

### `wait_for_processing()`

Waits until all metadata is processed by Neptune.

Once the call completes, the data is saved in Neptune.

__Parameters__

| Name      | Type              | Default | Description                                                     |
|-----------|-------------------|---------|-----------------------------------------------------------------|
| `timeout` | `float`, optional | `None`  | Maximum time to wait for processing, in seconds.                |
| `verbose` | `bool`, optional  | `True`  | If `True` (default), prints messages about the waiting process. |

__Example__

```python
from neptune_scale import Run

with Run(...) as run:
    run.log_configs(...)
    ...
    run.wait_for_processing()
    run.log_metrics(...)  # called once submitted data has been processed
```

### `terminate()`

If an unrecoverable error is encountered, you can terminate the failed run in your error callback.

**Note:** This stops the processing of in-flight operations and disables the logging of new data. However, the training process itself isn't interrupted.

__Example__

```python
from neptune_scale import Run

def my_error_callback(exc):
    run.terminate()


run = Run(..., on_error_callback=my_error_callback)
```

---

## Getting help

For help, contact support@neptune.ai.


[scale-docs]: https://docs-beta.neptune.ai/setup
[experiments]: https://docs-beta.neptune.ai/experiments
[log-metadata]: https://docs-beta.neptune.ai/log_metadata
[new-experiment]: https://docs-beta.neptune.ai/new_experiment
[quickstart]: https://docs-beta.neptune.ai/quickstart
[demo-project]: https://scale.neptune.ai/o/neptune/org/LLM-training-example/runs/compare?viewId=9d0e03d5-d0e9-4c0a-a546-f065181de1d2&dash=charts&compare=uItSQytpSbTH0c84P6iKGycQhv1rZr-qt4Z-CzEVBwD0
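For reference, the logging, tagging, and waiting calls documented above can be combined into a single run lifecycle. The sketch below is a minimal, non-authoritative example: the `Run()` constructor arguments (`run_id` here) and the config and metric values are illustrative placeholders; adjust them to your project and API token setup.

```python
def log_training_run() -> None:
    """Sketch of a full run lifecycle using the methods documented above.

    Assumptions: `Run(run_id=...)` stands in for your actual Run
    parameters, and `log_configs()` is passed a flat dict of values.
    """
    from neptune_scale import Run  # requires `pip install neptune-scale`

    with Run(run_id="lifecycle-example") as run:
        # Log hyperparameters once, before the training loop.
        run.log_configs({"lr": 1e-3, "batch_size": 64})

        # Send all metadata for a given step in a single log_metrics() call.
        for step in range(1, 4):
            run.log_metrics(data={"loss": 1.0 / step}, step=step)

        run.add_tags(tags=["baseline"])

        # Block until Neptune confirms the data is saved, not just submitted.
        run.wait_for_processing(timeout=60)


if __name__ == "__main__":
    log_training_run()
```

The deferred import keeps the module importable even where `neptune-scale` isn't installed; in a real training script, a plain top-level import is fine.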