akerbp.mlops

- Name: akerbp.mlops
- Version: 3.5.0
- Summary: MLOps framework
- Upload time: 2024-01-03 07:53:22
- Requires Python: >=3.8
- License: Apache License 2.0 (Copyright 2021 Aker BP ASA)
# MLOps Framework
This is a framework for MLOps that deploys models as functions in Cognite Data
Fusion.

# User Guide

## Reference guide
This assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.
1. Train model to generate model artifacts
2. Manually upload artifacts to your test environment
   - This includes the model artifacts generated during training, the mapping and settings files for the model, the scaler object, etc. Basically everything that is needed to preprocess the data and make predictions using the trained model.
3. Deploy prediction service to test
   - This is handled by the CI/CD pipeline in Bitbucket
4. Manually promote model artifacts from test to production
5. Manually trigger deployment of model to production
   - Trigger in the CI/CD pipeline
6. Call deployed model
   - See section  "Calling a deployed model prediction service hosted in CDF" below
## Getting Started:
Follow these steps:
- Install package: `pip install akerbp.mlops`
- Set up pipeline file `bitbucket-pipelines.yml` and config file
  `mlops_settings.yaml` by running this command from your repo's root folder:
  ```bash
  python -m akerbp.mlops.deployment.setup
  ```
- Fill in user settings and then validate them by running this (from repo root):
  ```python
  from akerbp.mlops.core.config import validate_user_settings
  validate_user_settings()
  ```
  alternatively, run the setup again:
  ```bash
  python -m akerbp.mlops.deployment.setup
  ```
- Commit the pipeline and settings files to your repo
- Become familiar with the model template (see folder `model_code`) and make
  sure your model follows the same interface and file structure (see [Files and Folders Structure](#files-and-folders-structure))
- Follow or request the Bitbucket setup (described later)

At this point, every git push to the master branch will trigger a deployment in
the test environment. More information about the deployment pipelines is
provided later.

## Updating MLOps
Follow these steps:
- Install a new version using pip, e.g. `pip install akerbp.mlops==x`, or upgrade your existing version to the latest release by running `pip install --upgrade akerbp.mlops`
- Run this command from your repo's root folder:
  ```bash
  python -m akerbp.mlops.deployment.setup
  ```
  This will update the Bitbucket pipeline to use the newest release of
  akerbp.mlops and validate your settings. Once the settings are validated,
  commit the changes and you're ready to go!

## General Guidelines
Users should consider the following general guidelines:
- Model artifacts should **not** be committed to the repo. Folder `model_artifact`
  does store model artifacts for the model defined in `model_code`, but it is
  just to help users understand the framework ([see this section](#model-manager) on how to handle model artifacts)
- Follow the recommended file and folder structure ([see this section](#files-and-folders-structure))
- There can be several models in your repo: they need to be registered in the
  settings, and then they need to have their own model and test files
- Follow the import guidelines ([see this section](#import-guidelines))
- Make sure the prediction service gets access to model artifacts ([see this section](#model-manager))

## Configuration
MLOps configuration is stored in `mlops_settings.yaml`. Example for a project
with a single model:
```yaml
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
req_file: model_code/requirements.model
artifact_folder: model_artifact
artifact_version: 1 # Optional
test_file: model_code/test_model1.py
platform: cdf
dataset: mlops
python_version: py39
helper_models:
  - my_helper_model
info:
    prediction: &desc
        description: 'Description prediction service, model1'
        metadata:
          required_input:
            - ACS
            - RDEP
            - DEN
          training_wells:
            - 3/14
            - 2/7-18
          input_types:
            - int
            - float
            - string
          units:
            - s/ft
            - 1
            - kg/m3
          output_curves:
            - AC
          output_units:
            - s/ft
          petrel_exposure: False
          imputed: True
          num_filler: -999.15
          cat_filler: UNKNOWN
        owner: data@science.com
    training:
        << : *desc
        description: 'Description training service, model1'
        metadata:
          required_input:
            - ACS
            - RDEP
            - DEN
          output_curves:
            - AC
          hyperparameters:
            learning_rate: 1e-3
            batch_size: 100
            epochs: 10
```
| **Field**                   | **Description**                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| model_name                  | a suitable name for your model. No spaces or dashes are allowed                                                                                                                                                                                                                                                                                                                                                                                     |
| human_friendly_model_name   | Name of function (in CDF)                                                                                                                                                                                                                                                                                                                                                                                                                           |
| model_file                  | model file path relative to the repo's root folder. All required model code should be under the top folder in that path (`model_code` in the example above).                                                                                                                                                                                                                                                                                        |
| req_file                    | model requirement file. Do not use `.txt` extension!                                                                                                                                                                                                                                                                                                                                                                                                |
| artifact_folder             | model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the model manager so that it can be used in Test or Prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set to `null` if there is no model artifact. |
| artifact_version (optional) | artifact version number to use during deployment. Defaults to the latest version if not specified                                                                                                                                                                                                                                                                                                                                                   |
| test_file                   | test file to use. Set to `null` for no testing before deployment (not recommended).                                                                                                                                                                                                                                                                                                                                                                 |
| platform                    | deployment platform, either `cdf` (Cognite) or `local` for local testing.                                                                                                                                                                                                                                                                                                                                                                                      |
| python_version              | If `platform` is set to `cdf`, the `python_version` required by the model to be deployed needs to be specified. Available versions can be found [here](https://cognite-sdk-python.readthedocs-hosted.com/en/latest/functions.html#create-function)                                                                                                                                                                                                                                                                                                                                                                                      |
| helper_models | Array of helper models used for feature engineering during preprocessing. During deployment, the framework iterates through this list and checks that each helper model's requirements match those of the main model. For now we only check for akerbp.mlpet |
| dataset                     | CDF Dataset to use to read/write model artifacts (see [Model Manager](#model-manager)). Set to `null` if there is no dataset (not recommended).                                                                                                                                                                                                                                                                                                     |
| info                        | description, metadata and owner information for the prediction and training services. Training field can be discarded if there's no such service.                                                                                                                                                                                                                                                                                                   |

Note:
   all **paths** should be **unix style**, regardless of the platform.

Notes on metadata:
   We need to specify the metadata under info as a dictionary with strings as keys and values, as CDF only allows strings for now. We are also limited to the following (an illustrative pre-check is sketched after this list):
   - Keys can contain at most 16 characters
   - Values can contain at most 512 characters
   - At most 16 key-value pairs
   - Maximum size of entire metadata field is 512 bytes
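A hedged pre-check of these limits (an illustrative helper, not part of the framework API):
```python
def check_cdf_metadata(metadata: dict) -> None:
    """Validate the CDF metadata constraints listed above."""
    assert len(metadata) <= 16, "at most 16 key-value pairs allowed"
    for key, value in metadata.items():
        assert len(str(key)) <= 16, f"key longer than 16 characters: {key}"
        assert len(str(value)) <= 512, f"value longer than 512 characters for key: {key}"
    total_size = sum(len(str(k)) + len(str(v)) for k, v in metadata.items())
    assert total_size <= 512, "metadata exceeds the 512-byte total limit"
```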



If there are multiple models, model configuration should be separated using
`---`. Example:
```yaml
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
(...)
--- # <- this separates model1 and model2 :)
model_name: model2
human_friendly_model_name: 'My Second Model'
model_file: model_code/model2.py
(...)
```

## Files and Folders Structure
All the model code and files should be under a single folder, e.g. `model_code`.
**Required** files in this folder:
- `model.py`: implements the standard model interface
- `test_model.py`: tests to verify that the model code is correct and to verify
  correct deployment
- `requirements.model`: libraries needed (with specific **version numbers**),
  can't be called `requirements.txt`. Add the MLOps framework like this:
  ```bash
  # requirements.model
  (...) # your other reqs
  akerbp.mlops==MLOPS_VERSION
  ```
  During deployment, `MLOPS_VERSION` will be automatically replaced by the
  specific version **that you have installed locally**. Make sure you have the latest release on your local machine prior to model deployment.

For the prediction service, we require the model interface to have the following functions and class (a minimal sketch of both interfaces is shown after these lists):
  - initialization(), with required arguments
    - path to artifact folder
    - secrets
      - These arguments can safely be set to None, and the framework will handle everything under the hood.
      - Only set the path to the artifact folder to None if you are not using any artifacts.
  - predict(), with required arguments
    - data
    - init_object (output from the initialization() function)
    - secrets
      - You can safely set the secrets argument to None, and the framework will handle the secrets under the hood.
  - ModelException class that inherits from the Exception base class

For the training service, we require the model interface to have the following function and class:
  - train(), with required arguments
    - folder_path
      - path to store model artifacts to be consumed by the prediction service
  - ModelException class that inherits from the Exception base class
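A minimal sketch satisfying both interfaces is shown below; the artifact file name and return payloads are illustrative assumptions, so follow the template in `model_code` for the authoritative structure:
```python
# model_code/model.py (illustrative sketch)
import os
import pickle


class ModelException(Exception):
    """Raised by the model code when something goes wrong."""


def initialization(artifact_folder, secrets):
    # Both arguments may safely be None; the framework handles them under the hood.
    # Load the artifacts once per process (the file name is hypothetical).
    with open(os.path.join(artifact_folder, "model.pkl"), "rb") as f:
        return pickle.load(f)


def predict(data, init_object, secrets):
    # secrets may safely be None
    try:
        return init_object.predict(data)
    except Exception as e:
        raise ModelException(str(e)) from e


def train(folder_path):
    # Fit a model and write the artifacts to folder_path,
    # to be consumed by the prediction service
    ...
```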


The following structure is recommended for projects with multiple models:
- `model_code/model1/`
- `model_code/model2/`
- `model_code/common_code/`

This is because when deploying a model, e.g. `model1`, the top folder in the
path (`model_code` in the example above) is copied and deployed, so the
`common_code` folder (assumed to be needed by `model1`) is included. Note that
the `model2` folder would also be deployed (this is assumed to be unnecessary
but harmless).

## Import Guidelines
The repo's root folder is the base folder when importing. For example, assume
you have these files in the folder with model code:
 - `model_code/model.py`
 - `model_code/helper.py`
 - `model_code/data.csv`

If `model.py` needs to import `helper.py`, use: `import model_code.helper`. If
`model.py` needs to read `data.csv`, the right path is
`os.path.join('model_code', 'data.csv')`.
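Put together, a short sketch of what this looks like inside `model.py` (using the file names listed above):
```python
# model_code/model.py
import os

import pandas as pd

import model_code.helper  # the repo's root folder is the import base

# Data files are also read relative to the repo's root folder
data = pd.read_csv(os.path.join("model_code", "data.csv"))
```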

It's of course possible to import from the MLOps package, e.g. its logger:
```python
from akerbp.mlops.core import logger

logging = logger.get_logger("logger_name")
logging.debug("This is a debug log")
```

## Services
We consider two types of services: prediction and training.

Deployed services can be called with
```python
from akerbp.mlops.xx.helpers import call_function
output = call_function(external_id, data)
```
Where `xx` is either `'cdf'` or `'gc'`, and `external_id` follows the
structure `model-service-model_env`:
 - `model`: model name given by the user (settings file)
 - `service`: either `training` or `prediction`
 - `model_env`: either `dev`, `test` or `prod` (depending on the deployment
   environment)

The output has a status field (`ok` or `error`). If the status is `ok`, the
output also has a `prediction` and `prediction_file` field, or a `training`
field (depending on the type of service). The former is determined by the
`predict` method of the model, while the latter combines artifact metadata and
model metadata produced by the `train` function. Prediction services also have
a `model_id` field to keep track of which model was used to predict.
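
For example, a hedged sketch of handling a prediction-service response (field names as described above; the external id and input columns are illustrative):
```python
from akerbp.mlops.cdf.helpers import call_function

output = call_function(
    "model1-prediction-test",
    {"data": {"ACS": [0.1], "RDEP": [1.2], "DEN": [2.3]}},
)
if output["status"] == "ok":
    prediction = output["prediction"]
    model_id = output["model_id"]  # which model was used to predict
else:
    print("Service call returned an error")
```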

See below for more details on how to call prediction services hosted in CDF.

## Deployment Platform
Model services (described below) can be deployed to CDF (Cognite Data Fusion) or Google Cloud Run. The deployment platform is specified in the settings file.

CDF Functions include metadata when they are called. This information can be
used to redeploy a function (specifically, the `file_id` field). Example:

```python
import akerbp.mlops.cdf.helpers as cdf

human_readable_name = "My model"
external_id = "my_model-prediction-test"

cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
  human_readable_name,
  external_id,
  file_id,  # obtained from the metadata of a previous function call
  'Description',
  'your@email.com'
)
```
Note that the external id of a function needs to be unique, as it is used to distinguish functions across services and hosting environments.

It's possible to query available functions (can be filtered by environment
and/or tags). Example:
```python
import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(model_env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])
```
Functions can be deleted. Example:
```python
import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")
```
Functions can be called in parallel. Example:
```python
from akerbp.mlops.cdf.helpers import call_function_parallel
function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)
```

#TODO - Document common use cases for GCR

## Model Manager
Model Manager is the module dedicated to managing the model artifacts used by
prediction services (and generated by training services). This module uses CDF
Files as backend.

Model artifacts are versioned and stored together with user-defined metadata.
Uploading a new model increases the version count by 1 for that model and
environment. When deploying a prediction service, the latest model version is
chosen. It would be possible to extend the framework to allow deploying specific
versions or filtering by metadata.

Model artifacts are segregated by environment (e.g. only production artifacts
can be deployed to production). Model artifacts have to be uploaded manually to
the test (or dev) environment before deployment. Code example:
```python
import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets) # or define it directly
mm.setup()
folder_info = mm.upload_new_model_version(
  model_name,
  model_env,
  folder_path,
  metadata
)
```
If there are multiple models, you need to do this one at a time; a sketch is
shown below. Note that `model_name` corresponds to one of the model names
defined in `mlops_settings.yaml`, `model_env` is the target environment (where
the model should be available), `folder_path` is the local model artifact
folder and `metadata` is a dictionary with artifact metadata, e.g. performance,
git commit, etc.
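
For instance, a sketch of uploading artifacts for two models one at a time (the model names, folder paths and metadata are illustrative):
```python
import akerbp.mlops.model_manager as mm

mm.setup()
artifact_folders = {
    "model1": "model_artifact/model1",
    "model2": "model_artifact/model2",
}
for model_name, folder_path in artifact_folders.items():
    mm.upload_new_model_version(
        model_name,
        "test",  # model_env
        folder_path,
        {"git_commit": "abc123"},  # metadata
    )
```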

Model artifacts need to be promoted to the production environment (i.e. after
they have been deployed successfully to the test environment) before a
prediction service can be deployed in production.
```python
# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm

mm.setup()
mm.promote_model('model', 'version')
```

### Versioning
Each model artifact upload/promotion increments a version number (environment
dependent) available in Model Manager. However, this doesn't modify the model
artifacts used in existing prediction services (i.e. nothing changes in CDF
Functions). To reflect newly uploaded/promoted model artifacts in existing services, one needs to deploy the services again. Note that we don't have to specify the artifact version explicitly if we want to deploy using the latest artifacts, as this is done by default.

Recommended process to update a model artifact and prediction service:
1. New model features implemented in a feature branch
2. New artifact generated and uploaded to test environment
3. Feature branch merged with master
4. Test deployment is triggered automatically: prediction service is deployed to
   test environment with the latest artifact version (in test)
5. Prediction service in test is verified
6. Artifact version is promoted manually from command line whenever suitable
7. Production deployment is triggered manually from Bitbucket: prediction
   service is deployed to production with the latest artifact version (in prod)

It's possible to get an overview of the model artifacts managed by Model
Manager. Some examples (see `get_model_version_overview` documentation for other
possible queries):
```python
import akerbp.mlops.model_manager as mm
mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview()
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')
```
If the overview shows model artifacts that are not needed, it is possible to
remove them. For example if artifact "my_model/dev/5" is not needed:
```python
model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)
```
Model Manager will by default show information on the artifact to delete and ask
for user confirmation before proceeding. It's possible (but not recommended) to
disable this check. There's no identity check, so it's possible to delete any
model artifact (including those of other data scientists). Be careful!

It's possible to download a model artifact (e.g. to verify its content). For
example:
```python
mm.download_model_version('model_name', 'test', 'artifact_folder', version=5)
```
If no version is specified, the latest one is downloaded by default.

By default, Model Manager assumes artifacts are stored in the `mlops` dataset.
If your project uses a different one, you need to specify it during setup (see
the `setup` function).

Further information:
- Model Manager requires specific environmental variables (see next
  section) or suitable secrets passed to the `setup` function.
- In projects with a training service, you can rely on it to upload a first
  version of the model. The first prediction service deployment will fail, but
  you can deploy again after the training service has produced a model.
- When you deploy from the development environment (covered later in this
  document), the model artifacts in the settings file can point to existing
  local folders. These will then be used for the deployment. The version is then
  fixed to `model_name/dev/1`. Note that these artifacts are not uploaded to CDF
  Files.
- Prediction services are deployed with model artifacts (i.e. the artifact is
  copied to the project file used to create the CDF Function) so that they are
  available at prediction time. Downloading artifacts at run time would add
  waiting time, and files written at run time consume RAM.

## Model versioning
To allow for model versioning and rolling back to previous model deployments, the external id of the functions (in CDF) includes a version number that reflects the latest artifact version number at deployment time (see above).
Every time we upload/promote new model artifacts and deploy our services, the version number in the external id of the functions representing the services is incremented (just like the version number of the artifacts).

To distinguish the latest model from the remaining model versions, we redeploy the latest model version using a predictable external id that does not contain the version number. By doing so we relieve clients of the need to deal with version numbers, and they will call the latest model by default. For every new deployment we thus have two model deployments: one with the version number in the external id, and one without it. However, the predictable external id is persisted across new model versions, so when deploying a new version, the latest one (with the predictable external id) is simply overwritten.

We are thus concerned with two structures for the external id
- ```<model_name>-<service>-<model_env>-<version>``` for rolling back to previous versions, and
- ```<model_name>-<service>-<model_env>``` for the latest deployed model

For the latest model with a predictable external id, we tag the description of the model to specify that the model is in fact the latest version, and add the version number to the function metadata.

We can now list multiple models with the same model name and external id prefix, and choose a specific model version for making predictions and doing inference. An example is shown below.
```python
# List all prediction services (i.e. models) named "My Model" hosted in the
# test environment; a specific version corresponds to an element of the list
from akerbp.mlops.cdf.helpers import get_client
client = get_client(client_id=<client_id>, client_secret=<client_secret>)
my_models = client.functions.list(name="My Model", external_id_prefix="mymodel-prediction-test")
my_model_specific_version = my_models[0]
```
## Calling a deployed model prediction service hosted in CDF
This section describes how you can call deployed models and obtain predictions for doing inference.
There are two options for calling a function in CDF: using the MLOps framework directly, or using the Cognite SDK. Independent of how you call your model, you have to pass the data as a dictionary with a key "data" containing a dictionary with your data, where the keys of the inner dictionary specify the columns and the values are lists of samples for the corresponding columns.

First, load your data and transform it into a dictionary as assumed by the framework. Note that the data dictionary you pass to the function might vary based on your model interface; make sure to align with what you specified in your `model.py` interface.
```python
import pandas as pd

data = pd.read_csv("path_to_data")
input_data = data.drop(columns=[target_variables])  # target_variables: placeholder for your target column(s)
data_dict = {"data": input_data.to_dict(orient="list"), "to_file": True}
```
The "to_file" key of the input data dictionary specifies how the predictions can be extracted downstream. More details are provided below

Calling deployed model using MLOps:
1. Set up a cognite client with sufficient access rights
2. Extract the response directly by specifying the external id of the model and passing your data as a dictionary
    - Note that the external id is of the form
      - ```"<model_name>-<service>-<model_env>-<version>"```, and
      - ```"<model_name>-<service>-<model_env>"```

Use the latter external id if you want to call the latest model. The former external id can be used if you want to call a previous version of your model.

```python
from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function
set_up_cdf_client(context="deploy")  # access CDF data, files and functions with the deploy context
response = call_function(function_name="<model_name>-prediction-<model_env>", data=data_dict)
```

Calling deployed model using the Cognite SDK:
1. Set up a cognite client with sufficient access rights
2. Retrieve the model from CDF by specifying the external id of the model
3. Call the function
4. Extract the function call response from the function call

```python
from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id=<client_id>, client_secret=<client_secret>)
function = client.functions.retrieve(external_id="<model_name>-prediction-<model_env>")
function_call = function.call(data=data_dict)
response = function_call.get_response()
```
Depending on how you specified the input dictionary, the predictions are either available directly from the response or need to be extracted from Cognite Files.
If the input data dictionary contains a key "to_file" with value True, the predictions are uploaded to Cognite Files, and the 'prediction_file' field in the response will contain a reference to the file containing the predictions. If "to_file" is set to False, or if the input dictionary does not contain such a key-value pair, the predictions are directly available through the function call response.

If "to_file" = True, we can extract the predictions using the following code-snippet
```python
file_id = response["prediction_file"]
bytes_data = client.files.download_bytes(external_id=file_id)
predictions_df = pd.DataFrame.from_dict(json.loads(bytes_data))
```
Otherwise, the predictions are directly accessible from the response as follows.
```python
predictions = response["predictions"]
```

## Extracting metadata from deployed model in CDF
Once a model is deployed, a user can extract potentially valuable metadata as follows.
```python
my_function = client.functions.retrieve(external_id="my_model-prediction-test")
metadata = my_function.metadata
```
The metadata corresponds to whatever you specified in the `mlops_settings.yaml` file. For this example we get the following metadata:
```
{'cat_filler': 'UNKNOWN',
 'imputed': 'True',
 'input_types': '[int, float, string]',
 'num_filler': '-999.15',
 'output_curves': '[AC]',
 'output_unit': '[s/ft]',
 'petrel_exposure': 'False',
 'required_input': '[ACS, RDEP, DEN]',
 'training_wells': '[3/1-4]',
 'units': '[s/ft, 1, kg/m3]'}
```
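
Since CDF stores metadata values as strings, list-valued entries come back stringified. An illustrative helper (not part of the framework) to turn them back into Python lists:
```python
def parse_string_list(value: str) -> list:
    # e.g. '[ACS, RDEP, DEN]' -> ['ACS', 'RDEP', 'DEN']
    return [item.strip() for item in value.strip("[]").split(",")]

required_input = parse_string_list(metadata["required_input"])
```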


## Local Testing and Deployment
It's possible to test the functions locally, which can help you debug errors
quickly. This is recommended before a deployment.

Define the following environmental variables (e.g. in `.bashrc`):
```bash
export MODEL_ENV=dev
export COGNITE_OIDC_BASE_URL=https://api.cognitedata.com
export COGNITE_TENANT_ID=<tenant id>
export COGNITE_CLIENT_ID_WRITE=<write access client id>
export COGNITE_CLIENT_SECRET_WRITE=<write access client secret>
export COGNITE_CLIENT_ID_READ=<read access client id>
export COGNITE_CLIENT_SECRET_READ=<read access client secret>
```

From your repo's root folder:
- `python -m pytest model_code` (replace `model_code` by your model code folder
  name)
- `deploy_prediction_service`
- `deploy_training_service` (if there's a training service)

The first command runs your model tests. The last two run the model tests as
well as the service tests implemented in the framework, and simulate deployment.

If you want to run tests only, you need to set `TESTING_ONLY=True` before calling the deployment script.

## Automated Deployments from Bitbucket
Deployments to the test environment are triggered by commits (you need to push
them). Deployments to the production environment are enabled manually from the
Bitbucket pipeline dashboard. Branches that match `deploy/*` behave as master.
Branches that match `feature/*` run tests only (i.e. they do not deploy).

It is assumed that most projects won't include a training service. A branch that
matches 'mlops/*' deploys both prediction and training services. If a project
includes both services, the pipeline file could instead be edited so that master
deploys both services.

It is possible to schedule the training service in CDF, in which case it can
make sense to schedule the deployment pipeline of the model service (as often
as new models are trained).

NOTE: Previous versions of akerbp.mlops assumed that calling
`LOCAL_DEPLOYMENT=True deploy_prediction_service` would not deploy models and would only run tests.
The package is now refactored to only trigger tests when the environment variable
`TESTING_ONLY` is set to `True`.
Make sure to update the pipeline definition for branches with prefix `feature/` to call
`TESTING_ONLY=True deploy_prediction_service` instead.

## Bitbucket Setup
The following environments need to be defined in `repository settings >
deployments`:
- test deployments: `test-prediction` and `test-training`, each with `MODEL_ENV=test`
- production deployments: `production-prediction` and `production-training`,
  each with `MODEL_ENV=prod`

The following need to be defined in `repository settings > repository
variables`: `COGNITE_CLIENT_ID_WRITE`, `COGNITE_CLIENT_SECRET_WRITE`,
`COGNITE_CLIENT_ID_READ`, `COGNITE_CLIENT_SECRET_READ` (these should be the CDF client ids and secrets for read and write access, respectively).

The pipeline needs to be enabled.


# Developer/Admin Guide
## Package versioning
The versioning of the package follows [PEP440](https://peps.python.org/pep-0440/), using the `MAJOR.MINOR.PATCH` structure. We thus update the package version using the following convention:
1. Increment MAJOR when making incompatible API changes
2. Increment MINOR when adding backwards compatible functionality
3. Increment PATCH when making backwards compatible bug-fixes

The version is updated based on the latest commit to the repo, and we currently use the following rules:
- The MAJOR version is incremented if the commit message includes the word `major`
- The MINOR version is incremented if the commit message includes the word `minor`
- The PATCH number is incremented if neither `major` nor `minor` is found in the commit message
- If the commit message includes the phrase `pre-release`, the package version is extended with `a`, thus taking the form `MAJOR.MINOR.PATCHa`.

Note that the above keywords are **not** case sensitive. Moreover, `major` takes precedence over `minor`, so if both keywords are found in the commit message, the MAJOR version is incremented and the MINOR version is kept unchanged.

In the dev and test environments, we release the package using the pre-release tag, and the package version takes the form `MAJOR.MINOR.PATCHaPRERELEASE`.

The version number is automatically generated by [setuptools_scm](https://github.com/pypa/setuptools_scm/) and is based on git tagging and the incremental version numbering system described above.


## MLOps Files and Folders
These are the files and folders in the MLOps repo:
- `src` contains the MLOps framework package
- `mlops_settings.yaml` contains the user settings for the dummy model
- `model_code` is a model template included to show the model interface. It is
  not needed by the framework, but it is recommended to become familiar with it.
- `model_artifact` stores the artifacts for the model shown in `model_code`.
  This is to help users test the model and learn the framework.
- `bitbucket-pipelines.yml` describes the deployment pipeline in Bitbucket
- `build.sh` is the script to build and upload the package
- `setup.py` is used to build the package
- `LICENSE` is the package's license

## CDF Datasets
In order to control access to the artifacts:
1. Set up a CDF Dataset with `write_protected=True` and an `external_id`, which
   by default is expected to be `mlops` (a sketch using the SDK is shown after this list).
2. Create a group of owners (CDF Dashboard), i.e. those that should have write
   access
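
A sketch of step 1 using the Cognite SDK via the framework's helper (the client id/secret placeholders follow the examples above):
```python
from cognite.client.data_classes import DataSet

from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id="<client_id>", client_secret="<client_secret>")
client.data_sets.create(
    DataSet(external_id="mlops", name="mlops", write_protected=True)
)
```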

## Local Testing (only implemented for the prediction service)
To perform local testing before pushing to Bitbucket, you can run the following
command:
```bash
LOCAL_MLOPS_TESTING=True deploy_prediction_service
```
(assuming you have first run `pip install -e ".[dev]"` in the same environment)

## Build and Upload Package
Create an account on PyPI, then create a token and a `$HOME/.pypirc` file if you want to deploy from your local machine. Edit the
`pyproject.toml` file and note the following:
- Dependencies need to be registered
- Bash scripts will be installed in a `bin` folder in the `PATH`.

The pipeline is set up to build the library from Bitbucket, but it's possible to
build and upload the library from the development environment as well:
```bash
bash build.sh
```
To let the Bitbucket pipeline authenticate to PyPI, you need to set up a (PyPI) token. Copy its content and add it to the secured repository/deployment variable `TWINE_PASSWORD`. Set the variable `TWINE_USERNAME` to `__token__`.

For development, install the package in editable mode:
```bash
pip install -e .
```
In this mode, the installed package links to the source code, so that it can be
modified without the need to reinstall.

## Bitbucket Setup
In addition to the user setup, the following is needed to build the package:
- `test-pypi`: `MODEL_ENV=test`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD`
  (token generated from pypi)
- `prod-pypi`: `MODEL_ENV=prod`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD`
  (token generated from pypi, can be the same as above)

## Notes on the code

Service testing happens in an independent process (using the subprocess library)
to avoid setup problems:
 - When deploying multiple models, the service had to be reloaded before testing
   it; otherwise it would be the first model's service. Model initialization in
   the prediction service is designed to load artifacts only once per process.
 - If the model and the MLOps framework rely on different versions of the same
   library, the version would be changed during runtime, but the
   upgraded/downgraded version would not be available to the current process.

            

Raw data

            {
    "_id": null,
    "home_page": "",
    "name": "akerbp.mlops",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "\"Christian N. Lehre\" <christian.lehre@soprasteria.com>",
    "keywords": "",
    "author": "",
    "author_email": "\"Alfonso M. Canterla\" <alfonso.canterla@soprasteria.com>",
    "download_url": "https://files.pythonhosted.org/packages/53/38/853451a43a1cc3ee031b742e9b77ed80b5289b255ab3ec418034a627fe8a/akerbp.mlops-3.5.0.tar.gz",
    "platform": null,
    "description": "# MLOps Framework\nThis is a framework for MLOps that deploys models as functions in Cognite Data\nFusion\n\n# User Guide\n\n## Reference guide\nThis assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.\n1. Train model to generate model artifacts\n2. Manually upload artifacts to your test environment\n   - This includes model artifacts generated during training, mapping- and settings-file for the model, scaler object etc. Basically everything that is needed to preprocess the data and make predictions using the trained model.\n3. Deploy prediction service to test\n   - This is handled by the CI/CD pipeline in Bitbucket\n4. Manually promote model artifacts from test to production\n5. Manually trigger deployment of model to production\n   - Trigger in the CI/CD pipeline\n6. Call deployed model\n   - See section  \"Calling a deployed model prediction service hosted in CDF\" below\n## Getting Started:\nFollow these steps:\n- Install package: `pip install akerbp.mlops`\n- Set up pipeline file `bitbucket-pipelines.yml` and config file\n  `mlops_settings.yaml` by running this command from your repo's root folder:\n  ```bash\n  python -m akerbp.mlops.deployment.setup\n  ```\n- Fill in user settings and then validate them by running this (from repo root):\n  ```python\n  from akerbp.mlops.core.config import validate_user_settings\n  validate_user_settings()\n  ```\n  alternatively, run the setup again:\n  ```bash\n  python -m akerbp.mlops.deployment.setup\n  ```\n- Commit the pipeline and settings files to your repo\n- Become familiar with the model template (see folder `model_code`) and make\n  sure your model follows the same interface and file structure (see [Files and Folders Structure](#files-and-folders-structure))\n- Follow or request the Bitbucket setup (described later)\n\nA this point every git push in master branch will trigger a deployment in the\ntest environment. More information about the deployments pipelines is provided\nlater.\n\n## Updating MLOps\nFollow these steps:\n- Install a new version using pip, e.g. `pip install akerbp.mlops==x`, or upgrade your existing version to the latest release by running `pip install --upgrade akerbp.mlops`\n- Run this command from your repo's root folder:\n  ```bash\n  python -m akerbp.mlops.deployment.setup\n  ```\n  This will update the bitbucket pipeline with the newest release of akerbp.mlops and validate your settings. Once the settings are validated, commit changes and\n  you're ready to go!\n\n## General Guidelines\nUsers should consider the following general guidelines:\n- Model artifacts should **not** be committed to the repo. Folder `model_artifact`\n  does store model artifacts for the model defined in `model_code`, but it is\n  just to help users understand the framework ([see this section](#model-manager) on how to handle model artifacts)\n- Follow the recommended file and folder structure ([see this section](#files-and-folders-structure))\n- There can be several models in your repo: they need to be registered in the\n  settings, and then they need to have their own model and test files\n- Follow the import guidelines ([see this section](#import-guidelines))\n- Make sure the prediction service gets access to model artifacts ([see this section](#model-manager))\n\n## Configuration\nMLOps configuration is stored in `mlops_settings.yaml`. 
Example for a project\nwith a single model:\n```yaml\nmodel_name: model1\nhuman_friendly_model_name: 'My First Model'\nmodel_file: model_code/model1.py\nreq_file: model_code/requirements.model\nartifact_folder: model_artifact\nartifact_version: 1 # Optional\ntest_file: model_code/test_model1.py\nplatform: cdf\ndataset: mlops\npython_version: py39\nhelper_models:\n  - my_helper_model\ninfo:\n    prediction: &desc\n        description: 'Description prediction service, model1'\n        metadata:\n          required_input:\n            - ACS\n            - RDEP\n            - DEN\n          training_wells:\n            - 3/14\n            - 2/7-18\n          input_types:\n            - int\n            - float\n            - string\n          units:\n            - s/ft\n            - 1\n            - kg/m3\n          output_curves:\n            - AC\n          output_units:\n            - s/ft\n          petrel_exposure: False\n          imputed: True\n          num_filler: -999.15\n          cat_filler: UNKNOWN\n        owner: data@science.com\n    training:\n        << : *desc\n        description: 'Description training service, model1'\n        metadata:\n          required_input:\n            - ACS\n            - RDEP\n            - DEN\n          output_curves:\n            - AC\n          hyperparameters:\n            learning_rate: 1e-3\n            batch_size: 100\n            epochs: 10\n```\n| **Field**                   | **Description**                                                                                                                                                                                                                                                                                                                                                                                                                                     |\n| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| model_name                  | a suitable name for your model. No spaces or dashes are allowed                                                                                                                                                                                                                                                                                                                                                                                     |\n| human_friendly_model_name   | Name of function (in CDF)                                                                                                                                                                                                                                                                                                                                                                                                                           |\n| model_file                  | model file path relative to the repo's root folder. All required model code should be under the top folder in that path (`model_code` in the example above).                                                              
                                                                                                                                                                                                                          |\n| req_file                    | model requirement file. Do not use `.txt` extension!                                                                                                                                                                                                                                                                                                                                                                                                |\n| artifact_folder             | model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the model manager so that it can be used in Test or Prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set to `null` if there is no model artifact. |\n| artifact_version (optional) | artifact version number to use during deployment. Defaults to the latest version if not specified                                                                                                                                                                                                                                                                                                                                                   |\n| test_file                   | test file to use. Set to `null` for no testing before deployment (not recommended).                                                                                                                                                                                                                                                                                                                                                                 |\n| platform                    | deployment platforms, either `cdf` (Cognite) or `local` for local testing.                                                                                                                                                                                                                                                                                                                                                                                      |\n| python_version              | If `platform` is set to `cdf`, the `python_version` required by the model to be deployed needs to be specified. Available versions can be found [here](https://cognite-sdk-python.readthedocs-hosted.com/en/latest/functions.html#create-function)                                                                                                                                                                                                                                                                                                                                                                                      |\n| helper_models | Array of helper models using for feature engineering during preprocessing. During deployment, iterate through this list and check that helper model requirements are the same as the main model. 
## Files and Folders Structure
All the model code and files should be under a single folder, e.g. `model_code`.
**Required** files in this folder:
- `model.py`: implements the standard model interface
- `test_model.py`: tests to verify that the model code is correct and to verify
  correct deployment
- `requirements.model`: libraries needed (with specific **version numbers**),
  can't be called `requirements.txt`. Add the MLOps framework like this:
  ```bash
  # requirements.model
  (...) # your other reqs
  akerbp.mlops==MLOPS_VERSION
  ```
  During deployment, `MLOPS_VERSION` will automatically be replaced by the
  specific version **that you have installed locally**. Make sure you have the
  latest release on your local machine prior to model deployment.

For the prediction service, the model interface must provide the following functions and class (a minimal sketch is shown after the lists below):
- `initialization()`, with required arguments
  - path to artifact folder
  - secrets
    - these arguments can safely be set to `None`, and the framework will handle everything under the hood
    - only set the path to the artifact folder to `None` if not using any artifacts
- `predict()`, with required arguments
  - data
  - init_object (output from the `initialization()` function)
  - secrets
    - you can safely set the secrets argument to `None`, and the framework will handle the secrets under the hood
- a `ModelException` class inheriting from an `Exception` base class

For the training service, the model interface must provide the following function and class:
- `train()`, with required arguments
  - folder_path
    - path to store model artifacts to be consumed by the prediction service
- a `ModelException` class inheriting from an `Exception` base class
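A minimal sketch of such a `model.py`, assuming a pickled scikit-learn-style estimator is the only artifact; the data format, return values and file names are illustrative assumptions, not requirements of the framework:
```python
# model_code/model.py -- illustrative only; adapt to your model
import os
import pickle


class ModelException(Exception):
    pass


def initialization(artifact_folder, secrets):
    # Called once per process; both arguments may safely be None (see above)
    with open(os.path.join(artifact_folder, "model.pkl"), "rb") as f:
        return pickle.load(f)


def predict(data, init_object, secrets):
    try:
        prediction = init_object.predict(data["input"]).tolist()
    except Exception as error:
        raise ModelException(f"Prediction failed: {error}") from error
    return {"prediction": prediction}


def train(folder_path, secrets=None):
    model = ...  # fit your estimator here
    # Store artifacts where the prediction service will pick them up
    with open(os.path.join(folder_path, "model.pkl"), "wb") as f:
        pickle.dump(model, f)
    return {"performance": "RMSE=12.3"}  # artifact metadata, all strings
```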
The following structure is recommended for projects with multiple models:
- `model_code/model1/`
- `model_code/model2/`
- `model_code/common_code/`

This is because when deploying a model, e.g. `model1`, the top folder in the
path (`model_code` in the example above) is copied and deployed, i.e. the
`common_code` folder (assumed to be needed by `model1`) is included. Note that
the `model2` folder would also be deployed (this is assumed to be unnecessary
but harmless).

## Import Guidelines
The repo's root folder is the base folder when importing. For example, assume
you have these files in the folder with model code:
- `model_code/model.py`
- `model_code/helper.py`
- `model_code/data.csv`

If `model.py` needs to import `helper.py`, use: `import model_code.helper`. If
`model.py` needs to read `data.csv`, the right path is
`os.path.join('model_code', 'data.csv')`.

It's of course possible to import from the MLOps package, e.g. its logger:
```python
from akerbp.mlops.core import logger

logging = logger.get_logger("logger_name")
logging.debug("This is a debug log")
```

## Services
We consider two types of services: prediction and training.

Deployed services can be called with
```python
from akerbp.mlops.xx.helpers import call_function

output = call_function(external_id, data)
```
where `xx` is either `'cdf'` or `'gc'`, and `external_id` follows the
structure `model-service-model_env`:
- `model`: model name given by the user (settings file)
- `service`: either `training` or `prediction`
- `model_env`: either `dev`, `test` or `prod` (depending on the deployment
  environment)

The output has a status field (`ok` or `error`). If the status is `ok`, the
output also has a `prediction` and `prediction_file` field, or a `training`
field (depending on the type of service). The former is determined by the
`predict` method of the model, while the latter combines artifact metadata and
model metadata produced by the `train` function. Prediction services also have
a `model_id` field to keep track of which model was used to predict.

See below for more details on how to call prediction services hosted in CDF.
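As a minimal example of consuming a service response (the external id and payload are placeholders; the fields are those just described):
```python
from akerbp.mlops.cdf.helpers import call_function

output = call_function("model1-prediction-test", {"data": {"AC": [70.5, 80.2]}})
if output["status"] == "ok":
    print(output["prediction"])
else:
    print("Service call failed:", output)
```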
## Deployment Platform
Model services (described below) can be deployed to CDF (Cognite Data Fusion)
or to Google Cloud Run. The deployment platform is specified in the settings
file.

CDF Functions include metadata when they are called. This information can be
used to redeploy a function (specifically, the `file_id` field). Example:

```python
import akerbp.mlops.cdf.helpers as cdf

human_readable_name = "My model"
external_id = "my_model-prediction-test"
file_id = ...  # taken from the metadata of a function call (see above)

cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
    human_readable_name,
    external_id,
    file_id,
    'Description',
    'your@email.com'
)
```
Note that the external id of a function needs to be unique, as this is what
distinguishes functions across services and hosting environments.

It's possible to query available functions (can be filtered by environment
and/or tags). Example:
```python
import akerbp.mlops.cdf.helpers as cdf

cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(model_env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])
```
Functions can be deleted. Example:
```python
import akerbp.mlops.cdf.helpers as cdf

cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")
```
Functions can be called in parallel. Example:
```python
from akerbp.mlops.cdf.helpers import call_function_parallel

function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)
```

#TODO - Document common use cases for GCR

## Model Manager
Model Manager is the module dedicated to managing the model artifacts used by
prediction services (and generated by training services). This module uses CDF
Files as its backend.

Model artifacts are versioned and stored together with user-defined metadata.
Uploading a new model increases the version count by 1 for that model and
environment. When deploying a prediction service, the latest model version is
chosen. It would be possible to extend the framework to allow deploying
specific versions or filtering by metadata.

Model artifacts are segregated by environment (e.g. only production artifacts
can be deployed to production). Model artifacts have to be uploaded manually to
the test (or dev) environment before deployment. Code example:
```python
import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets) # or define it directly
mm.setup()
folder_info = mm.upload_new_model_version(
  model_name,
  model_env,
  folder_path,
  metadata
)
```
If there are multiple models, you need to do this one at a time. Note that
`model_name` corresponds to one of the elements in `model_names` defined in
`mlops_settings.yaml`, `model_env` is the target environment (where the model
should be available), `folder_path` is the local model artifact folder and
`metadata` is a dictionary with artifact metadata, e.g. performance, git
commit, etc.
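For illustration, an artifact metadata dictionary along the lines suggested above (all values are hypothetical and subject to the CDF string limits listed earlier):
```python
metadata = {
    "performance": "RMSE=12.3",
    "git_commit": "abc1234",
    "train_date": "2024-01-03",
}
```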
Model artifacts need to be promoted to the production environment (i.e. after
they have been deployed successfully to the test environment) so that a
prediction service can be deployed in production.
```python
# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm

mm.setup()
mm.promote_model('model', 'version')
```

### Versioning
Each model artifact upload/promotion increments a version number (environment
dependent) available in Model Manager. However, this doesn't modify the model
artifacts used in existing prediction services (i.e. nothing changes in CDF
Functions). To reflect newly uploaded/promoted model artifacts in the existing
services, one needs to deploy the services again. Note that we don't have to
specify the artifact version explicitly if we want to deploy using the latest
artifacts, as this is done by default.

Recommended process to update a model artifact and prediction service (steps 2
and 6 are sketched in code after the list):
1. New model features are implemented in a feature branch
2. A new artifact is generated and uploaded to the test environment
3. The feature branch is merged with master
4. Test deployment is triggered automatically: the prediction service is
   deployed to the test environment with the latest artifact version (in test)
5. The prediction service in test is verified
6. The artifact version is promoted manually from the command line whenever
   suitable
7. Production deployment is triggered manually from Bitbucket: the prediction
   service is deployed to production with the latest artifact version (in prod)
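Steps 2 and 6 map onto the Model Manager calls shown above; a sketch with hypothetical model name, folder and version:
```python
import akerbp.mlops.model_manager as mm

mm.setup()
# Step 2: upload a new artifact version to the test environment
mm.upload_new_model_version("my_model", "test", "model_artifact", {"git_commit": "abc1234"})
# ... steps 3-5: merge, automatic test deployment, verification ...
# Step 6: promote the verified artifact version to production
mm.promote_model("my_model", "3")
```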
It's possible to get an overview of the model artifacts managed by Model
Manager. Some examples (see the `get_model_version_overview` documentation for
other possible queries):
```python
import akerbp.mlops.model_manager as mm

mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview()
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')
```
If the overview shows model artifacts that are no longer needed, it is possible
to remove them. For example, if artifact "my_model/dev/5" is not needed:
```python
model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)
```
By default, Model Manager shows information on the artifact to delete and asks
for user confirmation before proceeding. It's possible (but not recommended) to
disable this check. There's no identity check, so it's possible to delete any
model artifact (including other data scientists'). Be careful!

It's possible to download a model artifact (e.g. to verify its content). For
example:
```python
mm.download_model_version('model_name', 'test', 'artifact_folder', version=5)
```
If no version is specified, the latest one is downloaded by default.

By default, Model Manager assumes artifacts are stored in the `mlops` dataset.
If your project uses a different one, you need to specify it during setup (see
the `setup` function).

Further information:
- Model Manager requires specific environment variables (see the next
  section) or suitable secrets to be passed to the `setup` function.
- In projects with a training service, you can rely on it to upload a first
  version of the model. The first prediction service deployment will fail, but
  you can deploy again after the training service has produced a model.
- When you deploy from the development environment (covered later in this
  document), the model artifacts in the settings file can point to existing
  local folders. These will then be used for the deployment. The version is
  then fixed to `model_name/dev/1`. Note that these artifacts are not uploaded
  to CDF Files.
- Prediction services are deployed with model artifacts (i.e. the artifact is
  copied to the project file used to create the CDF Function) so that they are
  available at prediction time. Downloading artifacts at run time would add
  waiting time, and files written at run time consume RAM.

## Model versioning
To allow for model versioning and rolling back to previous model deployments,
the external id of the functions (in CDF) includes a version number that
reflects the latest artifact version number when deploying the function (see
above). Every time we upload/promote new model artifacts and deploy our
services, the version number in the external id of the functions representing
the services is incremented (just like the version number of the artifacts).

To distinguish the latest model from the remaining model versions, we redeploy
the latest model version using a predictable external id that does not contain
the version number. By doing so we relieve clients of the need to deal with
version numbers, and they will call the latest model by default. For every new
deployment we thus have two model deployments: one with the version number, and
one without the version number in the external id. However, the predictable
external id is persisted across new model versions, so when deploying a new
version the latest one, with the predictable external id, is simply
overwritten.

We are thus concerned with two structures for the external id:
- `<model_name>-<service>-<model_env>-<version>` for rolling back to previous versions, and
- `<model_name>-<service>-<model_env>` for the latest deployed model

For the latest model with a predictable external id, we tag the description of
the model to specify that the model is in fact the latest version, and add the
version number to the function metadata.

We can now list multiple models with the same model name and external id
prefix, and choose to make predictions and do inference with a specific model
version. An example is shown below.
```python
# List all prediction services (i.e. models) with name "My Model" hosted in
# the test environment, and pick the model corresponding to the first element
# of the list
from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id=<client_id>, client_secret=<client_secret>)
my_models = client.functions.list(name="My Model", external_id_prefix="mymodel-prediction-test")
my_model_specific_version = my_models[0]
```
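Given the two external id structures, a client can either pin a specific version or always get the latest; a sketch with hypothetical ids and payload:
```python
from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function

set_up_cdf_client(context="deploy")
data = {"data": {"AC": [70.5, 80.2]}}
pinned_response = call_function("mymodel-prediction-test-4", data)  # version 4
latest_response = call_function("mymodel-prediction-test", data)    # latest
```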
## Calling a deployed model prediction service hosted in CDF
This section describes how you can call deployed models and obtain predictions
for doing inference. We have two options for calling a function in CDF: using
the MLOps framework directly, or using the Cognite SDK. Independent of how you
call your model, you have to pass the data as a dictionary with a key "data"
containing a dictionary with your data, where the keys of the inner dictionary
specify the columns and the values are lists of samples for the corresponding
columns.

First, load your data and transform it into a dictionary as assumed by the
framework. Note that the data dictionary you pass to the function might vary
based on your model interface. Make sure to align with what you specified in
your `model.py` interface.
```python
import pandas as pd

data = pd.read_csv("path_to_data")
input_data = data.drop(columns=[target_variables])
data_dict = {"data": input_data.to_dict(orient="list"), "to_file": True}
```
The "to_file" key of the input data dictionary specifies how the predictions
can be extracted downstream. More details are provided below.

Calling a deployed model using MLOps:
1. Set up a Cognite client with sufficient access rights
2. Extract the response directly by specifying the external id of the model and
   passing your data as a dictionary
   - Note that the external id is of the form
     - `"<model_name>-<service>-<model_env>-<version>"`, and
     - `"<model_name>-<service>-<model_env>"`

Use the latter external id if you want to call the latest model. The former
external id can be used if you want to call a previous version of your model.

```python
from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function

set_up_cdf_client(context="deploy") # access CDF data, files and functions with deploy context
response = call_function(function_name="<model_name>-prediction-<model_env>", data=data_dict)
```

Calling a deployed model using the Cognite SDK:
1. Set up a Cognite client with sufficient access rights
2. Retrieve the model from CDF by specifying the external id of the model
3. Call the function
4. Extract the function call response from the function call

```python
from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id=<client_id>, client_secret=<client_secret>)
function = client.functions.retrieve(external_id="<model_name>-prediction-<model_env>")
function_call = function.call(data=data_dict)
response = function_call.get_response()
```
Depending on how you specified the input dictionary, the predictions are
available directly from the response or need to be extracted from CDF Files.
If the input data dictionary contains a key "to_file" with value `True`, the
predictions are uploaded to CDF Files, and the `prediction_file` field in the
response will contain a reference to the file containing the predictions. If
"to_file" is set to `False`, or if the input dictionary does not contain such a
key-value pair, the predictions are directly available through the function
call response.

If "to_file" is `True`, we can extract the predictions using the following code
snippet:
```python
import json

file_id = response["prediction_file"]
bytes_data = client.files.download_bytes(external_id=file_id)
predictions_df = pd.DataFrame.from_dict(json.loads(bytes_data))
```
Otherwise, the predictions are directly accessible from the response as
follows:
```python
predictions = response["prediction"]
```

## Extracting metadata from a deployed model in CDF
Once a model is deployed, a user can extract potentially valuable metadata as
follows:
```python
my_function = client.functions.retrieve(external_id="my_model-prediction-test")
metadata = my_function.metadata
```
The metadata corresponds to whatever you specified in the `mlops_settings.yaml`
file. For this example we get the following metadata:
```
{'cat_filler': 'UNKNOWN',
 'imputed': 'True',
 'input_types': '[int, float, string]',
 'num_filler': '-999.15',
 'output_curves': '[AC]',
 'output_unit': '[s/ft]',
 'petrel_exposure': 'False',
 'required_input': '[ACS, RDEP, DEN]',
 'training_wells': '[3/1-4]',
 'units': '[s/ft, 1, kg/m3]'}
```
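Since CDF stores metadata values as strings, list-like values such as `'[ACS, RDEP, DEN]'` come back as plain strings. A small hypothetical helper (not part of the framework) to recover them:
```python
def parse_string_list(value):
    # '[ACS, RDEP, DEN]' -> ['ACS', 'RDEP', 'DEN']
    return [item.strip() for item in value.strip("[]").split(",") if item.strip()]

parse_string_list(metadata["required_input"])  # ['ACS', 'RDEP', 'DEN']
```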
## Local Testing and Deployment
It's possible to test the functions locally, which can help you debug errors
quickly. This is recommended before a deployment.

Define the following environment variables (e.g. in `.bashrc`):
```bash
export MODEL_ENV=dev
export COGNITE_OIDC_BASE_URL=https://api.cognitedata.com
export COGNITE_TENANT_ID=<tenant id>
export COGNITE_CLIENT_ID_WRITE=<write access client id>
export COGNITE_CLIENT_SECRET_WRITE=<write access client secret>
export COGNITE_CLIENT_ID_READ=<read access client id>
export COGNITE_CLIENT_SECRET_READ=<read access client secret>
```

From your repo's root folder:
- `python -m pytest model_code` (replace `model_code` with your model code
  folder name)
- `deploy_prediction_service`
- `deploy_training_service` (if there's a training service)

The first one runs your model tests. The last two run the model tests but also
the service tests implemented in the framework, and simulate deployment.

If you want to run tests only, set `TESTING_ONLY=True` before calling the
deployment script.

## Automated Deployments from Bitbucket
Deployments to the test environment are triggered by commits (you need to push
them). Deployments to the production environment are enabled manually from the
Bitbucket pipeline dashboard. Branches that match `deploy/*` behave as master.
Branches that match `feature/*` run tests only (i.e. they do not deploy).

It is assumed that most projects won't include a training service. A branch
that matches `mlops/*` deploys both prediction and training services. If a
project includes both services, the pipeline file could instead be edited so
that master deploys both services.

It is possible to schedule the training service in CDF, and then it can make
sense to schedule the deployment pipeline of the model service (as often as new
models are trained).

NOTE: Previous versions of akerbp.mlops assumed that calling
`LOCAL_DEPLOYMENT=True deploy_prediction_service` would run tests without
deploying models. The package is now refactored to only trigger tests when the
environment variable `TESTING_ONLY` is set to `True`. Make sure to update the
pipeline definition for branches with prefix `feature/` to call
`TESTING_ONLY=True deploy_prediction_service` instead.

## Bitbucket Setup
The following environments need to be defined in `repository settings >
deployments`:
- test deployments: `test-prediction` and `test-training`, each with
  `MODEL_ENV=test`
- production deployments: `production-prediction` and `production-training`,
  each with `MODEL_ENV=prod`

The following need to be defined in `repository settings > repository
variables`: `COGNITE_CLIENT_ID_WRITE`, `COGNITE_CLIENT_SECRET_WRITE`,
`COGNITE_CLIENT_ID_READ`, `COGNITE_CLIENT_SECRET_READ` (these should be the CDF
client ids and secrets for read and write access, respectively).

The pipeline needs to be enabled.


# Developer/Admin Guide
## Package versioning
The versioning of the package follows
[PEP440](https://peps.python.org/pep-0440/), using the `MAJOR.MINOR.PATCH`
structure. We thus update the package version using the following convention:
1. Increment MAJOR when making incompatible API changes
2. Increment MINOR when adding backwards compatible functionality
3. Increment PATCH when making backwards compatible bug fixes

The version is updated based on the latest commit to the repo, using the
following rules:
- The MAJOR version is incremented if the commit message includes the word `major`
- The MINOR version is incremented if the commit message includes the word `minor`
- The PATCH number is incremented if neither `major` nor `minor` is found in the commit message
- If the commit message includes the phrase `pre-release`, the package version is extended with `a`, thus taking the form `MAJOR.MINOR.PATCHa`

Note that the above keywords are **not** case sensitive. Moreover, `major`
takes precedence over `minor`, so if both keywords are found in the commit
message, the MAJOR version is incremented and the MINOR version is kept
unchanged.
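As a toy illustration of these rules (this is not the actual release logic, which is handled by setuptools_scm as described below):
```python
def next_version(version, commit_message):
    """Illustrate the keyword-based bump rules; resets of lower
    components are not specified in the text and are omitted here."""
    major, minor, patch = (int(x) for x in version.split("."))
    msg = commit_message.lower()  # keywords are case-insensitive
    if "major" in msg:  # 'major' takes precedence over 'minor'
        major += 1
    elif "minor" in msg:
        minor += 1
    else:
        patch += 1
    new = f"{major}.{minor}.{patch}"
    if "pre-release" in msg:
        new += "a"  # pre-release versions get the 'a' suffix
    return new

assert next_version("3.5.0", "Minor: support helper models") == "3.6.0"
assert next_version("3.5.0", "Fix typo") == "3.5.1"
```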
In the dev and test environments, we release the package using the pre-release
tag, and the package takes the version number `MAJOR.MINOR.PATCHaPRERELEASE`.

The version number is automatically generated by
[setuptools_scm](https://github.com/pypa/setuptools_scm/) and is based on git
tagging and the incremental version numbering system mentioned above.


## MLOps Files and Folders
These are the files and folders in the MLOps repo:
- `src` contains the MLOps framework package
- `mlops_settings.yaml` contains the user settings for the dummy model
- `model_code` is a model template included to show the model interface. It is
  not needed by the framework, but it is recommended to become familiar with it.
- `model_artifact` stores the artifacts for the model shown in `model_code`.
  This is to help to test the model and learn the framework.
- `bitbucket-pipelines.yml` describes the deployment pipeline in Bitbucket
- `build.sh` is the script to build and upload the package
- `setup.py` is used to build the package
- `LICENSE` is the package's license

## CDF Datasets
In order to control access to the artifacts (step 1 is sketched below):
1. Set up a CDF dataset with `write_protected=True` and an `external_id`, which
   by default is expected to be `mlops`.
2. Create a group of owners (CDF Dashboard), i.e. those that should have write
   access
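A sketch of step 1 using the Cognite SDK (client configuration is omitted here and depends on your setup; the external id is the default expected by Model Manager):
```python
from cognite.client import CogniteClient
from cognite.client.data_classes import DataSet

client = CogniteClient()  # configure credentials as appropriate
client.data_sets.create(
    DataSet(external_id="mlops", name="mlops", write_protected=True)
)
```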
## Local Testing (only implemented for the prediction service)
To perform local testing before pushing to Bitbucket, you can run:
```bash
LOCAL_MLOPS_TESTING=True deploy_prediction_service
```
(assuming you have first run `pip install -e ".[dev]"` in the same environment)

## Build and Upload Package
Create an account in PyPI, then create a token and a `$HOME/.pypirc` file if
you want to deploy from local. Edit the `pyproject.toml` file and note the
following:
- Dependencies need to be registered
- Bash scripts will be installed in a `bin` folder in the `PATH`

The pipeline is set up to build the library from Bitbucket, but it's possible
to build and upload the library from the development environment as well:
```bash
bash build.sh
```
In order to authenticate to Bitbucket you need to set up a token. Copy its
content and add it to the secured repository/deployment variable
`TWINE_PASSWORD`. Set the variable `TWINE_USERNAME` to `__token__`.

For local development, install the package in editable mode:
```
pip install -e .
```
In this mode, the installed package links to the source code, so that it can be
modified without the need to reinstall.

## Bitbucket Setup
In addition to the user setup, the following is needed to build the package:
- `test-pypi`: `MODEL_ENV=test`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD`
  (token generated from PyPI)
- `prod-pypi`: `MODEL_ENV=prod`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD`
  (token generated from PyPI, can be the same as above)

## Notes on the code

Service testing happens in an independent process (subprocess library) to avoid
setup problems:
- When deploying multiple models, the service had to be reloaded before testing
  it, otherwise it would be the first model's service. Model initialization in
  the prediction service is designed to load artifacts only once per process.
- If the model and the MLOps framework rely on different versions of the same
  library, the version would be changed during runtime, but the
  upgraded/downgraded version would not be available for the current process.
    "bugtrack_url": null,
    "license": "Apache License Version 2.0, January 2004 http://www.apache.org/licenses/  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION  1. Definitions.  \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.  \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.  \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.  \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.  \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.  \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.  \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).  \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.  \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"  \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.  2. Grant of Copyright License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.  3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:  (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and  (b) You must cause any modified files to carry prominent notices stating that You changed the files; and  (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and  (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.  You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.  5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.  7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.  END OF TERMS AND CONDITIONS  APPENDIX: How to apply the Apache License to your work.  To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!)  The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives.  Copyright 2021 Aker BP ASA  Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at  http://www.apache.org/licenses/LICENSE-2.0  Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ",
    "summary": "MLOps framework",
    "version": "3.5.0",
    "project_urls": {
        "Homepage": "https://bitbucket.org/akerbp/akerbp.mlops/"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "59ae447f8e999bd3cf4f8fca73553684f833710ebb1dd99461b5649a3fc62704",
                "md5": "3abfb1e19b452077f94d1521a8404e66",
                "sha256": "e575ed100bb5460fa89a5c5f289a9e24851cb6e93ff22d0f3987117421e2c684"
            },
            "downloads": -1,
            "filename": "akerbp.mlops-3.5.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "3abfb1e19b452077f94d1521a8404e66",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 69603,
            "upload_time": "2024-01-03T07:53:16",
            "upload_time_iso_8601": "2024-01-03T07:53:16.257846Z",
            "url": "https://files.pythonhosted.org/packages/59/ae/447f8e999bd3cf4f8fca73553684f833710ebb1dd99461b5649a3fc62704/akerbp.mlops-3.5.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5338853451a43a1cc3ee031b742e9b77ed80b5289b255ab3ec418034a627fe8a",
                "md5": "acac26b45fcc8b99ccb7115d7ef52c3d",
                "sha256": "7c4537e3331d4eedbe747eb6f7aba81dc62a8cd67b7fd016b2d1afe20b5481e3"
            },
            "downloads": -1,
            "filename": "akerbp.mlops-3.5.0.tar.gz",
            "has_sig": false,
            "md5_digest": "acac26b45fcc8b99ccb7115d7ef52c3d",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 95941,
            "upload_time": "2024-01-03T07:53:22",
            "upload_time_iso_8601": "2024-01-03T07:53:22.991798Z",
            "url": "https://files.pythonhosted.org/packages/53/38/853451a43a1cc3ee031b742e9b77ed80b5289b255ab3ec418034a627fe8a/akerbp.mlops-3.5.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-01-03 07:53:22",
    "github": false,
    "gitlab": false,
    "bitbucket": true,
    "codeberg": false,
    "bitbucket_user": "akerbp",
    "bitbucket_project": "akerbp.mlops",
    "lcname": "akerbp.mlops"
}
        