mlem

Name: mlem
Version: 0.4.3
Home page: https://mlem.ai
Summary: Version and deploy your models following GitOps principles
Upload time: 2023-01-27 13:25:54
Maintainer: Iterative
Author: Mikhail Sveshnikov
Requires Python: >=3.6
License: Apache License 2.0
Keywords: data-science, data-version-control, machine-learning, git, mlops, developer-tools, reproducibility, collaboration, ai

![image](https://user-images.githubusercontent.com/6797716/165590476-994d4d93-8e98-4afb-b5f8-6f42b9d56efc.png)


[![Check, test and release](https://github.com/iterative/mlem/actions/workflows/check-test-release.yml/badge.svg)](https://github.com/iterative/mlem/actions/workflows/check-test-release.yml)
[![codecov](https://codecov.io/gh/iterative/mlem/branch/main/graph/badge.svg?token=WHU4OAB6O2)](https://codecov.io/gh/iterative/mlem)
[![PyPi](https://img.shields.io/pypi/v/mlem.svg?label=pip&logo=PyPI&logoColor=white)](https://pypi.org/project/mlem)
[![License: Apache 2.0](https://img.shields.io/github/license/iterative/mlem)](https://github.com/iterative/mlem/blob/master/LICENSE)
<!-- [![Maintainability](https://codeclimate.com/github/iterative/mlem/badges/gpa.svg)](https://codeclimate.com/github/iterative/mlem) -->

MLEM helps you package and deploy machine learning models.
It saves ML models in a standard format that can be used in a variety of production scenarios such as real-time REST serving or batch processing.

- **Run your ML models anywhere:**
  Wrap models as a Python package or Docker image, or deploy them to Heroku, SageMaker, or Kubernetes (more platforms coming soon).
  Switch between platforms transparently with a single command.

- **Turn model metadata into YAML automatically:**
  MLEM records Python requirements and input data schemas in a human-readable, deployment-ready format.
  Use the same metafile format with any ML framework.

- **Stick to your training workflow:**
  MLEM doesn't ask you to rewrite model training code.
  Add just two lines around your Python code: one to import the library and one to save the model.

- **Developer-first experience:**
  Use the CLI when you feel like DevOps, or the API if you feel like a developer.

## Why is MLEM special?

The main reason to use MLEM instead of other tools is to adopt a **GitOps approach** to manage model lifecycles.

- **Git as a single source of truth:**
  MLEM writes model metadata to a plain text file that can be versioned in Git along with code.
  This enables GitFlow and other software engineering best practices.

- **Unify model and software deployment:**
  Release models using the same processes used for software updates (branching, pull requests, etc.).

- **Reuse existing Git infrastructure:**
  Use familiar hosting like GitHub or GitLab for model management instead of running separate services.

- **UNIX philosophy:**
  MLEM is a modular tool that solves one problem very well.
  It integrates well into a larger toolset from Iterative.ai, such as [DVC](https://dvc.org/) and [CML](https://cml.dev/).

## Usage

This is a quick walkthrough showcasing MLEM's deployment functionality.

Please read the [Get Started guide](https://mlem.ai/doc/get-started) for the full version.

### Installation

MLEM requires Python 3.6 or later.

```console
$ python -m pip install mlem
```

> To install the pre-release version:
>
> ```console
> $ python -m pip install git+https://github.com/iterative/mlem
> ```

### Saving the model

```python
# train.py
from mlem.api import save
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

def main():
    data, y = load_iris(return_X_y=True, as_frame=True)
    rf = RandomForestClassifier(
        n_jobs=2,
        random_state=42,
    )
    rf.fit(data, y)

    save(
        rf,
        "models/rf",
        sample_data=data,
    )

if __name__ == "__main__":
    main()
```
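A model saved this way can be loaded back with the same API. A minimal sketch, guarded so it only attempts the load where mlem is installed and `models/rf` exists (e.g. after running `train.py` above):

```python
import importlib.util
import pathlib

# Only attempt the load where mlem is installed and train.py has been run.
READY = (
    importlib.util.find_spec("mlem") is not None
    and pathlib.Path("models/rf.mlem").exists()
)

if READY:
    from mlem.api import load

    rf = load("models/rf")  # returns the original RandomForestClassifier
    print(rf.predict([[5.1, 3.5, 1.4, 0.2]]))
```

Because `save` recorded the input schema, the loaded object is the plain sklearn estimator and can be used for batch scoring exactly as before.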

### Codification

Check out what we have:

```shell
$ ls models/
rf
rf.mlem
$ cat rf.mlem
```
<details>
  <summary> Click to show `cat` output</summary>

```yaml
artifacts:
  data:
    hash: ea4f1bf769414fdacc2075ef9de73be5
    size: 163651
    uri: rf
model_type:
  methods:
    predict:
      args:
      - name: data
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict
      returns:
        dtype: int64
        shape:
        - null
        type: ndarray
    predict_proba:
      args:
      - name: data
        type_:
          columns:
          - sepal length (cm)
          - sepal width (cm)
          - petal length (cm)
          - petal width (cm)
          dtypes:
          - float64
          - float64
          - float64
          - float64
          index_cols: []
          type: dataframe
      name: predict_proba
      returns:
        dtype: float64
        shape:
        - null
        - 3
        type: ndarray
  type: sklearn
object_type: model
requirements:
- module: sklearn
  version: 1.0.2
- module: pandas
  version: 1.4.1
- module: numpy
  version: 1.22.3
```
</details>
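Because the metafile is plain YAML, it can also be inspected programmatically. A small sketch (using PyYAML, which mlem itself depends on) that pulls the pinned requirements out of a trimmed fragment of the metafile above:

```python
import yaml  # PyYAML

# A trimmed rf.mlem fragment: the requirements section shown above.
metafile = """
object_type: model
requirements:
- module: sklearn
  version: 1.0.2
- module: pandas
  version: 1.4.1
- module: numpy
  version: 1.22.3
"""

meta = yaml.safe_load(metafile)
pins = {req["module"]: req["version"] for req in meta["requirements"]}
print(pins)  # → {'sklearn': '1.0.2', 'pandas': '1.4.1', 'numpy': '1.22.3'}
```

The same approach works on the full metafile, e.g. to diff requirement pins between two model versions in CI.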

### Deploying the model

To follow this part of the walkthrough, you'll need to sign up at https://heroku.com,
create an API key, and set the `HEROKU_API_KEY` environment variable (or run `heroku login`
on the command line). You'll also need to run `heroku container:login` to log in to the
Heroku container registry.

Now we can [deploy the model with `mlem deploy`](https://mlem.ai/doc/get-started/deploying)
(use a different `app_name`, since each app gets a unique URL on https://herokuapp.com):

```shell
$ mlem deployment run heroku app.mlem \
  --model models/rf \
  --app_name example-mlem-get-started-app
⏳️ Loading model from models/rf.mlem
⏳️ Loading deployment from app.mlem
🛠 Creating docker image for heroku
  🛠 Building MLEM wheel file...
  💼 Adding model files...
  🛠 Generating dockerfile...
  💼 Adding sources...
  💼 Generating requirements file...
  🛠 Building docker image registry.heroku.com/example-mlem-get-started-app/web...
  ✅  Built docker image registry.heroku.com/example-mlem-get-started-app/web
  🔼 Pushing image registry.heroku.com/example-mlem-get-started-app/web to registry.heroku.com
  ✅  Pushed image registry.heroku.com/example-mlem-get-started-app/web to registry.heroku.com
🛠 Releasing app example-mlem-get-started-app formation
✅  Service example-mlem-get-started-app is up. You can check it out at https://example-mlem-get-started-app.herokuapp.com/
```
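The deployed service exposes HTTP endpoints named after the model's methods (here `predict` and `predict_proba`), with interactive API docs served by the app. A hedged sketch of building a request body for the iris model, assuming a `{"data": {"values": [...]}}` payload shape — verify the actual schema against your deployment's docs page, as it can differ between MLEM versions:

```python
import json

# One iris sample, keyed by the column names recorded in rf.mlem.
sample = {
    "sepal length (cm)": 5.1,
    "sepal width (cm)": 3.5,
    "petal length (cm)": 1.4,
    "petal width (cm)": 0.2,
}

# Assumed payload shape; check the deployed service's docs for the real schema.
payload = json.dumps({"data": {"values": [sample]}})
print(payload)
```

With the service above running, a body like this could be POSTed to `https://example-mlem-get-started-app.herokuapp.com/predict`.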

## Contributing

Contributions are welcome! Please see our [Contributing Guide](https://mlem.ai/doc/contributing/core)
for more details.

Check out the [MLEM weekly board](https://github.com/orgs/iterative/projects/322/views/4)
to learn about what we are working on and the new functionality coming soon.

Thanks to all our contributors!

## Copyright

This project is distributed under the Apache license version 2.0 (see the LICENSE file in the project root).

By submitting a pull request to this project, you agree to license your contribution under the Apache license version 2.0 to this project.

            
