ml-deploy-lite

- Name: ml-deploy-lite
- Version: 0.7.1
- Summary: A library to simplify your ML model deployments
- Homepage: https://github.com/Blacksujit/ML-Deploy-Lite.git
- Author: Sujit Nirmal (@blacksujit)
- Maintainer: None
- Requires Python: >=3.6
- License: None
- Upload time: 2024-11-21 17:59:52
- Keywords: python, machine learning, deep learning, mlops, deployements, model-deployement, ml-models, state-of-art-deployements
- Requirements: Flask, gunicorn, docker, pyyaml, joblib, scikit-learn, prometheus_flask_exporter
- CI / coverage: no Travis-CI, no Coveralls
# ML Deploy Lite

The ML Deploy Lite Library is a powerful and user-friendly solution designed to simplify the deployment of machine learning models in production environments. This library provides a comprehensive set of tools and utilities to facilitate the management, serving, and monitoring of machine learning models, making it easier for developers and data scientists to integrate their models into applications.

## Installation:

To install the library, run the following command:

```bash
pip install ml_deploy_lite
```

## Key Features:
- Model Serving: The library offers robust APIs for serving machine learning models, allowing users to expose their models as RESTful services. This enables easy integration with web applications and other services.

- Version Management: ML Deploy Lite supports versioning of machine learning models, enabling users to manage multiple versions of their models seamlessly. This feature is crucial for maintaining and updating models in production without downtime.

- Monitoring and Logging: The library includes built-in monitoring tools to track model performance and usage metrics. Users can log requests, responses, and performance statistics to ensure their models are functioning optimally.

- Containerization Support: ML Deploy Lite provides utilities for containerizing machine learning models using Docker, facilitating easy deployment across various environments, including cloud platforms and on-premises servers.

- Configuration Management: Users can easily configure deployment settings, such as model paths, API endpoints, and logging preferences, through a simple configuration file, allowing for flexible and customizable deployments.

- Integration with Popular Frameworks: The library is designed to work seamlessly with popular machine learning frameworks like TensorFlow, PyTorch, and Scikit-learn, making it a versatile choice for developers.
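The configuration file mentioned above might look something like the sketch below. The key names are illustrative assumptions, not the library's documented schema; YAML is a plausible format since pyyaml is among the package's requirements:

```yaml
# Hypothetical deployment config -- key names are assumptions for illustration.
model_path: model/sample_model.pkl
api:
  host: 0.0.0.0
  port: 5000
  endpoint: /predict
logging:
  level: INFO
  file: logs/ml_deploy_lite.log
```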

## Getting Started:

1. Prepare Your Model:
   Ensure you have a trained machine learning model saved in a format compatible with joblib. For example:
   ```python
   import joblib
   from sklearn.datasets import load_iris
   from sklearn.ensemble import RandomForestClassifier

   iris = load_iris()
   X, y = iris.data, iris.target
   model = RandomForestClassifier()
   model.fit(X, y)
   joblib.dump(model, 'model/sample_model.pkl')
   ```

2. Deploy Your Model:
   Use the MLDeployLite class to deploy your model:
   ```python
   from ml_deploy_lite import MLDeployLite

   deployer = MLDeployLite('model/sample_model.pkl')
   deployer.run()
   ```

3. Making Predictions:
   Send a POST request to the /predict endpoint:
   ```bash
   curl -X POST http://localhost:5000/predict -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'
   ```
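   The same request can be issued from Python with only the standard library. This is a generic sketch, not a client shipped by the library; it assumes the server from step 2 is running on localhost:5000, so the function is defined here but not called:

   ```python
   import json
   from urllib import request

   def predict(features, url="http://localhost:5000/predict"):
       """POST a feature vector to the /predict endpoint and return the parsed reply."""
       payload = json.dumps({"features": features}).encode("utf-8")
       req = request.Request(url, data=payload,
                             headers={"Content-Type": "application/json"})
       with request.urlopen(req) as resp:
           return json.loads(resp.read())

   # The request body matches the curl example above:
   body = json.dumps({"features": [5.1, 3.5, 1.4, 0.2]})
   ```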

4. Monitoring and Logging:
   The library automatically logs incoming requests and predictions. You can customize the logging level in the setup_logging method.
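   As a generic illustration of what adjusting the logging level means, here is a stdlib equivalent; the logger name `ml_deploy_lite` is an assumption, and this is not the library's own setup_logging implementation:

   ```python
   import logging

   # Hypothetical sketch of the kind of configuration setup_logging performs;
   # the logger name "ml_deploy_lite" is an assumption for illustration.
   logger = logging.getLogger("ml_deploy_lite")
   handler = logging.StreamHandler()
   handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
   logger.addHandler(handler)
   logger.setLevel(logging.DEBUG)  # raise or lower verbosity here

   logger.debug("request received: features=%s", [5.1, 3.5, 1.4, 0.2])
   ```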

5. Docker Integration:
   To create a Docker image for your application, use the provided create_dockerfile function in ml_deploy_lite/docker.py.
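   The exact output of create_dockerfile is not shown here; as a rough sketch, a generated Dockerfile for a Flask/gunicorn service would plausibly resemble the one below. The base image, entrypoint, and port are assumptions, and this helper is hand-written for illustration, not part of the library:

   ```python
   def build_dockerfile(port=5000):
       """Return a minimal Dockerfile for a Flask/gunicorn model service.

       Hand-written sketch of the kind of file create_dockerfile might
       produce; base image, commands, and entrypoint are assumptions."""
       return (
           "FROM python:3.9-slim\n"
           "WORKDIR /app\n"
           "COPY . /app\n"
           "RUN pip install --no-cache-dir ml_deploy_lite\n"
           f"EXPOSE {port}\n"
           f'CMD ["gunicorn", "--bind", "0.0.0.0:{port}", "app:app"]\n'
       )

   print(build_dockerfile())
   ```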

6. Kubernetes Integration:
   To create a Kubernetes deployment configuration, use the create_k8s_deployment function in ml_deploy_lite/k8s.py.
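   For orientation, a minimal Kubernetes Deployment manifest of the kind create_k8s_deployment might generate is sketched below as a plain dict; the name, image, replica count, and port are illustrative assumptions, and this helper is not the library's own function:

   ```python
   def build_k8s_deployment(name="ml-deploy-lite", image="ml-deploy-lite:latest",
                            replicas=2, port=5000):
       """Return a minimal Kubernetes Deployment manifest as a dict.

       Illustrative sketch only; names, image, and replica count
       are assumptions, not the library's defaults."""
       return {
           "apiVersion": "apps/v1",
           "kind": "Deployment",
           "metadata": {"name": name},
           "spec": {
               "replicas": replicas,
               "selector": {"matchLabels": {"app": name}},
               "template": {
                   "metadata": {"labels": {"app": name}},
                   "spec": {
                       "containers": [{
                           "name": name,
                           "image": image,
                           "ports": [{"containerPort": port}],
                       }]
                   },
               },
           },
       }

   manifest = build_k8s_deployment()
   ```

   Serializing this dict with pyyaml (already among the package's requirements) yields a file that `kubectl apply -f` accepts.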

## Conclusion:

The ML Deploy Lite Library is designed to make the deployment of machine learning models straightforward and efficient. With its robust features and easy-to-use interface, you can quickly turn your models into production-ready services. For more information, check the GitHub repository for documentation and updates: https://github.com/Blacksujit/ML-Deploy-Lite.git


            
