merlin

- Name: merlin
- Version: 1.12.1
- Home page: https://github.com/LLNL/merlin
- Summary: The building blocks of workflows!
- Upload time: 2024-04-16 15:22:31
- Author: Merlin Dev team
- License: MIT
- Keywords: machine learning, workflow
![Python versions](https://img.shields.io/pypi/pyversions/merlin)
[![License](https://img.shields.io/pypi/l/merlin)](https://pypi.org/project/merlin/)
![Activity](https://img.shields.io/github/commit-activity/m/LLNL/merlin)
[![Issues](https://img.shields.io/github/issues/LLNL/merlin)](https://github.com/LLNL/merlin/issues)
[![Pull requests](https://img.shields.io/github/issues-pr/LLNL/merlin)](https://github.com/LLNL/merlin/pulls)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/LLNL/merlin.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/LLNL/merlin/context:python)

![Merlin](https://raw.githubusercontent.com/LLNL/merlin/main/docs/images/merlin.png)

## A brief introduction to Merlin
Merlin is a tool for running machine-learning-based workflows. Its goal
is to make it easy to build, run, and process the kinds of large-scale
HPC workflows needed for cognitive simulation.

At its heart, Merlin is a distributed task queuing system, designed to allow complex
HPC workflows to scale to large numbers of simulations
(we've run 100 million on the Sierra supercomputer).

Why would you want to run that many simulations?
To become your own Big Data generator.

Data sets of this size can be large enough to train deep neural networks
that mimic your HPC application, for use in tasks such as design
optimization, uncertainty quantification, and statistical experimental
inference. Merlin has been used to study inertial confinement
fusion, extreme ultraviolet light generation, structural mechanics, and
atomic physics, to name a few.

How does it work?

In essence, Merlin coordinates complex workflows through a persistent
external queue server that lives outside of your HPC systems, but that
can talk to nodes on your cluster(s). As jobs spin up across your ecosystem,
workers on those allocations pull work from a central server, which
coordinates the task dependencies for your workflow. Since this coordination
is done via direct connections to the workers (i.e. not through a file
system), your workflow can scale to very large numbers of workers,
which means a very large number of simulations with very little overhead.

Furthermore, since the workers pull their instructions from the central
server, you can do a lot of other neat things, like having multiple
batch allocations contribute to the same work (think surge computing), or
specialize workers to different machines (think CPU workers for your
application and GPU workers that train your neural network). Another
neat feature is that these workers can add work back to the central
server, which enables a variety of dynamic workflows, such as those
needed for intelligent sampling of design spaces or reinforcement
learning tasks.

Merlin does all of this by leveraging key HPC and cloud computing
technologies, building on open-source components. It uses
[maestro](https://github.com/LLNL/maestrowf) to
provide an interface for describing workflows and defining
workflow task dependencies. It translates those dependencies into concrete
tasks via [celery](https://docs.celeryproject.org/),
which can be configured for a variety of backend
technologies ([rabbitmq](https://www.rabbitmq.com) and
[redis](https://redis.io) are currently supported). Although not
a hard dependency, we encourage the use of
[flux](http://flux-framework.org) for interfacing with
HPC batch systems, since it can scale to a very large number of jobs.
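
The pull-based coordination described above is, at its core, a producer/consumer queue: a server holds tasks, and workers pull whatever is available. Here is a toy standard-library sketch of that pattern. This is not Merlin's API (Merlin uses celery with a rabbitmq or redis broker, and the names below are illustrative):

```python
# Toy sketch of pull-based task coordination: a queue of work,
# drained by multiple workers that each pull whatever is available.
import queue
import threading

task_queue = queue.Queue()
results = []

def worker(name):
    # Each worker pulls tasks until the queue is empty, then exits.
    while True:
        try:
            sample = task_queue.get_nowait()
        except queue.Empty:
            return
        # Stand-in for launching one simulation for this sample.
        results.append((name, sample))
        task_queue.task_done()

# The "server" side enqueues work; workers on any allocation pull from it.
for sample in range(8):
    task_queue.put(sample)

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # all 8 samples processed
```

Because workers pull rather than being assigned work, adding more workers (or more allocations) requires no change to the producer side, which is what lets this pattern scale.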

The integrated system looks a little something like this:

![A Typical Merlin Workflow](https://raw.githubusercontent.com/LLNL/merlin/main/docs/assets/images/merlin_arch.png)

In this example, here's how it all works:

1. The scientist describes her HPC workflow as a maestro DAG (directed acyclic graph)
"spec" file, `workflow.yaml`.
2. She then sends it to the persistent server with `merlin run workflow.yaml`.
Merlin translates the file into tasks.
3. The scientist submits a job request to her HPC center. These jobs ask for workers via
the command `merlin run-workers workflow.yaml`.
4. Coffee break.
5. As jobs stand up, they pull work from the queue, making calls to flux to get the
necessary HPC resources.
6. Later, workers on a different allocation, with GPU resources, connect to the
server and contribute to processing the workload.

The central queue server deals with task dependencies and keeps the workers fed.
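
For a sense of what a spec file contains, here is a minimal hypothetical sketch in the maestro-style YAML format; the step names, commands, and exact keys are illustrative, and the `feature_demo` example in the Quick Start shows a real spec:

```yaml
description:
  name: toy_demo
  description: A two-step sketch of a DAG spec

study:
  - name: simulate
    description: Run one simulation
    run:
      cmd: echo "running simulation"

  - name: learn
    description: Train on the simulation output
    run:
      cmd: echo "training model"
      depends: [simulate]
```

The `depends` entry is what becomes a task dependency on the queue server: `learn` tasks are only handed to workers once the `simulate` tasks they depend on have finished.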

For more details, check out the rest of the [documentation](https://merlin.readthedocs.io/).

Need help? <merlin@llnl.gov>

## Quick Start

Note: Merlin supports Python 3.6+.

To install Merlin and its dependencies, run:

    $ pip3 install merlin
    
Create your application config file:

    $ merlin config

That's it.

To try something closer to a real use case, namely a demo workflow
that combines simulation and machine learning,
first generate an example workflow:

    $ merlin example feature_demo

Then install the workflow's dependencies:

    $ pip install -r feature_demo/requirements.txt

Then process the workflow and create tasks on the server:

    $ merlin run feature_demo/feature_demo.yaml

And finally, launch workers that can process those tasks:

    $ merlin run-workers feature_demo/feature_demo.yaml


## Documentation
[**Full documentation**](http://merlin.readthedocs.io/) is available, or
run:

    $ merlin --help

(or add `--help` to the end of any sub-command you
want to learn more about.)


## Code of Conduct
Please note that Merlin has a
[**Code of Conduct**](https://github.com/LLNL/merlin/blob/main/.github/CODE_OF_CONDUCT.md). By participating in
the Merlin community, you agree to abide by its rules.


## License
Merlin is distributed under the terms of the [MIT LICENSE](https://github.com/LLNL/merlin/blob/main/LICENSE).

LLNL-CODE-797170

            
