omnipy

Name: omnipy
Version: 0.15.12
Home page: https://fairtracks.net/fair/#fair-07-transformation
Documentation: http://omnipy.readthedocs.io/
Repository: http://github.com/fairtracks/omnipy
Summary: Omnipy is a high level Python library for type-driven data wrangling and scalable workflow orchestration (under development)
Upload time: 2024-04-22 22:18:15
Author: Sveinung Gundersen
Requires Python: <3.12,>=3.10
License: Apache-2.0
Keywords: data wrangling, metadata, workflows, etl, research data, prefect, pydantic, FAIR, ontologies, JSON, tabular, type-driven, orchestration, data models, universal
![Omnipy logo](https://fairtracks.net/_nuxt/img/9a84303.webp)

Omnipy is a high level Python library for type-driven data wrangling and scalable workflow
orchestration.

![Conceptual overview of Omnipy](https://fairtracks.net/materials/images/omnipy-overview.png)

# Updates

- **Feb 3, 2023:** Documentation of the Omnipy API is still sparse. However, for examples of running
  code, please check out the [omnipy-examples repo](https://github.com/fairtracks/omnipy_examples).
- **Dec 22, 2022:** Omnipy is the new name of the Python package formerly known as uniFAIR.
  _We are very grateful to Dr. Jamin Chen, who graciously transferred ownership of the (mostly
  unused) "omnipy" name on PyPI to us!_

# Installation and use

For basic information on installation and use of omnipy, read the [INSTALL.md](INSTALL.md) 
file.

# Contribute to omnipy development

For basic information on how to set up a development environment to effectively contribute to 
the omnipy library, read the [CONTRIBUTING.md](CONTRIBUTING.md) file.

# Overview of Omnipy

## Generic functionality

_(NOTE: Read the
section [Transformation on the FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation)
for a more detailed and better formatted version of the following description!)_

Omnipy is designed primarily to simplify development and deployment of (meta)data transformation
processes in the context of FAIRification and data brokering efforts. However, the functionality is
very generic and can also be used to support research data (and metadata) transformations in a range
of fields and contexts beyond life science, including day-to-day research scenarios:

## Data wrangling in day-to-day research

Researchers in life science and other data-centric fields
often need to extract, manipulate and integrate data and/or metadata from different sources, such as
repositories, databases or flat files. Much research time is spent on trivial and not-so-trivial
details of such ["data wrangling"](https://en.wikipedia.org/wiki/Data_wrangling):

- reformat data structures
- clean up errors
- remove duplicate data
- map and integrate dataset fields
- etc.

General-purpose software for data wrangling and analysis, such as [Pandas](https://pandas.pydata.org/),
[R](https://www.r-project.org/) or [Frictionless](https://frictionlessdata.io/), is useful, but
researchers still regularly end up with hard-to-reuse scripts, often with manual steps.

## Step-wise data model transformations

With the Omnipy Python package, researchers can import (meta)data in almost any shape or form:
_nested JSON; tabular
(relational) data; binary streams; or other data structures_. Through a step-by-step process, data
is continuously parsed and reshaped according to a series of data model transformations.

## "Parse, don't validate"

Omnipy follows the principles of "Type-driven design" (read
_Technical note #2: "Parse, don't validate"_ on the
[FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation) for more info). It
makes use of cutting-edge [Python type hints](https://peps.python.org/pep-0484/) and the popular
[pydantic](https://pydantic-docs.helpmanual.io/) package to "pour" data into precisely defined data
models that can range from very general (e.g. _"any kind of JSON data", "any kind of tabular data"_,
etc.) to very specific (e.g. _"follow the FAIRtracks JSON Schema for track files with the extra
restriction of only allowing BigBED files"_).
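
To make the idea concrete, here is a minimal sketch using pydantic directly (not Omnipy's own
task/dataset API, and with hypothetical field names), showing how raw data is parsed into a
precisely defined model up front:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical, precisely defined data model: every record must have these fields and types.
class TrackFile(BaseModel):
    file_name: str
    file_format: str
    size_bytes: int

raw_record = {"file_name": "sample1.bb", "file_format": "BigBED", "size_bytes": "1024"}

try:
    # "Pour" the raw data into the model: values are parsed (and coerced) once, up front,
    # so downstream code never needs to re-check types.
    track = TrackFile(**raw_record)
    print(track.size_bytes + 1)  # guaranteed to be an int at this point
except ValidationError as err:
    print(err)  # malformed input is rejected at the boundary instead of validated later
```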

## Data types as contracts

Omnipy _tasks_ (single steps) or _flows_ (workflows) are defined as
transformations from specific _input_ data models to specific _output_ data models.
[pydantic](https://pydantic-docs.helpmanual.io/)-based parsing guarantees that the input and output
data always follow the data models (i.e. data types). Thus, the data models define "contracts"
that simplify reuse of tasks and flows in a _mix-and-match_ fashion.
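
As a rough, Omnipy-agnostic sketch of this contract idea (again using plain pydantic models with
illustrative names rather than Omnipy's actual task API), a step can be written as a function whose
signature names its input and output models:

```python
from pydantic import BaseModel

class RawSample(BaseModel):    # illustrative input model
    name: str
    value: str

class CleanSample(BaseModel):  # illustrative output model
    name: str
    value: float

# The type hints are the contract: any step producing RawSample instances
# can be chained, mix-and-match, with any step consuming them.
def clean_sample(raw: RawSample) -> CleanSample:
    return CleanSample(name=raw.name.strip().lower(), value=float(raw.value))

print(clean_sample(RawSample(name=" GeneA ", value="3.14")))
```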

## Catalog of common processing steps

Omnipy is built from the ground up to be modular. We aim
to provide a catalog of commonly useful functionality, including:

- data import from REST API endpoints, common flat file formats, database dumps, etc.
- flattening of complex, nested JSON structures
- standardization of relational tabular data (i.e. removing redundancy)
- mapping of tabular data between schemas
- lookup and mapping of ontology terms
- semi-automatic data cleaning (through e.g. [OpenRefine](https://openrefine.org/))
- support for common data manipulation software and libraries, such as
  [Pandas](https://pandas.pydata.org/), [R](https://www.r-project.org/),
  [Frictionless](https://frictionlessdata.io/), etc.

In particular, we will provide a _FAIRtracks_ module that contains data models and processing steps
to transform metadata to follow the [FAIRtracks standard](https://fairtracks.net/standards/#standards-01-fairtracks).

![Catalog of commonly useful processing steps, data modules and tool integrations](https://fairtracks.net/_nuxt/img/7101c5f-1280.png)
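
As one small, library-agnostic illustration of a catalog item listed above (flattening nested
JSON), a plain-Python sketch could look like this; Omnipy's own implementation and naming may
differ:

```python
from typing import Any

def flatten_json(obj: Any, prefix: str = "") -> dict[str, Any]:
    """Flatten nested dicts/lists into a single dict with dotted keys (sketch only)."""
    flat: dict[str, Any] = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            flat.update(flatten_json(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            flat.update(flatten_json(value, f"{prefix}{i}."))
    else:
        flat[prefix.rstrip(".")] = obj
    return flat

print(flatten_json({"study": {"id": "S1", "samples": [{"name": "a"}, {"name": "b"}]}}))
# -> {'study.id': 'S1', 'study.samples.0.name': 'a', 'study.samples.1.name': 'b'}
```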

## Refine and apply templates

An Omnipy module typically consists of a set of generic _task_ and
_flow templates_ with related data models, (de)serializers, and utility functions. The user can then
pick task and flow templates from this extensible, modular catalog, further refine them in the
context of a custom, use case-specific flow, and apply them to the desired compute engine to carry
out the transformations needed to wrangle data into the required shape.

## Rerun only when needed

When piecing together a custom flow in Omnipy, the user has persistent
access to the state of the data at every step of the process. Persistent intermediate data allows
for caching of tasks based on the input data and parameters. Hence, if the input data and parameters
of a task do not change between runs, the task is not rerun. This is particularly useful for
importing from REST API endpoints, as a flow can be continuously rerun without taxing the remote
server; data import will only be carried out in the initial iteration or when the REST API signals that
the data has changed.
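
The caching principle (not Omnipy's actual persistence layer, whose details are not shown here)
can be sketched generically as keying persisted results on a hash of the task name and its
parameters:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".task_cache")  # illustrative location, not Omnipy's real storage path

def run_cached(task_name: str, params: dict, compute):
    """Run `compute(**params)` only if this (task_name, params) combination is new."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(json.dumps([task_name, params], sort_keys=True).encode()).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():        # unchanged inputs/parameters: reuse persisted output
        return json.loads(cache_file.read_text())
    result = compute(**params)     # changed or first-time inputs: run and persist
    cache_file.write_text(json.dumps(result))
    return result

# The expensive import runs once; an identical second call is served from disk.
rows = run_cached("import_endpoint", {"url": "https://example.org/api"},
                  lambda url: {"rows": 42})
```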

## Scale up with external compute resources

In the case of large datasets, the researcher can set
up a flow based on a representative sample of the full dataset, in a size that is suited for running
locally on, say, a laptop. Once the flow has produced the correct output on the sample data, the
operation can be seamlessly scaled up to the full dataset and sent off in
[software containers](https://www.docker.com/resources/what-container/) to run on external compute
resources, using e.g. [Kubernetes](https://kubernetes.io/). Such offloaded flows
can be easily monitored using a web GUI.

![Working with Omnipy directly from an Integrated Development Environment (IDE)](https://fairtracks.net/_nuxt/img/f9be071-1440.png)

## Industry-standard ETL backbone

Offloading of flows to external compute resources is provided by
the integration of Omnipy with a workflow engine based on the [Prefect](https://www.prefect.io/)
Python package. Prefect is an industry-leading platform for dataflow automation and orchestration
that brings a [series of powerful features](https://www.prefect.io/opensource/) to Omnipy:

- Predefined integrations with a range of compute infrastructure solutions
- Predefined integration with various services to support extraction, transformation, and loading
  (ETL) of data and metadata
- Code as workflow ("If Python can write it, Prefect can run it")
- Dynamic workflows: no predefined Directed Acyclic Graphs (DAGs) needed!
- Command line and web GUI-based visibility and control of jobs
- Trigger jobs from external events such as GitHub commits, file uploads, etc.
- Define continuously running workflows that still respond to external events
- Run tasks concurrently through support for asynchronous tasks

![Overview of the compute and storage infrastructure integrations that come built in with Prefect](https://fairtracks.net/_nuxt/img/ccc322a-1440.png)
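
To give a flavour of the code-as-workflow style described above, here is a minimal Prefect 2-style
sketch (standalone Prefect, not Omnipy's wrapping of it; the task names and logic are illustrative):

```python
from prefect import flow, task

@task
def extract(url: str) -> list[dict]:
    return [{"id": 1}, {"id": 2}]  # stand-in for a real REST API call

@task
def transform(records: list[dict]) -> list[dict]:
    return [{**r, "processed": True} for r in records]

@flow
def etl(url: str = "https://example.org/api") -> list[dict]:
    # Ordinary Python control flow; no static DAG has to be declared up front.
    return transform(extract(url))

if __name__ == "__main__":
    print(etl())
```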

## Pluggable workflow engines

It is also possible to integrate Omnipy with other workflow
backends by implementing new workflow engine plugins. This is relatively easy to do, as the core
architecture of Omnipy allows the user to easily switch the workflow engine at runtime. Omnipy
supports both traditional DAG-based and the more _avant-garde_ code-based definition of flows. Two
workflow engines are currently supported: _local_ and _prefect_.
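
A hypothetical sketch of what such a plugin seam could look like (the names below are assumptions
for illustration, not Omnipy's actual engine interface):

```python
from typing import Any, Callable, Protocol

class WorkflowEngine(Protocol):
    """Hypothetical engine plugin interface; Omnipy's real API may differ."""
    def run_task(self, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any: ...

class LocalEngine:
    """Simplest possible engine: run everything in-process."""
    def run_task(self, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
        return func(*args, **kwargs)

def execute(engine: WorkflowEngine, func: Callable[..., Any], *args: Any, **kwargs: Any) -> Any:
    # The engine can be swapped at runtime without touching the task code itself.
    return engine.run_task(func, *args, **kwargs)

print(execute(LocalEngine(), lambda x: x * 2, 21))  # -> 42
```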

            
