![Omnipy logo](https://fairtracks.net/_nuxt/img/9a84303.webp)
Omnipy is a high-level Python library for type-driven data wrangling and scalable workflow
orchestration.
![Conceptual overview of Omnipy](https://fairtracks.net/materials/images/omnipy-overview.png)
## Updates
- **June 22, 2024:** We're not very good at writing updates. Expect a larger update soon on an important
and potentially groundbreaking new feature of Omnipy: the capability of model objects to automatically
mimic behaviour of the modelled class – with the addition of snapshots and rollbacks.
  So e.g. `Model[list[int]]()` is not just a run-time type-safe parser that continuously makes sure that the
  elements in the list are, in fact, integers; the object can also be operated on as a list using e.g.
  `.append()`, `.insert()` and concatenation with the `+` operator. Furthermore, if you append an
  unparseable element, say `"abc"` instead of `"123"`, it will roll back the contents to the previously
  validated snapshot!
- **Feb 3, 2023:** Documentation of the Omnipy API is still sparse. However, for examples of running
code, please check out the [omnipy-examples repo](https://github.com/fairtracks/omnipy_examples).
- **Dec 22, 2022:** Omnipy is the new name of the Python package formerly known as uniFAIR.
  _We are very grateful to Dr. Jamin Chen, who graciously transferred ownership of the (mostly
  unused) "omnipy" name on PyPI to us!_
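The snapshot-and-rollback behaviour described in the June 2024 update can be sketched in plain Python. This is a hypothetical stand-in for illustration only, not the actual omnipy implementation:

```python
from copy import deepcopy


class IntListModel:
    """Sketch of a list-of-ints model with snapshot and rollback.

    Every mutation triggers a re-parse; if parsing fails, the contents
    are rolled back to the last validated snapshot.
    """

    def __init__(self, contents=None):
        self._contents = [int(el) for el in (contents or [])]
        self._snapshot = deepcopy(self._contents)

    def append(self, element):
        self._contents.append(element)
        try:
            # Re-parse: every element must be convertible to int
            self._contents = [int(el) for el in self._contents]
            self._snapshot = deepcopy(self._contents)
        except (TypeError, ValueError):
            # Roll back to the previously validated snapshot
            self._contents = deepcopy(self._snapshot)

    @property
    def contents(self):
        return list(self._contents)


model = IntListModel([1, 2])
model.append("123")   # parseable: kept as the integer 123
model.append("abc")   # unparseable: contents rolled back
print(model.contents)  # prints [1, 2, 123]
```

The real `Model` objects in omnipy additionally mimic the full interface of the modelled class; the sketch above only covers `append`.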
## Overview of Omnipy
### Generic functionality
_(NOTE: Read the
section [Transformation on the FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation)
for a more detailed and better formatted version of the following description!)_
Omnipy is designed primarily to simplify development and deployment of (meta)data transformation
processes in the context of FAIRification and data brokering efforts. However, the functionality is
very generic and can also be used to support research data (and metadata) transformations in a range
of fields and contexts beyond life science, including day-to-day research scenarios:
### Data wrangling in day-to-day research
Researchers in life science and other data-centric fields
often need to extract, manipulate and integrate data and/or metadata from different sources, such as
repositories, databases or flat files. Much research time is spent on trivial and not-so-trivial
details of such ["data wrangling"](https://en.wikipedia.org/wiki/Data_wrangling):
- reformat data structures
- clean up errors
- remove duplicate data
- map and integrate dataset fields
- etc.
General software for data wrangling and analysis, such as [Pandas](https://pandas.pydata.org/),
[R](https://www.r-project.org/) or [Frictionless](https://frictionlessdata.io/), is useful, but
researchers still regularly end up with hard-to-reuse scripts, often with manual steps.
### Step-wise data model transformations
With the Omnipy Python package, researchers can import (meta)data in almost any shape or form:
_nested JSON; tabular
(relational) data; binary streams; or other data structures_. Through a step-by-step process, data
is continuously parsed and reshaped according to a series of data model transformations.
### "Parse, don't validate"
Omnipy follows the principles of "Type-driven design" (read
_Technical note #2: "Parse, don't validate"_ on the
[FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation) for more info). It
makes use of cutting-edge [Python type hints](https://peps.python.org/pep-0484/) and the popular
[pydantic](https://pydantic-docs.helpmanual.io/) package to "pour" data into precisely defined data
models that can range from very general (e.g. _"any kind of JSON data", "any kind of tabular data"_,
etc.) to very specific (e.g. _"follow the FAIRtracks JSON Schema for track files with the extra
restriction of only allowing BigBED files"_).
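The "parse, don't validate" idea can be illustrated with a minimal, hypothetical parser in plain Python (the real omnipy models are pydantic-based; the names below are invented for illustration): raw input is parsed once into a precise type, so downstream code can rely on the type alone and never re-checks the data.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NonEmptyIntTable:
    """A precisely defined data model: a non-empty table of int rows."""
    rows: tuple[tuple[int, ...], ...]


def parse_table(raw: list[list[object]]) -> NonEmptyIntTable:
    """Parse raw input into the model, or fail loudly.

    Once a NonEmptyIntTable exists, its invariants are guaranteed:
    "parse, don't validate".
    """
    if not raw:
        raise ValueError('table must not be empty')
    rows = tuple(tuple(int(cell) for cell in row) for row in raw)
    return NonEmptyIntTable(rows)


table = parse_table([[1, '2'], [3, 4]])
print(table.rows)  # prints ((1, 2), (3, 4))
```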
### Data types as contracts
Omnipy _tasks_ (single steps) or _flows_ (workflows) are defined as
transformations from specific _input_ data models to specific _output_ data models.
[pydantic](https://pydantic-docs.helpmanual.io/)-based parsing guarantees that the input and output
data always follow the data models (i.e. data types). Thus, the data models define "contracts"
that simplify reuse of tasks and flows in a _mix-and-match_ fashion.
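The contract idea can be sketched with two plain functions whose type annotations act as input and output models (hypothetical task names, not the omnipy API):

```python
def remove_duplicates(rows: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Task: input and output share the same '(name, count) rows' model."""
    return list(dict.fromkeys(rows))


def to_totals(rows: list[tuple[str, int]]) -> dict[str, int]:
    """Task: transforms the rows model into a 'totals per name' model."""
    totals: dict[str, int] = {}
    for name, count in rows:
        totals[name] = totals.get(name, 0) + count
    return totals


# Because remove_duplicates' output model matches to_totals' input model,
# the two tasks compose safely into a flow:
flow_result = to_totals(remove_duplicates([('a', 1), ('a', 1), ('b', 2)]))
print(flow_result)  # prints {'a': 1, 'b': 2}
```

In Omnipy the matching is enforced at run time by pydantic-based parsing rather than by annotations alone.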
### Catalog of common processing steps
Omnipy is built from the ground up to be modular. We aim
to provide a catalog of commonly useful functionality, including:
- data import from REST API endpoints, common flat file formats, database dumps, etc.
- flattening of complex, nested JSON structures
- standardization of relational tabular data (i.e. removing redundancy)
- mapping of tabular data between schemas
- lookup and mapping of ontology terms
- semi-automatic data cleaning (through e.g. [Open Refine](https://openrefine.org/))
- support for common data manipulation software and libraries, such as
[Pandas](https://pandas.pydata.org/), [R](https://www.r-project.org/),
[Frictionless](https://frictionlessdata.io/), etc.
In particular, we will provide a _FAIRtracks_ module that contains data models and processing steps
to transform metadata to follow the [FAIRtracks standard](/standards/#standards-01-fairtracks).
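One of the catalog steps listed above, flattening of complex, nested JSON structures, can be sketched as follows (a simplified illustration, not the omnipy catalog implementation):

```python
def flatten_json(obj: dict, prefix: str = '') -> dict:
    """Recursively flatten a nested JSON-like dict into dotted keys."""
    flat: dict = {}
    for key, value in obj.items():
        full_key = f'{prefix}.{key}' if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_json(value, full_key))
        else:
            flat[full_key] = value
    return flat


record = {'sample': {'id': 'S1', 'organism': {'name': 'human'}}}
print(flatten_json(record))
# prints {'sample.id': 'S1', 'sample.organism.name': 'human'}
```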
![Catalog of commonly useful processing steps, data modules and tool integrations](https://fairtracks.net/_nuxt/img/7101c5f-1280.png)
### Refine and apply templates
An Omnipy module typically consists of a set of generic _task_ and
_flow templates_ with related data models, (de)serializers, and utility functions. The user can then
pick task and flow templates from this extensible, modular catalog, further refine them in the
context of a custom, use case-specific flow, and apply them to the desired compute engine to carry
out the transformations needed to wrangle data into the required shape.
### Rerun only when needed
When piecing together a custom flow in Omnipy, the user has persistent
access to the state of the data at every step of the process. Persistent intermediate data allows
for caching of tasks based on the input data and parameters. Hence, if the input data and parameters
of a task do not change between runs, the task is not rerun. This is particularly useful for
importing from REST API endpoints, as a flow can be continuously rerun without taxing the remote
server; data import will only be carried out in the initial iteration or when the REST API signals that
the data has changed.
### Scale up with external compute resources
In the case of large datasets, the researcher can set
up a flow based on a representative sample of the full dataset, in a size that is suited for running
locally on, say, a laptop. Once the flow has produced the correct output on the sample data, the
operation can be seamlessly scaled up to the full dataset and sent off in
[software containers](https://www.docker.com/resources/what-container/) to run on external compute
resources, using e.g. [Kubernetes](https://kubernetes.io/). Such offloaded flows
can be easily monitored using a web GUI.
![Working with Omnipy directly from an Integrated Development Environment (IDE)](https://fairtracks.net/_nuxt/img/f9be071-1440.png)
### Industry-standard ETL backbone
Offloading of flows to external compute resources is provided by
the integration of Omnipy with a workflow engine based on the [Prefect](https://www.prefect.io/)
Python package. Prefect is an industry-leading platform for dataflow automation and orchestration
that brings a [series of powerful features](https://www.prefect.io/opensource/) to Omnipy:
- Predefined integrations with a range of compute infrastructure solutions
- Predefined integration with various services to support extraction, transformation, and loading
(ETL) of data and metadata
- Code as workflow ("If Python can write it, Prefect can run it")
- Dynamic workflows: no predefined Directed Acyclic Graphs (DAGs) needed!
- Command line and web GUI-based visibility and control of jobs
- Trigger jobs from external events such as GitHub commits, file uploads, etc.
- Define continuously running workflows that still respond to external events
- Run tasks concurrently through support for asynchronous tasks
![Overview of the compute and storage infrastructure integrations that come built-in with Prefect](https://fairtracks.net/_nuxt/img/ccc322a-1440.png)
### Pluggable workflow engines
It is also possible to integrate Omnipy with other workflow
backends by implementing new workflow engine plugins. This is relatively straightforward, as the core
architecture of Omnipy allows the user to switch the workflow engine at runtime. Omnipy
supports both traditional DAG-based and the more _avant garde_ code-based definition of flows. Two
workflow engines are currently supported: _local_ and _prefect_.
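At its core, a workflow engine plugin boils down to implementing a common interface so that engines can be swapped at runtime. A hypothetical sketch of the idea (invented names, not the actual omnipy plugin API):

```python
from typing import Callable, Protocol


class WorkflowEngine(Protocol):
    """Minimal engine interface: anything that can run a task."""

    def run_task(self, task: Callable, *args):
        ...


class LocalEngine:
    """Runs tasks directly in the current process."""

    def run_task(self, task: Callable, *args):
        return task(*args)


# Engines are interchangeable at runtime as long as they satisfy the
# protocol, so flow definitions need not know which backend runs them.
engine: WorkflowEngine = LocalEngine()
print(engine.run_task(sum, [1, 2, 3]))  # prints 6
```

A _prefect_ engine would implement the same interface but submit the task to Prefect instead of calling it directly.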
## Raw data
{
"_id": null,
"home_page": "https://fairtracks.net/fair/#fair-07-transformation",
"name": "omnipy",
"maintainer": null,
"docs_url": null,
"requires_python": "<3.13,>=3.10",
"maintainer_email": null,
"keywords": "data wrangling, metadata, workflows, etl, research data, prefect, pydantic, FAIR, ontologies, JSON, tabular, type-driven, orchestration, data models, universal",
"author": "Sveinung Gundersen",
"author_email": "sveinugu@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/0e/07/14dd8bb0e35599193c60e4683856803bf2c78724eabd9f8bed5c18d43a6a/omnipy-0.19.0.tar.gz",
"platform": null,
"description": "![Omnypy logo](https://fairtracks.net/_nuxt/img/9a84303.webp)\n\nOmnipy is a high level Python library for type-driven data wrangling and scalable workflow\norchestration.\n\n![Conceptual overview of Omnipy](https://fairtracks.net/materials/images/omnipy-overview.png)\n\n## Updates\n\n- **June 22, 2024:** We're not very good at writing updates. Expect a larger update soon on an important \n and potentially groundbreaking new feature of Omnipy: the capability of model objects to automatically \n mimic behaviour of the modelled class \u2013 with the addition of snapshots and rollbacks.\n So e.g. `Model[list[int]]()` is not just a run-time typesafe parser that continuously makes sure that the \n elements in the list are, in fact, integers; the object can also be operated as a list using e.g. \n `.append()`, `.insert()` and concatenation with the `+` operator; and furthermore: if you append an\n unparseable element, say `\"abc\"` instead of `\"123\"`, it will roll back the contents to the previously \n validated snapshot!\n- **Feb 3, 2023:** Documentation of the Omnipy API is still sparse. However, for examples of running\n code, please check out the [omnipy-examples repo](https://github.com/fairtracks/omnipy_examples).\n- **Dec 22, 2022:** Omnipy is the new name of the Python package formerly known as uniFAIR.\n _We are very grateful to Dr. Jamin Chen, who gracefully transferred ownership of the (mostly \n unused) \"omnipy\" name in PyPI to us!__\n\n## Overview of Omnipy\n\n### Generic functionality\n\n_(NOTE: Read the\nsection [Transformation on the FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation)\nfor a more detailed and better formatted version of the following description!)_\n\nOmnipy is designed primarily to simplify development and deployment of (meta)data transformation\nprocesses in the context of FAIRification and data brokering efforts. 
However, the functionality is\nvery generic and can also be used to support research data (and metadata) transformations in a range\nof fields and contexts beyond life science, including day-to-day research scenarios:\n\n### Data wrangling in day-to-day research\n\nResearchers in life science and other data-centric fields\noften need to extract, manipulate and integrate data and/or metadata from different sources, such as\nrepositories, databases or flat files. Much research time is spent on trivial and not-so-trivial\ndetails of such [\"data wrangling\"](https://en.wikipedia.org/wiki/Data_wrangling):\n\n- reformat data structures\n- clean up errors\n- remove duplicate data\n- map and integrate dataset fields\n- etc.\n\nGeneral software for data wrangling and analysis, such as [Pandas](https://pandas.pydata.org/),\n[R](https://www.r-project.org/) or [Frictionless](https://frictionlessdata.io/), are useful, but\nresearchers still regularly end up with hard-to-reuse scripts, often with manual steps.\n\n### Step-wise data model transformations\n\nWith the Omnipy Python package, researchers can import (meta)data in almost any shape or form:\n_nested JSON; tabular\n(relational) data; binary streams; or other data structures_. Through a step-by-step process, data\nis continuously parsed and reshaped according to a series of data model transformations.\n\n### \"Parse, don't validate\"\n\nOmnipy follows the principles of \"Type-driven design\" (read\n_Technical note #2: \"Parse, don't validate\"_ on the\n[FAIRtracks.net website](https://fairtracks.net/fair/#fair-07-transformation) for more info). It\nmakes use of cutting-edge [Python type hints](https://peps.python.org/pep-0484/) and the popular\n[pydantic](https://pydantic-docs.helpmanual.io/) package to \"pour\" data into precisely defined data\nmodels that can range from very general (e.g. _\"any kind of JSON data\", \"any kind of tabular data\"_,\netc.) to very specific (e.g. 
_\"follow the FAIRtracks JSON Schema for track files with the extra\nrestriction of only allowing BigBED files\"_).\n\n### Data types as contracts\n\nOmnipy _tasks_ (single steps) or _flows_ (workflows) are defined as\ntransformations from specific _input_ data models to specific _output_ data models.\n[pydantic](https://pydantic-docs.helpmanual.io/)-based parsing guarantees that the input and output\ndata always follows the data models (i.e. data types). Thus, the data models defines \"contracts\"\nthat simplifies reuse of tasks and flows in a _mix-and-match_ fashion.\n\n### Catalog of common processing steps\n\nOmnipy is built from the ground up to be modular. We aim\nto provide a catalog of commonly useful functionality ranging from:\n\n- data import from REST API endpoints, common flat file formats, database dumps, etc.\n- flattening of complex, nested JSON structures\n- standardization of relational tabular data (i.e. removing redundancy)\n- mapping of tabular data between schemas\n- lookup and mapping of ontology terms\n- semi-automatic data cleaning (through e.g. [Open Refine](https://openrefine.org/))\n- support for common data manipulation software and libraries, such as\n [Pandas](https://pandas.pydata.org/), [R](https://www.r-project.org/),\n [Frictionless](https://frictionlessdata.io/), etc.\n\nIn particular, we will provide a _FAIRtracks_ module that contains data models and processing steps\nto transform metadata to follow the [FAIRtracks standard](/standards/#standards-01-fairtracks).\n\n![Catalog of commonly useful processing steps, data modules and tool integrations](https://fairtracks.net/_nuxt/img/7101c5f-1280.png)\n\n### Refine and apply templates\n\nAn Omnipy module typically consists of a set of generic _task_ and\n_flow templates_ with related data models, (de)serializers, and utility functions. 
The user can then\npick task and flow templates from this extensible, modular catalog, further refine them in the\ncontext of a custom, use case-specific flow, and apply them to the desired compute engine to carry\nout the transformations needed to wrangle data into the required shape.\n\n### Rerun only when needed\n\nWhen piecing together a custom flow in Omnipy, the user has persistent\naccess to the state of the data at every step of the process. Persistent intermediate data allows\nfor caching of tasks based on the input data and parameters. Hence, if the input data and parameters\nof a task does not change between runs, the task is not rerun. This is particularly useful for\nimporting from REST API endpoints, as a flow can be continuously rerun without taxing the remote\nserver; data import will only carried out in the initial iteration or when the REST API signals that\nthe data has changed.\n\n### Scale up with external compute resources\n\nIn the case of large datasets, the researcher can set\nup a flow based on a representative sample of the full dataset, in a size that is suited for running\nlocally on, say, a laptop. Once the flow has produced the correct output on the sample data, the\noperation can be seamlessly scaled up to the full dataset and sent off in\n[software containers](https://www.docker.com/resources/what-container/) to run on external compute\nresources, using e.g. [Kubernetes](https://kubernetes.io/). Such offloaded flows\ncan be easily monitored using a web GUI.\n\n![Working with Omnipy directly from an Integrated Development Environment (IDE)](https://fairtracks.net/_nuxt/img/f9be071-1440.png)\n\n### Industry-standard ETL backbone\n\nOffloading of flows to external compute resources is provided by\nthe integration of Omnipy with a workflow engine based on the [Prefect](https://www.prefect.io/)\nPython package. 
Prefect is an industry-leading platform for dataflow automation and orchestration\nthat brings a [series of powerful features](https://www.prefect.io/opensource/) to Omnipy:\n\n- Predefined integrations with a range of compute infrastructure solutions\n- Predefined integration with various services to support extraction, transformation, and loading\n (ETL) of data and metadata\n- Code as workflow (\"If Python can write it, Prefect can run it\")\n- Dynamic workflows: no predefined Direct Acyclic Graphs (DAGs) needed!\n- Command line and web GUI-based visibility and control of jobs\n- Trigger jobs from external events such as GitHub commits, file uploads, etc.\n- Define continuously running workflows that still respond to external events\n- Run tasks concurrently through support for asynchronous tasks\n\n![Overview of the compute and storage infrastructure integrations that comes built-in with Prefect](https://fairtracks.net/_nuxt/img/ccc322a-1440.png)\n\n### Pluggable workflow engines\n\nIt is also possible to integrate Omnipy with other workflow\nbackends by implementing new workflow engine plugins. This is relatively easy to do, as the core\narchitecture of Omnipy allows the user to easily switch the workflow engine at runtime. Omnipy\nsupports both traditional DAG-based and the more _avant garde_ code-based definition of flows. Two\nworkflow engines are currently supported: _local_ and _prefect_.\n",
"bugtrack_url": null,
"license": "Apache-2.0",
"summary": "Omnipy is a high level Python library for type-driven data wrangling and scalable workflow orchestration (under development)",
"version": "0.19.0",
"project_urls": {
"Documentation": "http://omnipy.readthedocs.io/",
"Homepage": "https://fairtracks.net/fair/#fair-07-transformation",
"Repository": "http://github.com/fairtracks/omnipy"
},
"split_keywords": [
"data wrangling",
" metadata",
" workflows",
" etl",
" research data",
" prefect",
" pydantic",
" fair",
" ontologies",
" json",
" tabular",
" type-driven",
" orchestration",
" data models",
" universal"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "dac631360023396bb08c030a624628af7b44b1de288cd69620b2a8631fc68792",
"md5": "816964978119b56f5a48e8504c8b843d",
"sha256": "0f7f0a6b7a4ba395e995460dc4376576b1f99cd38ec4b14fec59771b2effa2ee"
},
"downloads": -1,
"filename": "omnipy-0.19.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "816964978119b56f5a48e8504c8b843d",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<3.13,>=3.10",
"size": 193728,
"upload_time": "2024-12-17T01:36:23",
"upload_time_iso_8601": "2024-12-17T01:36:23.778470Z",
"url": "https://files.pythonhosted.org/packages/da/c6/31360023396bb08c030a624628af7b44b1de288cd69620b2a8631fc68792/omnipy-0.19.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "0e0714dd8bb0e35599193c60e4683856803bf2c78724eabd9f8bed5c18d43a6a",
"md5": "fa57ea967779be8cc5c7b5c0062caa51",
"sha256": "e4f0b60965f475c147477878243b13c33010d13923588602e32060c319efcba0"
},
"downloads": -1,
"filename": "omnipy-0.19.0.tar.gz",
"has_sig": false,
"md5_digest": "fa57ea967779be8cc5c7b5c0062caa51",
"packagetype": "sdist",
"python_version": "source",
"requires_python": "<3.13,>=3.10",
"size": 149388,
"upload_time": "2024-12-17T01:36:25",
"upload_time_iso_8601": "2024-12-17T01:36:25.487695Z",
"url": "https://files.pythonhosted.org/packages/0e/07/14dd8bb0e35599193c60e4683856803bf2c78724eabd9f8bed5c18d43a6a/omnipy-0.19.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-17 01:36:25",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "fairtracks",
"github_project": "omnipy",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "omnipy"
}