# articat
[![CI](https://github.com/related-sciences/articat/actions/workflows/build.yml/badge.svg?branch=main)](https://github.com/related-sciences/articat/actions/workflows/build.yml)
[![PYPI](https://img.shields.io/pypi/v/articat.svg)](https://pypi.org/project/articat/)
Minimal metadata catalog to store and retrieve metadata about data artifacts.
## Getting started
At a high level, *articat* is simply a key-value store: the value is the Artifact metadata,
and the key, a.k.a. the "Artifact Spec", consists of:
* globally unique `id`
* optional timestamp: `partition`
* optional arbitrary string: `version`
To publish a file system Artifact (`FSArtifact`):
```python
from articat import FSArtifact
from pathlib import Path
from datetime import date
# Apart from being metadata containers, Artifact classes provide optional
# convenience methods to help in the data publishing flow:
with FSArtifact.partitioned("foo", partition=date(1643, 1, 4)) as fsa:
    # To create a new Artifact, always use the `with` statement and
    # either the `partitioned` or `versioned` method:
    # * `partitioned(...)`, for Artifacts with an explicit `datetime` partition
    # * `versioned(...)`, for Artifacts with an explicit `str` version

    # Next we produce some local data; this could be a Spark job,
    # an ML model etc.
    data_path = Path("/tmp/data")
    data_path.write_text("42")

    # Now let's stage that data; temporary and final data directories/buckets
    # are configurable (see below)
    fsa.stage(data_path)

    # Additionally, let's provide a description; here we could also
    # save extra arbitrary metadata like model metrics, hyperparameters etc.
    fsa.metadata.description = "Answer to the Ultimate Question of Life, the Universe, and Everything"
```
To retrieve the metadata about the Artifact above:
```python
from articat.fs_artifact import FSArtifact
from datetime import date
from pathlib import Path
# To retrieve the metadata, use the Artifact class and the `fetch` method:
fsa = FSArtifact.partitioned("foo", partition=date(1643, 1, 4)).fetch()

fsa.id  # "foo"
fsa.created  # <CREATION-TIMESTAMP>
fsa.partition  # <PARTITION-TIMESTAMP>
fsa.metadata.description  # "Answer to the Ultimate Question of Life, the Universe, and Everything"
fsa.main_dir  # data directory, where the data was stored after staging
Path(fsa.joinpath("data")).read_text()  # "42"
```
## Features
* store and retrieve metadata about your data artifacts
* no long running services (low maintenance)
* built-in data publishing utils
* IO/data format agnostic
* immutable metadata
* development mode
## Artifact flavours
Currently available Artifact flavours:
* `FSArtifact`: metadata/utils for files or objects (supports: local FS, GCS, S3 and more)
* `BQArtifact`: metadata/utils for BigQuery tables
* `NotebookArtifact`: metadata/utils for Jupyter Notebooks
## Development mode
To ease development of Artifacts, *articat* supports a development (dev) mode.
A development Artifact can be indicated by the `dev` parameter (preferred) or
a `_dev` prefix in the Artifact `id`. Dev mode supports:
* overwriting Artifact metadata
* separate configurable locations (e.g. `dev_prefix` for `FSArtifact`), with
  potentially different retention periods etc.
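
As a rough illustration of the naming convention described above (the actual check lives inside articat, and `is_dev_artifact` is a hypothetical helper written just for this sketch):

```python
def is_dev_artifact(artifact_id: str, dev: bool = False) -> bool:
    """Hypothetical helper mirroring the convention above: an Artifact is
    in dev mode via the `dev` flag (preferred) or a `_dev` id prefix."""
    return dev or artifact_id.startswith("_dev")


print(is_dev_artifact("_dev_foo"))       # True: id prefix marks it as dev
print(is_dev_artifact("foo", dev=True))  # True: explicit dev flag
print(is_dev_artifact("foo"))            # False: regular production Artifact
```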
## Backends
* `local`: mostly for testing/demo; metadata is stored locally (configurable, default: `~/.config/articat/local`)
* `gcp_datastore`: metadata is stored in Google Cloud Datastore
## Configuration
*articat* configuration can be provided via the API or configuration files. By default, configuration
is loaded from `~/.config/articat/articat.cfg` and `articat.cfg` in the current working directory. You
can also point at a configuration file via the `ARTICAT_CONFIG` environment variable.
`local` mode works without a configuration file. Available options:
```toml
[main]
# local or gcp_datastore, default: local
# mode =
# local DB directory, default: ~/.config/articat/local
# local_db_dir =
[fs]
# temporary directory/prefix
# tmp_prefix =
# development data directory/prefix
# dev_prefix =
# production data directory/prefix
# prod_prefix =
[gcp]
# GCP project
# project =
[bq]
# development data BigQuery dataset
# dev_dataset =
# production data BigQuery dataset
# prod_dataset =
```
## Our/example setup
Below is a diagram of our setup. Articat is just one piece of our system and solves a specific problem; this should give you an idea of where it might fit into your environment:
<p align="center">
<img src="https://docs.google.com/drawings/d/1wll4Q_PlKGHVu-C2IN8jUIxzFTD8jwFWnvwgFrvq2ls/export/png" alt="Our setup diagram"/>
</p>