data-harvesting

Name: data-harvesting
Version: 1.1.0
Home page: https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting
Summary: Set of tools to harvest, process and uplift (meta)data from metadata providers within the Helmholtz association to be included in the Helmholtz Knowledge Graph (Helmholtz-KG). The harvested linked data in the form of schema.org JSON-LD is aggregated and uplifted in data pipelines to be included into a single large knowledge graph (KG).
Upload time: 2023-10-31 13:02:02
Maintainer: Jens Bröder
Author: Jens Bröder
Requires Python: >=3.9,<4.0
License: MIT
Keywords: unhide, Helmholtz association, data mining, HMC, metadata, data publications, software publication, RSE, FAIR, linked data, knowledge graph, JSON-LD, schema.org, restruct
[![MIT license](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)
# Data Harvesting

This repository contains harvesters and aggregators for linked data, together with tools around them.
The software harvests small subgraphs exposed by certain sources on the web and
enriches them so that they can be combined into a single larger linked data graph.

This software was written for, and is currently mainly deployed as, part of the backend of the unified Helmholtz Information and Data Exchange (unHIDE) project by the Helmholtz Metadata Collaboration (HMC). It creates a knowledge graph for the Helmholtz association, which allows one to monitor, check, and enrich metadata as well as
identify gaps and needs.

Contributions of any kind by you are always welcome!

## Approach

We establish data pipelines for data providers that expose linked metadata and complement the harvested data by combining it with other sources. For the unHIDE project this data is annotated with schema.org semantics and serialized mainly as JSON-LD.
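
For instance, a minimal schema.org record in JSON-LD, of the kind such pipelines collect (the values here are purely illustrative), looks like this:

```
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "@id": "https://example.org/dataset/1",
  "name": "Example dataset",
  "creator": {"@type": "Person", "name": "Jane Doe"}
}
```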

Data pipelines contain code to execute harvesting from a local to a global level.
They are exposed through a command line interface (CLI) and are thus easily integrated into a cron job, so they can be used to stream data into a data ecosystem on a time-interval basis.

Data harvester pipelines so far:
- gitlab pipeline: harvests all public projects on Helmholtz GitLab instances and extracts and complements `codemeta.jsonld` files (todo: extend to GitHub)
- sitemap pipeline: extracts JSON-LD metadata from a data provider via its sitemap, which contains links to the data entries and the times they were last updated (a minimal sketch follows this list)
- oai-pmh pipeline: extracts metadata from a data provider via OAI-PMH endpoints, which list the entries and the times they were last updated. This pipeline uses a converter from Dublin Core to schema.org, since many providers expose only Dublin Core so far.
- datacite pipeline: extracts JSON-LD metadata from datacite.org connected to a given organization identifier
- scholix pipeline (todo): extracts links and related resources for a list of given PIDs of any kind
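
To make this concrete, here is a minimal, self-contained sketch of the sitemap pipeline's idea. It is not the project's actual implementation; it assumes `requests` is installed and that entries embed their metadata in `<script type="application/ld+json">` tags:

```
# Illustrative sketch of the sitemap-pipeline idea, NOT the project's
# actual implementation: read a sitemap, fetch every listed entry and
# pull embedded schema.org JSON-LD out of the HTML.
import json
import xml.etree.ElementTree as ET

import requests

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"


def harvest_sitemap(sitemap_url: str) -> list:
    """Return all JSON-LD blocks found behind the <url> entries of a sitemap."""
    root = ET.fromstring(requests.get(sitemap_url, timeout=30).content)
    records = []
    for url_node in root.iter(SITEMAP_NS + "url"):
        loc = url_node.findtext(SITEMAP_NS + "loc")
        if loc is None:
            continue
        html = requests.get(loc, timeout=30).text
        # Crude extraction of <script type="application/ld+json"> blocks;
        # a real harvester would use a proper HTML parser.
        for chunk in html.split('<script type="application/ld+json">')[1:]:
            try:
                records.append(json.loads(chunk.split("</script>")[0]))
            except json.JSONDecodeError:
                pass  # skip malformed blocks
    return records
```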

Besides the harvesters there are aggregators, which allow one to specify how linked data should be processed while tracking the provenance of the processing in a reversible way. This is done by storing graph updates, so-called patches, for each subgraph. These updates can then also be applied directly to a graph database. Processing changes can be provided as SPARQL updates or through Python functions with a specific interface.
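
As a minimal sketch of the patch idea, assuming `rdflib` (the project's actual patch format and interface may differ), one can apply a SPARQL update while recording exactly which triples it added and removed, so the change stays reversible:

```
# Hedged sketch of reversible graph updates ("patches"), assuming rdflib;
# the project's actual patch handling may differ.
from rdflib import Graph


def apply_tracked_update(graph: Graph, sparql_update: str):
    """Apply a SPARQL update; return the (added, removed) triple sets."""
    before = set(graph)
    graph.update(sparql_update)
    after = set(graph)
    return after - before, before - after


g = Graph()
g.parse(data="""
    @prefix schema: <http://schema.org/> .
    <https://example.org/dataset/1> a schema:Dataset .
""", format="turtle")

added, removed = apply_tracked_update(
    g,
    'PREFIX schema: <http://schema.org/> '
    'INSERT { ?d schema:name "Example" } WHERE { ?d a schema:Dataset }',
)
# Deleting `added` and re-inserting `removed` would undo the patch.
```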

All harvesters and aggregators read from a single config file (as an example see [configs/config.yaml](https://codebase.helmholtz.cloud/hmc/hmc-public/unhide/data_harvesting/-/blob/dev/data_harvesting/configs/config.yaml)), which contains all sources and their specific operations.

## Documentation

Currently, documentation exists only in the code. In the future it will live under the `docs` folder and be hosted somewhere.

## Installation

```
git clone git@codebase.helmholtz.cloud:hmc/hmc-public/unhide/data_harvesting.git
cd data_harvesting
pip install .
```
For a development install, use
```
pip install -e .
```
You can also set up the project using poetry instead of pip.
```
poetry install --with dev
```

The individual pipelines have further dependencies outside of Python.

For example, the gitlab pipeline relies on the [codemeta-harvester](https://github.com/proycon/codemeta-harvester).

## How to use this

For examples, look at the `examples` folder; the tests in the `tests` folder may also provide some insight.
Once installed, there is also a command line interface (CLI), `hmc-unhide`. For example, one can execute the gitlab pipeline via:

```
hmc-unhide harvester run --name gitlab --out ~/work/data/gitlab_pipeline
```

Furthermore, the CLI exposes some other utilities on the command line, for example to convert linked data files
into different formats.
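
Conceptually, such a conversion just re-serializes the same triples in another syntax. A minimal sketch with `rdflib` (an assumption here; the file names are hypothetical, and the actual subcommands are listed in the CLI's help output):

```
# What a linked-data format conversion does, sketched with rdflib
# (an assumption; file names are hypothetical, the real CLI may differ).
from rdflib import Graph

g = Graph()
g.parse("record.jsonld", format="json-ld")  # read schema.org JSON-LD
g.serialize("record.ttl", format="turtle")  # write the same triples as Turtle
```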

## License

The software is distributed under the terms and conditions of the MIT license, which is specified in the `LICENSE` file.

## Acknowledgement

This project was supported by the Helmholtz Metadata Collaboration (HMC), an incubator-platform of the Helmholtz Association within the framework of the Information and Data Science strategic initiative.