# eodatasets3

- Name: eodatasets3
- Version: 0.30.5
- Home page: https://github.com/opendatacube/eo-datasets
- Summary: Packaging, metadata and provenance for OpenDataCube EO3 datasets
- Author: Open Data Cube
- Requires Python: >=3.8
- License: Apache Software License 2.0
- Uploaded: 2024-04-04 04:09:18
## EO Datasets

[![Linting](https://github.com/GeoscienceAustralia/eo-datasets/actions/workflows/lint.yml/badge.svg)](https://github.com/GeoscienceAustralia/eo-datasets/actions/workflows/lint.yml)
[![Tests](https://github.com/GeoscienceAustralia/eo-datasets/actions/workflows/test.yml/badge.svg)](https://github.com/GeoscienceAustralia/eo-datasets/actions/workflows/test.yml)
[![Coverage Status](https://img.shields.io/codecov/c/github/GeoscienceAustralia/eo-datasets)](https://app.codecov.io/gh/GeoscienceAustralia/eo-datasets)

A tool to easily write, validate and convert [ODC](https://github.com/opendatacube/datacube-core)
datasets and metadata.


## Installation

    pip install eodatasets3

Python 3.8+ is supported.

## Dataset assembly

The assembler API aims to make it easy to write datasets.

```python
from datetime import datetime
from pathlib import Path

from eodatasets3 import DatasetAssembler

# Placeholder inputs, so this example is self-contained:
source_dataset = Path("/path/to/source/dataset.odc-metadata.yaml")
red_path = Path("/path/to/red_band.tif")

with DatasetAssembler(
    Path("/some/output/collection/path"), naming_conventions="default"
) as p:
    # Add some common metadata fields.
    p.platform = "landsat-7"
    p.instrument = "ETM"
    p.datetime = datetime(2019, 7, 4, 13, 7, 5)
    p.processed_now()

    # Support for custom metadata fields
    p.properties["fmask:cloud_shadow"] = 42.0

    # If you have a source dataset, you can include it as provenance.
    # Assembler can also copy common metadata properties from it.
    # (... so we didn't need to set the "platform" above!)
    p.add_source_path(source_dataset, auto_inherit_properties=True)

    # Write measurements. They can be from numpy arrays, open rasterio datasets,
    # file paths, ODC Datasets...
    p.write_measurement("red", red_path)
    ...  # now write more measurements

    # Create a jpg thumbnail image using the measurements we've written
    p.write_thumbnail(red="swir1", green="swir2", blue="red")

    # Validate the dataset and write it to the destination folder atomically.
    p.done()
```

The Assembler writes a folder of [COG](https://www.cogeo.org/) imagery and an [eo3](#open-data-cube-compatibility)
metadata doc for Open Data Cube, creating the appropriate file and folder structure for the chosen naming conventions.

If you already have existing imagery, you can use DatasetAssembler to create a matching metadata document.
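
A minimal sketch of that metadata-only workflow, using `DatasetPrepare` (the base class that `DatasetAssembler` extends); all paths and property values below are placeholders:

```python
from datetime import datetime
from pathlib import Path

from eodatasets3 import DatasetPrepare

# Placeholder: a folder that already contains imagery.
dataset_location = Path("/existing/imagery/folder")

with DatasetPrepare(metadata_path=dataset_location / "odc-metadata.yaml") as p:
    p.product_family = "level1"
    p.datetime = datetime(2019, 7, 4, 13, 7, 5)
    p.processed_now()

    # Note the existing file in the metadata, rather than writing a copy.
    p.note_measurement("red", "red_band.tif", relative_to_dataset_location=True)

    p.done()
```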

See [the documentation guide for more features and examples](https://eodatasets.readthedocs.io/en/latest/).

## Open Data Cube compatibility

The assembler writes a format called "eo3", which will be the native metadata format for Open Data Cube
2.0. We recommend new products be written in this format, even if targeting Open Data Cube 1:
Datacube versions from 1.8 onwards support eo3 natively.

eo3 adds information about the native grid of the data, and aims to be more easily interoperable
with the upcoming [Stac Item metadata](https://github.com/radiantearth/stac-spec/tree/master/item-spec).
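
As an illustration, an abbreviated eo3 fragment (values invented; not a complete document) showing the native grid recorded once and referenced by each measurement:

```yaml
crs: epsg:32655
grids:
  default:
    shape: [7811, 7691]   # rows, columns
    # Affine transform from pixel to CRS coordinates.
    transform: [30.0, 0.0, 306285.0, 0.0, -30.0, -1802085.0, 0.0, 0.0, 1.0]
measurements:
  red:
    path: red_band.tif    # uses the "default" grid unless another is named
```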

# Other Tools Included

## Validator

`eo3-validate` is a lint-like checker for ODC documents.

Give it ODC documents for your products, types and/or datasets to find
errors quickly.

    ❯ eo3-validate my-product.odc-product.yaml /tmp/path/to/dataset.odc-metadata.yaml
    ❯ eo3-validate https://explorer.dea.ga.gov.au/products/ga_ls_fc_3.odc-product.yaml

Note that documents are processed in order. A product should be specified before its datasets,
so that the validator knows the product before each dataset is checked.

Similarly, a Metadata Type should ideally be specified before its products, so that the type is known.
But all are optional for basic checks.

- **Note:** You can use `--odc` or `--explorer-url` to automatically load a list of types and products, as in the example below.
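
For example, to validate a local dataset against product definitions fetched from an Explorer instance:

    ❯ eo3-validate --explorer-url https://explorer.dea.ga.gov.au/ /tmp/path/to/dataset.odc-metadata.yaml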

### Help text

    ❯ eo3-validate --help
	Usage: eo3-validate [OPTIONS] [PATHS]...

	  Validate ODC dataset documents

	  Paths can be products, dataset documents, or directories to scan (for files
	  matching names '*.odc-metadata.yaml' etc), either local or URLs.

	  Datasets are validated against matching products that have been scanned
	  already, so specify products first, and datasets later, to ensure they can
	  be matched.

	Options:
	  --version                       Show the version and exit.
	  -W, --warnings-as-errors        Fail if any warnings are produced
	  -f, --output-format [plain|quiet|github]
					  Output format  [default: plain]
	  --thorough                      Attempt to read the data/measurements, and
					  check their properties match
	  --explorer-url TEXT             Use product definitions from the given
					  Explorer URL to validate datasets. Eg:
					  "https://explorer.dea.ga.gov.au/"
	  --odc                           Use product definitions from datacube to
					  validate datasets
	  -q, --quiet                     Only print problems, one per line
	  --help                          Show this message and exit.


### eo3-validate GitHub Action (Marketplace)
#### Inputs

- `command`: **Required.** The command to run. Default: `"eo3-validate"`.
- `filepath`: **Required.** The path to the ODC documents. Default: `""`.

#### Example usage

```yaml
uses: actions/eo3-validate@v1
with:
  command: 'eo3-validate'
  filepath: '/'
```

### eo3-validate GitHub Action (Docker)
#### Example usage

```yaml
name: Run eo3-validate
run: |
    docker run -v $PWD/:/code/odc-files opendatacube/eo-datasets:latest eo3-validate ./odc-files
```
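
A fuller workflow sketch wrapping that step (the workflow name, trigger, and checkout step are illustrative):

```yaml
name: Validate ODC documents
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run eo3-validate
        run: |
          docker run -v $PWD/:/code/odc-files opendatacube/eo-datasets:latest eo3-validate ./odc-files
```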


### Disabling warnings

ODC is very configurable, and sometimes the validator will be too strict
for you.

You can ease some restrictions for a product by adding
a `default_allowances` section to the end of your Product definition.

(The same section can also be included in any Metadata Type, and will apply
to all products of that type)

Example supported fields:

```yaml

# Possible restrictions to ease.
# All fields can be omitted (please don't include them if you don't need them).
default_allowances:

  # "It's okay if some datasets to have null geometry."
  require_geometry: false

  # Allow some metadata fields (from the metadata type) to be null
  # (but they must still be in each dataset document)
  allow_nullable_fields:
    - dataset_maturity

  # Allow some metadata fields to be entirely missing from datasets
  allow_missing_fields:
    - sentinel_product_name
    - s2cloudless_clear

  # Allow datasets to have extra measurements that weren't in the product definition.
  allow_extra_measurements: [nbar_blue, nbar_green, nbar_red]
```

### 'Thorough' mode

By default, the validator will only look at the given metadata document(s).

But when the `--thorough` flag is given, the validator will attempt to read any
referenced imagery, checking that their properties match the definitions (nodata, dtype, etc.).
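
For example, using the documents from earlier:

    ❯ eo3-validate --thorough my-product.odc-product.yaml /tmp/path/to/dataset.odc-metadata.yaml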

## Stac metadata conversion

`eo3-to-stac`: Convert an EO3 metadata doc to a Stac Item

	❯ eo3-to-stac --help
	Usage: eo3-to-stac [OPTIONS] [ODC_METADATA_FILES]...

	  Convert an EO3 metadata doc to a Stac Item.

	Options:
	  -v, --verbose
	  -u, --stac-base-url TEXT      Base URL of the STAC file
	  -e, --explorer-base-url TEXT  Base URL of the ODC Explorer
	  --validate / --no-validate    Validate output STAC Item against online
					schemas

	  --help                        Show this message and exit.


Example usage:

	❯ eo3-to-stac LT05_L1TP_113081_19880330_20170209_01_T1.odc-metadata.yaml
	❯ ls
	LT05_L1TP_113081_19880330_20170209_01_T1.odc-metadata.yaml
	LT05_L1TP_113081_19880330_20170209_01_T1.stac-item.json
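
Options can be combined; for example, to validate the output Item against the online STAC schemas while setting a base URL (the URL below is a placeholder):

	❯ eo3-to-stac --validate -u https://example.com/datasets/ LT05_L1TP_113081_19880330_20170209_01_T1.odc-metadata.yaml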

## Prep Scripts

Some scripts are included for preparing common metadata documents,
such as Landsat scenes.

`eo3-prepare`: Prepare ODC metadata from the commandline.

Some sub-commands need the ancillary dependencies for reading
exotic formats: `pip install .[ancillary]`

	❯ eo3-prepare --help
	Usage: eo3-prepare [OPTIONS] COMMAND [ARGS]...

	Options:
	  --version  Show the version and exit.
	  --help     Show this message and exit.

	Commands:
	  landsat-l1     Prepare eo3 metadata for USGS Landsat Level 1 data.
	  modis-mcd43a1  Prepare MODIS MCD43A1 tiles for indexing into a Data...
	  noaa-prwtr     Prepare NCEP/NCAR reanalysis 1 water pressure datasets...
	  sentinel-l1    Prepare eo3 metadata for Sentinel-2 Level 1C data produced...

Prep scripts have their own options; for example, Sentinel L1 generation can filter by time
or region if the inputs follow a common directory structure:

```
❯ eo3-prepare sentinel-l1 --help
Usage: eo3-prepare sentinel-l1 [OPTIONS] [DATASETS]...

  Prepare eo3 metadata for Sentinel-2 Level 1C data produced by Sinergise or
  ESA.

  Takes ESA zipped datasets or Sinergise dataset directories

Options:
  -v, --verbose
  -f, --datasets-path FILE        A file to read input dataset paths from, one
                                  per line
  -j, --jobs INTEGER              Number of workers to run in parallel
  --overwrite-existing / --skip-existing
                                  Overwrite if exists (otherwise skip)
  --embed-location / --no-embed-location
                                  Embed the location of the dataset in the
                                  metadata? (if you wish to store them
                                  separately. default: auto)
  --always-granule-id / --never-granule-id
                                  Include the granule id in metadata
                                  filenames? (default: auto -- include only
                                  for multi-granule files). Beware that multi-
                                  granule datasets without a granule id in the
                                  filename will overwrite each-other
  --throughly-check-existing / --cheaply-check-existing
                                  Should we open every dataset to check if
                                  *all* inner granules have been produced?
                                  Default: false.
  --provider [sinergise.com|esa.int]
                                  Restrict scanning to only packages of the
                                  given provider. (ESA assumes a zip file,
                                  sinergise a directory)
  --output-base DIRECTORY         Write metadata files into a directory
                                  instead of alongside each dataset
  --input-relative-to DIRECTORY   Input root folder that should be used for
                                  the subfolder hierarchy in the output-base
  --only-regions-in-file FILE     Only process datasets in the given regions.
                                  Expects a file with one region code per
                                  line. (Note that some older ESA datasets
                                  have no region code, and will not match any
                                  region here.)
  --after-month YEAR-MONTH        Limit the scan to datasets newer than a
                                  given month (expressed as {year}-{month}, eg
                                  '2010-01')
  --before-month YEAR-MONTH       Limit the scan to datasets older than the
                                  given month (expressed as {year}-{month}, eg
                                  '2010-01')
  --index                         Index newly-generated metadata into the
                                  configured datacube
  --dry-run                       Show what would be created, but don't create
                                  anything
  -E, --env TEXT
  -C, --config, --config_file TEXT
  --help                          Show this message and exit.
```

An example of preparing metadata in a separate directory (not alongside the datasets) at NCI
is as follows:

```bash
module use -a /g/data/v10/private/modules/modulefiles /g/data/v10/public/modules/modulefiles
module load eodatasets3

# With a folder of input paths, 4 workers, and separate output directory:
eo3-prepare sentinel-l1 -j 4 --output-base /output/metadata/directory \
  /g/data/fj7/Copernicus/Sentinel-2/MSI/L1C/2021

# Using a file for input paths. Filter them to a certain region list and recent months:
eo3-prepare sentinel-l1 \
    --output-base /g/data/v10/agdc/jez/c3/L1C  \
    --only-regions-in-file test-regions.txt \
    --after-month 2022-04 \
    -f l1cs-2022-05-02.txt
```
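
The file passed to `--only-regions-in-file` is plain text with one region code per line; for Sentinel-2 these are MGRS tile codes. A hypothetical `test-regions.txt`:

```
55HFA
55HFB
56JKT
```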


`eo3-package-wagl`: Convert and package WAGL HDF5 outputs.

Needs the wagl dependencies group: `pip install .[wagl]`

	❯ eo3-package-wagl --help
	Usage: eo3-package-wagl [OPTIONS] H5_FILE

	  Package WAGL HDF5 Outputs

	  This will convert the HDF5 file (and sibling fmask/gqa files) into
	  GeoTIFFS (COGs) with datacube metadata using the DEA naming conventions
	  for files.

	Options:
	  --level1 FILE                   Optional path to the input level1 metadata
					  doc (otherwise it will be loaded from the
					  level1 path in the HDF5)

	  --output DIRECTORY              Put the output package into this directory
					  [required]

	  -p, --product [nbar|nbart|lambertian|sbt]
					  Package only the given products (can specify
					  multiple times)

	  --with-oa / --no-oa             Include observation attributes (default:
					  true)

	  --oa-resolution FLOAT           Resolution choice for observation attributes
					  (default: automatic based on sensor)

	  --help                          Show this message and exit.


# Development Setup

Run the tests using [pytest](http://pytest.org/).

	❯ pytest

You may need to install test dependencies first:

	❯ pip install -e .[test]

Dependencies such as GDAL can be tricky to install on some systems. You
may prefer to use the included Dockerfile for development: run `make
build` to create a container, and `make test` to run the tests.

We have strict linting and formatting checks on this repository, so
please run pre-commit (below) after checkout.

## Pre-commit setup

	❯ pip install pre-commit
	❯ pre-commit install

(If you are using Conda, install it with `conda install pre_commit` instead of pip.)

Your code will now be formatted and validated before each commit. You can also invoke the checks manually with `pre-commit run` (or `pre-commit run --all-files` to check every file).

This allows you to immediately catch and fix issues before you raise a pull request that fails.

Most notably, all code is formatted using
[black](https://github.com/ambv/black), and checked with
[pyflakes](https://github.com/PyCQA/pyflakes).


## Docker dependencies

To update the set of frozen dependencies inside docker:

1) Ensure you have an existing, working image, i.e. run `make build`. If the current image is broken,
you may need to `git checkout` the last working version first.
2) Run `make dependency-update`. This will recalculate the list of all dependencies from the definitions in setup.py.

Note that this will run pip-compile _inside_ the docker container for maximum compatibility, hence the need for an existing container.
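
In practice the sequence is just the two targets in order:

	❯ make build
	❯ make dependency-update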

## Creating Releases

First, draft [some release notes](https://github.com/GeoscienceAustralia/eo-datasets/releases)
for users of the library.

Now tag and upload:

```bash
# Be up-to-date.
git fetch origin

# Create a tag for the new version
# (using semantic versioning https://semver.org/)
git tag eodatasets3-<version> origin/eodatasets3

# Create package
python3 setup.py sdist bdist_wheel

# Upload it (Jeremy, Damien, Kirill have pypi ownership)
python3 -m twine upload  dist/*

# Push tag to main repository
git push origin --tags
```

            
