geo-inference


Name: geo-inference
Version: 3.1.1
home_page: None
Summary: Extract features from geospatial imagery using deep learning models
upload_time: 2024-12-04 17:54:37
maintainer: None
docs_url: None
author: None
requires_python: >=3.9
license: Open Government License - Canada

Copyright (c) His Majesty the King in Right of Canada, as represented by the Minister of Natural Resources, 2024

You are encouraged to use the Information that is available under this licence with only a few conditions.

Using Information under this licence
- Use of any Information indicates your acceptance of the terms below.
- The Information Provider grants you a worldwide, royalty-free, perpetual, non-exclusive licence to use the Information, including for commercial purposes, subject to the terms below.

You are free to:
- Copy, modify, publish, translate, adapt, distribute or otherwise use the Information in any medium, mode or format for any lawful purpose.

You must, where you do any of the above:
- Acknowledge the source of the Information by including any attribution statement specified by the Information Provider(s) and, where possible, provide a link to this licence.
- If the Information Provider does not provide a specific attribution statement, or if you are using Information from several information providers and multiple attributions are not practical for your product or application, you must use the following attribution statement: Contains information licensed under the Open Government Licence – Canada.

The terms of this licence are important, and if you fail to comply with any of them, the rights granted to you under this licence, or any similar licence granted by the Information Provider, will end automatically.

Exemptions
This licence does not grant you any right to use:
- Personal Information;
- third party rights the Information Provider is not authorized to license;
- the names, crests, logos, or other official symbols of the Information Provider; and
- Information subject to other intellectual property rights, including patents, trade-marks and official marks.

Non-endorsement
This licence does not grant you any right to use the Information in a way that suggests any official status or that the Information Provider endorses you or your use of the Information.

No Warranty
The Information is licensed “as is”, and the Information Provider excludes all representations, warranties, obligations, and liabilities, whether express or implied, to the maximum extent permitted by law. The Information Provider is not liable for any errors or omissions in the Information, and will not under any circumstances be liable for any direct, indirect, special, incidental, consequential, or other loss, injury or damage caused by its use or otherwise arising in connection with this licence or the Information, even if specifically advised of the possibility of such loss, injury or damage.

Governing Law
This licence is governed by the laws of the province of Ontario and the applicable laws of Canada. Legal proceedings related to this licence may only be brought in the courts of Ontario or the Federal Court of Canada.

Definitions
In this licence, the terms below have the following meanings:
"Information" means information resources protected by copyright or other information that is offered for use under the terms of this licence.
"Information Provider" means His Majesty the King in right of Canada.
“Personal Information” means “personal information” as defined in section 3 of the Privacy Act, R.S.C. 1985, c. P-21.
"You" means the natural or legal person, or body of persons corporate or incorporate, acquiring rights under this licence.

Versioning
This is version 2.0 of the Open Government Licence – Canada. The Information Provider may make changes to the terms of this licence from time to time and issue a new version of the licence. Your use of the Information will be governed by the terms of the licence in force as of the date you accessed the information.

keywords: pytorch, deep learning, machine learning, remote sensing, satellite imagery, earth observation, geospatial
requirements: torchgeo>=0.5.2, affine>=2.4.0, colorlog==6.7.0, scipy>=1.13.1, pyyaml>=5.2, pynvml>=11.0, geopandas>=0.14.4, dask-image>=2024.5.3, dask>=2024.6.2, requests>=2.32.3, xarray>=2024.6.0, pystac>=1.10.1, rioxarray>=0.15.6, ttach>=0.0.3
# Geo Inference

[![PyPI - Version](https://img.shields.io/pypi/v/geo-inference)](https://pypi.org/project/geo-inference/)
[![Codecov](https://img.shields.io/codecov/c/github/valhassan/geo-inference)](https://app.codecov.io/github/valhassan/geo-inference)
[![tests](https://github.com/valhassan/geo-inference/actions/workflows/test.yml/badge.svg)](https://github.com/valhassan/geo-inference/actions/workflows/test.yml)





geo-inference is a Python package designed for feature extraction from geospatial imagery using compatible deep learning models. It provides a convenient way to extract features from large TIFF images and save the output mask as a TIFF file. It also supports converting the output mask to vector format (*file_name.geojson*), YOLO format (*file_name.csv*), and COCO format (*file_name.json*). This package is particularly useful for applications in remote sensing, environmental monitoring, and urban planning.

## Installation

Geo-inference requires Python 3.11 (the published package metadata allows Python >= 3.9).

### Linux Installation  
To install the package, use:

```
pip install geo-inference
```  
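To confirm the installation, you can query the installed distribution's version with the standard library (a minimal check; the version printed is whatever pip resolved):

```python
from importlib.metadata import version

# Print the installed geo-inference version to confirm the package is present
print(version("geo-inference"))
```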

### Windows Installation
Setting up CUDA-enabled Geo-inference on Windows differs slightly from the Linux procedure.

- Validate the NVIDIA CUDA version installed on your computer by running `nvcc --version`:
```shell
PS C:\> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:28:36_Pacific_Standard_Time_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
```
> Note: If the command returns an error, you need to download and install the NVIDIA CUDA toolkit first from https://developer.nvidia.com/cuda-downloads.

- Install a CUDA-enabled build of PyTorch following one of the methods suggested here: https://pytorch.org/get-started/locally/.
> Note: Make sure to select the CUDA version matching the driver installed on your computer.
- Test the installation:  
```shell
PS C:\> python
>>> import torch
>>> torch.cuda.is_available()
True
```  
- Install geo-inference using `pip`:
```
pip install geo-inference
```
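Once PyTorch and geo-inference are installed, you can optionally confirm which CUDA build of PyTorch is active and which GPU it will use (these are standard `torch` attributes; output varies by machine):

```python
import torch

# CUDA version the installed PyTorch build was compiled against (None for CPU-only builds)
print(torch.version.cuda)

# Name of the default GPU, if PyTorch can see one
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```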

### Docker installation
Alternatively, you can build a Docker image from the provided [Dockerfile](./Dockerfile) to use Geo-Inference.

## Usage

**Input:** GeoTIFFs paired with a compatible TorchScript model. For example, a PyTorch model trained on high-resolution geospatial imagery with the following characteristics:

- pixel size (0.1 m to 3 m)
- data type (uint8)

expects input images with the same characteristics. An example notebook showing how the package is used is provided in this repo.
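Before running inference, it can help to verify that your image matches the pixel size and data type the model expects. A minimal sketch using rioxarray, which geo-inference already depends on (the path is a placeholder):

```python
import rioxarray

# Open the GeoTIFF lazily and inspect its properties
img = rioxarray.open_rasterio("/path/to/image.tif")

print(img.dtype)             # e.g. uint8, matching the example model above
print(img.rio.resolution())  # (x_res, y_res) in the image's CRS units
print(img.rio.crs)           # coordinate reference system
```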


*Here's an example of how to use Geo Inference (Command line and Script):*

**Command line**
```bash
geo_inference -a <args>
```
- `-a`, `--args`: Path to arguments stored in a YAML file; consult ./config/sample_config.yaml
```bash
geo_inference -i <image> -br <bands_requested> -m <model> -wd <work_dir> -ps <patch_size> -v <vec> -d <device> -id <gpu_id> -cls <classes> -mg <mgpu> -pr <pr_thr>
```
- `-i`, `--image`: Path to GeoTIFF
- `-bb`, `--bbox`: AOI bbox in the format "minx, miny, maxx, maxy" (Optional)
- `-br`, `--bands_requested`: The requested bands from the provided GeoTIFF (if not provided, all bands are used)
- `-m`, `--model`: Path or URL to the model file
- `-wd`, `--work_dir`: Working directory
- `-ps`, `--patch_size`: The patch size (the size of Dask chunks), Default = 1024
- `-w`, `--workers`: Number of workers used by Dask, Default = number of cores available on the host, minus 1
- `-v`, `--vec`: Vector conversion
- `-y`, `--yolo`: YOLO conversion
- `-c`, `--coco`: COCO conversion
- `-d`, `--device`: CPU or GPU device
- `-id`, `--gpu_id`: GPU ID, Default = 0
- `-cls`, `--classes`: The number of classes that the model outputs, Default = 5
- `-mg`, `--mgpu`: Whether to use multi-GPU processing, Default = False
- `-pr`, `--prediction_thr`: Prediction probability threshold (fraction of 1) to use, Default = 0.3
- `-tr`, `--transformers`: Allow test-time augmentations
- `-tr_f`, `--transformer_flip`: Perform horizontal and vertical flips
- `-tr_e`, `--transformer_rotate`: Perform 90-degree rotation


You can also use the `-h` option to get a list of supported arguments:

```bash
geo_inference -h
```

**Import script**
```python
from geo_inference.geo_inference import GeoInference

# Initialize the GeoInference object
geo_inference = GeoInference(
    model="/path/to/segformer_B5.pt",
    work_dir="/path/to/work/dir",
    mask_to_vec=False,
    mask_to_yolo=False,
    mask_to_coco=False, 
    device="gpu",
    multi_gpu=False,
    gpu_id=0, 
    num_classes=5,
    prediction_threshold=0.3,
    transformers=True,
    transformer_flip=False,
    transformer_rotate=True,
)

# Perform feature extraction on a TIFF image
image_path = "/path/to/image.tif"
bands_requested = "1,2,3"
patch_size = 1024
workers = 0
bbox = "0, 0, 1000, 1000"
geo_inference(
    inference_input = image_path,  
    bands_requested = bands_requested, 
    patch_size = patch_size, 
    workers = workers, 
    bbox=bbox
)
```

## Parameters

Instantiating the `GeoInference` class takes the following parameters:

- `model`: The path or URL to the model file (.pt for PyTorch models) to use for feature extraction.
- `work_dir`: The path to the working directory. Default is `"~/.cache"`.
- `mask_to_vec`: If set to `True`, vector data will be created from the mask. Default is `False`.
- `mask_to_yolo`: If set to `True`, vector data will be converted to YOLO format. Default is `False`.
- `mask_to_coco`: If set to `True`, vector data will be converted to COCO format. Default is `False`.
- `device`: The device to use for feature extraction. Can be `"cpu"` or `"gpu"`. Default is `"gpu"`.
- `multi_gpu`: If set to `True`, uses multiple GPUs for running the inference. Default is `False`.
- `gpu_id`: The ID of the GPU to use for feature extraction. Default is `0`.
- `num_classes`: The number of classes that the TorchScript model outputs. Default is `5`.
- `prediction_threshold`: Prediction probability threshold (fraction of 1) to use. Default is `0.3`.
- `transformers`: Allow test-time augmentations.
- `transformer_flip`: Perform horizontal and vertical flips.
- `transformer_rotate`: Perform 90-degree rotation.

Calling the `GeoInference` object takes the following parameters (a combined sketch follows this list):
- `inference_input`: Path to GeoTIFF.
- `bands_requested`: The requested bands from the provided GeoTIFF (if not provided, all bands are used).
- `patch_size`: The patch size to use for feature extraction. Default is `1024`.
- `workers`: Number of workers used by Dask. Default is `0`, which uses the number of cores available on the host, minus 1.
- `bbox`: AOI bbox in the format "minx, miny, maxx, maxy", in the image's CRS. Default is `None`.
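Putting both parameter sets together, here is a minimal CPU-only sketch that leans on the defaults listed above (paths are placeholders; the remaining keyword arguments are assumed to fall back to the documented defaults):

```python
from geo_inference.geo_inference import GeoInference

# Instantiate with mostly default parameters; run on CPU and also request vector output
geo_inference = GeoInference(
    model="/path/to/model.pt",     # placeholder TorchScript model
    work_dir="/path/to/work/dir",
    mask_to_vec=True,              # also write the vectorized polygons
    device="cpu",
)

# Run inference on all bands with the default patch size
geo_inference(
    inference_input="/path/to/image.tif",
    patch_size=1024,
)
```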


## Output

The `GeoInference` class outputs the following files:

- `mask.tif`: The output mask file in TIFF format.
- `polygons.geojson`: The output polygon file in GeoJSON format. This file is only generated if the `mask_to_vec` parameter is set to `True`.
- `yolo.csv`: The output YOLO file in CSV format. This file is only generated if both the `mask_to_vec` and `mask_to_yolo` parameters are set to `True`.
- `coco.json`: The output COCO file in JSON format. This file is only generated if both the `mask_to_vec` and `mask_to_coco` parameters are set to `True`.

Each file contains the extracted features from the input geospatial imagery.
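To work with the results downstream, the outputs can be read back with libraries geo-inference already depends on. A minimal sketch, assuming the files were written with the names above into your working directory:

```python
import geopandas as gpd
import rioxarray

work_dir = "/path/to/work/dir"  # placeholder; same directory passed as work_dir

# Load the raster mask produced by inference
mask = rioxarray.open_rasterio(f"{work_dir}/mask.tif")
print(mask.shape, mask.dtype)

# Load the vectorized polygons (only present when mask_to_vec=True)
polygons = gpd.read_file(f"{work_dir}/polygons.geojson")
print(polygons.head())
```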

## License

Geo Inference is released under the Open Government License - Canada. See [`LICENSE`](https://github.com/NRCan/geo-inference/blob/main/LICENSE) for more information.

## Contact

For any questions or concerns, please open an issue on GitHub.

            
