# metnet3

- **Name**: metnet3
- **Version**: 0.0.3
- **Summary**: Metnet - Pytorch
- **Home page**: https://github.com/kyegomez/metnet3
- **Author**: Kye Gomez
- **Requires Python**: >=3.9,<4.0
- **License**: MIT
- **Keywords**: artificial intelligence, deep learning, optimizers, prompt engineering
- **Upload time**: 2023-11-04 20:08:59
            [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# Metnet3
PyTorch implementation of the `MetNet-3` model, which combines a U-Net backbone and a modified MaxVit transformer with topographical embeddings.

# Install
`pip install metnet3`


## Usage
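The package does not yet document its public API here, so the snippet below is only a minimal sketch of how a top-level model might be driven. The `MetNet3` class name, its constructor arguments, and the forward signature are assumptions chosen to mirror the inputs described in the Architecture Overview; check the repository source for the actual interface.

```python
import torch

# Hypothetical import: the real module path and class name may differ.
from metnet3 import MetNet3

# All constructor arguments below are illustrative assumptions, not a documented API.
model = MetNet3(
    dim=64,                # base feature width (assumption)
    hrrr_channels=617,     # dense HRRR assimilation channels (from the specs below)
    sparse_channels=30,    # sparse station-observation channels (assumption)
)

# High-resolution input: 624 x 624 pixels at 4 km resolution; the channel count
# of 793 is taken from the flow diagram below.
hires = torch.randn(1, 793, 624, 624)

# Low-resolution input at 8 km resolution; the shape here is an assumption.
lores = torch.randn(1, 793, 624, 624)

# Forecast lead time in hours (assumption about how conditioning is passed in).
lead_time = torch.tensor([6])

with torch.no_grad():
    predictions = model(hires, lores, lead_time)  # hypothetical forward signature
```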


## Architecture Overview

MetNet-3 is a neural network designed to predict spatial weather patterns with high precision. The model combines topographical embeddings, a U-Net backbone, and a modified MaxVit transformer to capture long-range dependencies, and has a total of 227 million trainable parameters.

### Topographical Embeddings

Leveraging a grid of trainable embeddings, MetNet-3 can automatically learn and utilize topographical features relevant to weather forecasting. Each grid point, spaced with a stride of 4 km, is associated with 20 parameters. These embeddings are then bilinearly interpolated for each input pixel, enabling the network to effectively encode the underlying geography for each data point.
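As a rough illustration (not the package's exact implementation), the embedding grid can be modelled as a single learnable tensor with 20 channels and one entry per 4 km grid point; it is bilinearly resampled onto the pixel grid of whatever input it accompanies and concatenated channel-wise:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopographicalEmbedding(nn.Module):
    """Sketch: one trainable 20-dimensional vector per 4 km grid point."""

    def __init__(self, grid_h: int, grid_w: int, dim: int = 20):
        super().__init__()
        self.grid = nn.Parameter(torch.randn(1, dim, grid_h, grid_w) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bilinearly interpolate the embedding grid to the input's pixel grid,
        # then concatenate it to the input channel-wise.
        emb = F.interpolate(self.grid, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return torch.cat([x, emb.expand(x.shape[0], -1, -1, -1)], dim=1)

# Embedding grid at 4 km stride covering the 2496 km context (624 x 624 points).
topo = TopographicalEmbedding(grid_h=624, grid_w=624, dim=20)

# The same grid can be resampled onto inputs at other resolutions,
# e.g. an 8 km input of 312 x 312 pixels (channel count here is illustrative).
x = torch.randn(1, 32, 312, 312)
print(topo(x).shape)  # torch.Size([1, 52, 312, 312])
```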

### Model Diagram

MetNet-3 ingests both high-resolution (2496 km × 2496 km at 4 km resolution) and low-resolution (4992 km × 4992 km at 8 km resolution) spatial inputs and processes them through the layers and operations depicted in the following ASCII flow diagram:

```
Input Data
   │
   │ High-resolution inputs
   │ concatenated with current time
   │ (624x624x793)
   │
   ▼
 [Embed Topographical Embeddings]
   │
   ├─►[2x ResNet Blocks]───►[Downsampling to 8 km]
   │                            │
   │                            ├─►[Pad to 4992×4992 km]───►[Concatenate Low-res Inputs]
   │                            │
   ▼                            ▼
 [U-Net Backbone]            [2x ResNet Blocks]
   │                            │
   ├─►[Downsampling to 16 km]   │
   │                            │
   ▼                            │
 [Modified MaxVit Blocks]◄──────┘
   │
   │
 [Central Crop to 768×768 km]
   │
   ├─►[Upsampling Path with Skip Connections]
   │
   │
 [Central Crop to 512×512 km]
   │
   ├─►[MLP for Weather State Channels at 4 km resolution]
   │
   ├─►[Upsampling to 1 km for Precipitation Targets]
   │
   ▼
[Output Predictions]
```
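A schematic PyTorch forward pass that follows the same stages is sketched below. The blocks are placeholders (`nn.Identity`), the channel counts are arbitrary, and only the resolution bookkeeping (downsampling, padding, concatenation, central crops) is meant to match the diagram; it is not the repository's actual layer stack.

```python
import torch
import torch.nn.functional as F
from torch import nn

def center_crop(x: torch.Tensor, size: int) -> torch.Tensor:
    """Crop the spatial centre of a (B, C, H, W) tensor to size x size pixels."""
    h, w = x.shape[-2:]
    top, left = (h - size) // 2, (w - size) // 2
    return x[..., top:top + size, left:left + size]

# Placeholders standing in for the real ResNet / U-Net / MaxVit stacks.
blocks_4km, blocks_8km = nn.Identity(), nn.Identity()
unet_backbone, maxvit_blocks = nn.Identity(), nn.Identity()

hires = torch.randn(1, 8, 624, 624)  # 4 km inputs plus topographical embeddings (channels illustrative)
lores = torch.randn(1, 8, 624, 624)  # 8 km inputs covering 4992 km per side

x = blocks_4km(hires)                   # 2x ResNet blocks at 4 km
x = F.avg_pool2d(x, kernel_size=2)      # downsample 4 km -> 8 km: 312 x 312
x = F.pad(x, (156, 156, 156, 156))      # pad the 2496 km context out to 4992 km: 624 x 624
x = torch.cat([x, lores], dim=1)        # concatenate the low-resolution inputs
x = blocks_8km(x)                       # 2x ResNet blocks at 8 km
x = unet_backbone(x)                    # U-Net backbone at 8 km
x = F.avg_pool2d(x, kernel_size=2)      # downsample 8 km -> 16 km: 312 x 312
x = maxvit_blocks(x)                    # modified MaxVit blocks for global context
x = center_crop(x, 768 // 16)           # central crop to 768 km at 16 km stride: 48 x 48
# ... followed by the upsampling path with skip connections, the crop to 512 km,
# and the 4 km weather-state and 1 km precipitation heads.
print(x.shape)  # torch.Size([1, 16, 48, 48])
```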

#### Dense and Sparse Inputs

The model processes both dense (gridded) and sparse (point-observation) inputs, integrating temporal information such as the time of prediction and the forecast lead time.
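For illustration only (the repository may condition on time differently), one common pattern for injecting scalar conditioning such as the forecast lead time is to embed it and broadcast the embedding as extra input channels:

```python
import torch
import torch.nn as nn

class LeadTimeConditioning(nn.Module):
    """Sketch: embed a scalar forecast lead time and broadcast it over the spatial grid."""

    def __init__(self, max_lead_hours: int = 24, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(max_lead_hours + 1, dim)

    def forward(self, x: torch.Tensor, lead_time: torch.Tensor) -> torch.Tensor:
        # (B, dim) -> (B, dim, 1, 1), broadcast over H and W, then concatenate channel-wise.
        cond = self.embed(lead_time)[:, :, None, None].expand(-1, -1, *x.shape[-2:])
        return torch.cat([x, cond], dim=1)

x = torch.randn(2, 16, 128, 128)       # any spatial feature map (shape illustrative)
lead = torch.tensor([6, 12])           # lead times in hours, one per batch element
print(LeadTimeConditioning()(x, lead).shape)  # torch.Size([2, 48, 128, 128])
```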

#### Target Outputs

MetNet-3 produces both categorical and deterministic predictions for various weather-related variables, including precipitation and surface conditions, using a combination of loss functions tailored to the nature of each target.
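As a hedged sketch of what such a combination can look like (bin counts, tensor shapes, and the loss weighting below are illustrative, not values taken from the paper or this repository):

```python
import torch
import torch.nn.functional as F

# Categorical head: per-pixel logits over precipitation-rate bins.
precip_logits = torch.randn(2, 128, 256, 256)           # (B, bins, H, W)
precip_target = torch.randint(0, 128, (2, 256, 256))    # bin index per pixel

# Deterministic head: continuous surface variables regressed directly.
surface_pred = torch.randn(2, 6, 128, 128)
surface_target = torch.randn(2, 6, 128, 128)

categorical_loss = F.cross_entropy(precip_logits, precip_target)
regression_loss = F.mse_loss(surface_pred, surface_target)

# Weighted sum; the relative weight is a free hyperparameter in this sketch.
loss = categorical_loss + 0.1 * regression_loss
```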

#### ResNet Blocks and MaxVit

Central to the network's ability to capture complex patterns are the ResNet blocks, which handle local interactions, and the MaxVit blocks, which facilitate global comprehension of the input data through attention mechanisms.
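For orientation, a generic residual block of the kind used for such local-interaction stages might look like the following textbook pre-activation block (not the repository's exact layer definition):

```python
import torch
from torch import nn

class ResBlock(nn.Module):
    """Generic pre-activation residual block: norm -> activation -> 3x3 conv, twice, plus a skip."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.GroupNorm(8, channels), nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(8, channels), nn.GELU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local refinement; the MaxVit blocks add block- and grid-attention on top of this.
        return x + self.body(x)

x = torch.randn(1, 64, 156, 156)
print(ResBlock(64)(x).shape)  # torch.Size([1, 64, 156, 156])
```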

## Technical Specifications

- **Input Spatial Resolutions**: 4 km and 8 km
- **Output Resolutions**: From 1 km to 4 km depending on the variable
- **Embedding Stride**: 4 km
- **Topographical Embedding Parameters**: 20 per grid point
- **Network Parameters**: 227 million
- **Input Channels**: Various, including 617+1 channels from HRRR assimilation
- **Output Variables**: 6+617 for surface and assimilated state variables, respectively
- **Model Backbone**: U-Net with MaxVit transformer
- **Upsampling and Downsampling**: Implemented within the network to transition between different resolutions

## Low-Level Details and Optimization

Further technical details on architectural intricacies, optimization strategies, and hyperparameter choices are given in Supplement B of the MetNet-3 paper (see the citation below).

This README is intended as a technical overview for researchers and engineers who want to understand the composition and capabilities of MetNet-3. For implementation details and collaboration inquiries, refer to the paper's supplementary materials.


## Citation
```bibtex
@article{Andrychowicz2023DeepLF,
    title   = {Deep Learning for Day Forecasts from Sparse Observations},
    author  = {Marcin Andrychowicz and Lasse Espeholt and Di Li and Samier Merchant and Alexander Merose and Fred Zyda and Shreya Agrawal and Nal Kalchbrenner},
    journal = {ArXiv},
    year    = {2023},
    volume  = {abs/2306.06079},
    url     = {https://api.semanticscholar.org/CorpusID:259129311}
}

```


# License
MIT




            
