Name | orthogonium
Version | 0.0.1
Summary | None
home_page | None
upload_time | 2025-01-14 15:12:52
maintainer | None
docs_url | None
author | Thibaut Boissin
requires_python | None
license | MIT
keywords | None
requirements | No requirements were recorded.
<div align="center">
<img src="assets/banner.png" width="50%" alt="Orthogonium" align="center" />
</div>
<br>
<div align="center">
<a href="#">
<img src="https://img.shields.io/badge/Python-3.9+-efefef">
</a>
<a href="#">
<img src="https://img.shields.io/badge/Pytorch-2.0+-00008b">
</a>
<a href="https://github.com/thib-s/orthogonium/actions/workflows/linters.yml">
<img alt="PyLint" src="https://github.com/thib-s/orthogonium/actions/workflows/linters.yml/badge.svg">
</a>
<a href='https://coveralls.io/github/thib-s/orthogonium?branch=main'>
<img src='https://coveralls.io/repos/github/thib-s/orthogonium/badge.svg?branch=main' alt='Coverage Status' />
</a>
<a href="https://github.com/thib-s/orthogonium/actions/workflows/tests.yml">
<img alt="Tests" src="https://github.com/thib-s/orthogonium/actions/workflows/tests.yml/badge.svg">
</a>
<a href="https://github.com/thib-s/orthogonium/actions/workflows/python-publish.yml">
<img alt="Pypi" src="https://github.com/thib-s/orthogonium/actions/workflows/python-publish.yml/badge.svg">
</a>
<a href="https://pepy.tech/project/orthogonium">
<img alt="Pepy" src="https://static.pepy.tech/badge/orthogonium">
</a>
<a href="#">
<img src="https://img.shields.io/badge/License-MIT-efefef">
</a>
<a href="https://thib-s.github.io/orthogonium/">
<img alt="Documentation" src="https://img.shields.io/badge/Docs-here-0000ff">
</a>
</div>
<br>
# ✨ Orthogonium: Improved implementations of orthogonal layers
This library aims to centralize, standardize and improve methods to
build orthogonal layers, with a focus on convolutional layers. We noticed that a layer's implementation plays a
significant role in the final performance: a more efficient implementation
allows larger networks and more training steps within the same compute
budget. Our implementations therefore differ from the original papers in order to
be faster, consume less memory, or be more flexible. Feel free to read the [documentation](https://thib-s.github.io/orthogonium/)!
# 📃 What is included in this library?
| Layer name | Description | Orthogonal? | Usage | Status |
|---------------------|------------------------------------------------------------------------------------------------------------------------------------|--------------|------------------------------------------------------------------------------------------------------------------------------------|----------------|
| AOC (Adaptive-BCOP) | The most scalable method to build orthogonal convolutions. Allows control of kernel size, stride, groups, dilation and convtranspose | Orthogonal | A flexible method for complex architectures. Preserves orthogonality and works on large-scale images. | done |
| Adaptive-SC-Fac | Same as the previous layer but based on SC-Fac instead of BCOP, which claims a complete parametrization of separable convolutions | Orthogonal | Same as above | pending |
| Adaptive-SOC | SOC modified to be: i) faster and more memory efficient, ii) handle stride, groups, dilation & convtranspose | Orthogonal | Good for depthwise convolutions and cases where control over the kernel size is not required | in progress |
| SLL | The original SLL layer, which is already quite efficient. | 1-Lipschitz | Well suited for residual blocks; it also contains ReLU activations. | done |
| SLL-AOC | SLL-AOC is to the downsampling block what SLL is to the residual block (see the ResNet paper) | 1-Lipschitz | Allows constructing a "strided" residual block that can change the number of channels. It adds a convolution in the residual path. | done |
| Sandwich-AOC | Sandwich convolutions that use AOC to replace the FFT, allowing them to scale to large images. | 1-Lipschitz | | pending |
| Adaptive-ECO | ECO modified to i) handle stride, groups & convtranspose | Orthogonal | | (low priority) |
## Directory structure
```
orthogonium
├── layers
│   ├── conv
│   │   ├── AOC
│   │   │   └── ortho_conv.py  # contains AdaptiveOrthoConv2d layer
│   │   ├── AdaptiveSOC
│   │   │   └── ortho_conv.py  # contains AdaptiveSOCConv2d layer (untested)
│   │   └── SLL
│   │       └── sll_layer.py  # contains SDPBasedLipschitzConv, SDPBasedLipschitzDense, SLLxAOCLipschitzResBlock
│   ├── legacy
│   │   └── original code of BCOP, SOC, Cayley etc.
│   ├── linear
│   │   └── ortho_linear.py  # contains OrthoLinear layer (can be used with BB, QR and Exp parametrization)
│   ├── normalization.py  # contains Batch centering and Layer centering
│   ├── custom_activations.py  # contains custom activations for 1-Lipschitz networks
│   └── channel_shuffle.py  # contains channel shuffle layer
├── model_factory.py  # factory function to construct various models for the zoo
└── losses  # loss functions, VRA estimation
```
## AOC:
AOC is a method to build orthogonal convolutions with an explicit kernel that supports
stride, transposed convolution, grouped convolutions and dilation (and all compositions
of these parameters). This approach is highly scalable and can be applied to problems
like ImageNet-1K.
## Adaptive-SC-Fac:
As AOC is built on top of the BCOP method, we can construct an equivalent method on top of
SC-Fac instead. This allows comparing the performance of the two methods, given that they
have very similar parametrizations. (See our paper for a discussion of the similarities
and differences between the two methods.)
## Adaptive-SOC:
Adaptive-SOC blends the approaches of AOC and SOC. It differs from SOC in that it is more
memory efficient and sometimes faster. It also handles stride, groups, dilation and
transposed convolutions. However, it does not allow explicit control of the kernel size:
the resulting kernel is larger than the requested one, because computing the exponential
of a kernel increases the kernel size at each iteration.
Its development is still in progress, so extra testing is still required to ensure exact orthogonality.
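The kernel-size growth can be illustrated directly: the support of the convolution of two kernels is the sum of their supports minus one, so each successive term of the exponential series enlarges the kernel. A small torch sketch (the kernel values are random placeholders):

```python
import torch
import torch.nn.functional as F

# Convolving a 3x3 kernel with itself yields a 5x5 kernel
# (support sizes add: 3 + 3 - 1 = 5), which is why exponential-based
# parametrizations like SOC cannot keep the kernel size fixed.
k = torch.randn(1, 1, 3, 3)
k2 = F.conv2d(k, k.flip(-1, -2), padding=2)  # "full" convolution of k with itself
print(k2.shape)  # torch.Size([1, 1, 5, 5])
```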
## SLL:
SLL is a method to construct small residual blocks with ReLU activations. We kept most of
the original implementation and added `SLLxAOCLipschitzResBlock`, which constructs a
down-sampling residual block by fusing SLL with AOC.
## More layers are coming soon!
# 🏠 Install the library:
The library is available on PyPI, so you can install it by running the following command:
```
pip install orthogonium
```
If you wish to dive deep into the code and edit your local version, you can clone the repository and run the following commands
to install it locally:
```
git clone git@github.com:thib-s/orthogonium.git
pip install -e .
```
## Use the layer:
```python
from orthogonium.layers.conv.AOC import AdaptiveOrthoConv2d, AdaptiveOrthoConvTranspose2d
from orthogonium.reparametrizers import DEFAULT_ORTHO_PARAMS

# use AdaptiveOrthoConv2d with the same params as torch.nn.Conv2d
kernel_size = 3
conv = AdaptiveOrthoConv2d(
    kernel_size=kernel_size,
    in_channels=256,
    out_channels=256,
    stride=2,
    groups=16,
    dilation=2,
    padding_mode="circular",
    ortho_params=DEFAULT_ORTHO_PARAMS,
)
# conv.weight can be assigned to a torch.nn.Conv2d

# this works similarly for ConvTranspose2d:
conv_transpose = AdaptiveOrthoConvTranspose2d(
    in_channels=256,
    out_channels=256,
    kernel_size=kernel_size,
    stride=2,
    dilation=2,
    groups=16,
    ortho_params=DEFAULT_ORTHO_PARAMS,
)
```
# 🐯 Model Zoo
Stay tuned, a model zoo will be available soon!
# 💥 Disclaimer
Given the great quality of the original implementations, orthogonium does not focus on exactly reproducing the results of
the original papers, but rather on providing a more efficient implementation. Some degradation in the final provable
accuracy may be observed when reproducing the results of the original papers; we consider this acceptable if the gain
in scalability is worth it. This library aims to provide more scalable and versatile implementations for people who seek
to use orthogonal layers at a larger scale.
# 🔭 Resources
## 1-Lipschitz CNNs and orthogonal CNNs
- 1-Lipschitz Layers Compared: [github](https://github.com/berndprach/1LipschitzLayersCompared) and [paper](https://berndprach.github.io/publication/1LipschitzLayersCompared)
- BCOP: [github](https://github.com/ColinQiyangLi/LConvNet) and [paper](https://arxiv.org/abs/1911.00937)
- SC-Fac: [paper](https://arxiv.org/abs/2106.09121)
- ECO: [paper](https://openreview.net/forum?id=Zr5W2LSRhD)
- Cayley: [github](https://github.com/locuslab/orthogonal-convolutions) and [paper](https://arxiv.org/abs/2104.07167)
- LOT: [github](https://github.com/AI-secure/Layerwise-Orthogonal-Training) and [paper](https://arxiv.org/abs/2210.11620)
- ProjUNN-T: [github](https://github.com/facebookresearch/projUNN) and [paper](https://arxiv.org/abs/2203.05483)
- SLL: [github](https://github.com/araujoalexandre/Lipschitz-SLL-Networks) and [paper](https://arxiv.org/abs/2303.03169)
- Sandwich: [github](https://github.com/acfr/LBDN) and [paper](https://arxiv.org/abs/2301.11526)
- AOL: [github](https://github.com/berndprach/AOL) and [paper](https://arxiv.org/abs/2208.03160)
- SOC: [github](https://github.com/singlasahil14/SOC) and [paper 1](https://arxiv.org/abs/2105.11417), [paper 2](https://arxiv.org/abs/2211.08453)
## Lipschitz constant evaluation
- [Spectral Norm of Convolutional Layers with Circular and Zero Paddings](https://arxiv.org/abs/2402.00240)
- [Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration](https://arxiv.org/abs/2305.16173)
- [github of the two papers](https://github.com/blaisedelattre/lip4conv/tree/main)
# 🍻 Contributing
This library is still at a very early stage, so expect some bugs and missing features. Also, before version 1.0.0,
the API may change and no backward compatibility is guaranteed (code is expected to keep working under minor changes,
but loading a parametrized network could fail). This allows rapid integration of new features. If you plan
to release a trained architecture, exporting the convolutions to `torch.nn.Conv2d` is advised (by saving the `weight`
attribute of a layer). If you plan to release a training script, pin the version in your requirements.
To prioritize development, we will focus on the most used layers and models. If you have a specific need,
please open an issue and we will try to address it as soon as possible.
Also, if you have a model that you would like to share, please open a PR with the model and the training script. We will
be happy to include it in the zoo.
If you want to contribute, please open a PR with the new feature or bug fix. We will review it as soon as possible.
## Ongoing developments
Layers:
- SOC:
  - remove channel padding to handle ci != co efficiently
  - enable groups
  - enable support for native stride, transposition and dilation
- AOL:
  - torch implementation of AOL
- Sandwich:
  - import code
  - plug AOC into the Sandwich conv

Zoo:
- models from the paper