| Field | Value |
| --- | --- |
| Name | PyxLSTM |
| Version | 1.0.1 |
| Summary | PyxLSTM: An efficient and extensible implementation of the xLSTM architecture |
| Upload time | 2024-05-10 07:50:31 |
| Requires Python | >=3.6 |
| License | MIT License (Copyright (c) 2024 Mudit Bhargava) |
| Keywords | xlstm, lstm, language modeling, sequence modeling |
| Requirements | torch, numpy, pyyaml |
# PyxLSTM
PyxLSTM is a Python library that provides an efficient and extensible implementation of the Extended Long Short-Term Memory (xLSTM) architecture based on the research paper ["xLSTM: Extended Long Short-Term Memory"](https://arxiv.org/abs/2405.04517) by Beck et al. (2024). xLSTM enhances the traditional LSTM by introducing exponential gating, memory mixing, and a matrix memory structure, enabling improved performance and scalability for sequence modeling tasks.
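For reference, the scalar sLSTM update from the paper replaces the usual sigmoid input/forget gates with exponential gates and adds a normalizer state \(n_t\); in simplified form (the paper also describes a stabilizer state and the mLSTM matrix-memory variant, and the library's implementation details may differ):

```math
\begin{aligned}
i_t &= \exp(\tilde{i}_t), \qquad f_t = \exp(\tilde{f}_t) \ \text{or} \ \sigma(\tilde{f}_t) \\
c_t &= f_t \, c_{t-1} + i_t \, z_t, \qquad n_t = f_t \, n_{t-1} + i_t \\
h_t &= o_t \odot \frac{c_t}{n_t}
\end{aligned}
```

Here \(z_t\) is the (tanh-squashed) candidate input, \(o_t\) the sigmoid output gate, and dividing the cell state \(c_t\) by the normalizer \(n_t\) keeps the hidden state bounded despite the exponential gates.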
## Features
- Implements the sLSTM (scalar LSTM) and mLSTM (matrix LSTM) variants of xLSTM
- Supports pre and post up-projection block structures for flexible model architectures
- Provides high-level model definition and training utilities for ease of use
- Includes scripts for training, evaluation, and text generation
- Offers data processing utilities and customizable dataset classes
- Lightweight and modular design for seamless integration into existing projects
- Extensively tested and documented for reliability and usability
- Suitable for a wide range of sequence modeling tasks, including language modeling, text generation, and more
## Installation
To install PyxLSTM, you can use pip:
```bash
pip install PyxLSTM
```
Alternatively, you can clone the repository and install it manually:
```bash
git clone https://github.com/muditbhargava66/PyxLSTM.git
cd PyxLSTM
pip install -r requirements.txt
pip install .
```
## Usage
Here's a basic example of how to use PyxLSTM for language modeling:
```python
import torch

from xLSTM.model import xLSTM
from xLSTM.data import LanguageModelingDataset, Tokenizer
from xLSTM.utils import load_config, set_seed, get_device

# Load configuration
config = load_config("path/to/config.yaml")
set_seed(config.seed)
device = get_device()

# Initialize tokenizer and dataset
tokenizer = Tokenizer(config.vocab_file)
train_dataset = LanguageModelingDataset(config.train_data, tokenizer, config.max_length)

# Create xLSTM model
model = xLSTM(len(tokenizer), config.embedding_size, config.hidden_size,
              config.num_layers, config.num_blocks, config.dropout,
              config.bidirectional, config.lstm_type)
model.to(device)

# Train the model (`train` is a training loop such as the one in scripts/train.py)
optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
criterion = torch.nn.CrossEntropyLoss(ignore_index=tokenizer.pad_token_id)
train(model, train_dataset, optimizer, criterion, config, device)
```
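The `train` helper above is not defined in the snippet; a minimal sketch of such a loop, with hypothetical names (assuming the dataset yields `(inputs, targets)` batches and `config` exposes a `num_epochs` attribute), might look like:

```python
# Hypothetical sketch of the `train` helper used above; PyxLSTM's actual
# training utilities may differ.
def train(model, dataset, optimizer, criterion, config, device):
    model.to(device)
    mean_loss = 0.0
    for epoch in range(config.num_epochs):
        total_loss, batches = 0.0, 0
        for inputs, targets in dataset:
            optimizer.zero_grad()              # clear accumulated gradients
            logits = model(inputs)             # forward pass
            loss = criterion(logits, targets)  # e.g. cross-entropy
            loss.backward()                    # backpropagate
            optimizer.step()                   # update parameters
            total_loss += float(loss)
            batches += 1
        mean_loss = total_loss / max(batches, 1)
        print(f"epoch {epoch + 1}: mean loss {mean_loss:.4f}")
    return mean_loss
```

The shape of the loop (zero gradients, forward, loss, backward, step) is standard PyTorch practice; the real script may add gradient clipping, validation, and checkpointing.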
For more detailed usage instructions and examples, please refer to the documentation.
## Code Directory Structure
```
xLSTM/
│
├── xLSTM/
│   ├── __init__.py
│   ├── slstm.py
│   ├── mlstm.py
│   ├── block.py
│   └── model.py
│
├── scripts/
│   ├── train.py
│   ├── evaluate.py
│   └── generate.py
│
├── data/
│   ├── dataset.py
│   └── tokenizer.py
│
├── utils/
│   ├── config.py
│   ├── logging.py
│   └── utils.py
│
├── tests/
│   ├── test_slstm.py
│   ├── test_mlstm.py
│   ├── test_block.py
│   └── test_model.py
│
├── docs/
│   ├── slstm.md
│   ├── mlstm.md
│   └── training.md
│
├── examples/
│   └── language_modeling.py
│
├── .gitignore
├── pyproject.toml
├── MANIFEST.in
├── requirements.txt
├── README.md
└── LICENSE
```
- xLSTM/: The main Python package containing the implementation.
  - slstm.py: Implementation of the sLSTM module.
  - mlstm.py: Implementation of the mLSTM module.
  - block.py: Implementation of the xLSTM blocks (pre and post up-projection).
  - model.py: High-level xLSTM model definition.
- scripts/: Scripts for training, evaluation, and text generation.
  - train.py: Script for training the xLSTM model.
  - evaluate.py: Script for evaluating a trained model.
  - generate.py: Script for generating text with a trained model.
- data/: Data processing utilities.
  - dataset.py: Custom dataset classes for loading and processing data.
  - tokenizer.py: Tokenization utilities.
- utils/: Utility modules.
  - config.py: Configuration management.
  - logging.py: Logging setup.
  - utils.py: Miscellaneous utility functions.
- tests/: Unit tests for the individual modules.
  - test_slstm.py: Tests for the sLSTM module.
  - test_mlstm.py: Tests for the mLSTM module.
  - test_block.py: Tests for the xLSTM blocks.
  - test_model.py: Tests for the overall xLSTM model.
- docs/: Documentation files.
  - slstm.md: Documentation for sLSTM.
  - mlstm.md: Documentation for mLSTM.
  - training.md: Training guide.
- examples/: Example usage scripts.
  - language_modeling.py: Example script for language modeling with xLSTM.
- .gitignore: Git ignore file to exclude unnecessary files/directories.
- pyproject.toml: Package build configuration.
- MANIFEST.in: Source distribution manifest.
- requirements.txt: List of required Python dependencies.
- README.md: Project README file.
- LICENSE: Project license file.
## Running and Testing the Codebase
To run and test the PyxLSTM codebase, follow these steps:
1. Clone the PyxLSTM repository:
   ```bash
   git clone https://github.com/muditbhargava66/PyxLSTM.git
   ```
2. Navigate to the cloned directory:
   ```bash
   cd PyxLSTM
   ```
3. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Run the unit tests:
   ```bash
   python -m unittest discover tests
   ```
   This command runs all the unit tests in the `tests` directory: `test_slstm.py`, `test_mlstm.py`, `test_block.py`, and `test_model.py`.
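As an illustration of the shape-style checks such a test suite typically contains (hypothetical; the real test files exercise the torch modules and may assert different properties):

```python
import unittest

# Stand-in for an sLSTM step, used only to illustrate the test pattern;
# `fake_slstm_step` and its (batch_size, hidden_size) output are assumptions.
def fake_slstm_step(batch_size, hidden_size):
    return [[0.0] * hidden_size for _ in range(batch_size)]

class TestSLSTMShape(unittest.TestCase):
    def test_output_shape(self):
        out = fake_slstm_step(4, 8)
        self.assertEqual((len(out), len(out[0])), (4, 8))

if __name__ == "__main__":
    unittest.main()
```

Files named `test_*.py` with `unittest.TestCase` subclasses are exactly what `python -m unittest discover` picks up.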
5. If all the tests pass, run the example script:
   ```bash
   python examples/language_modeling.py --config path/to/config.yaml
   ```
   Replace `path/to/config.yaml` with the actual path to your configuration file. The configuration file should specify the dataset paths, model hyperparameters, and other settings.

   The `language_modeling.py` script will train an xLSTM model on the specified dataset using the provided configuration.
6. Monitor the training progress and metrics:

   During training, the script displays the current epoch, training loss, and validation loss. Watch these metrics to confirm the model is learning.
7. Evaluate the trained model:

   After training, evaluate the model on a test dataset with the `evaluate.py` script:
   ```bash
   python scripts/evaluate.py --test_data path/to/test_data.txt --vocab_file path/to/vocab.txt --checkpoint_path path/to/checkpoint.pt
   ```
   Replace the placeholders with the actual paths to your test data, vocabulary file, and the checkpoint file generated during training.

   The `evaluate.py` script loads the trained model from the checkpoint and evaluates it on the test dataset, reporting metrics such as test loss and perplexity.
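Perplexity is simply the exponential of the mean per-token cross-entropy (negative log-likelihood); a self-contained illustration (the `perplexity` helper here is for exposition, not PyxLSTM's API):

```python
import math

def perplexity(nll_per_token):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

# A model whose per-token loss is constantly ln(50) has perplexity ~50,
# i.e. it is "as uncertain as" a uniform choice over 50 tokens:
print(perplexity([math.log(50)] * 10))
```

Lower perplexity means the model assigns higher probability to the held-out text.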
8. Generate text using the trained model:

   Use the trained model to generate text with the `generate.py` script:
   ```bash
   python scripts/generate.py --vocab_file path/to/vocab.txt --checkpoint_path path/to/checkpoint.pt --prompt "Your prompt text"
   ```
   Replace the placeholders with the actual paths to your vocabulary file and checkpoint file, and provide a prompt to start generation.

   The `generate.py` script loads the trained model and generates text conditioned on the prompt.
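Autoregressive generation scripts of this kind typically sample each next token from the softmax of the model's logits, optionally sharpened by a temperature; a minimal pure-Python sketch of that sampling step (hypothetical interface, not PyxLSTM's actual API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Low temperature sharpens the distribution toward the argmax:
rng = random.Random(0)
print(sample_next_token([2.0, 0.5, 0.1], temperature=0.1, rng=rng))  # → 0
```

At temperature 1.0 sampling follows the model's distribution directly; as the temperature drops toward 0 the sampler approaches greedy decoding.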
These steps should help you run and test the PyxLSTM codebase. Make sure you have the necessary dependencies installed and the required data files (train, validation, and test datasets) available.
If you encounter any issues or have further questions, please refer to the PyxLSTM documentation or reach out to the maintainers for assistance.
## Documentation
The documentation for PyxLSTM can be found in the `docs` directory. It provides detailed information about the library's components, usage guidelines, and examples.
## Citation
If you use PyxLSTM in your research or projects, please cite the original xLSTM paper:
```bibtex
@article{Beck2024xLSTM,
title={xLSTM: Extended Long Short-Term Memory},
author={Beck, Maximilian and Pöppel, Korbinian and Spanring, Markus and Auer, Andreas and Prudnikova, Oleksandra and Kopp, Michael and Klambauer, Günter and Brandstetter, Johannes and Hochreiter, Sepp},
journal={arXiv preprint arXiv:2405.04517},
year={2024}
}
```
Paper link: [https://arxiv.org/abs/2405.04517](https://arxiv.org/abs/2405.04517)
## Contributing
Contributions to PyxLSTM are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository.
## License
PyxLSTM is released under the MIT License. See the `LICENSE` file for more information.
## Acknowledgements
We would like to acknowledge the original authors of the xLSTM architecture for their valuable research and contributions to the field of sequence modeling.
## Contact
For any questions or inquiries, please contact the project maintainer:
- Name: Mudit Bhargava
- GitHub: [@muditbhargava66](https://github.com/muditbhargava66)
We hope you find PyxLSTM useful for your sequence modeling projects!
## Star History
<a href="https://star-history.com/#muditbhargava66/PyxLSTM&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=muditbhargava66/PyxLSTM&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=muditbhargava66/PyxLSTM&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=muditbhargava66/PyxLSTM&type=Date" />
</picture>
</a>
---