# Yoyodyne 🪀 Pretrained
[PyPI](https://pypi.org/project/yoyodyne-pretrained)
[CircleCI build status](https://dl.circleci.com/status-badge/redirect/gh/CUNY-CL/yoyodyne-pretrained/tree/master)
Yoyodyne Pretrained provides sequence-to-sequence transduction with pretrained
transformer modules.
These models are implemented using [PyTorch](https://pytorch.org/),
[Lightning](https://lightning.ai/), and [Hugging Face
transformers](https://huggingface.co/docs/transformers/en/index).
## Philosophy
Yoyodyne Pretrained inherits many features from Yoyodyne itself, but limits
itself to two types of pretrained transformers:
- a pretrained transformer encoder and a pretrained transformer decoder with
a randomly-initialized cross-attention (à la Rothe et al. 2020)
- a T5 model
Because these modules are pretrained, there are few architectural
hyperparameters to set once one has determined which encoder and decoder to
warm-start from. To keep Yoyodyne itself as simple as possible, Yoyodyne
Pretrained is a separate library, though it has many of the same features and
interfaces.
## Installation
### Local installation
To install Yoyodyne Pretrained and its dependencies, run the following command:
    pip install .
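Alternatively, since the package is published on
[PyPI](https://pypi.org/project/yoyodyne-pretrained), it should also be
possible to install a released version directly:

    pip install yoyodyne-pretrained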
### Google Colab
Yoyodyne Pretrained is also compatible with [Google
Colab](https://colab.research.google.com/) GPU runtimes.
1. Click "Runtime" > "Change Runtime Type".
2. In the dialogue box, under the "Hardware accelerator" dropdown box, select "GPU", then click "Save".
3. You may be prompted to delete the old runtime. Do so if you wish.
4. Install and run Yoyodyne Pretrained as usual, prefixing shell commands
   with `!`, as in the example below.
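For example, a notebook cell that installs the library from PyPI and launches
training might look like the following (the config filename here is
illustrative):

    !pip install yoyodyne-pretrained
    !yoyodyne_pretrained fit --config config.yaml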
## File formats
Other than YAML configuration files, Yoyodyne Pretrained operates on basic
tab-separated values (TSV) data files. The user can specify source, features,
and target columns. If a feature column is specified, it is concatenated (with a
separating space) to the source.
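For instance, a hypothetical inflection data file with source lemmas, feature
tags, and target inflected forms might look like the following (columns are
tab-separated):

    dream	V;PST	dreamt
    think	V;PST	thought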
## Usage
The `yoyodyne_pretrained` command-line tool uses a subcommand interface with
four different modes. To see the full set of options available for each
subcommand, use the `--print_config` flag. For example:
    yoyodyne_pretrained fit --print_config
will show all configuration options (and their default values) for the `fit`
subcommand.
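Because `--print_config` writes the configuration to standard output, one way
to bootstrap an experiment is to redirect the output to a file and then edit
that file; a minimal sketch:

    yoyodyne_pretrained fit --print_config > config.yaml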
### Training (`fit`)
In `fit` mode, one trains a Yoyodyne Pretrained model from scratch. Naturally,
most configuration options need to be set at training time; for example, it is
not possible to switch between different pretrained encoders after training a
model.
This mode is invoked using the `fit` subcommand, like so:
    yoyodyne_pretrained fit --config path/to/config.yaml
#### Seeding
Setting the `seed_everything:` argument to some fixed value ensures a
reproducible experiment (modulo hardware non-determinism).
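For example, the following YAML snippet fixes the seed at an arbitrary value:

    ...
    seed_everything: 42
    ...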
#### Model architectures
##### Encoder-decoder models
In practice it is usually wise to tie the encoder and decoder parameters, as in
the following YAML snippet:
    ...
    model:
      class_path: yoyodyne_pretrained.models.EncoderDecoderModel
      init_args:
        model_name: google-bert/bert-base-multilingual-cased
        tie_encoder_decoder: true
    ...
##### T5 models
The following snippet shows a simple T5 configuration using ByT5:
    ...
    model:
      class_path: yoyodyne_pretrained.models.T5Model
      init_args:
        model_name: google/byt5-base
        tie_encoder_decoder: true
    ...
#### Optimization
Yoyodyne Pretrained requires an optimizer and a learning rate scheduler. The
default optimizer is
[`torch.optim.Adam`](https://docs.pytorch.org/docs/stable/generated/torch.optim.Adam.html),
and the default scheduler is `yoyodyne_pretrained.schedulers.Dummy`, which
keeps the learning rate fixed at its initial value and takes no explicit
configuration arguments.
The following YAML snippet shows the use of the Adam optimizer with a
non-default initial learning rate and the
`yoyodyne_pretrained.schedulers.WarmupInverseSquareRoot` LR scheduler:
    ...
    model:
      ...
      optimizer:
        class_path: torch.optim.Adam
        init_args:
          lr: 1.0e-5
      scheduler:
        class_path: yoyodyne_pretrained.schedulers.WarmupInverseSquareRoot
        init_args:
          warmup_epochs: 10
    ...
#### Checkpointing
The
[`ModelCheckpoint`](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.ModelCheckpoint.html)
callback is used to control the generation of checkpoint files:
    ...
    checkpoint:
      filename: "model-{epoch:03d}-{val_accuracy:.4f}"
      mode: max
      monitor: val_accuracy
      verbose: true
    ...
Alternatively, one can specify a checkpointing configuration that minimizes
validation loss, as follows:
    ...
    checkpoint:
      filename: "model-{epoch:03d}-{val_loss:.4f}"
      mode: min
      monitor: val_loss
      verbose: true
    ...
A checkpoint config must be specified or Yoyodyne Pretrained will not generate
any checkpoints.
#### Callbacks
The user will likely want to configure additional callbacks. Some useful
examples are given below.
The
[`LearningRateMonitor`](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.callbacks.LearningRateMonitor.html)
callback records learning rates:
    ...
    trainer:
      callbacks:
      - class_path: lightning.pytorch.callbacks.LearningRateMonitor
        init_args:
          logging_interval: epoch
    ...
The
[`EarlyStopping`](https://lightning.ai/docs/pytorch/stable/common/early_stopping.html)
callback enables early stopping based on a monitored quantity and a fixed
`patience`:
    ...
    trainer:
      callbacks:
      - class_path: lightning.pytorch.callbacks.EarlyStopping
        init_args:
          monitor: val_loss
          patience: 10
          verbose: true
    ...
#### Logging
By default, Yoyodyne Pretrained performs some minimal logging to standard error
and uses progress bars to keep track of progress during each epoch. However,
one can enable additional logging facilities during training, using a syntax
similar to the one shown above for callbacks.
The
[`CSVLogger`](https://lightning.ai/docs/pytorch/stable/extensions/generated/lightning.pytorch.loggers.CSVLogger.html)
logs all monitored quantities to a CSV file. A sample configuration is given
below.
    ...
    trainer:
      logger:
      - class_path: lightning.pytorch.loggers.CSVLogger
        init_args:
          save_dir: /Users/Shinji/models
    ...
The
[`WandbLogger`](https://lightning.ai/docs/pytorch/stable/extensions/generated/lightning.pytorch.loggers.WandbLogger.html)
works similarly to the `CSVLogger`, but sends the data to the third-party
website [Weights & Biases](https://wandb.ai/site), where it can be used to
generate charts or share artifacts. A sample configuration is given below.
    ...
    trainer:
      logger:
      - class_path: lightning.pytorch.loggers.WandbLogger
        init_args:
          project: unit1
          save_dir: /Users/Shinji/models
    ...
Note that this functionality requires a working account with Weights & Biases.
#### Other options
Dropout probability and/or label smoothing are specified as arguments to the
`model`, as shown in the following YAML snippet.
    ...
    model:
      dropout: 0.5
      label_smoothing: 0.1
    ...
Decoding is performed with beam search if `model: num_beams: ...` is set to a
value greater than 1; the beam width ("number of beams") defaults to 5.
Batch size is specified using `data: batch_size: ...` and defaults to 32.
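For instance, the following snippet, with illustrative values, enables beam
search at the default width and doubles the default batch size:

    ...
    model:
      num_beams: 5
    data:
      batch_size: 64
    ...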
By default, training uses 32-bit precision. However, the
`trainer: precision: ...` flag allows the user to perform training with half
precision (`16`), or with mixed-precision formats like `bf16-mixed` if
supported by the accelerator. This may reduce the size of the model and batches
in memory, allowing one to use larger batches, or it may simply provide small
speed-ups.
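For example, the following snippet requests bfloat16 mixed precision, assuming
the accelerator supports it:

    ...
    trainer:
      precision: bf16-mixed
    ...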
There are a number of ways to specify how long a model should train for. For
example, the following YAML snippet specifies that training should run for 100
epochs or 6 wall-clock hours, whichever comes first.
    ...
    trainer:
      max_epochs: 100
      max_time: 00:06:00:00
    ...
### Validation (`validate`)
In `validate` mode, one runs the validation step over labeled validation data
(specified as `data: val: path/to/validation.tsv`) using a previously trained
checkpoint (`--ckpt_path path/to/checkpoint.ckpt` from the command line),
recording loss and other statistics for the validation set. In practice this is mostly useful
for debugging.
This mode is invoked using the `validate` subcommand, like so:
    yoyodyne_pretrained validate --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
### Evaluation (`test`)
In `test` mode, one computes accuracy over held-out test data (specified as
`data: test: path/to/test.tsv`) using a previously trained checkpoint
(`--ckpt_path path/to/checkpoint.ckpt` from the command line); it differs from
validation mode in that it uses the `test` file rather than the `val` file.
This mode is invoked using the `test` subcommand, like so:
    yoyodyne_pretrained test --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
### Inference (`predict`)
In `predict` mode, a previously trained model checkpoint
(`--ckpt_path path/to/checkpoint.ckpt` from the command line) is used to label
an input file. One must also specify the path where the predictions will be
written.
    ...
    predict:
      path: /Users/Shinji/predictions.conllu
    ...
This mode is invoked using the `predict` subcommand, like so:
    yoyodyne_pretrained predict --config path/to/config.yaml --ckpt_path path/to/checkpoint.ckpt
Many tokenizers, including the BERT tokenizer, are lossy in the sense
that they may introduce spaces not present in the input, particularly adjacent
to word-internal punctuation like dashes (e.g., *state-of-the-art*).
Unfortunately, there is little that can be done about this within this library,
but it may be possible to fix this as a post-processing step.
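For instance, if the only artifact were spaces inserted around hyphens, a crude
post-processing pass in the shell might look like the following (the filename
is illustrative, and this sketch would also collapse any legitimate
space-hyphen-space sequences):

    sed -E 's/ +- +/-/g' predictions.txt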
## Examples
See [`examples`](examples/README.md) for some worked examples including
hyperparameter sweeping with [Weights & Biases](https://wandb.ai/site).
## Testing
Given the size of the models, a basic integration test of Yoyodyne Pretrained
exceeds what is feasible without access to a reasonably powerful GPU. Thus
tests have to be run locally rather than via cloud-based continuous integration
systems. The integration tests take roughly 30 minutes in total. To test the
system, run the following:
    pytest -vvv tests
## License
Yoyodyne Pretrained is distributed under an [Apache 2.0 license](LICENSE.txt).
## For developers
We welcome contributions using the fork-and-pull model.
### Releasing
1. Create a new branch. E.g., if you want to call this branch "release":
`git checkout -b release`
2. Sync your fork's branch to the upstream master branch. E.g., if the upstream
remote is called "upstream": `git pull upstream master`
3. Increment the version field in [`pyproject.toml`](pyproject.toml).
4. Stage your changes: `git add pyproject.toml`.
5. Commit your changes: `git commit -m "your commit message here"`
6. Push your changes. E.g., if your branch is called "release":
`git push origin release`
7. Submit a PR for your release and wait for it to be merged into `master`.
8. Tag the `master` branch's last commit. The tag should begin with `v`; e.g.,
   if the new version is 3.1.4, the tag should be `v3.1.4`. This can be done:
    - on GitHub itself: click the "Releases" or "Create a new release" link on
      the right-hand side of the Yoyodyne Pretrained GitHub page and follow the
      dialogues.
    - from the command line using `git tag` (see the example after this list).
9. Build the new release: `python -m build`
10. Upload the result to PyPI: `twine upload dist/*`
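For step 8, tagging from the command line might look like the following,
assuming the new version is 3.1.4 and the shared remote is called "upstream":

    git tag v3.1.4
    git push upstream v3.1.4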
## References
Rothe, S., Narayan, S., and Severyn, A. 2020. Leveraging pre-trained
checkpoints for sequence generation tasks. *Transactions of the Association
for Computational Linguistics* 8: 264–280.
(See also [`yoyodyne-pretrained.bib`](yoyodyne-pretrained.bib) for more work
used during the development of this library.)