| Name | seqthetic |
| Version | 0.1.11 |
| Summary | Creates sequence data for pretraining and benchmarking sequence models |
| upload_time | 2024-06-24 02:55:25 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | Shom |
| requires_python | <3.11,>=3.10 |
| license | None |
| keywords | |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Seqthetic
This tool generates synthetic sequence data for testing ideas in pretraining sequence models. It is used in the meta-language repo (coming soon).
Features:
1. **Diversity**: Supports generating data following various patterns, including fractional Brownian motion (fBm), [LIME](https://arxiv.org/pdf/2101.06223) (TODO), [TILT](https://arxiv.org/abs/2004.14601) (TODO), and [synthetic pretraining tasks](https://arxiv.org/abs/2206.10139).
2. **Spec-Driven**: Everything about the dataset is described by a spec, which helps with documenting each ablation and with high-level manipulation.
3. **Reproducibility**: Processes involving randomness have their seeds recorded in the spec file. This means you can transfer the dataset by sharing only the spec file and regenerating the data from it.
## Installation
```
pip install -e .
```
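Since the package is published on PyPI, installing the released version should also work (assuming no extra system dependencies):
```
pip install seqthetic
```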
## Usage
### Generation
To generate a synthetic dataset, write a spec and use a `Synthesizer` to make the dataset. For details on specs, see [Concepts](#concepts):
```python
# assumed import path: the package is named seqthetic and exposes these classes
from seqthetic import SynthesisSpec, Synthesizer

# write the spec
spec = SynthesisSpec(...)
# pass it to the synthesizer
szer = Synthesizer(spec)
# build the dataset
dataset = szer.make_dataset()
# save the dataset (or call dataset.save())
szer.save_dataset()
```
You will get a JSON file and a CSV file. The JSON file stores the spec and ends with `.sqf.json`; the CSV stores the dataset. Their names come from the `name` field in the spec, or a unique id if `name` is not given.
### Save & load
Please make sure the spec JSON file and the CSV file are in the same directory.
```python
# Pass the name of the spec; loads only the spec.
spec = SynthesisSpec.load('ABC')
# Pass the name of the dataset; loads both the spec and the dataset.
# Use the seqthetic.Dataset class.
dataset = Dataset.load('ABC')
spec_in_dataset = dataset.spec
```
### Creating New Dependency
Creating a new dependency has several requirements (a minimal sketch follows this list):
1. Add a generator field: `generator: str = 'xxx'`, where `xxx` is the name of the generation method you will use. This field discriminates between dependencies when parsing spec files.
2. Add a `custom_seed_schema` wrapped with `SchemaList`, e.g. `custom_seed_schema = SchemaList(['hurst', 'dependency'])`, and record every seed used for random sampling. `custom_seed_schema` is used for storing seeds and loading them back into the dependency.
3. Add a `metadata_schema` to specify what will be stored in the metadata field of the `Dataset`. This is not enforced but helps with documentation.
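A minimal sketch under stated assumptions: the base class name `Dependency`, the class name `PeriodicDependency`, and the seed names are illustrative; only the `generator`, `custom_seed_schema`, and `metadata_schema` fields follow the requirements above.
```python
# Hypothetical sketch, not the library's actual base-class API.
class PeriodicDependency(Dependency):
    # 1. discriminates this dependency when parsing spec files
    generator: str = 'periodic'
    # 2. every seed used for random sampling, wrapped in SchemaList
    custom_seed_schema = SchemaList(['period', 'dependency'])
    # 3. documents what lands in the Dataset metadata field (not enforced)
    metadata_schema = ['period']
```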
### Register Dependency
If you want to use a custom dependency in a spec, register it with `SynthesisSpec.register_dependency`:
```python
SynthesisSpec.register_dependency(MyDependency)
```
## Concepts
The synthesis spec employs several concepts to enable flexible generation of datasets (a spec sketch follows the process loop below):
1. **Vocabulary**: All sequences are series of vocabulary items, which are integers. The frequency of each vocabulary item can be specified; for details see the [Vocabulary section](#vocabulary).
2. **Domain**: A dataset can be composed of a number of domains with different characteristics, like the length distribution and the **dependency** pattern (see below). This is similar to a natural-language pretraining corpus containing various kinds of data: news, code, arxiv papers, etc. Each domain has a `mixture_ratio` option which determines how many tokens it accounts for in the whole dataset.
3. **Dependency**: A domain is mostly defined by the dependency of its sequences, which is the occurrence pattern of tokens. For example, the sequence "abcdabcd" is defined by repeating its first half. It doesn't matter which sequence is repeated; the structure is what matters. We hypothesize that learning the dependency, by properly storing and retrieving tokens, is central to the various abilities of language models, such as in-context learning.
4. **Mapping**: Though dependency defines a domain, it needs to be realized as a series of tokens from the vocabulary, which is specified by the `mapping` option. Dependencies can be mapped according to their frequency in the sequence, and one can split or duplicate them to create multiple sequences from one series of dependencies.
The process is:
```
for domain in domains:
    dependencies = domain.dependency.make_dependency()
    # the dependencies are then mapped to vocabulary tokens (see Mapping)
```
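Putting the concepts together, a hedged sketch of a spec. The `vocabulary` and `domains` field names and the `Domain` and `Mapping` class names are assumptions for illustration; `total_token`, `mixture_ratio`, and the vocabulary/dependency classes appear elsewhere in this README.
```python
# Hypothetical field names; assuming all classes are importable from seqthetic.
spec = SynthesisSpec(
    name='demo',
    total_token=2000,
    vocabulary=ZipfVocabulary(size=1000, alpha=1, beta=2.7),
    domains=[
        Domain(
            mixture_ratio=0.5,
            dependency=FBMDependency(hurst=0.6),
            mapping=Mapping(sample_by='frequency', map_by='frequency'),
        ),
        Domain(
            mixture_ratio=0.5,
            dependency=RandomDependency(num_dependency=16, sequence_length=1000),
            mapping=Mapping(sample_by='random', map_by='random'),
        ),
    ],
)
```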
## Classes
### SynthesisSpec
### Dataset
### Vocabulary
We support the following vocabulary distributions:
1. **Zipf Vocabulary**: Zipf's law says the frequency of a word is inversely proportional to its rank in the frequency table; here we use the Zipf-Mandelbrot law for generality: $\text{frequency} \propto \frac{1}{(\text{rank}+b)^a}$ (the `alpha` and `beta` parameters in the example below presumably play the roles of $a$ and $b$; see the numeric sketch after it).
2. **Uniform Vocabulary**: each vocabulary item has the same frequency.
3. **Loglinear Vocabulary (TODO)**: applied in this [paper](https://arxiv.org/pdf/2203.10326.pdf).
4. **Corpus Vocabulary (TODO)**: vocabulary with each frequency specified, often calculated from a real corpus.
To create more realistic distributions, an optional `DistributionNoise` can be added to them. Noise can be `additive` or `multiplicative`.
For example:
```python
zipf_vocab = ZipfVocabulary(size=1000, alpha=1, beta=2.7)
uniform_vocab_with_noise = UniformVocabulary(
    size=2000,
    noise=DistributionNoise(
        type='additive',
        level=0.01,
    ),
)
```
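To sanity-check the Zipf-Mandelbrot formula above, a plain-numpy sketch (not seqthetic API), assuming `alpha` and `beta` correspond to $a$ and $b$:
```python
import numpy as np

# weights proportional to 1 / (rank + b)^a, normalized into frequencies
ranks = np.arange(1, 1001)   # ranks 1..1000, matching size=1000
a, b = 1.0, 2.7              # alpha, beta from the example above
weights = 1.0 / (ranks + b) ** a
frequencies = weights / weights.sum()
assert np.all(np.diff(frequencies) < 0)  # frequency decreases with rank
```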
### Dependency
We support the following dependency generators:
1. **FBMDependency**: The dependency is a discretized sample of fractional Brownian motion (fBm). This is inspired by the hypothesis that language possesses [fractal structure](https://arxiv.org/abs/2402.01825), and fractional Brownian motion is an easy way to construct fractal sequences with a given fractal metric, the [Hurst exponent](https://en.wikipedia.org/wiki/Hurst_exponent).
2. **RandomDependency**: The dependency is randomly sampled from a normal distribution. Mainly used as a baseline.
3. **FunctionDependency**: The dependency is a discretized function specified by the user. For example, one can use $\sin(x)$ to create a periodic dependency.
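As an illustration, a hedged construction of an fBm dependency; the `hurst` and `sequence_length` field names are taken from examples elsewhere in this README, and the values are arbitrary:
```python
# Whether FBMDependency takes exactly these fields is an assumption;
# hurst appears in the Domain Operation example, sequence_length in the
# Range example.
fbm_dep = FBMDependency(
    hurst=Range(min=0.5, max=0.8),  # from rougher to smoother fractal structure
    sequence_length=Range(min=200, max=1000),
)
```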
### Mapping
Mapping has the following options:
1. **sample_by**: How to sample vocabulary items. The choices are `frequency` and `random`, where `frequency` means sampling based on the frequency of vocabulary items, and `random` means sampling with no regard to frequency.
2. **map_by**: Strategy for mapping dependencies to vocabulary. The choices are `frequency` and `random`, where `frequency` means higher-frequency dependencies are mapped to sampled vocabulary items with higher probability, and `random` means mapping dependencies to vocabulary randomly.
For example, the dependency sequence `333221` has three dependency values: 1, 2, 3. For this sequence we sample three vocabulary items, `a: 0.3, b: 0.2, c: 0.1`, where the numbers are probabilities. Under the `frequency` mapping strategy, we map `3` to `a`, `2` to `b`, and `1` to `c`.
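The `frequency` case above can be reproduced with plain Python (not seqthetic API):
```python
from collections import Counter

dependency = [3, 3, 3, 2, 2, 1]
vocab_by_prob = ['a', 'b', 'c']  # sampled vocabulary, highest probability first

# rank dependency values by how often they occur: 3 (x3), 2 (x2), 1 (x1)
ranked = [value for value, _ in Counter(dependency).most_common()]
table = dict(zip(ranked, vocab_by_prob))   # {3: 'a', 2: 'b', 1: 'c'}
sequence = [table[v] for v in dependency]
assert sequence == ['a', 'a', 'a', 'b', 'b', 'c']
```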
Note: we don't consider mapping multiple dependencies to one vocabulary item or vice versa, as that would break the dependency structure; such variation can be specified more cleanly with more domains, or with `Range` in fields such as the Hurst exponent.
### Seed
Creating synthetic data involves a lot of random sampling, so to ensure reproducibility, we record seeds for random generators used by vocabulary sampling and dependency generation for each domain. We use `np.random.SeedSequence.entropy` to generate seeds.
The main method of `Seed` class is `get_rng`, which instantiates a numpy random generator for sampling:
```python
# get a random generator from the given seed
rng = seed.get_rng('dependency')
# get a list of random generators spawned from the given seed
rngs = seed.get_rng('dependency', 3)
assert len(rngs) == 3
# return_list=True guarantees a list, useful when the count is a variable
num_sequence = 5  # e.g. the number of sequences to generate
rngs = seed.get_rng('dependency', num_sequence, return_list=True)
assert isinstance(rngs, list)
```
### Range
When specifying dependencies, `Range` can be used in fields to specify a distribution over a range of values, improving diversity. A similar class, `FlexibleRange`, is used for fields that accept both a single number and a `Range`; single-number input is converted to a `Range`.
```python
# input with Range
dep = RandomDependency(
    num_dependency=Range(min=10, max=16),
    sequence_length=Range(min=200, max=1000),
)
# single-number input to a FlexibleRange field is converted to a Range
dep_num = RandomDependency(
    num_dependency=16,
    sequence_length=1000,
)
assert isinstance(dep_num.num_dependency, Range)
```
### Vary
The space of possible specs is immense, which makes it necessary to explore different combinations of parameters. The `vary` function creates, from a base `SynthesisSpec`, different specs with some parameters changed according to a `Variation`; these specs are saved to a `SynthesisSpecGroup`. You can save the group file and the specs separately.
### Variation
1. For varying `total_token`, you can use `compute_ops` like `Mul`, `Div`, `Add`, `Sub`. You can also specify a number directly:
```python
assert spec.total_token == 2000
group = vary(spec, Variation(total_token=[Mul(2), Add(2000), Div(2), Sub(1000), 5000]))
# base spec total_token multiplied by 2
assert group.specs[0].total_token == 4000
# base spec total_token increased by 2000
assert group.specs[1].total_token == 4000
# base spec total_token divided by 2
assert group.specs[2].total_token == 1000
# base spec total_token decreased by 1000
assert group.specs[3].total_token == 1000
# base spec total_token set to 5000
assert group.specs[4].total_token == 5000
```
2. For varying the `mixture_ratio` of domains, a list of lists of mixture ratios must be used. Each inner list must match the number of domains in the base spec:
```python
vary(spec, Variation(mixture=[[0.1, 0.3, 0.6], [0.2, 0.4, 0.4]]))
```
3. Domain operations are more varied and are deferred to [Domain Operation](#domain-operation).
### Domain Operation
There are several basic domain operations:
1. `Vary`: Vary the domain's dependency or mapping parameters.
2. `Insert`: Add a new domain at the specified position.
3. `Remove`: Remove the domain at the specified position.
4. `Replace`: Replace the domain at the specified position.
5. `Shuffle`: Shuffle the order of domains.
6. `ChangeSeed`: Change the seed of a domain.
One can choose between two combination patterns:
1. `Zip`: like Python's `zip` function; for example, `zip([1, 2], [3, 4])` yields `[1, 3], [2, 4]`. Useful for conducting multiple actions on one domain at the same time.
2. `Product`: like `itertools.product`; for example, `product([1, 2], [3, 4])` yields `[1, 3], [1, 4], [2, 3], [2, 4]`. Useful for conducting multiple actions on different domains at the same time.
For example:
```python
Zip(
    ops=[
        Vary(domain=0, dependency={
            'hurst': [0.5, 0.6]
        }),
        Vary(domain=1, dependency={
            'num_dependency': [Range(min=10, max=20)]
        }),
    ]
)
```
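By analogy, a hedged `Product` combination under the semantics described above; whether `Product` takes the same `ops` argument as `Zip` is an assumption.
```python
# Hypothetical: every variation of domain 0 is combined with every
# variation of domain 1, rather than being paired element-wise as in Zip.
Product(
    ops=[
        Vary(domain=0, dependency={'hurst': [0.5, 0.6]}),
        Vary(domain=1, dependency={'num_dependency': [Range(min=10, max=20)]}),
    ]
)
```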
## Roadmap
- [ ] **tests**
  - [ ] **vary stress test**
  - [ ] spec reproducibility
  - [ ] dependency combination
  - [ ] function dependency
  - [ ] file related
- dependencies
  - [-] **dynamically register dependency** (spec metadata)
  - [ ] **add seq_op dependencies from synthetic_pretraining**
  - [ ] bracket, dyck
  - [ ] LIME
  - [ ] DFS automata/transducer deduction/induction
  - [ ] arithmetic
  - [ ] math derivations
  - [ ] cellular automata
  - [ ] dynamical system
  - [ ] discretized IFS
  - [ ] sine function and variants
  - [ ] multifractional brownian motion
  - [ ] fractional brownian field
- [ ] **merge**
- [-] spec_group
  - [-] generate
  - [-] save
- [ ] notebooks
  - [ ] fractal, fbm, mbm, discretize, bincount
  - [ ] dependency, frequency
  - [ ] vocab
  - [ ] mapping
- [ ] vocab
  - [ ] loglinear
  - [ ] corpus vocab
  - [ ] domain vocab
  - [ ] **evolution**
  - [ ] synonyms, antonyms, supernyms
- [ ] mapping
  - [ ] **multiple**
  - [ ] **clip**
- dataloader related?
- fix Range validation?