# GLiREL: Generalist and Lightweight model for Zero-Shot Relation Extraction
GLiREL is a Relation Extraction model capable of classifying unseen relations given the entities within a text. It builds upon the excellent work done by Urchade Zaratiana, Nadi Tomeh, Pierre Holat, and Thierry Charnois on the [GLiNER](https://github.com/urchade/GLiNER) library, which enables efficient zero-shot Named Entity Recognition.
* GLiNER paper: [GLiNER: Generalist Model for Named Entity Recognition using Bidirectional Transformer](https://arxiv.org/abs/2311.08526)
* Train a Zero-shot model: <a href="https://colab.research.google.com/github/jackboyla/GLiREL/blob/main/train.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
<!-- <img src="demo.jpg" alt="Demo Image" width="50%"/> -->
---
# Installation
```bash
pip install glirel
```
## Usage
Once you've installed the GLiREL library, you can import the `GLiREL` class. Load a pretrained model with `GLiREL.from_pretrained` and predict relations with `predict_relations`.
```python
from glirel import GLiREL
import spacy

model = GLiREL.from_pretrained("jackboyla/glirel_beta")

nlp = spacy.load('en_core_web_sm')

text = 'Derren Nesbitt had a history of being cast in "Doctor Who", having played villainous warlord Tegana in the 1964 First Doctor serial "Marco Polo".'
doc = nlp(text)
tokens = [token.text for token in doc]

labels = ['country of origin', 'licensed to broadcast to', 'father', 'followed by', 'characters']

ner = [[26, 27, 'PERSON', 'Marco Polo'], [22, 23, 'Q2989412', 'First Doctor']]  # 'type' is not used -- it can be any string!

relations = model.predict_relations(tokens, labels, threshold=0.0, ner=ner, top_k=1)

print('Number of relations:', len(relations))

sorted_data_desc = sorted(relations, key=lambda x: x['score'], reverse=True)
print("\nDescending Order by Score:")
for item in sorted_data_desc:
    print(f"{item['head_text']} --> {item['label']} --> {item['tail_text']} | score: {item['score']}")
```
### Expected Output
```
Number of relations: 2
Descending Order by Score:
{'head_pos': [26, 28], 'tail_pos': [22, 24], 'head_text': ['Marco', 'Polo'], 'tail_text': ['First', 'Doctor'], 'label': 'characters', 'score': 0.9923334121704102}
{'head_pos': [22, 24], 'tail_pos': [26, 28], 'head_text': ['First', 'Doctor'], 'tail_text': ['Marco', 'Polo'], 'label': 'characters', 'score': 0.9915636777877808}
```
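In the example above the `ner` spans are written out by hand. If you already run spaCy's NER, you can build the same list from `doc.ents`. Below is a minimal sketch, assuming the `[start, end, type, text]` format shown above with inclusive end indices (spaCy's `ent.end` is exclusive, hence the `- 1`):

```python
# Continuing the example above (reuses `doc`, `tokens`, `labels`, `model`).
# Sketch only: builds the `ner` input from spaCy entities instead of
# hard-coding it. End indices are inclusive, so spaCy's ent.end needs -1.
ner = [[ent.start, ent.end - 1, ent.label_, ent.text] for ent in doc.ents]

relations = model.predict_relations(tokens, labels, threshold=0.0, ner=ner, top_k=1)
```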
## Constrain labels
In practice, we usually want to constrain which entity types can appear as the head and/or tail of a relation. This is already implemented in GLiREL:
```python
labels = {"glirel_labels": {
    'co-founder': {"allowed_head": ["PERSON"], "allowed_tail": ["ORG"]},
    'no relation': {},  # head and tail can be any entity type
    'country of origin': {"allowed_head": ["PERSON", "ORG"], "allowed_tail": ["LOC", "GPE"]},
    'parent': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'located in or next to body of water': {"allowed_head": ["LOC", "GPE", "FAC"], "allowed_tail": ["LOC", "GPE"]},
    'spouse': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'child': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'founder': {"allowed_head": ["PERSON"], "allowed_tail": ["ORG"]},
    'founded on date': {"allowed_head": ["ORG"], "allowed_tail": ["DATE"]},
    'headquartered in': {"allowed_head": ["ORG"], "allowed_tail": ["LOC", "GPE", "FAC"]},
    'acquired by': {"allowed_head": ["ORG"], "allowed_tail": ["ORG", "PERSON"]},
    'subsidiary of': {"allowed_head": ["ORG"], "allowed_tail": ["ORG", "PERSON"]},
    }
}
```
## Usage with spaCy
You can also load GLiREL into a regular spaCy NLP pipeline. Here's an example using an English pipeline.
```python
import spacy
import glirel

# Load a blank spaCy model or an existing one
nlp = spacy.load('en_core_web_sm')

# Add the GLiREL component to the pipeline
nlp.add_pipe("glirel", after="ner")

# Now you can use the pipeline with the GLiREL component
text = "Apple Inc. was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976. The company is headquartered in Cupertino, California."

labels = {"glirel_labels": {
    'co-founder': {"allowed_head": ["PERSON"], "allowed_tail": ["ORG"]},
    'country of origin': {"allowed_head": ["PERSON", "ORG"], "allowed_tail": ["LOC", "GPE"]},
    'licensed to broadcast to': {"allowed_head": ["ORG"]},
    'no relation': {},
    'parent': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'followed by': {"allowed_head": ["PERSON", "ORG"], "allowed_tail": ["PERSON", "ORG"]},
    'located in or next to body of water': {"allowed_head": ["LOC", "GPE", "FAC"], "allowed_tail": ["LOC", "GPE"]},
    'spouse': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'child': {"allowed_head": ["PERSON"], "allowed_tail": ["PERSON"]},
    'founder': {"allowed_head": ["PERSON"], "allowed_tail": ["ORG"]},
    'headquartered in': {"allowed_head": ["ORG"], "allowed_tail": ["LOC", "GPE", "FAC"]},
    'acquired by': {"allowed_head": ["ORG"], "allowed_tail": ["ORG", "PERSON"]},
    'subsidiary of': {"allowed_head": ["ORG"], "allowed_tail": ["ORG", "PERSON"]},
    }
}

# Add the labels to the pipeline at inference time
docs = list(nlp.pipe([(text, labels)], as_tuples=True))
relations = docs[0][0]._.relations

print('Number of relations:', len(relations))

sorted_data_desc = sorted(relations, key=lambda x: x['score'], reverse=True)
print("\nDescending Order by Score:")
for item in sorted_data_desc:
    print(f"{item['head_text']} --> {item['label']} --> {item['tail_text']} | score: {item['score']}")
```
### Expected Output
```
Number of relations: 5
Descending Order by Score:
['Apple', 'Inc.'] --> headquartered in --> ['California'] | score: 0.9854260683059692
['Apple', 'Inc.'] --> headquartered in --> ['Cupertino'] | score: 0.9569844603538513
['Steve', 'Wozniak'] --> co-founder --> ['Apple', 'Inc.'] | score: 0.09025496244430542
['Steve', 'Jobs'] --> co-founder --> ['Apple', 'Inc.'] | score: 0.08805803954601288
['Ronald', 'Wayne'] --> co-founder --> ['Apple', 'Inc.'] | score: 0.07996643334627151
```
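Because both orderings of a pair can score highly (as in the first example, where `characters` is returned for both (Marco Polo, First Doctor) directions), a simple post-processing step is to keep only the top-scoring prediction per (head, tail) pair. A minimal sketch, assuming `relations` is the list of dicts produced above:

```python
# Keep only the top-scoring prediction for each (head, tail) pair.
# Sketch only: assumes `relations` is the list of dicts shown above,
# each containing 'head_text', 'tail_text', 'label' and 'score'.
best_per_pair = {}
for r in relations:
    key = (tuple(r['head_text']), tuple(r['tail_text']))
    if key not in best_per_pair or r['score'] > best_per_pair[key]['score']:
        best_per_pair[key] = r

for r in sorted(best_per_pair.values(), key=lambda x: x['score'], reverse=True):
    print(f"{r['head_text']} --> {r['label']} --> {r['tail_text']} | score: {r['score']:.3f}")
```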
## To run experiments
* FewRel: ~56k examples
* WikiZSL: ~85k examples
```bash
# few_rel
cd data
python process_few_rel.py
cd ..
# adjust config
python train.py --config config_few_rel.yaml
```
```bash
# wiki_zsl
cd data
python process_wiki_zsl.py
cd ..
# adjust config
python train.py --config config_wiki_zsl.yaml
```
## Example training data
NOTE that the entity indices are inclusive, i.e. `"Binsey"` is `[7, 7]`. This differs from spaCy, where the end index is exclusive (spaCy would give `[7, 8]` in this case). A conversion sketch follows the example below.
JSONL file:
```json
{
  "ner": [
    [7, 7, "Q4914513", "Binsey"],
    [11, 12, "Q19686", "River Thames"]
  ],
  "relations": [
    {
      "head": {"mention": "Binsey", "position": [7, 7], "type": "LOC"},  # 'type' is not used -- it can be any string!
      "tail": {"mention": "River Thames", "position": [11, 12], "type": "Q19686"},
      "relation_text": "located in or next to body of water"
    }
  ],
  "tokenized_text": ["The", "race", "took", "place", "between", "Godstow", "and", "Binsey", "along", "the", "Upper", "River", "Thames", "."]
},
{
  "ner": [
    [9, 10, "Q4386693", "Legislative Assembly"],
    [1, 3, "Q1848835", "Parliament of Victoria"]
  ],
  "relations": [
    {
      "head": {"mention": "Legislative Assembly", "position": [9, 10], "type": "Q4386693"},
      "tail": {"mention": "Parliament of Victoria", "position": [1, 3], "type": "Q1848835"},
      "relation_text": "part of"
    }
  ],
  "tokenized_text": ["The", "Parliament", "of", "Victoria", "consists", "of", "the", "lower", "house", "Legislative", "Assembly", ",", "the", "upper", "house", "Legislative", "Council", "and", "the", "Queen", "of", "Australia", "."]
}
```
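To produce records in this format from spaCy annotations, the end index of each entity must be shifted from spaCy's exclusive convention to the inclusive one used above. Here is a minimal sketch; the helper name `spacy_doc_to_record` and the relation triples are illustrative, not part of the library:

```python
import json
import spacy

nlp = spacy.load("en_core_web_sm")

def spacy_doc_to_record(doc, relation_triples):
    """Convert a spaCy Doc plus (head_text, relation_text, tail_text) triples
    into the record format shown above. Positions use inclusive end indices,
    i.e. spaCy's ent.end - 1. Illustrative helper, not part of GLiREL."""
    ents = {ent.text: ent for ent in doc.ents}  # assumes entity texts are unique in the doc
    record = {
        "ner": [[ent.start, ent.end - 1, ent.label_, ent.text] for ent in doc.ents],
        "relations": [],
        "tokenized_text": [token.text for token in doc],
    }
    for head_text, relation_text, tail_text in relation_triples:
        head, tail = ents[head_text], ents[tail_text]  # raises KeyError if spaCy missed an entity
        record["relations"].append({
            "head": {"mention": head.text, "position": [head.start, head.end - 1], "type": head.label_},
            "tail": {"mention": tail.text, "position": [tail.start, tail.end - 1], "type": tail.label_},
            "relation_text": relation_text,
        })
    return record

doc = nlp("Apple Inc. is headquartered in Cupertino.")
record = spacy_doc_to_record(doc, [("Apple Inc.", "headquartered in", "Cupertino")])

with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```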
## License
[GLiREL](https://github.com/jackboyla/GLiREL) by [Jack Boylan](https://github.com/jackboyla) is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/?ref=chooser-v1).
<a href="https://creativecommons.org/licenses/by-nc-sa/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer">
<img src="https://mirrors.creativecommons.org/presskit/icons/cc.svg?ref=chooser-v1" alt="CC Logo" style="height: 20px; margin-right: 5px; vertical-align: text-bottom;">
<img src="https://mirrors.creativecommons.org/presskit/icons/by.svg?ref=chooser-v1" alt="BY Logo" style="height: 20px; margin-right: 5px; vertical-align: text-bottom;">
<img src="https://mirrors.creativecommons.org/presskit/icons/nc.svg?ref=chooser-v1" alt="NC Logo" style="height: 20px; margin-right: 5px; vertical-align: text-bottom;">
<img src="https://mirrors.creativecommons.org/presskit/icons/sa.svg?ref=chooser-v1" alt="SA Logo" style="height: 20px; margin-right: 5px; vertical-align: text-bottom;">
</a>