Python API for Pathling
=======================
This is the Python API for [Pathling](https://pathling.csiro.au). It provides a
set of tools that aid the use of FHIR terminology services and FHIR data within
Python applications and data science workflows.
[View the API documentation →](https://pathling.csiro.au/docs/python/pathling.html)
## Installation
Prerequisites:
- Python 3.8+ with pip
To install, run this command:
```bash
pip install pathling
```
## Encoders
The Python library features a set of encoders for converting FHIR data into
Spark dataframes.
### Reading in NDJSON
[NDJSON](http://ndjson.org) is a format commonly used for bulk FHIR data, and
consists of files (one per resource type) that contain one JSON resource per
line.
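Independently of Spark, the line-per-resource structure is easy to see with plain Python; a minimal sketch using hypothetical Patient resources (this is not the Pathling API, just the format):

```python
import json

# Two hypothetical lines from a Patient NDJSON file: one complete JSON resource per line.
ndjson_content = (
    '{"resourceType": "Patient", "id": "1", "gender": "female"}\n'
    '{"resourceType": "Patient", "id": "2", "gender": "male"}\n'
)

# Each non-empty line parses independently, which is what makes the format
# easy to split and process in parallel.
resources = [json.loads(line) for line in ndjson_content.splitlines() if line]
print([r["id"] for r in resources])  # → ['1', '2']
```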
```python
from pathling import PathlingContext
pc = PathlingContext.create()
# Read each line from the NDJSON into a row within a Spark data set.
ndjson_dir = '/some/path/ndjson/'
json_resources = pc.spark.read.text(ndjson_dir)
# Convert the data set of strings into a structured FHIR data set.
patients = pc.encode(json_resources, 'Patient')
# Query the encoded patient resources.
patients.select('id', 'gender', 'birthDate').show()
```
### Reading in Bundles
The FHIR [Bundle](https://hl7.org/fhir/R4/bundle.html) resource can contain a
collection of FHIR resources. It is often used to represent a set of related
resources, perhaps generated as part of the same event.
```python
from pathling import PathlingContext
pc = PathlingContext.create()
# Read each Bundle into a row within a Spark data set.
bundles_dir = '/some/path/bundles/'
bundles = pc.spark.read.text(bundles_dir, wholetext=True)
# Convert the data set of strings into a structured FHIR data set.
patients = pc.encode_bundle(bundles, 'Patient')
# JSON is the default format; XML Bundles can be encoded by specifying the input type.
# patients = pc.encode_bundle(bundles, 'Patient', input_type=MimeType.FHIR_XML)
# Query the encoded patient resources.
patients.select('id', 'gender', 'birthDate').show()
```
## Terminology functions
The library also provides a set of functions for querying a FHIR terminology
server from within your queries and transformations.
### Value set membership
The `member_of` function can be used to test the membership of a code within a
[FHIR value set](https://hl7.org/fhir/valueset.html). This can be used with both
explicit value sets (i.e. those that have been pre-defined and loaded into the
terminology server) and implicit value sets (e.g. SNOMED CT
[Expression Constraint Language](http://snomed.org/ecl)).
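Conceptually, the terminology server expands the value set into a set of codes, and each input code is tested against that set; a toy sketch with hypothetical codes (the real expansion and matching happen server-side):

```python
# Hypothetical expansion of a value set, as a terminology server might return it.
viral_infection_codes = {"444814009", "195662009"}

diagnosis_codes = ["65363002", "444814009", "16114001"]

# member_of conceptually maps each code to a boolean membership flag.
flags = [code in viral_infection_codes for code in diagnosis_codes]
print(flags)  # → [False, True, False]
```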
In this example, we take a list of SNOMED CT diagnosis codes and
create a new column which shows which are viral infections. We use an ECL
expression to define viral infection as a disease with a pathological process
of "Infectious process", and a causative agent of "Virus".
```python
from pathling import to_coding, to_ecl_value_set

result = pc.member_of(csv, to_coding(csv.CODE, 'http://snomed.info/sct'),
                      to_ecl_value_set("""
                        << 64572001|Disease| : (
                          << 370135005|Pathological process| = << 441862004|Infectious process|,
                          << 246075003|Causative agent| = << 49872002|Virus|
                        )
                      """), 'VIRAL_INFECTION')
result.select('CODE', 'DESCRIPTION', 'VIRAL_INFECTION').show()
```
Results in:
| CODE | DESCRIPTION | VIRAL_INFECTION |
|-----------|---------------------------|-----------------|
| 65363002 | Otitis media | false |
| 16114001 | Fracture of ankle | false |
| 444814009 | Viral sinusitis | true |
| 444814009 | Viral sinusitis | true |
| 43878008 | Streptococcal sore throat | false |
### Concept translation
The `translate` function can be used to translate codes from one code system to
another using maps that are known to the terminology server. In this example, we
translate our SNOMED CT diagnosis codes into Read CTV3.
```python
from pathling import to_coding

result = pc.translate(csv, to_coding(csv.CODE, 'http://snomed.info/sct'),
                      'http://snomed.info/sct/900000000000207008?fhir_cm='
                      '900000000000497000',
                      output_column_name='READ_CODE')
result = result.withColumn('READ_CODE', result.READ_CODE.code)
result.select('CODE', 'DESCRIPTION', 'READ_CODE').show()
```
Results in:
| CODE | DESCRIPTION | READ_CODE |
|-----------|---------------------------|-----------|
| 65363002 | Otitis media | X00ik |
| 16114001 | Fracture of ankle | S34.. |
| 444814009 | Viral sinusitis | XUjp0 |
| 444814009 | Viral sinusitis | XUjp0 |
| 43878008 | Streptococcal sore throat | A340. |
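Conceptually, a concept map is a lookup from source codes to target codes; a minimal sketch using the mappings from the table above, plus a hypothetical unmapped code (the server resolves the real map):

```python
# SNOMED CT -> Read CTV3 entries, taken from the results table above.
concept_map = {
    "65363002": "X00ik",   # Otitis media
    "16114001": "S34..",   # Fracture of ankle
}

codes = ["65363002", "16114001", "999999999"]  # the last code is hypothetical and unmapped

# translate conceptually looks each code up, yielding no result where no mapping exists.
translated = [concept_map.get(code) for code in codes]
print(translated)  # → ['X00ik', 'S34..', None]
```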
### Subsumption testing
A subsumption test asks whether one code is equal to, or a subtype of, another
code.
For example, a code representing "ankle fracture" is subsumed
by another code representing "fracture". The "fracture" code is more general,
and testing subsumption against it can help us find other codes representing
the different subtypes of fracture.
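In a code system, subsumption amounts to reachability over is-a relationships; a toy sketch over a hypothetical hierarchy (the terminology server answers this for real code systems such as SNOMED CT):

```python
# Hypothetical is-a relationships: child concept -> parent concept.
parents = {
    "ankle fracture": "fracture of lower limb",
    "fracture of lower limb": "fracture",
}

def subsumes(general, specific):
    """True if `general` equals `specific` or is one of its ancestors."""
    while specific is not None:
        if specific == general:
            return True
        specific = parents.get(specific)
    return False

print(subsumes("fracture", "ankle fracture"))  # → True
print(subsumes("ankle fracture", "fracture"))  # → False
```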
The `subsumes` function allows us to perform subsumption testing on codes within
our data. The order of the left and right operands can be reversed to query
whether a code is "subsumed by" another code.
```python
from pathling import Coding, to_coding

# 232208008 |Ear, nose and throat disorder|
left_coding = Coding('http://snomed.info/sct', '232208008')
right_coding_column = to_coding(csv.CODE, 'http://snomed.info/sct')

result = pc.subsumes(csv, 'IS_ENT',
                     left_coding=left_coding,
                     right_coding_column=right_coding_column)
result.select('CODE', 'DESCRIPTION', 'IS_ENT').show()
```
Results in:
| CODE | DESCRIPTION | IS_ENT |
|-----------|-------------------|--------|
| 65363002 | Otitis media | true |
| 16114001 | Fracture of ankle | false |
| 444814009 | Viral sinusitis | true |
### Retrieving properties
Some terminologies contain additional properties that are associated with codes.
You can query these properties using the `property_of` function.
There is also a `display` function that can be used to retrieve the preferred
display term for each code.
```python
from pathling import PropertyType, display, property_of, to_snomed_coding

# Get the parent codes for each code in the dataset.
parents = csv.withColumn(
    "PARENTS",
    property_of(to_snomed_coding(csv.CODE), "parent", PropertyType.CODE),
)
# Split each parent code into a separate row.
exploded_parents = parents.selectExpr(
    "CODE", "DESCRIPTION", "explode_outer(PARENTS) AS PARENT"
)
# Retrieve the preferred term for each parent code.
with_displays = exploded_parents.withColumn(
    "PARENT_DISPLAY", display(to_snomed_coding(exploded_parents.PARENT))
)
```
Results in:
| CODE | DESCRIPTION | PARENT | PARENT_DISPLAY |
|-----------|---------------------------|-----------|--------------------------------------------|
| 65363002 | Otitis media | 43275000 | Otitis |
| 65363002 | Otitis media | 68996008 | Disorder of middle ear |
| 16114001 | Fracture of ankle | 125603006 | Injury of ankle |
| 16114001 | Fracture of ankle | 46866001 | Fracture of lower limb |
| 444814009 | Viral sinusitis | 36971009 | Sinusitis |
| 444814009 | Viral sinusitis | 281794004 | Viral upper respiratory tract infection |
| 444814009 | Viral sinusitis | 363166002 | Infective disorder of head |
| 444814009 | Viral sinusitis | 36971009 | Sinusitis |
| 444814009 | Viral sinusitis | 281794004 | Viral upper respiratory tract infection |
| 444814009 | Viral sinusitis | 363166002 | Infective disorder of head |
### Retrieving designations
Some terminologies contain additional display terms for codes. These can be used
for language translations, synonyms, and more. You can query these terms using the `designation` function.
```python
from pathling import Coding, designation, to_snomed_coding

# Get the synonyms for each code in the dataset.
synonyms = csv.withColumn(
    "SYNONYMS",
    designation(to_snomed_coding(csv.CODE), Coding.of_snomed("900000000000013009")),
)
# Split each synonym into a separate row.
exploded_synonyms = synonyms.selectExpr(
    "CODE", "DESCRIPTION", "explode_outer(SYNONYMS) AS SYNONYM"
)
```
Results in:
| CODE | DESCRIPTION | SYNONYM |
|-----------|--------------------------------------|--------------------------------------------|
| 65363002 | Otitis media | OM - Otitis media |
| 16114001 | Fracture of ankle | Ankle fracture |
| 16114001 | Fracture of ankle | Fracture of distal end of tibia and fibula |
| 444814009 | Viral sinusitis (disorder) | NULL |
| 444814009 | Viral sinusitis (disorder) | NULL |
| 43878008 | Streptococcal sore throat (disorder) | Septic sore throat |
| 43878008 | Streptococcal sore throat (disorder) | Strep throat |
| 43878008 | Streptococcal sore throat (disorder) | Strept throat |
| 43878008 | Streptococcal sore throat (disorder) | Streptococcal angina |
| 43878008 | Streptococcal sore throat (disorder) | Streptococcal pharyngitis |
### Terminology server authentication
Pathling can be configured to connect to a protected terminology server by
supplying a set of OAuth2 client credentials and a token endpoint.
Here is an example of how to authenticate to
the [NHS terminology server](https://ontology.nhs.uk/):
```python
from pathling import PathlingContext

pc = PathlingContext.create(
    terminology_server_url='https://ontology.nhs.uk/production1/fhir',
    token_endpoint='https://ontology.nhs.uk/authorisation/auth/realms/nhs-digital-terminology/protocol/openid-connect/token',
    client_id='[client ID]',
    client_secret='[client secret]'
)
```
## Installation in Databricks
To make the Pathling library available within notebooks, navigate to the
"Compute" section and click on the cluster. Click on the "Libraries" tab, and
click "Install new".
Install both the `pathling` PyPI package and the
`au.csiro.pathling:library-api` Maven package. Once the cluster has been
restarted, the libraries should be available for import and use within all
notebooks.
By default, Databricks uses Java 8 within its clusters, while Pathling requires
Java 17. To enable Java 17 support within your cluster, navigate to __Advanced
Options > Spark > Environment Variables__ and add the following:
```bash
JNAME=zulu17-ca-amd64
```
See the Databricks documentation on
[Libraries](https://docs.databricks.com/libraries/index.html) for more
information.
## Spark cluster configuration
If you are running your own Spark cluster, or using a Docker image (such as
[jupyter/pyspark-notebook](https://hub.docker.com/r/jupyter/pyspark-notebook)),
you will need to configure Pathling as a Spark package.
You can do this by adding the following to your `spark-defaults.conf` file:
```
spark.jars.packages au.csiro.pathling:library-api:[some version]
```
See the [Configuration](https://spark.apache.org/docs/latest/configuration.html)
page of the Spark documentation for more information about `spark.jars.packages`
and other related configuration options.
To create a Pathling notebook Docker image, your `Dockerfile` might look like
this:
```dockerfile
FROM jupyter/pyspark-notebook
USER root
RUN echo "spark.jars.packages au.csiro.pathling:library-api:[some version]" >> /usr/local/spark/conf/spark-defaults.conf
USER ${NB_UID}
RUN pip install --quiet --no-cache-dir pathling && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
```
Pathling is copyright © 2018-2023, Commonwealth Scientific and Industrial
Research Organisation (CSIRO) ABN 41 687 119 230. Licensed under
the [Apache License, version 2.0](https://www.apache.org/licenses/LICENSE-2.0).