==================
HEPData Validator
==================
.. image:: https://github.com/HEPData/hepdata-validator/workflows/Continuous%20Integration/badge.svg?branch=main
:target: https://github.com/HEPData/hepdata-validator/actions?query=branch%3Amain
:alt: GitHub Actions Build Status
.. image:: https://coveralls.io/repos/github/HEPData/hepdata-validator/badge.svg?branch=main
:target: https://coveralls.io/github/HEPData/hepdata-validator?branch=main
:alt: Coveralls Status
.. image:: https://img.shields.io/github/license/HEPData/hepdata-validator.svg
:target: https://github.com/HEPData/hepdata-validator/blob/main/LICENSE.txt
:alt: License
.. image:: https://img.shields.io/github/release/hepdata/hepdata-validator.svg?maxAge=2592000
:target: https://github.com/HEPData/hepdata-validator/releases
:alt: GitHub Releases
.. image:: https://img.shields.io/pypi/v/hepdata-validator
:target: https://pypi.org/project/hepdata-validator/
:alt: PyPI Version
.. image:: https://img.shields.io/github/issues/hepdata/hepdata-validator.svg?maxAge=2592000
:target: https://github.com/HEPData/hepdata-validator/issues
:alt: GitHub Issues
.. image:: https://readthedocs.org/projects/hepdata-validator/badge/?version=latest
:target: https://hepdata-validator.readthedocs.io/en/latest/?badge=latest
:alt: Documentation Status
JSON schema and validation code (in Python 3) for HEPData submissions
* Documentation: https://hepdata-validator.readthedocs.io
Installation
------------
If you can, install `LibYAML <https://pyyaml.org/wiki/LibYAML>`_ (a C library for parsing and emitting YAML) on your machine.
This allows PyYAML to use the C-based ``CSafeLoader`` (instead of the pure-Python ``SafeLoader``) for faster loading of YAML files.
The speed-up is negligible for small files, but markedly noticeable on larger documents.
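To confirm which loader PyYAML will actually use, a quick check (assuming ``pyyaml`` is installed) is:

```python
import yaml

# CSafeLoader only exists when PyYAML was built against LibYAML;
# fall back to the pure-Python SafeLoader otherwise.
loader = getattr(yaml, "CSafeLoader", yaml.SafeLoader)
print(loader.__name__)
```

If this prints ``SafeLoader`` rather than ``CSafeLoader``, your ``pyyaml`` was built without the LibYAML bindings.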
Install from `PyPI <https://pypi.org/project/hepdata-validator/>`_ using ``pip``:
.. code:: bash
$ pip install --user hepdata-validator
$ hepdata-validate --help
If you would like to use LibYAML on an M1 Mac, an additional step may be needed to ensure ``pyyaml`` is built
with the LibYAML bindings. Run the following after installing LibYAML via Homebrew:
.. code:: bash
$ LDFLAGS="-L$(brew --prefix)/lib" CFLAGS="-I$(brew --prefix)/include" pip install --global-option="--with-libyaml" --force pyyaml
Developers
==========
Developers should install from GitHub in a `virtual environment <https://docs.python.org/3/tutorial/venv.html>`_:
.. code:: bash
$ git clone https://github.com/HEPData/hepdata-validator
$ cd hepdata-validator
$ python3.9 -m venv venv
$ source venv/bin/activate
(venv)$ pip install --upgrade pip
(venv)$ pip install --upgrade -e ".[all]"
Tests should be run both with and without LibYAML, as error messages from the different YAML parsers vary:
.. code:: bash
(venv) $ USE_LIBYAML=True pytest testsuite
(venv) $ USE_LIBYAML=False pytest testsuite
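A minimal sketch of how an environment variable like ``USE_LIBYAML`` can switch between the two loaders (illustrative only; the package's internal handling may differ):

```python
import os
import yaml

# Illustrative: honour a USE_LIBYAML-style switch, falling back to the
# pure-Python loader when the C-backed one is unavailable.
if os.environ.get("USE_LIBYAML", "True") == "True":
    loader = getattr(yaml, "CSafeLoader", yaml.SafeLoader)
else:
    loader = yaml.SafeLoader

print(yaml.load("key: value", Loader=loader))  # {'key': 'value'}
```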
Usage
-----
The ``hepdata-validator`` package allows you to validate (via the command line or Python):
* A full directory of submission and data files
* An archive file (.zip, .tar, .tar.gz, .tgz) containing all of the files (`full details <https://hepdata-submission.readthedocs.io/en/latest/introduction.html>`_)
* A `single .yaml or .yaml.gz file <https://hepdata-submission.readthedocs.io/en/latest/single_yaml.html>`_ (but *not* ``submission.yaml`` or a YAML data file)
* A ``submission.yaml`` file or individual YAML data file (via Python only, not via the command line)
The same package is used to validate uploads made to `hepdata.net <https://www.hepdata.net>`_, so validating
offline first is an efficient way to check that your submission is valid before uploading.
Command line
============
Installing the ``hepdata-validator`` package adds the command ``hepdata-validate`` to your path, which allows you to validate a
`HEPData submission <https://hepdata-submission.readthedocs.io/en/latest/introduction.html>`_ offline.
Examples
^^^^^^^^
To validate a submission comprising multiple files in the current directory:
.. code:: bash
$ hepdata-validate
To validate a submission comprising multiple files in another directory:
.. code:: bash
$ hepdata-validate -d ../TestHEPSubmission
To validate an archive file (.zip, .tar, .tar.gz, .tgz) in the current directory:
.. code:: bash
$ hepdata-validate -a TestHEPSubmission.zip
To validate a single YAML file in the current directory:
.. code:: bash
$ hepdata-validate -f single_yaml_file.yaml
Usage options
^^^^^^^^^^^^^
.. code:: bash
$ hepdata-validate --help
Usage: hepdata-validate [OPTIONS]
Offline validation of submission.yaml and YAML data files. Can check either
a directory, an archive file, or the single YAML file format.
Options:
-d, --directory TEXT Directory to check (defaults to current working
directory)
-f, --file TEXT Single .yaml or .yaml.gz file (but not submission.yaml
or a YAML data file) to check - see https://hepdata-
submission.readthedocs.io/en/latest/single_yaml.html.
(Overrides directory)
-a, --archive TEXT Archive file (.zip, .tar, .tar.gz, .tgz) to check.
(Overrides directory and file)
--help Show this message and exit.
Python
======
Validating a full submission
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To validate a full submission, instantiate a ``FullSubmissionValidator`` object:
.. code:: python
from hepdata_validator.full_submission_validator import FullSubmissionValidator, SchemaType
full_submission_validator = FullSubmissionValidator()
# validate a directory
is_dir_valid = full_submission_validator.validate(directory='TestHEPSubmission')
# or uncomment to validate an archive file
# is_archive_valid = full_submission_validator.validate(archive='TestHEPSubmission.zip')
# or uncomment to validate a single file
# is_file_valid = full_submission_validator.validate(file='single_yaml_file.yaml')
# if there are any error messages, they are retrievable through this call
full_submission_validator.get_messages()
# the error messages can be printed for each file
full_submission_validator.print_errors('submission.yaml')
# the list of valid files can be retrieved via the valid_files property, which is a
# dict mapping SchemaType (e.g. SUBMISSION, DATA, SINGLE_YAML, REMOTE) to lists of
# valid files
full_submission_validator.valid_files[SchemaType.SUBMISSION]
full_submission_validator.valid_files[SchemaType.DATA]
# full_submission_validator.valid_files[SchemaType.SINGLE_YAML]
# if a remote schema is used, valid_files is a list of tuples (schema, file)
# full_submission_validator.valid_files[SchemaType.REMOTE]
# the list of valid files can be printed
full_submission_validator.print_valid_files()
Validating individual files
^^^^^^^^^^^^^^^^^^^^^^^^^^^
To validate submission files, instantiate a ``SubmissionFileValidator`` object:
.. code:: python
from hepdata_validator.submission_file_validator import SubmissionFileValidator
submission_file_validator = SubmissionFileValidator()
submission_file_path = 'submission.yaml'
# the validate method takes a string representing the file path
is_valid_submission_file = submission_file_validator.validate(file_path=submission_file_path)
# if there are any error messages, they are retrievable through this call
submission_file_validator.get_messages()
# the error messages can be printed
submission_file_validator.print_errors(submission_file_path)
To validate data files, instantiate a ``DataFileValidator`` object:
.. code:: python
from hepdata_validator.data_file_validator import DataFileValidator
data_file_validator = DataFileValidator()
# the validate method takes a string representing the file path
data_file_validator.validate(file_path='data.yaml')
# if there are any error messages, they are retrievable through this call
data_file_validator.get_messages()
# the error messages can be printed
data_file_validator.print_errors('data.yaml')
Optionally, if you have already loaded the YAML document, you can pass it in as the ``data`` argument.
You must still pass the ``file_path``, since it is used as the key
in the error-message lookup map.
.. code:: python
from hepdata_validator.data_file_validator import DataFileValidator
import yaml
file_contents = yaml.safe_load(open('data.yaml', 'r'))
data_file_validator = DataFileValidator()
data_file_validator.validate(file_path='data.yaml', data=file_contents)
data_file_validator.get_messages('data.yaml')
data_file_validator.print_errors('data.yaml')
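Since single-file submissions may arrive as ``.yaml.gz``, the preloaded-data pattern combines naturally with the standard library's ``gzip`` module. A sketch, assuming ``pyyaml`` is installed and using a throwaway file name:

```python
import gzip
import yaml

# Write a tiny gzipped YAML document purely for demonstration
with gzip.open("demo_data.yaml.gz", "wt") as f:
    f.write("independent_variables: []\ndependent_variables: []\n")

# Load it back; the resulting object could be passed as the `data` argument
with gzip.open("demo_data.yaml.gz", "rt") as f:
    contents = yaml.safe_load(f)

print(sorted(contents))  # ['dependent_variables', 'independent_variables']
```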
For the analogous case of the ``SubmissionFileValidator``:
.. code:: python
from hepdata_validator.submission_file_validator import SubmissionFileValidator
import yaml
submission_file_path = 'submission.yaml'
# convert a generator returned by yaml.safe_load_all into a list
docs = list(yaml.safe_load_all(open(submission_file_path, 'r')))
submission_file_validator = SubmissionFileValidator()
is_valid_submission_file = submission_file_validator.validate(file_path=submission_file_path, data=docs)
submission_file_validator.print_errors(submission_file_path)
Schema Versions
---------------
When considering **native HEPData JSON schemas**, there are multiple `versions
<https://github.com/HEPData/hepdata-validator/tree/main/hepdata_validator/schemas>`_.
In most cases you should use the **latest** version (the default). If you need to use a different version,
you can pass a keyword argument ``schema_version`` when initialising the validator:
.. code:: python
submission_file_validator = SubmissionFileValidator(schema_version='0.1.0')
data_file_validator = DataFileValidator(schema_version='0.1.0')
Remote Schemas
--------------
When using **remotely defined schemas**, versions depend on the organization providing those schemas,
and it is their responsibility to offer a way of keeping track of different schema versions.
The ``JsonSchemaResolver`` object resolves ``$ref`` in the JSON schema. The ``HTTPSchemaDownloader`` object retrieves
schemas from a remote location, and optionally saves them in the local file system, following the structure:
``schemas_remote/<org>/<project>/<version>/<schema_name>``. An example may be:
.. code:: python
from hepdata_validator.data_file_validator import DataFileValidator
data_validator = DataFileValidator()
# Split remote schema path and schema name
schema_path = 'https://scikit-hep.org/pyhf/schemas/1.0.0/'
schema_name = 'workspace.json'
# Create JsonSchemaResolver object to resolve $ref in JSON schema
from hepdata_validator.schema_resolver import JsonSchemaResolver
pyhf_resolver = JsonSchemaResolver(schema_path)
# Create HTTPSchemaDownloader object to validate against remote schema
from hepdata_validator.schema_downloader import HTTPSchemaDownloader
pyhf_downloader = HTTPSchemaDownloader(pyhf_resolver, schema_path)
# Retrieve and save the remote schema in the local path
pyhf_type = pyhf_downloader.get_schema_type(schema_name)
pyhf_spec = pyhf_downloader.get_schema_spec(schema_name)
pyhf_downloader.save_locally(schema_name, pyhf_spec)
# Load the custom schema as a custom type
import os
pyhf_path = os.path.join(pyhf_downloader.schemas_path, schema_name)
data_validator.load_custom_schema(pyhf_type, pyhf_path)
# Validate a specific schema instance
data_validator.validate(file_path='pyhf_workspace.json', file_type=pyhf_type)
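The local save location follows the directory structure noted above. A sketch with the standard library, using illustrative values mirroring the pyhf example (in practice the components are derived from the remote schema URL by ``HTTPSchemaDownloader``):

```python
import os

# Illustrative components; in practice HTTPSchemaDownloader derives these
# from the remote schema URL.
org, project, version = "scikit-hep.org", "pyhf", "1.0.0"
schema_name = "workspace.json"

local_path = os.path.join("schemas_remote", org, project, version, schema_name)
print(local_path)  # schemas_remote/scikit-hep.org/pyhf/1.0.0/workspace.json
```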
The native HEPData JSON schemas are provided as part of the ``hepdata-validator`` package, so it is not necessary to
download them. However, for testing purposes, note that in principle the same mechanism as above could be used with:
.. code:: python
schema_path = 'https://hepdata.net/submission/schemas/1.1.1/'
schema_name = 'data_schema.json'
and passing a HEPData YAML data file as the ``file_path`` argument of the ``validate`` method.