.. image:: img/ONT_logo.png
   :width: 800
   :alt: Oxford Nanopore Technologies logo


ont_fast5_api
===============================================================================

``ont_fast5_api`` is a simple interface to HDF5 files of the Oxford Nanopore
.fast5 file format.

- Source code: https://github.com/nanoporetech/ont_fast5_api
- Fast5 File Schema: https://github.com/nanoporetech/ont_h5_validator

It provides:

- Concrete implementation of the fast5 file schema using the generic h5py library
- Plain-English-named methods to interact with and reflect the fast5 file schema
- Tools to convert between `multi_read` and `single_read` formats
- Tools to compress/decompress raw data in files

Getting Started
===============================================================================
``ont_fast5_api`` is available on PyPI and can be installed via pip::

    pip install ont-fast5-api

Alternatively, it is available on GitHub, where it can be built from source::

    git clone https://github.com/nanoporetech/ont_fast5_api
    pip install ./ont_fast5_api
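
To confirm the installation, you can query the installed version via the
standard library (a minimal check; ``importlib.metadata`` requires Python 3.8
or newer)::

    python -c "from importlib.metadata import version; print(version('ont-fast5-api'))"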

Dependencies
-------------------------------------------------------------------------------
``ont_fast5_api`` is a pure Python project and should run on most Python
versions and operating systems.

It requires:

- `h5py <http://www.h5py.org>`_: 2.6 or higher
- `NumPy <https://www.numpy.org>`_: 1.11 or higher
- `six <https://github.com/benjaminp/six>`_: 1.10 or higher
- `progressbar33 <https://github.com/germangh/python-progressbar>`_: 2.3.1 or higher

Interface - get_fast5_file
===============================================================================

``ont_fast5_api`` provides a simple interface to access the data structures in .fast5
files of either single- or multi-read format using the same method calls.

For example, to print the raw data from all reads in a file::

    from ont_fast5_api.fast5_interface import get_fast5_file

    def print_all_raw_data():
        fast5_filepath = "test/data/single_reads/read0.fast5"  # This can be a single- or multi-read file
        with get_fast5_file(fast5_filepath, mode="r") as f5:
            for read in f5.get_reads():
                raw_data = read.get_raw_data()
                print(read.read_id, raw_data)

    print_all_raw_data()
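
The same handle also supports lookup by read id, and ``get_raw_data`` accepts
a ``scale`` argument to convert the raw signal to picoamps. A minimal sketch
under those assumptions (the file path is illustrative; verify the calls
against your installed version)::

    from ont_fast5_api.fast5_interface import get_fast5_file

    def print_first_read_in_picoamps(fast5_filepath):
        with get_fast5_file(fast5_filepath, mode="r") as f5:
            read_id = f5.get_read_ids()[0]  # pick any read present in the file
            read = f5.get_read(read_id)     # random access by read id
            print(read_id, read.get_raw_data(scale=True))  # signal scaled to picoamps

    print_first_read_in_picoamps("test/data/single_reads/read0.fast5")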

Interface - Console Scripts
===============================================================================
``ont_fast5_api`` provides terminal/command-line ``console_scripts`` for
converting between files in the Oxford Nanopore ``single_read`` and
``multi_read`` .fast5 file formats. These ensure compatibility between
tools which expect either the ``single_read`` or ``multi_read`` .fast5 file
format.

The scripts are added during installation and can be called from the
terminal/command-line or from within Python (see the sketch below).
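
Since the console scripts are installed as ordinary executables, one portable
way to drive them from Python is via ``subprocess``; a minimal sketch (the
paths are illustrative)::

    import subprocess

    # Invoke the installed console script as a child process.
    subprocess.run(
        ["single_to_multi_fast5",
         "--input_path", "/data/reads",
         "--save_path", "/data/multi_reads",
         "--batch_size", "100"],
        check=True,  # raise if the conversion exits with an error
    )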

single_to_multi_fast5
-------------------------------------------------------------------------------
This script converts folders containing ``single_read_fast5`` files into
``multi_read_fast5_files``::

    single_to_multi_fast5
    [required]
        -i, --input_path    INPUT_PATH      <(path) folder containing single_read_fast5 files>
        -s, --save_path     SAVE_PATH       <(path) to folder where multi_read fast5 files will be output>

    [optional]
        -t, --threads       THREADS         <(int) number of CPU threads to use; default=1>
        -f, --filename_base FILENAME_BASE   <(string) name for new multi_read file; default="batch" (see note-1)>
        -n, --batch_size    BATCH_SIZE      <(int) number of single_reads to include in each multi_read file; default=4000>
        --recursive                         <if included, recursively search sub-directories for single_read files>

*note-1:* newly created ``multi_read`` files require a name. This is the
``filename_base`` with the batch count and ``.fast5`` appended to it; e.g.
``-f batch`` yields ``batch_0.fast5, batch_1.fast5, ...``

**example usage**::

    single_to_multi_fast5 --input_path /data/reads --save_path /data/multi_reads
        --filename_base batch_output --batch_size 100 --recursive

Where ``/data/reads`` and/or its subfolders contain ``single_read`` .fast5
files. The output will be ``multi_read`` fast5 files each containing 100 reads,
in the folder: ``/data/multi_reads`` with the names: ``batch_output_0.fast5``,
``batch_output_1.fast5`` etc.

multi_to_single_fast5
-------------------------------------------------------------------------------
This script converts folders containing ``multi_read_fast5`` files into
``single_read_fast5`` files::

    multi_to_single_fast5
    [required]
        -i, --input_path    INPUT_PATH  <(path) folder containing multi_read_fast5 files>
        -s, --save_path     SAVE_PATH   <(path) to folder where single_read fast5 files will be output>

    [optional]
        -t, --threads       THREADS     <(int) number of CPU threads to use; default=1>
        --recursive                     <if included, recursively search sub-directories for multi_read files>

**example usage**::

    multi_to_single_fast5 --input_path /data/multi_reads --save_path /data/single_reads
        --recursive

Where ``/data/multi_reads`` and/or its subfolders contain ``multi_read`` .fast5
files. The output will be ``single_read`` .fast5 files in the folder
``/data/single_reads``, with one subfolder per ``multi_read`` input file.

fast5_subset
-------------------------------------------------------------------------------
This script extracts reads from ``multi_read_fast5_file(s)`` based on a list of read_ids::

    fast5_subset
    [required]
        -i, --input         INPUT_PATH      <(path) to folder containing multi_read_fast5 files or an individual multi_read_fast5 file>
        -s, --save_path     SAVE_PATH       <(path) to folder where multi_read fast5 files will be output>
        -l, --read_id_list  SUMMARY_PATH    <(file) either a sequencing_summary.txt file or a file containing a list of read_ids>

    [optional]
        -f, --filename_base FILENAME_BASE   <(string) name for new multi_read file; default="batch" (see note-1)>
        -n, --batch_size    BATCH_SIZE      <(int) number of single_reads to include in each multi_read file; default=4000>
        --recursive                         <if included, recursively search sub-directories for multi_read files>

**example usage**::

    fast5_subset --input /data/multi_reads --save_path /data/subset
        --read_id_list read_id_list.txt --batch_size 100 --recursive

Where ``/data/multi_reads`` and/or its subfolders contain ``multi_read`` .fast5
files and ``read_id_list.txt`` is either a text file containing one read_id per line
or a tsv file with a column named ``read_id``.
The output will be ``multi_read`` .fast5 files each containing 100 reads,
in the folder ``/data/subset`` with the names ``batch_0.fast5``,
``batch_1.fast5`` etc.
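
If you only have a sequencing summary, the read id list can be derived from
its ``read_id`` column. A minimal sketch using only the standard library
(the filenames are illustrative)::

    import csv

    # Extract the read_id column from a tab-separated sequencing summary.
    with open("sequencing_summary.txt") as summary, \
         open("read_id_list.txt", "w") as out:
        for row in csv.DictReader(summary, delimiter="\t"):
            out.write(row["read_id"] + "\n")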

demux_fast5
-------------------------------------------------------------------------------
This script ``demultiplexes`` reads from ``multi_read_fast5_file(s)``.

It extracts reads into multiple directories based on a column value in a summary file::

    demux_fast5
    [required]
      -i, --input          INPUT_PATH    <Path to Fast5 file or directory of Fast5 files>
      -s, --save_path      SAVE_PATH     <Directory to output MultiRead subsets>
      -l, --summary_file   SUMMARY_PATH  <TSV file containing read_id and demultiplex columns>

    [optional]
      --read_id_column     COLUMN_NAME   <Name of read_id column in summary file (default 'read_id')>
      --demultiplex_column COLUMN_NAME   <Name of column for demultiplexing in summary file (default 'barcoding_arrangement')>
      -f, --filename_base  FILENAME_BASE <Root of output filename, default='batch' -> 'batch_0.fast5'>
      -n, --batch_size     BATCH_SIZE    <Number of reads per multi-read file, default 4000>
      -t, --threads        THREADS       <Maximum number of processes to use>
      -r, --recursive                    <Flag to search recursively through input directory for MultiRead fast5 files>
      --ignore_symlinks                  <Ignore symlinks when searching recursively for fast5 files>
      -c, --compression    COMPRESSION   <Target output compression type (vbz, vbz_legacy_v0, gzip, None)>

Intended use is for multiplexed experiments, for reads with different barcodes or from different genomes.

**example usage**::

    demux_fast5 --input /data/multi_reads --save_path /data/demultiplexed_reads --summary_file barcoding_summary.txt

Where ``/data/multi_reads`` and/or its subfolders contain fast5 files from a multiplexed experiment and
``barcoding_summary.txt`` is the output of guppy_barcoder, ``/data/demultiplexed_reads`` will contain one directory per
barcode, each containing ``multi_read`` .fast5 files with names such as ``/data/demultiplexed_reads/barcode01/batch_0.fast5``,
``/data/demultiplexed_reads/barcode02/batch_0.fast5`` etc. Directories are named after the values in the demultiplex column.
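
For illustration, the summary file only needs the read id and demultiplex
columns. A minimal sketch that writes such a TSV (the read ids and barcodes
are made up)::

    import csv

    # Write a minimal demultiplexing summary with the two required columns.
    rows = [
        {"read_id": "read-0001", "barcoding_arrangement": "barcode01"},
        {"read_id": "read-0002", "barcoding_arrangement": "barcode02"},
    ]
    with open("barcoding_summary.txt", "w", newline="") as out:
        writer = csv.DictWriter(
            out, fieldnames=["read_id", "barcoding_arrangement"], delimiter="\t")
        writer.writeheader()
        writer.writerows(rows)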

compress_fast5
-------------------------------------------------------------------------------
This script copies and converts raw data between `vbz` and `gzip` compression formats::

    compress_fast5
    [required]
        -i, --input_path    INPUT_PATH  <(path) folder containing fast5 files>
        -s, --save_path     SAVE_PATH   <(path) to folder where compressed fast5 files will be output>
        -c, --compression   COMPRESSION <(str) [vbz, gzip] target compression format>

    [optional]
        -t, --threads       THREADS     <(int) number of CPU threads to use; default=1>
        --recursive                     <if included, recursively search sub-directories for fast5 files>
        --sanitize                      <flag to remove optional groups (such as basecalling and modified base information)>

**example usage**::

    compress_fast5 --input_path /data/uncompressed_reads --save_path /data/compressed_reads
        --compression vbz --recursive --threads 40

Where ``/data/uncompressed_reads`` and/or its subfolders contain .fast5 files. The output will be compressed
copies of the input reads, preserving both the folder structure and file type.

The ``--sanitize`` flag can be used to greatly reduce file size when files contain optional data
from the Guppy basecaller that could in principle be regenerated by running Guppy. The files output
when using ``--sanitize`` will be identical in structure to those output by MinKNOW when
live basecalling is disabled.
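
To see which optional groups a file actually carries (and hence what
``--sanitize`` would strip), the HDF5 structure can be listed directly with
h5py; a minimal sketch (the file path is illustrative)::

    import h5py

    # Print the path of every group and dataset inside the fast5 file.
    with h5py.File("/data/compressed_reads/batch_0.fast5", "r") as f5:
        f5.visit(print)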

NB: ``compress_fast5`` will copy .fast5 files in order to compress them, due to HDF5 implementation constraints.
Further detail on HDF5 data management strategies can be found at:
https://support.hdfgroup.org/HDF5/doc/Advanced/FileSpaceManagement/FileSpaceManagement.pdf


VBZ Compression
==============================================================================
VBZ compression is a compression algorithm developed by Oxford Nanopore to reduce file size and improve read/write
performance when handling raw data in Fast5 files. Previously, the default compression was GZIP; compared with GZIP,
we see a compression improvement of >30% and a CPU performance improvement of >10x for compression and >5x for
decompression. Further details of the implementation and benchmarks can be found here:
https://github.com/nanoporetech/vbz_compression

Benchmarking the performance of compression within the ont_fast5_api against a normal file copy showed that
compressing from `gzip` to `vbz` was approximately 2x slower than copying files. In other words, if it would take two
hours to copy a set of files from an input folder to an output folder, then it should take four hours to compress those
files with VBZ. Running the script without changing compression (i.e. the same type of compression in and out; gzip->gzip)
was approximately 2x faster than a file copy, since it can utilise multiple threads.


Glossary of Terms
==============================================================================

**HDF5 file format** - a portable file format for storing and managing
data. It is designed for flexible and efficient I/O and for high-volume,
complex data.

**Fast5** - an implementation of the HDF5 file format, with specific data
schemas for Oxford Nanopore sequencing data.

**Single read fast5** - a fast5 file containing all the data pertaining to a
single Oxford Nanopore read. This may include raw signal data, run metadata,
fastq basecalls and any other additional analyses.

**Multi read fast5** - a fast5 file containing data pertaining to multiple
Oxford Nanopore reads.

**Demultiplexing** - the process of separating the reads of an experiment in
which multiple samples were mixed together (multiplexed) into their
corresponding samples. Demultiplexing is based on markers that identify
sample origin, e.g. unique barcodes or alignment to a reference genome.

            
