xpypact

Name: xpypact
Version: 0.12.2
Home page: https://github.com/MC-kit/xpypact
Summary: Python workflow framework for FISPACT.
Upload time: 2024-08-07 10:41:42
Author: dvp
Requires Python: <3.13,>=3.10
License: MIT
Keywords: element, nuclide, isotope, abundance, FISPACT, activation, duckdb, polars
==============================================================================
*xpypact*: FISPACT output to Polars or DuckDB converter
==============================================================================



|Maintained| |License| |Versions| |PyPI| |Docs|

.. contents::


.. note::

    This document is a work in progress.

Description
-----------

The module loads FISPACT JSON output files and converts them to Polars dataframes
with minor data normalization.
This allows efficient data extraction and aggregation.
Multiple JSON files can be combined by attaching simple additional identifiers
to distinguish different FISPACT runs. So far we use just a two-dimensional
identification by material and case; the case usually identifies a particular neutron flux.
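
For illustration, here is a minimal sketch of this identification scheme
(file names and identifier values below are placeholders; the full workflow is
shown in the Examples section):

.. code-block:: python

    from pathlib import Path

    from xpypact import FullDataCollector, Inventory

    collector = FullDataCollector()

    # Placeholder file names; material_id and case_id tag each FISPACT run.
    collector.append(
        Inventory.from_json(Path("case_1/material_1.json")), material_id=1, case_id=1
    )
    collector.append(
        Inventory.from_json(Path("case_2/material_1.json")), material_id=1, case_id=2
    )

    collected = collector.get_result()  # combined, normalized data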


Implemented functionality
-------------------------

- export to DuckDB
- export to parquet files
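
Both export targets can be used from a ``FullDataCollector`` result. The sketch
below assumes ``collected`` has been obtained as in the Examples section (the
``...`` placeholder and the database file name ``xpypact.duckdb`` are just for
illustration):

.. code-block:: python

    from pathlib import Path

    import duckdb as db

    from xpypact.dao import save

    collected = ...  # a FullDataCollector result, see the Examples section

    # Export to parquet files ...
    collected.save_to_parquets(Path.cwd() / "parquets")

    # ... or save into a DuckDB database and inspect the created tables.
    con = db.connect("xpypact.duckdb")
    save(con, collected)
    print(con.sql("show tables"))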

.. note::

    The currently available FISPACT v.5 API targets a rather old Python version (3.6),
    which prevents direct use of that API in our package (which requires Python >= 3.10).
    It remains to be checked whether our own Python integration with FISPACT is
    reasonable and feasible, or whether we should provide our own FISPACT Python binding.


Installation
------------

From PyPI

.. code-block:: bash

    pip install xpypact


As dependency

.. code-block:: bash

    poetry add xpypact


From source

.. code-block:: bash

    pip install git+https://github.com/MC-kit/xpypact.git


Examples
--------

.. code-block:: python

    from pathlib import Path
    from concurrent import futures

    from xpypact import FullDataCollector, Inventory

    def get_material_id(p: Path) -> int:
        ...

    def get_case_id(p: Path) -> int:
        ...

    jsons = [path1, path2, ...]
    material_ids = {p: get_material_id(p) for p in jsons}
    case_ids = {p: get_case_id(p) for p in jsons}

    collector = FullDataCollector()

    if sequential_load:  # user-defined flag: load the files one by one
        for json in jsons:
            inventory = Inventory.from_json(json)
            collector.append(inventory, material_id=material_ids[json], case_id=case_ids[json])

    else:  # multithreading is allowed for collector as well

        task_list = ...  # list of tuples: (directory, case_id, tasks_sequence)
        threads = 16  # whatever
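
        # Note: _LEN_INVENTORY, _LEN_CASE and FindPathError used below are assumed
        # to be project-specific constants and an exception class defined elsewhere.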

        def _find_path(arg) -> tuple[int, int, Path]:
            _case, path, inventory = arg
            json_path: Path = (Path(path) / inventory).with_suffix(".json")
            if not json_path.exists():
                msg = f"Cannot find file {json_path}"
                raise FindPathError(msg)
            try:
                material_id = int(inventory[_LEN_INVENTORY:])
                case_str = json_path.parent.parts[-1]
                case_id = int(case_str[_LEN_CASE:])
            except (ValueError, IndexError) as x:
                msg = f"Cannot define material_id and case_id from {json_path}"
                raise FindPathError(msg) from x
            if case_id != _case:
                msg = f"Contradicting values of case_id in case path and database: {case_id} != {_case}"
                raise FindPathError(msg)
            return material_id, case_id, json_path

        with futures.ThreadPoolExecutor(max_workers=threads) as executor:
            mcp_futures = [
                executor.submit(_find_path, arg)
                for arg in (
                    (task_case[0], task_case[1], task)
                    for task_case in task_list
                    for task in task_case[2].split(",")
                    if task.startswith("inventory-")
                )
            ]

        mips = [x.result() for x in futures.as_completed(mcp_futures)]
        mips.sort(key=lambda x: x[0:2])  # sort by material_id, case_id

        def _load_json(arg) -> None:
            collector, material_id, case_id, json_path = arg
            collector.append(Inventory.from_json(json_path.read_text(encoding="utf8")), material_id, case_id)

        with futures.ThreadPoolExecutor(max_workers=threads) as executor:
            executor.map(_load_json, ((collector, *mip) for mip in mips))


    collected = collector.get_result()

    # save to parquet files

    collected.save_to_parquets(Path.cwd() / "parquets")

    # or use DuckDB database

    from xpypact.dao import save
    import duckdb as db

    con = db.connect()
    save(con, collected)

    gamma_from_db = con.sql(
        """
        select
        g, rate
        from timestep_gamma
        where material_id = 1 and case_id = 54 and time_step_number = 7
        order by g
        """,
    ).fetchall()
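
The same selection can also be made from the saved parquet files with Polars
instead of DuckDB. The sketch below assumes the file name
``timestep_gamma.parquet``; check the actual files written by
``save_to_parquets`` for the real names.

.. code-block:: python

    from pathlib import Path

    import polars as pl

    # The file name is an assumption -- inspect the output directory for the
    # actual names; the column names mirror the timestep_gamma query above.
    gamma = (
        pl.scan_parquet(Path.cwd() / "parquets" / "timestep_gamma.parquet")
        .filter(
            (pl.col("material_id") == 1)
            & (pl.col("case_id") == 54)
            & (pl.col("time_step_number") == 7)
        )
        .select("g", "rate")
        .sort("g")
        .collect()
    )
    print(gamma)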


Contributing
------------

.. image:: https://github.com/MC-kit/xpypact/workflows/Tests/badge.svg
   :target: https://github.com/MC-kit/xpypact/actions?query=workflow%3ATests
   :alt: Tests
.. image:: https://codecov.io/gh/MC-kit/xpypact/branch/master/graph/badge.svg?token=P6DPGSWM94
   :target: https://codecov.io/gh/MC-kit/xpypact
   :alt: Coverage
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black
.. image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
   :target: https://pycqa.github.io/isort/
.. image:: https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white
   :target: https://github.com/pre-commit/pre-commit
   :alt: pre-commit
.. image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/charliermarsh/ruff/main/assets/badge/v1.json
   :target: https://github.com/charliermarsh/ruff
   :alt: linter

Just follow ordinary practice:

    - `Commit message <https://github.com/angular/angular/blob/22b96b9/CONTRIBUTING.md#-commit-message-guidelines>`_
    - `Conventional commits <https://www.conventionalcommits.org/en/v1.0.0/#summary>`_


References
----------

.. note::

    Add references to FISPACT, pypact, and the tools used (poetry, etc.).


.. Substitutions

.. |Maintained| image:: https://img.shields.io/badge/Maintained%3F-yes-green.svg
   :target: https://github.com/MC-kit/xpypact/graphs/commit-activity
.. |Tests| image:: https://github.com/MC-kit/xpypact/workflows/Tests/badge.svg
   :target: https://github.com/MC-kit/xpypact/actions?workflow=Tests
   :alt: Tests
.. |License| image:: https://img.shields.io/github/license/MC-kit/xpypact
   :target: https://github.com/MC-kit/xpypact
.. |Versions| image:: https://img.shields.io/pypi/pyversions/xpypact
   :alt: PyPI - Python Version
.. |PyPI| image:: https://img.shields.io/pypi/v/xpypact
   :target: https://pypi.org/project/xpypact/
   :alt: PyPI
.. |Docs| image:: https://readthedocs.org/projects/xpypact/badge/?version=latest
   :target: https://xpypact.readthedocs.io/en/latest/?badge=latest
   :alt: Documentation Status

            
