jmux

Name: jmux
Version: 0.0.3
Summary: JMux: A Python package for demultiplexing a JSON string into multiple awaitable variables.
Author: Johannes A.I. Unruh <johannes@unruh.ai>
Repository: https://github.com/jaunruh/jmux
Upload time: 2025-08-08 05:51:04
Requires Python: >=3.12
Keywords: demultiplexer, python, package, json
Requirements: none recorded
License: MIT License. Copyright (c) 2025 Johannes A.I. Unruh. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, subject to the following conditions: 1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 2. You may use this software, including for commercial purposes, as long as you do not sell, license, or otherwise distribute the original or substantially similar versions of this software for a fee. 3. This restriction does not apply to using this software as a dependency in your own commercial applications or products, provided you are not selling this software itself as a standalone product. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# JMux: A Python package for demultiplexing a JSON string into multiple awaitable variables

JMux is a powerful Python package that allows you to demultiplex a JSON stream into multiple awaitable variables. It is specifically designed for asynchronous applications that interact with Large Language Models (LLMs) using libraries like `litellm`. When an LLM streams a JSON response, `jmux` enables you to parse and use parts of the JSON object _before_ the complete response has been received, significantly improving responsiveness.

## Inspiration

This package is inspired by `Snapshot Streaming` mentioned in the [`WWDC25: Meet the Foundation Models framework`](https://youtu.be/mJMvFyBvZEk?si=DVIvxzuJOA87lb7I&t=465) keynote by Apple.

## Features

- **Asynchronous by Design**: Built on top of `asyncio`, JMux is perfect for modern, high-performance Python applications.
- **Pydantic Integration**: Validate your `JMux` classes against Pydantic models to ensure type safety and consistency.
- **Awaitable and Streamable Sinks**: Use `AwaitableValue` for single values and `StreamableValues` for streams of values.
- **Robust Error Handling**: JMux provides a comprehensive set of exceptions to handle parsing errors and other issues.
- **Lightweight**: JMux has only a few external dependencies, making it easy to integrate into any project.

## Installation

You can install JMux from PyPI using pip:

```bash
pip install jmux
```

## Usage with LLMs (e.g., `litellm`)

The primary use case for `jmux` is to process streaming JSON responses from LLMs. This allows you to react to parts of the data as they arrive, rather than waiting for the entire JSON object to be transmitted. Note that **the order in which the Pydantic model defines its properties determines which sink is filled first**.

Here’s a conceptual example of how you might integrate `jmux` with an LLM call, such as one made with `litellm`:

```python
import asyncio
from pydantic import BaseModel
from jmux import JMux, AwaitableValue, StreamableValues
# litellm is used conceptually here
# from litellm import acompletion

# 1. Define the Pydantic model for the expected JSON response
class LlmResponse(BaseModel):
    thought: str # **This property is filled first**
    tool_code: str

# 2. Define the corresponding JMux class
class LlmResponseMux(JMux):
    thought: AwaitableValue[str]
    tool_code: StreamableValues[str] # Stream the code as it's generated

# 3. Validate that the JMux class matches the Pydantic model
LlmResponseMux.assert_conforms_to(LlmResponse)

# A mock function that simulates a streaming LLM call
async def mock_llm_stream():
    json_stream = '{"thought": "I need to write some code.", "tool_code": "print(\'Hello, World!\')"}'
    for char in json_stream:
        yield char
        await asyncio.sleep(0.01) # Simulate network latency

# Main function to orchestrate the call and processing
async def process_llm_response():
    jmux_instance = LlmResponseMux()

    # This task will consume the LLM stream and feed it to jmux
    async def feed_stream():
        async for chunk in mock_llm_stream():
            await jmux_instance.feed_chunks(chunk)

    # These tasks will consume the demultiplexed data from jmux
    async def consume_thought():
        thought = await jmux_instance.thought
        print(f"LLM's thought received: '{thought}'")
        # You can act on the thought immediately
        # without waiting for the tool_code to finish streaming.

    async def consume_tool_code():
        print("Receiving tool code...")
        full_code = ""
        async for code_fragment in jmux_instance.tool_code:
            full_code += code_fragment
            print(f"  -> Received fragment: {code_fragment}")
        print(f"Full tool code received: {full_code}")

    # Run all tasks concurrently
    await asyncio.gather(
        feed_stream(),
        consume_thought(),
        consume_tool_code()
    )

if __name__ == "__main__":
    asyncio.run(process_llm_response())
```

## Example Implementation

<details>
<summary>Python Code</summary>

```python
def create_json_streaming_completion[T: BaseModel, J: IJsonDemuxer](
    self,
    messages: List[ILlmMessage],
    ReturnType: Type[T],
    JMux: Type[J],
    retries: int = 3,
) -> StreamResponseTuple[T, J]:
    try:
        JMux.assert_conforms_to(ReturnType)
        litellm_messages = self._convert_messages(messages)
        jmux_instance: J = JMux()

        async def stream_feeding_llm_call() -> T:
            nonlocal jmux_instance
            buffer = ""
            stream: CustomStreamWrapper = await self._router.acompletion(  # see litellm `router`
                model=self._internal_model_name.value,
                messages=litellm_messages,
                stream=True,
                num_retries=retries,
                response_format=ReturnType,
                **self._maybe_google_credentials_param,
                **self._model_params.model_dump(exclude_none=True),
                **self._additional_params,
            )

            async for chunk in stream:
                content_fragment: str | None = None

                tool_calls = chunk.choices[0].delta.tool_calls
                if tool_calls:
                    content_fragment = tool_calls[0].function.arguments
                elif chunk.choices[0].delta.content:
                    content_fragment = chunk.choices[0].delta.content

                if content_fragment:
                    try:
                        buffer += content_fragment
                        await jmux_instance.feed_chunks(content_fragment)
                    except Exception as e:
                        logger.warning(f"error in JMux feed_chunks: {e}")
                        raise

            return ReturnType.model_validate_json(buffer)

        awaitable_llm_result = create_task(stream_feeding_llm_call())
        return (awaitable_llm_result, jmux_instance)
    except Exception as e:
        logger.warning(f"error in create_json_streaming_completion: {e}")
        raise
```

The code above shows an example implementation that uses a `litellm` router for `acompletion`.

You can either `await awaitable_llm_result` if you need the full result, or use `await jmux_instance.your_awaitable_value` or `async for ele in jmux_instance.your_streamable_values` to access partial results.

</details>

## Basic Usage

Here is a simple example of how to use JMux to parse a JSON stream:

```python
import asyncio
from enum import Enum
from types import NoneType
from pydantic import BaseModel

from jmux import JMux, AwaitableValue, StreamableValues

# 1. Define your JMux class
class SObject(JMux):
    class SNested(JMux):
        key_str: AwaitableValue[str]

    class SEnum(Enum):
        VALUE1 = "value1"
        VALUE2 = "value2"

    key_str: AwaitableValue[str]
    key_int: AwaitableValue[int]
    key_float: AwaitableValue[float]
    key_bool: AwaitableValue[bool]
    key_none: AwaitableValue[NoneType]
    key_stream: StreamableValues[str]
    key_enum: AwaitableValue[SEnum]
    key_nested: AwaitableValue[SNested]

# 2. (Optional) Define a Pydantic model for validation
class PObject(BaseModel):
    class PNested(BaseModel):
        key_str: str

    class PEnum(Enum):
        VALUE1 = "value1"
        VALUE2 = "value2"

    key_str: str
    key_int: int
    key_float: float
    key_bool: bool
    key_none: NoneType
    key_stream: str
    key_enum: PEnum
    key_nested: PNested

# 3. Validate the JMux class against the Pydantic model
SObject.assert_conforms_to(PObject)

# 4. Create an instance of your JMux class
s_object = SObject()

# 5. Feed the JSON stream to the JMux instance
async def main():
    json_stream = '{"key_str": "hello", "key_int": 42, "key_float": 3.14, "key_bool": true, "key_none": null, "key_stream": "world", "key_enum": "value1", "key_nested": {"key_str": "nested"}}'

    async def produce():
        for char in json_stream:
            await s_object.feed_char(char)

    async def consume():
        key_str = await s_object.key_str
        print(f"key_str: {key_str}")

        key_int = await s_object.key_int
        print(f"key_int: {key_int}")

        key_float = await s_object.key_float
        print(f"key_float: {key_float}")

        key_bool = await s_object.key_bool
        print(f"key_bool: {key_bool}")

        key_none = await s_object.key_none
        print(f"key_none: {key_none}")

        key_stream = ""
        async for char in s_object.key_stream:
            key_stream += char
        print(f"key_stream: {key_stream}")

        key_enum = await s_object.key_enum
        print(f"key_enum: {key_enum}")

        key_nested = await s_object.key_nested
        nested_key_str = await key_nested.key_str
        print(f"nested_key_str: {nested_key_str}")

    await asyncio.gather(produce(), consume())

if __name__ == "__main__":
    asyncio.run(main())
```

## API Reference

### Abstract Class `jmux.JMux`

The abstract base class for creating JSON demultiplexers.

> `JMux.assert_conforms_to(pydantic_model: Type[BaseModel]) -> None`

Asserts that the JMux class conforms to a given Pydantic model.

> `async JMux.feed_char(ch: str) -> None`

Feeds a character to the JMux parser.

> `async JMux.feed_chunks(chunks: str) -> None`

Feeds a string of characters to the JMux parser.
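A useful mental model: feeding a chunk behaves like feeding each of its characters in turn to the per-character parser. The sketch below illustrates that relationship with a stand-in `feed_chunks_sketch` helper (hypothetical, not part of the `jmux` API):

```python
import asyncio


async def feed_chunks_sketch(feed_char, chunks: str) -> None:
    """Hypothetical helper: forwards each character of `chunks`
    to a per-character coroutine such as JMux.feed_char."""
    for ch in chunks:
        await feed_char(ch)


async def main() -> list[str]:
    seen: list[str] = []

    async def record_char(ch: str) -> None:
        seen.append(ch)

    await feed_chunks_sketch(record_char, "abc")
    return seen


print(asyncio.run(main()))  # ['a', 'b', 'c']
```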

### Class `jmux.AwaitableValue[T]`

Represents a single value that becomes available in the future: awaiting it yields the complete value, never partial results.

Allowed types here are (they can all be combined with `Optional`):

- `int`, `float`, `str`, `bool`, `NoneType`
- `JMux`
- `Enum`

In all cases, the corresponding `pydantic.BaseModel` should **not** be `list`
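Conceptually, an awaitable sink of this kind can be pictured as a future that the parser resolves once it has seen the full JSON value. The toy class below is a stdlib-only sketch of that idea, not the actual `jmux` implementation:

```python
import asyncio


class AwaitableValueSketch:
    """Toy stand-in for jmux.AwaitableValue: a future-backed sink
    that resolves once the parser has seen the complete value."""

    def __init__(self) -> None:
        self._future: asyncio.Future[str] = asyncio.Future()

    def set(self, value: str) -> None:
        self._future.set_result(value)

    def __await__(self):
        return self._future.__await__()


async def main() -> str:
    sink = AwaitableValueSketch()

    async def parser() -> None:
        await asyncio.sleep(0.01)  # simulate the value arriving later
        sink.set("hello")

    asyncio.create_task(parser())
    return await sink  # suspends until the parser resolves the future


print(asyncio.run(main()))  # hello
```

Awaiting the sink suspends the consumer until the producer side completes the value, which is why other coroutines can keep feeding the stream in the meantime.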

### Class `jmux.StreamableValues[T]`

A class that represents a stream of values that can be asynchronously iterated over.

Allowed types are listed below and should all be wrapped in a `list` on the pydantic model:

- `int`, `float`, `str`, `bool`, `NoneType`
- `JMux`
- `Enum`

Additionally, `str` is supported without being wrapped in a `list`. This allows you to stream a string's content directly to a sink, fragment by fragment.
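Conceptually, a streamable sink can be pictured as an async-iterable queue that the parser pushes fragments into while a consumer iterates concurrently. The toy class below is a stdlib-only sketch of that idea, not the actual `jmux` implementation:

```python
import asyncio

_DONE = object()  # sentinel marking the end of the stream


class StreamableValuesSketch:
    """Toy stand-in for jmux.StreamableValues: a queue-backed sink
    consumable with `async for` while the producer is still running."""

    def __init__(self) -> None:
        self._queue: asyncio.Queue = asyncio.Queue()

    async def push(self, item) -> None:
        await self._queue.put(item)

    async def close(self) -> None:
        await self._queue.put(_DONE)

    def __aiter__(self):
        return self

    async def __anext__(self):
        item = await self._queue.get()
        if item is _DONE:
            raise StopAsyncIteration
        return item


async def main() -> str:
    sink = StreamableValuesSketch()

    async def parser() -> None:
        for fragment in ("wor", "ld"):
            await sink.push(fragment)  # string content arrives in pieces
        await sink.close()

    asyncio.create_task(parser())
    return "".join([frag async for frag in sink])


print(asyncio.run(main()))  # world
```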

## License

This project is licensed under the terms of the MIT license. See the [LICENSE](LICENSE) file for details.

## Planned Improvements

- Add support for older Python versions

## Contributions

This repository was created only recently, and so far I am the only developer working on it. If you want to contribute, reach out via `johannes@unruh.ai` or `johannes.a.unruh@gmail.com`.

If you have suggestions or find any errors in my implementation, feel free to create an issue or also reach out via email.

            
