threadfactory

Name: threadfactory
Version: 1.5.2
Summary: High-performance thread-safe (No-GIL–friendly) data structures and parallel operations for Python 3.13+.
Upload time: 2025-07-11 22:25:28
Requires Python: >=3.13
Keywords: concurrency, parallelism, thread-safe, no-GIL, threading, parallel processing, concurrent collections, multithreading, high-performance, python concurrency, free-threading, thread factory, ThreadFactory
Requirements: none recorded
            [![PyPI version](https://badge.fury.io/py/threadfactory.svg)](https://badge.fury.io/py/threadfactory)
[![License](https://img.shields.io/github/license/Synaptic724/threadfactory)](https://github.com/Synaptic724/threadfactory/blob/production/LICENSE)
[![Python Version](https://img.shields.io/pypi/pyversions/threadfactory)](https://pypi.org/project/threadfactory)

[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory/month)](https://pepy.tech/projects/threadfactory)
[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory/week)](https://pepy.tech/projects/threadfactory)
[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory)](https://pepy.tech/projects/threadfactory)

[![Upload Python Package](https://github.com/Synaptic724/ThreadFactory/actions/workflows/python-publish.yml/badge.svg)](https://github.com/Synaptic724/threadfactory/actions/workflows/python-publish.yml)
[![Docs](https://readthedocs.org/projects/threadfactory/badge/?version=latest)](https://threadfactory.readthedocs.io/en/latest/)


<!--[![Coverage Status](https://coveralls.io/repos/github/Synaptic724/threadfactory/badge.svg?branch=main)](https://coveralls.io/github/Synaptic724/threadfactory?branch=main) -->
<!--[![CodeFactor](https://www.codefactor.io/repository/github/synaptic724/threadfactory/badge)](https://www.codefactor.io/repository/github/synaptic724/threadfactory) -->

High-performance concurrency toolkit – built **exclusively** for No-GIL Python 3.13+.  
Scale across threads. Control the flow. Welcome to Python's next generation of parallelism.

## ✨ Why ThreadFactory? Unlocking Peak Concurrency Performance

Tired of battling race conditions and deadlocks in your multithreaded Python applications? ThreadFactory provides a meticulously crafted suite of tools designed for **uncompromising thread safety and blazing-fast performance**.

Here's how ThreadFactory elevates your concurrency game:

* 🔒 **Sync Types: Atomic & Immutable-like Control**
    Experience effortless thread-safe manipulation of fundamental data types. `SyncInt`, `SyncBool`, `SyncString`, and friends act as atomic wrappers, guaranteeing data integrity without complex locking rituals. Note that these are reference types, not simple values, so use them with care.

* 🤝 **Concurrent Collections: High-Performance Shared Data Structures**
    Transform your shared data management. Access and modify dictionaries, lists, sets, queues, stacks, and buffers with confidence, knowing they are built for high-load, concurrent environments. **🔥 Say goodbye to data corruption!**

* 📦 **Pack / Package: Delegate-Style Callables for Agentic Threads**  
    Encapsulate sync functions with full thread-safe state control. `Pack` stores arguments, supports currying, composition (`|`, `+`), and dynamic introspection. Ideal for agent behaviors, orchestration flows, and deferred execution.  
    **→ Think `functools.partial` meets `Promise`, optimized for concurrency.**

* 🔬 **First-Principles Primitives: Building Blocks for Robust Systems**
    Dive deeper with powerful, low-level synchronization constructs like `Dynaphore` (dynamic semaphores), `SmartCondition` (intelligent condition variables), and `SignalLatch` (one-shot signal mechanisms). Engineer sophisticated thread interactions with precision.

* 🧩 **Orchestrators & Barriers: Harmonize Complex Workflows**
    Coordinate your threads with elegance. Leverage `TransitBarrier` for phased execution, `SignalBarrier` for event-driven synchronization, and `Conductor` for orchestrating intricate task flows. Ensure your threads march in perfect unison.

* ⚡ **Dispatchers & Gates: Fine-Grained Thread Control**
    Control thread execution with surgical precision. Utilize `Fork` for parallel execution, `SyncFork` for synchronized branching, and `TransitGate` for managing access to critical sections.

* 🚀 **Benchmarks that Prove the Speed: 2×–5× Faster Under Load!**
    Don't just take our word for it. ThreadFactory isn't just safer; it's *faster*. Our benchmarks consistently demonstrate **2x to 5x speed improvements** over standard-library alternatives under heavy concurrent load, battle-tested with **10-million to 20-million-operation** stress runs and ***zero* deadlocks**.

**ThreadFactory: Build Confidently. Run Faster.**

> **NOTE**  
> ThreadFactory is designed and tested against Python 3.13+ in **No-GIL** mode.  
> This library will only function on 3.13 and higher as it is a **No-GIL Exclusive** library.

Benchmarks appear at the bottom of this page; more are available in the repository.

[Repository Benchmarks 🚀](https://github.com/Synaptic724/threadfactory/blob/production/benchmarks/benchmark_data/general_benchmarks.md)  
[Jump to the Benchmarks Below 🔥](#-benchmark-results-10000000-ops--10-producers--10-consumers)

---

## 🌟 Support the Project

If you find **ThreadFactory** useful, **please consider [starring the repository](https://github.com/Synaptic724/threadfactory/)** and **watching it for updates** 🔔!

Your support helps:
- Grow awareness 🧠  
- Justify deeper development 💻  
- Keep high-performance Python in the spotlight ⚡

Every ⭐ star shows there's a need for **GIL-free, scalable concurrency** in Python.  
Thank you for helping make that vision real ❤️

If you enjoy my work, please connect with me on [LinkedIn](https://www.linkedin.com/in/mark-geleta/); I'm always happy to chat there.

> You can also [open an issue](https://github.com/Synaptic724/ThreadFactory/issues) or [start a discussion](https://github.com/Synaptic724/threadfactory/discussions); I'd love to hear how you're using ThreadFactory or what you'd like to see next!

---

## 🚀 Features

## 🔒 Sync Types – `thread_factory.concurrency.sync_types`

ThreadFactory's **Sync Types** are thread-safe wrappers for Python's core data types. They're built for deterministic, low-contention, concurrent access across threads, making them perfect for shared state in threaded environments, worker pools, and agent execution contexts.

* `SyncInt`: An atomic integer wrapper with full arithmetic and bitwise operation support.
* `SyncFloat`: A thread-safe float that supports all arithmetic operations, ensuring precision in concurrent calculations.
* `SyncBool`: A thread-safe boolean that handles all logical operations safely.
* `SyncString`: A thread-safe mutable wrapper around Python's `str`, offering comprehensive dunder and string method coverage.
* `SyncRef`: A thread-safe, atomic reference to any object, with conditional updates and safe data access.
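
A minimal sketch of the shared-counter pattern these types enable. The import path and the in-place operator semantics are assumptions based on the descriptions above; see the [docs](https://threadfactory.readthedocs.io) for the authoritative API.

```python
# Sketch only: import location and reference-type semantics are assumed.
import threading

from thread_factory import SyncInt  # assumed import path

counter = SyncInt(0)  # shared atomic counter

def worker(c: SyncInt = counter) -> None:
    for _ in range(100_000):
        # Assumes __iadd__ mutates the shared object in place and returns it,
        # which is what the reference-type behavior described above implies.
        c += 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # expected: 800000, with no explicit locking in user code
```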

---

## 📦 Concurrent Data Structures – `thread_factory.concurrency`

ThreadFactory provides a robust suite of **Concurrent Data Structures**, designed for high-performance shared data management in multi-threaded applications.

### `ConcurrentDict`
- A thread-safe dictionary.
- Supports typical dict operations (`update`, `popitem`, etc.).
- Provides `map`, `filter`, and `reduce` for safe, bulk operations.
- **Freeze support**: When frozen, the dictionary becomes read-only. Lock acquisition is skipped during reads, dramatically improving performance in high-read workloads.
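
A short sketch of the setup-then-freeze workflow. Only `update`, `map`, `filter`, and `reduce` are named above; the `freeze()` call itself is an assumed method name for the freeze support described.

```python
from thread_factory import ConcurrentDict  # assumed import path

config = ConcurrentDict()
config.update({"retries": 3, "timeout": 1.5})  # dict-style bulk update

# Hypothetical call: the README guarantees freeze support but does not
# spell out the method name, so treat `freeze()` as an assumption.
config.freeze()

# Once frozen, reads skip lock acquisition entirely.
timeout = config["timeout"]
```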

### `ConcurrentList`
- A thread-safe list supporting concurrent access and modification.
- Slice assignment, in-place operators (`+=`, `*=`), and advanced operations (`map`, `filter`, `reduce`).
- **Freeze support**: Prevents structural modifications while enabling safe, lock-free reads (e.g., `__getitem__`, iteration, and slicing). Ideal for caching and broadcast scenarios.

### `ConcurrentSet`
- A thread-safe set implementation supporting all standard set algebra operations.
- Supports `add`, `discard`, `remove`, and all bitwise set operations (`|`, `&`, `^`, `-`) along with their in-place forms.
- Provides `map`, `filter`, `reduce`, and `batch_update` to safely perform bulk transformations.
- **Freeze support**: Once frozen, the set cannot be modified, but read operations become lock-free and extremely efficient.
- Ideal for workloads where the set is mutated during setup but then used repeatedly in a read-only context (e.g., filters, routing tables, permissions).
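
As with the dictionary above, a sketch of mutate-then-freeze for a read-mostly set; the constructor argument and the `freeze()` name are assumptions.

```python
from thread_factory import ConcurrentSet  # assumed import path

permissions = ConcurrentSet({"read", "write"})  # seed contents (assumed ctor arg)
permissions.add("admin")                        # mutation during setup

permissions.freeze()                 # assumed method name for freeze support
can_write = "write" in permissions   # lock-free membership test once frozen
```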

### `ConcurrentQueue`
- A thread-safe FIFO queue built atop `collections.deque`.
- Outperforms a plain `collections.deque` by up to 64% in our benchmarks.
- Supports `enqueue`, `dequeue`, `peek`, `map`, `filter`, and `reduce`.
- Raises `Empty` when `dequeue` or `peek` is called on an empty queue.
- Outperforms multiprocessing queues by over 400% in some cases; clone the repo and run the unit tests to verify.
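
A producer/consumer sketch using the operations listed above. The `Empty` import is an assumption (it may be the stdlib `queue.Empty` or a re-export); the rest uses only the named methods.

```python
import threading
from queue import Empty  # assumed: may instead be re-exported by thread_factory

from thread_factory import ConcurrentQueue  # assumed import path

q = ConcurrentQueue()
STOP = object()  # sentinel to end the consumer

def producer() -> None:
    for i in range(1_000):
        q.enqueue(i)
    q.enqueue(STOP)

def consumer(out: list) -> None:
    while True:
        try:
            item = q.dequeue()
        except Empty:
            continue  # queue momentarily empty; retry
        if item is STOP:
            break
        out.append(item)

results: list = []
threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 1000
```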

### `ConcurrentStack`
- A thread-safe LIFO stack.
- Supports `push`, `pop`, `peek` operations.
- Ideal for last-in, first-out (LIFO) workloads.
- Built on `deque` for fast appends and pops.
- Similar performance to `ConcurrentQueue`.

### `ConcurrentBuffer`
- A **high-performance**, thread-safe buffer using **sharded deques** for low-contention access.
- Designed to handle massive producer/consumer loads with better throughput than standard queues.
- Supports `enqueue`, `dequeue`, `peek`, `clear`, and bulk operations (`map`, `filter`, `reduce`).
- **Timestamp-based ordering** ensures approximate FIFO behavior across shards.
- Outperforms `ConcurrentQueue` by up to **60%** under mid-range concurrency in a balanced producer/consumer configuration with 10 shards.
- Automatically balances items across shards; ideal for parallel pipelines and low-latency workloads.
- Best used with `shard_count ≈ thread_count / 2` for optimal performance, keeping shards at or below 10, as in the sketch below.
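
Applying that sizing rule, a construction sketch for a 10-producer/10-consumer pipeline; the `shard_count` keyword is an assumption drawn from the tuning advice above.

```python
from thread_factory import ConcurrentBuffer  # assumed import path

# 20 worker threads total -> shard_count ≈ 20 / 2 = 10, the recommended cap.
# The keyword name `shard_count` is an assumption.
buf = ConcurrentBuffer(shard_count=10)

buf.enqueue("job-1")
item = buf.dequeue()  # approximate FIFO, timestamp-ordered across shards
```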

### `ConcurrentCollection`
- An unordered, thread-safe alternative to `ConcurrentBuffer`.
- Optimized for high-concurrency scenarios where strict FIFO is not required.
- Uses fair circular scans seeded by bit-mixed monotonic clocks to distribute dequeues evenly.
- Benchmarks (10 producers / 20 consumers, 2M ops) show **~5.6% higher throughput** than `ConcurrentBuffer`:
    - **ConcurrentCollection**: 108,235 ops/sec
    - **ConcurrentBuffer**: 102,494 ops/sec
    - Better scaling under thread contention.

### `ConcurrentBag`
- A thread-safe "multiset" collection that allows duplicates.
- Methods like `add`, `remove`, `discard`, etc.
- Ideal for collections where duplicate elements matter.

---

## 🛠 Primitives & Coordination Mechanisms

ThreadFactory goes beyond collections, offering finely engineered synchronization primitives and specialized tools for orchestration, diagnostics, and thread-safe control.

### 🧠 Core Primitives – `thread_factory.synchronization.primitives`

* 🎛 `Dynaphore`: A **dynamically resizable permit gate** for adaptive resource control and elastic thread pools.
* 🔁 `FlowRegulator`: A **smart semaphore** with factory ID targeting, callback routing, and bias buffering for dynamic wakeups in agentic worker systems.
* 🧠 `SmartCondition`: A **next-generation `Condition` replacement** enabling **targeted wakeups**, ULID tracking, and direct callback delivery to waiting threads.
* 🔔 `TransitCondition`: A **minimalist wait/notify condition** where callbacks execute within the waiting thread, ensuring lightweight and FIFO-safe signaling.
* 🛑 `SignalLatch`: A **latch with observer signaling support**, capable of notifying a controller before blocking. It natively connects to a `SignalController` for streamlined lifecycle management.
* 🔒 `Latch`: A classic **reusable latch** that, once opened, releases all current and future waiters until explicitly reset.

### ⚡ Coordinators & Barriers – `thread_factory.synchronization.coordinators`

* 🎯 `TransitBarrier`: A **reusable barrier** for sophisticated threshold coordination, with the option to execute a callable once all threads arrive (see the sketch after this list).
* 🚦 `SignalBarrier`: A **reusable, signal-based barrier** that supports thresholds, timeouts, and failure states, natively connecting to a `SignalController` for integrated lifecycle management.
* ⏰ `ClockBarrier`: A **barrier with a global timeout** that breaks and raises an exception if all threads don't arrive within the specified duration. It natively connects to a `SignalController`.
* 🚦 `Conductor`: A **reusable group synchronizer** that executes tasks after a threshold is met, supporting timeouts and failure states. This object also natively connects to a `SignalController`.
* 🧠 `MultiConductor`: A **multi-group execution coordinator** that manages multiple `Group` objects with a global thread threshold. Supports synchronized execution, `Fork`-based distributed dispatch, and `SyncFork`-based barrier coordination. Each task can produce multiple outcomes. Fully reusable and natively integrated with a `SignalController`.
* 🔍 `Scout`: A **predicate-based monitor** where a single thread blocks while evaluating a custom predicate, complete with timeout, success, and failure callbacks.

### 🚉 Execution Gates – `thread_factory.synchronization.execution`

* 🔀 `BypassConductor`: Allows up to `N` threads to **execute a pre-bound callable pipeline**, capturing results via `Outcome`. It collapses once the execution cap is reached, making it ideal for controlled bootstraps or one-time initializers.

### 🎛 Dispatchers – `thread_factory.synchronization.dispatchers`

* 🔧 `Fork`: A **thread dispatcher** that assigns callables based on usage caps, ensuring each executes a fixed number of times for simple routing.
* 🔄 `SyncFork`: A **dispatcher that coordinates `N` threads** into callable groups, where all callables execute simultaneously once slots are filled. It supports timeouts and reuse.
* 🔄 `SyncSignalFork`: Similar to `SyncFork`, but with the added ability to **execute a callable as a signal**. This object natively connects to a `SignalController` for enhanced integration.
* 🚦 `SignalFork`: A **non-blocking dispatcher** that routes threads to callables immediately upon arrival. Triggers a one-time callback and controller notification once all slots are consumed.

### 🎮 Central Controllers – `thread_factory.synchronization.controller`

* ### `SignalController`
    The **central registry and backbone** for lifecycle-managed objects within ThreadFactory. It offers robust support for:
    * **`register()` / `unregister()`**: Dynamically add or remove managed objects.
    * **`invoke()` with pre/post hooks**: Trigger operations across registered components with custom logic before and after.
    * **Event notification (`notify`)**: Broadcast events to all interested managed objects.
    * **Full-thread-safe `dispose()`**: Recursively and safely tears down all managed objects, ensuring proper resource release and preventing leaks in complex systems.
    The `SignalController` forms the foundation for global coordination, status tracking, and command dispatch, providing a powerful hub for your concurrency architecture.
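
    A lifecycle sketch using only the methods named above (`register`, `notify`, `dispose`); the constructor wiring and argument shapes are assumptions.

```python
from thread_factory import SignalBarrier, SignalController  # assumed import paths

controller = SignalController()

# SignalBarrier "natively connects" to a controller; passing it at
# construction time is an assumption about how that wiring is expressed.
barrier = SignalBarrier(threshold=4, controller=controller)

controller.register(barrier)   # named in the feature list above
controller.notify("shutdown")  # broadcast; the event payload shape is assumed
controller.dispose()           # recursively tears down all managed objects
```
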
---

## ⚡ Parallel Utilities – `thread_factory.concurrency`

ThreadFactory provides a powerful collection of **parallel programming utilities** inspired by .NET's Task Parallel Library (TPL), simplifying common concurrent patterns.

* ### `parallel_for`
    Executes a traditional `for` loop in parallel across multiple threads. It supports automatic chunking, optional `local_init`/`local_finalize` for per-thread state, and `stop_on_exception` for early abort on error.

* ### `parallel_foreach`
    Executes an `action` function on each item of an iterable in parallel. It handles both pre-known-length and streaming iterables, with optional `chunk_size` tuning and `stop_on_exception` to halt on errors. Ideal for efficient processing of large or streaming datasets.

* ### `parallel_invoke`
    Executes multiple independent functions concurrently. It accepts an arbitrary number of functions, returning a list of futures representing their execution, with an option to wait for all to finish. This simplifies running unrelated tasks in parallel with easy error propagation.

* ### `parallel_map`
    The **parallel equivalent of Python's built-in `map()`**. It applies a `transform` function to each item in an iterable concurrently, maintaining result order. Work is automatically split into chunks for efficient multi-threaded execution, returning a fully materialized list of results.

### Notes for Parallel Utilities

* All utilities automatically default to `max_workers = os.cpu_count()` if unspecified.
* `chunk_size` can be manually tuned or defaults to roughly `4 × #workers` for balanced performance.
* Exceptions raised inside tasks are properly propagated to the caller.
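
A sketch of the two most common calls; `parallel_map`'s shape follows the built-in `map()`, while the `parallel_for` signature (start, stop, body) is an assumption based on the description above.

```python
from thread_factory import parallel_for, parallel_map  # assumed import paths

# Parallel map: applies the transform concurrently, preserving result order.
squares = parallel_map(lambda x: x * x, range(1_000))

# Parallel indexed loop: the (start, stop, body) signature is an assumption.
results = [0] * 1_000

def body(i: int) -> None:
    results[i] = i * i

parallel_for(0, 1_000, body)
```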

---

## โฑ๏ธ Utilities โ€“ `thread_factory.utilities`

ThreadFactory includes precise **utility tools** for orchestration, diagnostics, and thread-safe execution in concurrent applications.

* โฒ๏ธ `AutoResetTimer`: A **self-resetting timer** that automatically expires and restarts โ€” perfect for retry loops, cooldowns, debounce filters, and heartbeat monitoring.
* ๐Ÿ•ฐ๏ธ `Stopwatch`: A **high-resolution, nanosecond-accurate profiler** built on `time.perf_counter_ns()` โ€” ideal for measuring critical path latency, thread timing, and pinpointing performance bottlenecks.
* ๐Ÿ“ฆ `Package`: A **thread-safe, delegate-style callable wrapper** that stores arguments, supports currying and composition, and enables introspectable call chaining. Perfect for orchestration tools like `Conductor`, `Fork`, and `SyncFork`.
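
An illustrative sketch combining the two; the `Stopwatch` method names (`start`/`stop`/`elapsed`) and the direct call on a composed `Package` are assumptions drawn from the descriptions above.

```python
from thread_factory import Package, Stopwatch  # assumed import paths

# Delegate-style wrappers: store the callable plus its arguments for later.
step1 = Package(print, "extract")
step2 = Package(print, "transform")

# `|` composition is described above; invoking the composed package
# directly is an assumption about the execution API.
pipeline = step1 | step2

sw = Stopwatch()
sw.start()           # assumed method names on a perf_counter_ns-based timer
pipeline()
sw.stop()
print(sw.elapsed())  # assumed accessor; units are not specified here
```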


---

## 📖 Documentation

Full API reference and usage examples are available at:

โžก๏ธ [https://threadfactory.readthedocs.io](https://threadfactory.readthedocs.io)

---

## โš™๏ธ Installation

### Option 1: Clone and Install Locally (Recommended for Development)

```bash
# Clone the repository
git clone https://github.com/Synaptic724/ThreadFactory.git
cd ThreadFactory

# Create a Python 3.13+ virtual environment (a free-threaded/No-GIL build is recommended)
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows

# Install the cloned source in editable mode
pip install -e .
```

### Option 2: Install the library from PyPI
```bash
# Install the latest release from PyPI
pip install threadfactory
```


---

## 📈 Real-World Benchmarking

Below are benchmark results from live multi-threaded scenarios using 10–40 real threads,  
with millions of operations processed under load.

These benchmarks aren't just numbers:  
they show that **ThreadFactory's concurrent collections outperform traditional Python structures by 2x–5x**,  
especially in the new No-GIL world Python 3.13+ is unlocking.

Performance under pressure.  
Architecture built for the future.

And these are just our concurrent data structures; the full ThreadFactory runtime is coming soon.

---

> All benchmark tests below are available if you clone the library and run the tests.  
> See the [Benchmark Details 🚀](https://github.com/Synaptic724/threadfactory/blob/production/benchmarks/benchmark_data/general_benchmarks.md) for more benchmark stats.


## 🔥 Benchmark Results (10,000,000 ops – 10 producers / 10 consumers)

| Queue Type                        | Time (sec) | Throughput (ops/sec) | Notes                                                                  |
|-----------------------------------|------------|----------------------|------------------------------------------------------------------------|
| `multiprocessing.Queue`           | 119.99     | ~83,336              | Not suited for thread-only workloads; incurs unnecessary overhead.     |
| `thread_factory.ConcurrentBuffer` | **23.27**  | **~429,651**         | ⚡ Dominant here. Consistent and efficient under moderate concurrency.  |
| `thread_factory.ConcurrentQueue`  | 37.87      | ~264,014             | Performs solidly. Stable behavior even at higher operation counts.     |
| `collections.deque`               | 64.16      | ~155,876             | Suffers from contention. Simplicity comes at the cost of throughput.   |


### ✅ Highlights:
- `ConcurrentBuffer` outperformed `multiprocessing.Queue` by **96.72 seconds**.
- `ConcurrentBuffer` outperformed `ConcurrentQueue` by **14.6 seconds**.
- `ConcurrentBuffer` outperformed `collections.deque` by **40.89 seconds**.

### 💡 Observations:
- `ConcurrentBuffer` continues to be the best performer under moderate concurrency.
- `ConcurrentQueue` maintains consistent performance but is outperformed by `ConcurrentBuffer`.
- All queues emptied correctly (`final length = 0`).

---
## 🔥 Benchmark Results (20,000,000 ops – 20 producers / 20 consumers)

| Queue Type                                    | Time (sec) | Throughput (ops/sec) | Notes                                                                              |
|-----------------------------------------------|------------|----------------------|------------------------------------------------------------------------------------|
| `multiprocessing.Queue`                       | 249.92     | ~80,020              | Severely limited by thread-unfriendly IPC locks.                                   |
| `thread_factory.ConcurrentBuffer` (10 shards) | 138.64     | ~144,270             | Solid under moderate producer/consumer balance. Benefits from shard windowing.     |
| `thread_factory.ConcurrentBuffer` (20 shards) | 173.89     | ~115,010             | Too many shards increased internal complexity, leading to lower throughput.        |
| `thread_factory.ConcurrentQueue`              | **77.69**  | **~257,450**         | ⚡ Fastest overall. Ideal for large-scale multi-producer, multi-consumer scenarios. |
| `collections.deque`                           | 190.91     | ~104,771             | Still usable, but scalability is poor compared to specialized implementations.     |

### ✅ Notes:
- `ConcurrentBuffer` performs better with **10 shards** than **20 shards** at this concurrency level.
- `ConcurrentQueue` continues to be the most stable performer under moderate-to-high thread counts.
- `multiprocessing.Queue` remains unfit for threaded-only workloads due to its heavy IPC-oriented design.

### 💡 Observations:
- **Shard count** tuning in `ConcurrentBuffer` is crucial; too many shards can reduce performance.
- **Bit-flip balancing** in `ConcurrentBuffer` helps under moderate concurrency but hits diminishing returns with excessive sharding.
- `ConcurrentQueue` is proving to be the general-purpose winner for most balanced threaded workloads.
- At **~40 threads**, `ConcurrentBuffer` shows a ~**25%** throughput drop when the shard count is doubled, due to increased dequeue complexity.
- All queues emptied correctly (`final length = 0`).

---

## 🧪 Coming Soon: ThreadFactory Evolves

ThreadFactory isn't stopping at collections and locks; we're building the **foundation of a full concurrency ecosystem**.

### 🔮 On the Roadmap:
- **Dynamic Executors**: Adaptive thread pools with per-worker routing, priorities, and work stealing.
- **Event Semaphores**: Async-aware signaling for mixed coroutine + thread pipelines.
- **Factory-Orchestrated Graph-based Execution**: Push-based directed graphs or generic graphs of work that dynamically scale.
- **Thread-Aware Async Hooks**: Bridging `asyncio` and raw threads using hybrid schedulers.
- **Task Affinity Routing**: Route work based on thread-local cache or historical execution profile.
- **Metrics and Diagnostics API**: Inspect thread throughput, wait time, and contention hotspots live.

> ThreadFactory isn't just a library.  
> It's becoming a platform.

Stay tuned.  
You haven't seen anything yet.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "threadfactory",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.13",
    "maintainer_email": "Mark Geleta <SynapticAISystems@gmail.com>",
    "keywords": "concurrency, parallelism, thread-safe, no-GIL, threading, parallel processing, concurrent collections, multithreading, high-performance, python concurrency, free-threading, thread factory, ThreadFactory",
    "author": null,
    "author_email": "Mark Geleta <SynapticAISystems@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/8a/80/37f01e194aceddaa391bc6dfa1a4e04f275b2fb5f8f9abb41e39b7b6adde/threadfactory-1.5.2.tar.gz",
    "platform": null,
    "description": "[![PyPI version](https://badge.fury.io/py/threadfactory.svg)](https://badge.fury.io/py/threadfactory)\n[![License](https://img.shields.io/github/license/Synaptic724/threadfactory)](https://github.com/Synaptic724/threadfactory/blob/production/LICENSE)\n[![Python Version](https://img.shields.io/pypi/pyversions/threadfactory)](https://pypi.org/project/threadfactory)\n\n[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory/month)](https://pepy.tech/projects/threadfactory)\n[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory/week)](https://pepy.tech/projects/threadfactory)\n[![PyPI Downloads](https://static.pepy.tech/badge/threadfactory)](https://pepy.tech/projects/threadfactory)\n\n[![Upload Python Package](https://github.com/Synaptic724/ThreadFactory/actions/workflows/python-publish.yml/badge.svg)](https://github.com/Synaptic724/threadfactory/actions/workflows/python-publish.yml)\n[![Docs](https://readthedocs.org/projects/threadfactory/badge/?version=latest)](https://threadfactory.readthedocs.io/en/latest/)\n\n\n<!--[![Coverage Status](https://coveralls.io/repos/github/Synaptic724/threadfactory/badge.svg?branch=main)](https://coveralls.io/github/Synaptic724/threadfactory?branch=main) -->\n<!--[![CodeFactor](https://www.codefactor.io/repository/github/synaptic724/threadfactory/badge)](https://www.codefactor.io/repository/github/synaptic724/threadfactory) -->\n\nHigh-performance concurrency toolkit \u2014 built **exclusively** for No-GIL Python 3.13+.  \nScale across threads. Control the flow. Welcome to Python\u2019s next generation of parallelism.\n\n## \u2728 Why ThreadFactory? Unlocking Peak Concurrency Performance\n\nTired of battling race conditions and deadlocks in your multiprocessing Python applications? ThreadFactory provides a meticulously crafted suite of tools designed for **uncompromising thread safety and blazing-fast performance**.\n\nHere's how ThreadFactory elevates your concurrency game:\n\n* \ud83d\udd12 **Sync Types: Atomic & Immutable-like Control**\n    Experience effortless thread-safe manipulation of fundamental data types. Our `SyncInt`, `SyncBool`, `SyncString`, and more, act as atomic wrappers, guaranteeing data integrity without complex locking rituals.  These types are also now reference types and are no longer treated like simple values (Use them cautiously).\n\n* \ud83e\udd1d **Concurrent Collections: High-Performance Shared Data Structures**\n    Transform your shared data management. Access and modify dictionaries, lists, sets, queues, stacks, and buffers with confidence, knowing they are built for high-load, concurrent environments. **\ud83d\udd25 Say goodbye to data corruption!**\n\n* \ud83d\udce6 **Pack / Package: Delegate-Style Callables for Agentic Threads**  \n    Encapsulate sync functions with full thread-safe state control. `Pack` stores arguments, supports currying, composition (`|`, `+`), and dynamic introspection. Ideal for agent behaviors, orchestration flows, and deferred execution.  \n    **\u2192 Think `functools.partial` meets `Promise`, optimized for concurrency.**\n\n* \ud83d\udd2c **First-Principles Primitives: Building Blocks for Robust Systems**\n    Dive deeper with powerful, low-level synchronization constructs like `Dynaphore` (dynamic semaphores), `SmartCondition` (intelligent condition variables), and `SignalLatch` (one-shot signal mechanisms). 
Engineer sophisticated thread interactions with precision.\n\n* \ud83e\udde9 **Orchestrators & Barriers: Harmonize Complex Workflows**\n    Coordinate your threads with elegance. Leverage `TransitBarrier` for phased execution, `SignalBarrier` for event-driven synchronization, and `Conductor` for orchestrating intricate task flows. Ensure your threads march in perfect unison.\n\n* \u26a1 **Dispatchers & Gates: Fine-Grained Thread Control**\n    Control thread execution with surgical precision. Utilize `Fork` for parallel execution, `SyncFork` for synchronized branching, and `TransitGate` for managing access to critical sections.\n\n* \ud83d\ude80 **Benchmarks that Prove the Speed: 2\u00d7\u20135\u00d7 Faster Under Load!**\n    Don't just take our word for it. ThreadFactory isn't just safer; it's *faster*. Our rigorous benchmarks consistently demonstrate **2x to 5x speed improvements** over standard library alternatives under heavy concurrent loads. All battle-tested with **10 Million to 20 Million operation** stress runs and ***zero* deadlocks**.\n\n**ThreadFactory: Build Confidently. Run Faster.**\n\n> **NOTE**  \n> ThreadFactory is designed and tested against Python 3.13+ in **No-GIL** mode.  \n> This library will only function on 3.13 and higher as it is a **No-GIL Exclusive** library.\n\nPlease see the benchmarks at the bottom of this page and if you are interested there are more in the repository.  \n\n[Repository Benchmarks \ud83d\ude80](https://github.com/Synaptic724/threadfactory/blob/production/benchmarks/benchmark_data/general_benchmarks.md)  \n[Jump to Benchmarks Below\ud83d\udd25](#-benchmark-results-10000000-ops--10-producers--10-consumers)\n\n---\n\n## \ud83c\udf1f Support the Project\n\nIf you find **ThreadFactory** useful, **please consider [starring the repository](https://github.com/Synaptic724/threadfactory/)** and **watching it for updates** \ud83d\udd14!\n\nYour support helps:\n- Grow awareness \ud83e\udde0  \n- Justify deeper development \ud83d\udcbb  \n- Keep high-performance Python in the spotlight \u26a1\n\nEvery \u2b50 star shows there's a need for **GIL-free, scalable concurrency** in Python.  \nThank you for helping make that vision real \u2764\ufe0f\n\nIf you really love my work please connect with me on [LinkedIn](https://www.linkedin.com/in/mark-geleta/) and feel free to chat with me there.\n\n> You can also [open an issue](https://github.com/Synaptic724/ThreadFactory/issues) or [start a discussion](https://github.com/Synaptic724/threadfactory/discussions) \u2014 I\u2019d love to hear how you're using ThreadFactory or what you'd like to see next!\n---\n\n## \ud83d\ude80 Features\n\n## \ud83d\udd12 Sync Types \u2013 `thread_factory.concurrency.sync_types`\n\nThreadFactory's **Sync Types** are thread-safe wrappers for Python\u2019s core data types. 
They're built for deterministic, low-contention, concurrent access across threads, making them perfect for shared state in threaded environments, worker pools, and agent execution contexts.\n\n* `SyncInt`: An atomic integer wrapper with full arithmetic and bitwise operation support.\n* `SyncFloat`: A thread-safe float that supports all arithmetic operations, ensuring precision in concurrent calculations.\n* `SyncBool`: A thread-safe boolean that handles all logical operations safely.\n* `SyncString`: A thread-safe mutable wrapper around Python\u2019s `str`, offering comprehensive dunder method and string method coverage.\n* `SyncRef` : A thread-safe, atomic reference to any object with conditional updates and safe data access.\n\n---\n\n## \ud83d\udce6 Concurrent Data Structures - `thread_factory.concurrency`\n\nThreadFactory provides a robust suite of **Concurrent Data Structures**, designed for high-performance shared data management in multi-threaded applications.\n\n### `ConcurrentDict`\n- A thread-safe dictionary.\n- Supports typical dict operations (`update`, `popitem`, etc.).\n- Provides `map`, `filter`, and `reduce` for safe, bulk operations.\n- **Freeze support**: When frozen, the dictionary becomes read-only. Lock acquisition is skipped during reads, dramatically improving performance in high-read workloads.\n\n### `ConcurrentList`\n- A thread-safe list supporting concurrent access and modification.\n- Slice assignment, in-place operators (`+=`, `*=`), and advanced operations (`map`, `filter`, `reduce`).\n- **Freeze support**: Prevents structural modifications while enabling safe, lock-free reads (e.g., `__getitem__`, iteration, and slicing). Ideal for caching and broadcast scenarios.\n\n### `ConcurrentSet`\n- A thread-safe set implementation supporting all standard set algebra operations.\n- Supports `add`, `discard`, `remove`, and all bitwise set operations (`|`, `&`, `^`, `-`) along with their in-place forms.\n- Provides `map`, `filter`, `reduce`, and `batch_update` to safely perform bulk transformations.\n- **Freeze support**: Once frozen, the set cannot be modified \u2014 but read operations become lock-free and extremely efficient.\n- Ideal for workloads where the set is mutated during setup but then used repeatedly in a read-only context (e.g., filters, routing tables, permissions).\n\n### `ConcurrentQueue`\n- A thread-safe FIFO queue built atop `collections.deque`.\n- Tested and outperforms deque alone by up to 64% in our benchmark.\n- Supports `enqueue`, `dequeue`, `peek`, `map`, `filter`, and `reduce`.\n- Raises `Empty` when `dequeue` or `peek` is called on an empty queue.\n- Outperforms multiprocessing queues by over 400% in some cases \u2014 clone and run unit tests to see.\n\n### `ConcurrentStack`\n- A thread-safe LIFO stack.\n- Supports `push`, `pop`, `peek` operations.\n- Ideal for last-in, first-out (LIFO) workloads.\n- Built on `deque` for fast appends and pops.\n- Similar performance to ConcurrentQueue.\n\n### `ConcurrentBuffer`\n- A **high-performance**, thread-safe buffer using **sharded deques** for low-contention access.\n- Designed to handle massive producer/consumer loads with better throughput than standard queues.\n- Supports `enqueue`, `dequeue`, `peek`, `clear`, and bulk operations (`map`, `filter`, `reduce`).\n- **Timestamp-based ordering** ensures approximate FIFO behavior across shards.\n- Outperforms `ConcurrentQueue` by up to **60%** in mid-range concurrency in even thread Producer/Consumer configuration with 10 shards.\n- Automatically balances 
items across shards; ideal for parallel pipelines and low-latency workloads.\n- Best used with `shard_count \u2248 thread_count / 2` for optimal performance, but keep shards at or below 10.\n\n### `ConcurrentCollection`\n- An unordered, thread-safe alternative to `ConcurrentBuffer`.\n- Optimized for high-concurrency scenarios where strict FIFO is not required.\n- Uses fair circular scans seeded by bit-mixed monotonic clocks to distribute dequeues evenly.\n- Benchmarks (10 producers / 20 consumers, 2M ops) show **~5.6% higher throughput** than `ConcurrentBuffer`:\n    - **ConcurrentCollection**: 108,235 ops/sec\n    - **ConcurrentBuffer**: 102,494 ops/sec\n    - Better scaling under thread contention.\n\n### `ConcurrentBag`\n- A thread-safe \u201cmultiset\u201d collection that allows duplicates.\n- Methods like `add`, `remove`, `discard`, etc.\n- Ideal for collections where duplicate elements matter.\n\n---\n\n## \ud83d\udee0 Primitives & Coordination Mechanisms\n\nThreadFactory goes beyond collections, offering finely engineered synchronization primitives and specialized tools for orchestration, diagnostics, and thread-safe control.\n\n### \ud83e\udde0 Core Primitives \u2013 `thread_factory.synchronization.primitives`\n\n* \ud83c\udf9b `Dynaphore`: A **dynamically resizable permit gate** for adaptive resource control and elastic thread pools.\n* \ud83d\udd01 `FlowRegulator`: A **smart semaphore** with factory ID targeting, callback routing, and bias buffering for dynamic wakeups in agentic worker systems.\n* \ud83e\udde0 `SmartCondition`: A **next-generation `Condition` replacement** enabling **targeted wakeups**, ULID tracking, and direct callback delivery to waiting threads.\n* \ud83d\udd14 `TransitCondition`: A **minimalist wait/notify condition** where callbacks execute within the waiting thread, ensuring lightweight and FIFO-safe signaling.\n* \ud83d\uded1 `SignalLatch`: A **latch with observer signaling support**, capable of notifying a controller before blocking. It natively connects to a `SignalController` for streamlined lifecycle management.\n* \ud83d\udd12 `Latch`: A classic **reusable latch** that, once opened, permanently releases all waiting threads until explicitly reset.\n* \n### \u26a1 Coordinators & Barriers \u2013 `thread_factory.synchronization.coordinators`\n\n* \ud83c\udfaf `TransitBarrier`: A **reusable barrier** for sophisticated threshold coordination, with the option to execute a callable once all threads arrive.\n* \ud83d\udea6 `SignalBarrier`: A **reusable, signal-based barrier** that supports thresholds, timeouts, and failure states, natively connecting to a `SignalController` for integrated lifecycle management.\n* \u23f0 `ClockBarrier`: A **barrier with a global timeout** that breaks and raises an exception if all threads don't arrive within the specified duration. It natively connects to a `SignalController`.\n* \ud83d\udea6 `Conductor`: A **reusable group synchronizer** that executes tasks after a threshold is met, supporting timeouts and failure states. This object also natively connects to a `SignalController`.\n* \ud83e\udde0 `MultiConductor`: A **multi-group execution coordinator** that manages multiple `Group` objects with a global thread threshold. Supports synchronized execution, `Fork`-based distributed dispatch, and `SyncFork`-based barrier coordination. \n                       Each task can produce multiple outcomes. 
Fully reusable and natively integrated with a `SignalController`.\n* \ud83d\udd0d `Scout`: A **predicate-based monitor** where a single thread blocks while evaluating a custom predicate, complete with timeout, success, and failure callbacks.\n\n### \ud83d\ude89 Execution Gates \u2013 `thread_factory.synchronization.execution`\n\n* \ud83d\udd00 `BypassConductor`: Allows up to `N` threads to **execute a pre-bound callable pipeline**, capturing results via `Outcome`. It collapses once the execution cap is reached, making it ideal for controlled bootstraps or one-time initializers.\n\n### \ud83c\udf9b Dispatchers \u2013 `thread_factory.synchronization.dispatchers`\n\n* \ud83d\udd27 `Fork`: A **thread dispatcher** that assigns callables based on usage caps, ensuring each executes a fixed number of times for simple routing.\n* \ud83d\udd04 `SyncFork`: A **dispatcher that coordinates `N` threads** into callable groups, where all callables execute simultaneously once slots are filled. It supports timeouts and reuse.\n* \ud83d\udd04 `SyncSignalFork`: Similar to `SyncFork`, but with the added ability to **execute a callable as a signal**. This object natively connects to a `SignalController` for enhanced integration.\n* \ud83d\udea6 `SignalFork`: A **non-blocking dispatcher** that routes threads to callables immediately upon arrival. Triggers a one-time callback and controller notification once all slots are consumed.\n\n### \ud83c\udfae Central Controllers \u2013 `thread_factory.synchronization.controller`\n\n* ### `SignalController`\n    The **central registry and backbone** for lifecycle-managed objects within ThreadFactory. It offers robust support for:\n    * **`register()` / `unregister()`**: Dynamically add or remove managed objects.\n    * **`invoke()` with pre/post hooks**: Trigger operations across registered components with custom logic before and after.\n    * **Event notification (`notify`)**: Broadcast events to all interested managed objects.\n    * **Full-thread-safe `dispose()`**: Recursively and safely tears down all managed objects, ensuring proper resource release and preventing leaks in complex systems.\n    The `SignalController` forms the foundation for global coordination, status tracking, and command dispatch, providing a powerful hub for your concurrency architecture.\n---\n\n## \u26a1 Parallel Utilities - `thread_factory.concurrency`\n\nThreadFactory provides a powerful collection of **parallel programming utilities** inspired by .NET's Task Parallel Library (TPL), simplifying common concurrent patterns.\n\n* ### `parallel_for`\n    Executes a traditional `for` loop in parallel across multiple threads. It supports automatic chunking, optional `local_init`/`local_finalize` for per-thread state, and `stop_on_exception` for early abortion on error.\n\n* ### `parallel_foreach`\n    Executes an `action` function on each item of an iterable in parallel. It handles both pre-known-length and streaming iterables, with optional `chunk_size` tuning and `stop_on_exception` to halt on errors. Ideal for efficient processing of large or streaming datasets.\n\n* ### `parallel_invoke`\n    Executes multiple independent functions concurrently. It accepts an arbitrary number of functions, returning a list of futures representing their execution, with an option to wait for all to finish. This simplifies running unrelated tasks in parallel with easy error propagation.\n\n* ### `parallel_map`\n    The **parallel equivalent of Python\u2019s built-in `map()`**. 
It applies a `transform` function to each item in an iterable concurrently, maintaining result order. Work is automatically split into chunks for efficient multi-threaded execution, returning a fully materialized list of results.\n\n### Notes for Parallel Utilities\n\n* All utilities automatically default to `max_workers = os.cpu_count()` if unspecified.\n* `chunk_size` can be manually tuned or defaults to roughly `4 \u00d7 #workers` for balanced performance.\n* Exceptions raised inside tasks are properly propagated to the caller.\n\n---\n\n## \u23f1\ufe0f Utilities \u2013 `thread_factory.utilities`\n\nThreadFactory includes precise **utility tools** for orchestration, diagnostics, and thread-safe execution in concurrent applications.\n\n* \u23f2\ufe0f `AutoResetTimer`: A **self-resetting timer** that automatically expires and restarts \u2014 perfect for retry loops, cooldowns, debounce filters, and heartbeat monitoring.\n* \ud83d\udd70\ufe0f `Stopwatch`: A **high-resolution, nanosecond-accurate profiler** built on `time.perf_counter_ns()` \u2014 ideal for measuring critical path latency, thread timing, and pinpointing performance bottlenecks.\n* \ud83d\udce6 `Package`: A **thread-safe, delegate-style callable wrapper** that stores arguments, supports currying and composition, and enables introspectable call chaining. Perfect for orchestration tools like `Conductor`, `Fork`, and `SyncFork`.\n\n\n---\n\n## \ud83d\udcd6 Documentation\n\nFull API reference and usage examples are available at:\n\n\u27a1\ufe0f [https://threadfactory.readthedocs.io](https://threadfactory.readthedocs.io)\n\n---\n\n## \u2699\ufe0f Installation\n\n### Option 1: Clone and Install Locally (Recommended for Development)\n\n```bash\n# Clone the repository\ngit clone https://github.com/Synaptic724/ThreadFactory.git\ncd threadfactory\n\n# Create a Python 3.13+ virtual environment (No-GIL/Free concurrency recommended)\npython -m venv .venv\nsource .venv/bin/activate  # or .venv\\Scripts\\activate on Windows\n```\n\n### Option 2: Install the library from PyPI\n```bash\n# Install the library in editable mode\npip install threadfactory\n```\n\n\n---\n\n## \ud83d\udcc8 Real-World Benchmarking\n\nBelow are benchmark results from live multi-threaded scenarios using 10\u201340 real threads,  \nwith millions of operations processed under load.\n\nThese benchmarks aren't just numbers \u2014  \nthey are proof that **ThreadFactory's concurrent collections outperform traditional Python structures by 2x\u20135x**,  \nespecially in the new No-GIL world Python 3.13+ is unlocking.\n\nPerformance under pressure.  \nArchitecture built for the future.\n\nThese are just our Concurrent Datastructures and not even the real thing.  \nThreadfactory is coming soon...\n\n---\n\n> All benchmark tests below are available if you clone the library and run the tests.  
\n> See the [Benchmark Details \ud83d\ude80](https://github.com/Synaptic724/threadfactory/blob/production/benchmarks/benchmark_data/general_benchmarks.md) for more benchmark stats.\n\n\n## \ud83d\udd25 Benchmark Results (10,000,000 ops \u2014 10 producers / 10 consumers)\n\n| Queue Type                                  | Time (sec) | Throughput (ops/sec) | Notes                                                                                             |\n|---------------------------------------------|------------|----------------------|---------------------------------------------------------------------------------------------------|\n| `multiprocessing.Queue`                     | 119.99     | ~83,336              | Not suited for thread-only workloads, incurs unnecessary overhead.                                |\n| `thread_factory.ConcurrentBuffer` | **23.27**      | **~429,651**            | \u26a1 Dominant here. Consistent and efficient under moderate concurrency. |\n| `thread_factory.ConcurrentQueue`  | 37.87      | ~264,014              | Performs solidly. Shows stable behavior even at higher operation counts.                                                   |\n| `collections.deque`                         | 64.16      | ~155,876              | Suffers from contention. Simplicity comes at the cost of throughput.                                  |\n\n\n### \u2705 Highlights:\n- `ConcurrentBuffer` outperformed `multiprocessing.Queue` by **96.72 seconds**.\n- `ConcurrentBuffer` outperformed `ConcurrentQueue` by **14.6 seconds**.\n- `ConcurrentBuffer` outperformed `collections.deque` by **40.89 seconds**.\n\n### \ud83d\udca1 Observations:\n- `ConcurrentBuffer` continues to be the best performer under moderate concurrency.\n- `ConcurrentQueue` maintains a consistent performance but is outperformed by `ConcurrentBuffer`.\n- All queues emptied correctly (`final length = 0`).\n---\n## \ud83d\udd25 Benchmark Results (20,000,000 ops \u2014 20 Producers / 20 Consumers)\n\n| Queue Type                                        | Time (sec) | Throughput (ops/sec) | Notes                                                                                         |\n|---------------------------------------------------|------------|----------------------|-----------------------------------------------------------------------------------------------|\n| `multiprocessing.Queue`                           | 249.92     | ~80,020              | Severely limited by thread-unfriendly IPC locks.                                  |\n| `thread_factory.ConcurrentBuffer`      | 138.64     | ~144,270             | \tSolid under moderate producer-consumer balance. Benefits from shard windowing.    |\n| `thread_factory.ConcurrentBuffer` | 173.89     | ~115,010             | Too many shards increased internal complexity, leading to lower throughput. |\n| `thread_factory.ConcurrentQueue` | **77.69**  | **~257,450**         | \u26a1 Fastest overall. Ideal for large-scale multi-producer, multi-consumer scenarios.        |\n| `collections.deque`                               | 190.91     | ~104,771             | Still usable, but scalability is poor compared to specialized implementations.         
|\n\n### \u2705 Notes:\n- `ConcurrentBuffer` performs better with **10 shards** than **20 shards** at this concurrency level.\n- `ConcurrentQueue` continues to be the most stable performer under moderate-to-high thread counts.\n- `multiprocessing.Queue` remains unfit for threaded-only workloads due to its heavy IPC-oriented design.\n\n### \ud83d\udca1 Observations:\n- **Shard count** tuning in `ConcurrentBuffer` is crucial \u2014 too many shards can reduce performance.\n- **Bit-flip balancing** in `ConcurrentBuffer` helps under moderate concurrency but hits diminishing returns with excessive sharding.\n- `ConcurrentQueue` is proving to be the general-purpose winner for most balanced threaded workloads.\n- For **~40 threads**, `ConcurrentBuffer` shows ~**25% drop** when doubling the number of shards due to increased dequeue complexity.\n- All queues emptied correctly (`final length = 0`).\n\n---\n\n## \ud83e\uddea Coming Soon: ThreadFactory Evolves\n\nThreadFactory isn't stopping at collections and locks \u2014 we're building the **foundation of a full concurrency ecosystem**.\n\n### \ud83d\udd2e On the Roadmap:\n- **Dynamic Executors**: Adaptive thread pools with per-worker routing, priorities, and work stealing.\n- **Event Semaphores**: Async-aware signaling for mixed coroutine + thread pipelines.\n- **Factory-Orchestrated Graph-based Execution**: Push-based directed graphs or generic graphs of work that dynamically scale.\n- **Thread-Aware Async Hooks**: Bridging `asyncio` and raw threads using hybrid schedulers.\n- **Task Affinity Routing**: Route work based on thread-local cache or historical execution profile.\n- **Metrics and Diagnostics API**: Inspect thread throughput, wait time, and contention hotspots live.\n\n> ThreadFactory isn't just a library.  \n> It's becoming a platform.\n\nStay tuned.  \nYou haven't seen anything yet.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": "High-performance thread-safe (No-GIL\u2013friendly) data structures and parallel operations for Python 3.13+.",
    "version": "1.5.2",
    "project_urls": {
        "Documentation": "https://threadfactory.readthedocs.io/en/latest/",
        "Homepage": "https://github.com/Synaptic724/ThreadFactory",
        "Issues": "https://github.com/Synaptic724/ThreadFactory/issues",
        "LinkedIn": "https://www.linkedin.com/in/mark-geleta-4b59b752/",
        "Repository": "https://github.com/Synaptic724/ThreadFactory"
    },
    "split_keywords": [
        "concurrency",
        " parallelism",
        " thread-safe",
        " no-gil",
        " threading",
        " parallel processing",
        " concurrent collections",
        " multithreading",
        " high-performance",
        " python concurrency",
        " free-threading",
        " thread factory",
        " threadfactory"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "3bda5daf85b80590f0275b7631b685e373752d721a3331e4f100011fde3ecf9f",
                "md5": "6cd95e63e7cf02cb574c594d1432e9b6",
                "sha256": "a070759d8d17409345cdbdeb2619ed7edded9d049e3c4d703a7d2f3563a4bb5d"
            },
            "downloads": -1,
            "filename": "threadfactory-1.5.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "6cd95e63e7cf02cb574c594d1432e9b6",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.13",
            "size": 198864,
            "upload_time": "2025-07-11T22:25:26",
            "upload_time_iso_8601": "2025-07-11T22:25:26.829897Z",
            "url": "https://files.pythonhosted.org/packages/3b/da/5daf85b80590f0275b7631b685e373752d721a3331e4f100011fde3ecf9f/threadfactory-1.5.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "8a8037f01e194aceddaa391bc6dfa1a4e04f275b2fb5f8f9abb41e39b7b6adde",
                "md5": "4f64cfc12e3f498a8c9dbd4a2936047a",
                "sha256": "b11b22d8ed81d311b9260401880dc068a23e5a36cb2c63fcbfa09edb45a7e8c1"
            },
            "downloads": -1,
            "filename": "threadfactory-1.5.2.tar.gz",
            "has_sig": false,
            "md5_digest": "4f64cfc12e3f498a8c9dbd4a2936047a",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.13",
            "size": 161680,
            "upload_time": "2025-07-11T22:25:28",
            "upload_time_iso_8601": "2025-07-11T22:25:28.321928Z",
            "url": "https://files.pythonhosted.org/packages/8a/80/37f01e194aceddaa391bc6dfa1a4e04f275b2fb5f8f9abb41e39b7b6adde/threadfactory-1.5.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-11 22:25:28",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Synaptic724",
    "github_project": "ThreadFactory",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "threadfactory"
}
        