# CaptionFlow
scalable, fault-tolerant **vLLM-powered image captioning**. this "first round" focuses on a fast websocket orchestrator plus lightweight gpu workers that batch requests through vLLM.
* **orchestrator**: hands out work in chunked shards, collects captions, checkpoints progress, and keeps simple stats.
* **workers (vLLM)**: connect to the orchestrator, stream in image samples, batch them, and generate 1..N captions per image using prompts supplied by the orchestrator.
* **config-driven**: all components read YAML config; flags can override.
* **tui monitor (optional)**: a monitor client is wired into the CLI; ship a `monitor` module to enable it.
> no conda. just `venv` + `pip`.
---
## install
```bash
python -m venv .venv
source .venv/bin/activate # windows: .venv\Scripts\activate
pip install --upgrade pip
pip install -e . # installs the `caption-flow` command
```
## quickstart (single box)
1. copy + edit the sample configs
```bash
cp orchestrator.yaml my-orchestrator.yaml
cp worker.yaml my-worker.yaml
cp monitor.yaml my-monitor.yaml # optional; requires a monitor module
```
set a unique shared token in both `my-orchestrator.yaml` and `my-worker.yaml` (see `auth.worker_tokens` in the orchestrator config and `worker.token` in the worker config). if you use private hugging face datasets/models, export `HUGGINGFACE_HUB_TOKEN` before starting workers.
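
if you want to double-check that the two files actually agree before launching anything, a small sanity check like this works (just a sketch; it assumes the YAML nesting shown in the configuration section below):

```python
# sanity check: is the worker's token in the orchestrator's allow-list?
# assumes the key layout shown under "configuration" below.
import yaml

with open("my-orchestrator.yaml") as f:
    orch = yaml.safe_load(f)
with open("my-worker.yaml") as f:
    wrk = yaml.safe_load(f)

allowed = {entry["token"] for entry in orch["orchestrator"]["auth"]["worker_tokens"]}
if wrk["worker"]["token"] not in allowed:
    raise SystemExit("worker token is not listed in auth.worker_tokens")
print("token ok")
```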
2. start the orchestrator
```bash
caption-flow orchestrator --config my-orchestrator.yaml
```
3. start one or more vLLM workers
```bash
# gpu 0 on the same host
caption-flow worker --config my-worker.yaml --gpu-id 0

# your second GPU
caption-flow worker --config my-worker.yaml --gpu-id 1
```
4. (optional) start the monitor
```bash
caption-flow monitor --config my-monitor.yaml
```
5. (optional) scan/fix chunks on disk if you had crashes
```bash
caption-flow scan_chunks --data-dir ./caption_data --checkpoint-dir ./checkpoints --fix
```
---
## how it’s wired
### orchestrator
* **websocket server** (default `0.0.0.0:8765`) with three client roles: workers, data-feeders, and admin.
* **dataset control**: the orchestrator centrally defines the dataset (`huggingface` or `local`) and version/name. it chunk-slices shards and assigns work.
* **vLLM config broadcast**: model, tp size, dtype, max seq len, memory targets, batching, sampling params, and **inference prompts** are all pushed to workers; workers can apply many changes without a model reload.
* **storage + checkpoints**: captions buffer to disk with periodic checkpoints. chunk state is tracked so restarts don’t double-work.
* **auth**: token lists for `worker`, `monitor`, and `admin` roles.
start flags you’ll likely use:
```text
--config PATH              # yaml config for the orchestrator
--port INT, --host STR     # bind controls
--data-dir PATH            # overrides storage.data_dir
--cert PATH, --key PATH    # enable TLS (or use --no-ssl for ws:// in dev)
--vllm                     # use the vLLM-style orchestrator (webdataset/hf)
```
### vLLM worker
* **one process per gpu**. select the device with `--gpu-id` (or `worker.gpu_id` in YAML).
* **gets its marching orders** from the orchestrator: dataset info, model, prompts, batch size, and sampling.
* **resilient**: detects disconnects, abandons the current chunk cleanly, clears queues, reconnects, and resumes.
* **batched generate()**: images are resized down for consistent batching; each image can get multiple captions (one per prompt).
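
the exact resize policy lives in the worker, but the gist is a bounded longest edge so batch shapes stay predictable; a minimal sketch (the 384 px cap is an assumption, not the worker's real value):

```python
# illustrative only: bound the longest edge before batching.
# the 384 px cap is a made-up number, not the worker's actual setting.
from PIL import Image

MAX_EDGE = 384

def prepare_for_batch(img: Image.Image) -> Image.Image:
    img = img.convert("RGB")
    img.thumbnail((MAX_EDGE, MAX_EDGE), Image.Resampling.LANCZOS)  # keeps aspect ratio
    return img
```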
start flags you’ll likely use:
```text
--config PATH                 # yaml for the worker
--server URL                  # ws(s)://host:port
--token STR                   # must match an allowed worker token on the orchestrator
--name STR                    # display name
--batch-size INT              # override vLLM batch size
--vllm                        # use the vLLM worker implementation
--gpu-id INT                  # which gpu to use
--precision STR, --model STR  # optional overrides for dtype/model
--no-verify-ssl               # accept self-signed certs in dev
```
### (optional) monitor
* a CLI entry exists for a TUI monitor; wire in a `monitor` module to enable it. config lives in `monitor.yaml` or inside `orchestrator.yaml` under `monitor:`.
---
## configuration
### config discovery order
for any component, the CLI looks for config in this order (first match wins):
1. `--config /path/to/file.yaml`
2. `./<component>.yaml` (current directory)
3. `~/.caption-flow/<component>.yaml`
4. `$XDG_CONFIG_HOME/caption-flow/<component>.yaml`
5. `/etc/caption-flow/<component>.yaml`
6. any `$XDG_CONFIG_DIRS` entries under `caption-flow/`
7. `./examples/<component>.yaml` (fallback)
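
a rough python equivalent of that lookup, for orientation (the CLI's actual implementation may differ in detail):

```python
# sketch of the discovery order above; first existing file wins.
import os
from pathlib import Path


def find_config(component: str, explicit: str | None = None) -> Path | None:
    candidates: list[Path] = []
    if explicit:
        candidates.append(Path(explicit))
    candidates.append(Path.cwd() / f"{component}.yaml")
    candidates.append(Path.home() / ".caption-flow" / f"{component}.yaml")
    if xdg_home := os.environ.get("XDG_CONFIG_HOME"):
        candidates.append(Path(xdg_home) / "caption-flow" / f"{component}.yaml")
    candidates.append(Path("/etc/caption-flow") / f"{component}.yaml")
    for d in os.environ.get("XDG_CONFIG_DIRS", "").split(":"):
        if d:
            candidates.append(Path(d) / "caption-flow" / f"{component}.yaml")
    candidates.append(Path("examples") / f"{component}.yaml")
    return next((p for p in candidates if p.is_file()), None)
```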
### orchestrator.yaml (highlights)
```yaml
orchestrator:
  host: 0.0.0.0
  port: 8765
  # ssl:
  #   cert: /path/fullchain.pem
  #   key: /path/privkey.pem

  dataset:
    type: huggingface          # or "local"
    path: <hf-dataset-or-local-path>
    name: <logical-name>
    version: "1.0"

  vllm:
    model: Qwen/Qwen2.5-VL-3B-Instruct
    tensor_parallel_size: 1
    max_model_len: 16384
    dtype: float16
    gpu_memory_utilization: 0.92
    enforce_eager: true
    disable_mm_preprocessor_cache: true
    limit_mm_per_prompt: { image: 1 }

    batch_size: 8

    sampling:
      temperature: 0.7
      top_p: 0.95
      max_tokens: 256
      repetition_penalty: 1.05
      skip_special_tokens: true
      stop: ["<|end|>", "<|endoftext|>", "<|im_end|>"]

    inference_prompts:
      - "describe this image in detail"
      - "provide a comprehensive description of the visual content"
      - "what are the key elements in this image?"

  storage:
    data_dir: ./caption_data
    checkpoint_dir: ./checkpoints
    caption_buffer_size: 100
    checkpoint_interval: 1000

  # chunking/queueing
  chunk_size: 1000
  chunks_per_request: 2
  chunk_buffer_multiplier: 3
  min_chunk_buffer: 10

  auth:
    worker_tokens:
      - { token: "example-worker-token", name: "Example Worker" }
    monitor_tokens:
      - { token: "letmein", name: "Default monitor" }
    admin_tokens:
      - { token: "admin-secret-2024", name: "Admin" }
```
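
for orientation, the `vllm:` block maps more or less one-to-one onto vLLM's python API; roughly (a sketch, not the worker's actual code; `disable_mm_preprocessor_cache` is left out here):

```python
# rough mapping of the vllm: section above onto vLLM's Python API (sketch only)
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    tensor_parallel_size=1,
    max_model_len=16384,
    dtype="float16",
    gpu_memory_utilization=0.92,
    enforce_eager=True,
    limit_mm_per_prompt={"image": 1},
)

sampling = SamplingParams(
    temperature=0.7,
    top_p=0.95,
    max_tokens=256,
    repetition_penalty=1.05,
    skip_special_tokens=True,
    stop=["<|end|>", "<|endoftext|>", "<|im_end|>"],
)
```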
### worker.yaml (highlights)
```yaml
worker:
  server: ws://localhost:8765    # use wss:// in prod
  token: example-worker-token
  name: local-gpu
  gpu_id: 0
  vllm: true

  # local queues
  readahead_size: 256
  inference_queue_size: 128
```
### monitor.yaml (optional)
```yaml
monitor:
  server: ws://localhost:8765
  token: letmein
  refresh_rate: 1.0
  show_contributors: true
  show_quality_metrics: true
  max_activity_items: 20
  show_chunk_progress: true
  show_worker_queues: true
  show_throughput_graph: true
```
---
## tls / certificates
use the built-in helpers during development:
```bash
# self-signed certs for quick local testing
caption-flow generate_cert --self-signed --domain localhost --output-dir ./certs

# inspect any certificate file
caption-flow inspect_cert ./certs/fullchain.pem
```
then point the orchestrator at the resulting cert/key (or run `--no-ssl` for dev-only ws://).
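
for reference, what `--no-verify-ssl` boils down to on the client side is the usual python `ssl` context tweak; conceptually (not the CLI's actual code, dev only):

```python
# conceptual equivalent of --no-verify-ssl: trust a self-signed cert blindly.
# never do this outside local development.
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```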
---
## tips & notes
* **multi-gpu**: start one worker process per gpu (set `--gpu-id` or `worker.gpu_id`); a launcher sketch follows this list.
* **throughput**: tune `vllm.batch_size` in the orchestrator config (or override with `--batch-size` at worker start). higher isn’t always better; watch VRAM.
* **prompts**: add more strings under `vllm.inference_prompts` to get multiple captions per image; the worker returns only non-empty generations.
* **private HF**: if your dataset/model needs auth, export `HUGGINGFACE_HUB_TOKEN` before `caption-flow worker ...`.
* **self-signed ssl**: pass `--no-verify-ssl` to workers/monitors in dev.
* **recovery**: if you hard-crash mid-run, `caption-flow scan_chunks --fix` can reset abandoned chunks so the orchestrator can reissue them cleanly.
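
a convenience launcher for the multi-gpu tip above, using only the flags documented earlier (`NUM_GPUS` is yours to set):

```python
# spawn one worker per gpu with the documented CLI flags (convenience sketch)
import subprocess

NUM_GPUS = 2  # set to however many gpus this box has

procs = [
    subprocess.Popen(
        ["caption-flow", "worker", "--config", "my-worker.yaml", "--gpu-id", str(i)]
    )
    for i in range(NUM_GPUS)
]
for p in procs:
    p.wait()
```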
---
## roadmap
* hot config reload via the admin websocket path.
* dedicated data-feeder clients (separate from gpu workers) that push samples into the orchestrator.
* richer monitor TUI.
PRs welcome. keep it simple and fast.
## architecture
```
┌─────────────┐     WebSocket      ┌─────────────┐
│   Worker    │◄──────────────────►│             │
└─────────────┘                    │             │     ┌──────────────┐
                                   │ Orchestrator│────►│Arrow/Parquet │
┌─────────────┐                    │             │     │   Storage    │
│   Worker    │◄──────────────────►│             │     └──────────────┘
└─────────────┘                    └─────────────┘
                                          ▲
┌─────────────┐                           │
│   Monitor   │◄──────────────────────────┘
└─────────────┘
```
## Storage Schema
### captions.parquet
- `job_id`: Unique job identifier
- `dataset`: Dataset name
- `shard`: Shard identifier
- `item_key`: Item within shard
- `caption`: Generated caption text
- `contributor_id`: Worker who generated it
- `timestamp`: Generation time
- `quality_score`: Optional quality metric
### jobs.parquet
- `job_id`: Unique identifier
- `dataset`: Dataset name
- `shard`: Shard identifier
- `status`: pending/processing/completed/failed
- `assigned_to`: Worker ID
- `timestamp`: Status change time
### contributors.parquet
- `contributor_id`: Unique identifier
- `name`: Display name
- `total_captions`: Lifetime count
- `trust_level`: Quality tier (0-5)
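
The parquet files are ordinary Arrow tables, so downstream inspection is a one-liner with pandas or pyarrow. For example (the path assumes the default `storage.data_dir` shown earlier; the exact file layout may differ):

```python
# quick look at the captions written so far (path is an assumption based on the defaults)
import pandas as pd

captions = pd.read_parquet("caption_data/captions.parquet")
print(captions[["item_key", "caption", "contributor_id"]].head())
print(captions.groupby("contributor_id").size().sort_values(ascending=False))
```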
## Development
```bash
# Install with dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Format code
black src/
ruff check --fix src/

# Type checking
mypy src/
```
## Community Contribution
To contribute compute:
1. Install caption-flow: `pip install caption-flow`
2. Get a worker token from the project maintainer
3. Run: `caption-flow worker --server wss://project.domain.com:8765 --token YOUR_TOKEN`
Your contributions will be tracked and attributed in the final dataset!
## License
MIT