flowfoundry

Name: flowfoundry
Version: 1.2.0
Summary: FlowFoundry: a strategy-first, cloud-agnostic agentic workflow framework (LangGraph/LangChain)
Author: Mandar Parab
Upload time: 2025-08-31 00:58:49
Requires Python: >=3.10
License: Apache-2.0
Keywords: rag, langgraph, langchain, agents, llm, framework
Homepage: https://github.com/m-np/flowfoundry
Issues: https://github.com/m-np/flowfoundry/issues

# FlowFoundry

*A strategy-first, cloud-agnostic framework for LLM workflows.*  
Compose chunking, indexing, retrieval, reranking, and agentic flows — with **Keras-like ergonomics** over LangChain / LangGraph.

---

## ✨ Features

- **Strategies**: chunking, indexing, retrieval, reranking  
- **Functional API**: call strategies directly as Python functions  
- **Blocks API**: compose strategies like layers  
- **Nodes & Graphs**: LangGraph-backed workflows (YAML or Python)  
- **Extensible**: register custom strategies or nodes  

---

## Installation

Core only:

```bash
pip install flowfoundry
```

With extras:
```bash
pip install "flowfoundry[rag,search,rerank,qdrant,openai,llm-openai]"
```

Extras pull in `chromadb`, `qdrant-client`, `sentence-transformers`, `rank-bm25`, `openai`, etc.
All examples run offline by default (echo LLM); missing optional dependencies no-op gracefully.

Sanity check:
```python
from flowfoundry import ping, hello
print(ping())          # -> "flowfoundry: ok"
print(hello("there"))  # -> "hello, there!"
```

## Quickstart (Functional API)

```python
from flowfoundry.functional import (
  chunk_recursive, index_chroma_upsert, index_chroma_query, preselect_bm25
)

text   = "FlowFoundry lets you mix strategies to build RAG."
chunks = chunk_recursive(text, chunk_size=120, chunk_overlap=20, doc_id="demo")

# Index & query (requires chromadb extra)
index_chroma_upsert(chunks, path=".ff_chroma", collection="docs")
hits = index_chroma_query("What is FlowFoundry?", path=".ff_chroma", collection="docs", k=8)

# Optional rerank (requires rank-bm25)
hits = preselect_bm25("What is FlowFoundry?", hits, top_k=5)

print(hits[0]["text"])
```
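
The composition step documented below turns the same pipeline into a complete question-answering round trip. A minimal sketch, assuming the `chromadb`, `rank-bm25`, and `openai` extras are installed and `OPENAI_API_KEY` is set:

```python
from flowfoundry.functional import (
    chunk_recursive, index_chroma_upsert, index_chroma_query,
    preselect_bm25, compose_llm,
)

# Chunk and index a tiny corpus
text = "FlowFoundry lets you mix strategies to build RAG."
chunks = chunk_recursive(text, chunk_size=120, chunk_overlap=20, doc_id="demo")
index_chroma_upsert(chunks, path=".ff_chroma", collection="docs")

# Retrieve, preselect, and compose a final answer
question = "What is FlowFoundry?"
hits = index_chroma_query(question, path=".ff_chroma", collection="docs", k=8)
hits = preselect_bm25(question, hits, top_k=5)
print(compose_llm(question, hits, provider="openai", model="gpt-4o-mini", max_tokens=200))
```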

### CLI

All registered strategies are available via the `flowfoundry` CLI.

Run:
```bash
# list families and functions
flowfoundry list

# call a strategy directly
flowfoundry chunking fixed --kwargs '{"data":"hello world","chunk_size":5}'

# equivalent generic call
flowfoundry call chunking fixed --kwargs '{"data":"hello world","chunk_size":5}'
```
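
Since the CLI prints results as JSON on stdout (the pipeline examples below redirect it into files), it scripts easily. A sketch driving it from Python via `subprocess`, assuming the JSON-on-stdout behavior:

```python
import json
import subprocess

# Invoke a registered strategy through the CLI and parse its JSON output
proc = subprocess.run(
    ["flowfoundry", "chunking", "fixed",
     "--kwargs", json.dumps({"data": "hello world", "chunk_size": 5})],
    capture_output=True, text=True, check=True,
)
chunks = json.loads(proc.stdout)
print(f"{len(chunks)} chunks")
```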

## Functional API Reference

Available in `flowfoundry.functional`:

---

### Chunking

| Function          | Purpose              | Extra deps |
|-------------------|----------------------|------------|
| `chunk_fixed`     | Fixed-size splitter  | –          |
| `chunk_recursive` | Recursive splitter   | `langchain-text-splitters` |
| `chunk_hybrid`    | Hybrid splitter      | –          |

```python
chunk_fixed(text, *, chunk_size=800, chunk_overlap=80, doc_id="doc") -> list[Chunk]
chunk_recursive(text, *, chunk_size=800, chunk_overlap=80, doc_id="doc") -> list[Chunk]
chunk_hybrid(text, **kwargs) -> list[Chunk]
```
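
To get a feel for the knobs, a short sketch comparing the fixed and recursive splitters on the same text (it assumes each returned chunk exposes a `"text"` field, as the quickstart's hits do):

```python
from flowfoundry.functional import chunk_fixed, chunk_recursive

text = "FlowFoundry composes chunking, indexing, retrieval, and reranking. " * 10

fixed = chunk_fixed(text, chunk_size=100, chunk_overlap=10, doc_id="demo")
recursive = chunk_recursive(text, chunk_size=100, chunk_overlap=10, doc_id="demo")

# Both return list[Chunk]; the recursive splitter respects separators,
# so its boundaries tend to fall on sentence breaks
print(len(fixed), len(recursive))
print(fixed[0]["text"][:60])
```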

---

### Indexing (Chroma)

| Function              | Purpose                    | Extra deps |
|-----------------------|----------------------------|------------|
| `index_chroma_upsert` | Upsert chunks into Chroma  | `chromadb` |
| `index_chroma_query`  | Query Chroma               | `chromadb` |

```python
index_chroma_upsert(chunks, *, path=".ff_chroma", collection="docs") -> str
index_chroma_query(query, *, path, collection, k=5) -> list[Hit]
```
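
A round-trip sketch: per the signatures above, `index_chroma_upsert` returns a string (printed here) and `index_chroma_query` returns a list of hits. The corpus and names are illustrative:

```python
from flowfoundry.functional import chunk_fixed, index_chroma_upsert, index_chroma_query

docs = {
    "intro": "FlowFoundry is a strategy-first framework for LLM workflows.",
    "rag": "It composes chunking, indexing, retrieval, and reranking into pipelines.",
}

chunks = []
for doc_id, text in docs.items():
    chunks += chunk_fixed(text, chunk_size=200, chunk_overlap=0, doc_id=doc_id)

print(index_chroma_upsert(chunks, path=".ff_chroma_demo", collection="demo"))
hits = index_chroma_query("What does FlowFoundry compose?",
                          path=".ff_chroma_demo", collection="demo", k=3)
for hit in hits:
    print(hit["text"])
```
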
---

### Reranking

| Function               | Purpose                 | Extra deps                |
|------------------------|-------------------------|---------------------------|
| `rerank_identity`      | No-op reranker          | –                         |
| `preselect_bm25`       | BM25 preselect          | `rank-bm25`               |
| `rerank_cross_encoder` | Cross-encoder reranker  | `sentence-transformers`   |

```python
rerank_identity(query, hits, top_k=None) -> list[Hit]
preselect_bm25(query, hits, top_k=20) -> list[Hit]
rerank_cross_encoder(query, hits, *, model, top_k=None) -> list[Hit]
```
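
All three share the `(query, hits) -> hits` shape, so they chain naturally: a cheap BM25 preselect narrows the candidates before the heavier cross-encoder scores them. A sketch (the model name is a common sentence-transformers cross-encoder, not a FlowFoundry default):

```python
from flowfoundry.functional import (
    index_chroma_query, preselect_bm25, rerank_cross_encoder,
)

query = "What is FlowFoundry?"
hits = index_chroma_query(query, path=".ff_chroma", collection="docs", k=50)

# Lexical preselect (rank-bm25), then neural rerank (sentence-transformers)
hits = preselect_bm25(query, hits, top_k=20)
hits = rerank_cross_encoder(query, hits,
                            model="cross-encoder/ms-marco-MiniLM-L-6-v2", top_k=5)
print(hits[0]["text"])
```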

### Composition (LLM Answering)

| Function      | Purpose                                  | Providers supported                    | Extra deps        |
|---------------|------------------------------------------|----------------------------------------|-------------------|
| `compose_llm` | Generate an answer from hits via an LLM  | openai, ollama, huggingface, langchain | provider-specific |

```python
compose_llm(
    question: str,
    hits: list[Hit],
    *,
    provider: str,        # "openai", "ollama", "huggingface", "langchain"
    model: str,           # e.g. "gpt-4o-mini", "llama3:8b", "distilgpt2"
    max_context_chars=6000,
    max_tokens=512,
    reuse_provider=True,
    **provider_kwargs     # api_key, host, backend, device, etc.
) -> str
```

## Example (Python)
```python
from flowfoundry import index_chroma_query, preselect_bm25, compose_llm

question = "What is people's budget?"
hits = index_chroma_query(question, path=".ff_chroma", collection="docs", k=8)
hits = preselect_bm25(question, hits, top_k=5)

# OpenAI provider
answer = compose_llm(
    question, hits,
    provider="openai",
    model="gpt-4o-mini",
    max_tokens=400,
)
print(answer)

# Ollama provider
answer = compose_llm(
    question, hits,
    provider="ollama",
    model="llama3:8b",
    host="http://localhost:11434",
    max_tokens=400,
)
print(answer)

# HuggingFace local transformers
answer = compose_llm(
    question, hits,
    provider="huggingface",
    model="distilgpt2",
    max_tokens=200,
)
print(answer)
```

## Example (CLI)

Save retrieval hits to JSON first, then pass them to `compose llm`:

Step 1: query (Chroma)
```bash
flowfoundry indexing chroma_query \
  --kwargs '{"query":"What is people'\''s budget?","path":".ff_chroma","collection":"docs","k":8}' > hits.json
```

Step 2: rerank (BM25)
```bash
flowfoundry rerank bm25_preselect \
  --kwargs "{\"query\":\"What is people's budget?\",\"hits\":$(cat hits.json),\"top_k\":5}" > hits_top5.json
```

Step 3: compose answer with OpenAI
```bash
export OPENAI_API_KEY=...
flowfoundry compose llm \
  --kwargs "{\"question\":\"What is people's budget?\",\"hits\":$(cat hits_top5.json),\"provider\":\"openai\",\"model\":\"gpt-4o-mini\",\"max_tokens\":400}"
```
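
The three steps also run as a single script. A sketch in bash, using only the commands above and assuming each prints JSON to stdout:

```bash
#!/usr/bin/env bash
set -euo pipefail
export OPENAI_API_KEY=...

Q="What is people's budget?"

# 1) retrieve, 2) preselect, 3) compose
flowfoundry indexing chroma_query \
  --kwargs "{\"query\":\"$Q\",\"path\":\".ff_chroma\",\"collection\":\"docs\",\"k\":8}" > hits.json
flowfoundry rerank bm25_preselect \
  --kwargs "{\"query\":\"$Q\",\"hits\":$(cat hits.json),\"top_k\":5}" > hits_top5.json
flowfoundry compose llm \
  --kwargs "{\"question\":\"$Q\",\"hits\":$(cat hits_top5.json),\"provider\":\"openai\",\"model\":\"gpt-4o-mini\",\"max_tokens\":400}"
```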

## YAML-based run

Planned Schema:
```yaml
version: 1
vars:        # optional globals you can reference later
  key: value
steps:       # ordered list of steps
  - id: step_name
    use: family.function_name      # e.g., chunking.recursive
    with:                          # kwargs passed to that function
      param1: foo
      param2: ${{ vars.key }}      # reference vars or prior steps
outputs:     # optional; what to print at the end
  result: ${{ step_name }}
```
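
For orientation, the smallest document under this planned schema chains a single step and echoes its output (hypothetical, since the runner is still planned; `chunking.fixed` mirrors the CLI example earlier):

```yaml
version: 1
vars:
  text: "hello world"
steps:
  - id: chunks
    use: chunking.fixed
    with:
      data: ${{ vars.text }}
      chunk_size: 5
outputs:
  chunks: ${{ chunks }}
```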

Example 1 — Minimal RAG (PDF ingestion)
```yaml
version: 1

vars:
  data_path: docs/samples/
  store_path: .ff_chroma2
  collection: docs
  question: "Summarize the pdfs"

steps:
  # 1) Load PDFs (your existing strategy)
  - id: pages
    use: ingestion.pdf_loader
    with:
      path: ${{ vars.data_path }}

  # 2) Chunk every page, preserving source/page metadata
  - id: chunks
    use: chunking.recursive
    with:
      data: ${{ pages }}
      chunk_size: 800
      chunk_overlap: 120

  # 3) Upsert chunks into Chroma
  - id: upsert
    use: indexing.chroma_upsert
    with:
      chunks: ${{ chunks }}
      path: ${{ vars.store_path }}
      collection: ${{ vars.collection }}

  # 4) Retrieve relevant chunks
  - id: retrieve
    use: indexing.chroma_query
    with:
      query: ${{ vars.question }}
      path: ${{ vars.store_path }}
      collection: ${{ vars.collection }}
      k: 12

  # 5) (Optional) BM25 preselect
  - id: preselect
    use: rerank.bm25_preselect
    with:
      query: ${{ vars.question }}
      hits: ${{ retrieve }}
      top_k: 6

  # 6) Compose final answer (pick your provider)
  - id: answer
    use: compose.llm
    with:
      question: ${{ vars.question }}
      hits: ${{ preselect }}
      provider: openai               # or "ollama" / "huggingface"
      model: gpt-4o-mini
      max_tokens: 400

outputs:
  final_answer: ${{ answer }}
```

Run:
```bash
pip install "flowfoundry[rag,rerank,openai,llm-openai]"
export OPENAI_API_KEY=...
flowfoundry run rag_sample.yaml -V question="Summarize the PDFs"
```
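
Any var in the file can be overridden the same way; a sketch, assuming `-V` may be repeated once per variable (only a single override is shown above):

```bash
flowfoundry run rag_sample.yaml \
  -V data_path=./my_pdfs \
  -V question="List the action items in the PDFs"
```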

## Custom Logic

Create a file anywhere (e.g., examples/external_plugins/pdf_loader_openai.py):

```python
# examples/external_plugins/pdf_loader_openai.py
from __future__ import annotations
from pathlib import Path
from typing import Dict, List, Union
from flowfoundry.utils import register_strategy, FFIngestionError

@register_strategy("ingestion", "pdf_loader_openai")
def pdf_loader_openai(path: Union[str, Path]) -> List[Dict]:
    """
    Return page dicts compatible with FlowFoundry indexing:
      {"source": str, "page": int, "text": str}
    """
    p = Path(path)
    if not p.exists():
        raise FFIngestionError(f"Path not found: {p}")
    pdfs = [p] if (p.is_file() and p.suffix.lower()==".pdf") else list(p.rglob("*.pdf"))
    if not pdfs:
        raise FFIngestionError(f"No PDFs under {p}")

    # Replace with your own logic. This stub just emits placeholder text per PDF:
    return [{"source": str(pdf.resolve()), "page": 1, "text": f"stub text for {pdf.name}"} for pdf in pdfs]

# Optional: bind this function into `flowfoundry.functional` for ergonomic imports
FF_EXPORTS = [
    ("ingestion", "pdf_loader_openai", "pdf_loader_openai"),
    # You can also add a convenience alias:
    # ("ingestion", "pdf_loader_openai", "pdf_loader"),
]
```
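
To go beyond the stub, the extraction step might use `pypdf` (an assumption; any PDF library works), keeping the same page-dict contract:

```python
# Hypothetical replacement for the stub's return, using pypdf for real text.
from pathlib import Path
from typing import Dict, List

from pypdf import PdfReader  # pip install pypdf

def extract_pages(pdf: Path) -> List[Dict]:
    reader = PdfReader(str(pdf))
    return [
        {"source": str(pdf.resolve()), "page": i + 1, "text": page.extract_text() or ""}
        for i, page in enumerate(reader.pages)
    ]
```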

### Use it from Python

```python
from flowfoundry.utils.plugin_loader import load_plugins
from flowfoundry.utils.functional_registry import strategies

# 1) Load your file(s) so decorators run (and optional FF_EXPORTS bind)
load_plugins(["examples/external_plugins/pdf_loader_openai.py"], export_to_functional=True)

# 2) Grab it by registry name (robust)
pdf_loader = strategies.get("ingestion", "pdf_loader_openai")
pages = pdf_loader("docs/samples")

# 3) Continue with the Functional API
from flowfoundry.functional import chunk_recursive, index_chroma_upsert
chunks = []
for pg in pages:
    for ch in chunk_recursive(pg["text"], chunk_size=500, chunk_overlap=50, doc_id="demo"):
        ch["meta"] = {"source": pg["source"], "page": pg["page"]}
        chunks.append(ch)
index_chroma_upsert(chunks, path=".ff_chroma", collection="docs")
```
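
Built-in strategies are reachable through the same registry, so plugin and core steps compose uniformly. A sketch, assuming the registry names mirror the CLI families shown earlier:

```python
from flowfoundry.utils.functional_registry import strategies

# Look up core strategies by (family, name), exactly like the plugin above
chroma_query = strategies.get("indexing", "chroma_query")
bm25 = strategies.get("rerank", "bm25_preselect")

hits = chroma_query("What is FlowFoundry?", path=".ff_chroma", collection="docs", k=8)
hits = bm25("What is FlowFoundry?", hits, top_k=5)
print(hits[0]["text"])
```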

            
