ragbits

Name: ragbits
Version: 1.2.2
Summary: Building blocks for rapid development of GenAI applications
Upload time: 2025-08-09 18:12:27
Requires Python: >=3.10
Keywords: GenAI, Generative AI, LLMs, Large Language Models, Prompt Management, RAG, Retrieval Augmented Generation
            <div align="center">

<h1>🐰 Ragbits</h1>

*Building blocks for rapid development of GenAI applications*

[Homepage](https://deepsense.ai/rd-hub/ragbits/) | [Documentation](https://ragbits.deepsense.ai) | [Contact](https://deepsense.ai/contact/)

<a href="https://trendshift.io/repositories/13966" target="_blank"><img src="https://trendshift.io/api/badge/repositories/13966" alt="deepsense-ai%2Fragbits | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>


[![PyPI - License](https://img.shields.io/pypi/l/ragbits)](https://pypi.org/project/ragbits)
[![PyPI - Version](https://img.shields.io/pypi/v/ragbits)](https://pypi.org/project/ragbits)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ragbits)](https://pypi.org/project/ragbits)

</div>

---

## Features

### 🔨 Build Reliable & Scalable GenAI Apps

- **Swap LLMs anytime** – Switch between [100+ LLMs via LiteLLM](https://ragbits.deepsense.ai/how-to/llms/use_llms/) or run [local models](https://ragbits.deepsense.ai/how-to/llms/use_local_llms/).
- **Type-safe LLM calls** – Use Python generics to [enforce strict type safety](https://ragbits.deepsense.ai/how-to/prompts/use_prompting/#how-to-configure-prompts-output-data-type) in model interactions.
- **Bring your own vector store** – Connect to [Qdrant](https://ragbits.deepsense.ai/api_reference/core/vector-stores/#ragbits.core.vector_stores.qdrant.QdrantVectorStore), [PgVector](https://ragbits.deepsense.ai/api_reference/core/vector-stores/#ragbits.core.vector_stores.pgvector.PgVectorStore), and more with built-in support.
- **Developer tools included** – [Manage vector stores](https://ragbits.deepsense.ai/cli/main/#ragbits-vector-store), query pipelines, and [test prompts from your terminal](https://ragbits.deepsense.ai/quickstart/quickstart1_prompts/#testing-the-prompt-from-the-cli).
- **Modular installation** – Install only what you need, reducing dependencies and improving performance.

### 📚 Fast & Flexible RAG Processing

- **Ingest 20+ formats** – Process PDFs, HTML, spreadsheets, presentations, and more, using [Docling](https://github.com/docling-project/docling), [Unstructured](https://github.com/Unstructured-IO/unstructured), or a custom parser.
- **Handle complex data** – Extract tables, images, and structured content with built-in VLM support.
- **Connect to any data source** – Use prebuilt connectors for S3, GCS, Azure, or implement your own.
- **Scale ingestion** – Process large datasets quickly with [Ray-based parallel processing](https://ragbits.deepsense.ai/how-to/document_search/distributed_ingestion/#how-to-ingest-documents-in-a-distributed-fashion).

### 🤖 Build Multi-Agent Workflows with Ease

- **Multi-agent coordination** – Create teams of specialized agents with role-based collaboration, using the [A2A protocol](https://ragbits.deepsense.ai/tutorials/agents) for interoperability.
- **Real-time data integration** – Leverage [Model Context Protocol (MCP)](https://ragbits.deepsense.ai/how-to/agents/provide_mcp_tools) for live web access, database queries, and API integrations.
- **Conversation state management** – Maintain context across interactions with [automatic history tracking](https://ragbits.deepsense.ai/how-to/agents/define_and_use_agents/#conversation-history).

### 🚀 Deploy & Monitor with Confidence

- **Real-time observability** – Track performance with [OpenTelemetry](https://ragbits.deepsense.ai/how-to/project/use_tracing/#opentelemetry-trace-handler) and [CLI insights](https://ragbits.deepsense.ai/how-to/project/use_tracing/#cli-trace-handler).
- **Built-in testing** – Validate prompts [with promptfoo](https://ragbits.deepsense.ai/how-to/prompts/promptfoo/) before deployment.
- **Auto-optimization** – Continuously evaluate and refine model performance.
- **Chat UI** – Deploy a [chatbot interface](https://ragbits.deepsense.ai/how-to/chatbots/api/) with an API, persistence, and user feedback.

## Installation

To get started quickly, you can install with:

```sh
pip install ragbits
```

This is a starter bundle of packages, containing:

- [`ragbits-core`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-core) - fundamental tools for working with prompts, LLMs and vector databases.
- [`ragbits-agents`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-agents) - abstractions for building agentic systems.
- [`ragbits-document-search`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-document-search) - retrieval and ingestion pipelines for knowledge bases.
- [`ragbits-evaluate`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-evaluate) - unified evaluation framework for Ragbits components.
- [`ragbits-chat`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-chat) - full-stack infrastructure for building conversational AI applications.
- [`ragbits-cli`](https://github.com/deepsense-ai/ragbits/tree/main/packages/ragbits-cli) - `ragbits` shell command for interacting with Ragbits components.

Alternatively, you can use individual components of the stack by installing their respective packages.
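For example, a project that only needs prompting and retrieval could install just those pieces (package names as listed above):

```sh
pip install ragbits-core ragbits-document-search
```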

## Quickstart

### Basics

To define a prompt and run an LLM:

```python
import asyncio
from pydantic import BaseModel
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt import Prompt

class QuestionAnswerPromptInput(BaseModel):
    question: str

class QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):
    system_prompt = """
    You are a question answering agent. Answer the question to the best of your ability.
    """
    user_prompt = """
    Question: {{ question }}
    """

llm = LiteLLM(model_name="gpt-4.1-nano")

async def main() -> None:
    prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question="What are high memory and low memory on linux?"))
    response = await llm.generate(prompt)
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
```

### Document Search

To build and query a simple vector store index:

```python
import asyncio
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch

embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)

async def run() -> None:
    await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
    result = await document_search.search("What are the key findings presented in this paper?")
    print(result)

if __name__ == "__main__":
    asyncio.run(run())
```
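Conceptually, the embedder turns each document chunk into a vector and the in-memory store ranks stored chunks by similarity to the embedded query. The following is a toy sketch of that idea only, not the Ragbits implementation: `toy_embed` is a hypothetical bag-of-letters stand-in for a real embedding model, and `ToyInMemoryStore` mimics ingest/search with cosine similarity.

```python
import math

def toy_embed(text: str) -> list[float]:
    """Hypothetical stand-in for an embedding model: bag-of-letters counts."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class ToyInMemoryStore:
    """Keeps (text, vector) pairs in a list and ranks them against a query."""

    def __init__(self) -> None:
        self._entries: list[tuple[str, list[float]]] = []

    def ingest(self, texts: list[str]) -> None:
        self._entries.extend((t, toy_embed(t)) for t in texts)

    def search(self, query: str, k: int = 2) -> list[str]:
        qvec = toy_embed(query)
        ranked = sorted(self._entries, key=lambda e: cosine(qvec, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyInMemoryStore()
store.ingest(["attention is all you need", "cooking pasta at home"])
print(store.search("self-attention mechanisms", k=1)[0])  # → attention is all you need
```

Real embedders produce dense semantic vectors, and production stores (Qdrant, PgVector) index them for scale; the ingest-then-rank shape stays the same.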

### Retrieval-Augmented Generation

To build a simple RAG pipeline:

```python
import asyncio
from collections.abc import Iterable
from pydantic import BaseModel
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM
from ragbits.core.prompt import Prompt
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch
from ragbits.document_search.documents.element import Element

class QuestionAnswerPromptInput(BaseModel):
    question: str
    context: Iterable[Element]

class QuestionAnswerPrompt(Prompt[QuestionAnswerPromptInput, str]):
    system_prompt = """
    You are a question answering agent. Answer the provided question using the given context.
    If the context does not contain enough information, refuse to answer.
    """
    user_prompt = """
    Question: {{ question }}
    Context: {% for chunk in context %}{{ chunk.text_representation }}{%- endfor %}
    """

llm = LiteLLM(model_name="gpt-4.1-nano")
embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)

async def run() -> None:
    question = "What are the key findings presented in this paper?"

    await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
    chunks = await document_search.search(question)

    prompt = QuestionAnswerPrompt(QuestionAnswerPromptInput(question=question, context=chunks))
    response = await llm.generate(prompt)
    print(response)

if __name__ == "__main__":
    asyncio.run(run())
```
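The `user_prompt` above is a Jinja template whose `{% for %}` loop concatenates the text of each retrieved chunk into the context. As a standalone illustration of that assembly step, here is the equivalent in plain Python (the `Chunk` class is a hypothetical stand-in for Ragbits' `Element`):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """Stand-in for a retrieved element exposing its text, as the template expects."""
    text_representation: str

def build_user_prompt(question: str, context: list[Chunk]) -> str:
    # Mirrors: Question: {{ question }} / Context: {% for chunk in context %}...
    rendered_context = "".join(chunk.text_representation for chunk in context)
    return f"Question: {question}\nContext: {rendered_context}"

prompt = build_user_prompt(
    "What architecture does the paper propose?",
    [Chunk("The Transformer relies entirely on attention."), Chunk(" No recurrence is used.")],
)
print(prompt)
```

In Ragbits the rendering is handled for you by the `Prompt` class; this only shows what the template evaluates to.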

### Agentic RAG

To build an agentic RAG pipeline:

```python
import asyncio
from ragbits.agents import Agent
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch

embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)

llm = LiteLLM(model_name="gpt-4.1-nano")
agent = Agent(llm=llm, tools=[document_search.search])

async def main() -> None:
    await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")
    response = await agent.run("What are the key findings presented in this paper?")
    print(response.content)

if __name__ == "__main__":
    asyncio.run(main())
```

### Chat UI

To expose your GenAI application through Ragbits API:

```python
from collections.abc import AsyncGenerator
from ragbits.agents import Agent, ToolCallResult
from ragbits.chat.api import RagbitsAPI
from ragbits.chat.interface import ChatInterface
from ragbits.chat.interface.types import ChatContext, ChatResponse, LiveUpdateType
from ragbits.core.embeddings import LiteLLMEmbedder
from ragbits.core.llms import LiteLLM, ToolCall
from ragbits.core.prompt import ChatFormat
from ragbits.core.vector_stores import InMemoryVectorStore
from ragbits.document_search import DocumentSearch

embedder = LiteLLMEmbedder(model_name="text-embedding-3-small")
vector_store = InMemoryVectorStore(embedder=embedder)
document_search = DocumentSearch(vector_store=vector_store)

llm = LiteLLM(model_name="gpt-4.1-nano")
agent = Agent(llm=llm, tools=[document_search.search])

class MyChat(ChatInterface):
    async def setup(self) -> None:
        await document_search.ingest("web://https://arxiv.org/pdf/1706.03762")

    async def chat(
        self,
        message: str,
        history: ChatFormat | None = None,
        context: ChatContext | None = None,
    ) -> AsyncGenerator[ChatResponse, None]:
        async for result in agent.run_streaming(message):
            match result:
                case str():
                    yield self.create_live_update(
                        update_id="1",
                        type=LiveUpdateType.START,
                        label="Answering...",
                    )
                    yield self.create_text_response(result)
                case ToolCall():
                    yield self.create_live_update(
                        update_id="2",
                        type=LiveUpdateType.START,
                        label="Searching...",
                    )
                case ToolCallResult():
                    yield self.create_live_update(
                        update_id="2",
                        type=LiveUpdateType.FINISH,
                        label="Search",
                        description=f"Found {len(result.result)} relevant chunks.",
                    )

        yield self.create_live_update(
            update_id="1",
            type=LiveUpdateType.FINISH,
            label="Answer",
        )

if __name__ == "__main__":
    api = RagbitsAPI(MyChat)
    api.run()
```

## Rapid development

Create Ragbits projects from templates:

```sh
uvx create-ragbits-app
```

Explore the `create-ragbits-app` repo [here](https://github.com/deepsense-ai/create-ragbits-app). If you have an idea for a new template, feel free to contribute!

## Documentation

- [Tutorials](https://ragbits.deepsense.ai/tutorials/intro) - Get started with Ragbits in a few minutes
- [How-to](https://ragbits.deepsense.ai/how-to/prompts/use_prompting) - Learn how to use Ragbits in your projects
- [CLI](https://ragbits.deepsense.ai/cli/main) - Learn how to run Ragbits in your terminal
- [API reference](https://ragbits.deepsense.ai/api_reference/core/prompt) - Explore the underlying Ragbits API

## Contributing

We welcome contributions! Please read [CONTRIBUTING.md](https://github.com/deepsense-ai/ragbits/tree/main/CONTRIBUTING.md) for more information.

## License

Ragbits is licensed under the [MIT License](https://github.com/deepsense-ai/ragbits/tree/main/LICENSE).

            
