aquiles-rag

Name: aquiles-rag
Version: 0.4.0
Summary: Aquiles-RAG is a high-performance Retrieval-Augmented Generation (RAG) solution built on Redis, Qdrant, or PostgreSQL. It offers a high-level interface through FastAPI REST APIs.
Upload time: 2025-09-07 16:26:36
Requires Python: >=3.8
License: Apache License 2.0
Keywords: fastapi, ai, rag, vector-database
<h1 align="center">Aquiles-RAG</h1>

<div align="center">
  <img src="aquiles/static/aq-rag2.png" alt="Aquiles-RAG Logo" width="200"/>
</div>

<p align="center">
  <strong>High-performance Retrieval-Augmented Generation (RAG) on Redis or Qdrant</strong><br/>
  🚀 FastAPI • Redis / Qdrant • Async • Embedding-agnostic
</p>

<p align="center">
  <a href="https://aquiles-ai.github.io/aqRAG-docs/">📖 Documentation</a>
</p>

## 📑 Table of Contents

1. [Features](#features)  
2. [Tech Stack](#tech-stack)  
3. [Requirements](#requirements)  
4. [Installation](#installation)  
5. [Configuration & Connection Options](#configuration--connection-options)  
6. [Usage](#usage)
   * [CLI](#cli)
   * [REST API](#rest-api)
   * [Python Client](#python-client)
   * [UI Playground](#ui-playground)  
7. [Architecture](#architecture)  
8. [License](#license)

## ⭐ Features

* 📈 **High Performance**: Vector search powered by Redis HNSW or Qdrant.  
* 🛠️ **Simple API**: Endpoints for index creation, insertion, and querying.  
* 🔌 **Embedding-agnostic**: Works with any embedding model (OpenAI, Llama 3, HuggingFace, etc.).  
* 💻 **Interactive Setup Wizard**: `aquiles-rag configs` walks you through full configuration for Redis or Qdrant.  
* ⚡ **Sync & Async clients**: `AquilesRAG` (requests) and `AsyncAquilesRAG` (httpx) with `embedding_model` metadata support.  
* 🧩 **Extensible**: Designed to integrate into ML pipelines, microservices, or serverless deployments.

## 🛠 Tech Stack

* **Python 3.9+**  
* [FastAPI](https://fastapi.tiangolo.com/)  
* [Redis](https://redis.io/) or [Qdrant](https://qdrant.tech/) as vector store  
* [NumPy](https://numpy.org/)  
* [Pydantic](https://pydantic-docs.helpmanual.io/)  
* [Jinja2](https://jinja.palletsprojects.com/)  
* [Click](https://click.palletsprojects.com/) (CLI)  
* [Requests](https://docs.python-requests.org/) (sync client)  
* [HTTPX](https://www.python-httpx.org/) (async client)  
* [Platformdirs](https://github.com/platformdirs/platformdirs) (config management)

## ⚙️ Requirements

1. **Redis** (standalone or cluster) — *or* **Qdrant** (HTTP / gRPC).  
2. **Python 3.9+**  
3. **pip**

> **Optional**: run Redis locally with Docker:
>
> ```bash
> docker run -d --name redis-stack -p 6379:6379 redis/redis-stack-server:latest
> ```


## 🚀 Installation

### Via PyPI (recommended)

```bash
pip install aquiles-rag
```

### From Source (optional)

```bash
git clone https://github.com/Aquiles-ai/Aquiles-RAG.git
cd Aquiles-RAG

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# optional development install
pip install -e .
```

## 🔧 Configuration & Connection Options

Configuration is persisted at:

```
~/.local/share/aquiles/aquiles_config.json
```

### Setup Wizard (recommended)

The previous manual per-flag config flow was replaced by an interactive wizard. Run:

```bash
aquiles-rag configs
```

The wizard prompts for everything required for either **Redis** or **Qdrant** (host, ports, TLS/gRPC options, API keys, admin user). At the end it writes `aquiles_config.json` to the standard location.

### Manual config (advanced / CI)

If you prefer automation, generate the same JSON schema the wizard writes and place it at `~/.local/share/aquiles/aquiles_config.json` before starting the server (or use the `deploy` pattern described below).
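
For CI, a small helper can drop the JSON into the standard location before the server starts. The sketch below is illustrative only: the field names in `example_cfg` are assumptions, not the documented schema — run the wizard once and inspect the file it writes to learn the real keys.

```python
import json
import os

def write_aquiles_config(cfg, path=None):
    """Write an Aquiles-RAG config dict as JSON to the standard location.

    The default path mirrors the location shown above; pass an explicit
    path to target a different directory (e.g. in tests).
    """
    path = path or os.path.expanduser("~/.local/share/aquiles/aquiles_config.json")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
    return path

# Hypothetical field names -- check the wizard's output for the real schema.
example_cfg = {
    "backend": "redis",
    "host": "127.0.0.1",
    "port": 6379,
    "ssl": False,
}
```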

### Redis connection modes (examples)

Aquiles-RAG supports multiple Redis modes:

1. **Local Cluster**

```py
RedisCluster(host=host, port=port, decode_responses=True)
```

2. **Standalone Local**

```py
redis.Redis(host=host, port=port, decode_responses=True)
```

3. **Remote with TLS/SSL**

```py
redis.Redis(host=host, port=port, username=username or None,
            password=password or None, ssl=True, decode_responses=True,
            ssl_certfile=ssl_certfile, ssl_keyfile=ssl_keyfile, ssl_ca_certs=ssl_ca_certs)
```

4. **Remote without TLS/SSL**

```py
redis.Redis(host=host, port=port, username=username or None, password=password or None, decode_responses=True)
```

## 📖 Usage

### CLI

* **Interactive Setup Wizard (recommended)**:

```bash
aquiles-rag configs
```

* **Serve the API**:

```bash
aquiles-rag serve --host "0.0.0.0" --port 5500
```

* **Deploy with bootstrap script** (pattern: `deploy_*.py` with `run()` that calls `gen_configs_file()`):

```bash
# Redis example
aquiles-rag deploy --host "0.0.0.0" --port 5500 --workers 2 deploy_redis.py

# Qdrant example
aquiles-rag deploy --host "0.0.0.0" --port 5500 --workers 2 deploy_qdrant.py
```

> The `deploy` command imports the given Python file, executes its `run()` to generate the config (writes `aquiles_config.json`), then starts the FastAPI server.
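
The import-then-run step can be sketched with the standard library. This mirrors the behavior described above; it is not the CLI's actual source:

```python
import importlib.util

def load_and_run(script_path):
    """Import a deploy_*.py file by path and execute its run() hook.

    Mirrors the described deploy behavior: the script's run() is expected
    to generate aquiles_config.json before the server starts.
    """
    spec = importlib.util.spec_from_file_location("deploy_script", script_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    module.run()  # bootstrap hook that writes the config
    return module
```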

### REST API — common examples

1. **Create Index**

```bash
curl -X POST http://localhost:5500/create/index \
  -H "X-API-Key: YOUR_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "indexname": "documents",
    "embeddings_dim": 768,
    "dtype": "FLOAT32",
    "delete_the_index_if_it_exists": false
  }'
```

2. **Insert Chunk (ingest)**

```bash
curl -X POST http://localhost:5500/rag/create \
  -H "X-API-Key: YOUR_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "index": "documents",
    "name_chunk": "doc1_part1",
    "dtype": "FLOAT32",
    "chunk_size": 1024,
    "raw_text": "Text of the chunk...",
    "embeddings": [0.12, 0.34, 0.56, ...]
  }'
```

3. **Query Top-K**

```bash
curl -X POST http://localhost:5500/rag/query-rag \
  -H "X-API-Key: YOUR_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "index": "documents",
    "embeddings": [0.78, 0.90, ...],
    "dtype": "FLOAT32",
    "top_k": 5,
    "cosine_distance_threshold": 0.6
  }'
```
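
The same calls can be made from Python without the SDK using only the standard library. The sketch below just builds an authenticated request (host and key are placeholders); nothing is sent until you call `urlopen` against a running server:

```python
import json
import urllib.request

def build_request(base_url, path, payload, api_key):
    """Build an authenticated JSON POST request for an Aquiles-RAG endpoint."""
    return urllib.request.Request(
        base_url.rstrip("/") + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_request(
    "http://localhost:5500",
    "/rag/query-rag",
    {"index": "documents", "embeddings": [0.78, 0.90], "dtype": "FLOAT32", "top_k": 5},
    "YOUR_API_KEY",
)
# To execute against a running server:
#   with urllib.request.urlopen(req) as resp:
#       results = json.load(resp)
```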

### Python Client

#### Sync client

```python
from aquiles.client import AquilesRAG

client = AquilesRAG(host="http://127.0.0.1:5500", api_key="YOUR_API_KEY")

# Create an index (returns server text)
resp_text = client.create_index("documents", embeddings_dim=768, dtype="FLOAT32")

# Insert chunks using your embedding function
def get_embedding(text):
    return embedding_model.encode(text)

responses = client.send_rag(
    embedding_func=get_embedding,
    index="documents",
    name_chunk="doc1",
    raw_text=full_text,
    embedding_model="text-embedding-v1"  # optional metadata sent with each chunk
)

# Query the index (returns parsed JSON)
results = client.query("documents", query_embedding, top_k=5)
print(results)
```

#### Async client

```python
import asyncio
from aquiles.client import AsyncAquilesRAG

client = AsyncAquilesRAG(host="http://127.0.0.1:5500", api_key="YOUR_API_KEY")

async def main():
    await client.create_index("documents_async")
    responses = await client.send_rag(
        embedding_func=async_embedding_func,   # supports sync or async callables
        index="documents_async",
        name_chunk="doc_async",
        raw_text=full_text
    )
    results = await client.query("documents_async", query_embedding)
    print(results)

asyncio.run(main())
```

**Notes**

* Both clients accept an optional `embedding_model` parameter forwarded as metadata — helpful when storing/querying embeddings produced by different models.
* `send_rag` chunks text using `chunk_text_by_words()` (default ~600 words / ≈1024 tokens) and uploads each chunk (concurrently in the async client).
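
The client's `chunk_text_by_words()` isn't reproduced here; a stand-in with the same word-based idea looks like the following (the 600-word default comes from the note above, everything else is an assumption — the real helper may handle overlap differently):

```python
def chunk_text_by_words(text, max_words=600):
    """Split text into chunks of at most max_words whitespace-separated words.

    Stand-in for the client's helper of the same name; exact defaults and
    overlap handling in the real implementation may differ.
    """
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]
```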


### UI Playground

Open the web UI (protected) at:

```
http://localhost:5500/ui
```

Use it to:

* Open the Setup Wizard (if available) or inspect the live configuration
* Test `/create/index`, `/rag/create`, `/rag/query-rag`
* Access protected Swagger UI & ReDoc after logging in


## 🏗 Architecture

![Architecture](aquiles/static/diagram.png)

1. **Clients** (HTTP/HTTPS, Python SDK, or UI Playground) make asynchronous HTTP requests.
2. **FastAPI Server** — orchestration and business logic; validates requests and translates them to vector store operations.
3. **Vector Store** — either Redis (HASH + HNSW/COSINE search) or Qdrant (collections + vector search).


## ⚠️ Backend differences & notes

* **Metrics / `/status/ram`**: Redis offers `INFO memory` and `memory_stats()` — for Qdrant the same Redis-specific metrics are not available (the endpoint will return a short message explaining this).
* **Dtype handling**: Server validates `dtype` for Redis (converts embeddings to the requested NumPy dtype). Qdrant accepts float arrays directly — `dtype` is informational/compatibility metadata.
* **gRPC**: Qdrant can be used over HTTP or gRPC (`prefer_grpc=true` in the config). Ensure your environment allows gRPC outbound/inbound as needed.
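
Conceptually, the Redis-side dtype handling amounts to packing the float list into the requested binary width before storage. A stdlib-only sketch of that idea (the server itself uses NumPy, and the supported dtype set shown here is an illustrative subset):

```python
import struct

# Map of dtype names to struct format characters (illustrative subset).
_DTYPE_FMT = {"FLOAT32": "f", "FLOAT64": "d"}

def pack_embedding(values, dtype="FLOAT32"):
    """Pack an embedding into little-endian bytes of the requested dtype."""
    try:
        fmt = _DTYPE_FMT[dtype]
    except KeyError:
        raise ValueError(f"unsupported dtype: {dtype}")
    return struct.pack(f"<{len(values)}{fmt}", *values)
```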


## 🔎 Test Suite

See the `test/` directory for automated tests:

* client tests for the Python SDK
* API tests for endpoint behavior
* `test_deploy.py` for deployment / bootstrap validation


## 📄 License

[Apache License](LICENSE)

            
