vexor

Name: vexor
Version: 0.6.1
Summary: A vector-powered CLI for semantic search over files.
Author: scarletkc
License: MIT
Requires Python: >=3.9
Keywords: ai, cli, file, semantic-search, typer, vector
Upload time: 2025-11-11 13:50:14
Repository: https://github.com/scarletkc/vexor
Requirements: google-genai>=0.5.0, openai>=1.0.0, python-dotenv>=1.0.0, scikit-learn>=1.3.0, numpy>=1.23.0, typer>=0.9.0, rich>=13.0.0, charset-normalizer>=3.3.0, pypdf>=4.0.0, python-docx>=0.8.11, python-pptx>=0.6.21, pytest>=7.4, pytest-cov>=4.1, build>=1.2.1, twine>=5.1.1
<div align="center">

<img src="https://raw.githubusercontent.com/scarletkc/vexor/refs/heads/main/assets/vexor.svg" alt="Vexor" width="50%" height="auto">

# Vexor

[![Python](https://img.shields.io/badge/python-3.9%2B-blue)](https://www.python.org/downloads/)
[![PyPI](https://img.shields.io/pypi/v/vexor.svg)](https://pypi.org/project/vexor/)
[![CI](https://img.shields.io/github/actions/workflow/status/scarletkc/vexor/publish.yml?branch=main)](https://github.com/scarletkc/vexor/actions/workflows/publish.yml)
[![Codecov](https://img.shields.io/codecov/c/github/scarletkc/vexor/main)](https://codecov.io/github/scarletkc/vexor)
[![License](https://img.shields.io/github/license/scarletkc/vexor.svg)](https://github.com/scarletkc/vexor/blob/main/LICENSE)

</div>

---

Vexor is a vector-powered CLI for semantic file search. It supports configurable remote embedding models and ranks results by cosine similarity.
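
For reference, the similarity score shown for each result is the standard cosine similarity between the query embedding $q$ and a file embedding $f$:

$$\mathrm{sim}(q, f) = \frac{q \cdot f}{\lVert q \rVert\,\lVert f \rVert}$$

Values closer to 1 indicate a closer semantic match.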

## Usage
Vexor is designed to work seamlessly with both humans and AI coding assistants through the terminal, enabling semantic file search in autonomous agent workflows.

When you remember what a file does but forget its name or location, Vexor's semantic search finds it instantly—no grep patterns or directory traversal needed.

## Install
Download from [releases](https://github.com/scarletkc/vexor/releases) if you don't have Python, or install with:
```bash
pip install vexor # or use pipx, uv
```
The CLI entry point is `vexor`.
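
If you prefer an isolated tool install (as the comment above suggests), the pipx and uv equivalents are:
```bash
pipx install vexor      # requires pipx
uv tool install vexor   # requires uv
```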

## Configure
Set the Gemini API key once and reuse it everywhere:
```bash
vexor config --set-api-key "YOUR_KEY"
```
Optional defaults:
```bash
vexor config --set-model gemini-embedding-001
vexor config --set-batch-size 0   # 0 = single request
vexor config --set-provider gemini
vexor config --set-base-url https://proxy.example.com  # optional proxy support for local LM Studio and similar tools; use --clear-base-url to reset
```
Provider defaults to `gemini`, so you only need to override it when switching to other backends (e.g., `openai`). Base URLs are optional and let you route requests through a custom proxy; run `vexor config --clear-base-url` to return to the official endpoint.
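
As a sketch, routing Vexor to a local OpenAI-compatible server such as LM Studio could look like the following; the URL and model name are placeholders for whatever your server actually exposes:
```bash
vexor config --set-provider openai                    # local servers typically speak the OpenAI format
vexor config --set-base-url http://localhost:1234/v1  # LM Studio's default local endpoint; adjust to your setup
vexor config --set-model local-embedding-model        # placeholder: use the embedding model id your server reports
```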

API keys can be supplied via `vexor config --set-api-key`, the `VEXOR_API_KEY` environment variable, or provider-specific variables (`GOOGLE_GENAI_API_KEY`, `OPENAI_API_KEY`). Example OpenAI setup:
```bash
vexor config --set-provider openai
vexor config --set-model text-embedding-3-small
export OPENAI_API_KEY="sk-..."   # or use vexor config --set-api-key
```
Configuration is stored in `~/.vexor/config.json`.

Inspect or reset every cached index:
```bash
vexor config --show-index-all
vexor config --clear-index-all
```

## Workflow
1. **Index** the project root (includes every subdirectory):
   ```bash
   vexor index --path ~/projects/demo --mode name --include-hidden
   ```
2. **Search** from anywhere, pointing to the same path:
   ```bash
   vexor search "api client config" --path ~/projects/demo --mode name --top 5
   ```
   Output example:
   ```
   Vexor semantic file search results
   ──────────────────────────────────
   #   Similarity   File path                      Preview
   1   0.923        ./src/config_loader.py        config loader entrypoint
   2   0.871        ./src/utils/config_parse.py   parse config helpers
   3   0.809        ./tests/test_config_loader.py tests for config loader
   ```

Tips:
- Keep one index per project root; subdirectories need separate indexes only if you explicitly run `vexor index` on them.
- Toggle `--no-recursive` (or `-n`) on both `index` and `search` when you only care about the current directory; recursive and non-recursive caches are stored separately.
- Hidden files are included only if both `index` and `search` use `--include-hidden`.
- Use `--ext`/`-e` (repeatable) on both `index` and `search` to limit indexing and search results to specific extensions, e.g. `--ext .py --ext .md`; a combined example follows this list.
- Re-running `vexor index` only re-embeds files whose names changed (or were added/removed); if more than half the files differ, it automatically falls back to a full rebuild for consistency.
- Specify the indexing mode with `--mode`:
  - `name`: embed only the file name (fastest, zero content reads).
  - `head`: grab the first snippet of supported text/code/PDF/DOCX/PPTX files for lightweight semantic context.
  - `brief`: summarize requirements documents (PRDs) by extracting high-frequency keywords (English and Chinese) so key requirements are quick to locate.
  - `full`: chunk the entire contents of supported text/code/PDF/DOCX/PPTX files into windows so long documents stay searchable end-to-end.
- Switch embedding providers (Gemini by default, OpenAI format supported) via `vexor config --set-provider PROVIDER` and pick a matching embedding model.
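
As a combined sketch of the flags above, indexing only the Python and Markdown files directly under the current directory (no recursion) and then querying that same cache could look like:
```bash
vexor index --path . --mode head --no-recursive --ext .py --ext .md
vexor search "http client retry logic" --path . --mode head --no-recursive --ext .py --ext .md --top 3
```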

## Commands
| Command | Description |
| ------- | ----------- |
| `vexor index --path PATH --mode MODE [--include-hidden] [--no-recursive] [--ext EXT ...] [--clear/--show]` | Scans `PATH` (recursively by default), embeds content according to `MODE` (`name`, `head`, `brief`, or `full`), and writes a cache under `~/.vexor`; see the example below the table for `--show`/`--clear`. |
| `vexor search QUERY --path PATH --mode MODE [--top K] [--include-hidden] [--no-recursive] [--ext EXT ...]` | Loads the cached embeddings for `PATH` (matching the chosen mode/recursion/hidden settings) and shows the best matches for `QUERY`. |
| `vexor doctor` | Checks whether the `vexor` command is available on the current `PATH`. |
| `vexor update` | Fetches the latest release version and shows links to update via GitHub or PyPI. |
| `vexor config --set-api-key/--clear-api-key` | Manage the stored API key (Gemini by default). |
| `vexor config --set-model/--set-batch-size/--show` | Manage default model, batch size, and inspect current settings. |
| `vexor config --set-provider/--set-base-url/--clear-base-url` | Switch embedding providers and optionally override the remote base URL. |
| `vexor config --show-index-all/--clear-index-all` | Inspect or delete every cached index regardless of path/mode. |
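
For instance, the per-path cache flags in the `vexor index` row can be used on their own to inspect or reset a single cache (path and mode are illustrative):
```bash
vexor index --path ~/projects/demo --mode name --show    # inspect the cache for this path/mode
vexor index --path ~/projects/demo --mode name --clear   # delete that cache
```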

## Documentation
See the [docs](https://github.com/scarletkc/vexor/tree/main/docs) for more details.

Contributions, issues, and PRs are all welcome!

Star this repo if you find it helpful!

## License
This project is licensed under the [MIT](https://github.com/scarletkc/vexor/blob/main/LICENSE) License.

            
