            <div align="center">

# [winzig](https://pypi.org/project/winzig/)

winzig is a tiny search engine designed for personal use that enables users to download and search for posts from their favourite feeds.  

This project was heavily inspired by the [microsearch](https://github.com/alexmolas/microsearch) project and this [article](https://www.alexmolas.com/2024/02/05/a-search-engine-in-80-lines.html) about it.  

![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)
![SQLite](https://img.shields.io/badge/sqlite-%2307405e.svg?style=for-the-badge&logo=sqlite&logoColor=white)
![Poetry](https://img.shields.io/badge/Poetry-%233B82F6.svg?style=for-the-badge&logo=poetry&logoColor=0B3D8D)
</div>


## Motivation

For quite some time, I'd been contemplating the idea of creating my own personal search engine: a tool that could search through my personal notes, books, articles, podcast transcripts, and anything else I wished to include. However, I was unsure of how or where to begin until I discovered the microsearch project, which reignited the idea. I also saw it as an opportunity to delve deeper into asynchronous Python.  

This project started as a clone of the `microsearch` project so that I could better understand how it worked. Later, I began making changes of my own, such as keeping all the data in a SQLite database and building a sort-of inverted index after crawling.  
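
To make the "sort-of inverted index in SQLite" idea concrete, here is a minimal sketch of what such an index can look like. The schema and names below are illustrative only and are not winzig's actual ones:

```python
import re
import sqlite3

# Illustrative sketch only: winzig's real schema differs, and the
# table/column names here are made up for the example.
conn = sqlite3.connect("search.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS posts (
    id INTEGER PRIMARY KEY,
    url TEXT UNIQUE,
    content TEXT
);
CREATE TABLE IF NOT EXISTS postings (
    term TEXT,
    post_id INTEGER REFERENCES posts(id),
    frequency INTEGER,
    PRIMARY KEY (term, post_id)
);
""")

def index_post(url: str, content: str) -> None:
    """Insert a post and record each term's frequency in it."""
    cur = conn.execute("INSERT INTO posts (url, content) VALUES (?, ?)", (url, content))
    post_id = cur.lastrowid
    terms = re.findall(r"\w+", content.lower())
    counts: dict[str, int] = {}
    for term in terms:
        counts[term] = counts.get(term, 0) + 1
    conn.executemany(
        "INSERT INTO postings (term, post_id, frequency) VALUES (?, ?, ?)",
        [(term, post_id, freq) for term, freq in counts.items()],
    )
    conn.commit()
```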

## Features

- **Fetch only what you need**: winzig skips previously fetched content, so only new posts are downloaded on each crawl (see the sketch after this list).  
- **Async, Async, Async**: Both crawling and the subsequent data processing run asynchronously, resulting in lightning-fast performance.  
- **Efficient data management**: All the data is stored in a SQLite database in your home directory, making it easy to retrieve and update.  
- **Easy-to-use CLI**: The CLI provides simple commands for crawling and searching, as well as clear feedback.  
- **Enhanced search speed**: With the heavy lifting done right after fetching the content, searches yield near-instant results.  
- **TUI**: winzig also provides a basic TUI for an interactive search experience.  
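
As a rough illustration of the first two bullets, a crawler along these lines can skip URLs that are already stored and fetch the rest concurrently. This is a hedged sketch rather than winzig's actual crawler; it assumes the third-party `aiohttp` library and the hypothetical `posts` table from the sketch above:

```python
import asyncio
import sqlite3

import aiohttp

# Hedged sketch, not winzig's actual crawler. Assumes the hypothetical
# `posts` table from the earlier sketch and the aiohttp library.
async def fetch_new(urls: list[str], db_path: str = "search.db") -> dict[str, str]:
    conn = sqlite3.connect(db_path)
    known = {row[0] for row in conn.execute("SELECT url FROM posts")}
    todo = [url for url in urls if url not in known]  # fetch only what's new

    async def fetch(session: aiohttp.ClientSession, url: str) -> tuple[str, str]:
        async with session.get(url) as resp:
            return url, await resp.text()

    async with aiohttp.ClientSession() as session:
        pages = await asyncio.gather(*(fetch(session, url) for url in todo))
    return dict(pages)
```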

## Installation

> You'll need Python >= 3.12 to run winzig.

### pip

```bash
pip install winzig
```

### pipx

```bash
pipx install winzig
```

### Cloning this repository

Clone this repo with `git clone`:

```bash
git clone https://github.com/dnlzrgz/winzig winzig
```

Or use `gh` if you prefer it instead:

```bash
gh repo clone dnlzrgz/winzig
```

Then, create a `virtualenv` inside the winzig directory:

```bash
python -m venv venv
```

Activate the `virtualenv`:

```bash
source venv/bin/activate
```

And run:

```bash
pip install .
```

Instead of using `pip` you can also use `poetry`:

```bash
poetry install
```

And now you should be able to run:

```bash
winzig --help
```

## Usage

To begin using winzig, the first step is to crawl some content. The easiest way to do this is to use the feeds file located in this repository along with the `winzig crawl feeds` command. The feeds will be stored in a SQLite database in your home directory, so there is no need to provide the file again unless you're adding new feeds. If you want to crawl specific posts directly instead, you can use `winzig crawl posts` and specify a file containing the URLs you want to fetch.  
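
The feeds and posts files are presumably just plain-text lists with one URL per line (this is an assumption; check the feeds file in the repository for the exact format). For example, with the second entry being a placeholder:

```text
https://chriscoyier.net/feed/
https://example.com/feed.xml
```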

> Currently, there is no way to manage the feeds or posts added to the database, so if you want to remove some of them, you will need to do it manually. However, it may be more efficient to simply delete the database and crawl again.  

### Crawl

The `crawl` command is a convenient and efficient way to update your database with new content. When used without any subcommand, it automatically checks for new content from the feeds stored in the database and tries to retrieve it. Basically, running:  

```bash
winzig crawl
```

Is equivalent to:

```bash
winzig crawl feeds
```

#### Feeds

The `feeds` subcommand fetches and extracts content from the posts of the specified feeds. The feeds are stored in the database, so there is no need to provide a file every time.

```bash
winzig crawl feeds --file feeds.txt
```

```bash
winzig crawl feeds
```

You can also provide feed URLs directly as arguments. These feeds, if valid, will also be saved to the database.  

```bash
winzig crawl feeds https://chriscoyier.net/feed/
```

#### Posts

By using the `posts` subcommand, you can extract content directly from the posts listed in the provided file.  

```bash
winzig crawl posts --file="posts"
```

Or, if you prefer it, you can pass the URLs as arguments:  

```bash
winzig crawl posts https://textual.textualize.io/blog/2024/02/11/file-magic-with-the-python-standard-library/
```

### Searching

The following command searches for content matching the provided query and, after a few seconds, returns a list of relevant links.  

```bash
winzig search --query="async databases with sqlalchemy"
```

By default, the number of results is `5`, but you can change this with the `-n` flag.  

```bash
winzig search --query="async databases with sqlalchemy" -n 10
```

You can narrow your search results with the `--filter` flag. Currently, the only supported filter is `domain`, which restricts the results to one or more domains.

```bash
winzig search --query "read large files" --filter domain='motherduck, textualize'
```

### TUI

If you prefer, you can use the TUI to interact with the search engine. The TUI is in its early stages, but it offers basic functionality and a faster search experience than the `search` command, since the content is indexed once rather than on every search.  

```bash
winzig tui
```

### Export

You can export your feeds and your posts to plain text or CSV format using the `export` command and the `feeds` and `posts` subcommands.  

```bash
winzig export feeds --format csv --output feeds.csv
```

```bash
winzig export posts
```

## More feeds, please

If you're looking to expand your feed collection significantly, you can get a curated list of feeds from the [blogs.hn](https://github.com/surprisetalk/blogs.hn) repository with just a couple of commands.  

1. Download the JSON file containing the relevant information from the `blogs.hn` repository.

```bash
curl -sL https://raw.githubusercontent.com/surprisetalk/blogs.hn/main/blogs.json -o hn.json
```

2. Extract the feeds using `jq`. Make sure you have it installed on your system.

```bash
jq -r '.[] | select(.feed != null) | .feed' hn.json >> urls
```
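
3. Add the extracted feeds with `winzig crawl feeds --file urls`.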

> Incorporating the feeds from the resulting file will significantly increase the number of requests made. In my experience, fetching posts from each feed, extracting content, and performing the other operations can take around 20 to 30 minutes, depending on your Internet connection speed. Search will still be pretty fast afterwards.

## About the ranking function

Like the `microsearch` project, winzig uses [Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25) as its ranking function. However, I am planning to add support for other variants of BM25, such as BM25+.
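
For reference, here is a minimal, self-contained sketch of the standard BM25 scoring function. It is the textbook formula, not winzig's actual implementation:

```python
import math
from collections import Counter

# Standard BM25 (sketch only, not winzig's actual code):
# score(D, Q) = sum over q in Q of
#   IDF(q) * f(q, D) * (k1 + 1) / (f(q, D) + k1 * (1 - b + b * |D| / avgdl))
def bm25(query: list[str], docs: list[list[str]], k1: float = 1.5, b: float = 0.75) -> list[float]:
    n_docs = len(docs)
    avgdl = sum(len(doc) for doc in docs) / n_docs
    doc_freq: Counter[str] = Counter()
    for doc in docs:
        doc_freq.update(set(doc))  # number of documents containing each term

    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for term in query:
            if tf[term] == 0:
                continue
            idf = math.log((n_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores
```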

### BM11 and BM15 variants

If you're searching from the CLI, you have the flexibility to adjust the `k1` and `b` parameters. Setting the latter to `0` or `1` turns BM25 into the BM15 or BM11 variant, respectively:  

```bash
winzig search --query="build search engine" --b 0 # BM15
winzig search --query="build search engine" --b 1 # BM11
```
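
In terms of the sketch above, `b=0` removes the document-length normalization entirely (BM15), while `b=1` applies full length normalization (BM11).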

## Roadmap

- [ ] Improve TUI.
- [ ] Add tests.  
- [ ] Add multiple ranking functions.
- [ ] Add support for documents like markdown or plain text files.  
- [ ] Add support for PDFs and other formats.  
- [ ] Add commands to manage the SQLite database.  
- [ ] Add support for advanced queries.  

## Contributing

If you are interested in contributing, please open an issue first. I will try to answer as soon as possible.  


            
