| Field | Value |
| --- | --- |
| Name | searchflow |
| Version | 0.0.111 |
| Summary | An assistant helping you to index webpages into structured datasets. |
| Author | Ben Selleslagh |
| License | MIT |
| Requires Python | <3.13, >=3.11 |
| Upload time | 2024-10-10 15:36:56 |
# SearchFlow
SearchFlow is an assistant designed to help you index webpages into structured datasets. It leverages various tools and models to scrape, process, and store web content efficiently.
## Features
- **Web Scraping**: Uses `trafilatura` for focused crawling and web scraping.
- **Document Processing**: Supports chunking and processing of various document types.
- **Database Management**: Manages projects, documents, and prompts using PostgreSQL.
- **Vector Search**: Utilizes vector search for document retrieval.
- **LLM Integration**: Integrates with language models for question answering and document grading.
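
The document-processing step splits scraped pages into chunks before they are stored and searched. As a rough, generic illustration of that idea (this is not searchflow's internal code, whose chunking logic is not shown here), a fixed-size character chunker with overlap might look like:

```python
# Generic illustration of text chunking -- NOT searchflow's actual implementation.
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of at most `size` characters."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks

print(len(chunk_text("a" * 500)))  # 4 chunks: starts at 0, 150, 300, 450
```

Overlap keeps sentence fragments that straddle a chunk boundary retrievable from either side, at the cost of some duplicated storage.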
## Installation
To set up the development environment, use the provided `Dockerfile` and `.devcontainer/devcontainer.json` for a consistent development setup.
### Prerequisites
- Docker
- Python 3.11 or higher
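
For reference, a minimal `.devcontainer/devcontainer.json` along these lines would wire the Dockerfile into the dev container (a sketch assuming the Dockerfile sits at the repository root; the project's actual file may differ):

```json
{
  "name": "searchflow-dev",
  "build": { "dockerfile": "../Dockerfile" },
  "customizations": {
    "vscode": { "extensions": ["ms-python.python"] }
  }
}
```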
## Usage
Install SearchFlow via pip:
```bash
pip install searchflow
```
### Quickstart
1. **Initialize the Database**
```python
from searchflow.db.postgresql import DB
db = DB()
```
2. **Create a project**
```python
db.create_project(project_name="MyProject")
```
3. **Import Data from a URL**
```python
from searchflow.importers import WebScraper
scraper = WebScraper(project_name='MyProject', db=db)
scraper.full_import("https://example.com", max_pages=100)
```
4. **Upload a File to the Project**
```python
from searchflow.importers import Files

with open("path/to/your/file.pdf", "rb") as f:
    bytes_data = f.read()

files = Files()
files.upload_file(
    document_data=[(bytes_data, "file.pdf")],
    project_name="MyProject",
    inference_type="local"
)
```
5. **List Files in a Project**
```python
files.list_files(project_name="MyProject")
```
6. **Remove a File from a Project**
```python
files.remove_file(project_name="MyProject", file_name="file.pdf")
```
### Vector Search
To perform a similarity search:
```python
from searchflow.db.postgresql import DB
db = DB()
results = db.similarity_search(project_name="MyProject", query="example query")
```
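
Conceptually, a similarity search like this ranks stored chunks by how close their embeddings are to the query's embedding. As a sketch of that idea only (plain Python with made-up toy vectors, not searchflow's PostgreSQL-backed implementation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy embeddings -- in practice these come from an embedding model.
query_vec = [1.0, 0.0]
doc_vecs = {"pricing_page": [0.9, 0.1], "careers_page": [0.0, 1.0]}

best = max(doc_vecs, key=lambda name: cosine_similarity(query_vec, doc_vecs[name]))
print(best)  # pricing_page
```

A real vector store computes the same ranking with an index over the embedding column rather than a linear scan.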