<div align="center">
<a href="https://github.com/OEvortex/Webscout">
<img src="https://img.shields.io/badge/WebScout-Ultimate%20Toolkit-blue?style=for-the-badge&logo=python&logoColor=white" alt="WebScout Logo">
</a>
<h1>Webscout</h1>
<p><strong>Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More</strong></p>
<p>
Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.
</p>
<!-- Badges -->
<p>
<a href="https://pypi.org/project/webscout/"><img src="https://img.shields.io/pypi/v/webscout.svg?style=flat-square&logo=pypi&label=PyPI" alt="PyPI Version"></a>
<a href="https://pepy.tech/project/webscout"><img src="https://static.pepy.tech/badge/webscout/month?style=flat-square" alt="Monthly Downloads"></a>
<a href="https://pepy.tech/project/webscout"><img src="https://static.pepy.tech/badge/webscout?style=flat-square" alt="Total Downloads"></a>
<a href="#"><img src="https://img.shields.io/pypi/pyversions/webscout?style=flat-square&logo=python" alt="Python Version"></a>
<a href="https://deepwiki.com/OEvortex/Webscout"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
</p>
</div>
<hr/>
## 📋 Table of Contents
- [🚀 Features](#-features)
- [⚙️ Installation](#️-installation)
- [🖥️ Command Line Interface](#️-command-line-interface)
- [🔄 OpenAI-Compatible API Server](#-openai-compatible-api-server)
- [🔍 Search Engines](#-search-engines)
- [🦆 DuckDuckGo Search](#-duckduckgo-search-with-webs-and-asyncwebs)
- [💻 WEBS API Reference](#-webs-api-reference)
- [🤖 AI Models and Voices](#-ai-models-and-voices)
- [💬 AI Chat Providers](#-ai-chat-providers)
- [👨‍💻 Advanced AI Interfaces](#-advanced-ai-interfaces)
- [🤝 Contributing](#-contributing)
- [🙏 Acknowledgments](#-acknowledgments)
<hr/>
> [!IMPORTANT]
> **Webscout supports three types of compatibility:**
>
> - **Native Compatibility:** Webscout's own native API for maximum flexibility
> - **OpenAI Compatibility:** Use providers with OpenAI-compatible interfaces
> - **Local LLM Compatibility:** Run local models with [Inferno](https://github.com/HelpingAI/inferno), an OpenAI-compatible server (now a standalone package)
>
> Choose the approach that best fits your needs! For OpenAI compatibility, check the [OpenAI Providers README](webscout/Provider/OPENAI/README.md) or see the [OpenAI-Compatible API Server](#-openai-compatible-api-server) section below.

> [!NOTE]
> Webscout supports over 90 AI providers including: LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.
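
The "consistent interfaces" claim above can be sketched as a structural type: any provider exposing a compatible `chat()` method is interchangeable. The class and function names below are illustrative stand-ins, not Webscout's actual classes.

```python
from typing import Protocol


class ChatProvider(Protocol):
    """Rough shape shared by chat providers (illustrative, not a Webscout class)."""
    def chat(self, prompt: str) -> str: ...


class EchoProvider:
    """A stand-in provider used purely to demonstrate the pattern."""
    def chat(self, prompt: str) -> str:
        return f"echo: {prompt}"


def ask(provider: ChatProvider, prompt: str) -> str:
    # Any provider with a compatible .chat() can be swapped in unchanged
    return provider.chat(prompt)


print(ask(EchoProvider(), "hello"))  # echo: hello
```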
<div align="center">
<!-- Social/Support Links -->
<p>
<a href="https://t.me/PyscoutAI"><img alt="Telegram Group" src="https://img.shields.io/badge/Telegram%20Group-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
<a href="https://t.me/ANONYMOUS_56788"><img alt="Developer Telegram" src="https://img.shields.io/badge/Developer%20Contact-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
<a href="https://youtube.com/@OEvortex"><img alt="YouTube" src="https://img.shields.io/badge/YouTube-FF0000?style=for-the-badge&logo=youtube&logoColor=white"></a>
<a href="https://www.linkedin.com/in/oe-vortex-29a407265/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a>
<a href="https://www.instagram.com/oevortex/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a>
<a href="https://buymeacoffee.com/oevortex"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a>
</p>
</div>
<hr/>
## 🚀 Features
<details open>
<summary><b>Search & AI</b></summary>
<p>
- **Comprehensive Search:** Leverage Google, DuckDuckGo, and Yep for diverse search results
- **AI Powerhouse:** Access and interact with various AI models through three compatibility options:
- **Native API:** Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
- **[OpenAI-Compatible Providers](webscout/Provider/OPENAI/README.md):** Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
- **[Local LLMs with Inferno](https://github.com/HelpingAI/inferno):** Run local models with an OpenAI-compatible server (now available as a standalone package)
- **[AI Search](webscout/Provider/AISEARCH/README.md):** AI-powered search engines with advanced capabilities
</p>
</details>
<details open>
<summary><b>Media & Content Tools</b></summary>
<p>
- **[YouTube Toolkit](webscout/Extra/YTToolkit/README.md):** Advanced YouTube video and transcript management with multi-language support
- **[Text-to-Speech (TTS)](webscout/Provider/TTS/README.md):** Convert text into natural-sounding speech using multiple AI-powered providers
- **[Text-to-Image](webscout/Provider/TTI/README.md):** Generate high-quality images using a wide range of AI art providers
- **[Weather Tools](webscout/Extra/weather.md):** Retrieve detailed weather information for any location
</p>
</details>
<details open>
<summary><b>Developer Tools</b></summary>
<p>
- **[GitAPI](webscout/Extra/GitToolkit/gitapi):** Powerful GitHub data extraction toolkit without authentication requirements for public data
- **[SwiftCLI](webscout/swiftcli/Readme.md):** A powerful and elegant CLI framework for beautiful command-line interfaces
- **[LitPrinter](webscout/litprinter/Readme.md):** Styled console output with rich formatting and colors
- **[LitLogger](webscout/litlogger/README.md):** Simplified logging with customizable formats and color schemes
- **[LitAgent](webscout/litagent/Readme.md):** Modern user agent generator that keeps your requests undetectable
- **[Scout](webscout/scout/README.md):** Advanced web parsing and crawling library with intelligent HTML/XML parsing
- **[Inferno](https://github.com/HelpingAI/inferno):** Run local LLMs with an OpenAI-compatible API and interactive CLI (now a standalone package: `pip install inferno-llm`)
- **[GGUF Conversion](webscout/Extra/gguf.md):** Convert and quantize Hugging Face models to GGUF format
</p>
</details>
<details open>
<summary><b>Privacy & Utilities</b></summary>
<p>
- **[Tempmail](webscout/Extra/tempmail/README.md) & Temp Number:** Generate temporary email addresses and phone numbers
- **[Awesome Prompts](webscout/Extra/Act.md):** Curated collection of system prompts for specialized AI personas
</p>
</details>
<hr/>
## ⚙️ Installation
Webscout supports multiple installation methods to fit your workflow:
### 📦 Standard Installation
```bash
# Install from PyPI
pip install -U webscout
# Install with API server dependencies
pip install -U "webscout[api]"
# Install with development dependencies
pip install -U "webscout[dev]"
```
### ⚡ UV Package Manager (Recommended)
[UV](https://github.com/astral-sh/uv) is a fast Python package manager. Webscout has full UV support:
```bash
# Install UV first (if not already installed)
pip install uv
# Install Webscout with UV
uv add webscout
# Install with API dependencies
uv add "webscout[api]"
# Run Webscout directly with UV (no installation needed)
uv run webscout --help
# Run with API dependencies
uv run --extra api webscout-server
# Install as a UV tool for global access
uv tool install webscout
# Use UV tool commands
webscout --help
webscout-server
```
### 🔧 Development Installation
```bash
# Clone the repository
git clone https://github.com/OEvortex/Webscout.git
cd Webscout
# Install in development mode with UV
uv sync --extra dev --extra api
# Or with pip
pip install -e ".[dev,api]"
# Or with uv pip
uv pip install -e ".[dev,api]"
```
### 🐳 Docker Installation
```bash
# Pull and run the Docker image
docker pull oevortex/webscout:latest
docker run -it oevortex/webscout:latest
```
### 📱 Quick Start Commands
After installation, you can immediately start using Webscout:
```bash
# Check version
webscout version
# Search the web
webscout text -k "python programming"
# Start API server
webscout-server
# Get help
webscout --help
```
<hr/>
## 🖥️ Command Line Interface
Webscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:
### 🚀 Direct Commands (Recommended)
After installing with `uv tool install webscout` or `pip install webscout`:
```bash
# Get help
webscout --help
# Start API server
webscout-server
```
### 🔧 UV Run Commands (No Installation Required)
```bash
# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run --extra api webscout-server
```
### 📦 Python Module Commands
```bash
# Traditional Python module execution
python -m webscout --help
python -m webscout.auth.server
```
<details open>
<summary><b>🔍 Web Search Commands</b></summary>
<p>
| Command | Description | Example |
| --------------------------------- | --------------------------- | ----------------------------------------- |
| `webscout text -k "query"` | Perform a text search | `webscout text -k "python programming"` |
| `webscout answers -k "query"` | Get instant answers | `webscout answers -k "what is AI"` |
| `webscout images -k "query"` | Search for images | `webscout images -k "nature photography"` |
| `webscout videos -k "query"` | Search for videos | `webscout videos -k "python tutorials"` |
| `webscout news -k "query"` | Search for news articles | `webscout news -k "technology trends"` |
| `webscout maps -k "query"` | Perform a maps search | `webscout maps -k "restaurants near me"` |
| `webscout translate -k "text"` | Translate text | `webscout translate -k "hello world"` |
| `webscout suggestions -k "query"` | Get search suggestions | `webscout suggestions -k "how to"` |
| `webscout weather -l "location"` | Get weather information | `webscout weather -l "New York"` |
| `webscout version` | Display the current version | `webscout version` |
**Google Search Commands:**
| Command | Description | Example |
|---------|-------------|---------|
| `webscout google_text -k "query"` | Google text search | `webscout google_text -k "machine learning"` |
| `webscout google_news -k "query"` | Google news search | `webscout google_news -k "AI breakthrough"` |
| `webscout google_suggestions -q "query"` | Google suggestions | `webscout google_suggestions -q "python"` |
**Yep Search Commands:**
| Command | Description | Example |
|---------|-------------|---------|
| `webscout yep_text -k "query"` | Yep text search | `webscout yep_text -k "web development"` |
| `webscout yep_images -k "query"` | Yep image search | `webscout yep_images -k "landscapes"` |
| `webscout yep_suggestions -q "query"` | Yep suggestions | `webscout yep_suggestions -q "javascript"` |
</p>
</details>
<details open>
<summary><b>Inferno LLM Commands</b></summary>
<p>
Inferno is now a standalone package. Install it separately with:
```bash
pip install inferno-llm
```
After installation, you can use its CLI for managing and using local LLMs:
```bash
inferno --help
```
| Command | Description |
| ------------------------ | ----------------------------------------------- |
| `inferno pull <model>` | Download a model from Hugging Face |
| `inferno list` | List downloaded models |
| `inferno serve <model>` | Start a model server with OpenAI-compatible API |
| `inferno run <model>` | Chat with a model interactively |
| `inferno remove <model>` | Remove a downloaded model |
| `inferno version` | Show version information |
For more information, visit the [Inferno GitHub repository](https://github.com/HelpingAI/inferno) or [PyPI package page](https://pypi.org/project/inferno-llm/).
</p>
</details>
> [!NOTE]
> **Hardware requirements for running models with Inferno:**
>
> - Around 2 GB of RAM for 1B models
> - Around 4 GB of RAM for 3B models
> - At least 8 GB of RAM for 7B models
> - 16 GB of RAM for 13B models
> - 32 GB of RAM for 33B models
> - GPU acceleration is recommended for better performance
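
The rule of thumb above can be captured in a small helper (hypothetical, not part of Inferno or Webscout):

```python
def min_ram_gb(model_billions: float) -> int:
    """Rough minimum system RAM (GB) for a model of the given parameter count,
    following the rule of thumb above (hypothetical helper)."""
    thresholds = [(1, 2), (3, 4), (7, 8), (13, 16), (33, 32)]
    for size_b, ram_gb in thresholds:
        if model_billions <= size_b:
            return ram_gb
    return 64  # assumption: anything larger needs at least this much


print(min_ram_gb(7))  # 8
```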
<details open>
<summary><b>🔄 OpenAI-Compatible API Server</b></summary>
<p>
Webscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.
### Starting the API Server
#### From Command Line (Recommended)
```bash
# Start with default settings (port 8000)
webscout-server
# Start with custom port
webscout-server --port 8080
# Start with API key authentication
webscout-server --api-key "your-secret-key"
# Start in no-auth mode using command line flag (no API key required)
webscout-server --no-auth
# Start in no-auth mode using an environment variable
WEBSCOUT_NO_AUTH=true webscout-server
# Specify a default provider
webscout-server --default-provider "Claude"
# Run in debug mode
webscout-server --debug
# Get help for all options (includes authentication options)
webscout-server --help
```
#### Alternative Methods
```bash
# Using UV (no installation required)
uv run --extra api webscout-server
# Using Python module
python -m webscout.auth.server
```
#### Environment Variables
Webscout server supports configuration through environment variables:
```bash
# Start server in no-auth mode (no API key required)
WEBSCOUT_NO_AUTH=true webscout-server
# Disable rate limiting
WEBSCOUT_NO_RATE_LIMIT=true webscout-server
# Start with a custom port via environment variable
WEBSCOUT_PORT=7860 webscout-server
```
For a complete list of supported environment variables and Docker deployment options, see [DOCKER.md](DOCKER.md).
#### From Python Code
> **Recommended:**
> Use `start_server` from `webscout.client` for the simplest programmatic startup.
> For advanced control (custom host, debug, etc.), use `run_api`.
```python
# Method 1: Using the helper function (recommended)
from webscout.client import start_server
# Start with default settings
start_server()

# Start with custom settings
start_server(port=8080, api_key="your-secret-key", default_provider="Claude")

# Start in no-auth mode (no API key required)
start_server(no_auth=True)

# Method 2: Advanced usage with run_api
from webscout.client import run_api

run_api(
    host="0.0.0.0",
    debug=True
)
```
### Using the API
Once the server is running, you can use it with any OpenAI client library or tool:
```python
# Using the OpenAI Python client
from openai import OpenAI
client = OpenAI(
    api_key="your-secret-key",           # Only needed if you set an API key
    base_url="http://localhost:8000/v1"  # Point to your local server
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",  # This can be any model name registered with Webscout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(response.choices[0].message.content)
```
### Using with cURL
```bash
# Basic chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-key"
```
### Available Endpoints
- `GET /v1/models` - List all available models
- `GET /v1/models/{model_name}` - Get information about a specific model
- `POST /v1/chat/completions` - Create a chat completion
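
The `GET /v1/models` endpoint returns the standard OpenAI list shape. A minimal sketch of extracting model IDs from such a response (the sample payload below is illustrative, not actual server output):

```python
import json

# Illustrative payload in the standard OpenAI "list" format
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4", "object": "model"},
    {"id": "Claude", "object": "model"}
  ]
}
""")

# Each entry in "data" carries the model id a client can pass as `model=...`
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)  # ['gpt-4', 'Claude']
```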
</p>
</details>
<hr/>
## 🔍 Search Engines
Webscout provides multiple search engine interfaces for diverse search capabilities.
### YepSearch - Yep.com Interface
```python
from webscout import YepSearch
# Initialize YepSearch
yep = YepSearch(
    timeout=20,    # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True    # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)
# Get search suggestions
suggestions = yep.suggestions("hist")
```
### GoogleSearch - Google Interface
```python
from webscout import GoogleSearch
# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,    # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True    # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",            # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)
# Get search suggestions
suggestions = google.suggestions("how to")
# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)
```
<hr/>
## 🦆 DuckDuckGo Search with WEBS and AsyncWEBS
Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the `WEBS` and `AsyncWEBS` classes.
<details open>
<summary><b>Synchronous Usage with WEBS</b></summary>
<p>
```python
from webscout import WEBS
# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")
```
</p>
</details>
<details open>
<summary><b>Asynchronous Usage with AsyncWEBS</b></summary>
<p>
```python
import asyncio
from webscout import AsyncWEBS
async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]
        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)
    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())
```
</p>
</details>
> [!TIP]
> Always use these classes with a context manager (`with` statement) to ensure proper resource management and cleanup.
<hr/>
## 💻 WEBS API Reference
The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.
### Available Search Methods
| Method | Description | Example |
| --------------- | ------------------- | -------------------------------------------- |
| `text()` | General web search | `webs.text('python programming')` |
| `answers()` | Instant answers | `webs.answers('population of france')` |
| `images()` | Image search | `webs.images('nature photography')` |
| `videos()` | Video search | `webs.videos('documentary')` |
| `news()` | News articles | `webs.news('technology')` |
| `maps()` | Location search | `webs.maps('restaurants', place='new york')` |
| `translate()` | Text translation | `webs.translate('hello', to='es')` |
| `suggestions()` | Search suggestions | `webs.suggestions('how to')` |
| `weather()` | Weather information | `webs.weather('london')` |
<details>
<summary><b>Example: Text Search</b></summary>
<p>
```python
from webscout import WEBS
with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',     # Optional: Region for results
        safesearch='off',   # Optional: 'on', 'moderate', 'off'
        timelimit='y',      # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10      # Optional: Limit number of results
    )
    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")
```
</p>
</details>
<details>
<summary><b>Example: News Search with Formatting</b></summary>
<p>
```python
from webscout import WEBS
import datetime
def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )
        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')
            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"
            formatted_news.append(entry)
        return formatted_news
# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))
```
</p>
</details>
<details>
<summary><b>Example: Weather Information</b></summary>
<p>
```python
from webscout import WEBS
with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")
    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")
```
</p>
</details>
<hr/>
## 🤖 AI Models and Voices
Webscout provides easy access to a wide range of AI models and voice options.
<details open>
<summary><b>LLM Models</b></summary>
<p>
Access and manage Large Language Models with Webscout's model utilities.
```python
from webscout import model
from rich import print
# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")
# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")
```
</p>
</details>
<details open>
<summary><b>TTS Voices</b></summary>
<p>
Access and manage Text-to-Speech voices across multiple providers.
```python
from webscout import model
from rich import print
# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")
# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")
```
</p>
</details>
<hr/>
## 💬 AI Chat Providers
Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.
### Popular AI Providers
<div class="provider-table">
| Provider | Description | Key Features |
| ---------------- | ------------------------ | ---------------------------------- |
| `OPENAI` | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| `GEMINI` | Google's Gemini models | Web search capabilities |
| `Meta` | Meta's AI assistant | Image generation, web search |
| `GROQ` | Fast inference platform | High-speed inference, tool calling |
| `LLAMA` | Meta's Llama models | Open weights models |
| `DeepInfra` | Various open models | Multiple model options |
| `Cohere` | Cohere's language models | Command models |
| `PerplexityLabs` | Perplexity AI | Web search integration |
| `YEPCHAT` | Yep.com's AI | Streaming responses |
| `ChatGPTClone` | ChatGPT-like interface | Multiple model options |
| `TypeGPT` | TypeChat models | Multiple model options |
</div>
<details>
<summary><b>Example: Using Meta AI</b></summary>
<p>
```python
from webscout import Meta
# For basic usage (no authentication required)
meta_ai = Meta()
# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)
# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")
# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])
# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])
```
</p>
</details>
<details>
<summary><b>Example: GROQ with Tool Calling</b></summary>
<p>
```python
from webscout import GROQ, WEBS
import json
# Initialize GROQ client
client = GROQ(api_key="your_api_key")
# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        # Note: eval() runs arbitrary code; only pass it trusted input
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        results = WEBS().text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})
# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)
# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]
# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)
response = client.chat("Find information about quantum computing", tools=tools)
print(response)
```
</p>
</details>
<details open>
<summary><b>GGUF Model Conversion</b></summary>
<p>
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.
```python
from webscout.Extra.gguf import ModelConverter
# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                   # Quantization method
)
# Run the conversion
converter.convert()
```
#### Available Quantization Methods
| Method | Description |
| -------- | ------------------------------------------------------------- |
| `fp16` | 16-bit floating point - maximum accuracy, largest size |
| `q2_k` | 2-bit quantization (smallest size, lowest accuracy) |
| `q3_k_l` | 3-bit quantization (large) - balanced for size/accuracy |
| `q3_k_m` | 3-bit quantization (medium) - good balance for most use cases |
| `q3_k_s` | 3-bit quantization (small) - optimized for speed |
| `q4_0` | 4-bit quantization (version 0) - standard 4-bit compression |
| `q4_1` | 4-bit quantization (version 1) - improved accuracy over q4_0 |
| `q4_k_m` | 4-bit quantization (medium) - balanced for most models |
| `q4_k_s` | 4-bit quantization (small) - optimized for speed |
| `q5_0` | 5-bit quantization (version 0) - high accuracy, larger size |
| `q5_1` | 5-bit quantization (version 1) - improved accuracy over q5_0 |
| `q5_k_m` | 5-bit quantization (medium) - best balance for quality/size |
| `q5_k_s` | 5-bit quantization (small) - optimized for speed |
| `q6_k`   | 6-bit quantization - very high accuracy, larger size          |
| `q8_0`   | 8-bit quantization - near-original accuracy, largest quantized size |
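
The naming convention above encodes the bit width directly in the method name; a small helper to read it off (illustrative, not part of Webscout's API):

```python
def quant_bits(method: str) -> int:
    """Extract the bit width from a quantization method name, e.g. 'q4_k_m' -> 4."""
    if method == "fp16":
        return 16
    if method.startswith("q") and len(method) > 1 and method[1].isdigit():
        return int(method[1])
    raise ValueError(f"Unrecognized method: {method}")


print(quant_bits("q4_k_m"))  # 4
print(quant_bits("fp16"))    # 16
```

Lower bit widths shrink the file and memory footprint at the cost of accuracy, as the table above describes.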
#### Command Line Usage
```bash
python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"
```
</p>
</details>
<div align="center">
<p>
<a href="https://youtube.com/@OEvortex">▶️ Vortex's YouTube Channel</a> |
<a href="https://t.me/ANONYMOUS_56788">📢 Anonymous Coder's Telegram</a>
</p>
</div>
<hr/>
## 🤝 Contributing
Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:
1. Fork the repository
2. Create a new branch for your feature or bug fix
3. Make your changes and commit them with descriptive messages
4. Push your branch to your forked repository
5. Submit a pull request to the main repository
## 🙏 Acknowledgments
- All the amazing developers who have contributed to the project
- The open-source community for their support and inspiration
<hr/>
<div align="center">
<p>Made with ❤️ by the Webscout team</p>
</div>
Utilities</b></summary>\r\n<p>\r\n\r\n- **[Tempmail](webscout/Extra/tempmail/README.md) & Temp Number:** Generate temporary email addresses and phone numbers\r\n- **[Awesome Prompts](webscout/Extra/Act.md):** Curated collection of system prompts for specialized AI personas\r\n</p>\r\n</details>\r\n\r\n<hr/>\r\n\r\n## \u2699\ufe0f Installation\r\n\r\nWebscout supports multiple installation methods to fit your workflow:\r\n\r\n### \ud83d\udce6 Standard Installation\r\n\r\n```bash\r\n# Install from PyPI\r\npip install -U webscout\r\n\r\n# Install with API server dependencies\r\npip install -U \"webscout[api]\"\r\n\r\n# Install with development dependencies\r\npip install -U \"webscout[dev]\"\r\n```\r\n\r\n### \u26a1 UV Package Manager (Recommended)\r\n\r\n[UV](https://github.com/astral-sh/uv) is a fast Python package manager. Webscout has full UV support:\r\n\r\n```bash\r\n# Install UV first (if not already installed)\r\npip install uv\r\n\r\n# Install Webscout with UV\r\nuv add webscout\r\n\r\n# Install with API dependencies\r\nuv add \"webscout[api]\"\r\n\r\n# Run Webscout directly with UV (no installation needed)\r\nuv run webscout --help\r\n\r\n# Run with API dependencies\r\nuv run webscout --extra api webscout-server\r\n\r\n# Install as a UV tool for global access\r\nuv tool install webscout\r\n\r\n# Use UV tool commands\r\nwebscout --help\r\nwebscout-server\r\n```\r\n\r\n### \ud83d\udd27 Development Installation\r\n\r\n```bash\r\n# Clone the repository\r\ngit clone https://github.com/OEvortex/Webscout.git\r\ncd Webscout\r\n\r\n# Install in development mode with UV\r\nuv sync --extra dev --extra api\r\n\r\n# Or with pip\r\npip install -e \".[dev,api]\"\r\n\r\n# Or with uv pip\r\nuv pip install -e \".[dev,api]\"\r\n```\r\n\r\n### \ud83d\udc33 Docker Installation\r\n\r\n```bash\r\n# Pull and run the Docker image\r\ndocker pull oevortex/webscout:latest\r\ndocker run -it oevortex/webscout:latest\r\n```\r\n\r\n### \ud83d\udcf1 Quick Start Commands\r\n\r\nAfter 
installation, you can immediately start using Webscout:\r\n\r\n```bash\r\n# Check version\r\nwebscout version\r\n\r\n# Search the web\r\nwebscout text -k \"python programming\"\r\n\r\n# Start API server\r\nwebscout-server\r\n\r\n# Get help\r\nwebscout --help\r\n```\r\n\r\n<hr/>\r\n\r\n## \ud83d\udda5\ufe0f Command Line Interface\r\n\r\nWebscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:\r\n\r\n### \ud83d\ude80 Direct Commands (Recommended)\r\n\r\nAfter installing with `uv tool install webscout` or `pip install webscout`:\r\n\r\n```bash\r\n# Get help\r\nwebscout --help\r\n\r\n# Start API server\r\nwebscout-server\r\n```\r\n\r\n### \ud83d\udd27 UV Run Commands (No Installation Required)\r\n\r\n```bash\r\n# Run directly with UV (downloads and runs automatically)\r\nuv run webscout --help\r\nuv run --extra api webscout-server\r\n```\r\n\r\n### \ud83d\udce6 Python Module Commands\r\n\r\n```bash\r\n# Traditional Python module execution\r\npython -m webscout --help\r\npython -m webscout.auth.server\r\n```\r\n\r\n<details open>\r\n<summary><b>\ud83d\udd0d Web Search Commands</b></summary>\r\n<p>\r\n\r\n| Command | Description | Example |\r\n| --------------------------------- | --------------------------- | ----------------------------------------- |\r\n| `webscout text -k \"query\"` | Perform a text search | `webscout text -k \"python programming\"` |\r\n| `webscout answers -k \"query\"` | Get instant answers | `webscout answers -k \"what is AI\"` |\r\n| `webscout images -k \"query\"` | Search for images | `webscout images -k \"nature photography\"` |\r\n| `webscout videos -k \"query\"` | Search for videos | `webscout videos -k \"python tutorials\"` |\r\n| `webscout news -k \"query\"` | Search for news articles | `webscout news -k \"technology trends\"` |\r\n| `webscout maps -k \"query\"` | Perform a maps search | `webscout maps -k \"restaurants near me\"` |\r\n| `webscout translate -k \"text\"` | Translate 
text | `webscout translate -k \"hello world\"` |\r\n| `webscout suggestions -k \"query\"` | Get search suggestions | `webscout suggestions -k \"how to\"` |\r\n| `webscout weather -l \"location\"` | Get weather information | `webscout weather -l \"New York\"` |\r\n| `webscout version` | Display the current version | `webscout version` |\r\n\r\n**Google Search Commands:**\r\n| Command | Description | Example |\r\n|---------|-------------|---------|\r\n| `webscout google_text -k \"query\"` | Google text search | `webscout google_text -k \"machine learning\"` |\r\n| `webscout google_news -k \"query\"` | Google news search | `webscout google_news -k \"AI breakthrough\"` |\r\n| `webscout google_suggestions -q \"query\"` | Google suggestions | `webscout google_suggestions -q \"python\"` |\r\n\r\n**Yep Search Commands:**\r\n| Command | Description | Example |\r\n|---------|-------------|---------|\r\n| `webscout yep_text -k \"query\"` | Yep text search | `webscout yep_text -k \"web development\"` |\r\n| `webscout yep_images -k \"query\"` | Yep image search | `webscout yep_images -k \"landscapes\"` |\r\n| `webscout yep_suggestions -q \"query\"` | Yep suggestions | `webscout yep_suggestions -q \"javascript\"` |\r\n\r\n</p>\r\n</details>\r\n\r\n<details open>\r\n<summary><b>Inferno LLM Commands</b></summary>\r\n<p>\r\n\r\nInferno is now a standalone package. 
Install it separately with:\r\n\r\n```bash\r\npip install inferno-llm\r\n```\r\n\r\nAfter installation, you can use its CLI for managing and using local LLMs:\r\n\r\n```bash\r\ninferno --help\r\n```\r\n\r\n| Command | Description |\r\n| ------------------------ | ----------------------------------------------- |\r\n| `inferno pull <model>` | Download a model from Hugging Face |\r\n| `inferno list` | List downloaded models |\r\n| `inferno serve <model>` | Start a model server with OpenAI-compatible API |\r\n| `inferno run <model>` | Chat with a model interactively |\r\n| `inferno remove <model>` | Remove a downloaded model |\r\n| `inferno version` | Show version information |\r\n\r\nFor more information, visit the [Inferno GitHub repository](https://github.com/HelpingAI/inferno) or [PyPI package page](https://pypi.org/project/inferno-llm/).\r\n\r\n</p>\r\n</details>\r\n\r\n> [!NOTE]\r\n> **Hardware requirements for running models with Inferno:**\r\n>\r\n> - Around 2 GB of RAM for 1B models\r\n> - Around 4 GB of RAM for 3B models\r\n> - At least 8 GB of RAM for 7B models\r\n> - 16 GB of RAM for 13B models\r\n> - 32 GB of RAM for 33B models\r\n> - GPU acceleration is recommended for better performance\r\n\r\n<details open>\r\n<summary><b>\ud83d\udd04 OpenAI-Compatible API Server</b></summary>\r\n<p>\r\n\r\nWebscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.\r\n\r\n### Starting the API Server\r\n\r\n#### From Command Line (Recommended)\r\n\r\n```bash\r\n# Start with default settings (port 8000)\r\nwebscout-server\r\n\r\n# Start with custom port\r\nwebscout-server --port 8080\r\n\r\n# Start with API key authentication\r\nwebscout-server --api-key \"your-secret-key\"\r\n\r\n# Start in no-auth mode using command line flag (no API key required)\r\nwebscout-server --no-auth\r\n\r\n# Start in no-auth mode using environment variable\r\nWEBSCOUT_NO_AUTH=true 
webscout-server\r\n\r\n# Specify a default provider\r\nwebscout-server --default-provider \"Claude\"\r\n\r\n# Run in debug mode\r\nwebscout-server --debug\r\n\r\n# Get help for all options (includes authentication options)\r\nwebscout-server --help\r\n```\r\n\r\n#### Alternative Methods\r\n\r\n```bash\r\n# Using UV (no installation required)\r\nuv run --extra api webscout-server\r\n\r\n# Using Python module\r\npython -m webscout.auth.server\r\n```\r\n\r\n#### Environment Variables\r\n\r\nWebscout server supports configuration through environment variables:\r\n\r\n```bash\r\n# Start server in no-auth mode (no API key required)\r\nWEBSCOUT_NO_AUTH=true webscout-server\r\n\r\n# Disable rate limiting\r\nWEBSCOUT_NO_RATE_LIMIT=true webscout-server\r\n\r\n# Start with custom port using environment variable\r\nWEBSCOUT_PORT=7860 webscout-server\r\n```\r\n\r\nFor a complete list of supported environment variables and Docker deployment options, see [DOCKER.md](DOCKER.md).\r\n\r\n#### From Python Code\r\n\r\n> **Recommended:** \r\n> Use `start_server` from `webscout.client` for the simplest programmatic startup. 
\r\n> For advanced control (custom host, debug, etc.), use `run_api`.\r\n\r\n```python\r\n# Method 1: Using the helper function (recommended)\r\nfrom webscout.client import start_server\r\n\r\n# Start with default settings\r\nstart_server()\r\n\r\n# Start with custom settings\r\nstart_server(port=8080, api_key=\"your-secret-key\", default_provider=\"Claude\")\r\n\r\n# Start in no-auth mode (no API key required)\r\nstart_server(no_auth=True)\r\n\r\n# Method 2: Advanced usage with run_api\r\nfrom webscout.client import run_api\r\n\r\nrun_api(\r\n host=\"0.0.0.0\",\r\n debug=True\r\n)\r\n```\r\n\r\n### Using the API\r\n\r\nOnce the server is running, you can use it with any OpenAI client library or tool:\r\n\r\n```python\r\n# Using the OpenAI Python client\r\nfrom openai import OpenAI\r\n\r\nclient = OpenAI(\r\n api_key=\"your-secret-key\", # Only needed if you set an API key\r\n base_url=\"http://localhost:8000/v1\" # Point to your local server\r\n)\r\n\r\n# Chat completion\r\nresponse = client.chat.completions.create(\r\n model=\"gpt-4\", # This can be any model name registered with Webscout\r\n messages=[\r\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": \"Hello, how are you?\"}\r\n ]\r\n)\r\n\r\nprint(response.choices[0].message.content)\r\n```\r\n\r\n### Using with cURL\r\n\r\n```bash\r\n# Basic chat completion request\r\ncurl http://localhost:8000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -H \"Authorization: Bearer your-secret-key\" \\\r\n -d '{\r\n \"model\": \"gpt-4\",\r\n \"messages\": [\r\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": \"Hello, how are you?\"}\r\n ]\r\n }'\r\n\r\n# List available models\r\ncurl http://localhost:8000/v1/models \\\r\n -H \"Authorization: Bearer your-secret-key\"\r\n```\r\n\r\n### Available Endpoints\r\n\r\n- `GET /v1/models` - List all available models\r\n- `GET 
/v1/models/{model_name}` - Get information about a specific model\r\n- `POST /v1/chat/completions` - Create a chat completion\r\n\r\n</p>\r\n</details>\r\n\r\n<hr/>\r\n\r\n## \ud83d\udd0d Search Engines\r\n\r\nWebscout provides multiple search engine interfaces for diverse search capabilities.\r\n\r\n### YepSearch - Yep.com Interface\r\n\r\n```python\r\nfrom webscout import YepSearch\r\n\r\n# Initialize YepSearch\r\nyep = YepSearch(\r\n timeout=20, # Optional: Set custom timeout\r\n proxies=None, # Optional: Use proxies\r\n verify=True # Optional: SSL verification\r\n)\r\n\r\n# Text Search\r\ntext_results = yep.text(\r\n keywords=\"artificial intelligence\",\r\n region=\"all\", # Optional: Region for results\r\n safesearch=\"moderate\", # Optional: \"on\", \"moderate\", \"off\"\r\n max_results=10 # Optional: Limit number of results\r\n)\r\n\r\n# Image Search\r\nimage_results = yep.images(\r\n keywords=\"nature photography\",\r\n region=\"all\",\r\n safesearch=\"moderate\",\r\n max_results=10\r\n)\r\n\r\n# Get search suggestions\r\nsuggestions = yep.suggestions(\"hist\")\r\n```\r\n\r\n### GoogleSearch - Google Interface\r\n\r\n```python\r\nfrom webscout import GoogleSearch\r\n\r\n# Initialize GoogleSearch\r\ngoogle = GoogleSearch(\r\n timeout=10, # Optional: Set custom timeout\r\n proxies=None, # Optional: Use proxies\r\n verify=True # Optional: SSL verification\r\n)\r\n\r\n# Text Search\r\ntext_results = google.text(\r\n keywords=\"artificial intelligence\",\r\n region=\"us\", # Optional: Region for results\r\n safesearch=\"moderate\", # Optional: \"on\", \"moderate\", \"off\"\r\n max_results=10 # Optional: Limit number of results\r\n)\r\nfor result in text_results:\r\n print(f\"Title: {result.title}\")\r\n print(f\"URL: {result.url}\")\r\n print(f\"Description: {result.description}\")\r\n\r\n# News Search\r\nnews_results = google.news(\r\n keywords=\"technology trends\",\r\n region=\"us\",\r\n safesearch=\"moderate\",\r\n max_results=5\r\n)\r\n\r\n# Get search 
suggestions\r\nsuggestions = google.suggestions(\"how to\")\r\n\r\n# Legacy usage is still supported\r\nfrom webscout import search\r\nresults = search(\"Python programming\", num_results=5)\r\n```\r\n\r\n<hr/>\r\n\r\n## \ud83e\udd86 DuckDuckGo Search with WEBS and AsyncWEBS\r\n\r\nWebscout provides powerful interfaces to DuckDuckGo's search capabilities through the `WEBS` and `AsyncWEBS` classes.\r\n\r\n<details open>\r\n<summary><b>Synchronous Usage with WEBS</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import WEBS\r\n\r\n# Use as a context manager for proper resource management\r\nwith WEBS() as webs:\r\n # Simple text search\r\n results = webs.text(\"python programming\", max_results=5)\r\n for result in results:\r\n print(f\"Title: {result['title']}\\nURL: {result['url']}\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details open>\r\n<summary><b>Asynchronous Usage with AsyncWEBS</b></summary>\r\n<p>\r\n\r\n```python\r\nimport asyncio\r\nfrom webscout import AsyncWEBS\r\n\r\nasync def search_multiple_terms(search_terms):\r\n async with AsyncWEBS() as webs:\r\n # Create tasks for each search term\r\n tasks = [webs.text(term, max_results=5) for term in search_terms]\r\n # Run all searches concurrently\r\n results = await asyncio.gather(*tasks)\r\n return results\r\n\r\nasync def main():\r\n terms = [\"python\", \"javascript\", \"machine learning\"]\r\n all_results = await search_multiple_terms(terms)\r\n\r\n # Process results\r\n for i, term_results in enumerate(all_results):\r\n print(f\"Results for '{terms[i]}':\\n\")\r\n for result in term_results:\r\n print(f\"- {result['title']}\")\r\n print(\"\\n\")\r\n\r\n# Run the async function\r\nasyncio.run(main())\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n> [!TIP]\r\n> Always use these classes with a context manager (`with` statement) to ensure proper resource management and cleanup.\r\n\r\n<hr/>\r\n\r\n## \ud83d\udcbb WEBS API Reference\r\n\r\nThe WEBS class provides comprehensive access to DuckDuckGo's search 
capabilities through a clean, intuitive API.\r\n\r\n### Available Search Methods\r\n\r\n| Method | Description | Example |\r\n| --------------- | ------------------- | -------------------------------------------- |\r\n| `text()` | General web search | `webs.text('python programming')` |\r\n| `answers()` | Instant answers | `webs.answers('population of france')` |\r\n| `images()` | Image search | `webs.images('nature photography')` |\r\n| `videos()` | Video search | `webs.videos('documentary')` |\r\n| `news()` | News articles | `webs.news('technology')` |\r\n| `maps()` | Location search | `webs.maps('restaurants', place='new york')` |\r\n| `translate()` | Text translation | `webs.translate('hello', to='es')` |\r\n| `suggestions()` | Search suggestions | `webs.suggestions('how to')` |\r\n| `weather()` | Weather information | `webs.weather('london')` |\r\n\r\n<details>\r\n<summary><b>Example: Text Search</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import WEBS\r\n\r\nwith WEBS() as webs:\r\n results = webs.text(\r\n 'artificial intelligence',\r\n region='wt-wt', # Optional: Region for results\r\n safesearch='off', # Optional: 'on', 'moderate', 'off'\r\n timelimit='y', # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)\r\n max_results=10 # Optional: Limit number of results\r\n )\r\n\r\n for result in results:\r\n print(f\"Title: {result['title']}\")\r\n print(f\"URL: {result['url']}\")\r\n print(f\"Description: {result['body']}\\n\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details>\r\n<summary><b>Example: News Search with Formatting</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import WEBS\r\nimport datetime\r\n\r\ndef fetch_formatted_news(keywords, timelimit='d', max_results=20):\r\n \"\"\"Fetch and format news articles\"\"\"\r\n with WEBS() as webs:\r\n # Get news results\r\n news_results = webs.news(\r\n keywords,\r\n region=\"wt-wt\",\r\n safesearch=\"off\",\r\n timelimit=timelimit, # 'd'=day, 'w'=week, 'm'=month\r\n 
max_results=max_results\r\n )\r\n\r\n # Format the results\r\n formatted_news = []\r\n for i, item in enumerate(news_results, 1):\r\n # Format the date\r\n date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')\r\n\r\n # Create formatted entry\r\n entry = f\"{i}. {item['title']}\\n\"\r\n entry += f\" Published: {date}\\n\"\r\n entry += f\" {item['body']}\\n\"\r\n entry += f\" URL: {item['url']}\\n\"\r\n\r\n formatted_news.append(entry)\r\n\r\n return formatted_news\r\n\r\n# Example usage\r\nnews = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)\r\nprint('\\n'.join(news))\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details>\r\n<summary><b>Example: Weather Information</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import WEBS\r\n\r\nwith WEBS() as webs:\r\n # Get weather for a location\r\n weather = webs.weather(\"New York\")\r\n\r\n # Access weather data\r\n if weather:\r\n print(f\"Location: {weather.get('location', 'Unknown')}\")\r\n print(f\"Temperature: {weather.get('temperature', 'N/A')}\")\r\n print(f\"Conditions: {weather.get('condition', 'N/A')}\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<hr/>\r\n\r\n## \ud83e\udd16 AI Models and Voices\r\n\r\nWebscout provides easy access to a wide range of AI models and voice options.\r\n\r\n<details open>\r\n<summary><b>LLM Models</b></summary>\r\n<p>\r\n\r\nAccess and manage Large Language Models with Webscout's model utilities.\r\n\r\n```python\r\nfrom webscout import model\r\nfrom rich import print\r\n\r\n# List all available LLM models\r\nall_models = model.llm.list()\r\nprint(f\"Total available models: {len(all_models)}\")\r\n\r\n# Get a summary of models by provider\r\nsummary = model.llm.summary()\r\nprint(\"Models by provider:\")\r\nfor provider, count in summary.items():\r\n print(f\" {provider}: {count} models\")\r\n\r\n# Get models for a specific provider\r\nprovider_name = \"PerplexityLabs\"\r\navailable_models = 
model.llm.get(provider_name)\r\nprint(f\"\\n{provider_name} models:\")\r\nif isinstance(available_models, list):\r\n for i, model_name in enumerate(available_models, 1):\r\n print(f\" {i}. {model_name}\")\r\nelse:\r\n print(f\" {available_models}\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details open>\r\n<summary><b>TTS Voices</b></summary>\r\n<p>\r\n\r\nAccess and manage Text-to-Speech voices across multiple providers.\r\n\r\n```python\r\nfrom webscout import model\r\nfrom rich import print\r\n\r\n# List all available TTS voices\r\nall_voices = model.tts.list()\r\nprint(f\"Total available voices: {len(all_voices)}\")\r\n\r\n# Get a summary of voices by provider\r\nsummary = model.tts.summary()\r\nprint(\"\\nVoices by provider:\")\r\nfor provider, count in summary.items():\r\n print(f\" {provider}: {count} voices\")\r\n\r\n# Get voices for a specific provider\r\nprovider_name = \"ElevenlabsTTS\"\r\navailable_voices = model.tts.get(provider_name)\r\nprint(f\"\\n{provider_name} voices:\")\r\nif isinstance(available_voices, dict):\r\n for voice_name, voice_id in list(available_voices.items())[:5]: # Show first 5 voices\r\n print(f\" - {voice_name}: {voice_id}\")\r\n if len(available_voices) > 5:\r\n print(f\" ... 
and {len(available_voices) - 5} more\")\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<hr/>\r\n\r\n## \ud83d\udcac AI Chat Providers\r\n\r\nWebscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.\r\n\r\n### Popular AI Providers\r\n\r\n<div class=\"provider-table\">\r\n\r\n| Provider | Description | Key Features |\r\n| ---------------- | ------------------------ | ---------------------------------- |\r\n| `OPENAI` | OpenAI's models | GPT-3.5, GPT-4, tool calling |\r\n| `GEMINI` | Google's Gemini models | Web search capabilities |\r\n| `Meta` | Meta's AI assistant | Image generation, web search |\r\n| `GROQ` | Fast inference platform | High-speed inference, tool calling |\r\n| `LLAMA` | Meta's Llama models | Open weights models |\r\n| `DeepInfra` | Various open models | Multiple model options |\r\n| `Cohere` | Cohere's language models | Command models |\r\n| `PerplexityLabs` | Perplexity AI | Web search integration |\r\n| `YEPCHAT` | Yep.com's AI | Streaming responses |\r\n| `ChatGPTClone` | ChatGPT-like interface | Multiple model options |\r\n| `TypeGPT` | TypeChat models | Multiple model options |\r\n\r\n</div>\r\n\r\n<details>\r\n<summary><b>Example: Using Meta AI</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import Meta\r\n\r\n# For basic usage (no authentication required)\r\nmeta_ai = Meta()\r\n\r\n# Simple text prompt\r\nresponse = meta_ai.chat(\"What is the capital of France?\")\r\nprint(response)\r\n\r\n# For authenticated usage with web search and image generation\r\nmeta_ai = Meta(fb_email=\"your_email@example.com\", fb_password=\"your_password\")\r\n\r\n# Text prompt with web search\r\nresponse = meta_ai.ask(\"What are the latest developments in quantum computing?\")\r\nprint(response[\"message\"])\r\nprint(\"Sources:\", response[\"sources\"])\r\n\r\n# Image generation\r\nresponse = meta_ai.ask(\"Create an image of a futuristic city\")\r\nfor media in 
response.get(\"media\", []):\r\n print(media[\"url\"])\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details>\r\n<summary><b>Example: GROQ with Tool Calling</b></summary>\r\n<p>\r\n\r\n```python\r\nfrom webscout import GROQ, WEBS\r\nimport json\r\n\r\n# Initialize GROQ client\r\nclient = GROQ(api_key=\"your_api_key\")\r\n\r\n# Define helper functions\r\ndef calculate(expression):\r\n \"\"\"Evaluate a mathematical expression\"\"\"\r\n try:\r\n result = eval(expression)\r\n return json.dumps({\"result\": result})\r\n except Exception as e:\r\n return json.dumps({\"error\": str(e)})\r\n\r\ndef search(query):\r\n \"\"\"Perform a web search\"\"\"\r\n try:\r\n results = WEBS().text(query, max_results=3)\r\n return json.dumps({\"results\": results})\r\n except Exception as e:\r\n return json.dumps({\"error\": str(e)})\r\n\r\n# Register functions with GROQ\r\nclient.add_function(\"calculate\", calculate)\r\nclient.add_function(\"search\", search)\r\n\r\n# Define tool specifications\r\ntools = [\r\n {\r\n \"type\": \"function\",\r\n \"function\": {\r\n \"name\": \"calculate\",\r\n \"description\": \"Evaluate a mathematical expression\",\r\n \"parameters\": {\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"expression\": {\r\n \"type\": \"string\",\r\n \"description\": \"The mathematical expression to evaluate\"\r\n }\r\n },\r\n \"required\": [\"expression\"]\r\n }\r\n }\r\n },\r\n {\r\n \"type\": \"function\",\r\n \"function\": {\r\n \"name\": \"search\",\r\n \"description\": \"Perform a web search\",\r\n \"parameters\": {\r\n \"type\": \"object\",\r\n \"properties\": {\r\n \"query\": {\r\n \"type\": \"string\",\r\n \"description\": \"The search query\"\r\n }\r\n },\r\n \"required\": [\"query\"]\r\n }\r\n }\r\n }\r\n]\r\n\r\n# Use the tools\r\nresponse = client.chat(\"What is 25 * 4 + 10?\", tools=tools)\r\nprint(response)\r\n\r\nresponse = client.chat(\"Find information about quantum computing\", 
tools=tools)\r\nprint(response)\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<details open>\r\n<summary><b>GGUF Model Conversion</b></summary>\r\n<p>\r\n\r\nWebscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.\r\n\r\n```python\r\nfrom webscout.Extra.gguf import ModelConverter\r\n\r\n# Create a converter instance\r\nconverter = ModelConverter(\r\n model_id=\"mistralai/Mistral-7B-Instruct-v0.2\", # Hugging Face model ID\r\n quantization_methods=\"q4_k_m\" # Quantization method\r\n)\r\n\r\n# Run the conversion\r\nconverter.convert()\r\n```\r\n\r\n#### Available Quantization Methods\r\n\r\n| Method | Description |\r\n| -------- | ------------------------------------------------------------- |\r\n| `fp16` | 16-bit floating point - maximum accuracy, largest size |\r\n| `q2_k` | 2-bit quantization (smallest size, lowest accuracy) |\r\n| `q3_k_l` | 3-bit quantization (large) - balanced for size/accuracy |\r\n| `q3_k_m` | 3-bit quantization (medium) - good balance for most use cases |\r\n| `q3_k_s` | 3-bit quantization (small) - optimized for speed |\r\n| `q4_0` | 4-bit quantization (version 0) - standard 4-bit compression |\r\n| `q4_1` | 4-bit quantization (version 1) - improved accuracy over q4_0 |\r\n| `q4_k_m` | 4-bit quantization (medium) - balanced for most models |\r\n| `q4_k_s` | 4-bit quantization (small) - optimized for speed |\r\n| `q5_0` | 5-bit quantization (version 0) - high accuracy, larger size |\r\n| `q5_1` | 5-bit quantization (version 1) - improved accuracy over q5_0 |\r\n| `q5_k_m` | 5-bit quantization (medium) - best balance for quality/size |\r\n| `q5_k_s` | 5-bit quantization (small) - optimized for speed |\r\n| `q6_k` | 6-bit quantization - highest accuracy, largest size |\r\n| `q8_0` | 8-bit quantization - maximum accuracy, largest size |\r\n\r\n#### Command Line Usage\r\n\r\n```bash\r\npython -m webscout.Extra.gguf convert -m \"mistralai/Mistral-7B-Instruct-v0.2\" -q 
\"q4_k_m\"\r\n```\r\n\r\n</p>\r\n</details>\r\n\r\n<div align=\"center\">\r\n <p>\r\n <a href=\"https://youtube.com/@OEvortex\">\u25b6\ufe0f Vortex's YouTube Channel</a> |\r\n <a href=\"https://t.me/ANONYMOUS_56788\">\ud83d\udce2 Anonymous Coder's Telegram</a>\r\n </p>\r\n</div>\r\n\r\n<hr/>\r\n\r\n## \ud83e\udd1d Contributing\r\n\r\nContributions are welcome! If you'd like to contribute to Webscout, please follow these steps:\r\n\r\n1. Fork the repository\r\n2. Create a new branch for your feature or bug fix\r\n3. Make your changes and commit them with descriptive messages\r\n4. Push your branch to your forked repository\r\n5. Submit a pull request to the main repository\r\n\r\n## \ud83d\ude4f Acknowledgments\r\n\r\n- All the amazing developers who have contributed to the project\r\n- The open-source community for their support and inspiration\r\n\r\n<hr/>\r\n\r\n<div align=\"center\">\r\n <p>Made with \u2764\ufe0f by the Webscout team</p>\r\n</div>\r\n",
"bugtrack_url": null,
"license": "HelpingAI",
"summary": "Search for anything using Google, DuckDuckGo, and phind.com. Includes AI models, YouTube video transcription, temporary email and phone number generation, TTS support, webai (terminal GPT and Open Interpreter), offline LLMs, and more",
"version": "8.3.4",
"project_urls": {
"Source": "https://github.com/OE-LUCIFER/Webscout",
"Tracker": "https://github.com/OE-LUCIFER/Webscout/issues",
"YouTube": "https://youtube.com/@OEvortex"
},
"split_keywords": [
"search",
" ai",
" chatbot",
" llm",
" language-model",
" gpt",
" openai",
" gemini",
" claude",
" llama",
" search-engine",
" text-to-speech",
" tts",
" text-to-image",
" tti",
" weather",
" youtube",
" toolkit",
" utilities",
" web-search",
" duckduckgo",
" google",
" yep"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "d906eda2b25931d41769e09dda95fd6075e991a5e4d8b5888b0e96c7b2cbc1f9",
"md5": "26c9e0b3d7f73a2890426ff961dd2185",
"sha256": "0de567b341dd4f1e5f4420c62704ec5c23c28d7cd18140e45c5cf3e736a64c2a"
},
"downloads": -1,
"filename": "webscout-8.3.4-py3-none-any.whl",
"has_sig": false,
"md5_digest": "26c9e0b3d7f73a2890426ff961dd2185",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 968504,
"upload_time": "2025-07-11T03:46:08",
"upload_time_iso_8601": "2025-07-11T03:46:08.020573Z",
"url": "https://files.pythonhosted.org/packages/d9/06/eda2b25931d41769e09dda95fd6075e991a5e4d8b5888b0e96c7b2cbc1f9/webscout-8.3.4-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "ebb4fb4638ea97b52eac10c1cc1655186268ef4d5d3d539d03d3a61de62c866a",
"md5": "6e811de37df340004317fd774ee7b8cf",
"sha256": "17437ea57ac50d06d6d2e76f1cac44491ba8c12a24c279dc0b7be11b53ab7534"
},
"downloads": -1,
"filename": "webscout-8.3.4.tar.gz",
"has_sig": false,
"md5_digest": "6e811de37df340004317fd774ee7b8cf",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 724150,
"upload_time": "2025-07-11T03:46:10",
"upload_time_iso_8601": "2025-07-11T03:46:10.783327Z",
"url": "https://files.pythonhosted.org/packages/eb/b4/fb4638ea97b52eac10c1cc1655186268ef4d5d3d539d03d3a61de62c866a/webscout-8.3.4.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-11 03:46:10",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "OE-LUCIFER",
"github_project": "Webscout",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "webscout"
}