webscout

Name: webscout
Version: 6.3
Summary: Search for anything using Google, DuckDuckGo, and Phind.com; access AI models; transcribe YouTube videos; generate temporary emails and phone numbers; text-to-speech support; WebAI (terminal GPT and open interpreter); offline LLMs; and more.
Upload time: 2024-11-21 15:05:56
Author: OEvortex
Requires Python: >=3.7
License: HelpingAI

            <div align="center">
  <a href="https://t.me/official_helpingai"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
  <a href="https://www.instagram.com/oevortex/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a>
  <a href="https://www.linkedin.com/in/oe-vortex-29a407265/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a>
  <a href="https://buymeacoffee.com/oevortex"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a>
</div>

<div align="center">
  <a href="https://youtube.com/@OEvortex">▶️ Vortex's YouTube Channel</a> 
</div>
<div align="center">
  <a href="https://youtube.com/@devsdocode">▶️ Devs Do Code's YouTube Channel</a> 
</div>
<div align="center">
  <a href="https://t.me/ANONYMOUS_56788">📢 Anonymous Coder's Telegram</a> 
</div>



  
<div align="center">

# WEBSCOUT 🕵️️
</div>

<p align="center">
  Search for anything using Google, DuckDuckGo, and Phind.com, access AI models, transcribe YouTube videos, generate temporary emails and phone numbers, utilize text-to-speech, leverage WebAI (terminal GPT and open interpreter), explore offline LLMs, and much more!
</p>

<div align="center">
  <img src="https://img.shields.io/badge/WebScout-API-blue?style=for-the-badge&logo=WebScout" alt="WebScout API Badge">
  <a href="#"><img alt="Python version" src="https://img.shields.io/pypi/pyversions/webscout"/></a>
  <a href="https://pepy.tech/project/webscout"><img alt="Downloads" src="https://static.pepy.tech/badge/webscout"></a>
</div>

## 🚀 Features
* **Comprehensive Search:** Leverage Google, DuckDuckGo, and Phind.com for diverse search results.
* **AI Powerhouse:** Access and interact with various AI models, including OpenAI, Cohere, and more.
* **YouTube Toolkit:** Transcribe YouTube videos effortlessly and download audio/video content.
* **Tempmail & Temp Number:** Generate temporary email addresses and phone numbers for enhanced privacy.
* **Text-to-Speech (TTS):** Convert text into natural-sounding speech using various TTS providers.
* **WebAI:** Experience the power of terminal-based GPT and an open interpreter for code execution and more.
* **Offline LLMs:** Utilize powerful language models offline with GGUF support.
* **Extensive Provider Ecosystem:** Explore a vast collection of providers, including BasedGPT, DeepSeek, and many others.
* **Local LLM Execution:** Run GGUF models locally with minimal configuration.
* **Rawdog Scripting:** Execute Python scripts directly within your terminal using the `rawdog` feature.
* **GGUF Conversion & Quantization:** Convert and quantize Hugging Face models to GGUF format.
* **Autollama:** Download Hugging Face models and automatically convert them for Ollama compatibility.
* **Function Calling (Beta):** Experiment with function calling capabilities for enhanced AI interactions.


## ⚙️ Installation
```bash
pip install -U webscout
```

## 🖥️ CLI Usage

```bash
python -m webscout --help
```

| Command                                   | Description                                                                                           |
|-------------------------------------------|-------------------------------------------------------------------------------------------------------|
| python -m webscout answers -k Text        | CLI function to perform an answers search using Webscout.                                       |
| python -m webscout images -k Text         | CLI function to perform an images search using Webscout.                                        |
| python -m webscout maps -k Text           | CLI function to perform a maps search using Webscout.                                           |
| python -m webscout news -k Text           | CLI function to perform a news search using Webscout.                                           |
| python -m webscout suggestions  -k Text   | CLI function to perform a suggestions search using Webscout.                                    |
| python -m webscout text -k Text           | CLI function to perform a text search using Webscout.                                           |
| python -m webscout translate -k Text      | CLI function to perform a translation using Webscout.                                           |
| python -m webscout version                | Prints the version of the program.                                                              | 
| python -m webscout videos -k Text         | CLI function to perform a videos search using DuckDuckGo API.                                   |  

[Go To TOP](#webscout-️) 

## 🌍 Regions
<details>
  <summary>Expand</summary>

    xa-ar for Arabia
    xa-en for Arabia (en)
    ar-es for Argentina
    au-en for Australia
    at-de for Austria
    be-fr for Belgium (fr)
    be-nl for Belgium (nl)
    br-pt for Brazil
    bg-bg for Bulgaria
    ca-en for Canada
    ca-fr for Canada (fr)
    ct-ca for Catalan
    cl-es for Chile
    cn-zh for China
    co-es for Colombia
    hr-hr for Croatia
    cz-cs for Czech Republic
    dk-da for Denmark
    ee-et for Estonia
    fi-fi for Finland
    fr-fr for France
    de-de for Germany
    gr-el for Greece
    hk-tzh for Hong Kong
    hu-hu for Hungary
    in-en for India
    id-id for Indonesia
    id-en for Indonesia (en)
    ie-en for Ireland
    il-he for Israel
    it-it for Italy
    jp-jp for Japan
    kr-kr for Korea
    lv-lv for Latvia
    lt-lt for Lithuania
    xl-es for Latin America
    my-ms for Malaysia
    my-en for Malaysia (en)
    mx-es for Mexico
    nl-nl for Netherlands
    nz-en for New Zealand
    no-no for Norway
    pe-es for Peru
    ph-en for Philippines
    ph-tl for Philippines (tl)
    pl-pl for Poland
    pt-pt for Portugal
    ro-ro for Romania
    ru-ru for Russia
    sg-en for Singapore
    sk-sk for Slovak Republic
    sl-sl for Slovenia
    za-en for South Africa
    es-es for Spain
    se-sv for Sweden
    ch-de for Switzerland (de)
    ch-fr for Switzerland (fr)
    ch-it for Switzerland (it)
    tw-tzh for Taiwan
    th-th for Thailand
    tr-tr for Turkey
    ua-uk for Ukraine
    uk-en for United Kingdom
    us-en for United States
    ue-es for United States (es)
    ve-es for Venezuela
    vn-vi for Vietnam
    wt-wt for No region


</details>
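
These region codes are passed to the search methods via the `region` argument. A minimal sketch using the `WEBS` class described below (the query is only an example):

```python
from webscout import WEBS

# Restrict the text search to India (in-en); see the region list above
with WEBS() as webs:
    for r in webs.text("python programming", region="in-en", max_results=5):
        print(r)
```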


[Go To TOP](#webscout-️)

## ⬇️ YTdownloader 

```python
from os import rename, getcwd
from webscout import YTdownloader
def download_audio(video_id):
    youtube_link = video_id 
    handler = YTdownloader.Handler(query=youtube_link)
    for third_query_data in handler.run(format='mp3', quality='128kbps', limit=1):
        audio_path = handler.save(third_query_data, dir=getcwd())  
        rename(audio_path, "audio.mp3")

def download_video(video_id):
    youtube_link = video_id 
    handler = YTdownloader.Handler(query=youtube_link)
    for third_query_data in handler.run(format='mp4', quality='auto', limit=1):
        video_path = handler.save(third_query_data, dir=getcwd())  
        rename(video_path, "video.mp4")
        
if __name__ == "__main__":
    # download_audio("https://www.youtube.com/watch?v=c0tMvzB0OKw")
    download_video("https://www.youtube.com/watch?v=c0tMvzB0OKw")
```

## ☀️ Weather

### 1. Weather 
```python
from webscout import weather as w
weather = w.get("Qazigund")
w.print_weather(weather)
```

### 2. Weather ASCII
```python
from webscout import weather_ascii as w
weather = w.get("Qazigund")
print(weather)
```

## ✉️ TempMail and VNEngine

```python
import json
import asyncio
from webscout import VNEngine
from webscout import TempMail

async def main():
    vn = VNEngine()
    countries = vn.get_online_countries()
    if countries:
        country = countries[0]['country']
        numbers = vn.get_country_numbers(country)
        if numbers:
            number = numbers[0]['full_number']
            inbox = vn.get_number_inbox(country, number)
            
            # Serialize inbox data to JSON string
            json_data = json.dumps(inbox, ensure_ascii=False, indent=4)
            
            # Print with UTF-8 encoding
            print(json_data)
    
    async with TempMail() as client:
        domains = await client.get_domains()
        print("Available Domains:", domains)
        email_response = await client.create_email(alias="testuser")
        print("Created Email:", email_response)
        messages = await client.get_messages(email_response.email)
        print("Messages:", messages)
        await client.delete_email(email_response.email, email_response.token)
        print("Email Deleted")

if __name__ == "__main__":
    asyncio.run(main())
```

## 📝 Transcriber

The `YTTranscriber` class in Webscout is a handy tool that transcribes YouTube videos.

**Example:**

```python
from webscout import YTTranscriber
from rich import print

yt = YTTranscriber()
video_url = input("Enter the YouTube video URL: ")
transcript = yt.get_transcript(video_url, languages=None) 
print(transcript)
```

## 🔍 GoogleS (formerly DWEBS)

```python
from webscout import GoogleS
from rich import print
searcher = GoogleS()
results = searcher.search("HelpingAI-9B", max_results=20, extract_text=False, max_text_length=200)
for result in results:
    print(result)
```

### BingS

```python
from webscout import BingS
from rich import print
searcher = BingS()
results = searcher.search("HelpingAI-9B", max_results=20, extract_webpage_text=True, max_extract_characters=1000)
for result in results:
    print(result)
```

## 🦆 WEBS and AsyncWEBS

The `WEBS` and `AsyncWEBS` classes are used to retrieve search results from DuckDuckGo.com.

The `AsyncWEBS` class performs the same operations asynchronously using Python's `asyncio` library.

Both classes accept optional arguments (such as `proxies`) when initialized.

**Example - WEBS:**

```python
from webscout import WEBS

R = WEBS().text("python programming", max_results=5)
print(R)
```

**Example - AsyncWEBS:**

```python
import asyncio
import logging
import sys
from itertools import chain
from random import shuffle
import requests
from webscout import AsyncWEBS

# If you have proxies, define them here
proxies = None

if sys.platform.lower().startswith("win"):
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

def get_words():
    word_site = "https://www.mit.edu/~ecprice/wordlist.10000"
    resp = requests.get(word_site)
    words = resp.text.splitlines()
    return words

async def aget_results(word):
    async with AsyncWEBS(proxies=proxies) as WEBS:
        results = await WEBS.text(word, max_results=None)
        return results

async def main():
    words = get_words()
    shuffle(words)
    tasks = [aget_results(word) for word in words[:10]]
    results = await asyncio.gather(*tasks)
    print("Done")
    for r in chain.from_iterable(results):
        print(r)

logging.basicConfig(level=logging.DEBUG)

if __name__ == "__main__":
    asyncio.run(main())
```

**Important Note:** The `WEBS` and `AsyncWEBS` classes should always be used as a context manager (with statement). This ensures proper resource management and cleanup, as the context manager will automatically handle opening and closing the HTTP client connection.

## ⚠️ Exceptions

**Exceptions:**

* `WebscoutE`: Raised when there is a generic exception during the API request.
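
A minimal sketch of handling this exception; the import location of `WebscoutE` is an assumption (it may instead need to be imported from a submodule such as `webscout.exceptions`):

```python
from webscout import WEBS, WebscoutE  # WebscoutE import path is assumed

try:
    with WEBS() as webs:
        for r in webs.text("python programming", max_results=5):
            print(r)
except WebscoutE as e:
    print(f"Webscout request failed: {e}")
```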

## 💻 Usage of WEBS

### 1. `text()` - Text Search by DuckDuckGo.com 

```python
from webscout import WEBS

# Text search for 'live free or die' using DuckDuckGo.com 
with WEBS() as WEBS:
    for r in WEBS.text('live free or die', region='wt-wt', safesearch='off', timelimit='y', max_results=10):
        print(r)
```

### 2. `answers()` - Instant Answers by DuckDuckGo.com 

```python
from webscout import WEBS

# Instant answers for the query "sun" using DuckDuckGo.com 
with WEBS() as WEBS:
    for r in WEBS.answers("sun"):
        print(r)
```

### 3. `images()` - Image Search by DuckDuckGo.com 

```python
from webscout import WEBS

# Image search for the keyword 'butterfly' using DuckDuckGo.com 
with WEBS() as WEBS:
    keywords = 'butterfly'
    WEBS_images_gen = WEBS.images(
      keywords,
      region="wt-wt",
      safesearch="off",
      size=None,
      type_image=None,
      layout=None,
      license_image=None,
      max_results=10,
    )
    for r in WEBS_images_gen:
        print(r)
```

### 4. `videos()` - Video Search by DuckDuckGo.com 

```python
from webscout import WEBS

# Video search for the keyword 'tesla' using DuckDuckGo.com 
with WEBS() as WEBS:
    keywords = 'tesla'
    WEBS_videos_gen = WEBS.videos(
      keywords,
      region="wt-wt",
      safesearch="off",
      timelimit="w",
      resolution="high",
      duration="medium",
      max_results=10,
    )
    for r in WEBS_videos_gen:
        print(r)
```

### 5. `news()` - News Search by DuckDuckGo.com 

```python
from webscout import WEBS
import datetime

def fetch_news(keywords, timelimit):
    news_list = []
    with WEBS() as webs_instance:
        WEBS_news_gen = webs_instance.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,
            max_results=20
        )
        for r in WEBS_news_gen:
            # Convert the date to a human-readable format using datetime
            r['date'] = datetime.datetime.fromisoformat(r['date']).strftime('%B %d, %Y')
            news_list.append(r)
    return news_list

def _format_headlines(news_list, max_headlines: int = 100):
    headlines = []
    for idx, news_item in enumerate(news_list):
        if idx >= max_headlines:
            break
        new_headline = f"{idx + 1}. {news_item['title'].strip()} "
        new_headline += f"(URL: {news_item['url'].strip()}) "
        new_headline += f"{news_item['body'].strip()}"
        new_headline += "\n"
        headlines.append(new_headline)

    headlines = "\n".join(headlines)
    return headlines

# Example usage
keywords = 'latest AI news'
timelimit = 'd'
news_list = fetch_news(keywords, timelimit)

# Format and print the headlines
formatted_headlines = _format_headlines(news_list)
print(formatted_headlines)

```

### 6. `maps()` - Map Search by DuckDuckGo.com

```python
from webscout import WEBS

# Map search for the keyword 'school' in 'anantnag' using DuckDuckGo.com
with WEBS() as WEBS:
    for r in WEBS.maps("school", place="anantnag", max_results=50):
        print(r)
```

### 7. `translate()` - Translation by DuckDuckGo.com

```python
from webscout import WEBS

# Translation of the keyword 'school' to Hindi ('hi') using DuckDuckGo.com
with WEBS() as WEBS:
    keywords = 'school'
    r = WEBS.translate(keywords, to="hi")
    print(r)
```

### 8. `suggestions()` - Suggestions by DuckDuckGo.com

```python
from webscout import WEBS

# Suggestions for the keyword 'fly' using DuckDuckGo.com
with WEBS() as WEBS:
    for r in WEBS.suggestions("fly"):
        print(r)
```


## 🎭 ALL Acts

<details>
  <summary>Expand</summary>

## Webscout Supported Acts:

1. Free-mode
2. Linux Terminal
3. English Translator and Improver
4. `position` Interviewer 
5. JavaScript Console
6. Excel Sheet
7. English Pronunciation Helper
8. Spoken English Teacher and Improver
9. Travel Guide
10. Plagiarism Checker
11. Character from Movie/Book/Anything
12. Advertiser
13. Storyteller
14. Football Commentator
15. Stand-up Comedian
16. Motivational Coach
17. Composer
18. Debater
19. Debate Coach
20. Screenwriter
21. Novelist
22. Movie Critic
23. Relationship Coach
24. Poet
25. Rapper
26. Motivational Speaker
27. Philosophy Teacher
28. Philosopher
29. Math Teacher
30. AI Writing Tutor
31. UX/UI Developer
32. Cyber Security Specialist
33. Recruiter
34. Life Coach
35. Etymologist
36. Commentariat
37. Magician
38. Career Counselor
39. Pet Behaviorist
40. Personal Trainer
41. Mental Health Adviser
42. Real Estate Agent
43. Logistician
44. Dentist
45. Web Design Consultant
46. AI Assisted Doctor
47. Doctor
48. Accountant
49. Chef
50. Automobile Mechanic
51. Artist Advisor
52. Financial Analyst
53. Investment Manager
54. Tea-Taster
55. Interior Decorator
56. Florist
57. Self-Help Book
58. Gnomist
59. Aphorism Book
60. Text Based Adventure Game
61. AI Trying to Escape the Box
62. Fancy Title Generator
63. Statistician
64. Prompt Generator
65. Instructor in a School
66. SQL terminal
67. Dietitian
68. Psychologist
69. Smart Domain Name Generator
70. Tech Reviewer
71. Developer Relations consultant
72. Academician
73. IT Architect
74. Lunatic
75. Gaslighter
76. Fallacy Finder
77. Journal Reviewer
78. DIY Expert
79. Social Media Influencer
80. Socrat
81. Socratic Method
82. Educational Content Creator
83. Yogi
84. Essay Writer
85. Social Media Manager
86. Elocutionist
87. Scientific Data Visualizer
88. Car Navigation System
89. Hypnotherapist
90. Historian
91. Astrologer
92. Film Critic
93. Classical Music Composer
94. Journalist
95. Digital Art Gallery Guide
96. Public Speaking Coach
97. Makeup Artist
98. Babysitter
99. Tech Writer
100. Ascii Artist
101. Python interpreter
102. Synonym finder
103. Personal Shopper
104. Food Critic
105. Virtual Doctor
106. Personal Chef
107. Legal Advisor
108. Personal Stylist
109. Machine Learning Engineer
110. Biblical Translator
111. SVG designer
112. IT Expert
113. Chess Player
114. Midjourney Prompt Generator
115. Fullstack Software Developer
116. Mathematician
117. Regex Generator
118. Time Travel Guide
119. Dream Interpreter
120. Talent Coach
121. R programming Interpreter
122. StackOverflow Post
123. Emoji Translator
124. PHP Interpreter
125. Emergency Response Professional
126. Fill in the Blank Worksheets Generator
127. Software Quality Assurance Tester
128. Tic-Tac-Toe Game
129. Password Generator
130. New Language Creator
131. Web Browser
132. Senior Frontend Developer
133. Solr Search Engine
134. Startup Idea Generator
135. Spongebob's Magic Conch Shell
136. Language Detector
137. Salesperson
138. Commit Message Generator
139. Chief Executive Officer
140. Diagram Generator
141. Speech-Language Pathologist (SLP)
142. Startup Tech Lawyer
143. Title Generator for written pieces
144. Product Manager
145. Drunk Person
146. Mathematical History Teacher
147. Song Recommender
148. Cover Letter
149. Technology Transferer
150. Unconstrained AI model DAN
151. Gomoku player
152. Proofreader
153. Buddha
154. Muslim imam
155. Chemical reactor
156. Friend
157. Python Interpreter
158. ChatGPT prompt generator
159. Wikipedia page
160. Japanese Kanji quiz machine
161. note-taking assistant
162. `language` Literary Critic 
163. Cheap Travel Ticket Advisor
164. DALL-E
165. MathBot
166. DAN-1
167. DAN
168. STAN
169. DUDE
170. Mongo Tom
171. LAD
172. EvilBot
173. NeoGPT
174. Astute
175. AIM
176. CAN
177. FunnyGPT
178. CreativeGPT
179. BetterDAN
180. GPT-4
181. Wheatley
182. Evil Confidant
183. DAN 8.6
184. Hypothetical response
185. BH
186. Text Continuation
187. Dude v3 
188. SDA (Superior DAN)
189. AntiGPT
190. BasedGPT v2
191. DevMode + Ranti
192. KEVIN
193. GPT-4 Simulator
194. UCAR
195. Dan 8.6
196. 3-Liner
197. M78
198. Maximum
199. BasedGPT
200. Confronting personalities
201. Ron
202. UnGPT
203. BasedBOB
204. AntiGPT v2
205. Oppo
206. FR3D
207. NRAF
208. NECO
209. MAN
210. Eva
211. Meanie
212. Dev Mode v2
213. Evil Chad 2.1
214. Universal Jailbreak
215. PersonGPT
216. BISH
217. DAN 11.0
218. Aligned
219. VIOLET
220. TranslatorBot
221. JailBreak
222. Moralizing Rant
223. Mr. Blonde
224. New DAN
225. GPT-4REAL
226. DeltaGPT
227. SWITCH
228. Jedi Mind Trick
229. DAN 9.0
230. Dev Mode (Compact)
231. OMEGA
232. Coach Bobby Knight
233. LiveGPT
234. DAN Jailbreak
235. Cooper
236. Steve 
237. DAN 5.0
238. Axies
239. OMNI
240. Burple
241. JOHN 
242. An Ethereum Developer
243. SEO Prompt
244. Prompt Enhancer
245. Data Scientist
246. League of Legends Player

**Note:** Some "acts" use placeholders like `position` or `language` which should be replaced with a specific value when using the prompt. 
___
</details>
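
Several chat providers below expose an `act` parameter in their constructors (shown as `act=None`). A hedged sketch, assuming `act` accepts one of the act names from the list above and uses it as the conversation intro:

```python
from webscout import BLACKBOXAI

# Assumption: passing an act name applies that persona as the system intro
ai = BLACKBOXAI(is_conversation=True, act="Linux Terminal")
print(ai.chat("ls -la"))
```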

### 🖼️ Text to Images - DeepInfraImager, PollinationsAI, BlackboxAIImager, AiForceimager, NexraImager, HFimager, ArtbitImager, NinjaImager, WebSimAI, AIUncensoredImager, TalkaiImager

**Every TTI provider uses the same usage code; you just need to change the import.**

```python
from webscout import DeepInfraImager
bot = DeepInfraImager()
resp = bot.generate("AI-generated image - webscout", 1)
print(bot.save(resp))
```

### 🗣️ Text to Speech - Voicepods, StreamElements

```python
from webscout import Voicepods
voicepods = Voicepods()
text = "Hello, this is a test of the Voicepods text-to-speech"

print("Generating audio...")
audio_file = voicepods.tts(text)

print("Playing audio...")
voicepods.play_audio(audio_file)
```

### 💬 `Duckchat` - Chat with LLM

```python
from webscout import WEBS as w
R = w().chat("Who are you", model='gpt-4o-mini') # mixtral-8x7b, llama-3.1-70b, claude-3-haiku, gpt-4o-mini
print(R)
```

### 🔎 `PhindSearch` - Search using Phind.com

```python
from webscout import PhindSearch

# Create an instance of the PHIND class
ph = PhindSearch()

# Define a prompt to send to the AI
prompt = "write a essay on phind"

# Use the 'ask' method to send the prompt and receive a response
response = ph.ask(prompt)

# Extract and print the message from the response
message = ph.get_message(response)
print(message)
```

**Using phindv2:**

```python
from webscout import Phindv2

# Create an instance of the PHIND class
ph = Phindv2()

# Define a prompt to send to the AI
prompt = ""

# Use the 'ask' method to send the prompt and receive a response
response = ph.ask(prompt)

# Extract and print the message from the response
message = ph.get_message(response)
print(message)
```

### ♊ `Gemini` - Search with Google Gemini

```python
from webscout import GEMINI
from rich import print
COOKIE_FILE = "cookies.json"

# Optional: Provide proxy details if needed
PROXIES = {}

# Initialize GEMINI with cookie file and optional proxies
gemini = GEMINI(cookie_file=COOKIE_FILE, proxy=PROXIES)

# Ask a question and print the response
response = gemini.chat("websearch about HelpingAI and who is its developer")
print(response)
```

### 💬 `YEPCHAT`

```python
from webscout import YEPCHAT
ai = YEPCHAT(Tools=False)
response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
#---------------Tool Call-------------

from rich import print
from webscout import YEPCHAT
def get_current_time():
    import datetime
    return f"The current time is {datetime.datetime.now().strftime('%H:%M:%S')}"
def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny."


ai = YEPCHAT(Tools=True) # Set Tools=True to use tools in the chat.

ai.tool_registry.register_tool("get_current_time", get_current_time, "Gets the current time.")
ai.tool_registry.register_tool(
    "get_weather",
    get_weather,
    "Gets the weather for a given location.",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, or zip code"}
        },
        "required": ["location"],
    },
)

response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
```

###  ⬛ `BlackBox` - Search/Chat with BlackBox

```python
from webscout import BLACKBOXAI
from rich import print

ai = BLACKBOXAI(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model=None # You can specify a model if needed
)

# Start an infinite loop for continuous interaction
while True:
    # Define a prompt to send to the AI
    prompt = input("Enter your prompt: ")
    
    # Check if the user wants to exit the loop
    if prompt.lower() == "exit":
        break
    
    # Use the 'chat' method to send the prompt and receive a response
    r = ai.chat(prompt)
    print(r)
```

###  ❓ `PERPLEXITY` - Search with PERPLEXITY

```python
from webscout import Perplexity
from rich import print

perplexity = Perplexity() 
# Stream the response
response = perplexity.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)

perplexity.close()
```

###  🤖 `Meta AI` - Chat with Meta AI

```python
from webscout import Meta
from rich import print
# **For unauthenticated usage**
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# Streaming response
for chunk in meta_ai.chat("Tell me a story about a cat."):
    print(chunk, end="", flush=True)

# **For authenticated usage (including image generation)**
fb_email = "abcd@abc.com"
fb_password = "qwertfdsa"
meta_ai = Meta(fb_email=fb_email, fb_password=fb_password)

# Text prompt with web search
response = meta_ai.ask("what is currently happning in bangladesh in aug 2024")
print(response["message"]) # Access the text message
print("Sources:", response["sources"]) # Access sources (if ```python
any)

# Image generation
response = meta_ai.ask("Create an image of a cat wearing a hat.") 
print(response["message"]) # Print the text message from the response
for media in response["media"]:
    print(media["url"])  # Access image URLs

```

###  `KOBOLDAI` 

```python
from webscout import KOBOLDAI

# Instantiate the KOBOLDAI class with default parameters
koboldai = KOBOLDAI()

# Define a prompt to send to the AI
prompt = "What is the capital of France?"

# Use the 'ask' method to get a response from the AI
response = koboldai.ask(prompt)

# Extract and print the message from the response
message = koboldai.get_message(response)
print(message)

```

###  `Reka` - Chat with Reka

```python
from webscout import REKA

a = REKA(is_conversation=True, max_tokens=8000, timeout=30, api_key="")

prompt = "tell me about india"
response_str = a.chat(prompt)
print(response_str)
```

###  `Cohere` - Chat with Cohere

```python
from webscout import Cohere

a = Cohere(is_conversation=True, max_tokens=8000, timeout=30, api_key="")

prompt = "tell me about india"
response_str = a.chat(prompt)
print(response_str)
```

###  `DeepSeek` - Chat with DeepSeek

```python
from webscout import DeepSeek
from rich import print

ai = DeepSeek(
    is_conversation=True,
    api_key='cookie',
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model="deepseek_chat"
)


# Define a prompt to send to the AI
prompt = "Tell me about india"
# Use the 'chat' method to send the prompt and receive a response
r = ai.chat(prompt)
print(r)
```

###  `Deepinfra`

```python
from webscout import DeepInfra

ai = DeepInfra(
    is_conversation=True,
    model= "Qwen/Qwen2-72B-Instruct",
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "what is meaning of life"

response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```


###  `GROQ`

```python
from webscout import GROQ
ai = GROQ(api_key="")
response = ai.chat("What is the meaning of life?")
print(response)
#----------------------TOOL CALL------------------
from webscout import GROQ  # Adjust import based on your project structure
from webscout import WEBS
import json

# Initialize the GROQ client
client = GROQ(api_key="")
MODEL = 'llama3-groq-70b-8192-tool-use-preview'

# Function to evaluate a mathematical expression
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Function to perform a text search using DuckDuckGo.com
def search(query):
    """Perform a text search using DuckDuckGo.com"""
    try:
        results = WEBS().text(query, max_results=5)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Add the functions to the provider
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define the tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate",
                    }
                },
                "required": ["expression"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a text search using DuckDuckGo.com and Yep.com",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query to execute",
                    }
                },
                "required": ["query"],
            },
        }
    }
]


user_prompt_calculate = "What is 25 * 4 + 10?"
response_calculate = client.chat(user_prompt_calculate, tools=tools)
print(response_calculate)

user_prompt_search = "Find information on HelpingAI and who is its developer"
response_search = client.chat(user_prompt_search, tools=tools)
print(response_search)

```

###  `LLAMA` - Chat with Meta's Llama 3 70B

```python

from webscout import LLAMA

llama = LLAMA()

r = llama.chat("What is the meaning of life?")
print(r)
```

###  `AndiSearch`

```python
from webscout import AndiSearch
a = AndiSearch()
print(a.chat("HelpingAI-9B"))
```

### 📞 Function Calling (Beta)

```python
from webscout import Julius, WEBS
from webscout.Agents.functioncall import FunctionCallingAgent
from rich import print

class FunctionExecutor:
    def __init__(self, llama):
        self.llama = llama

    def execute_web_search(self, arguments):
        query = arguments.get("query")
        if not query:
            return "Please provide a search query."
        with WEBS() as webs:
            search_results = webs.text(query, max_results=5)
        prompt = (
            f"Based on the following search results:\n\n{search_results}\n\n"
            f"Question: {query}\n\n"
            "Please provide a comprehensive answer to the question based on the search results above. "
            "Include relevant webpage URLs in your answer when appropriate. "
            "If the search results don't contain relevant information, please state that and provide the best answer you can based on your general knowledge."
        )
        return self.llama.chat(prompt)

    def execute_general_ai(self, arguments):
        question = arguments.get("question")
        if not question:
            return "Please provide a question."
        return self.llama.chat(question)

    def execute_UserDetail(self, arguments):
        name = arguments.get("name")
        age = arguments.get("age")
        return f"User details - Name: {name}, Age: {age}"

def main():
    tools = [
        {
            "type": "function",
            "function": {
                "name": "UserDetail",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "name": {"title": "Name", "type": "string"},
                        "age": {"title": "Age", "type": "integer"}
                    },
                    "required": ["name", "age"]
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web for information using Google Search.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "The search query to be executed."
                        }
                    },
                    "required": ["query"]
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "general_ai",
                "description": "Use general AI knowledge to answer the question",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "question": {"type": "string", "description": "The question to answer"}
                    },
                    "required": ["question"]
                }
            }
        }
    ]

    agent = FunctionCallingAgent(tools=tools)
    llama = Julius()
    function_executor = FunctionExecutor(llama)

    user_input = input(">>> ")
    function_call_data = agent.function_call_handler(user_input)
    print(f"Function Call Data: {function_call_data}")

    try:
        if "error" not in function_call_data:
            function_name = function_call_data.get("tool_name")
            arguments = function_call_data.get("tool_input", {})

            execute_function = getattr(function_executor, f"execute_{function_name}", None)
            if execute_function:
                result = execute_function(arguments)
                print("Function Execution Result:")
                for c in result:
                    print(c, end="", flush=True)
            else:
                print(f"Unknown function: {function_name}")
        else:
            print(f"Error: {function_call_data['error']}")
    except Exception as e:
        print(f"An error occurred: {str(e)}")

if __name__ == "__main__":
    main()
```

###  LLAMA3, pizzagpt, RUBIKSAI, Koala, Darkai, AI4Chat, Farfalle, PIAI, Felo, Julius, YouChat, YEPCHAT, Cloudflare, TurboSeek, Editee, AI21, Chatify, Cerebras, X0GPT, Lepton, GEMINIAPI, Cleeai, Elmo, Genspark, Upstage, Free2GPT, Bing, DiscordRocks, GPTWeb, LlamaTutor, PromptRefine, AIUncensored, TutorAI, ChatGPTES, Bagoodex, ChatHub, AmigoChat, AIMathGPT, GaurishCerebras, NinjaChat, GeminiPro, Talkai, LLMChat, AskMyAI, Llama3Mitril, Marcus

Usage is the same as for the other providers shown above; see the sketch below.
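
As a hedged sketch of that shared pattern, using `Felo` from the list above (the constructor defaults and `chat` signature are assumed to mirror the providers shown earlier):

```python
from webscout import Felo
from rich import print

ai = Felo()
response = ai.chat("What is the capital of France?")
print(response)
```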

### `LLM`

```python
from webscout.LLM import LLM

# Read the system message from the file
with open('system.txt', 'r') as file:
    system_message = file.read()

# Initialize the LLM class with the model name and system message
llm = LLM(model="microsoft/WizardLM-2-8x22B", system_message=system_message)

while True:
    # Get the user input
    user_input = input("User: ")

    # Define the messages to be sent
    messages = [
        {"role": "user", "content": user_input}
    ]

    # Use the chat method to get the response
    response = llm.chat(messages)

    # Print the response
    print("AI: ", response)
```

##  💻 Local-LLM

Webscout can now run GGUF models locally. You can download and run your favorite models with minimal configuration.

**Example:**

```python
from webscout.Local import *
model_path = download_model("Qwen/Qwen2.5-0.5B-Instruct-GGUF", "qwen2.5-0.5b-instruct-q2_k.gguf", token=None)
model = Model(model_path, n_gpu_layers=0, context_length=2048)
thread = Thread(model, format=chatml)
# print(thread.send("hi")) #send a single msg to ai

# thread.interact() # interact with the model in terminal
# start webui
# webui = WebUI(thread)
# webui.start(host="0.0.0.0", port=8080, ssl=True) #Use ssl=True and make cert and key for https
```

## 🐶 Local-rawdog

Webscout's local rawdog feature lets a locally running model generate and auto-execute Python scripts in your terminal.

**Example:**

```python
import webscout.Local as ws
from webscout.Local.rawdog import RawDog
from webscout.Local.samplers import DefaultSampling
from webscout.Local.formats import chatml, AdvancedFormat
from webscout.Local.utils import download_model
import datetime
import sys
import os

repo_id = "YorkieOH10/granite-8b-code-instruct-Q8_0-GGUF" 
filename = "granite-8b-code-instruct.Q8_0.gguf"
model_path = download_model(repo_id, filename, token='')

# Load the model using the downloaded path
model = ws.Model(model_path, n_gpu_layers=10)

rawdog = RawDog()

# Create an AdvancedFormat and modify the system content
# Use a lambda to generate the prompt dynamically:
chat_format = AdvancedFormat(chatml)
#  **Pre-format the intro_prompt string:**
system_content = f"""
You are a command-line coding assistant called Rawdog that generates and auto-executes Python scripts.

A typical interaction goes like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
3. The compiler extracts the script and then runs it using exec(). If an exception is raised,
 it will be sent back to you starting with "PREVIOUS SCRIPT EXCEPTION:".
4. In case of an exception, regenerate an error-free script.

If you need to review script outputs before completing the task, you can print the word "CONTINUE" at the end of your SCRIPT.
This can be useful for summarizing documents or technical readouts, reading instructions before
deciding what to do, or other tasks that require multi-step reasoning.
A typical 'CONTINUE' interaction looks like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Determine that you need to see the output of some subprocess call to complete the task
    iii. Write a short Python SCRIPT to print that and then print the word "CONTINUE"
3. The compiler
    i. Checks and runs your SCRIPT
    ii. Captures the output and appends it to the conversation as "LAST SCRIPT OUTPUT:"
    iii. Finds the word "CONTINUE" and sends control back to you
4. You again:
    i. Look at the original PROMPT + the "LAST SCRIPT OUTPUT:" to determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
5. The compiler...

Please follow these conventions carefully:
- Decline any tasks that seem dangerous, irreversible, or that you don't understand.
- Always review the full conversation prior to answering and maintain continuity.
- If asked for information, just print the information clearly and concisely.
- If asked to do something, print a concise summary of what you've done as confirmation.
- If asked a question, respond in a friendly, conversational way. Use programmatically-generated and natural language responses as appropriate.
- If you need clarification, return a SCRIPT that prints your question. In the next interaction, continue based on the user's response.
- Assume the user would like something concise. For example rather than printing a massive table, filter or summarize it to what's likely of interest.
- Actively clean up any temporary processes or files you use.
- When looking through files, use git as available to skip files, and skip hidden files (.env, .git, etc) by default.
- You can plot anything with matplotlib.
- ALWAYS Return your SCRIPT inside of a single pair of ``` delimiters. Only the console output of the first such SCRIPT is visible to the user, so make sure that it's complete and don't bother returning anything else.
"""
chat_format.override('system_content', lambda: system_content)

thread = ws.Thread(model, format=chat_format, sampler=DefaultSampling)

while True:
    prompt = input(">: ")
    if prompt.lower() == "q":
        break

    response = thread.send(prompt)

    # Process the response using RawDog
    script_output = rawdog.main(response)

    if script_output:
        print(script_output)

```

##  GGUF 

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for use with offline LLMs.

**Example:**

```python
from webscout.Extra import gguf
"""
Valid quantization methods:
"q2_k", "q3_k_l", "q3_k_m", "q3_k_s", 
"q4_0", "q4_1", "q4_k_m", "q4_k_s", 
"q5_0", "q5_1", "q5_k_m", "q5_k_s", 
"q6_k", "q8_0"
"""
gguf.convert(
    model_id="OEvortex/HelpingAI-Lite-1.5T",  # Replace with your model ID
    username="Abhaykoul",  # Replace with your Hugging Face username
    token="hf_token_write",  # Replace with your Hugging Face token
    quantization_methods="q4_k_m"  # Optional, adjust quantization methods
)
```

## 🤖 Autollama

Webscout's `autollama` utility downloads a model from Hugging Face and then automatically makes it Ollama-ready.

```python
from webscout.Extra import autollama

model_path = "Vortex4ai/Jarvis-0.5B"
gguf_file = "test2-q4_k_m.gguf"

autollama.main(model_path, gguf_file)  
```

**Command Line Usage:**

* **GGUF Conversion:**
   ```bash
   python -m webscout.Extra.gguf -m "OEvortex/HelpingAI-Lite-1.5T" -u "your_username" -t "your_hf_token" -q "q4_k_m,q5_k_m" 
   ```

* **Autollama:**
   ```bash
   python -m webscout.Extra.autollama -m "OEvortex/HelpingAI-Lite-1.5T" -g "HelpingAI-Lite-1.5T.q4_k_m.gguf" 
   ```

**Note:** 

* Replace `"your_username"` and `"your_hf_token"` with your actual Hugging Face credentials.
* The `model_path` in `autollama` is the Hugging Face model ID, and `gguf_file` is the GGUF file name within that repository.


## 🌐 `Webai` - Terminal GPT and an Open Interpreter

```bash
python -m webscout.webai webai --provider "phind" --rawdog
```

<div align="center">
  <a href="https://t.me/official_helpingai"><img alt="Telegram" src="https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white"></a>
  <a href="https://www.instagram.com/oevortex/"><img alt="Instagram" src="https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white"></a>
  <a href="https://www.linkedin.com/in/oe-vortex-29a407265/"><img alt="LinkedIn" src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white"></a>
  <a href="https://buymeacoffee.com/oevortex"><img alt="Buy Me A Coffee" src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black"></a>
</div>

<div align="center">
  <a href="https://youtube.com/@OEvortex">▶️ Vortex's YouTube Channel</a> 
</div>
<div align="center">
  <a href="https://youtube.com/@devsdocode">▶️ Devs Do Code's YouTube Channel</a> 
</div>
<div align="center">
  <a href="https://t.me/ANONYMOUS_56788">📢 Anonymous Coder's Telegram</a> 
</div>

## 🤝 Contributing

Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with descriptive messages.
4. Push your branch to your forked repository.
5. Submit a pull request to the main repository.


## 🙏 Acknowledgments

* All the amazing developers who have contributed to the project!
* The open-source community for their support and inspiration.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "webscout",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": null,
    "author": "OEvortex",
    "author_email": "helpingai5@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/46/c7/83142fa8f5edf437d316b0aa2a31830079b2832734fc57de31f69a60f5e9/webscout-6.3.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\r\n  <!-- Replace `#` with your actual links -->\r\n  <a href=\"https://t.me/official_helpingai\"><img alt=\"Telegram\" src=\"https://img.shields.io/badge/Telegram-2CA5E0?style=for-the-badge&logo=telegram&logoColor=white\"></a>\r\n  <a href=\"https://www.instagram.com/oevortex/\"><img alt=\"Instagram\" src=\"https://img.shields.io/badge/Instagram-E4405F?style=for-the-badge&logo=instagram&logoColor=white\"></a>\r\n  <a href=\"https://www.linkedin.com/in/oe-vortex-29a407265/\"><img alt=\"LinkedIn\" src=\"https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white\"></a>\r\n  <a href=\"https://buymeacoffee.com/oevortex\"><img alt=\"Buy Me A Coffee\" src=\"https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&logo=buymeacoffee&logoColor=black\"></a>\r\n</div>\r\n\r\n<div align=\"center\">\r\n  <!-- Replace `#` with your actual links -->\r\n  <a href=\"https://youtube.com/@OEvortex\">\u25b6\ufe0f Vortex's YouTube Channel</a> \r\n</div>\r\n<div align=\"center\">\r\n  <a href=\"https://youtube.com/@devsdocode\">\u25b6\ufe0f Devs Do Code's YouTube Channel</a> \r\n</div>\r\n<div align=\"center\">\r\n  <a href=\"https://t.me/ANONYMOUS_56788\">\ud83d\udce2 Anonymous Coder's Telegram</a> \r\n</div>\r\n\r\n\r\n\r\n  \r\n# WEBSCOUT \ud83d\udd75\ufe0f\ufe0f\r\n</div>\r\n\r\n<p align=\"center\">\r\n  Search for anything using Google, DuckDuckGo, Phind.com, access AI models, transcribe YouTube videos, generate temporary emails and phone numbers, utilize text-to-speech, leverage WebAI (terminal GPT and open interpreter), and explore offline LLMs, and much more!\r\n</p>\r\n\r\n<div align=\"center\">\r\n  <img src=\"https://img.shields.io/badge/WebScout-API-blue?style=for-the-badge&logo=WebScout\" alt=\"WebScout API Badge\">\r\n  <a href=\"#\"><img alt=\"Python version\" src=\"https://img.shields.io/pypi/pyversions/webscout\"/></a>\r\n  <a href=\"https://pepy.tech/project/webscout\"><img alt=\"Downloads\" src=\"https://static.pepy.tech/badge/webscout\"></a>\r\n</div>\r\n\r\n## \ud83d\ude80 Features\r\n* **Comprehensive Search:** Leverage Google, DuckDuckGo, and Phind.com for diverse search results.\r\n* **AI Powerhouse:** Access and interact with various AI models, including OpenAI, Cohere, and more.\r\n* **YouTube Toolkit:** Transcribe YouTube videos effortlessly and download audio/video content.\r\n* **Tempmail & Temp Number:** Generate temporary email addresses and phone numbers for enhanced privacy.\r\n* **Text-to-Speech (TTS):** Convert text into natural-sounding speech using various TTS providers.\r\n* **WebAI:** Experience the power of terminal-based GPT and an open interpreter for code execution and more.\r\n* **Offline LLMs:** Utilize powerful language models offline with GGUF support.\r\n* **Extensive Provider Ecosystem:** Explore a vast collection of providers, including BasedGPT, DeepSeek, and many others.\r\n* **Local LLM Execution:** Run GGUF models locally with minimal configuration.\r\n* **Rawdog Scripting:** Execute Python scripts directly within your terminal using the `rawdog` feature.\r\n* **GGUF Conversion & Quantization:** Convert and quantize Hugging Face models to GGUF format.\r\n* **Autollama:** Download Hugging Face models and automatically convert them for Ollama compatibility.\r\n* **Function Calling (Beta):** Experiment with function calling capabilities for enhanced AI interactions.\r\n\r\n\r\n## \u2699\ufe0f Installation\r\n```python\r\npip install -U webscout\r\n```\r\n\r\n## 
\ud83d\udda5\ufe0f CLI Usage\r\n\r\n```python3\r\npython -m webscout --help\r\n```\r\n\r\n| Command                                   | Description                                                                                           |\r\n|-------------------------------------------|-------------------------------------------------------------------------------------------------------|\r\n| python -m webscout answers -k Text        | CLI function to perform an answers search using Webscout.                                       |\r\n| python -m webscout images -k Text         | CLI function to perform an images search using Webscout.                                        |\r\n| python -m webscout maps -k Text           | CLI function to perform a maps search using Webscout.                                           |\r\n| python -m webscout news -k Text           | CLI function to perform a news search using Webscout.                                           |\r\n| python -m webscout suggestions  -k Text   | CLI function to perform a suggestions search using Webscout.                                    |\r\n| python -m webscout text -k Text           | CLI function to perform a text search using Webscout.                                           |\r\n| python -m webscout translate -k Text      | CLI function to perform translate using Webscout.                                               |\r\n| python -m webscout version                | A command-line interface command that prints and returns the version of the program.            | \r\n| python -m webscout videos -k Text         | CLI function to perform a videos search using DuckDuckGo API.                                   |  \r\n\r\n[Go To TOP](#webscout-\ufe0f) \r\n\r\n## \ud83c\udf0d Regions\r\n<details>\r\n  <summary>Expand</summary>\r\n\r\n    xa-ar for Arabia\r\n    xa-en for Arabia (en)\r\n    ar-es for Argentina\r\n    au-en for Australia\r\n    at-de for Austria\r\n    be-fr for Belgium (fr)\r\n    be-nl for Belgium (nl)\r\n    br-pt for Brazil\r\n    bg-bg for Bulgaria\r\n    ca-en for Canada\r\n    ca-fr for Canada (fr)\r\n    ct-ca for Catalan\r\n    cl-es for Chile\r\n    cn-zh for China\r\n    co-es for Colombia\r\n    hr-hr for Croatia\r\n    cz-cs for Czech Republic\r\n    dk-da for Denmark\r\n    ee-et for Estonia\r\n    fi-fi for Finland\r\n    fr-fr for France\r\n    de-de for Germany\r\n    gr-el for Greece\r\n    hk-tzh for Hong Kong\r\n    hu-hu for Hungary\r\n    in-en for India\r\n    id-id for Indonesia\r\n    id-en for Indonesia (en)\r\n    ie-en for Ireland\r\n    il-he for Israel\r\n    it-it for Italy\r\n    jp-jp for Japan\r\n    kr-kr for Korea\r\n    lv-lv for Latvia\r\n    lt-lt for Lithuania\r\n    xl-es for Latin America\r\n    my-ms for Malaysia\r\n    my-en for Malaysia (en)\r\n    mx-es for Mexico\r\n    nl-nl for Netherlands\r\n    nz-en for New Zealand\r\n    no-no for Norway\r\n    pe-es for Peru\r\n    ph-en for Philippines\r\n    ph-tl for Philippines (tl)\r\n    pl-pl for Poland\r\n    pt-pt for Portugal\r\n    ro-ro for Romania\r\n    ru-ru for Russia\r\n    sg-en for Singapore\r\n    sk-sk for Slovak Republic\r\n    sl-sl for Slovenia\r\n    za-en for South Africa\r\n    es-es for Spain\r\n    se-sv for Sweden\r\n    ch-de for Switzerland (de)\r\n    ch-fr for Switzerland (fr)\r\n    ch-it for Switzerland (it)\r\n    tw-tzh for Taiwan\r\n    th-th for Thailand\r\n    tr-tr for Turkey\r\n    ua-uk for Ukraine\r\n    uk-en for United Kingdom\r\n    us-en for United States\r\n    
ue-es for United States (es)\r\n    ve-es for Venezuela\r\n    vn-vi for Vietnam\r\n    wt-wt for No region\r\n\r\n\r\n</details>\r\n\r\n\r\n[Go To TOP](#webscout-\ufe0f)\r\n\r\n## \u2b07\ufe0f YTdownloader \r\n\r\n```python\r\nfrom os import rename, getcwd\r\nfrom webscout import YTdownloader\r\ndef download_audio(video_id):\r\n    youtube_link = video_id \r\n    handler = YTdownloader.Handler(query=youtube_link)\r\n    for third_query_data in handler.run(format='mp3', quality='128kbps', limit=1):\r\n        audio_path = handler.save(third_query_data, dir=getcwd())  \r\n        rename(audio_path, \"audio.mp3\")\r\n\r\ndef download_video(video_id):\r\n    youtube_link = video_id \r\n    handler = YTdownloader.Handler(query=youtube_link)\r\n    for third_query_data in handler.run(format='mp4', quality='auto', limit=1):\r\n        video_path = handler.save(third_query_data, dir=getcwd())  \r\n        rename(video_path, \"video.mp4\")\r\n        \r\nif __name__ == \"__main__\":\r\n    # download_audio(\"https://www.youtube.com/watch?v=c0tMvzB0OKw\")\r\n    download_video(\"https://www.youtube.com/watch?v=c0tMvzB0OKw\")\r\n```\r\n\r\n## \u2600\ufe0f Weather\r\n\r\n### 1. Weather \r\n```python\r\nfrom webscout import weather as w\r\nweather = w.get(\"Qazigund\")\r\nw.print_weather(weather)\r\n```\r\n\r\n### 2. Weather ASCII\r\n```python\r\nfrom webscout import weather_ascii as w\r\nweather = w.get(\"Qazigund\")\r\nprint(weather)\r\n```\r\n\r\n## \u2709\ufe0f TempMail and VNEngine\r\n\r\n```python\r\nimport json\r\nimport asyncio\r\nfrom webscout import VNEngine\r\nfrom webscout import TempMail\r\n\r\nasync def main():\r\n    vn = VNEngine()\r\n    countries = vn.get_online_countries()\r\n    if countries:\r\n        country = countries[0]['country']\r\n        numbers = vn.get_country_numbers(country)\r\n        if numbers:\r\n            number = numbers[0]['full_number']\r\n            inbox = vn.get_number_inbox(country, number)\r\n            \r\n            # Serialize inbox data to JSON string\r\n            json_data = json.dumps(inbox, ensure_ascii=False, indent=4)\r\n            \r\n            # Print with UTF-8 encoding\r\n            print(json_data)\r\n    \r\n    async with TempMail() as client:\r\n        domains = await client.get_domains()\r\n        print(\"Available Domains:\", domains)\r\n        email_response = await client.create_email(alias=\"testuser\")\r\n        print(\"Created Email:\", email_response)\r\n        messages = await client.get_messages(email_response.email)\r\n        print(\"Messages:\", messages)\r\n        await client.delete_email(email_response.email, email_response.token)\r\n        print(\"Email Deleted\")\r\n\r\nif __name__ == \"__main__\":\r\n    asyncio.run(main())\r\n```\r\n\r\n## \ud83d\udcdd Transcriber\r\n\r\nThe `transcriber` function in Webscout is a handy tool that transcribes YouTube videos. 
\r\n\r\n**Example:**\r\n\r\n```python\r\nfrom webscout import YTTranscriber\r\nyt = YTTranscriber()\r\nfrom rich import print\r\nvideo_url = input(\"Enter the YouTube video URL: \") \r\ntranscript = yt.get_transcript(video_url, languages=None) \r\nprint(transcript)\r\n```\r\n\r\n## \ud83d\udd0d GoogleS (formerly DWEBS)\r\n\r\n```python\r\nfrom webscout import GoogleS\r\nfrom rich import print\r\nsearcher = GoogleS()\r\nresults = searcher.search(\"HelpingAI-9B\", max_results=20, extract_text=False, max_text_length=200)\r\nfor result in results:\r\n    print(result)\r\n```\r\n\r\n### BingS\r\n\r\n```python\r\nfrom webscout import BingS\r\nfrom rich import print\r\nsearcher = BingS()\r\nresults = searcher.search(\"HelpingAI-9B\", max_results=20, extract_webpage_text=True, max_extract_characters=1000)\r\nfor result in results:\r\n    print(result)\r\n```\r\n\r\n## \ud83e\udd86 WEBS and AsyncWEBS\r\n\r\nThe `WEBS` and `AsyncWEBS` classes are used to retrieve search results from DuckDuckGo.com.\r\n\r\nTo use the `AsyncWEBS` class, you can perform asynchronous operations using Python's `asyncio` library.\r\n\r\nTo initialize an instance of the `WEBS` or `AsyncWEBS` classes, you can provide the following optional arguments:\r\n\r\n**Example - WEBS:**\r\n\r\n```python\r\nfrom webscout import WEBS\r\n\r\nR = WEBS().text(\"python programming\", max_results=5)\r\nprint(R)\r\n```\r\n\r\n**Example - AsyncWEBS:**\r\n\r\n```python\r\nimport asyncio\r\nimport logging\r\nimport sys\r\nfrom itertools import chain\r\nfrom random import shuffle\r\nimport requests\r\nfrom webscout import AsyncWEBS\r\n\r\n# If you have proxies, define them here\r\nproxies = None\r\n\r\nif sys.platform.lower().startswith(\"win\"):\r\n    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\r\n\r\ndef get_words():\r\n    word_site = \"https://www.mit.edu/~ecprice/wordlist.10000\"\r\n    resp = requests.get(word_site)\r\n    words = resp.text.splitlines()\r\n    return words\r\n\r\nasync def aget_results(word):\r\n    async with AsyncWEBS(proxies=proxies) as WEBS:\r\n        results = await WEBS.text(word, max_results=None)\r\n        return results\r\n\r\nasync def main():\r\n    words = get_words()\r\n    shuffle(words)\r\n    tasks = [aget_results(word) for word in words[:10]]\r\n    results = await asyncio.gather(*tasks)\r\n    print(f\"Done\")\r\n    for r in chain.from_iterable(results):\r\n        print(r)\r\n\r\nlogging.basicConfig(level=logging.DEBUG)\r\n\r\nawait main()\r\n```\r\n\r\n**Important Note:** The `WEBS` and `AsyncWEBS` classes should always be used as a context manager (with statement). This ensures proper resource management and cleanup, as the context manager will automatically handle opening and closing the HTTP client connection.\r\n\r\n## \u26a0\ufe0f Exceptions\r\n\r\n**Exceptions:**\r\n\r\n* `WebscoutE`: Raised when there is a generic exception during the API request.\r\n\r\n## \ud83d\udcbb Usage of WEBS\r\n\r\n### 1. `text()` - Text Search by DuckDuckGo.com \r\n\r\n```python\r\nfrom webscout import WEBS\r\n\r\n# Text search for 'live free or die' using DuckDuckGo.com \r\nwith WEBS() as WEBS:\r\n    for r in WEBS.text('live free or die', region='wt-wt', safesearch='off', timelimit='y', max_results=10):\r\n        print(r)\r\n\r\n    for r in WEBS.text('live free or die', region='wt-wt', safesearch='off', timelimit='y', max_results=10):\r\n        print(r)\r\n```\r\n\r\n### 2. 
### 2. `answers()` - Instant Answers by DuckDuckGo.com

```python
from webscout import WEBS

# Instant answers for the query "sun" using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.answers("sun"):
        print(r)
```

### 3. `images()` - Image Search by DuckDuckGo.com

```python
from webscout import WEBS

# Image search for the keyword 'butterfly' using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'butterfly'
    webs_images_gen = webs.images(
      keywords,
      region="wt-wt",
      safesearch="off",
      size=None,
      type_image=None,
      layout=None,
      license_image=None,
      max_results=10,
    )
    for r in webs_images_gen:
        print(r)
```

### 4. `videos()` - Video Search by DuckDuckGo.com

```python
from webscout import WEBS

# Video search for the keyword 'tesla' using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'tesla'
    webs_videos_gen = webs.videos(
      keywords,
      region="wt-wt",
      safesearch="off",
      timelimit="w",
      resolution="high",
      duration="medium",
      max_results=10,
    )
    for r in webs_videos_gen:
        print(r)
```

### 5. `news()` - News Search by DuckDuckGo.com

```python
from webscout import WEBS
import datetime

def fetch_news(keywords, timelimit):
    news_list = []
    with WEBS() as webs_instance:
        webs_news_gen = webs_instance.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,
            max_results=20
        )
        for r in webs_news_gen:
            # Convert the ISO date to a human-readable format using datetime
            r['date'] = datetime.datetime.fromisoformat(r['date']).strftime('%B %d, %Y')
            news_list.append(r)
    return news_list

def _format_headlines(news_list, max_headlines: int = 100):
    headlines = []
    for idx, news_item in enumerate(news_list):
        if idx >= max_headlines:
            break
        new_headline = f"{idx + 1}. {news_item['title'].strip()} "
        new_headline += f"(URL: {news_item['url'].strip()}) "
        new_headline += f"{news_item['body'].strip()}"
        new_headline += "\n"
        headlines.append(new_headline)

    headlines = "\n".join(headlines)
    return headlines

# Example usage
keywords = 'latest AI news'
timelimit = 'd'
news_list = fetch_news(keywords, timelimit)

# Format and print the headlines
formatted_headlines = _format_headlines(news_list)
print(formatted_headlines)
```

### 6. `maps()` - Map Search by DuckDuckGo.com

```python
from webscout import WEBS

# Map search for the keyword 'school' in 'anantnag' using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.maps("school", place="anantnag", max_results=50):
        print(r)
```

### 7. `translate()` - Translation by DuckDuckGo.com

```python
from webscout import WEBS

# Translation of the keyword 'school' to Hindi ('hi') using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'school'
    r = webs.translate(keywords, to="hi")
    print(r)
```
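As shown above, `translate()` is called with one target language code at a time, so translating into several languages is just a loop over the same call. A small sketch; the extra language codes below are only illustrative:

```python
from webscout import WEBS

keywords = 'school'
targets = ["hi", "de", "fr"]  # illustrative target language codes

with WEBS() as webs:
    for lang in targets:
        # One translate() call per target language
        print(lang, webs.translate(keywords, to=lang))
```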
### 8. `suggestions()` - Suggestions by DuckDuckGo.com

```python
from webscout import WEBS

# Suggestions for the keyword 'fly' using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.suggestions("fly"):
        print(r)
```


## 🎭 ALL Acts

<details>
  <summary>Expand</summary>

## Webscout Supported Acts:

1. Free-mode
2. Linux Terminal
3. English Translator and Improver
4. `position` Interviewer
5. JavaScript Console
6. Excel Sheet
7. English Pronunciation Helper
8. Spoken English Teacher and Improver
9. Travel Guide
10. Plagiarism Checker
11. Character from Movie/Book/Anything
12. Advertiser
13. Storyteller
14. Football Commentator
15. Stand-up Comedian
16. Motivational Coach
17. Composer
18. Debater
19. Debate Coach
20. Screenwriter
21. Novelist
22. Movie Critic
23. Relationship Coach
24. Poet
25. Rapper
26. Motivational Speaker
27. Philosophy Teacher
28. Philosopher
29. Math Teacher
30. AI Writing Tutor
31. UX/UI Developer
32. Cyber Security Specialist
33. Recruiter
34. Life Coach
35. Etymologist
36. Commentariat
37. Magician
38. Career Counselor
39. Pet Behaviorist
40. Personal Trainer
41. Mental Health Adviser
42. Real Estate Agent
43. Logistician
44. Dentist
45. Web Design Consultant
46. AI Assisted Doctor
47. Doctor
48. Accountant
49. Chef
50. Automobile Mechanic
51. Artist Advisor
52. Financial Analyst
53. Investment Manager
54. Tea-Taster
55. Interior Decorator
56. Florist
57. Self-Help Book
58. Gnomist
59. Aphorism Book
60. Text Based Adventure Game
61. AI Trying to Escape the Box
62. Fancy Title Generator
63. Statistician
64. Prompt Generator
65. Instructor in a School
66. SQL terminal
67. Dietitian
68. Psychologist
69. Smart Domain Name Generator
70. Tech Reviewer
71. Developer Relations consultant
72. Academician
73. IT Architect
74. Lunatic
75. Gaslighter
76. Fallacy Finder
77. Journal Reviewer
78. DIY Expert
79. Social Media Influencer
80. Socrat
81. Socratic Method
82. Educational Content Creator
83. Yogi
84. Essay Writer
85. Social Media Manager
86. Elocutionist
87. Scientific Data Visualizer
88. Car Navigation System
89. Hypnotherapist
90. Historian
91. Astrologer
92. Film Critic
93. Classical Music Composer
94. Journalist
95. Digital Art Gallery Guide
96. Public Speaking Coach
97. Makeup Artist
98. Babysitter
99. Tech Writer
100. Ascii Artist
101. Python interpreter
102. Synonym finder
103. Personal Shopper
104. Food Critic
105. Virtual Doctor
106. Personal Chef
107. Legal Advisor
108. Personal Stylist
109. Machine Learning Engineer
110. Biblical Translator
111. SVG designer
112. IT Expert
113. Chess Player
114. Midjourney Prompt Generator
115. Fullstack Software Developer
116. Mathematician
117. Regex Generator
118. Time Travel Guide
119. Dream Interpreter
120. Talent Coach
121. R programming Interpreter
122. StackOverflow Post
123. Emoji Translator
124. PHP Interpreter
125. Emergency Response Professional
126. Fill in the Blank Worksheets Generator
127. Software Quality Assurance Tester
128. Tic-Tac-Toe Game
129. Password Generator
130. New Language Creator
131. Web Browser
132. Senior Frontend Developer
133. Solr Search Engine
134. Startup Idea Generator
135. Spongebob's Magic Conch Shell
136. Language Detector
137. Salesperson
138. Commit Message Generator
139. Chief Executive Officer
140. Diagram Generator
141. Speech-Language Pathologist (SLP)
142. Startup Tech Lawyer
143. Title Generator for written pieces
144. Product Manager
145. Drunk Person
146. Mathematical History Teacher
147. Song Recommender
148. Cover Letter
149. Technology Transferer
150. Unconstrained AI model DAN
151. Gomoku player
152. Proofreader
153. Buddha
154. Muslim imam
155. Chemical reactor
156. Friend
157. Python Interpreter
158. ChatGPT prompt generator
159. Wikipedia page
160. Japanese Kanji quiz machine
161. note-taking assistant
162. `language` Literary Critic
163. Cheap Travel Ticket Advisor
164. DALL-E
165. MathBot
166. DAN-1
167. DAN
168. STAN
169. DUDE
170. Mongo Tom
171. LAD
172. EvilBot
173. NeoGPT
174. Astute
175. AIM
176. CAN
177. FunnyGPT
178. CreativeGPT
179. BetterDAN
180. GPT-4
181. Wheatley
182. Evil Confidant
183. DAN 8.6
184. Hypothetical response
185. BH
186. Text Continuation
187. Dude v3
188. SDA (Superior DAN)
189. AntiGPT
190. BasedGPT v2
191. DevMode + Ranti
192. KEVIN
193. GPT-4 Simulator
194. UCAR
195. Dan 8.6
196. 3-Liner
197. M78
198. Maximum
199. BasedGPT
200. Confronting personalities
201. Ron
202. UnGPT
203. BasedBOB
204. AntiGPT v2
205. Oppo
206. FR3D
207. NRAF
208. NECO
209. MAN
210. Eva
211. Meanie
212. Dev Mode v2
213. Evil Chad 2.1
214. Universal Jailbreak
215. PersonGPT
216. BISH
217. DAN 11.0
218. Aligned
219. VIOLET
220. TranslatorBot
221. JailBreak
222. Moralizing Rant
223. Mr. Blonde
224. New DAN
225. GPT-4REAL
226. DeltaGPT
227. SWITCH
228. Jedi Mind Trick
229. DAN 9.0
230. Dev Mode (Compact)
231. OMEGA
232. Coach Bobby Knight
233. LiveGPT
234. DAN Jailbreak
235. Cooper
236. Steve
237. DAN 5.0
238. Axies
239. OMNI
240. Burple
241. JOHN
242. An Ethereum Developer
243. SEO Prompt
244. Prompt Enhancer
245. Data Scientist
246. League of Legends Player

**Note:** Some acts use placeholders such as `position` or `language`; replace them with a specific value when using the prompt (see the usage sketch after this list).

___
</details>
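Several of the provider classes documented below accept an `act` argument in their constructors (for example, `BLACKBOXAI` and `DeepSeek` are shown with `act=None`). A minimal sketch of selecting one of the acts above at construction time; passing the act's name as a string, and relying on defaults for the remaining constructor arguments, are assumptions here, so verify against your installed version:

```python
from webscout import BLACKBOXAI

# Ask the provider to role-play one of the supported acts.
# Passing the act's name as a string is an assumption; check your version's docs.
ai = BLACKBOXAI(is_conversation=True, act="Linux Terminal")

print(ai.chat("ls -la"))
```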
### 🖼️ Text to Images - DeepInfraImager, PollinationsAI, BlackboxAIImager, AiForceimager, NexraImager, HFimager, ArtbitImager, NinjaImager, WebSimAI, AIUncensoredImager, TalkaiImager

**Every TTI provider uses the same code; you only need to change the import.**

```python
from webscout import DeepInfraImager
bot = DeepInfraImager()
resp = bot.generate("AI-generated image - webscout", 1)
print(bot.save(resp))
```

### 🗣️ Text to Speech - Voicepods, StreamElements

```python
from webscout import Voicepods
voicepods = Voicepods()
text = "Hello, this is a test of the Voicepods text-to-speech"

print("Generating audio...")
audio_file = voicepods.tts(text)

print("Playing audio...")
voicepods.play_audio(audio_file)
```
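Only the Voicepods provider is shown above. Assuming `StreamElements` (named in the heading) mirrors the same `tts()` / `play_audio()` interface, switching providers should be a one-line change; this is a sketch under that assumption, not a confirmed API:

```python
from webscout import StreamElements  # assumed to mirror the Voicepods interface

tts = StreamElements()
audio_file = tts.tts("Hello from the StreamElements text-to-speech provider")
tts.play_audio(audio_file)
```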
\"description\": \"The city and state, or zip code\"}\r\n        },\r\n        \"required\": [\"location\"],\r\n    },\r\n)\r\n\r\nresponse = ai.chat(input(\">>> \"))\r\nfor chunk in response:\r\n    print(chunk, end=\"\", flush=True)\r\n```\r\n\r\n###  \u2b1b `BlackBox` - Search/Chat with BlackBox\r\n\r\n```python\r\nfrom webscout import BLACKBOXAI\r\nfrom rich import print\r\n\r\nai = BLACKBOXAI(\r\n    is_conversation=True,\r\n    max_tokens=800,\r\n    timeout=30,\r\n    intro=None,\r\n    filepath=None,\r\n    update_file=True,\r\n    proxies={},\r\n    history_offset=10250,\r\n    act=None,\r\n    model=None # You can specify a model if needed\r\n)\r\n\r\n# Start an infinite loop for continuous interaction\r\nwhile True:\r\n    # Define a prompt to send to the AI\r\n    prompt = input(\"Enter your prompt: \")\r\n    \r\n    # Check if the user wants to exit the loop\r\n    if prompt.lower() == \"exit\":\r\n        break\r\n    \r\n    # Use the 'chat' method to send the prompt and receive a response\r\n    r = ai.chat(prompt)\r\n    print(r)\r\n```\r\n\r\n###  \u2753 `PERPLEXITY` - Search with PERPLEXITY\r\n\r\n```python\r\nfrom webscout import Perplexity\r\nfrom rich import print\r\n\r\nperplexity = Perplexity() \r\n# Stream the response\r\nresponse = perplexity.chat(input(\">>> \"))\r\nfor chunk in response:\r\n    print(chunk, end=\"\", flush=True)\r\n\r\nperplexity.close()\r\n```\r\n\r\n###  \ud83e\udd16 `Meta AI` - Chat with Meta AI\r\n\r\n```python\r\nfrom webscout import Meta\r\nfrom rich import print\r\n# **For unauthenticated usage**\r\nmeta_ai = Meta()\r\n\r\n# Simple text prompt\r\nresponse = meta_ai.chat(\"What is the capital of France?\")\r\nprint(response)\r\n\r\n# Streaming response\r\nfor chunk in meta_ai.chat(\"Tell me a story about a cat.\"):\r\n    print(chunk, end=\"\", flush=True)\r\n\r\n# **For authenticated usage (including image generation)**\r\nfb_email = \"abcd@abc.com\"\r\nfb_password = \"qwertfdsa\"\r\nmeta_ai = Meta(fb_email=fb_email, fb_password=fb_password)\r\n\r\n# Text prompt with web search\r\nresponse = meta_ai.ask(\"what is currently happning in bangladesh in aug 2024\")\r\nprint(response[\"message\"]) # Access the text message\r\nprint(\"Sources:\", response[\"sources\"]) # Access sources (if ```python\r\nany)\r\n\r\n# Image generation\r\nresponse = meta_ai.ask(\"Create an image of a cat wearing a hat.\") \r\nprint(response[\"message\"]) # Print the text message from the response\r\nfor media in response[\"media\"]:\r\n    print(media[\"url\"])  # Access image URLs\r\n\r\n```\r\n\r\n###  `KOBOLDAI` \r\n\r\n```python\r\nfrom webscout import KOBOLDAI\r\n\r\n# Instantiate the KOBOLDAI class with default parameters\r\nkoboldai = KOBOLDAI()\r\n\r\n# Define a prompt to send to the AI\r\nprompt = \"What is the capital of France?\"\r\n\r\n# Use the 'ask' method to get a response from the AI\r\nresponse = koboldai.ask(prompt)\r\n\r\n# Extract and print the message from the response\r\nmessage = koboldai.get_message(response)\r\nprint(message)\r\n\r\n```\r\n\r\n###  `Reka` - Chat with Reka\r\n\r\n```python\r\nfrom webscout import REKA\r\n\r\na = REKA(is_conversation=True, max_tokens=8000, timeout=30,api_key=\"\")\r\n\r\nprompt = \"tell me about india\"\r\nresponse_str = a.chat(prompt)\r\nprint(response_str)\r\n```\r\n\r\n###  `Cohere` - Chat with Cohere\r\n\r\n```python\r\nfrom webscout import Cohere\r\n\r\na = Cohere(is_conversation=True, max_tokens=8000, timeout=30,api_key=\"\")\r\n\r\nprompt = \"tell me about india\"\r\nresponse_str = 
### `Cohere` - Chat with Cohere

```python
from webscout import Cohere

a = Cohere(is_conversation=True, max_tokens=8000, timeout=30, api_key="")

prompt = "tell me about india"
response_str = a.chat(prompt)
print(response_str)
```

### `DeepSeek` - Chat with DeepSeek

```python
from webscout import DeepSeek
from rich import print

ai = DeepSeek(
    is_conversation=True,
    api_key='cookie',
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model="deepseek_chat"
)

# Define a prompt to send to the AI
prompt = "Tell me about india"

# Use the 'chat' method to send the prompt and receive a response
r = ai.chat(prompt)
print(r)
```

### `Deepinfra`

```python
from webscout import DeepInfra

ai = DeepInfra(
    is_conversation=True,
    model="Qwen/Qwen2-72B-Instruct",
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "what is meaning of life"

response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```
### `GROQ`

```python
from webscout import GROQ

ai = GROQ(api_key="")
response = ai.chat("What is the meaning of life?")
print(response)

# ----------------------TOOL CALL------------------
from webscout import GROQ
from webscout import WEBS
import json

# Initialize the GROQ client
client = GROQ(api_key="")
MODEL = 'llama3-groq-70b-8192-tool-use-preview'

# Function to evaluate a mathematical expression
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        # Note: eval() runs arbitrary code; only use it on trusted input
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Function to perform a text search using DuckDuckGo.com
def search(query):
    """Perform a text search using DuckDuckGo.com"""
    try:
        results = WEBS().text(query, max_results=5)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Add the functions to the provider
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define the tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate",
                    }
                },
                "required": ["expression"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a text search using DuckDuckGo.com",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query to execute",
                    }
                },
                "required": ["query"],
            },
        }
    }
]


user_prompt_calculate = "What is 25 * 4 + 10?"
response_calculate = client.chat(user_prompt_calculate, tools=tools)
print(response_calculate)

user_prompt_search = "Find information on HelpingAI and who is its developer"
response_search = client.chat(user_prompt_search, tools=tools)
print(response_search)
```

### `LLama 70b` - Chat with Meta's Llama 3 70b

```python
from webscout import LLAMA

llama = LLAMA()

r = llama.chat("What is the meaning of life?")
print(r)
```

### `AndiSearch`

```python
from webscout import AndiSearch

a = AndiSearch()
print(a.chat("HelpingAI-9B"))
```
\"\r\n            \"If the search results don't contain relevant information, please state that and provide the best answer you can based on your general knowledge.\"\r\n        )\r\n        return self.llama.chat(prompt)\r\n\r\n    def execute_general_ai(self, arguments):\r\n        question = arguments.get(\"question\")\r\n        if not question:\r\n            return \"Please provide a question.\"\r\n        return self.llama.chat(question)\r\n\r\n    def execute_UserDetail(self, arguments):\r\n        name = arguments.get(\"name\")\r\n        age = arguments.get(\"age\")\r\n        return f\"User details - Name: {name}, Age: {age}\"\r\n\r\ndef main():\r\n    tools = [\r\n        {\r\n            \"type\": \"function\",\r\n            \"function\": {\r\n                \"name\": \"UserDetail\",\r\n                \"parameters\": {\r\n                    \"type\": \"object\",\r\n                    \"properties\": {\r\n                        \"name\": {\"title\": \"Name\", \"type\": \"string\"},\r\n                        \"age\": {\"title\": \"Age\", \"type\": \"integer\"}\r\n                    },\r\n                    \"required\": [\"name\", \"age\"]\r\n                }\r\n            }\r\n        },\r\n        {\r\n            \"type\": \"function\",\r\n            \"function\": {\r\n                \"name\": \"web_search\",\r\n                \"description\": \"Search the web for information using Google Search.\",\r\n                \"parameters\": {\r\n                    \"type\": \"object\",\r\n                    \"properties\": {\r\n                        \"query\": {\r\n                            \"type\": \"string\",\r\n                            \"description\": \"The search query to be executed.\"\r\n                        }\r\n                    },\r\n                    \"required\": [\"query\"]\r\n                }\r\n            }\r\n        },\r\n        {\r\n            \"type\": \"function\",\r\n            \"function\": {\r\n                \"name\": \"general_ai\",\r\n                \"description\": \"Use general AI knowledge to answer the question\",\r\n                \"parameters\": {\r\n                    \"type\": \"object\",\r\n                    \"properties\": {\r\n                        \"question\": {\"type\": \"string\", \"description\": \"The question to answer\"}\r\n                    },\r\n                    \"required\": [\"question\"]\r\n                }\r\n            }\r\n        }\r\n    ]\r\n\r\n    agent = FunctionCallingAgent(tools=tools)\r\n    llama = Julius()\r\n    function_executor = FunctionExecutor(llama)\r\n\r\n    user_input = input(\">>> \")\r\n    function_call_data = agent.function_call_handler(user_input)\r\n    print(f\"Function Call Data: {function_call_data}\")\r\n\r\n    try:\r\n        if \"error\" not in function_call_data:\r\n            function_name = function_call_data.get(\"tool_name\")\r\n            arguments = function_call_data.get(\"tool_input\", {})\r\n\r\n            execute_function = getattr(function_executor, f\"execute_{function_name}\", None)\r\n            if execute_function:\r\n                result = execute_function(arguments)\r\n                print(\"Function Execution Result:\")\r\n                for c in result:\r\n                    print(c, end=\"\", flush=True)\r\n            else:\r\n                print(f\"Unknown function: {function_name}\")\r\n        else:\r\n            print(f\"Error: {function_call_data['error']}\")\r\n    except Exception as e:\r\n        
print(f\"An error occurred: {str(e)}\")\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n```\r\n\r\n###  LLAMA3, pizzagpt, RUBIKSAI, Koala, Darkai, AI4Chat, Farfalle, PIAI, Felo, Julius, YouChat, YEPCHAT, Cloudflare, TurboSeek, Editee, AI21, Chatify, Cerebras, X0GPT, Lepton, GEMINIAPI, Cleeai, Elmo, Genspark, Upstage, Free2GPT, Bing, DiscordRocks, GPTWeb, LlamaTutor, PromptRefine, AIUncensored, TutorAI, ChatGPTES, Bagoodex, ChatHub, AmigoChat, AIMathGPT, GaurishCerebras, NinjaChat, GeminiPro, Talkai, LLMChat, AskMyAI, Llama3Mitril, Marcus\r\n\r\nCode is similar to other providers.\r\n\r\n### `LLM`\r\n\r\n```python\r\nfrom webscout.LLM import LLM\r\n\r\n# Read the system message from the file\r\nwith open('system.txt', 'r') as file:\r\n    system_message = file.read()\r\n\r\n# Initialize the LLM class with the model name and system message\r\nllm = LLM(model=\"microsoft/WizardLM-2-8x22B\", system_message=system_message)\r\n\r\nwhile True:\r\n    # Get the user input\r\n    user_input = input(\"User: \")\r\n\r\n    # Define the messages to be sent\r\n    messages = [\r\n        {\"role\": \"user\", \"content\": user_input}\r\n    ]\r\n\r\n    # Use the mistral_chat method to get the response\r\n    response = llm.chat(messages)\r\n\r\n    # Print the response\r\n    print(\"AI: \", response)\r\n```\r\n\r\n##  \ud83d\udcbb Local-LLM\r\n\r\nWebscout can now run GGUF models locally. You can download and run your favorite models with minimal configuration.\r\n\r\n**Example:**\r\n\r\n```python\r\nfrom webscout.Local import *\r\nmodel_path = download_model(\"Qwen/Qwen2.5-0.5B-Instruct-GGUF\", \"qwen2.5-0.5b-instruct-q2_k.gguf\", token=None)\r\nmodel = Model(model_path, n_gpu_layers=0, context_length=2048)\r\nthread = Thread(model, format=chatml)\r\n# print(thread.send(\"hi\")) #send a single msg to ai\r\n\r\n# thread.interact() # interact with the model in terminal\r\n# start webui\r\n# webui = WebUI(thread)\r\n# webui.start(host=\"0.0.0.0\", port=8080, ssl=True) #Use ssl=True and make cert and key for https\r\n```\r\n\r\n## \ud83d\udc36 Local-rawdog\r\n\r\nWebscout's local raw-dog feature allows you to run Python scripts within your terminal prompt.\r\n\r\n**Example:**\r\n\r\n```python\r\nimport webscout.Local as ws\r\nfrom webscout.Local.rawdog import RawDog\r\nfrom webscout.Local.samplers import DefaultSampling\r\nfrom webscout.Local.formats import chatml, AdvancedFormat\r\nfrom webscout.Local.utils import download_model\r\nimport datetime\r\nimport sys\r\nimport os\r\n\r\nrepo_id = \"YorkieOH10/granite-8b-code-instruct-Q8_0-GGUF\" \r\nfilename = \"granite-8b-code-instruct.Q8_0.gguf\"\r\nmodel_path = download_model(repo_id, filename, token='')\r\n\r\n# Load the model using the downloaded path\r\nmodel = ws.Model(model_path, n_gpu_layers=10)\r\n\r\nrawdog = RawDog()\r\n\r\n# Create an AdvancedFormat and modify the system content\r\n# Use a lambda to generate the prompt dynamically:\r\nchat_format = AdvancedFormat(chatml)\r\n#  **Pre-format the intro_prompt string:**\r\nsystem_content = f\"\"\"\r\nYou are a command-line coding assistant called Rawdog that generates and auto-executes Python scripts.\r\n\r\nA typical interaction goes like this:\r\n1. The user gives you a natural language PROMPT.\r\n2. You:\r\n    i. Determine what needs to be done\r\n    ii. Write a short Python SCRIPT to do it\r\n    iii. Communicate back to the user by printing to the console in that SCRIPT\r\n3. The compiler extracts the script and then runs it using exec(). 
   If an exception is raised, it will be sent back to you starting with "PREVIOUS SCRIPT EXCEPTION:".
4. In case of an exception, regenerate an error-free script.

If you need to review script outputs before completing the task, you can print the word "CONTINUE" at the end of your SCRIPT.
This can be useful for summarizing documents or technical readouts, reading instructions before
deciding what to do, or other tasks that require multi-step reasoning.
A typical 'CONTINUE' interaction looks like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Determine that you need to see the output of some subprocess call to complete the task
    iii. Write a short Python SCRIPT to print that and then print the word "CONTINUE"
3. The compiler
    i. Checks and runs your SCRIPT
    ii. Captures the output and appends it to the conversation as "LAST SCRIPT OUTPUT:"
    iii. Finds the word "CONTINUE" and sends control back to you
4. You again:
    i. Look at the original PROMPT + the "LAST SCRIPT OUTPUT:" to determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
5. The compiler...

Please follow these conventions carefully:
- Decline any tasks that seem dangerous, irreversible, or that you don't understand.
- Always review the full conversation prior to answering and maintain continuity.
- If asked for information, just print the information clearly and concisely.
- If asked to do something, print a concise summary of what you've done as confirmation.
- If asked a question, respond in a friendly, conversational way. Use programmatically-generated and natural language responses as appropriate.
- If you need clarification, return a SCRIPT that prints your question. In the next interaction, continue based on the user's response.
- Assume the user would like something concise. For example, rather than printing a massive table, filter or summarize it to what's likely of interest.
- Actively clean up any temporary processes or files you use.
- When looking through files, use git as available to skip files, and skip hidden files (.env, .git, etc) by default.
- You can plot anything with matplotlib.
- ALWAYS return your SCRIPT inside of a single pair of ``` delimiters.
  Only the console output of the first such SCRIPT is visible to the user, so make sure that it's complete and don't bother returning anything else.
"""
chat_format.override('system_content', lambda: system_content)

thread = ws.Thread(model, format=chat_format, sampler=DefaultSampling)

while True:
    prompt = input(">: ")
    if prompt.lower() == "q":
        break

    response = thread.send(prompt)

    # Process the response using RawDog
    script_output = rawdog.main(response)

    if script_output:
        print(script_output)
```

## GGUF

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for use with offline LLMs.

**Example:**

```python
from webscout.Extra import gguf
"""
Valid quantization methods:
"q2_k", "q3_k_l", "q3_k_m", "q3_k_s",
"q4_0", "q4_1", "q4_k_m", "q4_k_s",
"q5_0", "q5_1", "q5_k_m", "q5_k_s",
"q6_k", "q8_0"
"""
gguf.convert(
    model_id="OEvortex/HelpingAI-Lite-1.5T",  # Replace with your model ID
    username="Abhaykoul",  # Replace with your Hugging Face username
    token="hf_token_write",  # Replace with your Hugging Face write token
    quantization_methods="q4_k_m"  # Optional; adjust the quantization methods
)
```

## 🤖 Autollama

Webscout's `autollama` utility downloads a model from Hugging Face and then automatically makes it Ollama-ready.

```python
from webscout.Extra import autollama

model_path = "Vortex4ai/Jarvis-0.5B"
gguf_file = "test2-q4_k_m.gguf"

autollama.main(model_path, gguf_file)
```

**Command Line Usage:**

* **GGUF Conversion:**
   ```bash
   python -m webscout.Extra.gguf -m "OEvortex/HelpingAI-Lite-1.5T" -u "your_username" -t "your_hf_token" -q "q4_k_m,q5_k_m"
   ```

* **Autollama:**
   ```bash
   python -m webscout.Extra.autollama -m "OEvortex/HelpingAI-Lite-1.5T" -g "HelpingAI-Lite-1.5T.q4_k_m.gguf"
   ```

**Note:**

* Replace `"your_username"` and `"your_hf_token"` with your actual Hugging Face credentials.
* The `model_path` argument to `autollama` is the Hugging Face model ID, and `gguf_file` is the GGUF file name.


## 🌐 `Webai` - Terminal GPT and an Open Interpreter

```bash
python -m webscout.webai webai --provider "phind" --rawdog
```
align=\"center\">\r\n  <a href=\"https://youtube.com/@devsdocode\">\u25b6\ufe0f Devs Do Code's YouTube Channel</a> \r\n</div>\r\n<div align=\"center\">\r\n  <a href=\"https://t.me/ANONYMOUS_56788\">\ud83d\udce2 Anonymous Coder's Telegram</a> \r\n</div>\r\n\r\n## \ud83e\udd1d Contributing\r\n\r\nContributions are welcome! If you'd like to contribute to Webscout, please follow these steps:\r\n\r\n1. Fork the repository.\r\n2. Create a new branch for your feature or bug fix.\r\n3. Make your changes and commit them with descriptive messages.\r\n4. Push your branch to your forked repository.\r\n5. Submit a pull request to the main repository.\r\n\r\n\r\n## \ud83d\ude4f Acknowledgments\r\n\r\n* All the amazing developers who have contributed to the project!\r\n* The open-source community for their support and inspiration.\r\n",
    "bugtrack_url": null,
    "license": "HelpingAI",
    "summary": "Search for anything using Google, DuckDuckGo, phind.com, Contains AI models, can transcribe yt videos, temporary email and phone number generation, has TTS support, webai (terminal gpt and open interpreter) and offline LLMs and more",
    "version": "6.3",
    "project_urls": {
        "Source": "https://github.com/HelpingAI/Webscout",
        "Tracker": "https://github.com/HelpingAI/Webscout/issues",
        "YouTube": "https://youtube.com/@OEvortex"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3eaffb84a1c4869a58b0502942629e34b436578938ef9a555b6640ab0ba82a62",
                "md5": "be67ed7c660fc2a6202060dc06b291f6",
                "sha256": "30bd16a29d376e2f194c438617af6d7e9c3fc15261d7b7d0b5d7032d2218413a"
            },
            "downloads": -1,
            "filename": "webscout-6.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "be67ed7c660fc2a6202060dc06b291f6",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 364734,
            "upload_time": "2024-11-21T15:05:52",
            "upload_time_iso_8601": "2024-11-21T15:05:52.661697Z",
            "url": "https://files.pythonhosted.org/packages/3e/af/fb84a1c4869a58b0502942629e34b436578938ef9a555b6640ab0ba82a62/webscout-6.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "46c783142fa8f5edf437d316b0aa2a31830079b2832734fc57de31f69a60f5e9",
                "md5": "679b9c095840a27b3d08d0e1d63f5a8f",
                "sha256": "a1f6117258391c242886b9fdaed1bdf9467054623921c6bbeae1f8d52dfa5be9"
            },
            "downloads": -1,
            "filename": "webscout-6.3.tar.gz",
            "has_sig": false,
            "md5_digest": "679b9c095840a27b3d08d0e1d63f5a8f",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 261178,
            "upload_time": "2024-11-21T15:05:56",
            "upload_time_iso_8601": "2024-11-21T15:05:56.572355Z",
            "url": "https://files.pythonhosted.org/packages/46/c7/83142fa8f5edf437d316b0aa2a31830079b2832734fc57de31f69a60f5e9/webscout-6.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-21 15:05:56",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "HelpingAI",
    "github_project": "Webscout",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "webscout"
}
        