# pyvigate
Pyvigate: A Python framework that combines headless browsing with LLMs to help you build data solutions, product tours, RAG applications, web automation, functional tests, and more!

## Installation
Pyvigate can be installed using pip or directly from the source for the latest version.
### Using pip
`pip install pyvigate`
### Installing from source
```
git clone https://github.com/kindsmiles/pyvigate.git
cd pyvigate
pip install .
```
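After installing, a quick import check confirms the package is available; this is a minimal sketch that assumes nothing beyond the package name:
```
python -c "import pyvigate; print('pyvigate imported successfully')"
```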
## Components
Pyvigate consists of several key components designed to work together seamlessly for web automation tasks.
### PlaywrightEngine
PlaywrightEngine wraps Playwright, the library Pyvigate uses for headless browsing and other browser automation tasks.
```
from pyvigate.core.engine import PlaywrightEngine
engine = PlaywrightEngine(headless=True)
await engine.start_browser()
```
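Since the snippet above uses `await`, it has to run inside a coroutine. A minimal sketch of wrapping it with `asyncio.run`, assuming only the `start_browser`/`stop_browser` methods shown in this README:
```
import asyncio

from pyvigate.core.engine import PlaywrightEngine


async def main():
    # Launch a headless browser session, then shut it down cleanly.
    engine = PlaywrightEngine(headless=True)
    await engine.start_browser()
    # ... interact with engine.page here ...
    await engine.stop_browser()


if __name__ == "__main__":
    asyncio.run(main())
```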
### QueryEngine (with Azure OpenAI)
QueryEngine uses an LLM to dynamically detect web page elements,
significantly improving the efficiency and reliability of automated interactions.
It can also help you navigate pages and build your own applications, such as curating data, creating RAG applications, product tours, and functional testing.
```
import os

from pyvigate.ai.query_engine import QueryEngine

query_engine = QueryEngine(
    api_key=os.getenv("OPENAI_API_KEY"),
    api_version=os.getenv("AZURE_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_ENDPOINT"),
    llm_deployment_name=os.getenv("LLM_DEPLOYMENT_NAME"),
    embedding_deployment_name=os.getenv("EMBEDDING_DEPLOYMENT_NAME")
)
```
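QueryEngine's internal prompting isn't documented here, but the underlying idea of LLM-based element detection can be sketched directly against the Azure OpenAI chat API. The prompt, the HTML snippet, and the use of the chat client are illustrative assumptions, not Pyvigate's actual implementation:
```
import os

from openai import AzureOpenAI  # assumes the openai>=1.x client

client = AzureOpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    api_version=os.getenv("AZURE_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_ENDPOINT"),
)

page_html = "<form><input id='email'><input id='pass' type='password'></form>"

# Ask the LLM deployment to propose CSS selectors for the login fields.
response = client.chat.completions.create(
    model=os.getenv("LLM_DEPLOYMENT_NAME"),
    messages=[
        {"role": "system", "content": "You return CSS selectors for requested elements."},
        {"role": "user", "content": f"Given this HTML, give selectors for the username and password inputs:\n{page_html}"},
    ],
)
print(response.choices[0].message.content)
```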
### Login
Some products can only be accessed in the browser after logging in. You can handle this either by identifying the login selectors manually or by letting the AI detect the UI elements where the credentials should be entered. The Login component uses QueryEngine to intelligently identify login forms and fields, streamlining the login process.
```
from pyvigate.core.login import Login
login = Login(query_engine)
await login.perform_login(engine.page, "https://example.com/login", "username", "password")
```
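If you prefer to identify the login selectors manually instead of relying on AI detection, plain Playwright page calls work too. This is a minimal sketch meant to run in the same async context as the example above; it assumes `engine.page` exposes a Playwright page, as the other examples here suggest, and the selectors are hypothetical:
```
# Hypothetical selectors -- adjust them to the target site's actual login form.
await engine.page.goto("https://example.com/login")
await engine.page.fill("input[name='username']", "username")
await engine.page.fill("input[name='password']", "password")
await engine.page.click("button[type='submit']")
```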
### Scraping
With Scraping, Pyvigate offers powerful data extraction capabilities, enabling you to collect content from web pages after login or navigation.
```
from pyvigate.services.scraping import Scraping
scraping = Scraping(data_dir="data")
content = await scraping.extract_data_from_page(engine.page)
print("Scraped content:", content)
```
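How the extracted content is stored inside `data_dir` isn't shown here; if you want to keep the result yourself, a minimal stdlib sketch continuing the snippet above (the file name and JSON layout are assumptions) looks like this:
```
import json
from pathlib import Path

# Persist the scraped content for later analysis.
out_dir = Path("data")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "scraped_page.json").write_text(
    json.dumps({"url": engine.page.url, "content": content}, ensure_ascii=False, indent=2)
)
```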
### Caching
The Caching component allows for the local storage of web page content, facilitating offline analysis and reducing bandwidth usage.
```
from pyvigate.services.caching import Caching
caching = Caching(cache_dir="html_cache")
await caching.cache_page_content(engine.page, "https://example.com/page")
```
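The cached HTML can then be analyzed offline without re-fetching the page. A minimal sketch; the `*.html` file naming is an assumption, since the scheme used by Caching isn't documented here:
```
from pathlib import Path

cache_dir = Path("html_cache")
# Iterate over whatever HTML files Caching wrote and inspect them offline.
for cached_file in cache_dir.glob("*.html"):
    html = cached_file.read_text(encoding="utf-8")
    print(cached_file.name, len(html), "characters")
```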
### Full Example
Bringing it all together, here's how you can use Pyvigate to log in, scrape content, and cache it:
```
import asyncio
import os

from dotenv import load_dotenv

from pyvigate.core.engine import PlaywrightEngine
from pyvigate.core.login import Login
from pyvigate.services.scraping import Scraping
from pyvigate.services.caching import Caching
from pyvigate.ai.query_engine import QueryEngine

load_dotenv()


async def login_and_scrape():
    engine = PlaywrightEngine(headless=True)
    await engine.start_browser()

    query_engine = QueryEngine(api_key=os.getenv("OPENAI_API_KEY"))
    login = Login(query_engine)
    await login.perform_login(engine.page, "https://example.com/login",
                              os.getenv("USERNAME"), os.getenv("PASSWORD"))

    scraping = Scraping(data_dir="data")
    content = await scraping.extract_data_from_page(engine.page)
    print("Scraped content:", content)

    caching = Caching(cache_dir="html_cache")
    await caching.cache_page_content(engine.page, "https://example.com/dashboard")

    await engine.stop_browser()


if __name__ == "__main__":
    asyncio.run(login_and_scrape())
```
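The example relies on environment variables loaded via python-dotenv. A sample `.env` using the variable names from this README (all values are placeholders):
```
OPENAI_API_KEY=your-azure-openai-key
AZURE_API_VERSION=your-api-version
AZURE_ENDPOINT=https://your-resource.openai.azure.com/
LLM_DEPLOYMENT_NAME=your-llm-deployment
EMBEDDING_DEPLOYMENT_NAME=your-embedding-deployment
USERNAME=your-login-username
PASSWORD=your-login-password
```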