# easylenium
A powerful web scraping tool for automating data extraction from websites using Selenium.
## Features
- Built on Selenium, with a focus on web scraping.
- Reduces boilerplate and speeds up development.
- Easily navigate web pages and interact with elements.
- Retrieve data from web pages using XPath or CSS selectors.
- Handle common scenarios such as timeouts and missing elements.
- Support for multiple web browsers (Chrome, Firefox, Edge).
- Customizable options for handling downloads, prompts, and PDF files.
## Installation
You can install `easylenium` using pip:
```shell
pip install easylenium
```
Make sure you have Python 3.7 or above installed.
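You can check your interpreter version first:
```shell
python --version  # should report Python 3.7 or newer
```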
## Usage
```python
from easylenium import (
    Scraper, create_chromedriver, Location, By, save_to_json, handle
)
# Create a scraper instance
driver = create_chromedriver("path/to/default/download/directory")
scraper = Scraper(driver)
# Open a web page
scraper.open_('https://example.com')
# Fill out form
example_textfield = scraper.get_element(Location(By.ID, "username_field"))
scraper.click_(example_textfield).send_keys("my_username")
# Interact with buttons
continue_button = scraper.get_element(Location(By.XPATH, "xpath/to/button"))
scraper.click_(continue_button)
# Find and retrieve elements
elements = scraper.get_elements(Location(By.CLASS_NAME, "bookItemTitle"))
# Write custom helper classes/functions
# (fully interoperable with the Python 'selenium' package)
from selenium.webdriver.remote.webelement import WebElement
from selenium.common.exceptions import NoSuchElementException
@handle(NoSuchElementException)
def extract_url_from_(element: WebElement) -> str:
    # Find the anchor tag within the element and read its href attribute
    return element.find_element(By.TAG_NAME, "a").get_attribute("href")
# Apply the helper to each element, with exceptions handled by the decorator
urls = scraper.iterate_(elements, extract_url_from_)
# Save the data as JSON
save_to_json(urls, "path/to/save.json")
# Close the web scraper
scraper.terminate()
```
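The example above locates elements by ID, XPath, and class name. Since `Location` wraps Selenium's standard `By` strategies, CSS selectors (listed under Features) should work the same way. The following is a minimal, unverified sketch under that assumption; the selector string is a placeholder, and the `read_text` helper is hypothetical, reusing the `handle` decorator with Selenium's `TimeoutException`.

```python
from easylenium import Scraper, create_chromedriver, Location, By, handle
from selenium.webdriver.remote.webelement import WebElement
from selenium.common.exceptions import TimeoutException

driver = create_chromedriver("path/to/default/download/directory")
scraper = Scraper(driver)
scraper.open_('https://example.com')

# Assumption: Location accepts By.CSS_SELECTOR just like the other
# Selenium locator strategies shown above
titles = scraper.get_elements(Location(By.CSS_SELECTOR, "div.book > span.title"))

# Hypothetical helper: guard each per-element read against timeouts,
# mirroring the NoSuchElementException example above
@handle(TimeoutException)
def read_text(element: WebElement) -> str:
    return element.text

title_texts = scraper.iterate_(titles, read_text)
scraper.terminate()
```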
For more details and advanced usage examples, please refer to the documentation.
## Documentation
Complete documentation for `easylenium` is not yet available (tbd).
## Contributing
Contributions are welcome! If you find any issues or have suggestions for improvements, please open an issue or submit a pull request on the GitHub repository (https://github.com/hubertus444/easylenium).
## License
This project is licensed under the MIT License. See the LICENSE file for more details.
## Additional Info
The `easylenium` package provides a convenient and efficient way to perform web scraping and automate data extraction from websites. It is built on top of the Selenium library, leveraging Selenium's capabilities to interact with web pages and extract the desired data. With `easylenium`, you can navigate web pages, interact with elements, retrieve data, and handle common scenarios such as timeouts and missing elements.

The package offers a high-level interface for common web scraping tasks, so you can focus on data extraction rather than the intricacies of web automation. It supports multiple browsers, including Chrome, Firefox, and Edge, and provides customizable options for handling downloads, prompts, and PDF files. Whether you need to scrape data for research, data analysis, or other purposes, `easylenium` simplifies the process.