aba-cli-scrapper

Name: aba-cli-scrapper
Version: 0.7.6
Home page: https://aba-cli-scrapper.readthedocs.io/en/latest/
Summary: Create your own Alibaba dataset and interact with it in plain English.
Upload time: 2024-09-23 22:39:05
Author: poneoneo
Requires Python: <4.0.0,>=3.11.0
License: GNU GENERAL PUBLIC LICENSE, Version 3, 29 June 2007
Keywords: cli, scrapping, alibaba, scraper, alibaba-cli-scrapper, dataset, ai-agent, rag
            <div align="center">
  <p>
    <a href="#"><img src="_static/images/image.png" width="300" height="300" alt="overview image" /></a>
  </p>
</div>
<p align="center">  <b>Alibaba-CLI-Scraper</b> </p>
<p align="center"> 🛒-💻- 🕸 </p>

---

<p align="center"> <b> Create your own Alibaba dataset and interact with it in plain English. </b> </p>
<div align="center">

![PyPI - Version](https://img.shields.io/pypi/v/aba-cli-scrapper) ![PyPI - Downloads](https://img.shields.io/pypi/dm/aba-cli-scrapper?label=PyPI-Downloads)  ![GitHub Release Date](https://img.shields.io/github/release-date/poneoneo/Alibaba-CLI-Scrapper) 
</div>
<div align="center">

![PyPI - Python Version](https://img.shields.io/pypi/pyversions/aba-cli-scrapper) ![GitHub License](https://img.shields.io/github/license/poneoneo/Alibaba-CLI-Scrapper)
</div>

<div align="center">

![Codacy grade](https://img.shields.io/codacy/grade/bbecd0598d5e460ea87e3aa5f8db8798)
</div>



---
  
# About
Alibaba-CLI-Scraper is a Python CLI tool designed to scrape data from Alibaba.com, save it, and let you interact with it in plain English. Based on the keywords you provide, product data and the related supplier data are extracted and saved in a local database (SQLite or MySQL), ready to be analysed and even visualized through a powerful AI agent powered by [data-horse](https://github.com/DeDolphins/DataHorse). The tool is also designed to be user-friendly, with fairly simple and easy-to-use commands covering all of its features.

## Features

* **Asynchronous API:** Uses the asynchronous API of Playwright together with BrightData proxies to handle many result pages efficiently.

* **Text Mode:** Provides an easy-to-use text mode for those who are not comfortable typing commands in the terminal.

* **Export-As-Csv:** Export your generated SQLite database to a CSV file.

* **Ai-agent:** Interact with your scraped data in plain English to easily analyse and visualize what matters to you.

## Which information will be retrieved from the Alibaba website?

Fields related to `Suppliers`:

    id: int
    name: str
    verification_mode: str
    sopi_level: int
    country_name: str
    years_as_gold_supplier: int
    supplier_service_score: float

Fields related to `Products`:

    id: int
    name: str
    alibaba_guranteed: bool
    certifications: str
    minimum_to_order: int
    ordered_or_sold: int
    supplier_id: int
    min_price: float
    max_price: float
    product_score: float
    review_count: float
    review_score: float
    shipping_time_score: float
    is_full_promotion: bool
    is_customizable: bool
    is_instant_order: bool
    trade_product: bool
## Sample of CSV output

When you run the command to export your SQLite file as a CSV, a `FULL OUTER JOIN` is performed to combine all the fields of both tables. Below is a sample of the results matching the `agricultural machinery` keywords.

|id  |name                                                                                                                            |alibaba_guranteed|minimum_to_order|supplier_id|alibaba_guranteed|certifications|ordered_or_sold|product_score|review_count|review_score|shipping_time_score|is_full_promotion|is_customizable|is_instant_order|trade_product|min_price|max_price|name                                                                                         |verification_mode|sopi_level|country_name  |years_as_gold_supplier|supplier_service_score|
|----|--------------------------------------------------------------------------------------------------------------------------------|-----------------|----------------|-----------|-----------------|--------------|---------------|-------------|------------|------------|-------------------|-----------------|---------------|----------------|-------------|---------|---------|---------------------------------------------------------------------------------------------|-----------------|----------|--------------|----------------------|----------------------|
|1   |mesh knitting weaving machine produce sunscreen net agricultural shade net anti net                                             |1                |1               |1          |1                |              |0              |5.0          |1.0         |5.0         |5.0                |1                |1              |1               |1            |9997.0   |18979.0  |qingdao shanzhong imp and exp ltd.                                                           |unverified       |0         |chine         |9                     |5.0                   |
|2   |chinese small farm rotary tiller 12hp 15hp 20hp two wheel mini hand tractor walk behind tractors                                |1                |1               |2          |1                |              |0              |0.0          |0.0         |0.0         |0.0                |1                |1              |1               |1            |455.0    |455.0    |shandong guoyoule agricultural machinery co., ltd.                                           |unverified       |0         |chine         |1                     |0.0                   |
|3   |small multifunctional flexible 130l orchard remote control garden crawler agriculture robot sprayer                             |1                |1               |3          |1                |              |0              |0.0          |0.0         |0.0         |0.0                |1                |1              |1               |1            |2350.0   |4620.0   |shandong my agricultural facilities co., ltd.                                                |unverified       |0         |chine         |1                     |0.0                   |
|4   |5hp/7hp/12hp rotary electric start agricultural farming walking tractor power tiller weeder cultivators                         |1                |1               |4          |1                |              |2              |0.0          |0.0         |0.0         |0.0                |1                |1              |1               |1            |244.0    |371.0    |shandong jinlong lutai international trade co., ltd.                                         |verified         |0         |chine         |1                     |0.0                   |
|5   |free shipping 3.5 ton mini excavator 1 ton 2 ton kubota engine digger excavator mini pelle chinese cheap small excavator machine|1                |1               |5          |1                |CE            |95             |4.6          |25.0        |4.6         |4.6                |1                |1              |1               |1            |988.0    |1235.0   |shandong qilu industrial co., ltd.                                                           |unverified       |5         |chine         |4                     |4.6                   |
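
For reference, here is a minimal sketch of doing an equivalent join by hand with the `sqlite3` shell (assumptions: SQLite 3.39 or newer, which added `FULL OUTER JOIN` support, and a database file named `alibaba_data.sqlite`; the tool's own exporter may build its query differently):

```shell
# Join every product row with its supplier row and dump the result as CSV
sqlite3 -header -csv alibaba_data.sqlite \
  "SELECT * FROM products FULL OUTER JOIN suppliers ON products.supplier_id = suppliers.id;" \
  > sample_output.csv
```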

## Prerequisites

- Python 3.11 or higher

- A Scraping Browser API key from [BrightData](https://get.brightdata.com/fdrqnme1smdc); to learn how to set your API key, see [Available Commands](#available-commands)

- Windows or Linux as your OS




## Installation

It's recommended to use [pipx](https://pypa.github.io/pipx/) instead of pip for end-user applications written in Python. `pipx` installs the package in an isolated environment, exposes its CLI entry points, and makes them available everywhere on your system. This guarantees no dependency conflicts and a clean uninstall.
Let's install `aba-cli-scrapper` using pipx:

- Install pipx with Python 3.11 or higher:
   ```shell
      python/python3 -m pip install --user pipx 
   ```
- Add pipx to your PATH:
  ```shell
      pipx ensurepath
  ```
- Install `aba-cli-scrapper` using pipx:

   ```shell
      pipx install aba-cli-scrapper
   ```
and you're ready to use `aba-cli-scrapper`!

If you'd like to use `pip` instead, just replace `pipx` with `pip`, but as usual you'll need to create and activate a virtual environment before installing `aba-cli-scrapper`, to avoid dependency conflicts. Let's install `aba-cli-scrapper` using pip:

- Create a virtual environment with Python 3.11 or higher:
   ```shell
      python/python3 -m venv your-virtual-environment-name
   ```
- Activate the virtual environment (on Windows):
   ```shell
      your-virtual-environment-name\Scripts\activate
   ```
   On Linux: `source your-virtual-environment-name/bin/activate`
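- Finally, install `aba-cli-scrapper` with pip:
   ```shell
      pip install aba-cli-scrapper
   ```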

  
## Using the CLI Interface
 

**Need Help?** Run any command followed by `--help` for detailed information about its usage and options. For example, `aba-run --help` will show you all available sub-commands and how to use them.

<div align="center">
  <p>
    <a href="#"><img src="_static/images/help-cli-2.png" width="900" height="340" alt="aba-run help image" /></a>
  </p>

</div>

**Warnings:**
- `aba-run` is the base command; all commands introduced below are sub-commands and should always be preceded by `aba-run`.

Practice makes perfect, doesn't it? So let's get started with a use case example. Let's assume you want to scrape data about `electric bikes` from Alibaba.com.

---
## Available Commands:

  ### Important Information:

  * **`initialize`:** Creates a new MySQL or SQLite database with `products` and `suppliers` tables in it, which will be used to store your scraped data. For the MySQL engine in particular, you will need to create an empty database on your MySQL server beforehand.
  * **`update`:** Adds your scraped data to a newly initialized MySQL or SQLite database. This action cannot be performed twice on the same database.



  ### How to set my API key?

  By default, the `scraper` sub-command uses async mode, which relies on the BrightData API, so to use it you will need to provide your API key. Set it with the command below:

  ```shell
      aba-run set-api-key your_api_key
  ```
  You can now run the `scraper` sub-command without the `--sync-api` flag to use async mode.

  *   **`scraper` sub-command:** Initiates scraping of Alibaba.com based on the provided keywords.
  This command takes two required arguments and two optional arguments:
      * **`key_words` (required):** The search term(s) for finding products on Alibaba. Enclose multiple keywords in quotes.
      * **`--page-results` or `-pr` (required):** Your keywords will usually match many result pages, so you must indicate how many of them to pull. If no value is provided, `10` is used by default.
      * **`--html-folder` or `-hf` (optional):** The directory in which to store the raw HTML files. If omitted, a folder named after the sanitized keywords is created automatically; in this case, `electric_bikes` would be used as the results folder name.
      * **`--sync-api` or `-sa` (optional):** A flag indicating that you want to use sync mode. By default, async mode is used.

      **Example**:
      ```shell
          aba-run scraper "electric bikes" -hf "bike_results" -pr 15
      ```
  However, if you want to use sync mode, you can run:

  ```bash
  aba-run scraper "electric bikes" -hf "bike_results" -pr 15 --sync-api
  ```
  and voilà!

  Now the `bike_results` directory (using the name you provided) has been created and should contain all the HTML files from Alibaba.com matching your keywords.

  ---


  *   **`db-init` sub-command:** Creates a new MySQL or SQLite database with `products` and `suppliers` tables in it.
  This command takes one required argument and up to six optional arguments (depending on the engine you choose):
      * **`engine` (required):** Choose either `sqlite` or `mysql`. Set to `sqlite` by default.
      * **`--sqlite-file` or `-f` (optional, SQLite only):** The name for your SQLite database file (without any extension).
      * **`--host` or `-h`, `--port` or `-p`, `--user` or `-u`, `--password` or `-pw`, `--db-name` or `-db` (required for MySQL):** Your MySQL database connection details.
      * **`--only-with` or `-ow` (optional, MySQL only):** Use this flag if you only want to update some of the credentials stored in your `db_credentials.json` file rather than all of them.

  * **NB:** `--host` and `--port` default to `localhost` and `3306` respectively. When you initialize your database with the MySQL engine for the first time, you must set the `--user`, `--password` and `--db-name` arguments. This creates a `db_credentials.json` file in your current directory holding your credentials, so you won't have to set them again; later you will only need to set the fields that change when the time comes to [update](#important-information) your database.
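
  For example, a sketch of refreshing only the stored password (this assumes `--only-with` simply restricts the update to the credential options passed alongside it; check `aba-run db-init --help` for the exact usage):

  ```shell
  aba-run db-init mysql --only-with -pw "new_mysql_password"
  ```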

  **MySQL Use case:**

  ```shell
  aba-run db-init mysql -u "mysql_username" -pw "mysql_password" -db "alibaba_products" 
  ```



  **SQLite Use case :**

  ```shell
  aba-run db-init sqlite --sqlite-file alibaba_data
  ```
  The db-init sub-command uses the `sqlite` engine by default, so if you plan to use SQLite you can simply run:

  **SQLite Use case V2 :**
  ```shell
  aba-run db-init -f alibaba_data
  ```

  As soon as your database has been initialized, you can update it with the scraped data.

  ---


  *  **`db-update` sub-command:** Adds the scraped data from the HTML files to your database (you can't run this command twice against the same database; this avoids a UNIQUE constraint error).

  This command takes two required arguments and two optional arguments:
    * **`--db-engine` (required):** Select your database engine: `sqlite` or `mysql`. Set to `sqlite` by default.
    * **`--kw-results` or `-kr` (required):** The path to the folder containing the HTML files generated by the `scraper` sub-command.
    * **`--filename` or `-f` (required for SQLite):** If you're using SQLite, the desired filename for your database, without any extension.
    * **`--db-name` or `-db` (optional for MySQL):** If you're using the MySQL engine and want to push the data to a different database, the desired database name.

  **MySQL Use case:**

  The command below assumes that your database credentials are already stored in the `db_credentials.json` file, so the required parameters can be filled in automatically; otherwise it will raise an error.

  ```shell
  aba-run db-update mysql --kw-results bike_results
  ```
  **NB:** What if you want to change something while updating the database? Say you have run another scraping command and want to save that data under a different database name, without updating the credentials file or re-typing every parameter just to change the database name. Simply run `aba-run db-update mysql --kw-results another_keyword_folder_result --db-name "another_database_name"`.

  **SQLite Use case:**
  ```shell
  aba-run db-update sqlite --kw-results bike_results --filename alibaba_data
  ```
  ---
  <details>
  <summary> export-as-csv Demo</summary>

<div align="center">
  <p>
    <a href="#"><img src="_static/images/export-as-csv-demo.gif" width="900" height="340" alt="command result 1" /></a>
  </p>
  <p align="center">
  </p>
</div>
  </details>

  

  *  **`export-as-csv` sub-command:** Exports scraped data from your SQLite database to a CSV file. The CSV file will contain a `FULL OUTER JOIN` of the `products` and `suppliers` tables.

  This command takes two required arguments:
  * **`--sqlite_file` (required):** The name of your SQLite database file, with its extension.
  * **`--to` or `-t` (required):** The name of your CSV file, with its extension.
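
  **Example** (a sketch assuming the flag spellings documented above; run `aba-run export-as-csv --help` to confirm):

  ```shell
  aba-run export-as-csv --sqlite_file alibaba_data.sqlite --to alibaba_data.csv
  ```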


  *  **`ai-agent` sub-command:** Calls an AI agent so you can interact with your scraped data in plain English.
  - You can build a query such as "list all suppliers in china"; in this case the answer will be a pretty table with the names of the suppliers.
  - Or, for example, "plot the price of all the products in china"; in this case the answer will be a line chart of the prices of all the products in China.

  This command takes two required arguments:
  * **`query` (required):** The content of the query you want to ask the AI agent.
  * **`--csv-file` or `-f` (required):** The name of your CSV file, with its extension.
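
  **Example** (a sketch assuming the argument spellings documented above):

  ```shell
  aba-run ai-agent "list all suppliers in china" --csv-file alibaba_data.csv
  ```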



## Contributions Welcome!

I believe in the power of open source! If you'd like to contribute to this project, feel free to fork the repository, make your changes, and submit a pull request. I'm always open to new ideas and improvements.

## License

This project is licensed under the [GNU General Public License Version **3**](COPYING).