aba_cli_scrapper

Name: aba_cli_scrapper
Version: 0.1.6
Home page: https://github.com/poneoneo/Alibaba-CLI-Scrapper
Summary: Scrape all products and their related suppliers on Alibaba, based on keywords provided by the user, and save the results into a database (MySQL/SQLite).
Upload time: 2024-07-21 14:15:38
Author: poneoneo
Maintainer: None
Docs URL: None
Requires Python: <4.0,>=3.10
License: GNU General Public License, Version 3, 29 June 2007
Keywords: cli, scrapping, alibaba, scraper, alibaba-cli-scrapper
Requirements: aiohttp, aiosignal, annotated-types, attrs, black, certifi, cffi, charset-normalizer, click, colorama, cryptography, frozenlist, greenlet, idna, loguru, markdown-it-py, mdurl, multidict, mypy-extensions, mysqlclient, nodeenv, packaging, pathspec, platformdirs, playwright, pycparser, pydantic, pydantic-core, pyee, pygments, pyright, python-decouple, python-dotenv, requests, rich, selectolax, shellingham, sqlalchemy, sqlmodel, typer, typing-extensions, urllib3, win32-setctime, yarl
            <div align="center">
  <p>
    <a href="#"><img src="images\d.jpeg" width="600" height="300" alt="overview image" /></a>
  </p>
</div>

# Alibaba-CLI-Scraper

Alibaba-CLI-Scraper is a Python package that provides a dedicated CLI interface for scraping data from Alibaba.com.
The purpose of this project is to extract products and their related supplier information from Alibaba.com and store them in a local database (SQLite or MySQL). The project uses asynchronous requests to handle large numbers of pages efficiently, and lets users run the scraper and manage the database through a user-friendly command-line interface (CLI).

**Features:**

* **Asynchronous Scraping:** Uses Playwright's asynchronous API to handle many result pages efficiently.
* **Database Integration:**  Stores scraped data in a database (SQLite or MySQL) for structured persistence.
* **User-Friendly CLI:** Provides easy-to-use commands for running the scraper and managing the database.

## Future Enhancements

This project has a lot of potential for growth! Here are some exciting features I'm considering for the future:

*   **Data Export:** Add functionality to export scraped data to various formats like CSV and Excel spreadsheets for easier analysis and sharing.
*   **PostgreSQL Support:**  Expand database compatibility to include PostgreSQL, giving users more database choices.
*   **Retrieval Augmented Generation (RAG):** Integrate a RAG system that allows users to ask natural language questions about the scraped data, making it even more powerful for insights.

### Installation
To avoid conflicts with packages or dependencies already installed on your machine, this tool would ideally be installed with pipx so it runs in an isolated environment. Since that is not yet supported, create a virtual environment first and install the package into it with the following commands:

1. **Create virtual environment:**
   ```bash
      python -m venv scrapper
   ```

2. **Activate virtual environment:**
   ```bash
      scrapper\Scripts\activate.bat
   ```

3. **Install scraper package:**
   ```bash
      python -m pip install aba-cli-scrapper 
   ```
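**Note:** The activation command in step 2 is Windows-specific. On macOS or Linux the `venv` equivalent is shown below, followed by a quick check that the CLI entry point is available. The final `playwright install` line is only a suggestion: the tool scrapes with Playwright, whose browser binaries usually need to be downloaded once, but the project does not document that step explicitly.

   ```bash
   # macOS/Linux equivalent of step 2
   source scrapper/bin/activate

   # check that the CLI entry point is available
   aba-run --help

   # optional: if scraping fails because no browser is found, Playwright's
   # browser binaries may need to be installed once
   playwright install
   ```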
  
## Using the CLI Interface

This project provides a user-friendly command-line interface (CLI) built with `typer` for interacting with the scraper and database. 

### Available Commands:
**Need help?** Run any command followed by `--help` for detailed information about its usage and options. For example, `aba-run --help` shows all available subcommands and how to use them.

<div align="center">
  <p>
    <a href="#"><img src="images\aba-run--help.png" width="600" height="300" alt="command result 1" /></a>
  </p>
  <p align="center">
  </p>
</div>

**Warnings:** *1)* `aba-run` is the base command: all of the commands introduced below are sub-commands and must always be preceded by `aba-run`.
        *2)* This package is still a work in progress, and the async part of the tool still has bugs. Asynchronous retrieval of the HTML result pages goes through Bright Data to improve performance, so it may be unavailable if the Bright Data API key has hit its limit. If for any reason you encounter an issue with the async API (the default), use the sync API instead by passing the `--sync-api` flag; it works perfectly fine. So let's jump to the tutorial.

Practice makes perfect, doesn't it? So let's get started with a use case example.
Let's assume you want to scrape data about electric bikes from Alibaba.com.


*   **`scraper`:**  Initiates scraping of Alibaba.com based on the provided keywords.
This command takes two required arguments and one optional argument:
    *   **`key_words` (required):** The search term(s) for finding products on Alibaba. Enclose multiple keywords in quotes.
    *   **`--page-results` (required):** Keywords usually match many result pages, so you must indicate how many of them you want to pull.
    *   **`--html-folder` (optional):** Specifies the directory to store the raw HTML files. If omitted, a folder with sanitized keywords as name will be automatically created.

    **Example**:
    ```bash
    aba-run scraper "electric bikes" --html-folder bike_results --page-results 15
    ```
By default, `scraper` uses the async API, which, as explained above, can be unstable. If you want to use the sync API instead, run:
    ```bash
    aba-run scraper "electric bikes" --html-folder bike_results --page-results 15  --sync-api
    ```
    and voila! 

If the `--html-folder` option is not provided, a folder named after the sanitized keywords is created automatically, which here would give `electric_bikes` as the results folder name.
After the run, the `bike_results` directory (since you provided the name you wanted) has been created and contains all the HTML files from Alibaba.com matching your keywords.
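As a rough illustration, the results folder might then look something like this (the file names below are purely hypothetical; only the folder location and the `.html` extension follow from the description above):

```bash
ls bike_results/
# page_1.html  page_2.html  ...  page_15.html   (hypothetical names, one file per result page)
```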

Next, you must initialize a database. MySQL and SQLite are supported.
*   **`db-init`:** Creates a new MySQL or SQLite database.
This command takes one required argument and up to six options (depending on the engine you choose):
    *   **`engine` (required):** Choose either `sqlite` or `mysql`.
    *   **`--sqlite-file` (optional, SQLite only):**  The name for your SQLite database file (without the extension).
    *   **`--host`, `--port`, `--user`, `--password`, `--db-name` (required for MySQL):**  Your MySQL database connection details.
    *   **`--only-with` (optional, MySQL only):**  Use this when you only want to update some of the credentials stored in the `db_credentials.json` file, rather than all of them, before initializing a brand-new database.
  
**MySQL Example:**
  ```bash
  aba-run db-init mysql --user "mysql_username" --password "mysql_password" --db-name "alibaba_products" 
  ```
Assuming you have already initialized your database and now want to create a new one without re-entering all of your credentials, simply run:

  ```bash
  aba-run db-init mysql --db-name "alibaba_products" --only-with 
  ```

**NB: This command saves your credentials in the `db_credentials.json` file, so when you later need to update your database you can simply run `aba-run db-update  mysql --kw-results bike_results\` and your saved credentials will be used automatically.**
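For reference, you can peek at the saved credentials; a minimal sketch of what the file might contain is shown below (the exact field names are an assumption, not something the project documents):

```bash
cat db_credentials.json
# assumed structure, for illustration only:
# {"host": "localhost", "port": 3306, "user": "mysql_username",
#  "password": "mysql_password", "db_name": "alibaba_products"}
```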
   

 
**SQLite Example:**
  ```bash
  aba-run db-init sqlite --sqlite-file alibaba_data
  ```

As soon as your database has been initialized, you can populate it with the scraped data.
*   **`db-update`:** Adds scraped data from the HTML files to your database (do not run this command twice with the same database credentials, or you will hit a UNIQUE constraint error).

This command takes two required arguments and two optional arguments:
    *   **`--db-engine` (required):** Select your database engine: `sqlite` or `mysql`.
    *   **`--kw-results` (required):**  The path to the folder containing the HTML files generated by the `scraper` sub-command.
    *   **`--filename` (required for SQLite):** If you're using SQLite, provide the desired filename for your database, without any extension.
    *   **`--db-name` (optional for MySQL):** If you're using MySQL and want to push the data to a different database, provide the desired database name.

  **MySQL Example:**
  ```bash
  aba-run db-update  mysql --kw-results bike_results\ 
  ```
**NB: What if you want to change something while updating the database? Say you have run another scraping command and want to save that data under a different database name, without updating the credentials file or re-typing every parameter just to change the name: simply run `aba-run db-update  mysql --kw-results another_keyword_folder_result\ --db-name "another_database_name"`.**

  **SQLite Example:**
  ```bash
  aba-run db-update  sqlite --kw-results bike_results\ --filename alibaba_data
  ```
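Putting it all together, a complete SQLite session built from the commands shown above would look like this:

```bash
# 1. Scrape 15 result pages for "electric bikes" into bike_results/
#    (--sync-api is the stable fallback while the async API is being worked on)
aba-run scraper "electric bikes" --html-folder bike_results --page-results 15 --sync-api

# 2. Create the SQLite database file alibaba_data
aba-run db-init sqlite --sqlite-file alibaba_data

# 3. Load the scraped HTML files into the database
aba-run db-update sqlite --kw-results bike_results\ --filename alibaba_data
```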

## Contributions Welcome!

I believe in the power of open source! If you'd like to contribute to this project, feel free to fork the repository, make your changes, and submit a pull request. I'm always open to new ideas and improvements.

## License

This project is licensed under the [GNU General Public License Version **3**](COPYING).


  
            
