- **Name:** sreddit
- **Version:** 1.0.3
- **Summary:** Web scraper for subreddits
- **Home page:** https://github.com/Mandy-cyber/sreddit
- **Author:** Mandy-cyber
- **Requires Python:** >=3.6
- **Keywords:** python, selenium, reddit, subreddit
- **Upload time:** 2023-07-03 18:36:51
- **Requirements:** none recorded
- **Travis CI:** not used
- **Coveralls test coverage:** not used
# sreddit
A simple tool for scraping information from subreddits.

<br>

## **Installation**
To *install*, you can either [download the PyPI package](https://pypi.org/project/sreddit/#files) or run:
```shell
pip install sreddit
```

<br>

To *upgrade* to the latest version, run:
```shell
pip install --upgrade sreddit
```

<br>


## **Usage**
### **srtitles**
Gets all unique titles from a subreddit.

```python
from sreddit import SubRedditTitles

scraper = SubRedditTitles(subreddit="subreddit_name")
scraper.run()
```

<br>

### **srbodies**
Gets all unique post bodies (i.e. descriptions) from a subreddit.

```python
from sreddit import SubRedditBodies

scraper = SubRedditBodies(subreddit="subreddit_name")
scraper.run()
```

<br>

### **Optional Arguments**

<br>

| **Argument**   | **What it Does**                                                                    |
|----------------|-------------------------------------------------------------------------------------|
| `keywords`     | Only include content containing one or more of these keywords                       |
| `show_progess` | Whether to show scraping progress (i.e. the number of titles found) in the terminal |
| `make_db`      | Whether to create a database of the scraped content after scraping                  |
| `db_name`      | Name of the database to create (must end in `.db`)                                  |
| `scroll_time`  | How long to wait between scrolling down the page and collecting elements            |

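The table above does not show these options in action. A minimal, self-contained sketch of how the `keywords` filter and the `make_db`/`db_name` step could behave (an illustration only; `filter_by_keywords` and `save_to_db` are hypothetical names, not part of the sreddit API):

```python
import sqlite3

def filter_by_keywords(titles, keywords):
    """Keep only titles containing at least one keyword (case-insensitive)."""
    if not keywords:
        return list(titles)
    lowered = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in lowered)]

def save_to_db(rows, db_name="scraped.db"):
    """Write scraped rows into a SQLite table, as make_db/db_name suggest."""
    assert db_name.endswith(".db"), "db_name must end in .db"
    conn = sqlite3.connect(db_name)
    conn.execute("CREATE TABLE IF NOT EXISTS content (text TEXT UNIQUE)")
    conn.executemany("INSERT OR IGNORE INTO content VALUES (?)", [(r,) for r in rows])
    conn.commit()
    conn.close()

titles = ["Python tips", "Cat pictures", "Selenium scraping guide"]
print(filter_by_keywords(titles, ["python", "selenium"]))
# → ['Python tips', 'Selenium scraping guide']
```

The library itself presumably applies equivalent logic internally when these arguments are passed to the scraper constructors.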

<br>

## **FAQs**

            
