redd-harvest

Name: redd-harvest
Version: 0.0.1
Summary: Download media from Reddit posts.
Home page: https://github.com/pyqlsa/redd-harvest
Author: pyqlsa
License: MIT
Requires Python: >=3.10
Keywords: reddit, download
Uploaded: 2024-06-09 17:46:27
Requirements: none recorded
# redd-harvest
Download media from Reddit posts.  Why? Why not.

# Install
This can be installed using pip:
```bash
python3 -m pip install --upgrade redd-harvest
```

## Setup from source
A `Makefile` is available that should make it easy (for most UNIX users) to get this project set up.  The only requirements are `python3`, `venv`, and `setuptools`.

```bash
# Set up a virtual environment and install the project's dependencies:
make
# Activate the virtual environment to interact with a live editable version:
. ./activate
# ...and it should be available to run:
redd-harvest --help
```

# Options
```
Global Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  run    Run the harvester.
  setup  Bootstrap an example config in the default location.

Options for 'run':
  -c, --config FILE      Path to config file (default: ~/.config/redd-
                         harvest/config.yml).
  -s, --subreddits-only  Only download from configured subreddits (useful in
                         testing).
  -r, --redditors-only   Only download from configured redditors (useful in
                         testing).
  -o, --only-name TEXT   Only download from a configured entity with the given
                         name (useful in testing).
  -i, --interactive      Elevates a few interactive prompts when certain
                         events occur.
  --help                 Show this message and exit.

Options for 'setup':
  --help  Show this message and exit.
```
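
For example, the `run` flags above can be combined to exercise a single configured entity while testing (`EarthPorn` here refers to the subreddit configured in the example below):

```bash
# Limit a test run to a single configured subreddit:
redd-harvest run --subreddits-only --only-name EarthPorn
```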

# Configuration
This is where the majority of the tunables live.  A default configuration is not provided upon installation, but if you want to use the example below as a starting point, just run `redd-harvest setup`.
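
For instance, to bootstrap the example config and then tailor it (the editor invocation is illustrative; the default path matches the `run` options above):

```bash
redd-harvest setup
$EDITOR ~/.config/redd-harvest/config.yml
```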

## Before getting started
Since this tool interacts with the Reddit API, you need a Reddit account before running it.  Create one at [reddit.com](https://www.reddit.com/).

Next, at a minimum, you need a Client ID & Client Secret to access Reddit's API as a *script* application (that's what this is!).  If you don't already have those, follow Reddit's [First Steps Guide](https://github.com/reddit/reddit/wiki/OAuth2-Quick-Start-Example#first-steps) to create them.

Once you have a Client ID & Client Secret, these must be provided to `redd-harvest` via its configuration file.  This is enough to get you a read-only client to start running.

If an authorized client is desired, you'll also need to provide your username and password via the configuration file.  Currently, `redd-harvest` doesn't benefit much from being fully authenticated/authorized, aside from an increased upper bound on Reddit's API rate limit.
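
As a minimal sketch, a read-only setup only needs the client credentials plus the app name used for the user-agent; whether every other `globals` field can be omitted entirely hasn't been verified here (see the full structure below):

```yaml
globals:
  app: redd-harvest
  client_id: <put-your-client-id-here>
  client_secret: <put-your-client-secret-here>
```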

## Config File Structure
```yaml
---
globals:
  # Used in the construction of the user-agent; this should coincide with the
  # name of the associated app created in your reddit account.
  app: redd-harvest
  # Used in user-agent and reddit client construction (both username and
  # password are required to build a fully authenticated reddit client).
  username: <put-your-username-here>
  # Used to build a fully authenticated reddit client.
  password: <put-your-password-here>
  # Obtained from Reddit after setting up your script application.
  client_id: <put-your-client-id-here>
  # Obtained from Reddit after setting up your script application.
  client_secret: <put-your-client-secret-here>
  # Default post limit; can be overwritten here, or individually at each
  # redditor/subreddit entry.
  post_limit: 5
  # Direct pass-through to praw rate limit max wait setting.
  rate_limit_max_wait: 300
  # Seconds to sleep between fetching submissions from each configured entity
  # (a redditor or subreddit). This can be used as a 'protective' measure to
  # reduce the likelihood of running up against the reddit api rate limits,
  # even though rate limits should be cleanly handled.
  backoff_sleep: 0.1
  # Folder to use for saving content retrieved from submissions.
  download_folder: ~/.redd-harvest/data
  # Within the download folder, store files by media type (image/video).
  separate_media: true
  # By default pruning is disabled; if set to true, pruning of saved media is
  # executed before retrieving new posts. Pruning: if we can determine that an
  # ignored redditor posted in a subreddit that is being followed, attempt to
  # remove just that redditor's posts from where nested posts would be saved.
  # If we can determine that a redditor that is being followed has posted in an
  # ignored subreddit, attempt to remove just posts from that subreddit from
  # where nested posts would be saved. 'nested' is described in more detail below.
  prune_ignorables: false
  # When a post is retrieved for an entity (subreddit/redditor), and the post
  # overlaps with a configured entity of the opposite type, choose which entity
  # to favor when determining where to store the content from the post. For
  # example, if user ABC is a redditor we follow, and they also posted content
  # to a subreddit we're following, it can be chosen whether to favor the
  # download folder for user ABC or the specific subreddit. Accepted values are
  # 'redditor', 'subreddit', or 'disabled'. Default is 'redditor'.
  favor_entity: redditor
# Individual redditors can be followed the same as subreddits, but none are
# specified in this example.
redditors: []
# Specify subreddits to follow (case matters for the value of 'name').
subreddits:
  - name: EarthPorn
    # Within the specified download folder, choose how to store files; 'nested'
    # means <subreddit>/<redditor> for subreddits, and <redditor>/<subreddit>
    # for redditors. 'nested' is the default store_type for subreddits.
    store_type: nested
    # When creating a folder for the downloaded files, use this as the folder
    # name rather than the name of the subreddit/redditor; this is ignored when
    # using store_type 'really-flat' (see below).
    alias: earthpapes
    search_criteria:
      # Post limit specified at the level of each entity takes precedence over
      # a globally defined post limit.
      post_limit: 10
      # Specify how to sort posts when retrieving from the entity, the same as
      # how you would when browsing reddit.  A special 'stream' option is also
      # supported which behaves like 'new', but live streams posts as they are
      # submitted (which has the side effect of ignoring pinned submissions).
      sort_type: hot
  - name: wallpaper
    # Within the specified download folder, you can also choose to store files
    # in a 'flat' structure; 'flat' means <subreddit> for subreddits, and
    # <redditor> for redditors. 'flat' is the default store_type for redditors.
    store_type: flat
    search_criteria:
      post_limit: 15
      sort_type: top
      # Some sort types ('top'/'controversial') support toggling a time
      # boundary; supported values are the same as when normally browsing
      # reddit ('hour', 'day', 'week', 'month', 'year', 'all').
      sort_toggle: month
  - name: wallpapers
    # Within the specified download folder, you can also choose to store files
    # in a 'really-flat' structure; 'really-flat' means files will be stored in
    # the root of the download folder.
    store_type: really-flat
    search_criteria:
      post_limit: 10
      sort_type: top
      sort_toggle: year
# Individual redditors can be ignored the same as subreddits, but none are
# specified in this example. Example situation: I want to follow a specific
# subreddit, but I don't care for seeing posts from X redditor. Just specify
# the name of the redditor (case matters). If a redditor is specified both here
# and in the redditors section, the redditor will be ignored.
ignored_redditors: []
# Example situation: I want to follow a specific redditor, but I don't care for
# seeing their posts in X subreddit. Just specify the name of the subreddit
# (case matters). If a subreddit is specified both here, and in the subreddits
# section, the subreddit will be ignored.
ignored_subreddits:
  - name: drawing
  - name: birding
  - name: wildlifephotography
# We need to whitelist the urls, file extensions, etc. that we trust and care
# about saving; it's important that we trust these domains / base urls since
# we will automatically be downloading content from them.
links:
  # If a given post links to a url with this base, ...
  - base_url: https://i.redd.it
    # ...then we'll try to directly download it if the url matches the listed
    # extensions.
    direct_dl_url_extensions: [ jpg, jpeg, png ]
  # Galleries are uniquely handled; all gallery items from a given post will be
  # downloaded (at the highest available quality).
  - base_url: https://www.reddit.com/gallery
  # Reddit-hosted videos are also uniquely handled; just specifying the
  # base_url is sufficient.
  - base_url: https://v.redd.it
  # Posts linking to a url with this base will also be entertained...
  - base_url: https://i.imgur.com
    # ...and we'll try to directly download content if the url matches the
    # listed extensions.
    direct_dl_url_extensions: [ jpg, jpeg, png ]
    sub_searches:
      # ...but if the url has this extension, we might be able to find the link
      # to the original content in the page...
      - extension: gifv
        # ...so let's use this regex to try to find the url we really want
        # within the page. Regexes are treated as raw strings, so no
        # language-specific care in escaping needs to be taken; if it works on
        # an online regex tester, there's a good chance it will work here; note
        # that this application doesn't support capture groups; if groups are
        # desired to be used, you must use non-capturing groups, like:
        # `(?:some|thing)`.
        page_search_regex: https://i\.imgur\.com/[0-9a-zA-Z]+\.mp4
  # Sometimes a site will host content from different domains/subdomains, so
  # we'll also trust imgur content from this base url...
  - base_url: https://imgur.com
    # ...and we'll want to directly download the content if it matches these
    # extensions...
    direct_dl_url_extensions: [ jpg, jpeg, png ]
    # ...but if the content doesn't match the direct download extension, we can
    # use the list of regexes to search the page for the real content
    # (sub_search extensions are optional).
    sub_searches:
      # let's look for video files...
      - page_search_regex: https://i\.imgur\.com/[0-9a-zA-Z]+\.mp4
      # ...as well as images.
      - page_search_regex: https://i\.imgur\.com/[0-9a-zA-Z]+\.jpg

```
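
Since `page_search_regex` patterns are plain regexes with no capture groups, one way to sanity-check a pattern before committing it to the config is to run it against a live page with standard tools (the post URL here is a placeholder):

```bash
curl -s https://imgur.com/<some-post> | grep -oE 'https://i\.imgur\.com/[0-9a-zA-Z]+\.mp4'
```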

# Behavior
## Saving content
When media files are saved, they are named by their SHA256 hash.

Instead of maintaining a separate database to track which content has already been encountered, hashing was chosen as a lazy means of deduplicating content.  Deduplication of files only occurs within a single folder (i.e. deduplication does not occur across folders once a final download location is chosen based on the configuration).
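
Given the naming scheme, any saved file can be verified by re-hashing it; this sketch assumes names look like `<sha256>.<ext>`, which is an assumption about the exact format:

```bash
# Assumes files are saved as <sha256>.<ext>; re-hash and compare to the name.
f=~/.redd-harvest/data/<some-file>.jpg
[ "$(sha256sum "$f" | awk '{print $1}')" = "$(basename "${f%.*}")" ] && echo "content matches name"
```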

Media file hashes are calculated before the files are written to disk, which has the positive side effect of reducing writes to your filesystem.  Even when a duplicate is never written to disk, the media still needs to be downloaded in order to calculate the hash, so duplicates will still tax the network.

Another side effect of this scheme is that if you've downloaded some content that you're not interested in keeping, you can prevent `redd-harvest` from continuing to attempt to save the content by truncating the file in place.  If the content is ever encountered again, `redd-harvest` will think it already has a copy because a file name with the SHA256 already exists in the folder.  It's basically a lazy strategy for being able to ignore specific files.
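
For example, to blacklist one specific file this way:

```bash
# Empty the file but keep its hash-derived name, so the content reads as already saved:
truncate -s 0 ~/.redd-harvest/data/<unwanted-hash>.jpg
```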

## How is it intended to run?
This is designed as a one-shot tool that retrieves content from Reddit, serially.
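
If you want continuous harvesting, schedule it externally; e.g. with a cron entry (the install path is an assumption):

```bash
# crontab -e: harvest once an hour
0 * * * * $HOME/.local/bin/redd-harvest run
```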

            
