Exports all accessible reddit comments for an account using [pushshift](https://pushshift.io/).
[![PyPi version](https://img.shields.io/pypi/v/pushshift_comment_export.svg)](https://pypi.python.org/pypi/pushshift_comment_export) [![Python 3.6|3.7|3.8](https://img.shields.io/pypi/pyversions/pushshift_comment_export.svg)](https://pypi.python.org/pypi/pushshift_comment_export) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)
### Install
Requires `python3.6+`
To install with pip, run:
    pip install pushshift_comment_export
It is accessible as the script `pushshift_comment_export`, or by using `python3 -m pushshift_comment_export`.
---
Reddit (supposedly) only indexes the last 1000 items per query, so there are lots of comments that I can't access through the official reddit API (I run [`rexport`](https://github.com/karlicoss/rexport/) periodically to pick up any new data).

This downloads all the comments that pushshift has, which is typically more than the 1000-item query limit. This is only really meant to be run once per account, to recover old data that the official API no longer exposes.
For more context see the comments [here](https://github.com/karlicoss/rexport/#api-limitations).
Reddit has recently added a [data request](https://www.reddit.com/settings/data-request) which may let you get comments going further back, but pushshift's JSON response contains a bit more info than the GDPR export does.
This complies with the rate limit [described here](https://github.com/dmarx/psaw#features).
```
$ pushshift_comment_export <reddit_username> --to-file ./data.json
.....
[D 200903 19:51:49 __init__:43] Have 4700, now searching for comments before 2015-10-07 23:32:03...
[D 200903 19:51:49 __init__:17] Requesting https://api.pushshift.io/reddit/comment/search?author=username&limit=100&sort_type=created_utc&sort=desc&before=1444260723...
[D 200903 19:51:52 __init__:43] Have 4800, now searching for comments before 2015-09-22 13:55:00...
[D 200903 19:51:52 __init__:17] Requesting https://api.pushshift.io/reddit/comment/search?author=username&limit=100&sort_type=created_utc&sort=desc&before=1442930100...
[D 200903 19:51:57 __init__:43] Have 4860, now searching for comments before 2014-08-28 07:10:14...
[D 200903 19:51:57 __init__:17] Requesting https://api.pushshift.io/reddit/comment/search?author=username&limit=100&sort_type=created_utc&sort=desc&before=1409209814...
[I 200903 19:52:01 __init__:64] Done! writing 4860 comments to file ./data.json
```
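The log output above reflects a simple backwards-pagination loop: request up to 100 comments sorted newest-first, then use the oldest `created_utc` in each batch as the `before` cursor for the next request, until pushshift returns an empty page. A minimal sketch of that loop, assuming the query parameters shown in the logs (the function names here are my own, not the package's internals):

```python
import json
import time
import urllib.parse
import urllib.request
from typing import Callable, Dict, List, Optional

BASE = "https://api.pushshift.io/reddit/comment/search"


def build_url(author: str, before: Optional[int] = None, limit: int = 100) -> str:
    # mirrors the query parameters visible in the log output above
    params: Dict[str, object] = {
        "author": author,
        "limit": limit,
        "sort_type": "created_utc",
        "sort": "desc",
    }
    if before is not None:
        params["before"] = before
    return BASE + "?" + urllib.parse.urlencode(params)


def fetch_page(url: str) -> List[dict]:
    # pushshift wraps results in a top-level "data" key
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["data"]


def all_comments(author: str, fetch: Callable[[str], List[dict]] = fetch_page) -> List[dict]:
    """Page backwards through time until pushshift returns an empty batch."""
    comments: List[dict] = []
    before: Optional[int] = None
    while True:
        batch = fetch(build_url(author, before))
        if not batch:
            return comments
        comments.extend(batch)
        before = batch[-1]["created_utc"]  # oldest item in this page
        time.sleep(1)  # stay well under the psaw-style rate limit
```

The `fetch` parameter is injectable so the loop can be exercised without hitting the (unauthenticated) API.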
pushshift doesn't require authentication; if you want to preview what the output looks like, just go to <https://api.pushshift.io/reddit/comment/search?author=>
#### Usage in HPI
This has been merged into [karlicoss/HPI](https://github.com/karlicoss/HPI), which combines the periodic `rexport` exports (to pick up new comments) with the older data from this tool; see [my/reddit](https://github.com/karlicoss/HPI/tree/master/my/reddit). My config looks like:
```python
class reddit:
    class rexport:
        export_path: Paths = "~/data/rexport/*.json"
    class pushshift:
        export_path: Paths = "~/data/pushshift/*.json"
```
Then importing from `my.reddit.all` combines the data from both of them:
```
>>> from my.reddit.rexport import comments as rcomments
>>> from my.reddit.pushshift import comments as pcomments
>>> from my.reddit.all import comments
>>> from more_itertools import ilen
>>> ilen(rcomments())
1020
>>> ilen(pcomments())
4891
>>> ilen(comments())
4914
```
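Note that the combined count (4914) is less than the sum of the two sources (1020 + 4891), because the same comment can appear in both exports. A minimal sketch of the kind of id-based deduplication `my.reddit.all` would need to do to produce that result (this is my own illustration, not HPI's actual implementation):

```python
from itertools import chain
from typing import Iterable, Iterator, Set


def merged_comments(*sources: Iterable[dict]) -> Iterator[dict]:
    """Concatenate comment sources, yielding each comment id only once.

    Earlier sources take precedence, so the first copy seen wins.
    """
    seen: Set[str] = set()
    for comment in chain(*sources):
        if comment["id"] not in seen:
            seen.add(comment["id"])
            yield comment
```

Called as `merged_comments(rcomments(), pcomments())`, overlapping ids are only counted once, which is why `ilen(comments())` falls short of the two counts added together.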