| Name | reddit-user-to-sqlite |
| --- | --- |
| Version | 0.4.2 |
| home_page | |
| Summary | Create a SQLite database containing data pulled from Reddit about a single user. |
| upload_time | 2023-07-23 04:36:52 |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.9 |
| license | |
| keywords | sqlite, reddit, dogsheep |
# reddit-user-to-sqlite
Stores all the content from a specific user in a SQLite database. This includes their comments and their posts.
## Install
The PyPI package is `reddit-user-to-sqlite` ([PyPI Link](https://pypi.org/project/reddit-user-to-sqlite/)). Install it globally using [pipx](https://pypa.github.io/pipx/):
```bash
pipx install reddit-user-to-sqlite
```
## Usage
The CLI currently exposes two commands: `user` and `archive`. They allow you to archive recent comments/posts from the API or _all_ posts (as read from a CSV file).
### user
Fetches all comments and posts for a specific user (the API caps this at roughly 1k of each; see the FAQs below).
```bash
reddit-user-to-sqlite user your_username
reddit-user-to-sqlite user your_username --db my-reddit-data.db
```
#### Params
> Note: the argument order is reversed from most dogsheep packages (which take db_path first). This method allows for use of a default db name, so I prefer it.
1. `username`: a case-insensitive string. The leading `/u/` is optional (and ignored if supplied); see the example below.
2. (optional) `--db`: the path to a sqlite file, which will be created or updated as needed. Defaults to `reddit.db`.
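Since the leading `/u/` is ignored, these two invocations should be equivalent (the username is just a placeholder):

```bash
reddit-user-to-sqlite user your_username
reddit-user-to-sqlite user /u/your_username
```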
### archive
Reads the output of a [Reddit GDPR archive](https://support.reddithelp.com/hc/en-us/articles/360043048352-How-do-I-request-a-copy-of-my-Reddit-data-and-information-) and fetches additional info from the Reddit API (where possible). This allows you to store more than 1k posts/comments.
> FYI: this behavior is built on the assumption that the archive Reddit provides has the same format regardless of whether you select `GDPR` or `CCPA` as the request type. But, just to be on the safe side, I recommend selecting `GDPR` during the export process until I'm able to confirm.
#### Params
> Note: the argument order is reversed from most dogsheep packages (which take db_path first). This method allows for use of a default db name, so I prefer it.
1. `archive_path`: the path to the (unzipped) archive directory on your machine. Don't rename/move the files that Reddit gives you.
2. (optional) `--db`: the path to a sqlite file, which will be created or updated as needed. Defaults to `reddit.db`.
3. `--skip-saved` (optional): a flag to skip loading saved comments/posts from the archive. See the example below.
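For example (the archive path here is just an illustration; point it at wherever you unzipped your Reddit export):

```bash
reddit-user-to-sqlite archive path/to/your/reddit-export
reddit-user-to-sqlite archive path/to/your/reddit-export --db my-reddit-data.db --skip-saved
```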
## Viewing Data
The resulting SQLite database pairs well with [Datasette](https://datasette.io/), a tool for exploring SQLite databases in the browser. Below is my recommended configuration.
First, install `datasette`:
```bash
pipx install datasette
```
Then, add the recommended plugins (for rendering timestamps and markdown):
```bash
pipx inject datasette datasette-render-markdown datasette-render-timestamps
```
Finally, create a `metadata.json` file next to your `reddit.db` with the following:
```json
{
  "databases": {
    "reddit": {
      "tables": {
        "comments": {
          "sort_desc": "timestamp",
          "plugins": {
            "datasette-render-markdown": {
              "columns": ["text"]
            },
            "datasette-render-timestamps": {
              "columns": ["timestamp"]
            }
          }
        },
        "posts": {
          "sort_desc": "timestamp",
          "plugins": {
            "datasette-render-markdown": {
              "columns": ["text"]
            },
            "datasette-render-timestamps": {
              "columns": ["timestamp"]
            }
          }
        },
        "subreddits": {
          "sort": "name"
        }
      }
    }
  }
}
```
Now when you run
```bash
datasette reddit.db --metadata metadata.json
```
you'll get nicely formatted output:
![](https://cdn.zappy.app/93b1760ab541a8b68c2ee2899be5e079.png)
![](https://cdn.zappy.app/5850a782196d1c7a83a054400c0a5dc4.png)
## Motivation
I got nervous when I saw Reddit's [notification of upcoming API changes](https://old.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/). To ensure I could always access data I created, I wanted to make sure I had a backup in place before anything changed in a big way.
## FAQs
### Why does this tool only show 1k recent comments / posts?
Reddit's paging API only returns 1000 items (page 11 comes back as an empty list). If you have more comments (or posts) than that, you can use the [GDPR archive import feature](#archive) to backfill your older data.
### Why are my longer posts truncated in Datasette?
Datasette truncates long text fields by default. You can disable this behavior by setting `truncate_cells_html` to `0` when running `datasette` ([see the docs](https://docs.datasette.io/en/stable/settings.html#truncate-cells-html)):
```shell
datasette reddit.db --setting truncate_cells_html 0
```
### How do I store a username that starts with `-`?
By default, [click](https://click.palletsprojects.com/en/8.1.x/) (the argument parser this tool uses) interprets a leading dash on an argument as a flag. If you're fetching data for user `-asdf`, you'll get an error saying `Error: No such option: -a`. To ensure the last argument is interpreted positionally, put it after a `--`:
```shell
reddit-user-to-sqlite user -- -asdf
```
### Why do some of my posts say `[removed]` even though I can see them on the web?
If a post is removed, only the mods and the user who posted it can see its text. Since this tool currently runs without any authentication, those removed posts can't be fetched via the API.
To load data about your own removed posts, use the [GDPR archive import feature](#archive).
### Why is the database missing data returned by the Reddit API?
While most [Dogsheep](https://github.com/dogsheep) projects grab the raw JSON output of their source APIs, Reddit's API responses include a lot of junk, so I opted for a slimmed-down approach.
If there's a field missing that you think would be useful, feel free to open an issue!
### Does this tool refetch old data?
When running the `user` command, yes. It fetches up to 1k each of comments and posts and updates the local copy.
When running the `archive` command, no. To cut down on API requests, it only fetches data about comments/posts that aren't yet in the database (since the archive may include many items).
Both of these may change in the future to be more in line with [Reddit's per-subreddit archiving guidelines](https://www.reddit.com/r/modnews/comments/py2xy2/voting_commenting_on_archived_posts/).
## Development
This section is for people making changes to this package.
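If you don't already have a virtual environment set up, one common way to create and activate one looks like this (any equivalent tooling works):

```bash
python -m venv .venv
source .venv/bin/activate
```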
When in a virtual environment, run the following:
```bash
pip install -e '.[test]'
```
This installs the package in editable (`-e`) mode and makes its dependencies available. You can now run `reddit-user-to-sqlite` to invoke the CLI.
### Running Tests
In your virtual environment, a simple `pytest` should run the unit test suite. You can also run `pyright` for type checking.
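Assuming both tools are installed in your environment, that looks like:

```bash
# run the unit test suite
pytest

# type-check the package
pyright
```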
### Releasing New Versions
> These notes are mostly for myself (or other contributors).
1. Run `just release` while your venv is active
2. Paste the stored API key (if you get an invalid password error, verify that `~/.pypirc` is empty)