# library (media toolkit)

A wise philosopher once told me: "the future is [autotainment](https://www.youtube.com/watch?v=F9sZFrsjPp0)".

Manage and curate large media libraries. An index for your archive.
The primary use case is the local filesystem, but it also supports some virtual constructs like
tracking online video playlists (e.g. YouTube subscriptions) and scheduling browser tabs.

<img align="right" width="300" height="600" src="https://raw.githubusercontent.com/chapmanjacobd/library/main/.github/examples/art.avif" />

## Install

Linux is recommended, but [Windows setup instructions](./Windows.md) are available.

    pip install xklb

It should also work on macOS.
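
If you prefer an isolated install for CLI tools, `pipx` should also work (a sketch, assuming you have pipx installed):

    pipx install xklb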

### External dependencies

Required: `ffmpeg`

Some features work better with: `mpv`, `firefox`, `fish`
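
For example, on common platforms (package names assumed; adjust for your package manager):

    sudo apt install ffmpeg mpv firefox fish    # Debian/Ubuntu
    brew install ffmpeg mpv fish                # macOS (Homebrew)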

## Getting started

<details><summary>Local media</summary>

### 1. Extract Metadata

For thirty terabytes of video the initial scan takes about four hours to complete.
After that, subsequent scans of the path (or any subpaths) are much quicker--only
new files will be read by `ffprobe`.

    library fsadd tv.db ./video/folder/

![termtosvg](./examples/extract.svg)
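
To pick up new files later, rescan with the update subcommand (`fs-update`, listed under the subcommands below):

    library fs-update tv.db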

### 2. Watch / Listen from local files

    library watch tv.db                           # the default post-action is to do nothing
    library watch tv.db --post-action delete      # delete file after playing
    library listen finalists.db -k ask_keep       # ask whether to keep file after playing

To stop playing, press Ctrl+C in either the terminal or mpv.

</details>

<details><summary>Online media</summary>

### 1. Download Metadata

Download playlist and channel metadata. Break free of the YouTube algo~

    library tubeadd educational.db https://www.youtube.com/c/BranchEducation/videos

[![termtosvg](./examples/tubeadd.svg "library tubeadd example")](https://asciinema.org/a/BzplqNj9sCERH3A80GVvwsTTT)

And you can always add more later--even from different websites.

    library tubeadd maker.db https://vimeo.com/terburg

To prevent mistakes, the default configuration is to download metadata for only
the most recent 20,000 videos per playlist/channel.

    library tubeadd maker.db --extractor-config playlistend=1000

Be aware that some YouTube channels have many items--for example,
the TEDx channel has about 180,000 videos. Some channels even have upwards of
two million videos--more than you could likely watch in one sitting, maybe even one lifetime.
On a high-speed connection (>500 Mbps), it can take up to five hours to download
the metadata for 180,000 videos.

TIP! If you often copy and paste many URLs, you can pass line-delimited text as arguments via a subshell. For example, in `fish` shell with [cb](https://github.com/niedzielski/cb):

    library tubeadd my.db (cb)

Or in Bash:

    library tubeadd my.db $(xclip -selection c)
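
If the URLs live in a file, you can also pipe them in line-delimited via stdin (`tubeadd` reads `-` as stdin; see the tube-add help below):

    cat ./my_yt_subscriptions.txt | library tubeadd my.db -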

#### 1a. Get new videos for saved playlists

Tubeupdate will go through the list of added playlists and fetch metadata for
any videos not previously seen.

    library tube-update tube.db

### 2. Watch / Listen from websites

    library watch maker.db

To stop playing, press Ctrl+C in either the terminal or mpv.

</details>

<details><summary>List all subcommands</summary>

    $ library
    xk media library subcommands (v2.6.022)

    Create database subcommands:
    ╭───────────────┬──────────────────────────────────────────╮
    │ fs-add        │ Add local media                          │
    ├───────────────┼──────────────────────────────────────────┤
    │ tube-add      │ Add online video media (yt-dlp)          │
    ├───────────────┼──────────────────────────────────────────┤
    │ web-add       │ Add open-directory media                 │
    ├───────────────┼──────────────────────────────────────────┤
    │ gallery-add   │ Add online gallery media (gallery-dl)    │
    ├───────────────┼──────────────────────────────────────────┤
    │ tabs-add      │ Create a tabs database; Add URLs         │
    ├───────────────┼──────────────────────────────────────────┤
    │ links-add     │ Create a link-scraping database          │
    ├───────────────┼──────────────────────────────────────────┤
    │ site-add      │ Auto-scrape website data to SQLite       │
    ├───────────────┼──────────────────────────────────────────┤
    │ reddit-add    │ Create a reddit database; Add subreddits │
    ├───────────────┼──────────────────────────────────────────┤
    │ hn-add        │ Create / Update a Hacker News database   │
    ├───────────────┼──────────────────────────────────────────┤
    │ substack      │ Backup substack articles                 │
    ├───────────────┼──────────────────────────────────────────┤
    │ tildes        │ Backup tildes comments and topics        │
    ├───────────────┼──────────────────────────────────────────┤
    │ places-import │ Import places of interest (POIs)         │
    ├───────────────┼──────────────────────────────────────────┤
    │ row-add       │ Add arbitrary data to SQLite             │
    ╰───────────────┴──────────────────────────────────────────╯

    Text subcommands:
    ╭────────────────┬─────────────────────────────────────────────╮
    │ cluster-sort   │ Sort text and images by similarity          │
    ├────────────────┼─────────────────────────────────────────────┤
    │ extract-links  │ Extract inner links from lists of web links │
    ├────────────────┼─────────────────────────────────────────────┤
    │ extract-text   │ Extract human text from lists of web links  │
    ├────────────────┼─────────────────────────────────────────────┤
    │ markdown-links │ Extract titles from lists of web links      │
    ├────────────────┼─────────────────────────────────────────────┤
    │ nouns          │ Unstructured text -> compound nouns (stdin) │
    ╰────────────────┴─────────────────────────────────────────────╯

    Folder subcommands:
    ╭───────────────┬──────────────────────────────────────────────────╮
    │ merge-folders │ Merge two or more file trees                     │
    ├───────────────┼──────────────────────────────────────────────────┤
    │ relmv         │ Move files preserving parent folder hierarchy    │
    ├───────────────┼──────────────────────────────────────────────────┤
    │ mv-list       │ Find specific folders to move to different disks │
    ├───────────────┼──────────────────────────────────────────────────┤
    │ scatter       │ Scatter files between folders or disks           │
    ╰───────────────┴──────────────────────────────────────────────────╯

    File subcommands:
    ╭────────────────┬─────────────────────────────────────────────────────╮
    │ sample-hash    │ Calculate a hash based on small file segments       │
    ├────────────────┼─────────────────────────────────────────────────────┤
    │ sample-compare │ Compare files using sample-hash and other shortcuts │
    ╰────────────────┴─────────────────────────────────────────────────────╯

    Tabular data subcommands:
    ╭──────────────────┬───────────────────────────────────────────────╮
    │ eda              │ Exploratory Data Analysis on table-like files │
    ├──────────────────┼───────────────────────────────────────────────┤
    │ mcda             │ Multi-criteria Ranking for Decision Support   │
    ├──────────────────┼───────────────────────────────────────────────┤
    │ incremental-diff │ Diff large table-like files in chunks         │
    ╰──────────────────┴───────────────────────────────────────────────╯

    Media File subcommands:
    ╭────────────────┬────────────────────────────────────────────────────────╮
    │ media-check    │ Check video and audio files for corruption via ffmpeg  │
    ├────────────────┼────────────────────────────────────────────────────────┤
    │ process-ffmpeg │ Shrink video/audio to AV1/Opus format (.mkv, .mka)     │
    ├────────────────┼────────────────────────────────────────────────────────┤
    │ process-image  │ Shrink images by resizing and AV1 image format (.avif) │
    ╰────────────────┴────────────────────────────────────────────────────────╯

    Multi-database subcommands:
    ╭──────────────────┬────────────────────────╮
    │ merge-dbs        │ Merge SQLite databases │
    ├──────────────────┼────────────────────────┤
    │ copy-play-counts │ Copy play history      │
    ╰──────────────────┴────────────────────────╯

    Filesystem Database subcommands:
    ╭─────────────┬────────────────────────────────╮
    │ christen    │ Clean filenames                │
    ├─────────────┼────────────────────────────────┤
    │ disk-usage  │ Show disk usage                │
    ├─────────────┼────────────────────────────────┤
    │ mount-stats │ Show some relative mount stats │
    ├─────────────┼────────────────────────────────┤
    │ big-dirs    │ Show large folders             │
    ├─────────────┼────────────────────────────────┤
    │ search-db   │ Search a SQLite database       │
    ╰─────────────┴────────────────────────────────╯

    Media Database subcommands:
    ╭─────────────────┬─────────────────────────────────────────────────────────────╮
    │ block           │ Block a channel                                             │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ playlists       │ List stored playlists                                       │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ download        │ Download media                                              │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ download-status │ Show download status                                        │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ redownload      │ Re-download deleted/lost media                              │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ history         │ Show and manage playback history                            │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ history-add     │ Add history from paths                                      │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ stats           │ Show some event statistics (created, deleted, watched, etc) │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ search          │ Search captions / subtitles                                 │
    ├─────────────────┼─────────────────────────────────────────────────────────────┤
    │ optimize        │ Re-optimize database                                        │
    ╰─────────────────┴─────────────────────────────────────────────────────────────╯

    Playback subcommands:
    ╭────────────┬───────────────────────────────────────────────────╮
    │ watch      │ Watch / Listen                                    │
    ├────────────┼───────────────────────────────────────────────────┤
    │ now        │ Show what is currently playing                    │
    ├────────────┼───────────────────────────────────────────────────┤
    │ next       │ Play next file and optionally delete current file │
    ├────────────┼───────────────────────────────────────────────────┤
    │ stop       │ Stop all playback                                 │
    ├────────────┼───────────────────────────────────────────────────┤
    │ pause      │ Pause all playback                                │
    ├────────────┼───────────────────────────────────────────────────┤
    │ tabs-open  │ Open your tabs for the day                        │
    ├────────────┼───────────────────────────────────────────────────┤
    │ links-open │ Open links from link dbs                          │
    ├────────────┼───────────────────────────────────────────────────┤
    │ surf       │ Auto-load browser tabs in a streaming way (stdin) │
    ╰────────────┴───────────────────────────────────────────────────╯

    Database enrichment subcommands:
    ╭────────────────────┬────────────────────────────────────────────────────╮
    │ dedupe-db          │ Dedupe SQLite tables                               │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ dedupe-media       │ Dedupe similar media                               │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ merge-online-local │ Merge online and local data                        │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ mpv-watchlater     │ Import mpv watchlater files to history             │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ reddit-selftext    │ Copy selftext links to media table                 │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ tabs-shuffle       │ Randomize tabs.db a bit                            │
    ├────────────────────┼────────────────────────────────────────────────────┤
    │ pushshift          │ Convert pushshift data to reddit.db format (stdin) │
    ╰────────────────────┴────────────────────────────────────────────────────╯

    Update database subcommands:
    ╭────────────────┬─────────────────────────────────╮
    │ fs-update      │ Update local media              │
    ├────────────────┼─────────────────────────────────┤
    │ tube-update    │ Update online video media       │
    ├────────────────┼─────────────────────────────────┤
    │ web-update     │ Update open-directory media     │
    ├────────────────┼─────────────────────────────────┤
    │ gallery-update │ Update online gallery media     │
    ├────────────────┼─────────────────────────────────┤
    │ links-update   │ Update a link-scraping database │
    ├────────────────┼─────────────────────────────────┤
    │ reddit-update  │ Update reddit media             │
    ╰────────────────┴─────────────────────────────────╯

    Misc subcommands:
    ╭────────────────┬─────────────────────────────────────────╮
    │ export-text    │ Export HTML files from SQLite databases │
    ├────────────────┼─────────────────────────────────────────┤
    │ dedupe-czkawka │ Process czkawka diff output             │
    ╰────────────────┴─────────────────────────────────────────╯


</details>

## Examples

### Watch online media on your PC

    wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/mealtime.tw.db
    library watch mealtime.tw.db --random --duration 30m

### Listen to online media on a chromecast group

    wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/music.tl.db
    library listen music.tl.db -ct "House speakers" --random

### Hook into HackerNews

    wget https://github.com/chapmanjacobd/hn_mining/raw/main/hackernews_only_direct.tw.db
    library watch hackernews_only_direct.tw.db --random --ignore-errors

### Organize via separate databases

    library fsadd --audio audiobooks.db ./audiobooks/
    library fsadd --audio podcasts.db ./podcasts/ ./another/more/secret/podcasts_folder/

    # merge later if you want
    library merge-dbs --pk path -t playlists,media both.db audiobooks.db podcasts.db

    # or split
    library merge-dbs --pk path -t playlists,media audiobooks.db both.db -w 'path like "%/audiobooks/%"'
    library merge-dbs --pk path -t playlists,media podcasts.db both.db -w 'path like "%/podcasts%"'
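
To sanity-check a merge or split, you can compare row counts with the standard `sqlite3` CLI (a quick sketch using the databases above):

    sqlite3 audiobooks.db 'select count(*) from media'
    sqlite3 both.db 'select count(*) from media'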

## Guides

### Music alarm clock

<details><summary>via termux crontab</summary>

Wake up to your own music

    30 7 * * * library listen ./audio.db

Wake up to your own music _only when you are *not* home_ (computer on local IP)

    30 7 * * * timeout 0.4 nc -z 192.168.1.12 22 || library listen ./audio.db --random

Wake up to your own music on your Chromecast speaker group _only when you are home_

    30 7 * * * ssh 192.168.1.12 library listen ./audio.db --cast --cast-to "Bedroom pair"

</details>


### Browser Tabs

<details><summary>Visit websites on a schedule</summary>

`tabs` is a way to organize your visits to URLs that you want to remember every once in a while.

The main benefit of tabs is that you can save a large number of tabs (say 500 monthly tabs) and only the minimum
number needed to stay on schedule will open each day (500/30 ≈ 17). Seventeen tabs per day seems manageable--500 all at once does not.

The use-case of tabs is websites that you know are going to change: subreddits, games,
or tools that you want to use for a few minutes daily, weekly, monthly, quarterly, or yearly.

### 1. Add your websites

    library tabsadd tabs.db --frequency monthly --category fun \
        https://old.reddit.com/r/Showerthoughts/top/?sort=top&t=month \
        https://old.reddit.com/r/RedditDayOf/top/?sort=top&t=month

### 2. Add library tabs to cron

library tabs is meant to run **once per day**. Here is how you would configure it with `crontab`:

    45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db

Or with `systemd`:

    tee ~/.config/systemd/user/tabs.service
    [Unit]
    Description=xklb daily browser tabs

    [Service]
    Type=simple
    RemainAfterExit=no
    Environment="DISPLAY=:0"
    ExecStart="/usr/bin/fish" "-c" "lb tabs /home/xk/lb/tabs.db"

    tee ~/.config/systemd/user/tabs.timer
    [Unit]
    Description=xklb daily browser tabs timer

    [Timer]
    Persistent=yes
    OnCalendar=*-*-* 9:58

    [Install]
    WantedBy=timers.target

    systemctl --user daemon-reload
    systemctl --user enable --now tabs.timer
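
To confirm the timer is scheduled:

    systemctl --user list-timers tabs.timer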

You can also invoke tabs manually:

    library tabs tabs.db -L 1  # open one tab

Incremental surfing. 📈🏄 totally rad!

</details>

### Find large folders

<details><summary>Curate with library big-dirs</summary>

If you are looking for candidate folders for curation (i.e. you need space but don't want to buy another hard drive),
the big-dirs subcommand was written for that purpose:

    $ library big-dirs fs/d.db

You may filter by folder depth (similar to QDirStat or WizTree):

    $ library big-dirs --depth=3 audio.db

There is also a flag to prioritize folders which have many deleted files (for example, you delete songs you don't like--now you can see who wrote those songs and delete all their other songs...):

    $ library big-dirs --sort-groups-by deleted audio.db

Recently, this functionality has also been integrated into the watch/listen subcommands, so you can just do this:

    $ library watch --big-dirs ./my.db
    $ lb wt -B  # shorthand equivalent

</details>

### Backfill data

<details><summary>Backfill missing YouTube videos from the Internet Archive</summary>

```fish
for base in https://youtu.be/ http://youtu.be/ http://youtube.com/watch?v= https://youtube.com/watch?v= https://m.youtube.com/watch?v= http://www.youtube.com/watch?v= https://www.youtube.com/watch?v=
    sqlite3 video.db "
        update or ignore media
            set path = replace(path, '$base', 'https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/')
              , time_deleted = 0
        where time_deleted > 0
        and (path = webpath or path not in (select webpath from media))
        and path like '$base%'
    "
end
```
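
To see how many rows were redirected to the Wayback Machine, a quick check against the same `video.db`:

    sqlite3 video.db "select count(*) from media where path like 'https://web.archive.org/web/2oe_/%'"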

</details>

<details><summary>Backfill reddit databases with pushshift data</summary>

[https://github.com/chapmanjacobd/reddit_mining/](https://github.com/chapmanjacobd/reddit_mining/)

```fish
for reddit_db in ~/lb/reddit/*.db
    set subreddits (sqlite-utils $reddit_db 'select path from playlists' --tsv --no-headers | grep old.reddit.com | sed 's|https://old.reddit.com/r/\(.*\)/|\1|' | sed 's|https://old.reddit.com/user/\(.*\)/|u_\1|' | tr -d "\r")

    ~/github/xk/reddit_mining/links/
    for subreddit in $subreddits
        if not test -e "$subreddit.csv"
            echo "octosql -o csv \"select path,score,'https://old.reddit.com/r/$subreddit/' as playlist_path from `../reddit_links.parquet` where lower(playlist_path) = '$subreddit' order by score desc \" > $subreddit.csv"
        end
    end | parallel -j8

    for subreddit in $subreddits
        sqlite-utils upsert --pk path --alter --csv --detect-types $reddit_db media $subreddit.csv
    end

    library tubeadd --safe --ignore-errors --force $reddit_db (sqlite-utils --raw-lines $reddit_db 'select path from media')
end
```

</details>

### Datasette

<details><summary>Explore `library` databases in your browser</summary>

    pip install datasette
    datasette tv.db
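
Datasette serves on http://localhost:8001 by default, and you can pass several databases at once (assuming you have created both):

    datasette tv.db audio.db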

</details>

### Pipe to [mnamer](https://github.com/jkwill87/mnamer)

<details><summary>Rename poorly named files</summary>

    pip install mnamer
    mnamer --movie-directory ~/d/70_Now_Watching/ --episode-directory ~/d/70_Now_Watching/ \
        --no-overwrite -b (library watch -p fd -s 'path : McCloud')
    library fsadd ~/d/70_Now_Watching/

</details>

### Pipe to [lowcharts](https://github.com/juan-leon/lowcharts)

<details><summary>$ library watch -p f -col time_created | lowcharts timehist -w 80</summary>

    Matches: 445183.
    Each ∎ represents a count of 1896
    [2022-04-13 03:16:05] [151689] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    [2022-04-19 07:59:37] [ 16093] ∎∎∎∎∎∎∎∎
    [2022-04-25 12:43:09] [ 12019] ∎∎∎∎∎∎
    [2022-05-01 17:26:41] [ 48817] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    [2022-05-07 22:10:14] [ 36259] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    [2022-05-14 02:53:46] [  3942] ∎∎
    [2022-05-20 07:37:18] [  2371] ∎
    [2022-05-26 12:20:50] [   517]
    [2022-06-01 17:04:23] [  4845] ∎∎
    [2022-06-07 21:47:55] [  2340] ∎
    [2022-06-14 02:31:27] [   563]
    [2022-06-20 07:14:59] [ 13836] ∎∎∎∎∎∎∎
    [2022-06-26 11:58:32] [  1905] ∎
    [2022-07-02 16:42:04] [  1269]
    [2022-07-08 21:25:36] [  3062] ∎
    [2022-07-15 02:09:08] [  9192] ∎∎∎∎
    [2022-07-21 06:52:41] [ 11955] ∎∎∎∎∎∎
    [2022-07-27 11:36:13] [ 50938] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    [2022-08-02 16:19:45] [ 70973] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    [2022-08-08 21:03:17] [  2598] ∎

BTW, for some columns like time_deleted you'll need to specify a where clause so those rows aren't filtered out:

    $ library watch -p f -col time_deleted -w time_deleted'>'0 | lowcharts timehist -w 80

![video width](https://user-images.githubusercontent.com/7908073/184737808-b96fbe65-a1d9-43c2-b6b4-4bdfab592190.png)

![fps](https://user-images.githubusercontent.com/7908073/184738438-ee566a4b-2da0-4e6d-a4b3-9bfca036aa2a.png)

</details>

## Usage


### Create database subcommands

###### fs-add

<details><summary>Add local media</summary>

    $ library fs-add -h
    usage: library fs-add [(--video) | --audio | --image |  --text | --filesystem] DATABASE PATH ...

    The default database type is video:
        library fsadd tv.db ./tv/
        library fsadd --video tv.db ./tv/  # equivalent

    You can also create audio databases. Both audio and video use ffmpeg to read metadata:
        library fsadd --audio audio.db ./music/

    Image uses ExifTool:
        library fsadd --image image.db ./photos/

    Text will try to read files and save the contents into a searchable database:
        library fsadd --text text.db ./documents_and_books/

    Create a text database and scan with OCR and speech-recognition:
        library fsadd --text --ocr --speech-recognition ocr.db ./receipts_and_messages/

    Create a video database and read internal/external subtitle files into a searchable database:
        library fsadd --scan-subtitles tv.search.db ./tv/ ./movies/

    Decode media to check for corruption (slow):
        library fsadd --check-corrupt
        # See media-check command for full options

    Normally only relevant filetypes are included. You can scan all files with this flag:
        library fsadd --scan-all-files mixed.db ./tv-and-maybe-audio-only-files/
        # I use that with this to keep my folders organized:
        library watch -w 'video_count=0 and audio_count>=1' -pf mixed.db | parallel mv {} ~/d/82_Audiobooks/

    Remove path roots with --force
        library fsadd audio.db /mnt/d/Youtube/
        [/mnt/d/Youtube] Path does not exist

        library fsadd --force audio.db /mnt/d/Youtube/
        [/mnt/d/Youtube] Path does not exist
        [/mnt/d/Youtube] Building file list...
        [/mnt/d/Youtube] Marking 28932 orphaned metadata records as deleted

    If you run out of RAM, for example scanning large VR videos, you can lower the number of threads via --io-multiplier

        library fsadd vr.db --delete-unplayable --check-corrupt --full-scan-if-corrupt 15% --delete-corrupt 20% ./vr/ --io-multiplier 0.2

    Move files on import

        library fsadd audio.db --move ~/library/ ./added_folder/
        This will run destination paths through `library christen` and move files relative to the added folder root


</details>

###### tube-add

<details><summary>Add online video media (yt-dlp)</summary>

    $ library tube-add -h
    usage: library tube-add [--safe] [--extra] [--subs] [--auto-subs] DATABASE URLS ...

    Create a dl database / add links to an existing database

        library tubeadd dl.db https://www.youdl.com/c/BranchEducation/videos

    Add links from a line-delimited file

        cat ./my_yt_subscriptions.txt | library tubeadd reddit.db -

    Add metadata to links already in a database table

        library tubeadd --force reddit.db (sqlite-utils --raw-lines reddit.db 'select path from media')

    Fetch extra metadata:

        By default tubeadd will add media quickly, at the expense of fetching less metadata.
        If you plan on using `library download` then it doesn't make sense to use `--extra`.
        Downloading will add the extra metadata automatically to the database.
        You can always fetch more metadata later via tubeupdate:
        library tube-update tw.db --extra


</details>

###### web-add

<details><summary>Add open-directory media</summary>

    $ library web-add -h
    usage: library web-add [(--filesystem) | --video | --audio | --image | --text] DATABASE URL ...

    Scan open directories

        library web-add open_dir.db --video http://1.1.1.1/

    Check download size of all videos matching some criteria

        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'height<720' -E preview -pa

        path         count  download_duration                  size    avg_size
        ---------  -------  ----------------------------  ---------  ----------
        Aggregate     5694  2 years, 7 months and 5 days  724.4 GiB   130.3 MiB

    Download all videos matching some criteria

        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'height<720' -E preview

    Stream directly to mpv

        library watch open_dir.db

    Check videos before downloading

        library watch open_dir.db --online-media-only --loop --exit-code-confirm -i --action ask-keep -m 4  --start 35% --volume=0 -w 'height<720' -E preview

        Assuming you have bound in mpv input.conf a key to 'quit' and another key to 'quit 4',
        using the ask-keep action will mark a video as deleted when you 'quit 4' and it will mark a video as watched when you 'quit'.

        For example, here I bind "'" to "KEEP" and  "j" to "DELETE"

            ' quit
            j quit 4

        This is pretty intuitive after you use it a few times but writing this out I realize this might seem a bit opaque.
        Instead of using built-in post-actions (example above) you could also do something like
            `--cmd5 'echo {} >> keep.txt' --cmd6 'echo {} >> rejected.txt'`

        But you will still bind keys in mpv input.conf:

            k quit 5  # goes to keep.txt
            r quit 6  # goes to rejected.txt

    Download checked videos

        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'id in (select media_id from history)'

    View most recent files

        library fs example_dbs/web_add.image.db -u time_modified desc --cols path,width,height,size,time_modified -p -l 10
        path                                                                                                                      width    height       size  time_modified
        ----------------------------------------------------------------------------------------------------------------------  -------  --------  ---------  -----------------
        https://siliconpr0n.org/map/infineon/m7690-b1/single/infineon_m7690-b1_infosecdj_mz_nikon20x.jpg                           7066     10513   16.4 MiB  2 days ago, 20:54
        https://siliconpr0n.org/map/starchip/scf384g/single/starchip_scf384g_infosecdj_mz_nikon20x.jpg                            10804     10730   19.2 MiB  2 days ago, 15:31
        https://siliconpr0n.org/map/hp/2hpt20065-1-68k-core/single/hp_2hpt20065-1-68k-core_marmontel_mz_ms50x-1.25.jpg            28966     26816  192.2 MiB  4 days ago, 15:05
        https://siliconpr0n.org/map/hp/2hpt20065-1-68k-core/single/hp_2hpt20065-1-68k-core_marmontel_mz_ms20x-1.25.jpg            11840     10978   49.2 MiB  4 days ago, 15:04
        https://siliconpr0n.org/map/hp/2hpt20065-1/single/hp_2hpt20065-1_marmontel_mz_ms10x-1.25.jpg                              16457     14255  101.4 MiB  4 days ago, 15:03
        https://siliconpr0n.org/map/pervasive/e2213ps01e1/single/pervasive_e2213ps01e1_azonenberg_back_roi1_mit10x_rotated.jpg    18880     61836  136.8 MiB  6 days ago, 16:00
        https://siliconpr0n.org/map/pervasive/e2213ps01e/single/pervasive_e2213ps01e_azonenberg_back_mit5x_rotated.jpg            62208     30736  216.5 MiB  6 days ago, 15:57
        https://siliconpr0n.org/map/amd/am2964bpc/single/amd_am2964bpc_infosecdj_mz_lmplan10x.jpg                                 12809     11727   39.8 MiB  6 days ago, 10:28
        https://siliconpr0n.org/map/unknown/ks1804ir1/single/unknown_ks1804ir1_infosecdj_mz_lmplan10x.jpg                          6508      6707    8.4 MiB  6 days ago, 08:04
        https://siliconpr0n.org/map/amd/am2960dc-b/single/amd_am2960dc-b_infosecdj_mz_lmplan10x.jpg                               16434     15035   64.9 MiB  7 days ago, 19:01
        10 media (limited by --limit 10)



</details>

###### gallery-add

<details><summary>Add online gallery media (gallery-dl)</summary>

    $ library gallery-add -h
    usage: library gallery-add DATABASE URLS

    Add gallery_dl URLs to download later or periodically update

    If you have many URLs use stdin

        cat ./my-favorite-manhwa.txt | library galleryadd your.db --insert-only -


</details>

###### tabs-add

<details><summary>Create a tabs database; Add URLs</summary>

    $ library tabs-add -h
    usage: library tabs-add [--frequency daily weekly (monthly) quarterly yearly] [--no-sanitize] DATABASE URLS ...

    Adding one URL:

        library tabsadd -f daily tabs.db https://wiby.me/surprise/

        Depending on your shell you may need to escape the URL (add quotes)

        If you use Fish shell know that you can enable features to make pasting easier:
            set -U fish_features stderr-nocaret qmark-noglob regex-easyesc ampersand-nobg-in-token

        Also I recommend turning Ctrl+Backspace into a super-backspace for repeating similar commands with long args:
            echo 'bind \b backward-kill-bigword' >> ~/.config/fish/config.fish

    Importing from a line-delimited file:

        library tabsadd -f yearly -c reddit tabs.db (cat ~/mc/yearly-subreddit.cron)



</details>

###### links-add

<details><summary>Create a link-scraping database</summary>

    $ library links-add -h
    usage: library links-add DATABASE PATH ... [--case-sensitive] [--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]] [--selenium] [--manual] [--scroll] [--auto-pager] [--poke] [--chrome] [--local-html] [--file FILE]

    Database version of extract-links

    You can fine-tune what links get saved with --path/text/before/after-include/exclude.

        library links-add --path-include /video/

    Defaults to stop fetching

        After encountering ten pages with no new links:
        library links-add --stop-pages-no-new 10

        Some websites don't give an error when you try to access pages which don't exist.
        To compensate for this, the script will stop fetching once four consecutive pages contain neither new nor known links:
        library links-add --stop-pages-no-match 4

    Backfill fixed number of pages

        You can disable automatic stopping by any of the following:

        - Set `--backfill-pages` to the desired number of pages for the first run
        - Set `--fixed-pages` to _always_ fetch the desired number of pages

        If the website is supported by --auto-pager, data is fetched twice when using page iteration.
        As such, page iteration (--max-pages, --fixed-pages, etc) is disabled when using `--auto-pager`.

        You can unset --fixed-pages for all the playlists in your database by running this command:
        sqlite3 your.db "UPDATE playlists SET extractor_config = json_replace(extractor_config, '$.fixed_pages', null)"

    To use "&p=1" instead of "&page=1"

        library links-add --page-key p

        By default the script will attempt to modify each given URL with "&page=1".

    Single page

        If `--fixed-pages` is 1 and --page-start is not set, then the URL will not be modified.

        library links-add --fixed-pages=1
        Loading page https://site/path

        library links-add --fixed-pages=1 --page-start 99
        Loading page https://site/path?page=99

    Reverse chronological paging

        library links-add --max-pages 10
        library links-add --fixed-pages (overrides --max-pages and --stop-known but you can still stop early via --stop-link ie. 429 page)

    Chronological paging

        library links-add --page-start 100 --page-step 1

        library links-add --page-start 100 --page-step=-1 --fixed-pages=5  # go backwards

        # TODO: store previous page id (max of sliding window)

    Jump pages

        Some pages don't count page numbers but instead count items like messages or forum posts. You can iterate through them like this:

        library links-add --page-key start --page-start 0 --page-step 50

        which translates to
        &start=0    first page
        &start=50   second page
        &start=100  third page

    Page folders

        Some websites use paths instead of query parameters. In this case make sure the URL provided includes that information with a matching --page-key

        library links-add --page-key page https://website/page/1/
        library links-add --page-key article https://website/article/1/

    Import links from args

        library links-add --no-extract links.db (cb)

    Import lines from stdin

        cb | lb linksdb example_dbs/links.db --skip-extract -

    Other Examples

        library links-add links.db https://video/site/ --path-include /video/

        library links-add links.db https://loginsite/ --path-include /article/ --cookies-from-browser firefox
        library links-add links.db https://loginsite/ --path-include /article/ --cookies-from-browser chrome

        library links-add --path-include viewtopic.php --cookies-from-browser firefox \
        --page-key start --page-start 0 --page-step 50 --fixed-pages 14 --stop-pages-no-match 1 \
        plab.db https://plab/forum/tracker.php?o=(string replace ' ' \n -- 1 4 7 10 15)&s=2&tm=-1&f=(string replace ' ' \n -- 1670 1768 60 1671 1644 1672 1111 508 555 1112 1718 1143 1717 1851 1713 1712 1775 1674 902 1675 36 1830 1803 1831 1741 1676 1677 1780 1110 1124 1784 1769 1793 1797 1804 1819 1825 1836 1842 1846 1857 1861 1867 1451 1788 1789 1792 1798 1805 1820 1826 1837 1843 1847 1856 1862 1868 284 1853 1823 1800 1801 1719 997 1818 1849 1711 1791 1762)


</details>

###### site-add

<details><summary>Auto-scrape website data to SQLite</summary>

    $ library site-add -h
    usage: library site-add DATABASE PATH ... [--auto-pager] [--poke] [--local-html] [--file FILE]

    Extract data from website requests to a database

        library siteadd jobs.st.db --poke https://hk.jobsdb.com/hk/search-jobs/python/

    Requires selenium-wire
    Requires xmltodict when using --extract-xml

        pip install selenium-wire xmltodict

    Run with `-vv` to see and interact with the browser


</details>

###### reddit-add

<details><summary>Create a reddit database; Add subreddits</summary>

    $ library reddit-add -h
    usage: library reddit-add [--lookback N_DAYS] [--praw-site bot1] DATABASE URLS ...

    Fetch data for redditors and reddits:

        library redditadd interesting.db https://old.reddit.com/r/coolgithubprojects/ https://old.reddit.com/user/Diastro

    If you have a file with a list of subreddits you can do this:

        library redditadd 96_Weird_History.db --subreddits (cat ~/mc/96_Weird_History-reddit.txt)

    Likewise for redditors:

        library redditadd shadow_banned.db --redditors (cat ~/mc/shadow_banned.txt)

    Note that reddit's API is limited to 1000 posts and it usually doesn't go back very far historically.
    Also, it may be the case that reddit's API (praw) will stop working in the near future. For both of these problems
    my suggestion is to use pushshift data.
    You can find more info here: https://github.com/chapmanjacobd/reddit_mining#how-was-this-made


</details>

###### hn-add

<details><summary>Create / Update a Hacker News database</summary>

    $ library hn-add -h
    usage: library hn-add [--oldest] DATABASE

    Fetch latest stories first:

        library hnadd hn.db -v
        Fetching 154873 items (33212696 to 33367569)
        Saving comment 33367568
        Saving comment 33367543
        Saving comment 33367564
        ...

    Fetch oldest stories first:

        library hnadd --oldest hn.db


</details>

###### substack

<details><summary>Backup substack articles</summary>

    $ library substack -h
    usage: library substack DATABASE PATH ...

    Backup substack articles


</details>

###### tildes

<details><summary>Backup tildes comments and topics</summary>

    $ library tildes -h
    usage: library tildes DATABASE USER

    Backup tildes.net user comments and topics

        library tildes tildes.net.db xk3

    Without cookies you are limited to the first page. You can use cookies like this:
        https://github.com/rotemdan/ExportCookies
        library tildes tildes.net.db xk3 --cookies ~/Downloads/cookies-tildes-net.txt


</details>

###### places-import

<details><summary>Import places of interest (POIs)</summary>

    $ library places-import -h
    usage: library places-import DATABASE PATH ...

    Load POIs from Google Maps Google Takeout


</details>

###### row-add

<details><summary>Add arbitrary data to SQLite</summary>

    $ library row-add -h
    usage: library row-add DATABASE [--table-name TABLE_NAME]

    Add a row to sqlite

        library row-add t.db --test_b 1 --test-a 2

        ### media (1 rows)
        |   test_b |   test_a |
        |----------|----------|
        |        1 |        2 |


</details>

### Text subcommands

###### cluster-sort

<details><summary>Sort text and images by similarity</summary>

    $ library cluster-sort -h
    usage: library cluster-sort [input_path | stdin] [output_path | stdout]

    Group lines of text into sorted output

        echo 'red apple
        broccoli
        yellow
        green
        orange apple
        red apple' | library cluster-sort

        orange apple
        red apple
        red apple
        broccoli
        green
        yellow

    Show the groupings

        echo 'red apple
        broccoli
        yellow
        green
        orange apple
        red apple' | library cluster-sort --print-groups

        [
            {'grouped_paths': ['orange apple', 'red apple', 'red apple']},
            {'grouped_paths': ['broccoli', 'green', 'yellow']}
        ]

    Auto-sort images into directories

        echo 'image1.jpg
        image2.jpg
        image3.jpg' | library cluster-sort --image --move-groups

    Print similar paths

        library fs 0day.db -pa --cluster --print-groups



</details>

###### extract-links

<details><summary>Extract inner links from lists of web links</summary>

    $ library extract-links -h
    usage: library extract-links PATH ... [--case-sensitive] [--scroll] [--download] [--verbose] [--local-html] [--file FILE] [--path-include ...] [--text-include ...] [--after-include ...] [--before-include ...] [--path-exclude ...] [--text-exclude ...] [--after-exclude ...] [--before-exclude ...]

    Extract links from within local HTML fragments, files, or remote pages; filtering on link text and nearby plain-text

        library links https://en.wikipedia.org/wiki/List_of_bacon_dishes --path-include https://en.wikipedia.org/wiki/ --after-include famous
        https://en.wikipedia.org/wiki/Omelette

    Read from local clipboard and filter out links based on nearby plain text:

        library links --local-html (cb -t text/html | psub) --after-exclude paranormal spooky horror podcast tech fantasy supernatural lecture sport
        # note: the equivalent BASH-ism is <(xclip -selection clipboard -t text/html)

    Run with `-vv` to see the browser


</details>

###### extract-text

<details><summary>Extract human text from lists of web links</summary>

    $ library extract-text -h
    usage: library extract-text PATH ... [--skip-links]

    Sorting suggestions

        lb extract-text --skip-links --local-file (cb -t text/html | psub) | lb cs --groups | jq -r '.[] | .grouped_paths | "\n" + join("\n")'


</details>

###### markdown-links

<details><summary>Extract titles from lists of web links</summary>

    $ library markdown-links -h
    usage: library markdown-links URL ... [--cookies COOKIES] [--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]] [--firefox] [--chrome] [--allow-insecure] [--scroll] [--manual] [--auto-pager] [--poke] [--file FILE]

    Convert URLs into Markdown links with page titles filled in

        $ lb markdown-links https://www.youtube.com/watch?v=IgZDDW-NXDE
        [Work For Peace](https://www.youtube.com/watch?v=IgZDDW-NXDE)


</details>

###### nouns

<details><summary>Unstructured text -> compound nouns (stdin)</summary>

    $ library nouns -h
    usage: library nouns (stdin)

    Extract compound nouns and phrases from unstructured mixed HTML plain text

        xsv select text hn_comment_202210242109.csv | library nouns | sort | uniq -c | sort --numeric-sort


</details>

### Folder subcommands

###### merge-folders

<details><summary>Merge two or more file trees</summary>

    $ library merge-folders -h
    usage: library merge-folders [--replace] [--no-replace] [--simulate] SOURCES ... DESTINATION

    Merge multiple folders with the same file tree into a single folder.

    https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src

    Trumps are new or replaced files from an earlier source which now conflict with a later source.
    If you only have one source then the count of trumps will always be zero.
    The count of conflicts also includes trumps.
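
    A simulated merge (hypothetical paths; flags from the usage line above):

        library merge-folders --simulate ./tree1/ ./tree2/ ./merged/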


</details>

###### relmv

<details><summary>Move files preserving parent folder hierarchy</summary>

    $ library relmv -h
    usage: library relmv [--simulate] SOURCE ... DEST

    Move files/folders without losing hierarchy metadata

    Move fresh music to your phone every Sunday:

        # move last week music back to their source folders
        library mv /mnt/d/sync/weekly/ /mnt/d/check/audio/

        # move new music for this week
        library relmv (
            library listen audio.db --local-media-only --where 'play_count=0' --random -L 600 -p f
        ) /mnt/d/sync/weekly/


</details>

###### mv-list

<details><summary>Find specific folders to move to different disks</summary>

    $ library mv-list -h
    usage: library mv-list [--limit LIMIT] [--lower LOWER] [--upper UPPER] MOUNT_POINT DATABASE

    Free up space on a specific disk. Find candidates for moving data to a different mount point


    The program takes a mount point and an xklb database file. If you don't have a database file you can create one like this:

        library fsadd --filesystem d.db ~/d/

    But this should definitely also work with xklb audio and video databases:

        library mv-list /mnt/d/ video.db

    The program will print a table with a sorted list of folders which are good candidates for moving.
    Candidates are determined by how many files are in the folder (so you don't spend hours waiting for folders with millions of tiny files to copy over).
    The default is 4 to 4000--but it can be adjusted via the --lower and --upper flags.

        ...
        ├──────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
        │ 4.0 GB   │       7 │ /mnt/d/71_Mealtime_Videos/unsorted/Miguel_4K/                                                                 │
        ├──────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
        │ 5.7 GB   │      10 │ /mnt/d/71_Mealtime_Videos/unsorted/Bollywood_Premium/                                                         │
        ├──────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
        │ 2.3 GB   │       4 │ /mnt/d/71_Mealtime_Videos/chief_wiggum/                                                                       │
        ╘══════════╧═════════╧═══════════════════════════════════════════════════════════════════════════════════════════════════════════════╛
        6702 other folders not shown

        ██╗███╗░░██╗░██████╗████████╗██████╗░██╗░░░██╗░█████╗░████████╗██╗░█████╗░███╗░░██╗░██████╗
        ██║████╗░██║██╔════╝╚══██╔══╝██╔══██╗██║░░░██║██╔══██╗╚══██╔══╝██║██╔══██╗████╗░██║██╔════╝
        ██║██╔██╗██║╚█████╗░░░░██║░░░██████╔╝██║░░░██║██║░░╚═╝░░░██║░░░██║██║░░██║██╔██╗██║╚█████╗░
        ██║██║╚████║░╚═══██╗░░░██║░░░██╔══██╗██║░░░██║██║░░██╗░░░██║░░░██║██║░░██║██║╚████║░╚═══██╗
        ██║██║░╚███║██████╔╝░░░██║░░░██║░░██║╚██████╔╝╚█████╔╝░░░██║░░░██║╚█████╔╝██║░╚███║██████╔╝
        ╚═╝╚═╝░░╚══╝╚═════╝░░░░╚═╝░░░╚═╝░░╚═╝░╚═════╝░░╚════╝░░░░╚═╝░░░╚═╝░╚════╝░╚═╝░░╚══╝╚═════╝░

        Type "done" when finished
        Type "more" to see more files
        Paste a folder (and press enter) to toggle selection
        Type "*" to select all files in the most recently printed table

    Then it will give you a prompt:

        Paste a path:

    Wherein you can copy and paste paths you want to move from the table and the program will keep track for you.

        Paste a path: /mnt/d/75_MovieQueue/720p/s11/
        26 selected paths: 162.1 GB ; future free space: 486.9 GB

    You can also press the up arrow or paste it again to remove it from the list:

        Paste a path: /mnt/d/75_MovieQueue/720p/s11/
        25 selected paths: 159.9 GB ; future free space: 484.7 GB

    After you are done selecting folders you can press Ctrl+D and it will save the list to a tmp file:

        Paste a path: done

            Folder list saved to /tmp/tmp7x_75l8. You may want to use the following command to move files to an EMPTY folder target:

                rsync -a --info=progress2 --no-inc-recursive --remove-source-files --files-from=/tmp/tmp7x_75l8 -r --relative -vv --dry-run / jim:/free/real/estate/


</details>

###### scatter

<details><summary>Scatter files between folders or disks</summary>

    $ library scatter -h
    usage: library scatter [--limit LIMIT] [--policy POLICY] [--sort SORT] --targets TARGETS DATABASE RELATIVE_PATH ...

    Balance files across filesystem folder trees or multiple devices (mostly useful for mergerfs)

    Scatter filesystem folder trees (without mountpoints; limited functionality; good for balancing fs inodes)

        library scatter scatter.db /test/{0,1,2,3,4,5,6,7,8,9}

    Reduce number of files per folder (creates more folders)

        library scatter scatter.db --max-files-per-folder 16000 /test/{0,1,2,3,4,5,6,7,8,9}

    Multi-device re-bin: balance by size

        library scatter -m /mnt/d1:/mnt/d2:/mnt/d3:/mnt/d4/:/mnt/d5:/mnt/d6:/mnt/d7 fs.db subfolder/of/mergerfs/mnt
        Current path distribution:
        ╒═════════╤══════════════╤══════════════╤═══════════════╤════════════════╤═════════════════╤════════════════╕
        │ mount   │   file_count │ total_size   │ median_size   │ time_created   │ time_modified   │ time_downloaded│
        ╞═════════╪══════════════╪══════════════╪═══════════════╪════════════════╪═════════════════╪════════════════╡
        │ /mnt/d1 │        12793 │ 169.5 GB     │ 4.5 MB        │ Jan 27         │ Jul 19 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d2 │        13226 │ 177.9 GB     │ 4.7 MB        │ Jan 27         │ Jul 19 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d3 │            1 │ 717.6 kB     │ 717.6 kB      │ Jan 31         │ Jul 18 2022     │ yesterday      │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d4 │           82 │ 1.5 GB       │ 12.5 MB       │ Jan 31         │ Apr 22 2022     │ yesterday      │
        ╘═════════╧══════════════╧══════════════╧═══════════════╧════════════════╧═════════════════╧════════════════╛

        Simulated path distribution:
        5845 files should be moved
        20257 files should not be moved
        ╒═════════╤══════════════╤══════════════╤═══════════════╤════════════════╤═════════════════╤════════════════╕
        │ mount   │   file_count │ total_size   │ median_size   │ time_created   │ time_modified   │ time_downloaded│
        ╞═════════╪══════════════╪══════════════╪═══════════════╪════════════════╪═════════════════╪════════════════╡
        │ /mnt/d1 │         9989 │ 46.0 GB      │ 2.4 MB        │ Jan 27         │ Jul 19 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d2 │        10185 │ 46.0 GB      │ 2.4 MB        │ Jan 27         │ Jul 19 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d3 │         1186 │ 53.6 GB      │ 30.8 MB       │ Jan 27         │ Apr 07 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d4 │         1216 │ 49.5 GB      │ 29.5 MB       │ Jan 27         │ Apr 07 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d5 │         1146 │ 53.0 GB      │ 30.9 MB       │ Jan 27         │ Apr 07 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d6 │         1198 │ 48.8 GB      │ 30.6 MB       │ Jan 27         │ Apr 07 2022     │ Jan 31         │
        ├─────────┼──────────────┼──────────────┼───────────────┼────────────────┼─────────────────┼────────────────┤
        │ /mnt/d7 │         1182 │ 52.0 GB      │ 30.9 MB       │ Jan 27         │ Apr 07 2022     │ Jan 31         │
        ╘═════════╧══════════════╧══════════════╧═══════════════╧════════════════╧═════════════════╧════════════════╛
        ### Move 1182 files to /mnt/d7 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpmr1628ij / /mnt/d7
        ### Move 1198 files to /mnt/d6 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmp9yd75f6j / /mnt/d6
        ### Move 1146 files to /mnt/d5 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpfrj141jj / /mnt/d5
        ### Move 1185 files to /mnt/d3 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpqh2euc8n / /mnt/d3
        ### Move 1134 files to /mnt/d4 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmphzb0gj92 / /mnt/d4

    Multi-device re-bin: balance device inodes for specific subfolder

        library scatter -m /mnt/d1:/mnt/d2 fs.db subfolder --group count --sort 'size desc'

    Multi-device re-bin: only consider the most recent 100 files

        library scatter -m /mnt/d1:/mnt/d2 -l 100 -s 'time_modified desc' fs.db /

    Multi-device re-bin: empty out a disk (/mnt/d2) into many other disks (/mnt/d1, /mnt/d3, and /mnt/d4)

        library scatter fs.db -m /mnt/d1:/mnt/d3:/mnt/d4 /mnt/d2

    This tool is intended for local use. If transferring many small files across the network something like
    [fpart](https://github.com/martymac/fpart) or [fpsync](https://www.fpart.org/fpsync/) will be better.


</details>

### File subcommands

###### sample-hash

<details><summary>Calculate a hash based on small file segments</summary>

    $ library sample-hash -h
    usage: library sample-hash [--threads 10] [--chunk-size BYTES] [--gap BYTES OR 0.0-1.0*FILESIZE] PATH ...

    Calculate hashes for large files by reading only small segments of each file

        library sample-hash ./my_file.mkv

    The threads flag seems to be faster for rotational media but slower on SSDs


</details>

###### sample-compare

<details><summary>Compare files using sample-hash and other shortcuts</summary>

    $ library sample-compare -h
    usage: library sample-compare [--threads 10] [--chunk-size BYTES] [--gap BYTES OR 0.0-1.0*FILESIZE] PATH ...

    Convenience subcommand to compare multiple files using sample-hash
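
    For example, to compare two suspected duplicates (hypothetical paths):

        library sample-compare ./copy1.mkv ./copy2.mkv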


</details>

### Tabular data subcommands

###### eda

<details><summary>Exploratory Data Analysis on table-like files</summary>

    $ library eda -h
    usage: library eda PATH ... [--table TABLE] [--start-row START_ROW] [--end-row END_ROW] [--repl]

    Perform Exploratory Data Analysis (EDA) on one or more files

    Only 20,000 rows per file are loaded for performance purposes. Set `--end-row inf` to read all the rows and/or run out of RAM.
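
    For example, with a local CSV (hypothetical file):

        library eda ~/storage.csv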


</details>

###### mcda

<details><summary>Multi-criteria Ranking for Decision Support</summary>

    $ library mcda -h
    usage: library mcda PATH ... [--table TABLE] [--start-row START_ROW] [--end-row END_ROW]

    Perform Multiple Criteria Decision Analysis (MCDA) on one or more files

    Only 20,000 rows per file are loaded for performance purposes. Set `--end-row inf` to read all the rows and/or run out of RAM.

    $ library mcda ~/storage.csv --minimize price --ignore warranty

        ### Goals
        #### Maximize
        - size
        #### Minimize
        - price

        |    |   price |   size |   warranty |   TOPSIS |      MABAC |   SPOTIS |   BORDA |
        |----|---------|--------|------------|----------|------------|----------|---------|
        |  0 |     359 |     36 |          5 | 0.769153 |  0.348907  | 0.230847 | 7.65109 |
        |  1 |     453 |     40 |          2 | 0.419921 |  0.0124531 | 0.567301 | 8.00032 |
        |  2 |     519 |     44 |          2 | 0.230847 | -0.189399  | 0.769153 | 8.1894  |

    $ library mcda ~/storage.csv --ignore warranty

        ### Goals
        #### Maximize
        - price
        - size

        |    |   price |   size |   warranty |   TOPSIS |     MABAC |   SPOTIS |   BORDA |
        |----|---------|--------|------------|----------|-----------|----------|---------|
        |  2 |     519 |     44 |          2 | 1        |  0.536587 | 0        | 7.46341 |
        |  1 |     453 |     40 |          2 | 0.580079 |  0.103888 | 0.432699 | 7.88333 |
        |  0 |     359 |     36 |          5 | 0        | -0.463413 | 1        | 8.46341 |

    $ library mcda ~/storage.csv --minimize price --ignore warranty

        ### Goals
        #### Maximize
        - size
        #### Minimize
        - price

        |    |   price |   size |   warranty |   TOPSIS |      MABAC |   SPOTIS |   BORDA |
        |----|---------|--------|------------|----------|------------|----------|---------|
        |  0 |     359 |     36 |          5 | 0.769153 |  0.348907  | 0.230847 | 7.65109 |
        |  1 |     453 |     40 |          2 | 0.419921 |  0.0124531 | 0.567301 | 8.00032 |
        |  2 |     519 |     44 |          2 | 0.230847 | -0.189399  | 0.769153 | 8.1894  |

    It also works with HTTP/GCS/S3 URLs:

    $ library mcda https://en.wikipedia.org/wiki/List_of_Academy_Award-winning_films --clean --minimize Year

        ### Goals

        #### Maximize

        - Nominations
        - Awards

        #### Minimize

        - Year

        |      | Film                                                                    |   Year |   Awards |   Nominations |      TOPSIS |    MABAC |      SPOTIS |   BORDA |
        |------|-------------------------------------------------------------------------|--------|----------|---------------|-------------|----------|-------------|---------|
        |  378 | Titanic                                                                 |   1997 |       11 |            14 | 0.999993    | 1.38014  | 4.85378e-06 | 4116.62 |
        |  868 | Ben-Hur                                                                 |   1959 |       11 |            12 | 0.902148    | 1.30871  | 0.0714303   | 4116.72 |
        |  296 | The Lord of the Rings: The Return of the King                           |   2003 |       11 |            11 | 0.8558      | 1.27299  | 0.107147    | 4116.76 |
        | 1341 | West Side Story                                                         |   1961 |       10 |            11 | 0.837716    | 1.22754  | 0.152599    | 4116.78 |
        |  389 | The English Patient                                                     |   1996 |        9 |            12 | 0.836725    | 1.2178   | 0.162341    | 4116.78 |
        | 1007 | Gone with the Wind                                                      |   1939 |        8 |            13 | 0.807086    | 1.20806  | 0.172078    | 4116.81 |
        |  990 | From Here to Eternity                                                   |   1953 |        8 |            13 | 0.807086    | 1.20806  | 0.172079    | 4116.81 |
        | 1167 | On the Waterfront                                                       |   1954 |        8 |            12 | 0.785       | 1.17235  | 0.207793    | 4116.83 |
        | 1145 | My Fair Lady                                                            |   1964 |        8 |            12 | 0.785       | 1.17235  | 0.207793    | 4116.83 |
        |  591 | Gandhi                                                                  |   1982 |        8 |            11 | 0.755312    | 1.13663  | 0.243509    | 4116.86 |


</details>

###### incremental-diff

<details><summary>Diff large table-like files in chunks</summary>

    $ library incremental-diff -h
    usage: library incremental-diff PATH1 PATH2 [--join-keys JOIN_KEYS] [--table1 TABLE1] [--table2 TABLE2] [--table1-index TABLE1_INDEX] [--table2-index TABLE2_INDEX] [--start-row START_ROW] [--batch-size BATCH_SIZE]

    Inspect data differences incrementally to quickly see how two files differ.

    Data (PATH1, PATH2) can be two files of different formats (CSV, Excel) or even the same file with different tables.

    If files are unsorted you may need to use `--join-keys id,name` to specify ID columns. Rows that have the same ID will then be compared.
    If you are comparing SQLITE files you may be able to use `--sort id,name` to achieve the same effect.

    To diff everything at once run with `--batch-size inf`
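
    The core join-then-compare idea, sketched with pandas (illustrative only; assumes CSV
    inputs and an `id` join key):

        import pandas as pd

        left = pd.read_csv("file1.csv")
        right = pd.read_csv("file2.csv")
        merged = left.merge(right, on=["id"], how="outer",
                            suffixes=("_1", "_2"), indicator=True)
        print(merged[merged["_merge"] != "both"])  # rows that exist in only one file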


</details>

### Media File subcommands

###### media-check

<details><summary>Check video and audio files for corruption via ffmpeg</summary>

    $ library media-check -h
    usage: library media-check [--chunk-size SECONDS] [--gap SECONDS OR 0.0-1.0*DURATION] [--delete-corrupt >0-100] [--full-scan] [--audio-scan] PATH ...

    By default, decodes 0.5 seconds per 10% of each file

        library media-check ./video.mp4
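
    Roughly speaking, each check asks ffmpeg to decode a short segment and treats any decoder
    error as corruption. A hedged sketch of one such check (an illustration of the idea, not
    the exact command media-check runs):

        import subprocess

        def segment_ok(path, start_seconds, duration=0.5):
            """True if ffmpeg decodes the segment without reporting errors."""
            proc = subprocess.run(
                ["ffmpeg", "-nostdin", "-v", "error",
                 "-ss", str(start_seconds), "-t", str(duration),
                 "-i", path, "-f", "null", "-"],
                capture_output=True, text=True,
            )
            return proc.returncode == 0 and not proc.stderr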

    Decode all the frames of each file to evaluate how corrupt it is
    (scantime is very slow; about 150 seconds for an hour-long file)

        library media-check --full-scan ./video.mp4

    Decode all the packets of each file to evaluate how corrupt it is
    (scantime is about one second per file but only accurate for formats where 1 packet == 1 frame)

        library media-check --full-scan --gap 0 ./video.mp4

    Decode all audio of each file to evaluate how corrupt it is
    (scantime is about four seconds per file)

        library media-check --full-scan --audio ./video.mp4

    Decode at least one frame at the start and end of each file to evaluate how corrupt it is
    (scantime is about one second per file)

        library media-check --chunk-size 5% --gap 99.9% ./video.mp4

    Decode 3s every 5% of a file to evaluate how corrupt it is
    (scantime is about three seconds per file)

        library media-check --chunk-size 3 --gap 5% ./video.mp4

    Delete the file if 20 percent or more of checks fail

        library media-check --delete-corrupt 20% ./video.mp4

    To scan a large folder use `fsadd`. I recommend something like this two-stage approach:

        library fsadd --delete-unplayable --check-corrupt --chunk-size 5% tmp.db ./video/ ./folders/
        library media-check (library fs tmp.db -w 'corruption>15' -pf) --full-scan --delete-corrupt 25%

    The above can now be done in one command via `--full-scan-if-corrupt`:

        library fsadd --delete-unplayable --check-corrupt --chunk-size 5% tmp.db ./video/ ./folders/ --full-scan-if-corrupt 15% --delete-corrupt 25%

    Corruption stats

        library fs tmp.db -w 'corruption>15' -pa
        path         count  duration             avg_duration         size    avg_size
        ---------  -------  -------------------  --------------  ---------  ----------
        Aggregate      907  15 days and 9 hours  24 minutes      130.6 GiB   147.4 MiB

    Corruption graph

        sqlite --raw-lines tmp.db 'select corruption from media' | lowcharts hist --min 10 --intervals 10

        Samples = 931; Min = 10.0; Max = 100.0
        Average = 39.1; Variance = 1053.103; STD = 32.452
        each ∎ represents a count of 6
        [ 10.0 ..  19.0] [561] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
        [ 19.0 ..  28.0] [ 69] ∎∎∎∎∎∎∎∎∎∎∎
        [ 28.0 ..  37.0] [ 33] ∎∎∎∎∎
        [ 37.0 ..  46.0] [ 18] ∎∎∎
        [ 46.0 ..  55.0] [ 14] ∎∎
        [ 55.0 ..  64.0] [ 12] ∎∎
        [ 64.0 ..  73.0] [ 15] ∎∎
        [ 73.0 ..  82.0] [ 18] ∎∎∎
        [ 82.0 ..  91.0] [ 50] ∎∎∎∎∎∎∎∎
        [ 91.0 .. 100.0] [141] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎


</details>

###### process-ffmpeg

<details><summary>Shrink video/audio to AV1/Opus format (.mkv, .mka)</summary>

    $ library process-ffmpeg -h
    usage: library process-ffmpeg PATH ... [--always-split] [--split-longer-than DURATION] [--min-split-segment SECONDS] [--simulate]

    Transcode video to AV1 (resized to fit within 1440x960px) and/or audio to Opus to save space

    Convert audio to Opus. Optionally split up long tracks into multiple files.

        fd -tf -eDTS -eAAC -eWAV -eAIF -eAIFF -eFLAC -eM4A -eMP3 -eOGG -eMP4 -eWMA -j4 -x library process --audio

    Use --always-split to _always_ split files if silence is detected

        library process-audio --always-split audiobook.m4a

    Use --split-longer-than to _only_ detect silence for files in excess of a specific duration

        library process-audio --split-longer-than 36mins audiobook.m4b audiobook2.mp3
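
    For video, the conversion is roughly equivalent to an ffmpeg invocation like this sketch
    (the codec and quality flags here are illustrative assumptions, not the subcommand's exact
    arguments):

        import subprocess

        def shrink_video(src, dst="out.mkv"):
            """Transcode to AV1 video (capped at 1440x960, never upscaled) plus Opus audio."""
            subprocess.run(
                ["ffmpeg", "-i", src,
                 "-vf", "scale='min(1440,iw)':'min(960,ih)':force_original_aspect_ratio=decrease:force_divisible_by=2",
                 "-c:v", "libsvtav1", "-crf", "40",
                 "-c:a", "libopus", "-b:a", "96k",
                 dst],
                check=True,
            )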


</details>

###### process-image

<details><summary>Shrink images by resizing and AV1 image format (.avif)</summary>

    $ library process-image -h
    usage: library process-image PATH ...

    Resize images to max 2400x2400px and format AVIF to save space
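
    Equivalent in spirit to this Pillow sketch (assumes a Pillow build with AVIF support, e.g.
    via pillow-avif-plugin; the quality setting is illustrative):

        from PIL import Image

        def shrink_image(src, dst="out.avif"):
            """Downscale to fit within 2400x2400 (never upscales) and save as AVIF."""
            with Image.open(src) as im:
                im.thumbnail((2400, 2400))  # in-place, preserves aspect ratio
                im.save(dst, quality=60)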


</details>

### Multi-database subcommands

###### merge-dbs

<details><summary>Merge SQLITE databases</summary>

    $ library merge-dbs -h
    usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]

    Merge-DBs will insert new rows from source dbs to target db, table by table. If primary key(s) are provided,
    and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row
    replacing all existing fields.

    Upsert mode will update each matching PK row such that if a source row has a NULL field and
    the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.

    Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db
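
    The difference between the modes boils down to SQL along these lines (a sketch with a
    hypothetical `media` table keyed on `path`; not the library's actual code):

        import sqlite3

        con = sqlite3.connect("dest.db")
        con.execute("CREATE TABLE IF NOT EXISTS media (path TEXT PRIMARY KEY, title TEXT, duration INTEGER)")
        # Upsert: COALESCE keeps the destination value wherever the source value is NULL
        con.execute(
            """
            INSERT INTO media (path, title, duration) VALUES (?, ?, ?)
            ON CONFLICT (path) DO UPDATE SET
                title    = COALESCE(excluded.title, media.title),
                duration = COALESCE(excluded.duration, media.duration)
            """,
            ("/mnt/d/video.mkv", None, 360),
        )
        # The default (replace) mode is like INSERT OR REPLACE: the source row wins entirely;
        # ignore mode (--only-new-rows) is like INSERT OR IGNORE.
        con.commit()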

    Test first by using temp databases as the destination db.
    Try out different modes / flags until you are satisfied with the behavior of the program

        library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db

    Merge database data and tables

        library merge-dbs --upsert --pk path video.db tv.db movies.db
        library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path --skip-column id audio-fts.db audio.db

        library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db
        library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v

    To skip copying primary-keys from the source table(s) use --business-keys instead of --primary-keys

    Split DBs using --where

        library merge-dbs --pk path specific-site.db big.db -v --only-new-rows -t media,playlists -w 'path like "https://specific-site%"'


</details>

###### copy-play-counts

<details><summary>Copy play history</summary>

    $ library copy-play-counts -h
    usage: library copy-play-counts DEST_DB SOURCE_DB ... [--source-prefix x] [--target-prefix y]

    Copy play count information between databases

        library copy-play-counts audio.db phone.db --source-prefix /storage/6E7B-7DCE/d --target-prefix /mnt/d


</details>

### Filesystem Database subcommands

###### christen

<details><summary>Clean filenames</summary>

    $ library christen -h
    usage: library christen DATABASE [--run]

    Rename files to be somewhat normalized

    Default mode is simulate

        library christen fs.db

    To actually do stuff use the run flag

        library christen audio.db --run

    You can optionally replace all the spaces in your filenames with dots

        library christen --dot-space video.db
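
    As a rough illustration of the kind of normalization involved (the actual rules christen
    applies are more involved; the regexes in this sketch are assumptions):

        import re

        def normalize_name(name, dot_space=False):
            """Strip awkward characters and collapse whitespace in a filename."""
            name = re.sub(r'[<>:"\\|?*\x00-\x1f]', "", name)  # characters unsafe on many filesystems
            name = re.sub(r"\s+", " ", name).strip()
            if dot_space:
                name = name.replace(" ", ".")  # like the --dot-space flag
            return name

        print(normalize_name('A  "Weird"   name?.mp3', dot_space=True))  # A.Weird.name.mp3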


</details>

###### disk-usage

<details><summary>Show disk usage</summary>

    $ library disk-usage -h
    usage: library disk-usage DATABASE [--sort-groups-by size | count] [--depth DEPTH] [PATH / SUBSTRING SEARCH]

    Only include files smaller than 1kib

        library disk-usage du.db --size=-1Ki
        lb du du.db -S-1Ki
        | path                                  |      size |   count |
        |---------------------------------------|-----------|---------|
        | /home/xk/github/xk/lb/__pycache__/    | 620 Bytes |       1 |
        | /home/xk/github/xk/lb/.github/        |    1.7 kB |       4 |
        | /home/xk/github/xk/lb/__pypackages__/ |    1.4 MB |    3519 |
        | /home/xk/github/xk/lb/xklb/           |    4.4 kB |      12 |
        | /home/xk/github/xk/lb/tests/          |    3.2 kB |       9 |
        | /home/xk/github/xk/lb/.git/           |  782.4 kB |    2276 |
        | /home/xk/github/xk/lb/.pytest_cache/  |    1.5 kB |       5 |
        | /home/xk/github/xk/lb/.ruff_cache/    |   19.5 kB |     100 |
        | /home/xk/github/xk/lb/.gitattributes  | 119 Bytes |         |
        | /home/xk/github/xk/lb/.mypy_cache/    | 280 Bytes |       4 |
        | /home/xk/github/xk/lb/.pdm-python     |  15 Bytes |         |

    Only include files with a specific depth

        library disk-usage du.db --depth 19
        lb du du.db -d 19
        | path                                                                                                                                                                |     size |
        |---------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
        | /home/xk/github/xk/lb/__pypackages__/3.11/lib/jedi/third_party/typeshed/third_party/2and3/requests/packages/urllib3/packages/ssl_match_hostname/__init__.pyi        | 88 Bytes |
        | /home/xk/github/xk/lb/__pypackages__/3.11/lib/jedi/third_party/typeshed/third_party/2and3/requests/packages/urllib3/packages/ssl_match_hostname/_implementation.pyi | 81 Bytes |



</details>

###### mount-stats

<details><summary>Show some relative mount stats</summary>

    $ library mount-stats -h
    usage: library mount-stats MOUNTPOINT ...

    Print relative use and free for multiple mount points


</details>

###### big-dirs

<details><summary>Show large folders</summary>

    $ library big-dirs -h
    usage: library big-dirs DATABASE [--limit (4000)] [--depth (0)] [--sort-groups-by deleted | played] [--size=+5MB]

    See what folders take up space

        library big-dirs video.db
        library big-dirs audio.db
        library big-dirs fs.db

        lb big-dirs video.db --folder-size=+10G --lower 400 --upper 14000

        lb big-dirs video.db --depth 5
        lb big-dirs video.db --depth 7

    You can even sort by auto-MCDA ~LOL~

        lb big-dirs video.db -u 'mcda median_size,-deleted'


</details>

###### search-db

<details><summary>Search a SQLITE database</summary>

    $ library search-db -h
    usage: library search-db DATABASE TABLE SEARCH ... [--delete-rows]

    Search all columns in a SQLITE table. If the table does not exist, the table whose name starts with the given text is used (if exactly one table matches)


</details>

### Media Database subcommands

###### block

<details><summary>Block a channel</summary>

    $ library block -h
    usage: library block DATABASE URLS ...

    Blocklist specific URLs (eg. YouTube channels, etc)

        library block dl.db https://annoyingwebsite/etc/

    Or URL substrings

        library block dl.db "%fastcompany.com%"

    Block videos from the playlist uploader

        library block dl.db --match-column playlist_path 'https://youtube.com/playlist?list=PLVoczRgDnXDLWV1UJ_tO70VT_ON0tuEdm'

    Or other columns

        library block dl.db --match-column title "% bitcoin%"
        library block dl.db --force --match-column uploader Zeducation

    Display subdomains (similar to `lb download-status`)

        library block audio.db
        subdomain              count    new_links    tried  percent_tried      successful  percent_successful      failed  percent_failed
        -------------------  -------  -----------  -------  ---------------  ------------  --------------------  --------  ----------------
        dts.podtrac.com         5244          602     4642  88.52%                    690  14.86%                    3952  85.14%
        soundcloud.com         16948        11931     5017  29.60%                    920  18.34%                    4097  81.66%
        twitter.com              945          841      104  11.01%                      5  4.81%                       99  95.19%
        v.redd.it               9530         6805     2725  28.59%                    225  8.26%                     2500  91.74%
        vimeo.com                865          795       70  8.09%                      65  92.86%                       5  7.14%
        www.youtube.com       210435       140952    69483  33.02%                  66017  95.01%                    3467  4.99%
        youtu.be               60061        51911     8150  13.57%                   7736  94.92%                     414  5.08%
        youtube.com             5976         5337      639  10.69%                    599  93.74%                      40  6.26%

    Find some words to block based on frequency / recency of downloaded media

        library watch dl.db -u time_downloaded desc -L 10000 -pf | lb nouns | sort | uniq -c | sort -g
        ...
        183 ArchiveOrg
        187 Documentary
        237 PBS
        243 BBC
        ...


</details>

###### playlists

<details><summary>List stored playlists</summary>

    $ library playlists -h
    usage: library playlists DATABASE

    List of Playlists

        library playlists
        ╒═════════════════╤════════════════════╤══════════════════════════════════════════════════════════════════════════╕
        │ extractor_key   │ title              │ path                                                                     │
        ╞═════════════════╪════════════════════╪══════════════════════════════════════════════════════════════════════════╡
        │ Youtube         │ Highlights of Life │ https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n │
        ╘═════════════════╧════════════════════╧══════════════════════════════════════════════════════════════════════════╛

    Search playlists

        library playlists audio.db badfinger
        path                                                        extractor_key    title                             count
        ----------------------------------------------------------  ---------------  ------------------------------  -------
        https://music.youtube.com/channel/UCyJzUJ95hXeBVfO8zOA0GZQ  ydl_Youtube      Uploads from Badfinger - Topic      226

    Aggregate Report of Videos in each Playlist

        library playlists -p a
        ╒═════════════════╤════════════════════╤══════════════════════════════════════════════════════════════════════════╤═══════════════╤═════════╕
        │ extractor_key   │ title              │ path                                                                     │ duration      │   count │
        ╞═════════════════╪════════════════════╪══════════════════════════════════════════════════════════════════════════╪═══════════════╪═════════╡
        │ Youtube         │ Highlights of Life │ https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n │ 53.28 minutes │      15 │
        ╘═════════════════╧════════════════════╧══════════════════════════════════════════════════════════════════════════╧═══════════════╧═════════╛
        1 playlist
        Total duration: 53.28 minutes

    Print only playlist urls:
        Useful for piping to other utilities like xargs or GNU Parallel.
        library playlists -p f
        https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n

    Remove a playlist/channel and all linked videos:
        library playlists --delete-rows https://vimeo.com/canal180



</details>

###### download

<details><summary>Download media</summary>

    $ library download -h
    usage: library download [--prefix /mnt/d/] [--safe] [--subs] [--auto-subs] [--small] DATABASE --video | --audio | --photos

    Files will be saved to <lb download prefix>/<extractor>/. If prefix is not specified, the current working directory will be used

    By default things will download in a random order

        library download dl.db --prefix ~/output/path/root/

    But you can sort; eg. oldest first

        library download dl.db -u m.time_modified,m.time_created

    Limit downloads to specific playlist URLs or substrings (TODO: https://github.com/chapmanjacobd/library/issues/31)

        library download dl.db https://www.youtube.com/c/BlenderFoundation/videos

    Limit downloads to specific video URLs or substrings

        library download dl.db --include https://www.youtube.com/watch?v=YE7VzlLtp-4
        library download dl.db -s https://www.youtube.com/watch?v=YE7VzlLtp-4  # equivalent

    Maximize the variety of subdomains

        library download photos.db --photos --image --sort "ROW_NUMBER() OVER ( PARTITION BY SUBSTR(m.path, INSTR(m.path, '//') + 2, INSTR( SUBSTR(m.path, INSTR(m.path, '//') + 2), '/') - 1) )"

    Print list of queued up downloads

        library download dl.db --print

    Print list of saved playlists

        library playlists dl.db -p a

    Print download queue groups

        library download-status audio.db
        ╒═══════════════════╤══════════════════╤════════════════════╤══════════╕
        │ extractor_key     │ duration         │   never_downloaded │   errors │
        ╞═══════════════════╪══════════════════╪════════════════════╪══════════╡
        │ Soundcloud        │                  │                 10 │        0 │
        ├───────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube           │ 10 days, 4 hours │                  1 │     2555 │
        │                   │ and 20 minutes   │                    │          │
        ├───────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube           │ 7.68 minutes     │                 99 │        1 │
        ╘═══════════════════╧══════════════════╧════════════════════╧══════════╛


</details>

###### download-status

<details><summary>Show download status</summary>

    $ library download-status -h
    usage: library download-status DATABASE

    Print download queue groups

        library download-status video.db
        ╒════════════════════╤══════════════════╤════════════════════╤══════════╕
        │ extractor_key      │ duration         │   never_downloaded │   errors │
        ╞════════════════════╪══════════════════╪════════════════════╪══════════╡
        │ Youtube            │ 3 hours and 2.07 │                 76 │        0 │
        │                    │ minutes          │                    │          │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Dailymotion        │                  │                 53 │        0 │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube            │ 1 day, 18 hours  │                 30 │        0 │
        │                    │ and 6 minutes    │                    │          │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Dailymotion        │                  │                186 │      198 │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube            │ 1 hour and 52.18 │                  1 │        0 │
        │                    │ minutes          │                    │          │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Vimeo              │                  │                253 │       49 │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube            │ 2 years, 4       │              51676 │      197 │
        │                    │ months, 15 days  │                    │          │
        │                    │ and 6 hours      │                    │          │
        ├────────────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube            │ 4 months, 23     │               2686 │        7 │
        │                    │ days, 19 hours   │                    │          │
        │                    │ and 33 minutes   │                    │          │
        ╘════════════════════╧══════════════════╧════════════════════╧══════════╛

    Simulate --safe flag

        library download-status video.db --safe


</details>

###### redownload

<details><summary>Re-download deleted/lost media</summary>

    $ library redownload -h
    usage: library redownload DATABASE

    If you have previously downloaded YouTube or other online media, but your
    hard drive failed or you accidentally deleted something, and if that media
    is still accessible from the same URL, this script can help to redownload
    everything that was scanned-as-deleted between two timestamps.

    List deletions:

        library redownload news.db
        Deletions:
        ╒═════════════════════╤═════════╕
        │ time_deleted        │   count │
        ╞═════════════════════╪═════════╡
        │ 2023-01-26T00:31:26 │     120 │
        ├─────────────────────┼─────────┤
        │ 2023-01-26T19:54:42 │      18 │
        ├─────────────────────┼─────────┤
        │ 2023-01-26T20:45:24 │      26 │
        ╘═════════════════════╧═════════╛
        Showing most recent 3 deletions. Use -l to change this limit

    Mark videos as candidates for download via specific deletion timestamp:

        library redownload city.db 2023-01-26T19:54:42
        ╒══════════╤════════════════╤═════════════════╤═══════════════════╤═════════╤══════════╤═══════╤══════════════════╤════════════════════════════════════════════════════════════════════════════════════════════════════════╕
        │ size     │ time_created   │ time_modified   │ time_downloaded   │   width │   height │   fps │ duration         │ path                                                                                                   │
        ╞══════════╪════════════════╪═════════════════╪═══════════════════╪═════════╪══════════╪═══════╪══════════════════╪════════════════════════════════════════════════════════════════════════════════════════════════════════╡
        │ 697.7 MB │ Apr 13 2022    │ Mar 11 2022     │ Oct 19            │    1920 │     1080 │    30 │ 21.22 minutes    │ /mnt/d/76_CityVideos/PRAIA DE BARRA DE JANGADA CANDEIAS JABOATÃO                                       │
        │          │                │                 │                   │         │          │       │                  │ RECIFE PE BRASIL AVENIDA BERNARDO VIEIRA DE MELO-4Lx3hheMPmg.mp4
        ...

    ...or between two timestamps inclusive:

        library redownload city.db 2023-01-26T19:54:42 2023-01-26T20:45:24


</details>

###### history

<details><summary>Show and manage playback history</summary>

    $ library history -h
    usage: library history [--frequency daily weekly (monthly) yearly] [--limit LIMIT] DATABASE [(all) watching watched created modified deleted]

    View playback history

        $ library history web_add.image.db
        In progress:
          play_count  time_last_played    playhead    path                                     title
        ------------  ------------------  ----------  ---------------------------------------  -----------
                   0  today, 20:48        2 seconds   https://siliconpr0n.org/map/COPYING.txt  COPYING.txt

    Show only completed history

        $ library history web_add.image.db --completed

    Show only in-progress history

        $ library history web_add.image.db --in-progress

    Delete history

        Delete two hours of history
        $ library history web_add.image.db --played-within '2 hours' -L inf --delete-rows

        Delete all history
        $ library history web_add.image.db -L inf --delete-rows

    See also: library stats -h
              library history-add -h


</details>

###### history-add

<details><summary>Add history from paths</summary>

    $ library history-add -h
    usage: library history-add DATABASE PATH ...

    Add history

        $ library history-add links.db $urls $paths
        $ library history-add links.db (cb)

    Items that don't already exist in the database will be counted under "skipped"



</details>

###### stats

<details><summary>Show some event statistics (created, deleted, watched, etc)</summary>

    $ library stats -h
    usage: library stats DATABASE TIME_COLUMN

    View watched stats

        $ library stats video.db --completed
        Finished watching:
        ╒═══════════════╤═════════════════════════════════╤════════════════╤════════════╤════════════╕
        │ time_period   │ duration_sum                    │ duration_avg   │ size_sum   │ size_avg   │
        ╞═══════════════╪═════════════════════════════════╪════════════════╪════════════╪════════════╡
        │ 2022-11       │ 4 days, 16 hours and 20 minutes │ 55.23 minutes  │ 26.3 GB    │ 215.9 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2022-12       │ 23 hours and 20.03 minutes      │ 35.88 minutes  │ 8.3 GB     │ 213.8 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-01       │ 17 hours and 3.32 minutes       │ 15.27 minutes  │ 14.3 GB    │ 214.1 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-02       │ 4 days, 5 hours and 60 minutes  │ 23.17 minutes  │ 148.3 GB   │ 561.6 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-03       │ 2 days, 18 hours and 18 minutes │ 11.20 minutes  │ 118.1 GB   │ 332.8 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-05       │ 5 days, 5 hours and 4 minutes   │ 45.75 minutes  │ 152.9 GB   │ 932.1 MB   │
        ╘═══════════════╧═════════════════════════════════╧════════════════╧════════════╧════════════╛

    View download stats

        $ library stats video.db time_downloaded --frequency daily
        Downloaded media:
        day         total_duration                          avg_duration                total_size    avg_size    count
        ----------  --------------------------------------  ------------------------  ------------  ----------  -------
        2023-08-11  1 month, 7 days and 8 hours             17 minutes                    192.2 GB     58.3 MB     3296
        2023-08-12  18 days and 15 hours                    17 minutes                     89.7 GB     56.4 MB     1590
        2023-08-14  13 days and 1 hours                     22 minutes                    111.2 GB    127.2 MB      874
        2023-08-15  13 days and 6 hours                     17 minutes                    140.0 GB    126.7 MB     1105
        2023-08-17  2 months, 8 days and 8 hours            19 minutes                    380.4 GB     72.6 MB     5243
        2023-08-18  2 months, 30 days and 18 hours          17 minutes                    501.9 GB     63.3 MB     7926
        2023-08-19  2 months, 6 days and 19 hours           19 minutes                    578.1 GB    110.6 MB     5229
        2023-08-20  3 days and 9 hours                      6 minutes and 57 seconds       14.5 GB     20.7 MB      700
        2023-08-21  4 days and 3 hours                      12 minutes                     18.0 GB     36.3 MB      495
        2023-08-22  10 days and 8 hours                     17 minutes                     82.1 GB     91.7 MB      895
        2023-08-23  19 days and 9 hours                     22 minutes                     93.7 GB     74.7 MB     1254

        See also: library stats video.db time_downloaded -f daily --hide-deleted

    View deleted stats

        $ library stats video.db time_deleted
        Deleted media:
        ╒═══════════════╤════════════════════════════════════════════╤════════════════╤════════════╤════════════╕
        │ time_period   │ duration_sum                               │ duration_avg   │ size_sum   │ size_avg   │
        ╞═══════════════╪════════════════════════════════════════════╪════════════════╪════════════╪════════════╡
        │ 2023-04       │ 1 year, 10 months, 3 days and 8 hours      │ 4.47 minutes   │ 1.6 TB     │ 7.4 MB     │
        ├───────────────┼────────────────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-05       │ 9 months, 26 days, 20 hours and 34 minutes │ 30.35 minutes  │ 1.1 TB     │ 73.7 MB    │
        ╘═══════════════╧════════════════════════════════════════════╧════════════════╧════════════╧════════════╛
        ╒════════════════════════════════════════════════════════════════════════════════════════════════════════════╤═══════════════╤══════════════════╤════════════════╕
        │ title_path                                                                                                 │ duration      │   subtitle_count │ time_deleted   │
        ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════╪═══════════════╪══════════════════╪════════════════╡
        │ Terminus (1987)                                                                                            │ 1 hour and    │                0 │ yesterday      │
        │ /mnt/d/70_Now_Watching/Terminus_1987.mp4                                                                   │ 15.55 minutes │                  │                │
        ├────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┼──────────────────┼────────────────┤
        │ Commodore 64 Longplay [062] The Transformers (EU) /mnt/d/71_Mealtime_Videos/Youtube/World_of_Longplays/Com │ 24.77 minutes │                2 │ yesterday      │
        │ modore_64_Longplay_062_The_Transformers_EU_[1RRX7Kykb38].webm                                              │               │                  │                │
        ...


    View time_modified stats

        $ library stats example_dbs/web_add.image.db time_modified -f year
        Time_Modified media:
        year      total_size    avg_size    count
        ------  ------------  ----------  -------
        2010         4.4 MiB     1.5 MiB        3
        2011       136.2 MiB    68.1 MiB        2
        2013         1.6 GiB    10.7 MiB      154
        2014         4.6 GiB    25.2 MiB      187
        2015         4.3 GiB    26.5 MiB      167
        2016         5.1 GiB    46.8 MiB      112
        2017         4.8 GiB    51.7 MiB       95
        2018         5.3 GiB    97.9 MiB       55
        2019         1.3 GiB    46.5 MiB       29
        2020        25.7 GiB   113.5 MiB      232
        2021        25.6 GiB    96.5 MiB      272
        2022        14.6 GiB    82.7 MiB      181
        2023        24.3 GiB    72.5 MiB      343
        2024        17.3 GiB   104.8 MiB      169
        14 media


</details>

###### search

<details><summary>Search captions / subtitles</summary>

    $ library search -h
    usage: library search DATABASE QUERY

    Search text databases and subtitles

        library search fts.db boil
            7 captions
            /mnt/d/70_Now_Watching/DidubeTheLastStop-720p.mp4
               33:46 I brought a real stainless steel boiler
               33:59 The world is using only stainless boilers nowadays
               34:02 The boiler is old and authentic
               34:30 - This boiler? - Yes
               34:44 I am not forcing you to buy this boiler…
               34:52 Who will give her a one liter stainless steel boiler for one Lari?
               34:54 Glass boilers cost two

    Search and open file

        library search fts.db 'two words' --open


</details>

###### optimize

<details><summary>Re-optimize database</summary>

    $ library optimize -h
    usage: library optimize DATABASE [--force]

    Optimize library databases

    The force flag is usually unnecessary, and using it can take much longer


</details>

### Playback subcommands

###### watch

<details><summary>Watch / Listen</summary>

    $ library watch -h
    usage: library watch DATABASE [optional args]

    Control playback:
        To stop playback press Ctrl-C in either the terminal or mpv

        Create global shortcuts in your desktop environment by sending commands to mpv_socket:
        echo 'playlist-next force' | socat - /tmp/mpv_socket

    Override the default player (mpv):
        library watch --player "vlc --vlc-opts"

    Cast to chromecast groups:
        library watch --cast --cast-to "Office pair"
        library watch -ct "Office pair"  # equivalent
        If you don't know the exact name of your chromecast group run `catt scan`

    Play media in order (similarly named episodes):
        library watch --play-in-order
        library watch -O    # equivalent

        The default sort value is 'natural_ps' which means media will be sorted by parent path
        and then stem in a natural way (using the integer values within the path). But there are many other options:

        Options:

            - reverse: reverse the sort order
            - compat: treat characters like '⑦' as '7'

        Algorithms:

            - natural: parse numbers as integers
            - os: sort similar to the OS File Explorer sorts. To improve non-alphanumeric sorting on Mac OS X and Linux it is necessary to install pyicu (perhaps via python3-icu -- https://gitlab.pyicu.org/main/pyicu#installing-pyicu)
            - path: use natsort "path" algorithm (https://natsort.readthedocs.io/en/stable/api.html#the-ns-enum)
            - human: use system locale
            - ignorecase: treat all case as equal
            - lowercase: sort lowercase first
            - signed: sort with an understanding of negative numbers
            - python: sort like default python

        Values:

            - path
            - parent
            - stem
            - title (or any other column value)
            - ps: parent, stem
            - pts: parent, title, stem

        Use this format: algorithm, value, algorithm_value, or option_algorithm_value.
        For example:

            - library watch -O human
            - library watch -O title
            - library watch -O human_title
            - library watch -O reverse_compat_human_title

            - library watch -O path       # path algorithm and parent, stem values (path_ps)
            - library watch -O path_path  # path algorithm and path values
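
        As a rough illustration of the default 'natural_ps' behavior, here is a sketch using
        the natsort library referenced above (illustrative, not library code):

            from pathlib import Path
            from natsort import natsorted

            files = ["show/ep10.mkv", "show/ep2.mkv", "intro/ep1.mkv"]
            # natural sort by (parent, stem): numbers inside names compare as integers
            print(natsorted(files, key=lambda p: (str(Path(p).parent), Path(p).stem)))
            # -> ['intro/ep1.mkv', 'show/ep2.mkv', 'show/ep10.mkv']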

        Also, if you are using --random you need to fetch sibling media to play the media in order:

            - library watch --random --fetch-siblings each -O          # get the first result per directory
            - library watch --random --fetch-siblings if-audiobook -O  # get the first result per directory if 'audiobook' is in the path
            - library watch --random --fetch-siblings always -O        # get 2,000 results per directory

        If searching by a specific subpath it may be preferable to just sort by path instead
        library watch d/planet.earth.2024/ -u path

        library watch --related  # Similar to -O but uses fts to find similar content
        library watch -R         # equivalent
        library watch -RR        # above, plus ignores most filters

        library watch --cluster  # cluster-sort to put similar-named paths closer together
        library watch -C         # equivalent

        library watch --big-dirs # Recommended to use with --duration or --depth filters; see `lb big-dirs -h` for more info
        library watch -B         # equivalent

        All of these options can be used together but it will be a bit slow and the results might be mid-tier
        as multiple different algorithms create a muddied signal (too many cooks in the kitchen):
        library watch -RRCO

        You can even sort the items within each cluster by auto-MCDA ~LOL~
        library watch -B --sort-groups-by 'mcda median_size,-deleted'
        library watch -C --sort-groups-by 'mcda median_size,-deleted'

    Filter media by file siblings of parent directory:
        library watch --sibling   # only include files which have more than or equal to one sibling
        library watch --solo      # only include files which are alone by themselves

        `--sibling` is just a shortcut for `--lower 2`; `--solo` is `--upper 1`
        library watch --sibling --solo      # you will always get zero records here
        library watch --lower 2 --upper 1   # equivalent

        You can be more specific via the `--upper` and `--lower` flags
        library watch --lower 3   # only include files which have three or more siblings
        library watch --upper 3   # only include files which have fewer than three siblings
        library watch --lower 3 --upper 3   # only include files which are three siblings inclusive
        library watch --lower 12 --upper 25 -O  # on my machine this launches My Mister 2018

    Play recent partially-watched videos (requires mpv history):
        library watch --partial       # play newest first

        library watch --partial old   # play oldest first
        library watch -P o            # equivalent

        library watch -P p            # sort by percent remaining
        library watch -P t            # sort by time remaining
        library watch -P s            # skip partially watched (only show unseen)

        The default time used is "last-viewed" (ie. the most recent time you closed the video)
        If you want to use the "first-viewed" time (ie. the very first time you opened the video)
        library watch -P f            # use watch_later file creation time instead of modified time

        You can combine most of these options, though some will be overridden by others.
        library watch -P fo           # this means "show the oldest videos using the time I first opened them"
        library watch -P pt           # weighted remaining (percent * time remaining)

    Print instead of play:
        library watch --print --limit 10  # print the next 10 files
        library watch -p -L 10  # print the next 10 files
        library watch -p  # this will print _all_ the media. be cautious about `-p` on an unfiltered set

        Printing modes
        library watch -p    # print as a table
        library watch -p a  # print an aggregate report
        library watch -p b  # print a big-dirs report (see library big-dirs -h for more info)
        library watch -p f  # print fields (defaults to path; use --cols to change)
                               # -- useful for piping paths to utilities like xargs or GNU Parallel

        library watch -p d  # mark deleted
        library watch -p w  # mark watched

        Some printing modes can be combined
        library watch -p df  # print files for piping into another program and mark them as deleted within the db
        library watch -p bf  # print fields from big-dirs report

        Check if you have downloaded something before
        library watch -u duration -p -s 'title'

        Print an aggregate report of deleted media
        library watch -w time_deleted!=0 -p=a
        ╒═══════════╤══════════════╤═════════╤═════════╕
        │ path      │ duration     │ size    │   count │
        ╞═══════════╪══════════════╪═════════╪═════════╡
        │ Aggregate │ 14 days, 23  │ 50.6 GB │   29058 │
        │           │ hours and 42 │         │         │
        │           │ minutes      │         │         │
        ╘═══════════╧══════════════╧═════════╧═════════╛
        Total duration: 14 days, 23 hours and 42 minutes

        Print an aggregate report of media that has no duration information (ie. online or corrupt local media)
        library watch -w 'duration is null' -p=a

        Print a list of filenames which have below 1280px resolution
        library watch -w 'width<1280' -p=f

        Print media you have partially viewed with mpv
        library watch --partial -p
        library watch -P -p  # equivalent
        library watch -P -p f --cols path,progress,duration  # print CSV of partially watched files
        library watch --partial -pa  # print an aggregate report of partially watched files

        View how much time you have watched
        library watch -w play_count'>'0 -p=a

        See how much video you have
        library watch video.db -p=a
        ╒═══════════╤═════════╤═════════╤═════════╕
        │ path      │   hours │ size    │   count │
        ╞═══════════╪═════════╪═════════╪═════════╡
        │ Aggregate │  145769 │ 37.6 TB │  439939 │
        ╘═══════════╧═════════╧═════════╧═════════╛
        Total duration: 16 years, 7 months, 19 days, 17 hours and 25 minutes

        View all the columns
        library watch -p -L 1 --cols '*'

        Open ipython with all of your media
        library watch -vv -p --cols '*'
        ipdb> len(media)
        462219

    Set the play queue size:
        By default the play queue is 120 items--long enough that you likely have not noticed
        but short enough that the program is snappy.

        If you want everything in your play queue you can use the aid of infinity.
        Pick your poison (these all do effectively the same thing):
        library watch -L inf
        library watch -l inf
        library watch --queue inf
        library watch -L 999999999999

        You may also want to restrict the play queue.
        For example, when you only want 1000 random files:
        library watch -u random -L 1000

    Offset the play queue:
        You can also offset the queue. For example if you want to skip one or ten media:
        library watch --offset 10      # offset ten from the top of an ordered query

    Repeat
        library watch                  # listen to 120 random songs (DEFAULT_PLAY_QUEUE)
        library watch --limit 5        # listen to FIVE songs
        library watch -l inf -u random # listen to random songs indefinitely
        library watch -s infinite      # listen to songs from the band infinite

    Constrain media by search:
        Audio files have many tags to readily search through so metadata like artist,
        album, and even mood are included in search.
        Video files have less consistent metadata and so only paths are included in search.
        library watch --include happy  # only matches will be included
        library watch -s happy         # equivalent
        library watch --exclude sad    # matches will be excluded
        library watch -E sad           # equivalent

        Search only the path column
        library watch -O -s 'path : mad max'
        library watch -O -s 'path : "mad max"' # add "quotes" to be more strict

        Double spaces are parsed as one space
        library watch -s '  ost'        # will match OST and not ghost
        library watch -s toy story      # will match '/folder/toy/something/story.mp3'
        library watch -s 'toy  story'   # will match more strictly '/folder/toy story.mp3'

        You can search without -s but it must directly follow the database due to how argparse works
        library watch ./your.db searching for something

    Constrain media by arbitrary SQL expressions:
        library watch --where audio_count = 2  # media which have two audio tracks
        library watch -w "language = 'eng'"    # media which have an English language tag
                                                    (this could be audio _or_ subtitle)
        library watch -w subtitle_count=0      # media that doesn't have subtitles

    Constrain media to duration (in minutes):
        library watch --duration 20
        library watch -d 6  # 6 mins ±10 percent (ie. between 5 and 7 mins)
        library watch -d-6  # less than 6 mins
        library watch -d+6  # more than 6 mins

        Duration can be specified multiple times:
        library watch -d+5 -d-7  # should be similar to -d 6

        If you want exact time use `where`
        library watch --where 'duration=6*60'

    Constrain media to file size (in megabytes):
        library watch --size 20
        library watch -S 6  # 6 MB ±10 percent (ie. between 5 and 7 MB)
        library watch -S-6  # less than 6 MB
        library watch -S+6  # more than 6 MB

    Constrain media by time_created / time_last_played / time_deleted / time_modified:
        library watch --created-within '3 days'
        library watch --created-before '3 years'

    Constrain media by throughput:
        Bitrate information is not explicitly saved.
        You can use file size and duration as a proxy for throughput:
        library watch -w 'size/duration<50000'

    Constrain media to portrait orientation video:
        library watch --portrait
        library watch -w 'width<height' # equivalent

    Constrain media to duration of videos which match any size constraints:
        library watch --duration-from-size +700 -u 'duration desc, size desc'

    Constrain media to online-media or local-media:
        Not to be confused with filtering out "offline" media (eg. local files on a disconnected HDD)
        library watch --online-media-only
        library watch --online-media-only -i  # and ignore playback errors (ie. YouTube video deleted)
        library watch --local-media-only

    Specify media play order:
        library watch --sort duration   # play shortest media first
        library watch -u duration desc  # play longest media first

        You can use multiple SQL ORDER BY expressions
        library watch -u 'subtitle_count > 0 desc' # play media that has at least one subtitle first

        Prioritize large-sized media
        library watch --sort 'ntile(10000) over (order by size/duration) desc'
        library watch -u 'ntile(100) over (order by size) desc'

        Sort by count of media with the same-X column (default DESC: most common to least common value)
        library watch -u same-duration
        library watch -u same-title
        library watch -u same-size
        library watch -u same-width, same-height ASC, same-fps
        library watch -u same-time_uploaded same-view_count same-upvote_ratio

        No media found when using --random
        In addition to -u/--sort random, there is also the -r/--random flag.
        If you have a large database it should be faster than -u random but it comes with a caveat:
        This flag randomizes via rowid at an earlier stage to boost performance.
        It is possible that you will see "No media found" or fewer media than expected.
        You can bypass this by setting --limit. For example:
        library watch -B --folder-size=+12GiB --folder-size=-100GiB -r -pa
        path         count      size  duration                        avg_duration      avg_size
        ---------  -------  --------  ------------------------------  --------------  ----------
        Aggregate    10000  752.5 GB  4 months, 15 days and 10 hours  20 minutes         75.3 MB
        (17 seconds)
        library watch -B --folder-size=+12GiB --folder-size=-100GiB -r -pa -l inf
        path         count     size  duration                                 avg_duration      avg_size
        ---------  -------  -------  ---------------------------------------  --------------  ----------
        Aggregate   140868  10.6 TB  5 years, 2 months, 28 days and 14 hours  20 minutes         75.3 MB
        (30 seconds)

    Post-actions -- choose what to do after playing:
        library watch --post-action keep    # do nothing after playing (default)
        library watch -k delete             # delete file after playing
        library watch -k softdelete         # mark deleted after playing

        library watch -k ask_keep           # ask whether to keep after playing
        library watch -k ask_delete         # ask whether to delete after playing

        library watch -k move               # move to "keep" dir after playing
        library watch -k ask_move           # ask whether to move to "keep" folder
        The default location of the keep folder is ./keep/ (relative to the played media file)
        You can change this by explicitly setting an *absolute* `keep-dir` path:
        library watch -k ask_move --keep-dir /home/my/music/keep/

        library watch -k ask_move_or_delete # ask after each whether to move to "keep" folder or delete

        You can also bind keys in mpv to different exit codes. For example in input.conf:
            ; quit 5

        And if you run something like:
            library watch --cmd5 ~/bin/process_audio.py
            library watch --cmd5 echo  # this will effectively do nothing except skip the normal post-actions via mpv shortcut

        When semicolon is pressed in mpv (it will exit with error code 5) then the applicable player-exit-code command
        will start with the media file as the first argument; in this case `~/bin/process_audio.py $path`.
        The command will be daemonized if library exits before it completes.

        To prevent confusion, normal post-actions will be skipped if the exit-code is greater than 4.
        Exit-codes 0, 1, 2, 3, and 4: the external post-action will run after normal post-actions. Be careful of conflicting player-exit-code command and post-action behavior when using these!

    Experimental options:
        Duration to play (in seconds) while changing the channel
        library watch --interdimensional-cable 40
        library watch -4dtv 40
        You can open two terminals to replicate AMV Hell somewhat
        library watch --volume 0 -4dtv 30
        library listen -4dtv 30

        Playback multiple files at once
        library watch --multiple-playback    # one per display; or two if only one display detected
        library watch --multiple-playback 4  # play four media at once, divided among available screens
        library watch -m 4 --screen-name eDP # play four media at once on specific screen
        library watch -m 4 --loop --crop     # play four cropped videos on a loop
        library watch -m 4 --hstack          # use hstack style

        When using `--multiple-playback` it may be helpful to set simple window focus rules to prevent keys from accidentally being entered in the wrong mpv window (as new windows are created and capture the cursor focus).
        You can set and restore your previous mouse focus setting by wrapping the command like this:

            focus-under-mouse
            library watch ... --multiple-playback 4
            focus-follows-mouse

        For example in KDE:

            function focus-under-mouse
                kwriteconfig5 --file kwinrc --group Windows --key FocusPolicy FocusUnderMouse
                qdbus-qt5 org.kde.KWin /KWin reconfigure
            end

            function focus-follows-mouse
                kwriteconfig5 --file kwinrc --group Windows --key FocusPolicy FocusFollowsMouse
                kwriteconfig5 --file kwinrc --group Windows --key NextFocusPrefersMouse true
                qdbus-qt5 org.kde.KWin /KWin reconfigure
            end



</details>

###### tabs-open

<details><summary>Open your tabs for the day</summary>

    $ library tabs-open -h
    usage: library tabs-open DATABASE

    Tabs is meant to run **once per day**. Here is how you would configure it with `crontab`:

        45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db

    If things aren't working you can use `at` to simulate an environment similar to `cron`

        echo 'fish -c "export DISPLAY=:0 && library tabs /full/path/to/tabs.db"' | at NOW

    You can also invoke tabs manually:

        library tabs -L 1  # open one tab

    Print URLs

        library tabs -w "frequency='yearly'" -p
        ╒════════════════════════════════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                                           │ frequency   │ time_valid   │
        ╞════════════════════════════════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/Autonomia/top/?sort=top&t=year        │ yearly      │ Dec 31 1970  │
        ├────────────────────────────────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/top/?sort=top&t=year        │ yearly      │ Dec 31 1970  │
        ├────────────────────────────────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/ExperiencedDevs/top/?sort=top&t=year  │ yearly      │ Dec 31 1970  │

        ...

        ╘════════════════════════════════════════════════════════════════╧═════════════╧══════════════╛

    View how many yearly tabs you have:

        library tabs -w "frequency='yearly'" -p a
        ╒═══════════╤═════════╕
        │ path      │   count │
        ╞═══════════╪═════════╡
        │ Aggregate │     134 │
        ╘═══════════╧═════════╛

    Delete URLs

        library tabs -p -s cyber
        ╒═══════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                  │ frequency   │ time_valid   │
        ╞═══════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/cyberDeck/to │ yearly      │ Dec 31 1970  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/to │ yearly      │ Aug 29 2023  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://www.reddit.com/r/cyberDeck/   │ yearly      │ Sep 05 2023  │
        ╘═══════════════════════════════════════╧═════════════╧══════════════╛
        library tabs -p -w "path='https://www.reddit.com/r/cyberDeck/'" --delete-rows
        Removed 1 metadata records
        library tabs -p -s cyber
        ╒═══════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                  │ frequency   │ time_valid   │
        ╞═══════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/cyberDeck/to │ yearly      │ Dec 31 1970  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/to │ yearly      │ Aug 29 2023  │
        │ p/?sort=top&t=year                    │             │              │
        ╘═══════════════════════════════════════╧═════════════╧══════════════╛


</details>

###### links-open

<details><summary>Open links from link dbs</summary>

    $ library links-open -h
    usage: library links-open DATABASE [search] [--title] [--title-prefix TITLE_PREFIX]

    Open links from a links db

        wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/music.korea.ln.db
        library open-links music.korea.ln.db

    Only open links once

        library open-links ln.db -w 'time_modified=0'

    Print a preview instead of opening tabs

        library open-links ln.db -p
        library open-links ln.db --cols time_modified -p

    Delete rows

        Make sure you have the right search query
        library open-links ln.db "query" -p -L inf
        library open-links ln.db "query" -pa  # view total

        library open-links ln.db "query" -pd  # mark as deleted

    Custom search engine

        library open-links ln.db --title --prefix 'https://duckduckgo.com/?q='

    Skip local media

        library open-links dl.db --online
        library open-links dl.db -w 'path like "http%"'  # equivalent



</details>

###### surf

<details><summary>Auto-load browser tabs in a streaming way (stdin)</summary>

    $ library surf -h
    usage: library surf [--count COUNT] [--target-hosts TARGET_HOSTS] < stdin

    Streaming tab loader: press ctrl+c to stop.

    Open tabs from a line-delimited file:

        cat tabs.txt | library surf -n 5
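
    Any line-delimited source of URLs works since surf reads stdin; for example (the URLs are placeholders):

        printf '%s\n' https://example.com https://example.org | library surf -n 2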

    You will likely want to use this setting in `about:config`

        browser.tabs.loadDivertedInBackground = true

    If you prefer GUI, check out https://unli.xyz/tabsender/


</details>

### Database enrichment subcommands

###### dedupe-db

<details><summary>Dedupe SQLITE tables</summary>

    $ library dedupe-db -h
    usage: library dedupe-db DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS]

    Dedupe your database (not to be confused with the dedupe-media subcommand)

    It should not need to be said but *backup* your database before trying this tool!

    Dedupe-DB will help remove duplicate rows based on non-primary-key business keys

        library dedupe-db ./video.db media --bk path

    By default, all non-primary-key and non-business-key columns will be upserted unless --only-columns is provided.
    If --primary-keys is not provided, the table's primary keys will be used.
    If your duplicate rows contain exactly the same data in all columns, you can run with --skip-upsert to save a lot of time.
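
    As a rough conceptual sketch (not what dedupe-db actually runs), deduping on the `path` business key is similar to keeping one row per path, assuming the table keeps its implicit SQLite rowid:

        sqlite3 video.db "DELETE FROM media WHERE rowid NOT IN (SELECT min(rowid) FROM media GROUP BY path)"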


</details>

###### dedupe-media

<details><summary>Dedupe similar media</summary>

    $ library dedupe-media -h
    usage: library dedupe-media [--audio | --id | --title | --filesystem] [--only-soft-delete] [--limit LIMIT] DATABASE

    Dedupe your files (not to be confused with the dedupe-db subcommand)

    Exact file matches

        library dedupe-media --fs video.db

    Dedupe based on duration and file basename or dirname similarity

        library dedupe-media video.db --duration --basename -s release_group  # pre-filter with a specific text substring
        library dedupe-media video.db --duration --basename -u m1.size  # sort such that small files are treated as originals and larger files are deleted
        library dedupe-media video.db --duration --basename -u 'm1.size desc'  # sort such that large files are treated as originals and smaller files are deleted

    Dedupe online against local media

        library dedupe-media video.db / http


</details>

###### merge-online-local

<details><summary>Merge online and local data</summary>

    $ library merge-online-local -h
    usage: library merge-online-local DATABASE

    If you have previously downloaded YouTube or other online media, you can dedupe
    your database and combine the online and local media records as long as your
    files have the youtube-dl / yt-dlp id in the filename.
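
    For example, a file saved with yt-dlp's default output template `%(title)s [%(id)s].%(ext)s`, such as `Some Lecture [abc123XYZ_-].mkv` (a made-up id), carries the id needed for matching.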


</details>

###### mpv-watchlater

<details><summary>Import mpv watchlater files to history</summary>

    $ library mpv-watchlater -h
    usage: library mpv-watchlater DATABASE [--watch-later-directory ~/.config/mpv/watch_later/]

    Extract timestamps from MPV to the history table
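
    For reference, mpv's watch_later files are small config snippets named after a hash of the media path; the resume timestamp is the `start` line:

        # ~/.config/mpv/watch_later/<hash-of-path>
        start=1234.567890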


</details>

###### reddit-selftext

<details><summary>Copy selftext links to media table</summary>

    $ library reddit-selftext -h
    usage: library reddit-selftext DATABASE

    Extract URLs from reddit selftext from the reddit_posts table to the media table


</details>

###### tabs-shuffle

<details><summary>Randomize tabs.db a bit</summary>

    $ library tabs-shuffle -h
    usage: library tabs-shuffle DATABASE

    Moves each tab to a random day-of-the-week by default

    It may also be useful to shuffle monthly tabs, etc. You can accomplish this like so:

        library tabs-shuffle tabs.db -d  31 -f monthly
        library tabs-shuffle tabs.db -d  90 -f quarterly
        library tabs-shuffle tabs.db -d 365 -f yearly


</details>

###### pushshift

<details><summary>Convert pushshift data to reddit.db format (stdin)</summary>

    $ library pushshift -h
    usage: library pushshift DATABASE < stdin

    Download data (about 600GB jsonl.zst; 6TB uncompressed)

        wget -e robots=off -r -k -A zst https://files.pushshift.io/reddit/submissions/

    Load data from files via unzstd

        unzstd --memory=2048MB --stdout RS_2005-07.zst | library pushshift pushshift.db

    Or load multiple files in parallel, optimizing each database after loading (the combined output is about 1.5TB, SQLITE fts-searchable):

        for f in psaw/files.pushshift.io/reddit/submissions/*.zst
            set db (basename $f).db
            echo "unzstd --memory=2048MB --stdout $f | library pushshift $db && library optimize $db"
        end | parallel -j5


</details>

### Update database subcommands

###### fs-update

<details><summary>Update local media</summary>

    $ library fs-update -h
    usage: library fs-update DATABASE

    Update each path previously saved:

        library fsupdate video.db


</details>

###### tube-update

<details><summary>Update online video media</summary>

    $ library tube-update -h
    usage: library tube-update [--audio | --video] DATABASE

    Fetch the latest videos for every playlist saved in your database

        library tubeupdate educational.db

    Fetch extra metadata:

        By default tubeupdate will quickly add media.
        You can run with --extra to fetch more details (best resolution width, height, subtitle tags, etc):

        library tubeupdate educational.db --extra https://www.youtube.com/channel/UCBsEUcR-ezAuxB2WlfeENvA/videos

    Remove duplicate playlists:

        lb dedupe-db video.db playlists --bk extractor_playlist_id


</details>

###### web-update

<details><summary>Update open-directory media</summary>

    $ library web-update -h
    usage: library web-update DATABASE

    Update saved open directories
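
    For example (the database name is a placeholder):

        library web-update open_dir.db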



</details>

###### gallery-update

<details><summary>Update online gallery media</summary>

    $ library gallery-update -h
    usage: library gallery-update DATABASE URLS

    Check previously saved gallery_dl URLs for new content


</details>

###### links-update

<details><summary>Update a link-scraping database</summary>

    $ library links-update -h
    usage: library links-update DATABASE

    Fetch new links from each path previously saved

        library links-update links.db


</details>

###### reddit-update

<details><summary>Update reddit media</summary>

    $ library reddit-update -h
    usage: library reddit-update [--audio | --video] [--lookback N_DAYS] [--praw-site bot1] DATABASE

    Fetch the latest posts for every subreddit/redditor saved in your database

        library redditupdate edu_subreddits.db


</details>

### Misc subcommands

###### export-text

<details><summary>Export HTML files from SQLite databases</summary>

    $ library export-text -h
    usage: library export-text DATABASE

    Generate HTML files from SQLite databases


</details>

###### dedupe-czkawka

<details><summary>Process czkawka diff output</summary>

    $ library dedupe-czkawka -h
    usage: library dedupe-czkawka [--volume VOLUME] [--auto-seek] [--ignore-errors] [--folder] [--folder-glob [FOLDER_GLOB]] [--replace] [--no-replace] [--override-trash OVERRIDE_TRASH] [--delete-files] [--gui]
               [--auto-select-min-ratio AUTO_SELECT_MIN_RATIO] [--all-keep] [--all-left] [--all-right] [--all-delete] [--verbose]
               czkawka_dupes_output_path

    Choose which duplicate to keep by opening both side-by-side in mpv
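
    For example, pointing it at a duplicates list exported from czkawka (the path is a placeholder):

        library dedupe-czkawka ~/Downloads/results_duplicates.txt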


</details>


<details><summary>Chicken mode</summary>


           ////////////////////////
          ////////////////////////|
         //////////////////////// |
        ////////////////////////| |
        |    _\/_   |   _\/_    | |
        |     )o(>  |  <)o(     | |
        |   _/ <\   |   /> \_   | |        just kidding :-)
        |  (_____)  |  (_____)  | |_
        | ~~~oOo~~~ | ~~~0oO~~~ |/__|
       _|====\_=====|=====_/====|_ ||
      |_|\_________ O _________/|_|||
       ||//////////|_|\\\\\\\\\\|| ||
       || ||       |\_\\        || ||
       ||/||        \\_\\       ||/||
       ||/||         \)_\)      ||/||
       || ||         \  O /     || ||
       ||             \  /      || LGB

                   \________/======
                   / ( || ) \\

</details>

You can expand all by running this in your browser console:

```js
(() => {
  const readmeDiv = document.getElementById("readme");
  const detailsElements = readmeDiv.getElementsByTagName("details");
  for (let i = 0; i < detailsElements.length; i++) {
    detailsElements[i].setAttribute("open", "true");
  }
})();
```



            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "xklb",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": null,
    "author": null,
    "author_email": "Jacob Chapman <7908073+chapmanjacobd@users.noreply.github.com>",
    "download_url": "https://files.pythonhosted.org/packages/3c/21/c764eb867ea1001f9d3bd103614fe027d9b351cad84107db5d1394d6029f/xklb-2.6.22.tar.gz",
    "platform": null,
    "description": "# library (media toolkit)\n\nA wise philosopher once told me: \"the future is [autotainment](https://www.youtube.com/watch?v=F9sZFrsjPp0)\".\n\nManage and curate large media libraries. An index for your archive.\nPrimary usage is local filesystem but also supports some virtual constructs like\ntracking online video playlists (eg. YouTube subscriptions) and scheduling browser tabs.\n\n<img align=\"right\" width=\"300\" height=\"600\" src=\"https://raw.githubusercontent.com/chapmanjacobd/library/main/.github/examples/art.avif\" />\n\n## Install\n\nLinux recommended but [Windows setup instructions](./Windows.md) available.\n\n    pip install xklb\n\nShould also work on Mac OS.\n\n### External dependencies\n\nRequired: `ffmpeg`\n\nSome features work better with: `mpv`, `firefox`, `fish`\n\n## Getting started\n\n<details><summary>Local media</summary>\n\n### 1. Extract Metadata\n\nFor thirty terabytes of video the initial scan takes about four hours to complete.\nAfter that, subsequent scans of the path (or any subpaths) are much quicker--only\nnew files will be read by `ffprobe`.\n\n    library fsadd tv.db ./video/folder/\n\n![termtosvg](./examples/extract.svg)\n\n### 2. Watch / Listen from local files\n\n    library watch tv.db                           # the default post-action is to do nothing\n    library watch tv.db --post-action delete      # delete file after playing\n    library listen finalists.db -k ask_keep       # ask whether to keep file after playing\n\nTo stop playing press Ctrl+C in either the terminal or mpv\n\n</details>\n\n<details><summary>Online media</summary>\n\n### 1. Download Metadata\n\nDownload playlist and channel metadata. Break free of the YouTube algo~\n\n    library tubeadd educational.db https://www.youtube.com/c/BranchEducation/videos\n\n[![termtosvg](./examples/tubeadd.svg \"library tubeadd example\")](https://asciinema.org/a/BzplqNj9sCERH3A80GVvwsTTT)\n\nAnd you can always add more later--even from different websites.\n\n    library tubeadd maker.db https://vimeo.com/terburg\n\nTo prevent mistakes the default configuration is to download metadata for only\nthe most recent 20,000 videos per playlist/channel.\n\n    library tubeadd maker.db --extractor-config playlistend=1000\n\nBe aware that there are some YouTube Channels which have many items--for example\nthe TEDx channel has about 180,000 videos. Some channels even have upwards of\ntwo million videos. More than you could likely watch in one sitting--maybe even one lifetime.\nOn a high-speed connection (>500 Mbps), it can take up to five hours to download\nthe metadata for 180,000 videos.\n\nTIP! If you often copy and paste many URLs you can paste line-delimited text as arguments via a subshell. For example, in `fish` shell with [cb](https://github.com/niedzielski/cb):\n\n    library tubeadd my.db (cb)\n\nOr in BASH:\n\n    library tubeadd my.db $(xclip -selection c)\n\n#### 1a. Get new videos for saved playlists\n\nTubeupdate will go through the list of added playlists and fetch metadata for\nany videos not previously seen.\n\n    library tube-update tube.db\n\n### 2. 
Watch / Listen from websites\n\n    library watch maker.db\n\nTo stop playing press Ctrl+C in either the terminal or mpv\n\n</details>\n\n<details><summary>List all subcommands</summary>\n\n    $ library\n    xk media library subcommands (v2.6.022)\n\n    Create database subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 fs-add        \u2502 Add local media                          \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tube-add      \u2502 Add online video media (yt-dlp)          \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 web-add       \u2502 Add open-directory media                 \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 gallery-add   \u2502 Add online gallery media (gallery-dl)    \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tabs-add      \u2502 Create a tabs database; Add URLs         \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 links-add     \u2502 Create a link-scraping database          \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 site-add      \u2502 Auto-scrape website data to SQLITE       \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 reddit-add    \u2502 Create a reddit database; Add subreddits \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 hn-add        \u2502 Create / Update a Hacker News database   \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 substack      \u2502 Backup substack articles                 \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tildes        \u2502 Backup tildes comments and topics        \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 places-import \u2502 Import places of interest (POIs)         \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 row-add       \u2502 Add arbitrary data to SQLITE             \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Text subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 cluster-sort   \u2502 Sort text and images by similarity          \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 extract-links  \u2502 Extract inner links from lists of web links \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 extract-text   \u2502 Extract human text from lists of web links  \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 markdown-links \u2502 Extract titles from lists of web links      \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 nouns          \u2502 Unstructured text -> compound nouns (stdin) \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Folder subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 merge-folders \u2502 Merge two or more file trees                     \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 relmv         \u2502 Move files preserving parent folder hierarchy    \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 mv-list       \u2502 Find specific folders to move to different disks \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 scatter       \u2502 Scatter files between folders or disks           \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    File subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 sample-hash    \u2502 Calculate a hash based on small file segments       \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 sample-compare \u2502 Compare files using sample-hash and other shortcuts \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Tabular data subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 eda              \u2502 Exploratory Data Analysis on table-like files \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 mcda             \u2502 Multi-criteria Ranking for Decision Support   \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 incremental-diff \u2502 Diff large table-like files in chunks         \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Media File subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 media-check    \u2502 Check video and audio files for corruption via ffmpeg  \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 process-ffmpeg \u2502 Shrink video/audio to AV1/Opus format (.mkv, .mka)     \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 process-image  \u2502 Shrink images by resizing and AV1 image format (.avif) \u2502\n    
\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Multi-database subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 merge-dbs        \u2502 Merge SQLITE databases \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 copy-play-counts \u2502 Copy play history      \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Filesystem Database subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 christen    \u2502 Clean filenames                \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 disk-usage  \u2502 Show disk usage                \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 mount-stats \u2502 Show some relative mount stats \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 big-dirs    \u2502 Show large folders             \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 search-db   \u2502 Search a SQLITE database       \u2502\n    
\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Media Database subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 block           \u2502 Block a channel                                             \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 playlists       \u2502 List stored playlists                                       \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 download        \u2502 Download media                                              \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 download-status \u2502 Show download status                                        \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 redownload      \u2502 Re-download deleted/lost media                              \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 history         \u2502 Show and manage playback history                            \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 history-add     \u2502 Add history from paths                                      \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 stats           \u2502 Show some event statistics (created, deleted, watched, etc) \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 search          \u2502 Search captions / subtitles                                 \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 optimize        \u2502 Re-optimize database                                        \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Playback subcommands:\n    
\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 watch      \u2502 Watch / Listen                                    \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 now        \u2502 Show what is currently playing                    \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 next       \u2502 Play next file and optionally delete current file \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 stop       \u2502 Stop all playback                                 \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 pause      \u2502 Pause all playback                                \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tabs-open  \u2502 Open your tabs for the day                        \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 links-open \u2502 Open links from link dbs                          \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 surf       \u2502 Auto-load browser tabs in a streaming way (stdin) \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Database enrichment subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 dedupe-db          \u2502 Dedupe SQLITE tables                               \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 dedupe-media       \u2502 Dedupe similar media                               \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 merge-online-local \u2502 Merge online and local data                        \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 mpv-watchlater     \u2502 Import mpv watchlater files to history             \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 reddit-selftext    \u2502 Copy selftext links to media table                 \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tabs-shuffle       \u2502 Randomize tabs.db a bit                            \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 pushshift          \u2502 Convert pushshift data to reddit.db format (stdin) \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Update database subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 fs-update      \u2502 Update local media              \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 tube-update    \u2502 Update online video media       \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 web-update     \u2502 Update open-directory media     \u2502\n    
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 gallery-update \u2502 Update online gallery media     \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 links-update   \u2502 Update a link-scraping database \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 reddit-update  \u2502 Update reddit media             \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n    Misc subcommands:\n    \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n    \u2502 export-text    \u2502 Export HTML files from SQLite databases \u2502\n    \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n    \u2502 dedupe-czkawka \u2502 Process czkawka diff output             \u2502\n    \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\n\n</details>\n\n## Examples\n\n### Watch online media on your PC\n\n    wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/mealtime.tw.db\n    library watch mealtime.tw.db --random --duration 30m\n\n### Listen to online media on a chromecast group\n\n    wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/music.tl.db\n    library listen music.tl.db -ct \"House speakers\" --random\n\n### Hook into HackerNews\n\n    wget https://github.com/chapmanjacobd/hn_mining/raw/main/hackernews_only_direct.tw.db\n    library watch hackernews_only_direct.tw.db --random --ignore-errors\n\n### Organize via separate databases\n\n    library fsadd --audio audiobooks.db ./audiobooks/\n    library fsadd --audio podcasts.db ./podcasts/ 
./another/more/secret/podcasts_folder/\n\n    # merge later if you want\n    library merge-dbs --pk path -t playlists,media both.db audiobooks.db podcasts.db\n\n    # or split\n    library merge-dbs --pk path -t playlists,media audiobooks.db both.db -w 'path like \"%/audiobooks/%\"'\n    library merge-dbs --pk path -t playlists,media podcasts.db both.db -w 'path like \"%/podcasts%\"'\n\n## Guides\n\n### Music alarm clock\n\n<details><summary>via termux crontab</summary>\n\nWake up to your own music\n\n    30 7 * * * library listen ./audio.db\n\nWake up to your own music _only when you are *not* home_ (computer on local IP)\n\n    30 7 * * * timeout 0.4 nc -z 192.168.1.12 22 || library listen --random\n\nWake up to your own music on your Chromecast speaker group _only when you are home_\n\n    30 7 * * * ssh 192.168.1.12 library listen --cast --cast-to \"Bedroom pair\"\n\n</details>\n\n\n### Browser Tabs\n\n<details><summary>Visit websites on a schedule</summary>\n\n`tabs` is a way to organize your visits to URLs that you want to remember every once in a while.\n\nThe main benefit of tabs is that you can have a large amount of tabs saved (say 500 monthly tabs) and only the smallest\namount of tabs to satisfy that goal (500/30) tabs will open each day. 17 tabs per day seems manageable--500 all at once does not.\n\nThe use-case of tabs are websites that you know are going to change: subreddits, games,\nor tools that you want to use for a few minutes daily, weekly, monthly, quarterly, or yearly.\n\n### 1. Add your websites\n\n    library tabsadd tabs.db --frequency monthly --category fun \\\n        https://old.reddit.com/r/Showerthoughts/top/?sort=top&t=month \\\n        https://old.reddit.com/r/RedditDayOf/top/?sort=top&t=month\n\n### 2. Add library tabs to cron\n\nlibrary tabs is meant to run **once per day**. Here is how you would configure it with `crontab`:\n\n    45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db\n\nOr with `systemd`:\n\n    tee ~/.config/systemd/user/tabs.service\n    [Unit]\n    Description=xklb daily browser tabs\n\n    [Service]\n    Type=simple\n    RemainAfterExit=no\n    Environment=\"DISPLAY=:0\"\n    ExecStart=\"/usr/bin/fish\" \"-c\" \"lb tabs /home/xk/lb/tabs.db\"\n\n    tee ~/.config/systemd/user/tabs.timer\n    [Unit]\n    Description=xklb daily browser tabs timer\n\n    [Timer]\n    Persistent=yes\n    OnCalendar=*-*-* 9:58\n\n    [Install]\n    WantedBy=timers.target\n\n    systemctl --user daemon-reload\n    systemctl --user enable --now tabs.service\n\nYou can also invoke tabs manually:\n\n    library tabs tabs.db -L 1  # open one tab\n\nIncremental surfing. \ud83d\udcc8\ud83c\udfc4 totally rad!\n\n</details>\n\n### Find large folders\n\n<details><summary>Curate with library big-dirs</summary>\n\nIf you are looking for candidate folders for curation (ie. 
You can also invoke tabs manually:

    library tabs tabs.db -L 1  # open one tab

Incremental surfing. 📈🏄 totally rad!

</details>

### Find large folders

<details><summary>Curate with library big-dirs</summary>

If you are looking for candidate folders for curation (i.e. you need space but don't want to buy another hard drive),
the big-dirs subcommand was written for that purpose:

    $ library big-dirs fs/d.db

You may filter by folder depth (similar to QDirStat or WizTree)

    $ library big-dirs --depth=3 audio.db

There is also a flag to prioritize folders with many deleted files (for example, you delete songs you don't like--now you can see who wrote those songs and delete all their other songs...)

    $ library big-dirs --sort-groups-by deleted audio.db

Recently, this functionality has also been integrated into the watch/listen subcommands so you could just do this:

    $ library watch --big-dirs ./my.db
    $ lb wt -B  # shorthand equivalent

</details>

### Backfill data

<details><summary>Backfill missing YouTube videos from the Internet Archive</summary>

```fish
for base in https://youtu.be/ http://youtu.be/ http://youtube.com/watch?v= https://youtube.com/watch?v= https://m.youtube.com/watch?v= http://www.youtube.com/watch?v= https://www.youtube.com/watch?v=
    sqlite3 video.db "
        update or ignore media
            set path = replace(path, '$base', 'https://web.archive.org/web/2oe_/http://wayback-fakeurl.archive.org/yt/')
              , time_deleted = 0
        where time_deleted > 0
        and (path = webpath or path not in (select webpath from media))
        and path like '$base%'
    "
end
```

</details>

<details><summary>Backfill reddit databases with pushshift data</summary>

[https://github.com/chapmanjacobd/reddit_mining/](https://github.com/chapmanjacobd/reddit_mining/)

```fish
for reddit_db in ~/lb/reddit/*.db
    set subreddits (sqlite-utils $reddit_db 'select path from playlists' --tsv --no-headers | grep old.reddit.com | sed 's|https://old.reddit.com/r/\(.*\)/|\1|' | sed 's|https://old.reddit.com/user/\(.*\)/|u_\1|' | tr -d "\r")

    ~/github/xk/reddit_mining/links/
    for subreddit in $subreddits
        if not test -e "$subreddit.csv"
            echo "octosql -o csv \"select path,score,'https://old.reddit.com/r/$subreddit/' as playlist_path from `../reddit_links.parquet` where lower(playlist_path) = '$subreddit' order by score desc \" > $subreddit.csv"
        end
    end | parallel -j8

    for subreddit in $subreddits
        sqlite-utils upsert --pk path --alter --csv --detect-types $reddit_db media $subreddit.csv
    end

    library tubeadd --safe --ignore-errors --force $reddit_db (sqlite-utils --raw-lines $reddit_db 'select path from media')
end
```

</details>

### Datasette

<details><summary>Explore `library` databases in your browser</summary>

    pip install datasette
    datasette tv.db

</details>

### Pipe to [mnamer](https://github.com/jkwill87/mnamer)

<details><summary>Rename poorly named files</summary>

    pip install mnamer
    mnamer --movie-directory ~/d/70_Now_Watching/ --episode-directory ~/d/70_Now_Watching/ \
        --no-overwrite -b (library watch -p fd -s 'path : McCloud')
    library fsadd ~/d/70_Now_Watching/

</details>

### Pipe to [lowcharts](https://github.com/juan-leon/lowcharts)

<details><summary>$ library watch -p f -col time_created | lowcharts timehist -w 80</summary>

    Matches: 445183.
    Each ∎ represents a count of 1896
    [2022-04-13 03:16:05] [151689]
\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-04-19 07:59:37] [ 16093] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-04-25 12:43:09] [ 12019] \u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-05-01 17:26:41] [ 48817] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-05-07 22:10:14] [ 36259] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-05-14 02:53:46] [  3942] \u220e\u220e\n    [2022-05-20 07:37:18] [  2371] \u220e\n    [2022-05-26 12:20:50] [   517]\n    [2022-06-01 17:04:23] [  4845] \u220e\u220e\n    [2022-06-07 21:47:55] [  2340] \u220e\n    [2022-06-14 02:31:27] [   563]\n    [2022-06-20 07:14:59] [ 13836] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-06-26 11:58:32] [  1905] \u220e\n    [2022-07-02 16:42:04] [  1269]\n    [2022-07-08 21:25:36] [  3062] \u220e\n    [2022-07-15 02:09:08] [  9192] \u220e\u220e\u220e\u220e\n    [2022-07-21 06:52:41] [ 11955] \u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-07-27 11:36:13] [ 50938] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-08-02 16:19:45] [ 70973] \u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\u220e\n    [2022-08-08 21:03:17] [  2598] \u220e\n\nBTW, for some cols like time_deleted you'll need to specify a where clause so they aren't filtered out:\n\n    $ library watch -p f -col time_deleted -w time_deleted'>'0 | lowcharts timehist -w 80\n\n![video width](https://user-images.githubusercontent.com/7908073/184737808-b96fbe65-a1d9-43c2-b6b4-4bdfab592190.png)\n\n![fps](https://user-images.githubusercontent.com/7908073/184738438-ee566a4b-2da0-4e6d-a4b3-9bfca036aa2a.png)\n\n</details>\n\n## Usage\n\n\n### Create database subcommands\n\n###### fs-add\n\n<details><summary>Add local media</summary>\n\n    $ library fs-add -h\n    usage: library fs-add [(--video) | --audio | --image |  --text | --filesystem] DATABASE PATH ...\n\n    The default database type is video:\n        library fsadd tv.db ./tv/\n        library fsadd --video tv.db ./tv/  # equivalent\n\n    You can also create audio databases. 
Both audio and video use ffmpeg to read metadata:
        library fsadd --audio audio.db ./music/

    Image uses ExifTool:
        library fsadd --image image.db ./photos/

    Text will try to read files and save the contents into a searchable database:
        library fsadd --text text.db ./documents_and_books/

    Create a text database and scan with OCR and speech-recognition:
        library fsadd --text --ocr --speech-recognition ocr.db ./receipts_and_messages/

    Create a video database and read internal/external subtitle files into a searchable database:
        library fsadd --scan-subtitles tv.search.db ./tv/ ./movies/

    Decode media to check for corruption (slow):
        library fsadd --check-corrupt
        # See media-check command for full options

    Normally only relevant filetypes are included. You can scan all files with this flag:
        library fsadd --scan-all-files mixed.db ./tv-and-maybe-audio-only-files/
        # I use that with this to keep my folders organized:
        library watch -w 'video_count=0 and audio_count>=1' -pf mixed.db | parallel mv {} ~/d/82_Audiobooks/

    Remove path roots with --force
        library fsadd audio.db /mnt/d/Youtube/
        [/mnt/d/Youtube] Path does not exist

        library fsadd --force audio.db /mnt/d/Youtube/
        [/mnt/d/Youtube] Path does not exist
        [/mnt/d/Youtube] Building file list...
        [/mnt/d/Youtube] Marking 28932 orphaned metadata records as deleted

    If you run out of RAM, for example scanning large VR videos, you can lower the number of threads via --io-multiplier

        library fsadd vr.db --delete-unplayable --check-corrupt --full-scan-if-corrupt 15% --delete-corrupt 20% ./vr/ --io-multiplier 0.2

    Move files on import

        library fsadd audio.db --move ~/library/ ./added_folder/
        This will run destination paths through `library christen` and move files relative to the added folder root


</details>

###### tube-add

<details><summary>Add online video media (yt-dlp)</summary>

    $ library tube-add -h
    usage: library tube-add [--safe] [--extra] [--subs] [--auto-subs] DATABASE URLS ...

    Create a dl database / add links to an existing database

        library tubeadd dl.db https://www.youdl.com/c/BranchEducation/videos

    Add links from a line-delimited file

        cat ./my_yt_subscriptions.txt | library tubeadd reddit.db -

    Add metadata to links already in a database table

        library tubeadd --force reddit.db (sqlite-utils --raw-lines reddit.db 'select path from media')

    Fetch extra metadata:

        By default tubeadd adds media quickly, at the expense of fetching less metadata.
        If you plan on using `library download` then it doesn't make sense to use `--extra`.
        Downloading will add the extra metadata automatically to the database.
        You can always fetch more metadata later via tubeupdate:
        library tube-update tw.db --extra


</details>

###### web-add

<details><summary>Add open-directory media</summary>

    $ library web-add -h
    usage: library web-add [(--filesystem) | --video | --audio | --image | --text] DATABASE URL ...

    Scan open directories

        library web-add open_dir.db --video http://1.1.1.1/

    Check download size of all videos matching some criteria

        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'height<720' -E preview -pa

        path         count  download_duration
       size    avg_size\n        ---------  -------  ----------------------------  ---------  ----------\n        Aggregate     5694  2 years, 7 months and 5 days  724.4 GiB   130.3 MiB\n\n    Download all videos matching some criteria\n\n        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'height<720' -E preview\n\n    Stream directly to mpv\n\n        library watch open_dir.db\n\n    Check videos before downloading\n\n        library watch open_dir.db --online-media-only --loop --exit-code-confirm -i --action ask-keep -m 4  --start 35% --volume=0 -w 'height<720' -E preview\n\n        Assuming you have bound in mpv input.conf a key to 'quit' and another key to 'quit 4',\n        using the ask-keep action will mark a video as deleted when you 'quit 4' and it will mark a video as watched when you 'quit'.\n\n        For example, here I bind \"'\" to \"KEEP\" and  \"j\" to \"DELETE\"\n\n            ' quit\n            j quit 4\n\n        This is pretty intuitive after you use it a few times but writing this out I realize this might seem a bit opaque.\n        Instead of using built-in post-actions (example above) you could also do something like\n            `--cmd5 'echo {} >> keep.txt' --cmd6 'echo {} >> rejected.txt'`\n\n        But you will still bind keys in mpv input.conf:\n\n            k quit 5  # goes to keep.txt\n            r quit 6  # goes to rejected.txt\n\n    Download checked videos\n\n        library download --fs open_dir.db --prefix ~/d/dump/video/ -w 'id in (select media_id from history)'\n\n    View most recent files\n\n        library fs example_dbs/web_add.image.db -u time_modified desc --cols path,width,height,size,time_modified -p -l 10\n        path                                                                                                                      width    height       size  time_modified\n        ----------------------------------------------------------------------------------------------------------------------  -------  --------  ---------  -----------------\n        https://siliconpr0n.org/map/infineon/m7690-b1/single/infineon_m7690-b1_infosecdj_mz_nikon20x.jpg                           7066     10513   16.4 MiB  2 days ago, 20:54\n        https://siliconpr0n.org/map/starchip/scf384g/single/starchip_scf384g_infosecdj_mz_nikon20x.jpg                            10804     10730   19.2 MiB  2 days ago, 15:31\n        https://siliconpr0n.org/map/hp/2hpt20065-1-68k-core/single/hp_2hpt20065-1-68k-core_marmontel_mz_ms50x-1.25.jpg            28966     26816  192.2 MiB  4 days ago, 15:05\n        https://siliconpr0n.org/map/hp/2hpt20065-1-68k-core/single/hp_2hpt20065-1-68k-core_marmontel_mz_ms20x-1.25.jpg            11840     10978   49.2 MiB  4 days ago, 15:04\n        https://siliconpr0n.org/map/hp/2hpt20065-1/single/hp_2hpt20065-1_marmontel_mz_ms10x-1.25.jpg                              16457     14255  101.4 MiB  4 days ago, 15:03\n        https://siliconpr0n.org/map/pervasive/e2213ps01e1/single/pervasive_e2213ps01e1_azonenberg_back_roi1_mit10x_rotated.jpg    18880     61836  136.8 MiB  6 days ago, 16:00\n        https://siliconpr0n.org/map/pervasive/e2213ps01e/single/pervasive_e2213ps01e_azonenberg_back_mit5x_rotated.jpg            62208     30736  216.5 MiB  6 days ago, 15:57\n        https://siliconpr0n.org/map/amd/am2964bpc/single/amd_am2964bpc_infosecdj_mz_lmplan10x.jpg                                 12809     11727   39.8 MiB  6 days ago, 10:28\n        
        https://siliconpr0n.org/map/unknown/ks1804ir1/single/unknown_ks1804ir1_infosecdj_mz_lmplan10x.jpg                          6508      6707    8.4 MiB  6 days ago, 08:04
        https://siliconpr0n.org/map/amd/am2960dc-b/single/amd_am2960dc-b_infosecdj_mz_lmplan10x.jpg                               16434     15035   64.9 MiB  7 days ago, 19:01
        10 media (limited by --limit 10)


</details>

###### gallery-add

<details><summary>Add online gallery media (gallery-dl)</summary>

    $ library gallery-add -h
    usage: library gallery-add DATABASE URLS

    Add gallery_dl URLs to download later or periodically update

    If you have many URLs use stdin

        cat ./my-favorite-manhwa.txt | library galleryadd your.db --insert-only -


</details>

###### tabs-add

<details><summary>Create a tabs database; Add URLs</summary>

    $ library tabs-add -h
    usage: library tabs-add [--frequency daily weekly (monthly) quarterly yearly] [--no-sanitize] DATABASE URLS ...

    Adding one URL:

        library tabsadd -f daily tabs.db https://wiby.me/surprise/

        Depending on your shell you may need to escape the URL (add quotes)

        If you use Fish shell know that you can enable features to make pasting easier:
            set -U fish_features stderr-nocaret qmark-noglob regex-easyesc ampersand-nobg-in-token

        Also I recommend turning Ctrl+Backspace into a super-backspace for repeating similar commands with long args:
            echo 'bind \b backward-kill-bigword' >> ~/.config/fish/config.fish

    Importing from a line-delimited file:

        library tabsadd -f yearly -c reddit tabs.db (cat ~/mc/yearly-subreddit.cron)


</details>

###### links-add

<details><summary>Create a link-scraping database</summary>
    $ library links-add -h
    usage: library links-add DATABASE PATH ... [--case-sensitive] [--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]] [--selenium] [--manual] [--scroll] [--auto-pager] [--poke] [--chrome] [--local-html] [--file FILE]

    Database version of extract-links

    You can fine-tune what links get saved with --path/text/before/after-include/exclude.

        library links-add --path-include /video/

    Defaults to stop fetching

        After encountering ten pages with no new links:
        library links-add --stop-pages-no-new 10

        Some websites don't give an error when you try to access pages which don't exist.
        To compensate for this, the script will only continue fetching pages until there are neither new nor known links for four pages:
        library links-add --stop-pages-no-match 4

    Backfill fixed number of pages

        You can disable automatic stopping by any of the following:

        - Set `--backfill-pages` to the desired number of pages for the first run
        - Set `--fixed-pages` to _always_ fetch the desired number of pages

        If the website is supported by --auto-pager, data is fetched twice when using page iteration.
        As such, page iteration (--max-pages, --fixed-pages, etc) is disabled when using `--auto-pager`.

        You can unset --fixed-pages for all the playlists in your database by running this command:
        sqlite3 your.db "UPDATE playlists SET extractor_config = json_replace(extractor_config, '$.fixed_pages', null)"

    To use "&p=1" instead of "&page=1"

        library links-add --page-key p

        By default the script will attempt to modify each given URL with "&page=1".

    Single page

        If `--fixed-pages` is 1 and --page-start is not set then the URL will not be modified.

        library links-add --fixed-pages=1
        Loading page https://site/path

        library links-add --fixed-pages=1 --page-start 99
        Loading page https://site/path?page=99

    Reverse chronological paging

        library links-add --max-pages 10
        library links-add --fixed-pages (overrides --max-pages and --stop-known but you can still stop early via --stop-link, i.e. a 429 page)

    Chronological paging

        library links-add --page-start 100 --page-step 1

        library links-add --page-start 100 --page-step=-1 --fixed-pages=5  # go backwards

        # TODO: store previous page id (max of sliding window)

    Jump pages

        Some pages don't count page numbers but instead count items like messages or forum posts. You can iterate through like this:

        library links-add --page-key start --page-start 0 --page-step 50

        which translates to
        &start=0    first page
        &start=50   second page
        &start=100  third page

    Page folders

        Some websites use paths instead of query parameters.
In this case make sure the URL provided includes that information with a matching --page-key\n\n        library links-add --page-key page https://website/page/1/\n        library links-add --page-key article https://website/article/1/\n\n    Import links from args\n\n        library links-add --no-extract links.db (cb)\n\n    Import lines from stdin\n\n        cb | lb linksdb example_dbs/links.db --skip-extract -\n\n    Other Examples\n\n        library links-add links.db https://video/site/ --path-include /video/\n\n        library links-add links.db https://loginsite/ --path-include /article/ --cookies-from-browser firefox\n        library links-add links.db https://loginsite/ --path-include /article/ --cookies-from-browser chrome\n\n        library links-add --path-include viewtopic.php --cookies-from-browser firefox \\\n        --page-key start --page-start 0 --page-step 50 --fixed-pages 14 --stop-pages-no-match 1 \\\n        plab.db https://plab/forum/tracker.php?o=(string replace ' ' \\n -- 1 4 7 10 15)&s=2&tm=-1&f=(string replace ' ' \\n -- 1670 1768 60 1671 1644 1672 1111 508 555 1112 1718 1143 1717 1851 1713 1712 1775 1674 902 1675 36 1830 1803 1831 1741 1676 1677 1780 1110 1124 1784 1769 1793 1797 1804 1819 1825 1836 1842 1846 1857 1861 1867 1451 1788 1789 1792 1798 1805 1820 1826 1837 1843 1847 1856 1862 1868 284 1853 1823 1800 1801 1719 997 1818 1849 1711 1791 1762)\n\n\n</details>\n\n###### site-add\n\n<details><summary>Auto-scrape website data to SQLITE</summary>\n\n    $ library site-add -h\n    usage: library site-add DATABASE PATH ... [--auto-pager] [--poke] [--local-html] [--file FILE]\n\n    Extract data from website requests to a database\n\n        library siteadd jobs.st.db --poke https://hk.jobsdb.com/hk/search-jobs/python/\n\n    Requires selenium-wire\n    Requires xmltodict when using --extract-xml\n\n        pip install selenium-wire xmltodict\n\n    Run with `-vv` to see and interact with the browser\n\n\n</details>\n\n###### reddit-add\n\n<details><summary>Create a reddit database; Add subreddits</summary>\n\n    $ library reddit-add -h\n    usage: library reddit-add [--lookback N_DAYS] [--praw-site bot1] DATABASE URLS ...\n\n    Fetch data for redditors and reddits:\n\n        library redditadd interesting.db https://old.reddit.com/r/coolgithubprojects/ https://old.reddit.com/user/Diastro\n\n    If you have a file with a list of subreddits you can do this:\n\n        library redditadd 96_Weird_History.db --subreddits (cat ~/mc/96_Weird_History-reddit.txt)\n\n    Likewise for redditors:\n\n        library redditadd shadow_banned.db --redditors (cat ~/mc/shadow_banned.txt)\n\n    Note that reddit's API is limited to 1000 posts and it usually doesn't go back very far historically.\n    Also, it may be the case that reddit's API (praw) will stop working in the near future. 
For both of these problems\n    my suggestion is to use pushshift data.\n    You can find more info here: https://github.com/chapmanjacobd/reddit_mining#how-was-this-made\n\n\n</details>\n\n###### hn-add\n\n<details><summary>Create / Update a Hacker News database</summary>\n\n    $ library hn-add -h\n    usage: library hn-add [--oldest] DATABASE\n\n    Fetch latest stories first:\n\n        library hnadd hn.db -v\n        Fetching 154873 items (33212696 to 33367569)\n        Saving comment 33367568\n        Saving comment 33367543\n        Saving comment 33367564\n        ...\n\n    Fetch oldest stories first:\n\n        library hnadd --oldest hn.db\n\n\n</details>\n\n###### substack\n\n<details><summary>Backup substack articles</summary>\n\n    $ library substack -h\n    usage: library substack DATABASE PATH ...\n\n    Backup substack articles\n\n\n</details>\n\n###### tildes\n\n<details><summary>Backup tildes comments and topics</summary>\n\n    $ library tildes -h\n    usage: library tildes DATABASE USER\n\n    Backup tildes.net user comments and topics\n\n        library tildes tildes.net.db xk3\n\n    Without cookies you are limited to the first page. You can use cookies like this:\n        https://github.com/rotemdan/ExportCookies\n        library tildes tildes.net.db xk3 --cookies ~/Downloads/cookies-tildes-net.txt\n\n\n</details>\n\n###### places-import\n\n<details><summary>Import places of interest (POIs)</summary>\n\n    $ library places-import -h\n    usage: library places-import DATABASE PATH ...\n\n    Load POIs from Google Maps Google Takeout\n\n\n</details>\n\n###### row-add\n\n<details><summary>Add arbitrary data to SQLITE</summary>\n\n    $ library row-add -h\n    usage: library row-add DATABASE [--table-name TABLE_NAME]\n\n    Add a row to sqlite\n\n        library row-add t.db --test_b 1 --test-a 2\n\n        ### media (1 rows)\n        |   test_b |   test_a |\n        |----------|----------|\n        |        1 |        2 |\n\n\n</details>\n\n### Text subcommands\n\n###### cluster-sort\n\n<details><summary>Sort text and images by similarity</summary>\n\n    $ library cluster-sort -h\n    usage: library cluster-sort [input_path | stdin] [output_path | stdout]\n\n    Group lines of text into sorted output\n\n        echo 'red apple\n        broccoli\n        yellow\n        green\n        orange apple\n        red apple' | library cluster-sort\n\n        orange apple\n        red apple\n        red apple\n        broccoli\n        green\n        yellow\n\n    Show the groupings\n\n        echo 'red apple\n        broccoli\n        yellow\n        green\n        orange apple\n        red apple' | library cluster-sort --print-groups\n\n        [\n            {'grouped_paths': ['orange apple', 'red apple', 'red apple']},\n            {'grouped_paths': ['broccoli', 'green', 'yellow']}\n        ]\n\n    Auto-sort images into directories\n\n        echo 'image1.jpg\n        image2.jpg\n        image3.jpg' | library cluster-sort --image --move-groups\n\n    Print similar paths\n\n        library fs 0day.db -pa --cluster --print-groups\n\n\n\n</details>\n\n###### extract-links\n\n<details><summary>Extract inner links from lists of web links</summary>\n\n    $ library extract-links -h\n    usage: library extract-links PATH ... [--case-sensitive] [--scroll] [--download] [--verbose] [--local-html] [--file FILE] [--path-include ...] [--text-include ...] [--after-include ...] [--before-include ...] [--path-exclude ...] [--text-exclude ...] [--after-exclude ...] 
[--before-exclude ...]

    Extract links from within local HTML fragments, files, or remote pages; filtering on link text and nearby plain-text

        library links https://en.wikipedia.org/wiki/List_of_bacon_dishes --path-include https://en.wikipedia.org/wiki/ --after-include famous
        https://en.wikipedia.org/wiki/Omelette

    Read from local clipboard and filter out links based on nearby plain text:

        library links --local-html (cb -t text/html | psub) --after-exclude paranormal spooky horror podcast tech fantasy supernatural lecture sport
        # note: the equivalent BASH-ism is <(xclip -selection clipboard -t text/html)

    Run with `-vv` to see the browser


</details>

###### extract-text

<details><summary>Extract human text from lists of web links</summary>

    $ library extract-text -h
    usage: library extract-text PATH ... [--skip-links]

    Sorting suggestions

        lb extract-text --skip-links --local-file (cb -t text/html | psub) | lb cs --groups | jq -r '.[] | .grouped_paths | "\n" + join("\n")'


</details>

###### markdown-links

<details><summary>Extract titles from lists of web links</summary>

    $ library markdown-links -h
    usage: library markdown-links URL ... [--cookies COOKIES] [--cookies-from-browser BROWSER[+KEYRING][:PROFILE][::CONTAINER]] [--firefox] [--chrome] [--allow-insecure] [--scroll] [--manual] [--auto-pager] [--poke] [--file FILE]

    Convert URLs into Markdown links with page titles filled in

        $ lb markdown-links https://www.youtube.com/watch?v=IgZDDW-NXDE
        [Work For Peace](https://www.youtube.com/watch?v=IgZDDW-NXDE)


</details>

###### nouns

<details><summary>Unstructured text -> compound nouns (stdin)</summary>

    $ library nouns -h
    usage: library nouns (stdin)

    Extract compound nouns and phrases from unstructured mixed HTML plain text

        xsv select text hn_comment_202210242109.csv | library nouns | sort | uniq -c | sort --numeric-sort


</details>

### Folder subcommands

###### merge-folders

<details><summary>Merge two or more file trees</summary>

    $ library merge-folders -h
    usage: library merge-folders [--replace] [--no-replace] [--simulate] SOURCES ... DESTINATION

    Merge multiple folders with the same file tree into a single folder.

    https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src

    Trumps are new or replaced files from an earlier source which now conflict with a later source.
    If you only have one source then the count of trumps will always be zero.
    The count of conflicts also includes trumps.
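    For example, a dry run merging two sources into one destination (these paths are hypothetical):

        library merge-folders --simulate ./downloads1/ ./downloads2/ ./merged/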

</details>

###### relmv

<details><summary>Move files preserving parent folder hierarchy</summary>

    $ library relmv -h
    usage: library relmv [--simulate] SOURCE ... DEST

    Move files/folders without losing hierarchy metadata

    Move fresh music to your phone every Sunday:

        # move last week's music back to its source folders
        library mv /mnt/d/sync/weekly/ /mnt/d/check/audio/

        # move new music for this week
        library relmv (
            library listen audio.db --local-media-only --where 'play_count=0' --random -L 600 -p f
        ) /mnt/d/sync/weekly/


</details>

###### mv-list

<details><summary>Find specific folders to move to different disks</summary>

    $ library mv-list -h
    usage: library mv-list [--limit LIMIT] [--lower LOWER] [--upper UPPER] MOUNT_POINT DATABASE

    Free up space on a specific disk. Find candidates for moving data to a different mount point

    The program takes a mount point and an xklb database file. If you don't have a database file you can create one like this:

        library fsadd --filesystem d.db ~/d/

    But this should definitely also work with xklb audio and video databases:

        library mv-list /mnt/d/ video.db

    The program will print a table with a sorted list of folders which are good candidates for moving.
    Candidates are determined by how many files are in the folder (so you don't spend hours waiting for folders with millions of tiny files to copy over).
    The default is 4 to 4000--but it can be adjusted via the --lower and --upper flags.

        ...
        ├──────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
        │ 4.0 GB   │       7 │ /mnt/d/71_Mealtime_Videos/unsorted/Miguel_4K/                                                                 │
        ├──────────┼─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
        │ 5.7 GB   │      10 │ /mnt/d/71_Mealtime_Videos/unsorted/Bollywood_Premium/                                                         │
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 2.3 GB   \u2502       4 \u2502 /mnt/d/71_Mealtime_Videos/chief_wiggum/                                                                       \u2502\n        \u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\n        6702 other folders not shown\n\n        \u2588\u2588\u2557\u2588\u2588\u2588\u2557\u2591\u2591\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2557\u2591\u2591\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2588\u2557\n        \u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2550\u2550\u255d\u255a\u2550\u2550\u2588\u2588\u2554\u2550\u2550\u255d\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u255a\u2550\u2550\u2588\u2588\u2554\u2550\u2550\u255d\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2550\u2550\u255d\n        \u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2557\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u255a\u2550\u255d\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2557\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2557\u2591\n        
\u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2551\u2591\u255a\u2550\u2550\u2550\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2551\u2591\u255a\u2550\u2550\u2550\u2588\u2588\u2557\n        \u2588\u2588\u2551\u2588\u2588\u2551\u2591\u255a\u2588\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u255a\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2591\u2591\u2591\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2591\u255a\u2588\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\n        \u255a\u2550\u255d\u255a\u2550\u255d\u2591\u2591\u255a\u2550\u2550\u255d\u255a\u2550\u2550\u2550\u2550\u2550\u255d\u2591\u2591\u2591\u2591\u255a\u2550\u255d\u2591\u2591\u2591\u255a\u2550\u255d\u2591\u2591\u255a\u2550\u255d\u2591\u255a\u2550\u2550\u2550\u2550\u2550\u255d\u2591\u2591\u255a\u2550\u2550\u2550\u2550\u255d\u2591\u2591\u2591\u2591\u255a\u2550\u255d\u2591\u2591\u2591\u255a\u2550\u255d\u2591\u255a\u2550\u2550\u2550\u2550\u255d\u2591\u255a\u2550\u255d\u2591\u2591\u255a\u2550\u2550\u255d\u255a\u2550\u2550\u2550\u2550\u2550\u255d\u2591\n\n        Type \"done\" when finished\n        Type \"more\" to see more files\n        Paste a folder (and press enter) to toggle selection\n        Type \"*\" to select all files in the most recently printed table\n\n    Then it will give you a prompt:\n\n        Paste a path:\n\n    Wherein you can copy and paste paths you want to move from the table and the program will keep track for you.\n\n        Paste a path: /mnt/d/75_MovieQueue/720p/s11/\n        26 selected paths: 162.1 GB ; future free space: 486.9 GB\n\n    You can also press the up arrow or paste it again to remove it from the list:\n\n        Paste a path: /mnt/d/75_MovieQueue/720p/s11/\n        25 selected paths: 159.9 GB ; future free space: 484.7 GB\n\n    After you are done selecting folders you can press ctrl-d and it will save the list to a tmp file:\n\n        Paste a path: done\n\n            Folder list saved to /tmp/tmp7x_75l8. 
You may want to use the following command to move files to an EMPTY folder target:\n\n                rsync -a --info=progress2 --no-inc-recursive --remove-source-files --files-from=/tmp/tmp7x_75l8 -r --relative -vv --dry-run / jim:/free/real/estate/\n\n\n</details>\n\n###### scatter\n\n<details><summary>Scatter files between folders or disks</summary>\n\n    $ library scatter -h\n    usage: library scatter [--limit LIMIT] [--policy POLICY] [--sort SORT] --targets TARGETS DATABASE RELATIVE_PATH ...\n\n    Balance files across filesystem folder trees or multiple devices (mostly useful for mergerfs)\n\n    Scatter filesystem folder trees (without mountpoints; limited functionality; good for balancing fs inodes)\n\n        library scatter scatter.db /test/{0,1,2,3,4,5,6,7,8,9}\n\n    Reduce number of files per folder (creates more folders)\n\n        library scatter scatter.db --max-files-per-folder 16000 /test/{0,1,2,3,4,5,6,7,8,9}\n\n    Multi-device re-bin: balance by size\n\n        library scatter -m /mnt/d1:/mnt/d2:/mnt/d3:/mnt/d4/:/mnt/d5:/mnt/d6:/mnt/d7 fs.db subfolder/of/mergerfs/mnt\n        Current path distribution:\n        \u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n        \u2502 mount   \u2502   file_count \u2502 total_size   \u2502 median_size   \u2502 time_created   \u2502 time_modified   \u2502 time_downloaded\u2502\n        \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n        \u2502 /mnt/d1 \u2502        12793 \u2502 169.5 GB     \u2502 4.5 MB        \u2502 Jan 27         \u2502 Jul 19 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d2 \u2502        13226 \u2502 177.9 GB     \u2502 4.7 MB        \u2502 Jan 27    
     \u2502 Jul 19 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d3 \u2502            1 \u2502 717.6 kB     \u2502 717.6 kB      \u2502 Jan 31         \u2502 Jul 18 2022     \u2502 yesterday      \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d4 \u2502           82 \u2502 1.5 GB       \u2502 12.5 MB       \u2502 Jan 31         \u2502 Apr 22 2022     \u2502 yesterday      \u2502\n        \u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\n\n        Simulated path distribution:\n        5845 files should be moved\n        20257 files should not be moved\n        \u2552\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2564\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2555\n        \u2502 mount   \u2502   file_count \u2502 total_size   \u2502 median_size   \u2502 time_created   \u2502 time_modified   \u2502 time_downloaded\u2502\n        
\u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256a\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561\n        \u2502 /mnt/d1 \u2502         9989 \u2502 46.0 GB      \u2502 2.4 MB        \u2502 Jan 27         \u2502 Jul 19 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d2 \u2502        10185 \u2502 46.0 GB      \u2502 2.4 MB        \u2502 Jan 27         \u2502 Jul 19 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d3 \u2502         1186 \u2502 53.6 GB      \u2502 30.8 MB       \u2502 Jan 27         \u2502 Apr 07 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d4 \u2502         1216 \u2502 49.5 GB      \u2502 29.5 MB       \u2502 Jan 27         \u2502 Apr 07 2022     \u2502 Jan 31         \u2502\n        
\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d5 \u2502         1146 \u2502 53.0 GB      \u2502 30.9 MB       \u2502 Jan 27         \u2502 Apr 07 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d6 \u2502         1198 \u2502 48.8 GB      \u2502 30.6 MB       \u2502 Jan 27         \u2502 Apr 07 2022     \u2502 Jan 31         \u2502\n        \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n        \u2502 /mnt/d7 \u2502         1182 \u2502 52.0 GB      \u2502 30.9 MB       \u2502 Jan 27         \u2502 Apr 07 2022     \u2502 Jan 31         \u2502\n        \u2558\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2567\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u255b\n        ### Move 1182 files to /mnt/d7 with this command: ###\n        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpmr1628ij / /mnt/d7\n        ### Move 1198 files to /mnt/d6 with this command: ###\n        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmp9yd75f6j / /mnt/d6\n        ### Move 1146 files to /mnt/d5 with this command: ###\n        rsync -aE --xattrs 
--info=progress2 --remove-source-files --files-from=/tmp/tmpfrj141jj / /mnt/d5
        ### Move 1185 files to /mnt/d3 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmpqh2euc8n / /mnt/d3
        ### Move 1134 files to /mnt/d4 with this command: ###
        rsync -aE --xattrs --info=progress2 --remove-source-files --files-from=/tmp/tmphzb0gj92 / /mnt/d4

    Multi-device re-bin: balance device inodes for specific subfolder

        library scatter -m /mnt/d1:/mnt/d2 fs.db subfolder --group count --sort 'size desc'

    Multi-device re-bin: only consider the most recent 100 files

        library scatter -m /mnt/d1:/mnt/d2 -l 100 -s 'time_modified desc' fs.db /

    Multi-device re-bin: empty out a disk (/mnt/d2) into many other disks (/mnt/d1, /mnt/d3, and /mnt/d4)

        library scatter fs.db -m /mnt/d1:/mnt/d3:/mnt/d4 /mnt/d2

    This tool is intended for local use. If transferring many small files across the network,
    something like [fpart](https://github.com/martymac/fpart) or [fpsync](https://www.fpart.org/fpsync/) will be better.


</details>

### File subcommands

###### sample-hash

<details><summary>Calculate a hash based on small file segments</summary>

    $ library sample-hash -h
    usage: library sample-hash [--threads 10] [--chunk-size BYTES] [--gap BYTES OR 0.0-1.0*FILESIZE] PATH ...

    Calculate hashes for large files by reading only small segments of each file

        library sample-hash ./my_file.mkv

    The threads flag seems to be faster for rotational media but slower on SSDs
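    The gist of the approach, as a rough fish sketch (an illustration of the idea only--not
    the exact segment selection or hash that the subcommand uses):

        # hash just the first and last MiB of a file, then combine the two digests
        set f ./my_file.mkv
        set head_hash (head -c 1048576 $f | sha256sum | cut -d' ' -f1)
        set tail_hash (tail -c 1048576 $f | sha256sum | cut -d' ' -f1)
        echo "$head_hash$tail_hash" | sha256sum | cut -d' ' -f1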

</details>

###### sample-compare

<details><summary>Compare files using sample-hash and other shortcuts</summary>

    $ library sample-compare -h
    usage: library sample-compare [--threads 10] [--chunk-size BYTES] [--gap BYTES OR 0.0-1.0*FILESIZE] PATH ...

    Convenience subcommand to compare multiple files using sample-hash


</details>

### Tabular data subcommands

###### eda

<details><summary>Exploratory Data Analysis on table-like files</summary>

    $ library eda -h
    usage: library eda PATH ... [--table TABLE] [--start-row START_ROW] [--end-row END_ROW] [--repl]

    Perform Exploratory Data Analysis (EDA) on one or more files

    Only 20,000 rows per file are loaded for performance purposes. Set `--end-row inf` to read all the rows and/or run out of RAM.


</details>

###### mcda

<details><summary>Multi-criteria Ranking for Decision Support</summary>

    $ library mcda -h
    usage: library mcda PATH ... [--table TABLE] [--start-row START_ROW] [--end-row END_ROW]

    Perform Multiple Criteria Decision Analysis (MCDA) on one or more files

    Only 20,000 rows per file are loaded for performance purposes. Set `--end-row inf` to read all the rows and/or run out of RAM.

    $ library mcda ~/storage.csv --minimize price --ignore warranty

        ### Goals
        #### Maximize
        - size
        #### Minimize
        - price

        |    |   price |   size |   warranty |   TOPSIS |      MABAC |   SPOTIS |   BORDA |
        |----|---------|--------|------------|----------|------------|----------|---------|
        |  0 |     359 |     36 |          5 | 0.769153 |  0.348907  | 0.230847 | 7.65109 |
        |  1 |     453 |     40 |          2 | 0.419921 |  0.0124531 | 0.567301 | 8.00032 |
        |  2 |     519 |     44 |          2 | 0.230847 | -0.189399  | 0.769153 | 8.1894  |

    $ library mcda ~/storage.csv --ignore warranty

        ### Goals
        #### Maximize
        - price
        - size

        |    |   price |   size |   warranty |   TOPSIS |     MABAC |   SPOTIS |   BORDA |
        |----|---------|--------|------------|----------|-----------|----------|---------|
        |  2 |     519 |     44 |          2 | 1        |  0.536587 | 0        | 7.46341 |
        |  1 |     453 |     40 |          2 | 0.580079 |  0.103888 | 0.432699 | 7.88333 |
        |  0 |     359 |     36 |          5 | 0        | -0.463413 | 1        | 8.46341 |

    It also works with HTTP/GCS/S3 URLs:

    $ library mcda https://en.wikipedia.org/wiki/List_of_Academy_Award-winning_films --clean --minimize Year

        ### Goals

        #### Maximize

        - Nominations
        - Awards

        #### Minimize

        - Year

        |      | Film                                                                    |   Year |   Awards |   Nominations |      TOPSIS |    MABAC |      SPOTIS |   BORDA |
        |------|-------------------------------------------------------------------------|--------|----------|---------------|-------------|----------|-------------|---------|
        |  378 | Titanic                                                                 |   1997 |       11 |            14 | 0.999993    | 1.38014  | 4.85378e-06 | 4116.62 |
        |  868 | Ben-Hur                                                                 |   1959 |       11 |            12 | 0.902148    | 1.30871  | 0.0714303   | 4116.72 |
        |  296 | The Lord of the Rings: The Return of the King                           |   2003 |       11 |            11 | 0.8558      | 1.27299  | 0.107147    | 4116.76 |
        | 1341 | West Side Story                                                         |   1961 |       10 |            11 | 0.837716    | 1.22754  | 0.152599    | 4116.78 |
        |  389 | The English Patient                                                     |   1996 |        9 |            12 | 0.836725    | 1.2178   | 0.162341    | 4116.78 |
        | 1007 | Gone with the Wind                                                      |   1939 |        8 |            13 | 0.807086    | 1.20806  | 0.172078    | 4116.81 |

</details>

### Media File subcommands

###### media-check

<details><summary>Check video and audio files for corruption via ffmpeg</summary>

    $ library media-check -h
    usage: library media-check [--chunk-size SECONDS] [--gap SECONDS OR 0.0-1.0*DURATION] [--delete-corrupt >0-100] [--full-scan] [--audio-scan] PATH ...

    Defaults to decoding 0.5 seconds per 10% of each file

        library media-check ./video.mp4

    Decode all the frames of each file to evaluate how corrupt it is
    (scan time is very slow: about 150 seconds for an hour-long file)

        library media-check --full-scan ./video.mp4

    Decode all the packets of each file to evaluate how corrupt it is
    (scan time is about one second per file but only accurate for formats where 1 packet == 1 frame)

        library media-check --full-scan --gap 0 ./video.mp4

    Decode all audio of each file to evaluate how corrupt it is
    (scan time is about four seconds per file)

        library media-check --full-scan --audio ./video.mp4

    Decode at least one frame at the start and end of each file to evaluate how corrupt it is
    (scan time is about one second per file)

        library media-check --chunk-size 5% --gap 99.9% ./video.mp4

    Decode 3s every 5% of a file to evaluate how corrupt it is
    (scan time is about three seconds per file)

        library media-check --chunk-size 3 --gap 5% ./video.mp4

    Delete the file if 20 percent or more of checks fail

        library media-check --delete-corrupt 20% ./video.mp4
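
    For a quick spot-check of a whole folder without creating a database, something like this `fd` one-liner should also work (the extensions and path here are placeholders):

        fd -tf -emp4 -emkv . ./video/ -x library media-check --chunk-size 5%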

    To scan a large folder use `fsadd`. I recommend something like this two-stage approach:

        library fsadd --delete-unplayable --check-corrupt --chunk-size 5% tmp.db ./video/ ./folders/
        library media-check (library fs tmp.db -w 'corruption>15' -pf) --full-scan --delete-corrupt 25%

    The above can now be done in one command via `--full-scan-if-corrupt`:

        library fsadd --delete-unplayable --check-corrupt --chunk-size 5% tmp.db ./video/ ./folders/ --full-scan-if-corrupt 15% --delete-corrupt 25%

    Corruption stats

        library fs tmp.db -w 'corruption>15' -pa
        path         count  duration             avg_duration         size    avg_size
        ---------  -------  -------------------  --------------  ---------  ----------
        Aggregate      907  15 days and 9 hours  24 minutes      130.6 GiB   147.4 MiB

    Corruption graph

        sqlite --raw-lines tmp.db 'select corruption from media' | lowcharts hist --min 10 --intervals 10

        Samples = 931; Min = 10.0; Max = 100.0
        Average = 39.1; Variance = 1053.103; STD = 32.452
        each ∎ represents a count of 6
        [ 10.0 ..  19.0] [561] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
        [ 19.0 ..  28.0] [ 69] ∎∎∎∎∎∎∎∎∎∎∎
        [ 28.0 ..  37.0] [ 33] ∎∎∎∎∎
        [ 37.0 ..  46.0] [ 18] ∎∎∎
        [ 46.0 ..  55.0] [ 14] ∎∎
        [ 55.0 ..  64.0] [ 12] ∎∎
        [ 64.0 ..  73.0] [ 15] ∎∎
        [ 73.0 ..  82.0] [ 18] ∎∎∎
        [ 82.0 ..  91.0] [ 50] ∎∎∎∎∎∎∎∎
        [ 91.0 .. 100.0] [141] ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎


</details>

###### process-ffmpeg

<details><summary>Shrink video/audio to AV1/Opus format (.mkv, .mka)</summary>

    $ library process-ffmpeg -h
    usage: library process-ffmpeg PATH ... [--always-split] [--split-longer-than DURATION] [--min-split-segment SECONDS] [--simulate]

    Resize videos to a maximum of 1440x960px and re-encode to AV1 and/or Opus to save space

    Convert audio to Opus. Optionally split up long tracks into multiple files.

        fd -tf -eDTS -eAAC -eWAV -eAIF -eAIFF -eFLAC -eM4A -eMP3 -eOGG -eMP4 -eWMA -j4 -x library process --audio

    Use --always-split to _always_ split files if silence is detected

        library process-audio --always-split audiobook.m4a

    Use --split-longer-than to _only_ detect silence for files in excess of a specific duration

        library process-audio --split-longer-than 36mins audiobook.m4b audiobook2.mp3
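
    The usage line above also lists a --simulate flag; presumably a dry run of a folder looks something like this (untested sketch):

        library process-ffmpeg --simulate ./video/folder/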

</details>

###### process-image

<details><summary>Shrink images by resizing and AV1 image format (.avif)</summary>

    $ library process-image -h
    usage: library process-image PATH ...

    Resize images to a maximum of 2400x2400px and convert to AVIF to save space
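
    A minimal sketch (the glob is a placeholder; your shell expands it into multiple PATHs):

        library process-image ./photos/*.jpg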

</details>

### Multi-database subcommands

###### merge-dbs

<details><summary>Merge SQLITE databases</summary>

    $ library merge-dbs -h
    usage: library merge-dbs DEST_DB SOURCE_DB ... [--only-target-columns] [--only-new-rows] [--upsert] [--pk PK ...] [--table TABLE ...]

    Merge-DBs will insert new rows from source dbs to the target db, table by table. If primary key(s) are provided
    and there is an existing row with the same PK, the default action is to delete the existing row and insert the new row,
    replacing all existing fields.

    Upsert mode will update each matching PK row such that if a source row has a NULL field and
    the destination row has a value then the value will be preserved instead of changed to the source row's NULL value.

    Ignore mode (--only-new-rows) will insert only rows which don't already exist in the destination db

    Test first by using temp databases as the destination db.
    Try out different modes / flags until you are satisfied with the behavior of the program

        library merge-dbs --pk path (mktemp --suffix .db) tv.db movies.db

    Merge database data and tables

        library merge-dbs --upsert --pk path video.db tv.db movies.db
        library merge-dbs --only-target-columns --only-new-rows --table media,playlists --pk path --skip-column id audio-fts.db audio.db

        library merge-dbs --pk id --only-tables subreddits reddit/81_New_Music.db audio.db
        library merge-dbs --only-new-rows --pk subreddit,path --only-tables reddit_posts reddit/81_New_Music.db audio.db -v

    To skip copying primary-keys from the source table(s) use --business-keys instead of --primary-keys

    Split DBs using --where

        library merge-dbs --pk path specific-site.db big.db -v --only-new-rows -t media,playlists -w 'path like "https://specific-site%"'


</details>

###### copy-play-counts

<details><summary>Copy play history</summary>

    $ library copy-play-counts -h
    usage: library copy-play-counts DEST_DB SOURCE_DB ... [--source-prefix x] [--target-prefix y]

    Copy play count information between databases

        library copy-play-counts audio.db phone.db --source-prefix /storage/6E7B-7DCE/d --target-prefix /mnt/d


</details>

### Filesystem Database subcommands

###### christen

<details><summary>Clean filenames</summary>

    $ library christen -h
    usage: library christen DATABASE [--run]

    Rename files to be somewhat normalized

    Default mode is simulate

        library christen fs.db

    To actually do stuff use the run flag

        library christen audio.db --run

    You can optionally replace all the spaces in your filenames with dots

        library christen --dot-space video.db


</details>

###### disk-usage

<details><summary>Show disk usage</summary>

    $ library disk-usage -h
    usage: library disk-usage DATABASE [--sort-groups-by size | count] [--depth DEPTH] [PATH / SUBSTRING SEARCH]

    Only include files smaller than 1kib

        library disk-usage du.db --size=-1Ki
        lb du du.db -S-1Ki
        | path                                  |      size |   count |
        |---------------------------------------|-----------|---------|
        | /home/xk/github/xk/lb/__pycache__/    | 620 Bytes |       1 |
        | /home/xk/github/xk/lb/.github/        |    1.7 kB |       4 |
        | /home/xk/github/xk/lb/__pypackages__/ |    1.4 MB |    3519 |
        | /home/xk/github/xk/lb/xklb/           |    4.4 kB |      12 |
        | /home/xk/github/xk/lb/tests/          |    3.2 kB |       9 |
        | /home/xk/github/xk/lb/.git/           |  782.4 kB |    2276 |
        | /home/xk/github/xk/lb/.pytest_cache/  |    1.5 kB |       5 |
        | /home/xk/github/xk/lb/.ruff_cache/    |   19.5 kB |     100 |
        | /home/xk/github/xk/lb/.gitattributes  | 119 Bytes |         |
        | /home/xk/github/xk/lb/.mypy_cache/    | 280 Bytes |       4 |
        | /home/xk/github/xk/lb/.pdm-python     |  15 Bytes |         |

    Only include files with a specific depth

        library disk-usage du.db --depth 19
        lb du du.db -d 19
        | path                                                                                                                                                                 |     size |
        |----------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|
        | /home/xk/github/xk/lb/__pypackages__/3.11/lib/jedi/third_party/typeshed/third_party/2and3/requests/packages/urllib3/packages/ssl_match_hostname/__init__.pyi        | 88 Bytes |
        | /home/xk/github/xk/lb/__pypackages__/3.11/lib/jedi/third_party/typeshed/third_party/2and3/requests/packages/urllib3/packages/ssl_match_hostname/_implementation.pyi | 81 Bytes |


</details>

###### mount-stats

<details><summary>Show some relative mount stats</summary>

    $ library mount-stats -h
    usage: library mount-stats MOUNTPOINT ...

    Print relative use and free for multiple mount points
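
    For example (the mountpoints here are hypothetical):

        library mount-stats /mnt/d /mnt/e /mnt/f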

</details>

###### big-dirs

<details><summary>Show large folders</summary>

    $ library big-dirs -h
    usage: library big-dirs DATABASE [--limit (4000)] [--depth (0)] [--sort-groups-by deleted | played] [--size=+5MB]

    See what folders take up space

        library big-dirs video.db
        library big-dirs audio.db
        library big-dirs fs.db

        lb big-dirs video.db --folder-size=+10G --lower 400 --upper 14000

        lb big-dirs video.db --depth 5
        lb big-dirs video.db --depth 7

    You can even sort by auto-MCDA ~LOL~

        lb big-dirs video.db -u 'mcda median_size,-deleted'


</details>

###### search-db

<details><summary>Search a SQLITE database</summary>

    $ library search-db -h
    usage: library search-db DATABASE TABLE SEARCH ... [--delete-rows]

    Search all columns in a SQLITE table. If the given table does not exist, the table whose name starts with TABLE is used instead (as long as there is exactly one match)
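
    For example, something like this (the table name and query are placeholders):

        library search-db video.db media 'vacation 2019'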

</details>

### Media Database subcommands

###### block

<details><summary>Block a channel</summary>

    $ library block -h
    usage: library block DATABASE URLS ...

    Blocklist specific URLs (eg. YouTube channels, etc)

        library block dl.db https://annoyingwebsite/etc/

    Or URL substrings

        library block dl.db "%fastcompany.com%"

    Block videos from the playlist uploader

        library block dl.db --match-column playlist_path 'https://youtube.com/playlist?list=PLVoczRgDnXDLWV1UJ_tO70VT_ON0tuEdm'

    Or other columns

        library block dl.db --match-column title "% bitcoin%"
        library block dl.db --force --match-column uploader Zeducation

    Display subdomains (similar to `lb download-status`)

        library block audio.db
        subdomain              count    new_links    tried  percent_tried      successful  percent_successful      failed  percent_failed
        -------------------  -------  -----------  -------  ---------------  ------------  --------------------  --------  ----------------
        dts.podtrac.com         5244          602     4642  88.52%                    690  14.86%                    3952  85.14%
        soundcloud.com         16948        11931     5017  29.60%                    920  18.34%                    4097  81.66%
        twitter.com              945          841      104  11.01%                      5  4.81%                       99  95.19%
        v.redd.it               9530         6805     2725  28.59%                    225  8.26%                     2500  91.74%
        vimeo.com                865          795       70  8.09%                      65  92.86%                       5  7.14%
        www.youtube.com       210435       140952    69483  33.02%                  66017  95.01%                    3467  4.99%
        youtu.be               60061        51911     8150  13.57%                   7736  94.92%                     414  5.08%
        youtube.com             5976         5337      639  10.69%                    599  93.74%                      40  6.26%

    Find some words to block based on frequency / recency of downloaded media

        library watch dl.db -u time_downloaded desc -L 10000 -pf | lb nouns | sort | uniq -c | sort -g
        ...
        183 ArchiveOrg
        187 Documentary
        237 PBS
        243 BBC
        ...


</details>

###### playlists

<details><summary>List stored playlists</summary>

    $ library playlists -h
    usage: library playlists DATABASE

    List of Playlists

        library playlists
        ╒═══════════════╤════════════════════╤══════════════════════════════════════════════════════════════════════════╕
        │ extractor_key │ title              │ path                                                                     │
        ╞═══════════════╪════════════════════╪══════════════════════════════════════════════════════════════════════════╡
        │ Youtube       │ Highlights of Life │ https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n │
        ╘═══════════════╧════════════════════╧══════════════════════════════════════════════════════════════════════════╛

    Search playlists

        library playlists audio.db badfinger
        path                                                        extractor_key    title                             count
        ----------------------------------------------------------  ---------------  ------------------------------  -------
        https://music.youtube.com/channel/UCyJzUJ95hXeBVfO8zOA0GZQ  ydl_Youtube      Uploads from Badfinger - Topic      226

    Aggregate Report of Videos in each Playlist

        library playlists -p a
        ╒═══════════════╤════════════════════╤══════════════════════════════════════════════════════════════════════════╤═══════════════╤═════════╕
        │ extractor_key │ title              │ path                                                                     │ duration      │   count │
        ╞═══════════════╪════════════════════╪══════════════════════════════════════════════════════════════════════════╪═══════════════╪═════════╡
        │ Youtube       │ Highlights of Life │ https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n │ 53.28 minutes │      15 │
        ╘═══════════════╧════════════════════╧══════════════════════════════════════════════════════════════════════════╧═══════════════╧═════════╛
        1 playlist
        Total duration: 53.28 minutes

    Print only playlist urls (useful for piping to other utilities like xargs or GNU Parallel):

        library playlists -p f
        https://www.youtube.com/playlist?list=PL7gXS9DcOm5-O0Fc1z79M72BsrHByda3n

    Remove a playlist/channel and all linked videos:

        library playlists --delete-rows https://vimeo.com/canal180


</details>

###### download

<details><summary>Download media</summary>

    $ library download -h
    usage: library download [--prefix /mnt/d/] [--safe] [--subs] [--auto-subs] [--small] DATABASE --video | --audio | --photos

    Files will be saved to <lb download prefix>/<extractor>/. If prefix is not specified the current working directory will be used.

    By default things will download in a random order

        library download dl.db --prefix ~/output/path/root/

    But you can sort; eg. oldest first

        library download dl.db -u m.time_modified,m.time_created

    Limit downloads to specific playlist URLs or substrings (TODO: https://github.com/chapmanjacobd/library/issues/31)

        library download dl.db https://www.youtube.com/c/BlenderFoundation/videos

    Limit downloads to specific video URLs or substrings

        library download dl.db --include https://www.youtube.com/watch?v=YE7VzlLtp-4
        library download dl.db -s https://www.youtube.com/watch?v=YE7VzlLtp-4  # equivalent

    Maximize the variety of subdomains

        library download photos.db --photos --image --sort "ROW_NUMBER() OVER ( PARTITION BY SUBSTR(m.path, INSTR(m.path, '//') + 2, INSTR( SUBSTR(m.path, INSTR(m.path, '//') + 2), '/') - 1) )"

    Print list of queued up downloads

        library download --print

    Print list of saved playlists

        library playlists dl.db -p a

    Print download queue groups

        library download-status audio.db
        ╒═══════════════╤══════════════════╤════════════════════╤══════════╕
        │ extractor_key │ duration         │   never_downloaded │   errors │
        ╞═══════════════╪══════════════════╪════════════════════╪══════════╡
        │ Soundcloud    │                  │                 10 │        0 │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 10 days, 4 hours │                  1 │     2555 │
        │               │ and 20 minutes   │                    │          │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 7.68 minutes     │                 99 │        1 │
        ╘═══════════════╧══════════════════╧════════════════════╧══════════╛


</details>

###### download-status

<details><summary>Show download status</summary>

    $ library download-status -h
    usage: library download-status DATABASE

    Print download queue groups

        library download-status video.db
        ╒═══════════════╤══════════════════╤════════════════════╤══════════╕
        │ extractor_key │ duration         │   never_downloaded │   errors │
        ╞═══════════════╪══════════════════╪════════════════════╪══════════╡
        │ Youtube       │ 3 hours and 2.07 │                 76 │        0 │
        │               │ minutes          │                    │          │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Dailymotion   │                  │                 53 │        0 │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 1 day, 18 hours  │                 30 │        0 │
        │               │ and 6 minutes    │                    │          │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Dailymotion   │                  │                186 │      198 │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 1 hour and 52.18 │                  1 │        0 │
        │               │ minutes          │                    │          │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Vimeo         │                  │                253 │       49 │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 2 years, 4       │              51676 │      197 │
        │               │ months, 15 days  │                    │          │
        │               │ and 6 hours      │                    │          │
        ├───────────────┼──────────────────┼────────────────────┼──────────┤
        │ Youtube       │ 4 months, 23     │               2686 │        7 │
        │               │ days, 19 hours   │                    │          │
        │               │ and 33 minutes   │                    │          │
        ╘═══════════════╧══════════════════╧════════════════════╧══════════╛

    Simulate --safe flag

        library download-status video.db --safe


</details>

###### redownload

<details><summary>Re-download deleted/lost media</summary>

    $ library redownload -h
    usage: library redownload DATABASE

    If you have previously downloaded YouTube or other online media, but your
    hard drive failed or you accidentally deleted something, and if that media
    is still accessible from the same URL, this script can help to redownload
    everything that was scanned-as-deleted between two timestamps.

    List deletions:

        library redownload news.db
        Deletions:
        ╒═════════════════════╤═════════╕
        │ time_deleted        │   count │
        ╞═════════════════════╪═════════╡
        │ 2023-01-26T00:31:26 │     120 │
        ├─────────────────────┼─────────┤
        │ 2023-01-26T19:54:42 │      18 │
        ├─────────────────────┼─────────┤
        │ 2023-01-26T20:45:24 │      26 │
        ╘═════════════════════╧═════════╛
        Showing most recent 3 deletions. Use -l to change this limit

    Mark videos as candidates for download via specific deletion timestamp:

        library redownload city.db 2023-01-26T19:54:42
        ╒══════════╤════════════════╤═════════════════╤═══════════════════╤═════════╤══════════╤═══════╤═══════════════╤═══════════════════════════════════════════════════════════════════╕
        │ size     │ time_created   │ time_modified   │ time_downloaded   │   width │   height │   fps │ duration      │ path                                                              │
        ╞══════════╪════════════════╪═════════════════╪═══════════════════╪═════════╪══════════╪═══════╪═══════════════╪═══════════════════════════════════════════════════════════════════╡
        │ 697.7 MB │ Apr 13 2022    │ Mar 11 2022     │ Oct 19            │    1920 │     1080 │    30 │ 21.22 minutes │ /mnt/d/76_CityVideos/PRAIA DE BARRA DE JANGADA CANDEIAS JABOATÃO  │
        │          │                │                 │                   │         │          │       │               │ RECIFE PE BRASIL AVENIDA BERNARDO VIEIRA DE MELO-4Lx3hheMPmg.mp4  │
        ...

    ...or between two timestamps inclusive:

        library redownload city.db 2023-01-26T19:54:42 2023-01-26T20:45:24


</details>

###### history

<details><summary>Show and manage playback history</summary>

    $ library history -h
    usage: library history [--frequency daily weekly (monthly) yearly] [--limit LIMIT] DATABASE [(all) watching watched created modified deleted]

    View playback history

        $ library history web_add.image.db
        In progress:
        play_count  time_last_played    playhead    path                                     title
        ----------  ------------------  ----------  ---------------------------------------  -----------
                 0  today, 20:48        2 seconds   https://siliconpr0n.org/map/COPYING.txt  COPYING.txt

    Show only completed history

        $ library history web_add.image.db --completed

    Show only in-progress history

        $ library history web_add.image.db --in-progress
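
    The --frequency flag from the usage line should group history by period; presumably it can be combined with a positional filter like `watched` (untested sketch):

        $ library history web_add.image.db watched --frequency monthly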

    Delete history

        Delete two hours of history
        $ library history web_add.image.db --played-within '2 hours' -L inf --delete-rows

        Delete all history
        $ library history web_add.image.db -L inf --delete-rows

    See also: library stats -h
              library history-add -h


</details>

###### history-add

<details><summary>Add history from paths</summary>

    $ library history-add -h
    usage: library history-add DATABASE PATH ...

    Add history

        $ library history-add links.db $urls $paths
        $ library history-add links.db (cb)

    Items that don't already exist in the database will be counted under "skipped"


</details>

###### stats

<details><summary>Show some event statistics (created, deleted, watched, etc)</summary>

    $ library stats -h
    usage: library stats DATABASE TIME_COLUMN

    View watched stats

        $ library stats video.db --completed
        Finished watching:
        ╒═══════════════╤═════════════════════════════════╤════════════════╤════════════╤════════════╕
        │ time_period   │ duration_sum                    │ duration_avg   │ size_sum   │ size_avg   │
        ╞═══════════════╪═════════════════════════════════╪════════════════╪════════════╪════════════╡
        │ 2022-11       │ 4 days, 16 hours and 20 minutes │ 55.23 minutes  │ 26.3 GB    │ 215.9 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2022-12       │ 23 hours and 20.03 minutes      │ 35.88 minutes  │ 8.3 GB     │ 213.8 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-01       │ 17 hours and 3.32 minutes       │ 15.27 minutes  │ 14.3 GB    │ 214.1 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-02       │ 4 days, 5 hours and 60 minutes  │ 23.17 minutes  │ 148.3 GB   │ 561.6 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-03       │ 2 days, 18 hours and 18 minutes │ 11.20 minutes  │ 118.1 GB   │ 332.8 MB   │
        ├───────────────┼─────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-05       │ 5 days, 5 hours and 4 minutes   │ 45.75 minutes  │ 152.9 GB   │ 932.1 MB   │
        ╘═══════════════╧═════════════════════════════════╧════════════════╧════════════╧════════════╛

    View download stats

        $ library stats video.db time_downloaded --frequency daily
        Downloaded media:
        day         total_duration                          avg_duration                total_size    avg_size    count
        ----------  --------------------------------------  ------------------------  ------------  ----------  -------
        2023-08-11  1 month, 7 days and 8 hours             17 minutes                    192.2 GB     58.3 MB     3296
        2023-08-12  18 days and 15 hours                    17 minutes                     89.7 GB     56.4 MB     1590
        2023-08-14  13 days and 1 hour                      22 minutes                    111.2 GB    127.2 MB      874
        2023-08-15  13 days and 6 hours                     17 minutes                    140.0 GB    126.7 MB     1105
        2023-08-17  2 months, 8 days and 8 hours            19 minutes                    380.4 GB     72.6 MB     5243
        2023-08-18  2 months, 30 days and 18 hours          17 minutes                    501.9 GB     63.3 MB     7926
        2023-08-19  2 months, 6 days and 19 hours           19 minutes                    578.1 GB    110.6 MB     5229
        2023-08-20  3 days and 9 hours                      6 minutes and 57 seconds       14.5 GB     20.7 MB      700
        2023-08-21  4 days and 3 hours                      12 minutes                     18.0 GB     36.3 MB      495
        2023-08-22  10 days and 8 hours                     17 minutes                     82.1 GB     91.7 MB      895
        2023-08-23  19 days and 9 hours                     22 minutes                     93.7 GB     74.7 MB     1254

        See also: library stats video.db time_downloaded -f daily --hide-deleted

    View deleted stats

        $ library stats video.db time_deleted
        Deleted media:
        ╒═══════════════╤════════════════════════════════════════════╤════════════════╤════════════╤════════════╕
        │ time_period   │ duration_sum                               │ duration_avg   │ size_sum   │ size_avg   │
        ╞═══════════════╪════════════════════════════════════════════╪════════════════╪════════════╪════════════╡
        │ 2023-04       │ 1 year, 10 months, 3 days and 8 hours      │ 4.47 minutes   │ 1.6 TB     │ 7.4 MB     │
        ├───────────────┼────────────────────────────────────────────┼────────────────┼────────────┼────────────┤
        │ 2023-05       │ 9 months, 26 days, 20 hours and 34 minutes │ 30.35 minutes  │ 1.1 TB     │ 73.7 MB    │
        ╘═══════════════╧════════════════════════════════════════════╧════════════════╧════════════╧════════════╛
        ╒════════════════════════════════════════════════════════════════════════════════════════════════════════════╤═══════════════╤══════════════════╤════════════════╕
        │ title_path                                                                                                 │ duration      │   subtitle_count │ time_deleted   │
        ╞════════════════════════════════════════════════════════════════════════════════════════════════════════════╪═══════════════╪══════════════════╪════════════════╡
        │ Terminus (1987)                                                                                            │ 1 hour and    │                0 │ yesterday      │
        │ /mnt/d/70_Now_Watching/Terminus_1987.mp4                                                                   │ 15.55 minutes │                  │                │
        ├────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────┼──────────────────┼────────────────┤
        │ Commodore 64 Longplay [062] The Transformers (EU) /mnt/d/71_Mealtime_Videos/Youtube/World_of_Longplays/Com │ 24.77 minutes │                2 │ yesterday      │
        │ modore_64_Longplay_062_The_Transformers_EU_[1RRX7Kykb38].webm                                              │               │                  │                │
        ...

    View time_modified stats

        $ library stats example_dbs/web_add.image.db time_modified -f year
        Time_Modified media:
        year      total_size    avg_size    count
        ------  ------------  ----------  -------
        2010         4.4 MiB     1.5 MiB        3
        2011       136.2 MiB    68.1 MiB        2
        2013         1.6 GiB    10.7 MiB      154
        2014         4.6 GiB    25.2 MiB      187
        2015         4.3 GiB    26.5 MiB      167
        2016         5.1 GiB    46.8 MiB      112
        2017         4.8 GiB    51.7 MiB       95
        2018         5.3 GiB    97.9 MiB       55
        2019         1.3 GiB    46.5 MiB       29
        2020        25.7 GiB   113.5 MiB      232
        2021        25.6 GiB    96.5 MiB      272
        2022        14.6 GiB    82.7 MiB      181
        2023        24.3 GiB    72.5 MiB      343
        2024        17.3 GiB   104.8 MiB      169
        14 media


</details>

###### search

<details><summary>Search captions / subtitles</summary>

    $ library search -h
    usage: library search DATABASE QUERY

    Search text databases and subtitles

        library search fts.db boil
            7 captions
            /mnt/d/70_Now_Watching/DidubeTheLastStop-720p.mp4
               33:46 I brought a real stainless steel boiler
               33:59 The world is using only stainless boilers nowadays
               34:02 The boiler is old and authentic
               34:30 - This boiler? - Yes
               34:44 I am not forcing you to buy this boiler…
               34:52 Who will give her a one liter stainless steel boiler for one Lari?
               34:54 Glass boilers cost two

    Search and open file

        library search fts.db 'two words' --open


</details>

###### optimize

<details><summary>Re-optimize database</summary>

    $ library optimize -h
    usage: library optimize DATABASE [--force]

    Optimize library databases

    The force flag is usually unnecessary; with it, optimizing can take much longer
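
    For example:

        library optimize fs.db

    With the force flag (as noted above, usually unnecessary and slower):

        library optimize fs.db --force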
- Yes\n               34:44 I am not forcing you to buy this boiler\u2026\n               34:52 Who will give her a one liter stainless steel boiler for one Lari?\n               34:54 Glass boilers cost two\n\n    Search and open file\n\n        library search fts.db 'two words' --open\n\n\n</details>\n\n###### optimize\n\n<details><summary>Re-optimize database</summary>\n\n    $ library optimize -h\n    usage: library optimize DATABASE [--force]\n\n    Optimize library databases\n\n    The force flag is usually unnecessary and it can take much longer\n\n\n</details>\n\n### Playback subcommands\n\n###### watch\n\n<details><summary>Watch / Listen</summary>\n\n    $ library watch -h\n    usage: library watch DATABASE [optional args]\n\n    Control playback:\n        To stop playback press Ctrl-C in either the terminal or mpv\n\n        Create global shortcuts in your desktop environment by sending commands to mpv_socket:\n        echo 'playlist-next force' | socat - /tmp/mpv_socket\n\n    Override the default player (mpv):\n        library watch --player \"vlc --vlc-opts\"\n\n    Cast to chromecast groups:\n        library watch --cast --cast-to \"Office pair\"\n        library watch -ct \"Office pair\"  # equivalent\n        If you don't know the exact name of your chromecast group run `catt scan`\n\n    Play media in order (similarly named episodes):\n        library watch --play-in-order\n        library watch -O    # equivalent\n\n        The default sort value is 'natural_ps' which means media will be sorted by parent path\n        and then stem in a natural way (using the integer values within the path). But there are many other options:\n\n        Options:\n\n            - reverse: reverse the sort order\n            - compat: treat characters like '\u2466' as '7'\n\n        Algorithms:\n\n            - natural: parse numbers as integers\n            - os: sort similar to the OS File Explorer sorts. 


</details>

###### optimize

<details><summary>Re-optimize database</summary>

    $ library optimize -h
    usage: library optimize DATABASE [--force]

    Optimize library databases

    The --force flag is usually unnecessary, and with it optimization can take much longer
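
    For a quick manual pass, the stock sqlite3 CLI can do something loosely similar
    (a sketch -- not equivalent to `library optimize`, which understands the
    library schema):

        sqlite3 library.db 'PRAGMA optimize; VACUUM;'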


</details>

### Playback subcommands

###### watch

<details><summary>Watch / Listen</summary>

    $ library watch -h
    usage: library watch DATABASE [optional args]

    Control playback:
        To stop playback press Ctrl-C in either the terminal or mpv

        Create global shortcuts in your desktop environment by sending commands to mpv_socket:
        echo 'playlist-next force' | socat - /tmp/mpv_socket
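        # any mpv input command can be sent the same way, e.g. toggling pause (a sketch):
        echo 'cycle pause' | socat - /tmp/mpv_socket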

    Override the default player (mpv):
        library watch --player "vlc --vlc-opts"

    Cast to chromecast groups:
        library watch --cast --cast-to "Office pair"
        library watch -ct "Office pair"  # equivalent
        If you don't know the exact name of your chromecast group run `catt scan`

    Play media in order (similarly named episodes):
        library watch --play-in-order
        library watch -O    # equivalent

        The default sort value is 'natural_ps' which means media will be sorted by parent path
        and then stem in a natural way (using the integer values within the path). But there are many other options:

        Options:

            - reverse: reverse the sort order
            - compat: treat characters like '⑦' as '7'

        Algorithms:

            - natural: parse numbers as integers
            - os: sort similar to how the OS File Explorer sorts. To improve non-alphanumeric sorting on Mac OS X and Linux it is necessary to install pyicu (perhaps via python3-icu -- https://gitlab.pyicu.org/main/pyicu#installing-pyicu)
            - path: use natsort "path" algorithm (https://natsort.readthedocs.io/en/stable/api.html#the-ns-enum)
            - human: use system locale
            - ignorecase: treat all case as equal
            - lowercase: sort lowercase first
            - signed: sort with an understanding of negative numbers
            - python: sort like default python

        Values:

            - path
            - parent
            - stem
            - title (or any other column value)
            - ps: parent, stem
            - pts: parent, title, stem

        Use this format: algorithm, value, algorithm_value, or option_algorithm_value.
        For example:

            - library watch -O human
            - library watch -O title
            - library watch -O human_title
            - library watch -O reverse_compat_human_title

            - library watch -O path       # path algorithm and parent, stem values (path_ps)
            - library watch -O path_path  # path algorithm and path values

        Also, if you are using --random you need to fetch sibling media to play the media in order:

            - library watch --random --fetch-siblings each -O          # get the first result per directory
            - library watch --random --fetch-siblings if-audiobook -O  # get the first result per directory if 'audiobook' is in the path
            - library watch --random --fetch-siblings always -O        # get 2,000 results per directory

        If searching by a specific subpath it may be preferable to just sort by path instead
        library watch d/planet.earth.2024/ -u path

        library watch --related  # similar to -O but uses fts to find similar content
        library watch -R         # equivalent
        library watch -RR        # above, plus ignores most filters

        library watch --cluster  # cluster-sort to put similar-named paths closer together
        library watch -C         # equivalent

        library watch --big-dirs # recommended to use with --duration or --depth filters; see `lb big-dirs -h` for more info
        library watch -B         # equivalent

        All of these options can be used together, but that is a bit slow and the results may be mid-tier,
        as multiple different algorithms create a muddied signal (too many cooks in the kitchen):
        library watch -RRCO

        You can even sort the items within each cluster by auto-MCDA ~LOL~
        library watch -B --sort-groups-by 'mcda median_size,-deleted'
        library watch -C --sort-groups-by 'mcda median_size,-deleted'

    Filter media by file siblings of parent directory:
        library watch --sibling   # only include files which have at least one sibling
        library watch --solo      # only include files which are alone by themselves

        `--sibling` is just a shortcut for `--lower 2`; `--solo` is `--upper 1`
        library watch --sibling --solo      # you will always get zero records here
        library watch --lower 2 --upper 1   # equivalent

        You can be more specific via the `--upper` and `--lower` flags
        library watch --lower 3   # only include files which have three or more siblings
        library watch --upper 3   # only include files which have fewer than three siblings
        library watch --lower 3 --upper 3   # only include files which have exactly three siblings (both bounds are inclusive)
        library watch --lower 12 --upper 25 -O  # on my machine this launches My Mister 2018

    Play recent partially-watched videos (requires mpv history):
        library watch --partial       # play newest first

        library watch --partial old   # play oldest first
        library watch -P o            # equivalent

        library watch -P p            # sort by percent remaining
        library watch -P t            # sort by time remaining
        library watch -P s            # skip partially watched (only show unseen)

        The default time used is "last-viewed" (ie. the most recent time you closed the video).
        If you want to use the "first-viewed" time (ie. the very first time you opened the video):
        library watch -P f            # use watch_later file creation time instead of modified time

        You can combine most of these options, though some will be overridden by others.
        library watch -P fo           # this means "show the oldest videos using the time I first opened them"
        library watch -P pt           # weighted remaining (percent * time remaining)

    Print instead of play:
        library watch --print --limit 10  # print the next 10 files
        library watch -p -L 10  # print the next 10 files
        library watch -p  # this will print _all_ the media. be cautious about `-p` on an unfiltered set

        Printing modes
        library watch -p    # print as a table
        library watch -p a  # print an aggregate report
        library watch -p b  # print a big-dirs report (see library bigdirs -h for more info)
        library watch -p f  # print fields (defaults to path; use --cols to change)
                               # -- useful for piping paths to utilities like xargs or GNU Parallel

        library watch -p d  # mark deleted
        library watch -p w  # mark watched

        Some printing modes can be combined
        library watch -p df  # print files for piping into another program and mark them as deleted within the db
        library watch -p bf  # print fields from big-dirs report
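
        Since `-p f` prints one path per line you can feed the queue to standard tools.
        For example, to total the size of the next 100 files (a sketch; `du -cb` is
        GNU-specific and this assumes no newlines in your filenames):
        library watch -p f -L 100 | xargs -d '\n' du -cb | tail -1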

        Check if you have downloaded something before
        library watch -u duration -p -s 'title'

        Print an aggregate report of deleted media
        library watch -w time_deleted!=0 -p=a
        ╒═══════════╤══════════════╤═════════╤═════════╕
        │ path      │ duration     │ size    │   count │
        ╞═══════════╪══════════════╪═════════╪═════════╡
        │ Aggregate │ 14 days, 23  │ 50.6 GB │   29058 │
        │           │ hours and 42 │         │         │
        │           │ minutes      │         │         │
        ╘═══════════╧══════════════╧═════════╧═════════╛
        Total duration: 14 days, 23 hours and 42 minutes

        Print an aggregate report of media that has no duration information (ie. online or corrupt local media)
        library watch -w 'duration is null' -p=a

        Print a list of filenames which have below 1280px resolution
        library watch -w 'width<1280' -p=f

        Print media you have partially viewed with mpv
        library watch --partial -p
        library watch -P -p  # equivalent
        library watch -P -p f --cols path,progress,duration  # print CSV of partially watched files
        library watch --partial -pa  # print an aggregate report of partially watched files

        View how much time you have watched
        library watch -w play_count'>'0 -p=a

        See how much video you have
        library watch video.db -p=a
        ╒═══════════╤═════════╤═════════╤═════════╕
        │ path      │   hours │ size    │   count │
        ╞═══════════╪═════════╪═════════╪═════════╡
        │ Aggregate │  145769 │ 37.6 TB │  439939 │
        ╘═══════════╧═════════╧═════════╧═════════╛
        Total duration: 16 years, 7 months, 19 days, 17 hours and 25 minutes

        View all the columns
        library watch -p -L 1 --cols '*'

        Open ipython with all of your media
        library watch -vv -p --cols '*'
        ipdb> len(media)
        462219

    Set the play queue size:
        By default the play queue is 120--long enough that you likely have not noticed
        but short enough that the program is snappy.

        If you want everything in your play queue you can use the aid of infinity.
        Pick your poison (these all do effectively the same thing):
        library watch -L inf
        library watch -l inf
        library watch --queue inf
        library watch -L 999999999999

        You may also want to restrict the play queue.
        For example, when you only want 1000 random files:
        library watch -u random -L 1000

    Offset the play queue:
        You can also offset the queue. For example, if you want to skip one or ten media:
        library watch --offset 10      # offset ten from the top of an ordered query

    Repeat
        library watch                  # listen to 120 random songs (DEFAULT_PLAY_QUEUE)
        library watch --limit 5        # listen to FIVE songs
        library watch -l inf -u random # listen to random songs indefinitely
        library watch -s infinite      # listen to songs from the band infinite

    Constrain media by search:
        Audio files have many tags to readily search through, so metadata like artist,
        album, and even mood are included in search.
        Video files have less consistent metadata, so only paths are included in search.
        library watch --include happy  # only matches will be included
        library watch -s happy         # equivalent
        library watch --exclude sad    # matches will be excluded
        library watch -E sad           # equivalent

        Search only the path column
        library watch -O -s 'path : mad max'
        library watch -O -s 'path : "mad max"' # add "quotes" to be more strict

        Double spaces are parsed as one space
        library watch -s '  ost'        # will match OST and not ghost
        library watch -s toy story      # will match '/folder/toy/something/story.mp3'
        library watch -s 'toy  story'   # will match more strictly '/folder/toy story.mp3'

        You can search without -s but the query must directly follow the database (due to how argparse works)
        library watch ./your.db searching for something

    Constrain media by arbitrary SQL expressions:
        library watch --where audio_count = 2  # media which have two audio tracks
        library watch -w "language = 'eng'"    # media which have an English language tag
                                                    (this could be audio _or_ subtitle)
        library watch -w subtitle_count=0      # media that doesn't have subtitles

    Constrain media to duration (in minutes):
        library watch --duration 20
        library watch -d 6  # 6 mins ±10 percent (ie. between 5 and 7 mins)
        library watch -d-6  # less than 6 mins
        library watch -d+6  # more than 6 mins

        Duration can be specified multiple times:
        library watch -d+5 -d-7  # should be similar to -d 6

        If you want exact time use `where`
        library watch --where 'duration=6*60'

    Constrain media to file size (in megabytes):
        library watch --size 20
        library watch -S 6  # 6 MB ±10 percent (ie. between 5 and 7 MB)
        library watch -S-6  # less than 6 MB
        library watch -S+6  # more than 6 MB

    Constrain media by time_created / time_last_played / time_deleted / time_modified:
        library watch --created-within '3 days'
        library watch --created-before '3 years'

    Constrain media by throughput:
        Bitrate information is not explicitly saved.
        You can use file size and duration as a proxy for throughput:
        library watch -w 'size/duration<50000'
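
        As a rough conversion: 50000 bytes per second × 8 = 400,000 bits per second,
        so the filter above keeps media whose overall bitrate (audio, video, and
        container overhead combined) is below about 400 kbps.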

    Constrain media to portrait orientation video:
        library watch --portrait
        library watch -w 'width<height' # equivalent

    Constrain media to duration of videos which match any size constraints:
        library watch --duration-from-size +700 -u 'duration desc, size desc'

    Constrain media to online-media or local-media:
        Not to be confused with only local-media which is not "offline" (ie. one HDD disconnected)
        library watch --online-media-only
        library watch --online-media-only -i  # and ignore playback errors (ie. YouTube video deleted)
        library watch --local-media-only

    Specify media play order:
        library watch --sort duration   # play shortest media first
        library watch -u duration desc  # play longest media first

        You can use multiple SQL ORDER BY expressions
        library watch -u 'subtitle_count > 0 desc' # play media that has at least one subtitle first

        Prioritize large-sized media
        library watch --sort 'ntile(10000) over (order by size/duration) desc'
        library watch -u 'ntile(100) over (order by size) desc'

        Sort by count of media with the same-X column (default DESC: most common to least common value)
        library watch -u same-duration
        library watch -u same-title
        library watch -u same-size
        library watch -u same-width, same-height ASC, same-fps
        library watch -u same-time_uploaded same-view_count same-upvote_ratio

        No media found when using --random
        In addition to -u/--sort random, there is also the -r/--random flag.
        If you have a large database it should be faster than -u random, but it comes with a caveat:
        this flag randomizes via rowid at an earlier stage to boost performance,
        so you may see "No media found" or fewer media than expected.
        You can bypass this by setting --limit. For example:
        library watch -B --folder-size=+12GiB --folder-size=-100GiB -r -pa
        path         count      size  duration                        avg_duration      avg_size
        ---------  -------  --------  ------------------------------  --------------  ----------
        Aggregate    10000  752.5 GB  4 months, 15 days and 10 hours  20 minutes         75.3 MB
        (17 seconds)
        library watch -B --folder-size=+12GiB --folder-size=-100GiB -r -pa -l inf
        path         count     size  duration                                 avg_duration      avg_size
        ---------  -------  -------  ---------------------------------------  --------------  ----------
        Aggregate   140868  10.6 TB  5 years, 2 months, 28 days and 14 hours  20 minutes         75.3 MB
        (30 seconds)

    Post-actions -- choose what to do after playing:
        library watch --post-action keep    # do nothing after playing (default)
        library watch -k delete             # delete file after playing
        library watch -k softdelete         # mark deleted after playing

        library watch -k ask_keep           # ask whether to keep after playing
        library watch -k ask_delete         # ask whether to delete after playing

        library watch -k move               # move to "keep" dir after playing
        library watch -k ask_move           # ask whether to move to "keep" folder
        The default location of the keep folder is ./keep/ (relative to the played media file).
        You can change this by explicitly setting an *absolute* `keep-dir` path:
        library watch -k ask_move --keep-dir /home/my/music/keep/

        library watch -k ask_move_or_delete # ask after each whether to move to "keep" folder or delete

        You can also bind keys in mpv to different exit codes. For example in input.conf:
            ; quit 5

        And if you run something like:
            library watch --cmd5 ~/bin/process_audio.py
            library watch --cmd5 echo  # this will effectively do nothing except skip the normal post-actions via mpv shortcut

        When semicolon is pressed in mpv (it will exit with error code 5) the applicable player-exit-code command
        will start with the media file as the first argument; in this case `~/bin/process_audio.py $path`.
        The command will be daemonized if library exits before it completes.

        To prevent confusion, normal post-actions will be skipped if the exit-code is greater than 4.
        For exit-codes 0, 1, 2, 3, and 4, the external post-action will run after the normal post-actions, so
        be careful of conflicting player-exit-code command and post-action behavior when using these!
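
        A --cmdN handler can be any executable that takes the media path as its first
        argument. A minimal shell stand-in (the file names are hypothetical):

            #!/bin/sh
            # append each finished file to a worklist for later processing
            echo "$1" >> "$HOME/audio_worklist.txt"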

    Experimental options:
        Duration to play (in seconds) while changing the channel
        library watch --interdimensional-cable 40
        library watch -4dtv 40
        You can open two terminals to replicate AMV Hell somewhat
        library watch --volume 0 -4dtv 30
        library listen -4dtv 30

        Playback multiple files at once
        library watch --multiple-playback    # one per display; or two if only one display detected
        library watch --multiple-playback 4  # play four media at once, divide by available screens
        library watch -m 4 --screen-name eDP # play four media at once on specific screen
        library watch -m 4 --loop --crop     # play four cropped videos on a loop
        library watch -m 4 --hstack          # use hstack style

        When using `--multiple-playback` it may be helpful to set simple window focus rules to prevent keys
        from accidentally being entered in the wrong mpv window (as new windows are created and capture the cursor focus).
        You can set and restore your previous mouse focus setting by wrapping the command like this:

            focus-under-mouse
            library watch ... --multiple-playback 4
            focus-follows-mouse

        For example in KDE:

            function focus-under-mouse
                kwriteconfig5 --file kwinrc --group Windows --key FocusPolicy FocusUnderMouse
                qdbus-qt5 org.kde.KWin /KWin reconfigure
            end

            function focus-follows-mouse
                kwriteconfig5 --file kwinrc --group Windows --key FocusPolicy FocusFollowsMouse
                kwriteconfig5 --file kwinrc --group Windows --key NextFocusPrefersMouse true
                qdbus-qt5 org.kde.KWin /KWin reconfigure
            end



</details>

###### tabs-open

<details><summary>Open your tabs for the day</summary>

    $ library tabs-open -h
    usage: library tabs-open DATABASE

    Tabs is meant to run **once per day**. Here is how you would configure it with `crontab`:

        45 9 * * * DISPLAY=:0 library tabs /home/my/tabs.db

    If things aren't working you can use `at` to simulate a similar environment to `cron`:

        echo 'fish -c "export DISPLAY=:0 && library tabs /full/path/to/tabs.db"' | at NOW

    You can also invoke tabs manually:

        library tabs -L 1  # open one tab
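
    To confirm the job is actually scheduled, list your crontab (standard cron, not
    part of library):

        crontab -l | grep 'library tabs'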

    Print URLs

        library tabs -w "frequency='yearly'" -p
        ╒════════════════════════════════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                                           │ frequency   │ time_valid   │
        ╞════════════════════════════════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/Autonomia/top/?sort=top&t=year        │ yearly      │ Dec 31 1970  │
        ├────────────────────────────────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/top/?sort=top&t=year        │ yearly      │ Dec 31 1970  │
        ├────────────────────────────────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/ExperiencedDevs/top/?sort=top&t=year  │ yearly      │ Dec 31 1970  │
        ...
        ╘════════════════════════════════════════════════════════════════╧═════════════╧══════════════╛

    View how many yearly tabs you have:

        library tabs -w "frequency='yearly'" -p a
        ╒═══════════╤═════════╕
        │ path      │   count │
        ╞═══════════╪═════════╡
        │ Aggregate │     134 │
        ╘═══════════╧═════════╛

    Delete URLs

        library tabs -p -s cyber
        ╒═══════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                  │ frequency   │ time_valid   │
        ╞═══════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/cyberDeck/to │ yearly      │ Dec 31 1970  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/to │ yearly      │ Aug 29 2023  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://www.reddit.com/r/cyberDeck/   │ yearly      │ Sep 05 2023  │
        ╘═══════════════════════════════════════╧═════════════╧══════════════╛

        library tabs -p -w "path='https://www.reddit.com/r/cyberDeck/'" --delete-rows
        Removed 1 metadata records

        library tabs -p -s cyber
        ╒═══════════════════════════════════════╤═════════════╤══════════════╕
        │ path                                  │ frequency   │ time_valid   │
        ╞═══════════════════════════════════════╪═════════════╪══════════════╡
        │ https://old.reddit.com/r/cyberDeck/to │ yearly      │ Dec 31 1970  │
        │ p/?sort=top&t=year                    │             │              │
        ├───────────────────────────────────────┼─────────────┼──────────────┤
        │ https://old.reddit.com/r/Cyberpunk/to │ yearly      │ Aug 29 2023  │
        │ p/?sort=top&t=year                    │             │              │
        ╘═══════════════════════════════════════╧═════════════╧══════════════╛


</details>

###### links-open

<details><summary>Open links from link dbs</summary>

    $ library links-open -h
    usage: library links-open DATABASE [search] [--title] [--title-prefix TITLE_PREFIX]

    Open links from a links db

        wget https://github.com/chapmanjacobd/library/raw/main/example_dbs/music.korea.ln.db
        library open-links music.korea.ln.db

    Only open links once

        library open-links ln.db -w 'time_modified=0'

    Print a preview instead of opening tabs

        library open-links ln.db -p
        library open-links ln.db --cols time_modified -p

    Delete rows

        Make sure you have the right search query
        library open-links ln.db "query" -p -L inf
        library open-links ln.db "query" -pa  # view total

        library open-links ln.db "query" -pd  # mark as deleted

    Custom search engine

        library open-links ln.db --title --prefix 'https://duckduckgo.com/?q='

    Skip local media

        library open-links dl.db --online
        library open-links dl.db -w 'path like "http%"'  # equivalent



</details>

###### surf

<details><summary>Auto-load browser tabs in a streaming way (stdin)</summary>

    $ library surf -h
    usage: library surf [--count COUNT] [--target-hosts TARGET_HOSTS] < stdin

    Streaming tab loader: press ctrl+c to stop.

    Open tabs from a line-delimited file:

        cat tabs.txt | library surf -n 5

    You will likely want to use this setting in `about:config`

        browser.tabs.loadDivertedInBackground = True

    If you prefer GUI, check out https://unli.xyz/tabsender/
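
    Because surf reads stdin, a shell redirect or any line-producing command also
    works (assuming one URL per line):

        library surf -n 5 < tabs.txt
        tail -f new_urls.txt | library surf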


</details>

### Database enrichment subcommands

###### dedupe-db

<details><summary>Dedupe SQLITE tables</summary>

    $ library dedupe-db -h
    usage: library dedupe-dbs DATABASE TABLE --bk BUSINESS_KEYS [--pk PRIMARY_KEYS] [--only-columns COLUMNS]

    Dedupe your database (not to be confused with the dedupe subcommand)

    It should not need to be said but *backup* your database before trying this tool!

    Dedupe-DB will help remove duplicate rows based on non-primary-key business keys

        library dedupe-db ./video.db media --bk path

    By default, all non-primary-key and non-business-key columns will be upserted unless --only-columns is provided.
    If --primary-keys is not provided, the table metadata primary keys will be used.
    If your duplicate rows contain exactly the same data in all the columns you can run with --skip-upsert to save a lot of time.
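
    An easy way to take that backup is SQLite's own `.backup` command, which is safe
    to run even while the database is in use:

        sqlite3 video.db ".backup video.db.bak"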


</details>

###### dedupe-media

<details><summary>Dedupe similar media</summary>

    $ library dedupe-media -h
    usage: library dedupe-media [--audio | --id | --title | --filesystem] [--only-soft-delete] [--limit LIMIT] DATABASE

    Dedupe your files (not to be confused with the dedupe-db subcommand)

    Exact file matches

        library dedupe-media --fs video.db

    Dedupe based on duration and file basename or dirname similarity

        library dedupe-media video.db --duration --basename -s release_group  # pre-filter with a specific text substring
        library dedupe-media video.db --duration --basename -u m1.size  # sort such that small files are treated as originals and larger files are deleted
        library dedupe-media video.db --duration --basename -u 'm1.size desc'  # sort such that large files are treated as originals and smaller files are deleted

    Dedupe online against local media

        library dedupe-media video.db / http


</details>

###### merge-online-local

<details><summary>Merge online and local data</summary>

    $ library merge-online-local -h
    usage: library merge-online-local DATABASE

    If you have previously downloaded YouTube or other online media, you can dedupe
    your database and combine the online and local media records as long as your
    files have the youtube-dl / yt-dlp id in the filename.


</details>

###### mpv-watchlater

<details><summary>Import mpv watchlater files to history</summary>

    $ library mpv-watchlater -h
    usage: library mpv-watchlater DATABASE [--watch-later-directory ~/.config/mpv/watch_later/]

    Extract timestamps from MPV to the history table


</details>

###### reddit-selftext

<details><summary>Copy selftext links to media table</summary>

    $ library reddit-selftext -h
    usage: library reddit-selftext DATABASE

    Extract URLs from reddit selftext from the reddit_posts table to the media table


</details>

###### tabs-shuffle

<details><summary>Randomize tabs.db a bit</summary>

    $ library tabs-shuffle -h
    usage: library tabs-shuffle DATABASE

    Moves each tab to a random day-of-the-week by default

    It may also be useful to shuffle monthly tabs, etc. You can accomplish this like so:

        library tabs-shuffle tabs.db -d  31 -f monthly
        library tabs-shuffle tabs.db -d  90 -f quarterly
        library tabs-shuffle tabs.db -d 365 -f yearly


</details>

###### pushshift

<details><summary>Convert pushshift data to reddit.db format (stdin)</summary>

    $ library pushshift -h
    usage: library pushshift DATABASE < stdin

    Download data (about 600GB jsonl.zst; 6TB uncompressed)

        wget -e robots=off -r -k -A zst https://files.pushshift.io/reddit/submissions/

    Load data from files via unzstd

        unzstd --memory=2048MB --stdout RS_2005-07.zst | library pushshift pushshift.db

    Or multiple (output is about 1.5TB SQLITE fts-searchable):

        for f in psaw/files.pushshift.io/reddit/submissions/*.zst
            echo "unzstd --memory=2048MB --stdout $f | library pushshift (basename $f).db"
            library optimize (basename $f).db
        end | parallel -j5


</details>

### Update database subcommands

###### fs-update

<details><summary>Update local media</summary>

    $ library fs-update -h
    usage: library fs-update DATABASE

    Update each path previously saved:

        library fsupdate video.db


</details>

###### tube-update

<details><summary>Update online video media</summary>

    $ library tube-update -h
    usage: library tube-update [--audio | --video] DATABASE

    Fetch the latest videos for every playlist saved in your database

        library tubeupdate educational.db

    Fetch extra metadata:

        By default tubeupdate will quickly add media.
        You can run with --extra to fetch more details (best resolution width, height, subtitle tags, etc)

        library tubeupdate educational.db --extra https://www.youtube.com/channel/UCBsEUcR-ezAuxB2WlfeENvA/videos

    Remove duplicate playlists:

        lb dedupe-db video.db playlists --bk extractor_playlist_id
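
    Like tabs, the update subcommands make natural cron jobs. A sketch of a nightly
    refresh (the database names are stand-ins for your own):

        0 5 * * * library fsupdate video.db
        30 5 * * * library tubeupdate educational.db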


</details>

###### web-update

<details><summary>Update open-directory media</summary>

    $ library web-update -h
    usage: library web-update DATABASE

    Update saved open directories


</details>

###### gallery-update

<details><summary>Update online gallery media</summary>

    $ library gallery-update -h
    usage: library gallery-update DATABASE URLS

    Check previously saved gallery_dl URLs for new content


</details>

###### links-update

<details><summary>Update a link-scraping database</summary>

    $ library links-update -h
    usage: library links-update DATABASE

    Fetch new links from each path previously saved

        library links-update links.db


</details>

###### reddit-update

<details><summary>Update reddit media</summary>

    $ library reddit-update -h
    usage: library reddit-update [--audio | --video] [--lookback N_DAYS] [--praw-site bot1] DATABASE

    Fetch the latest posts for every subreddit/redditor saved in your database

        library redditupdate edu_subreddits.db
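
    The --lookback flag takes a number of days; for example, to re-check only the
    past week (a sketch using the same database):

        library redditupdate edu_subreddits.db --lookback 7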


</details>

### Misc subcommands

###### export-text

<details><summary>Export HTML files from SQLite databases</summary>

    $ library export-text -h
    usage: library export-text DATABASE

    Generate HTML files from SQLite databases


</details>

###### dedupe-czkawka

<details><summary>Process czkawka diff output</summary>

    $ library dedupe-czkawka -h
    usage: library dedupe-czkawka [--volume VOLUME] [--auto-seek] [--ignore-errors] [--folder] [--folder-glob [FOLDER_GLOB]] [--replace] [--no-replace] [--override-trash OVERRIDE_TRASH] [--delete-files] [--gui]
               [--auto-select-min-ratio AUTO_SELECT_MIN_RATIO] [--all-keep] [--all-left] [--all-right] [--all-delete] [--verbose]
               czkawka_dupes_output_path

    Choose which duplicate to keep by opening both side-by-side in mpv


</details>


<details><summary>Chicken mode</summary>


           ////////////////////////
          ////////////////////////|
         //////////////////////// |
        ////////////////////////| |
        |    _\/_   |   _\/_    | |
        |     )o(>  |  <)o(     | |
        |   _/ <\   |   /> \_   | |        just kidding :-)
        |  (_____)  |  (_____)  | |_
        | ~~~oOo~~~ | ~~~0oO~~~ |/__|
       _|====\_=====|=====_/====|_ ||
      |_|\_________ O _________/|_|||
       ||//////////|_|\\\\\|| ||
       || ||       |\_\\        || ||
       ||/||        \\_\\       ||/||
       ||/||         \)_\)      ||/||
       || ||         \  O /     || ||
       ||             \  /      || LGB

                   \________/======
                   / ( || ) \\

</details>

You can expand all by running this in your browser console:

```js
(() => {
  const readmeDiv = document.getElementById("readme");
  const detailsElements = readmeDiv.getElementsByTagName("details");
  for (let i = 0; i < detailsElements.length; i++) {
    detailsElements[i].setAttribute("open", "true");
  }
})();
```
    "bugtrack_url": null,
    "license": "BSD 3-Clause No Nuclear License\n        \n        Copyright (c) 2021, Jacob Chapman\n        All rights reserved.\n        \n        Redistribution and use in source and binary forms, with or without\n        modification, are permitted provided that the following conditions are met:\n        \n        * Redistributions of source code must retain the above copyright notice, this\n          list of conditions and the following disclaimer.\n        \n        * Redistributions in binary form must reproduce the above copyright notice,\n          this list of conditions and the following disclaimer in the documentation\n          and/or other materials provided with the distribution.\n        \n        * Neither the name of the copyright holder nor the names of its\n          contributors may be used to endorse or promote products derived from\n          this software without specific prior written permission.\n        \n        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n        AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n        IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n        DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n        FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n        DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n        SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n        CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n        OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n        OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n        \n        You acknowledge that this software is not designed nor intended for use in the\n        design, construction, operation or maintenance of any nuclear facility.",
    "summary": "xk library",
    "version": "2.6.22",
    "project_urls": {
        "documentation": "https://github.com/chapmanjacobd/library#usage",
        "homepage": "https://github.com/chapmanjacobd/library#readme",
        "repository": "https://github.com/chapmanjacobd/library/"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "6598185b208b19fe48e5d777e1e3c7b078371e4d39b6fa77f8f94d1d316789cc",
                "md5": "b11c68152c1cc86eea70a92d5ae63395",
                "sha256": "62be01c2a5b008fb95f5cb0b17833b6d785f5faa32a2f5026dabab7e2d088264"
            },
            "downloads": -1,
            "filename": "xklb-2.6.22-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "b11c68152c1cc86eea70a92d5ae63395",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 300229,
            "upload_time": "2024-04-23T08:42:15",
            "upload_time_iso_8601": "2024-04-23T08:42:15.873065Z",
            "url": "https://files.pythonhosted.org/packages/65/98/185b208b19fe48e5d777e1e3c7b078371e4d39b6fa77f8f94d1d316789cc/xklb-2.6.22-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "3c21c764eb867ea1001f9d3bd103614fe027d9b351cad84107db5d1394d6029f",
                "md5": "2dd8dd94e2fdc0cc481c5d37bd294a44",
                "sha256": "6d36883dd9fde83b5a00e932dd666d7b38dce673c0d93b9e5b18bd339dfa0c5e"
            },
            "downloads": -1,
            "filename": "xklb-2.6.22.tar.gz",
            "has_sig": false,
            "md5_digest": "2dd8dd94e2fdc0cc481c5d37bd294a44",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 363374,
            "upload_time": "2024-04-23T08:42:27",
            "upload_time_iso_8601": "2024-04-23T08:42:27.766849Z",
            "url": "https://files.pythonhosted.org/packages/3c/21/c764eb867ea1001f9d3bd103614fe027d9b351cad84107db5d1394d6029f/xklb-2.6.22.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-23 08:42:27",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "chapmanjacobd",
    "github_project": "library#usage",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "xklb"
}
        