wikiteam3

Name: wikiteam3
Version: 4.2.6
Home page: https://github.com/saveweb/wikiteam3
Summary: Tools for downloading and preserving MediaWikis. We archive MediaWikis, from Wikipedia to tiniest wikis.
Upload time: 2024-04-20 09:18:16
Maintainer: yzqzss
Author: yzqzss
Docs URL: None
Requires Python: <4.0,>=3.8
License: GPL-3.0-or-later
Keywords: archiveteam, mediawiki, preservation, wiki, wikipedia
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.

# `wikiteam3`

![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Farchive.org%2Fadvancedsearch.php%3Fq%3Dsubject%3Awikiteam3%26rows%3D1%26page%3D1%26output%3Djson&query=%24.response.numFound&label=WikiTeam3%20Dumps%40IA)
[![PyPI version](https://badge.fury.io/py/wikiteam3.svg)](https://badge.fury.io/py/wikiteam3)

<!-- !["MediaWikiArchive.png"](./MediaWikiArchive.png) -->
<div align=center><img width = "150" height ="150" src ="https://raw.githubusercontent.com/saveweb/wikiteam3/v4-main/MediaWikiArchive.png"/></div>

> Countless MediaWikis are still waiting to be archived.
>
> _Image by [@gledos](https://github.com/gledos/)_

`wikiteam3` is a fork of `mediawiki-scraper`.

<details>

## Why we forked mediawiki-scraper

Originally, mediawiki-scraper was named wikiteam3, but the wikiteam upstream (the py2 version) suggested changing the name to avoid confusion with the original wikiteam.  
Half a year later, we had not seen any py3 porting progress in the original wikiteam, and mediawiki-scraper lacked code reviewers.  
So we decided to go against that suggestion, fork the project, name it back to wikiteam3, keep the code here, and release it to PyPI.

Everything is still under the GPLv3 license.

</details>

## Installation/Upgrade

```shell
pip install wikiteam3 --upgrade
```
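
A quick way to confirm the installation worked, using the `--version` flag listed in the help below:

```bash
wikiteam3dumpgenerator --version
```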

## Dumpgenerator usage

<!-- DUMPER -->
<details>

```bash
usage: wikiteam3dumpgenerator [-h] [-v] [--cookies cookies.txt] [--delay 1.5]
                              [--retries 5] [--path PATH] [--resume] [--force]
                              [--user USER] [--pass PASSWORD]
                              [--http-user HTTP_USER]
                              [--http-pass HTTP_PASSWORD] [--insecure]
                              [--verbose] [--api_chunksize 50] [--api API]
                              [--index INDEX] [--index-check-threshold 0.80]
                              [--xml] [--curonly] [--xmlapiexport]
                              [--xmlrevisions] [--xmlrevisions_page]
                              [--namespaces 1,2,3] [--exnamespaces 1,2,3]
                              [--images] [--bypass-cdn-image-compression]
                              [--image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z]
                              [--ia-wbm-booster {0,1,2,3}]
                              [--assert-max-pages 123]
                              [--assert-max-edits 123]
                              [--assert-max-images 123]
                              [--assert-max-images-bytes 123]
                              [--get-wiki-engine] [--failfast] [--upload]
                              [-g UPLOADER_ARGS]
                              [wiki]

options:
  -h, --help            show this help message and exit
  -v, --version         show program's version number and exit
  --cookies cookies.txt
                        path to a cookies.txt file
  --delay 1.5           adds a delay (in seconds) [NOTE: most HTTP servers
                        have a 5s HTTP/1.1 keep-alive timeout, you should
                        consider it if you wanna reuse the connection]
  --retries 5           Maximum number of retries for
  --path PATH           path to store wiki dump at
  --resume              resumes previous incomplete dump (requires --path)
  --force               download it even if Wikimedia site or a recent dump
                        exists in the Internet Archive
  --user USER           Username if MediaWiki authentication is required.
  --pass PASSWORD       Password if MediaWiki authentication is required.
  --http-user HTTP_USER
                        Username if HTTP authentication is required.
  --http-pass HTTP_PASSWORD
                        Password if HTTP authentication is required.
  --insecure            Disable SSL certificate verification
  --verbose
  --api_chunksize 50    Chunk size for MediaWiki API (arvlimit, ailimit, etc.)

  wiki                  URL to wiki (e.g. http://wiki.domain.org), auto
                        detects API and index.php
  --api API             URL to API (e.g. http://wiki.domain.org/w/api.php)
  --index INDEX         URL to index.php (e.g.
                        http://wiki.domain.org/w/index.php), (not supported
                        with --images on newer(?) MediaWiki without --api)
  --index-check-threshold 0.80
                        pass index.php check if result is greater than (>)
                        this value (default: 0.80)

Data to download:
  What info download from the wiki

  --xml                 Export XML dump using Special:Export (index.php).
                        (supported with --curonly)
  --curonly             store only the lastest revision of pages
  --xmlapiexport        Export XML dump using API:revisions instead of
                        Special:Export, use this when Special:Export fails and
                        xmlrevisions not supported. (supported with --curonly)
  --xmlrevisions        Export all revisions from an API generator
                        (API:Allrevisions). MediaWiki 1.27+ only. (not
                        supported with --curonly)
  --xmlrevisions_page   [[! Development only !]] Export all revisions from an
                        API generator, but query page by page MediaWiki 1.27+
                        only. (default: --curonly)
  --namespaces 1,2,3    comma-separated value of namespaces to include (all by
                        default)
  --exnamespaces 1,2,3  comma-separated value of namespaces to exclude
  --images              Generates an image dump

Image dump options:
  Options for image dump (--images)

  --bypass-cdn-image-compression
                        Bypass CDN image compression. (CloudFlare Polish,
                        etc.)
  --image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z
                        Only download images uploaded in the given time
                        interval. [format: ISO 8601 UTC interval] (only works
                        with api)
  --ia-wbm-booster {0,1,2,3}
                        Download images from Internet Archive Wayback Machine
                        if possible, reduce the bandwidth usage of the wiki.
                        [0: disabled (default), 1: use earliest snapshot, 2:
                        use latest snapshot, 3: the closest snapshot to the
                        image's upload time]

Assertions:
  What assertions to check before actually downloading, if any assertion
  fails, program will exit with exit code 45. [NOTE: This feature requires
  correct siteinfo API response from the wiki, and not working properly with
  some wikis. But it's useful for mass automated archiving, so you can
  schedule a re-run for HUGE wiki that may run out of your disk]

  --assert-max-pages 123
                        Maximum number of pages to download
  --assert-max-edits 123
                        Maximum number of edits to download
  --assert-max-images 123
                        Maximum number of images to download
  --assert-max-images-bytes 123
                        Maximum number of bytes to download for images [NOTE:
                        this assert happens after downloading images list]

Meta info:
  What meta info to retrieve from the wiki

  --get-wiki-engine     returns the wiki engine
  --failfast            [lack maintenance] Avoid resuming, discard failing
                        wikis quickly. Useful only for mass downloads.

wikiteam3uploader params:
  --upload              (run `wikiteam3uplaoder` for you) Upload wikidump to
                        Internet Archive after successfully dumped
  -g UPLOADER_ARGS, --uploader-arg UPLOADER_ARGS
                        Arguments for uploader.

```
</details>

<!-- DUMPER -->

### Downloading a wiki with complete XML history and images

```bash
wikiteam3dumpgenerator http://wiki.domain.org --xml --images
```
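
On slower or rate-limited servers you may want to add a delay and extra retries; a sketch using the `--delay` and `--retries` flags from the help above:

```bash
wikiteam3dumpgenerator http://wiki.domain.org --xml --images --delay 2 --retries 5
```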

>[!WARNING]
>
> `NTFS/Windows` users, please note: when using `--images`, some files may not be downloaded because NTFS does not allow characters such as `:*?"<>|` in filenames. Watch for `XXXXX could not be created by OS` errors in your `errors.log` (a quick check is shown after this note).
> We will not add special handling for NTFS/EncFS "path too long/illegal filename" errors; we highly recommend using ext4/xfs/btrfs, etc.
> <details>
>
> - Introducing an "illegal filename rename" mechanism would add complexity. WikiTeam (Python 2) had this, but it caused more problems than it solved, so it was removed in WikiTeam3.
> - It would confuse the end user of the wikidump (usually the wiki site administrator).
> - NTFS is not suitable for large-scale image dumps with millions of files in a single directory. (Windows background services occasionally scan the whole disk; we assume nobody is using Windows/NTFS for large-scale MediaWiki archiving.)
> - Using another file system solves all of these problems.
> </details>
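
A quick way to spot affected files (a simple check, assuming `errors.log` sits in your dump directory):

```bash
grep "could not be created by OS" errors.log
```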

### Manually specifying `api.php` and/or `index.php`

If the script can't find the `api.php` and/or `index.php` paths by itself, you can provide them:

```bash
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images
```

```bash
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
    --xml --images
```

If you only want the XML histories, just use `--xml`. For only the images, just `--images`. For only the current version of every page, `--xml --curonly`.
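
For example, with the same placeholder wiki URL:

```bash
# XML histories only
wikiteam3dumpgenerator http://wiki.domain.org --xml

# Images only
wikiteam3dumpgenerator http://wiki.domain.org --images

# Only the current revision of every page
wikiteam3dumpgenerator http://wiki.domain.org --xml --curonly
```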

### Resuming an incomplete dump

<details>

```bash
wikiteam3dumpgenerator \
    --api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump
```

In the above example, `--path` is only necessary if the download path (wikidump dir) is not the default.

>[!NOTE]
>
> When resuming an incomplete dump, the configuration in `config.json` overrides the CLI parameters. (Not all CLI parameters are ignored; check `config.json` for details.)

`wikiteam3dumpgenerator` will also ask whether you want to resume if it finds an incomplete dump in the path it is downloading to.

</details>

## Using `wikiteam3uploader`

<!-- UPLOADER -->
<details>

```bash
usage:  Upload wikidump to the Internet Archive. [-h] [-kf KEYS_FILE]
                                                 [-c {opensource,test_collection,wikiteam}]
                                                 [--dry-run] [-u]
                                                 [--bin-zstd BIN_ZSTD]
                                                 [--zstd-level {17,18,19,20,21,22}]
                                                 [--rezstd]
                                                 [--rezstd-endpoint URL]
                                                 [--bin-7z BIN_7Z]
                                                 [--parallel]
                                                 wikidump_dir

positional arguments:
  wikidump_dir

options:
  -h, --help            show this help message and exit
  -kf KEYS_FILE, --keys_file KEYS_FILE
                        Path to the IA S3 keys file. (first line: access key,
                        second line: secret key) [default:
                        ~/.wikiteam3_ia_keys.txt]
  -c {opensource,test_collection,wikiteam}, --collection {opensource,test_collection,wikiteam}
  --dry-run             Dry run, do not upload anything.
  -u, --update          Update existing item. [!! not implemented yet !!]
  --bin-zstd BIN_ZSTD   Path to zstd binary. [default: zstd]
  --zstd-level {17,18,19,20,21,22}
                        Zstd compression level. [default: 17] If you have a
                        lot of RAM, recommend to use max level (22).
  --rezstd              [server-side recompression] Upload pre-compressed zstd
                        files to rezstd server for recompression with best
                        settings (which may eat 10GB+ RAM), then download
                        back. (This feature saves your lowend machine, lol)
  --rezstd-endpoint URL
                        Rezstd server endpoint. [default: http://pool-
                        rezstd.saveweb.org/rezstd/] (source code:
                        https://github.com/yzqzss/rezstd)
  --bin-7z BIN_7Z       Path to 7z binary. [default: 7z]
  --parallel            Parallelize compression tasks

```
</details>

<!-- UPLOADER -->

### Requirements

> [!NOTE]
>
> Make sure you meet the following requirements before using `wikiteam3uploader`; you don't need to install them if you don't plan to upload dumps to IA. (A quick environment check is sketched after the list.)

- an unbound localhost port 62954 (used as a queue for multiple compression processes)
- 3GB+ RAM (~2.56GB for compressing)
- a 64-bit OS (required by the 2G `wlog` size)

- `7z` (binary)
    > Debian/Ubuntu: install `p7zip-full`  

    > [!NOTE]
    >
    > Windows: install <https://7-zip.org> and add `7z.exe` to PATH
- `zstd` (binary)
    > 1.5.5+ (recommended); v1.5.0-v1.5.4 (DO NOT USE); 1.4.8 (minimum)  
    > install from <https://github.com/facebook/zstd>  

    > [!NOTE]
    >
    > Windows: add `zstd.exe` to PATH
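
A quick environment check before running the uploader (a sketch; package names and install paths vary by system):

```bash
# Verify the required binaries are on PATH
command -v 7z >/dev/null   || echo "7z not found (Debian/Ubuntu: apt install p7zip-full)"
command -v zstd >/dev/null || echo "zstd not found"

# Check the zstd version: 1.5.5+ recommended, avoid 1.5.0-1.5.4, 1.4.8 is the minimum
zstd --version
```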

### Uploader usage

> [!NOTE]
>
> Read `wikiteam3uploader --help` and make sure `~/.wikiteam3_ia_keys.txt` exists before using `wikiteam3uploader` (see the sketch below).
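
A minimal way to create the keys file, assuming you already have IA S3 keys (the format, first line access key and second line secret key, is described in the `--keys_file` help above):

```bash
# Write the IA S3 keys file expected by wikiteam3uploader
printf '%s\n%s\n' "YOUR_IA_ACCESS_KEY" "YOUR_IA_SECRET_KEY" > ~/.wikiteam3_ia_keys.txt
chmod 600 ~/.wikiteam3_ia_keys.txt  # keep the secret key private
```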

```bash
wikiteam3uploader {YOUR_WIKI_DUMP_PATH}
```
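
Options from the help above can be appended as needed; for instance (the dump directory name is illustrative):

```bash
wikiteam3uploader ./wiki.domain.org-wikidump \
    --keys_file ~/.wikiteam3_ia_keys.txt --zstd-level 22 --parallel
```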

## Checking dump integrity

TODO: xml2titles.py

If you want to check the XML dump integrity, type this into your command line to count title, page and revision XML tags:

```bash
grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; grep \
    "</page>" *.xml -c; grep -E '<revision(.*?)>' *.xml -c; grep "</revision>" *.xml -c
```

You should see something similar to this (not the actual numbers); the first three numbers should be the same, and the last two should match each other:

```bash
580
580
580
5677
5677
```

If your first three numbers or your last two numbers differ, your XML dump is corrupt (it contains one or more unfinished `</page>` or `</revision>` tags). This is uncommon for small wikis, but large or very large wikis may fail here because of XML pages truncated during exporting and merging. The solution is to remove the XML dump and re-download it, which is tedious and can fail again.

## Importing a wikidump into MediaWiki / wikidump data tips

> [!IMPORTANT]
>
> In the article name, spaces and underscores are treated as equivalent and each is converted to the other in the appropriate context (underscore in URL and database keys, spaces in plain text). <https://www.mediawiki.org/wiki/Manual:Title.php#Article_name>
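
For example, a title appears with spaces in the XML dump's plain text but with underscores in URLs and database keys; the two forms are interchangeable (illustrative only):

```bash
# "Main Page" (display form)  <->  "Main_Page" (URL/DB-key form)
echo "Main Page" | tr ' ' '_'   # prints: Main_Page
```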

> [!NOTE]
>
> `WikiTeam3` uses `zstd` to compress `.xml` and `.txt` files, and `7z` to pack images (media files).  
> `zstd` is a very fast stream compression algorithm; you can use `zstd -d` to decompress a `.zst` file/stream (see the examples below).
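
A minimal sketch for unpacking a dump locally (the filenames are illustrative; yours will follow the dump's own naming):

```bash
# Decompress a zstd-compressed XML dump, keeping the original .zst file
zstd -d -k wiki.domain.org-history.xml.zst

# List the contents of the packed images archive
7z l images.7z
```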

## Contributors

**WikiTeam** is the [Archive Team](http://www.archiveteam.org) [[GitHub](https://github.com/ArchiveTeam)] subcommittee on wikis.
It was founded and originally developed by [Emilio J. Rodríguez-Posada](https://github.com/emijrp), a Wikipedia veteran editor and amateur archivist. Thanks to people who have helped, especially to: [Federico Leva](https://github.com/nemobis), [Alex Buie](https://github.com/ab2525), [Scott Boyd](http://www.sdboyd56.com), [Hydriz](https://github.com/Hydriz), Platonides, Ian McEwen, [Mike Dupont](https://github.com/h4ck3rm1k3), [balr0g](https://github.com/balr0g) and [PiRSquared17](https://github.com/PiRSquared17).

**Mediawiki-Scraper** The Python 3 initiative is currently being led by [Elsie Hupp](https://github.com/elsiehupp), with contributions from [Victor Gambier](https://github.com/vgambier), [Thomas Karcher](https://github.com/t-karcher), [Janet Cobb](https://github.com/randomnetcat), [yzqzss](https://github.com/yzqzss), [NyaMisty](https://github.com/NyaMisty) and [Rob Kam](https://github.com/robkam).

**WikiTeam3** Every archivist who has uploaded a wikidump to the [Internet Archive](https://archive.org/search?query=subject%3Awikiteam3).

            
