# `wikiteam3`
![Dynamic JSON Badge](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Farchive.org%2Fadvancedsearch.php%3Fq%3Dsubject%3Awikiteam3%26rows%3D1%26page%3D1%26output%3Djson&query=%24.response.numFound&label=WikiTeam3%20Dumps%40IA)
[![PyPI version](https://badge.fury.io/py/wikiteam3.svg)](https://badge.fury.io/py/wikiteam3)
<!-- !["MediaWikiArchive.png"](./MediaWikiArchive.png) -->
<div align=center><img width = "150" height ="150" src ="https://raw.githubusercontent.com/saveweb/wikiteam3/v4-main/MediaWikiArchive.png"/></div>
> Countless MediaWikis are still waiting to be archived.
>
> _Image by [@gledos](https://github.com/gledos/)_
`wikiteam3` is a fork of `mediawiki-scraper`.
<details>
## Why we forked mediawiki-scraper
Originally, mediawiki-scraper was itself named wikiteam3, but the upstream wikiteam project (the Python 2 version) suggested changing the name to avoid confusion with the original wikiteam.
Half a year later, we had seen no Python 3 porting progress in the original wikiteam, and mediawiki-scraper lacked code reviewers.
So we decided to go against that suggestion: we forked the project, renamed it back to wikiteam3, put the code here, and released it to PyPI.
Everything is still under the GPLv3 license.
</details>
## For webmasters
We archive every MediaWiki site yearly and upload the dumps to the Internet Archive.
We crawl sites with a 1.5 s crawl delay by default, and we respect the `Retry-After` header.
If you don’t want your wiki to be archived, add the following to your `<domain>/robots.txt`:
```robots.txt
User-agent: wikiteam3
Disallow: /
```
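The effect of that rule can be checked with Python's standard `urllib.robotparser` (a quick sketch; the hostname is a placeholder, and the rules are fed in directly instead of being fetched over HTTP):

```python
import urllib.robotparser

# Parse the robots.txt rules shown above.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: wikiteam3",
    "Disallow: /",
])

# wikiteam3 is blocked from everything; other crawlers are unaffected.
print(rp.can_fetch("wikiteam3", "https://wiki.example.org/wiki/Main_Page"))    # False
print(rp.can_fetch("SomeOtherBot", "https://wiki.example.org/wiki/Main_Page"))  # True
```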
## Installation/Upgrade
```shell
pip install wikiteam3 --upgrade
```
>[!NOTE]
> For a public MediaWiki, you don't need to install wikiteam3 locally. You can send an archive request (including the reason, e.g. the wiki is about to shut down, or you need a wikidump to migrate to another wikifarm) to the wikiteam IRC channel, and an online member will run a [wikibot](https://wikibot.digitaldragon.dev/) job for you.
>
> We also accept DokuWiki and PukiWiki archive requests.
>
>
> - wikiteam IRC (webirc): https://webirc.hackint.org/#irc://irc.hackint.org/wikiteam
> - wikiteam IRC logs: https://irclogs.archivete.am/wikiteam
## Dumpgenerator usage
<!-- DUMPER -->
<details>
```bash
usage: wikiteam3dumpgenerator [-h] [-v] [--cookies cookies.txt] [--delay 1.5]
[--retries 5] [--path PATH] [--resume] [--force]
[--user USER] [--pass PASSWORD]
[--http-user HTTP_USER]
[--http-pass HTTP_PASSWORD] [--insecure]
[--verbose] [--api_chunksize 50] [--api API]
[--index INDEX] [--index-check-threshold 0.80]
[--xml] [--curonly] [--xmlapiexport]
[--xmlrevisions] [--xmlrevisions_page]
[--namespaces 1,2,3] [--exnamespaces 1,2,3]
[--images] [--bypass-cdn-image-compression]
[--image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z]
[--ia-wbm-booster {0,1,2,3}]
[--assert-max-pages 123]
[--assert-max-edits 123]
[--assert-max-images 123]
[--assert-max-images-bytes 123]
[--get-wiki-engine] [--failfast] [--upload]
[-g UPLOADER_ARGS]
[wiki]
options:
-h, --help show this help message and exit
-v, --version show program's version number and exit
--cookies cookies.txt
path to a cookies.txt file
--delay 1.5 adds a delay (in seconds) [NOTE: most HTTP servers
have a 5s HTTP/1.1 keep-alive timeout, you should
consider it if you wanna reuse the connection]
--retries 5 Maximum number of retries for
--path PATH path to store wiki dump at
--resume resumes previous incomplete dump (requires --path)
--force download it even if Wikimedia site or a recent dump
exists in the Internet Archive
--user USER Username if MediaWiki authentication is required.
--pass PASSWORD Password if MediaWiki authentication is required.
--http-user HTTP_USER
Username if HTTP authentication is required.
--http-pass HTTP_PASSWORD
Password if HTTP authentication is required.
--insecure Disable SSL certificate verification
--verbose
--api_chunksize 50 Chunk size for MediaWiki API (arvlimit, ailimit, etc.)
wiki URL to wiki (e.g. http://wiki.domain.org), auto
detects API and index.php
--api API URL to API (e.g. http://wiki.domain.org/w/api.php)
--index INDEX URL to index.php (e.g.
http://wiki.domain.org/w/index.php), (not supported
with --images on newer(?) MediaWiki without --api)
--index-check-threshold 0.80
pass index.php check if result is greater than (>)
this value (default: 0.80)
Data to download:
What info download from the wiki
--xml Export XML dump using Special:Export (index.php).
(supported with --curonly)
--curonly store only the latest revision of pages
--xmlapiexport Export XML dump using API:revisions instead of
Special:Export, use this when Special:Export fails and
xmlrevisions not supported. (supported with --curonly)
--xmlrevisions Export all revisions from an API generator
(API:Allrevisions). MediaWiki 1.27+ only. (not
supported with --curonly)
--xmlrevisions_page [[! Development only !]] Export all revisions from an
API generator, but query page by page MediaWiki 1.27+
only. (default: --curonly)
--namespaces 1,2,3 comma-separated value of namespaces to include (all by
default)
--exnamespaces 1,2,3 comma-separated value of namespaces to exclude
--images Generates an image dump
Image dump options:
Options for image dump (--images)
--bypass-cdn-image-compression
Bypass CDN image compression. (CloudFlare Polish,
etc.) [WARNING: This will increase CDN origin traffic,
and not effective for all HTTP Server/CDN, please
don't use this blindly.]
--image-timestamp-interval 2019-01-02T01:36:06Z/2023-08-12T10:36:06Z
Only download images uploaded in the given time
interval. [format: ISO 8601 UTC interval] (only works
with api)
--ia-wbm-booster {0,1,2,3}
Download images from Internet Archive Wayback Machine
if possible, reduce the bandwidth usage of the wiki.
[0: disabled (default), 1: use earliest snapshot, 2:
use latest snapshot, 3: the closest snapshot to the
image's upload time]
Assertions:
What assertions to check before actually downloading, if any assertion
fails, program will exit with exit code 45. [NOTE: This feature requires
correct siteinfo API response from the wiki, and not working properly with
some wikis. But it's useful for mass automated archiving, so you can
schedule a re-run for HUGE wiki that may run out of your disk]
--assert-max-pages 123
Maximum number of pages to download
--assert-max-edits 123
Maximum number of edits to download
--assert-max-images 123
Maximum number of images to download
--assert-max-images-bytes 123
Maximum number of bytes to download for images [NOTE:
this assert happens after downloading images list]
Meta info:
What meta info to retrieve from the wiki
--get-wiki-engine returns the wiki engine
--failfast [lack maintenance] Avoid resuming, discard failing
wikis quickly. Useful only for mass downloads.
wikiteam3uploader params:
--upload (run `wikiteam3uplaoder` for you) Upload wikidump to
Internet Archive after successfully dumped
-g, --uploader-arg UPLOADER_ARGS
Arguments for uploader.
```
</details>
<!-- DUMPER -->
### Downloading a wiki with complete XML history and images
```bash
wikiteam3dumpgenerator http://wiki.domain.org --xml --images
```
>[!WARNING]
>
> `NTFS/Windows` users please note: when using `--images`, NTFS does not allow characters such as `:*?"<>|` in filenames, so some files may not be downloaded. Watch for `XXXXX could not be created by OS` errors in your `errors.log`.
> We will not add special handling for NTFS/EncFS "path too long/illegal filename" errors; we strongly recommend using ext4/xfs/btrfs, etc., instead.
> <details>
>
> - Introducing an "illegal filename rename" mechanism would add complexity. WikiTeam (Python 2) had one, but it caused more problems than it solved, so it was removed in WikiTeam3.
> - It would confuse the final user of the wikidump (usually the wiki site administrator).
> - NTFS is not suitable for large-scale image dumps with millions of files in a single directory. (Windows background services occasionally scan the whole disk; we assume nobody uses Windows/NTFS for large-scale MediaWiki archiving.)
> - Using another file system avoids all of these problems.
>
> </details>
### Manually specifying `api.php` and/or `index.php`
If the script can't find the `api.php` and/or `index.php` paths by itself, you can provide them:
```bash
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --xml --images
```
```bash
wikiteam3dumpgenerator --api http://wiki.domain.org/w/api.php --index http://wiki.domain.org/w/index.php \
--xml --images
```
If you only want the XML histories, just use `--xml`. For only the images, just `--images`. For only the current version of every page, `--xml --curonly`.
### Resuming an incomplete dump
<details>
```bash
wikiteam3dumpgenerator \
--api http://wiki.domain.org/w/api.php --xml --images --resume --path /path/to/incomplete-dump
```
In the above example, `--path` is only necessary if the download path (the wikidump directory) is not the default.
>[!NOTE]
>
> When resuming an incomplete dump, the configuration in `config.json` overrides the CLI parameters. (Not all CLI parameters are ignored, though; check `config.json` for details.)
`wikiteam3dumpgenerator` will also ask you if you want to resume if it finds an incomplete dump in the path where it is downloading.
</details>
## Using `wikiteam3uploader`
<!-- UPLOADER -->
<details>
```bash
usage: Upload wikidump to the Internet Archive. [-h] [-kf KEYS_FILE]
[-c {opensource,test_collection,wikiteam}]
[--dry-run] [-u]
[--bin-zstd BIN_ZSTD]
[--zstd-level {17,18,19,20,21,22}]
[--rezstd]
[--rezstd-endpoint URL]
[--bin-7z BIN_7Z]
[--parallel]
wikidump_dir
positional arguments:
wikidump_dir
options:
-h, --help show this help message and exit
-kf, --keys_file KEYS_FILE
Path to the IA S3 keys file. (first line: access key,
second line: secret key) [default:
~/.wikiteam3_ia_keys.txt]
-c, --collection {opensource,test_collection,wikiteam}
--dry-run Dry run, do not upload anything.
-u, --update Update existing item. [!! not implemented yet !!]
--bin-zstd BIN_ZSTD Path to zstd binary. [default: zstd]
--zstd-level {17,18,19,20,21,22}
Zstd compression level. [default: 17] If you have a
lot of RAM, recommend to use max level (22).
--rezstd [server-side recompression] Upload pre-compressed zstd
files to rezstd server for recompression with best
settings (which may eat 10GB+ RAM), then download
back. (This feature saves your lowend machine, lol)
--rezstd-endpoint URL
Rezstd server endpoint. [default: http://pool-
rezstd.saveweb.org/rezstd/] (source code:
https://github.com/yzqzss/rezstd)
--bin-7z BIN_7Z Path to 7z binary. [default: 7z]
--parallel Parallelize compression tasks
```
</details>
<!-- UPLOADER -->
### Requirements
> [!NOTE]
>
> Make sure you have the following requirements before using `wikiteam3uploader`. (You don't need them if you don't plan to upload the dump to IA.)
- an unbound localhost port 62954 (used for the multi-process compression queue)
- 3 GB+ RAM (~2.56 GB for compressing)
- a 64-bit OS (required by the 2 GB `wlog` size)
- `7z` (binary)
> Debian/Ubuntu: install `p7zip-full`
> [!NOTE]
>
> Windows: install <https://7-zip.org> and add `7z.exe` to PATH
- `zstd` (binary)
> 1.5.5+ (recommended), v1.5.0-v1.5.4 (DO NOT USE), 1.4.8 (minimum)
> install from <https://github.com/facebook/zstd>
> [!NOTE]
>
> Windows: add `zstd.exe` to PATH
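The zstd version constraint above (1.4.8 minimum, the 1.5.0-1.5.4 range broken, 1.5.5+ recommended) can be expressed as a small check; `acceptable_zstd` here is a hypothetical helper for illustration, not part of wikiteam3:

```python
def acceptable_zstd(version: str) -> bool:
    """Return True if this zstd release is safe for wikiteam3 dumps:
    at least 1.4.8, and not one of the broken 1.5.0-1.5.4 releases."""
    v = tuple(int(part) for part in version.split("."))
    if v < (1, 4, 8):
        return False  # too old
    if (1, 5, 0) <= v <= (1, 5, 4):
        return False  # known-bad range: DO NOT USE
    return True

print(acceptable_zstd("1.5.5"))  # True  (recommended)
print(acceptable_zstd("1.5.2"))  # False (broken range)
print(acceptable_zstd("1.4.8"))  # True  (minimum)
print(acceptable_zstd("1.3.0"))  # False (too old)
```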
### Uploader usage
> [!NOTE]
>
> Read `wikiteam3uploader --help` and do not forget `~/.wikiteam3_ia_keys.txt` before using `wikiteam3uploader`.
```bash
wikiteam3uploader {YOUR_WIKI_DUMP_PATH}
```
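The keys-file format described in the uploader help output (first line: access key, second line: secret key) can be read in a few lines; this is an illustrative sketch, not wikiteam3's actual loader:

```python
from pathlib import Path

def read_ia_keys(path: str) -> tuple:
    """Read an IA S3 keys file as wikiteam3uploader expects it:
    first line is the access key, second line is the secret key."""
    lines = Path(path).read_text().splitlines()
    access_key = lines[0].strip()
    secret_key = lines[1].strip()
    return access_key, secret_key
```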
## Checking dump integrity
TODO: xml2titles.py
If you want to check the XML dump integrity, type this into your command line to count title, page and revision XML tags:
```bash
grep -E '<title(.*?)>' *.xml -c; grep -E '<page(.*?)>' *.xml -c; grep \
"</page>" *.xml -c;grep -E '<revision(.*?)>' *.xml -c;grep "</revision>" *.xml -c
```
You should see something similar to this (not the actual numbers) - the first three numbers should be the same and the last two should be the same as each other:
```bash
580
580
580
5677
5677
```
If your first three numbers or your last two numbers differ, your XML dump is corrupt (it contains one or more unfinished `</page>` or `</revision>` tags). This is uncommon for small wikis, but large or very large wikis may fail here because XML pages were truncated while exporting and merging. The fix is to remove the XML dump and re-download it; this is tedious, and it can fail again.
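The same consistency check can be done in a few lines of Python instead of grep (a sketch that counts tags in a string rather than streaming a multi-GB dump file; it counts tags only, it is not full XML validation):

```python
import re

def xml_tag_counts(xml_text: str) -> dict:
    """Count the same tags the grep pipeline above counts."""
    return {
        "title": len(re.findall(r"<title[ >]", xml_text)),
        "page_open": len(re.findall(r"<page[ >]", xml_text)),
        "page_close": xml_text.count("</page>"),
        "rev_open": len(re.findall(r"<revision[ >]", xml_text)),
        "rev_close": xml_text.count("</revision>"),
    }

def dump_looks_complete(counts: dict) -> bool:
    """title/page-open/page-close must agree, and revision open/close must agree."""
    return (counts["title"] == counts["page_open"] == counts["page_close"]
            and counts["rev_open"] == counts["rev_close"])

# A complete fragment passes; a truncated one (missing closing tags) fails.
ok = "<page><title>A</title><revision>r1</revision></page>"
truncated = "<page><title>A</title><revision>r1"
print(dump_looks_complete(xml_tag_counts(ok)))         # True
print(dump_looks_complete(xml_tag_counts(truncated)))  # False
```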
## Importing a wikidump into MediaWiki / wikidump data tips
> [!IMPORTANT]
>
> In the article name, spaces and underscores are treated as equivalent and each is converted to the other in the appropriate context (underscore in URL and database keys, spaces in plain text). <https://www.mediawiki.org/wiki/Manual:Title.php#Article_name>
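That equivalence can be sketched as two trivial conversions (an illustration of the rule only, not MediaWiki's actual `Title.php` logic):

```python
def to_db_key(article_name: str) -> str:
    """Database/URL form of an article name: spaces become underscores."""
    return article_name.strip().replace(" ", "_")

def to_display(article_name: str) -> str:
    """Plain-text/display form: underscores become spaces."""
    return article_name.strip().replace("_", " ")

print(to_db_key("Main Page"))   # Main_Page
print(to_display("Main_Page"))  # Main Page
# The two forms refer to the same article:
print(to_display(to_db_key("Main Page")) == "Main Page")  # True
```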
> [!NOTE]
>
> `WikiTeam3` uses `zstd` to compress `.xml` and `.txt` files, and `7z` to pack images (media files).
> `zstd` is a very fast stream-compression algorithm; you can use `zstd -d` to decompress a `.zst` file/stream.
## Contributors
**WikiTeam** is the [Archive Team](http://www.archiveteam.org) [[GitHub](https://github.com/ArchiveTeam)] subcommittee on wikis.
It was founded and originally developed by [Emilio J. Rodríguez-Posada](https://github.com/emijrp), a Wikipedia veteran editor and amateur archivist. Thanks to people who have helped, especially to: [Federico Leva](https://github.com/nemobis), [Alex Buie](https://github.com/ab2525), [Scott Boyd](http://www.sdboyd56.com), [Hydriz](https://github.com/Hydriz), Platonides, Ian McEwen, [Mike Dupont](https://github.com/h4ck3rm1k3), [balr0g](https://github.com/balr0g) and [PiRSquared17](https://github.com/PiRSquared17).
**Mediawiki-Scraper** The Python 3 initiative is currently being led by [Elsie Hupp](https://github.com/elsiehupp), with contributions from [Victor Gambier](https://github.com/vgambier), [Thomas Karcher](https://github.com/t-karcher), [Janet Cobb](https://github.com/randomnetcat), [yzqzss](https://github.com/yzqzss), [NyaMisty](https://github.com/NyaMisty) and [Rob Kam](https://github.com/robkam).
**WikiTeam3** Every archivist who has uploaded a wikidump to the [Internet Archive](https://archive.org/search?query=subject%3Awikiteam3).