mediawiki-dump

- Name: mediawiki-dump
- Version: 1.2.1
- Summary: Python package for working with MediaWiki XML content dumps
- Home page: https://github.com/macbre/mediawiki-dump
- Author: Maciej Brencz
- Requires Python: >=3.8
- License: MIT
- Keywords: dump, fandom, mediawiki, wikipedia, wikia
- Upload time: 2024-04-17 09:22:04

# mediawiki-dump
[![PyPI](https://img.shields.io/pypi/v/mediawiki_dump.svg)](https://pypi.python.org/pypi/mediawiki_dump)
[![Downloads](https://pepy.tech/badge/mediawiki_dump)](https://pepy.tech/project/mediawiki_dump)
[![CI](https://github.com/macbre/mediawiki-dump/actions/workflows/tests.yml/badge.svg)](https://github.com/macbre/mediawiki-dump/actions/workflows/tests.yml)
[![Coverage Status](https://coveralls.io/repos/github/macbre/mediawiki-dump/badge.svg?branch=master)](https://coveralls.io/github/macbre/mediawiki-dump?branch=master)

```
pip install mediawiki_dump
```

[Python3 package](https://pypi.org/project/mediawiki_dump/) for working with [MediaWiki XML content dumps](https://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki#Backup_the_content_of_the_wiki_(XML_dump)).

[Wikipedia](https://dumps.wikimedia.org/) (bz2 compressed) and [Wikia](https://community.fandom.com/wiki/Help:Database_download) (7zip) content dumps are supported.

## Dependencies

To read 7zip archives (used by Wikia's XML dumps), you need to install [`libarchive`](http://libarchive.org/):

```
sudo apt install libarchive-dev
```

## API

### Tokenizer

Allows you to clean up the wikitext:

```python
from mediawiki_dump.tokenizer import clean
clean('[[Foo|bar]] is a link')
'bar is a link'
```

And then tokenize the text:

```python
from mediawiki_dump.tokenizer import tokenize
tokenize('11. juni 2007 varð kunngjørt, at Svínoyar kommuna verður løgd saman við Klaksvíkar kommunu eftir komandi bygdaráðsval.')
['juni', 'varð', 'kunngjørt', 'at', 'Svínoyar', 'kommuna', 'verður', 'løgd', 'saman', 'við', 'Klaksvíkar', 'kommunu', 'eftir', 'komandi', 'bygdaráðsval']
```
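
The two helpers compose: `clean` strips the markup first and `tokenize` splits the resulting plain text into word tokens. A minimal sketch (the sample string below is just an illustration, not taken from any dump):

```python
from mediawiki_dump.tokenizer import clean, tokenize

# strip the [[link|label]] and ''emphasis'' markup, then split into word tokens
wikitext = "''[[Ormurin Langi]]'' er eitt [[kvæði]] um Ólav Tryggvason."
plain_text = clean(wikitext)
tokens = tokenize(plain_text)

print(plain_text)
print(tokens)
```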

### Dump reader

Fetch and parse dumps (using a local file cache):

```python
from mediawiki_dump.dumps import WikipediaDump
from mediawiki_dump.reader import DumpReader

dump = WikipediaDump('fo')
pages = DumpReader().read(dump)

[page.title for page in pages][:10]

['Main Page', 'Brúkari:Jon Harald Søby', 'Forsíða', 'Ormurin Langi', 'Regin smiður', 'Fyrimynd:InterLingvLigoj', 'Heimsyvirlýsingin um mannarættindi', 'Bólkur:Kvæði', 'Bólkur:Yrking', 'Kjak:Forsíða']
```

The `read` method yields a `DumpEntry` object for each revision.
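
Since `read` yields entries one by one, you can filter them while streaming through the dump instead of materializing the whole list first. A sketch that relies only on the `title` and `content` attributes used elsewhere in this README:

```python
from mediawiki_dump.dumps import WikipediaDump
from mediawiki_dump.reader import DumpReader

dump = WikipediaDump('fo')

# keep only the entries whose wikitext mentions "kvæði" (ballad)
ballads = [
    entry.title
    for entry in DumpReader().read(dump)
    if 'kvæði' in entry.content.lower()
]

print(len(ballads))
```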

By using the `DumpReaderArticles` class you can read article pages only:

```python
import logging; logging.basicConfig(level=logging.INFO)

from mediawiki_dump.dumps import WikipediaDump
from mediawiki_dump.reader import DumpReaderArticles

dump = WikipediaDump('fo')
reader = DumpReaderArticles()
pages = reader.read(dump)

print([page.title for page in pages][:25])

print(reader.get_dump_language())  # fo
```

Will give you:

```
INFO:DumpReaderArticles:Parsing XML dump...
INFO:WikipediaDump:Checking /tmp/wikicorpus_62da4928a0a307185acaaa94f537d090.bz2 cache file...
INFO:WikipediaDump:Fetching fo dump from <https://dumps.wikimedia.org/fowiki/latest/fowiki-latest-pages-meta-current.xml.bz2>...
INFO:WikipediaDump:HTTP 200 (14105 kB will be fetched)
INFO:WikipediaDump:Cache set
...
['WIKIng', 'Føroyar', 'Borðoy', 'Eysturoy', 'Fugloy', 'Forsíða', 'Løgmenn í Føroyum', 'GNU Free Documentation License', 'GFDL', 'Opið innihald', 'Wikipedia', 'Alfrøði', '2004', '20. juni', 'WikiWiki', 'Wiki', 'Danmark', '21. juni', '22. juni', '23. juni', 'Lívfrøði', '24. juni', '25. juni', '26. juni', '27. juni']
```

## Reading Wikia's dumps

```python
import logging; logging.basicConfig(level=logging.INFO)

from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReaderArticles

dump = WikiaDump('plnordycka')
pages = DumpReaderArticles().read(dump)

print([page.title for page in pages][:25])
```

Will give you:

```
INFO:DumpReaderArticles:Parsing XML dump...
INFO:WikiaDump:Checking /tmp/wikicorpus_f7dd3b75c5965ee10ae5fe4643fb806b.7z cache file...
INFO:WikiaDump:Fetching plnordycka dump from <https://s3.amazonaws.com/wikia_xml_dumps/p/pl/plnordycka_pages_current.xml.7z>...
INFO:WikiaDump:HTTP 200 (129 kB will be fetched)
INFO:WikiaDump:Cache set
INFO:WikiaDump:Reading wikicorpus_f7dd3b75c5965ee10ae5fe4643fb806b file from dump
...
INFO:DumpReaderArticles:Parsing completed, entries found: 615
['Nordycka Wiki', 'Strona główna', '1968', '1948', 'Ormurin Langi', 'Mykines', 'Trollsjön', 'Wyspy Owcze', 'Nólsoy', 'Sandoy', 'Vágar', 'Mørk', 'Eysturoy', 'Rakfisk', 'Hákarl', '1298', 'Sztokfisz', '1978', '1920', 'Najbardziej na północ', 'Svalbard', 'Hamferð', 'Rok w Skandynawii', 'Islandia', 'Rissajaure']
```
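
A typical follow-up is turning such a dump into a plain-text corpus. The sketch below combines the reader with the `clean` helper from the tokenizer section; the one-article-per-line file layout is my own choice, not something the package prescribes:

```python
from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReaderArticles
from mediawiki_dump.tokenizer import clean

dump = WikiaDump('plnordycka')

with open('corpus.txt', mode='wt', encoding='utf-8') as fp:
    for page in DumpReaderArticles().read(dump):
        # one cleaned article per line: title, a tab, then the flattened text
        text = clean(page.content).replace('\n', ' ')
        fp.write(f'{page.title}\t{text}\n')
```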

## Fetching full history

Pass `full_history` to the `BaseDump` constructor to fetch an XML content dump with the full history:

```python
import logging; logging.basicConfig(level=logging.INFO)

from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReaderArticles

dump = WikiaDump('macbre', full_history=True)  # fetch full history, including old revisions
pages = DumpReaderArticles().read(dump)

print('\n'.join([repr(page) for page in pages]))
```

Will give you:

```
INFO:DumpReaderArticles:Parsing completed, entries found: 384
<DumpEntry "Macbre Wiki" by Default at 2016-10-12T19:51:06+00:00>
<DumpEntry "Macbre Wiki" by Wikia at 2016-10-12T19:51:05+00:00>
<DumpEntry "Macbre Wiki" by Macbre at 2016-11-04T10:33:20+00:00>
<DumpEntry "Macbre Wiki" by FandomBot at 2016-11-04T10:37:17+00:00>
<DumpEntry "Macbre Wiki" by FandomBot at 2017-01-25T14:47:37+00:00>
<DumpEntry "Macbre Wiki" by Ryba777 at 2017-04-10T11:20:25+00:00>
<DumpEntry "Macbre Wiki" by Ryba777 at 2017-04-10T11:21:20+00:00>
<DumpEntry "Macbre Wiki" by Macbre at 2018-03-07T12:51:12+00:00>
<DumpEntry "Main Page" by Wikia at 2016-10-12T19:51:05+00:00>
<DumpEntry "FooBar" by Anonymous at 2016-11-08T10:15:33+00:00>
<DumpEntry "FooBar" by Anonymous at 2016-11-08T10:15:49+00:00>
...
<DumpEntry "YouTube tag" by FANDOMbot at 2018-06-05T11:45:44+00:00>
<DumpEntry "Maps" by Macbre at 2018-06-06T08:51:24+00:00>
<DumpEntry "Maps" by Macbre at 2018-06-07T08:17:13+00:00>
<DumpEntry "Maps" by Macbre at 2018-06-07T08:17:36+00:00>
<DumpEntry "Scary transclusion" by Macbre at 2018-07-24T14:52:20+00:00>
<DumpEntry "Lua" by Macbre at 2018-09-11T14:04:15+00:00>
<DumpEntry "Lua" by Macbre at 2018-09-11T14:14:24+00:00>
<DumpEntry "Lua" by Macbre at 2018-09-11T14:14:37+00:00>
```
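
With `full_history=True` every page is yielded once per revision, so aggregating by title is straightforward. A sketch counting revisions per page (only the `title` attribute is assumed):

```python
from collections import Counter

from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReaderArticles

dump = WikiaDump('macbre', full_history=True)

# each yielded entry is a single revision; group them by page title
revisions_per_page = Counter(entry.title for entry in DumpReaderArticles().read(dump))

for title, count in revisions_per_page.most_common(5):
    print(f'{title}: {count} revisions')
```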

## Reading dumps of selected articles

You can use the [`mwclient` Python library](https://mwclient.readthedocs.io/en/latest/index.html)
to fetch "live" dumps of selected articles from any MediaWiki-powered site.

```python
import mwclient
site = mwclient.Site('vim.fandom.com', path='/')

from mediawiki_dump.dumps import MediaWikiClientDump
from mediawiki_dump.reader import DumpReaderArticles

dump = MediaWikiClientDump(site, ['Vim documentation', 'Tutorial'])

pages = DumpReaderArticles().read(dump)

print('\n'.join([repr(page) for page in pages]))
```

Will give you:

```
<DumpEntry "Vim documentation" by Anonymous at 2019-07-05T09:39:47+00:00>
<DumpEntry "Tutorial" by Anonymous at 2019-07-05T09:41:19+00:00>
```
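
The "live" dump combines naturally with the tokenizer from above, for example to get plain text for a handful of articles. A minimal sketch reusing only calls shown earlier in this README:

```python
import mwclient

from mediawiki_dump.dumps import MediaWikiClientDump
from mediawiki_dump.reader import DumpReaderArticles
from mediawiki_dump.tokenizer import clean

site = mwclient.Site('vim.fandom.com', path='/')
dump = MediaWikiClientDump(site, ['Vim documentation', 'Tutorial'])

for page in DumpReaderArticles().read(dump):
    # print each article's title and the first 80 characters of its cleaned text
    print(page.title, clean(page.content)[:80])
```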

## Finding pages with a specific [parser tag](https://www.mediawiki.org/wiki/Manual:Tag_extensions)

Let's find pages where the no-longer-supported `<place>` tag is still used:

```python
import logging; logging.basicConfig(level=logging.INFO)

from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReader

dump = WikiaDump('plpoznan')
pages = DumpReader().read(dump)

with_places_tag = [
    page.title
    for page in pages
    if '<place ' in page.content
]

logging.info('Pages found: %d', len(with_places_tag))

with open("pages.txt", mode="wt", encoding="utf-8") as fp:
    for entry in with_places_tag:
        fp.write(entry + "\n")

logging.info("pages.txt file created")
```
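
If you also want the content of the matching tags rather than just the page titles, a regular expression over `page.content` will do. A sketch; the pattern is illustrative and assumes well-formed `<place ...>...</place>` pairs:

```python
import re

from mediawiki_dump.dumps import WikiaDump
from mediawiki_dump.reader import DumpReader

PLACE_TAG = re.compile(r'<place\s[^>]*>(.*?)</place>', re.DOTALL | re.IGNORECASE)

dump = WikiaDump('plpoznan')

for page in DumpReader().read(dump):
    for match in PLACE_TAG.finditer(page.content):
        # the page title plus the inner content of each <place> tag found on it
        print(page.title, match.group(1).strip())
```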
