vk-url-scraper

Name: vk-url-scraper
Version: 0.3.27
Summary: Scrape VK URLs to fetch info and media - python API or command line tool.
Upload time: 2024-01-23 12:00:19
Author: Bellingcat
Requires Python: >=3.7
License: MIT
Keywords: scraper, vk, vkontakte, vk-api, media-downloader
Requirements: brotli, certifi, charset-normalizer, idna, mutagen, pycryptodomex, requests, urllib3, vk-api, websockets, yt-dlp
# vk-url-scraper
Python library to scrape data, and especially media links like videos and photos, from vk.com URLs.


[![PyPI version](https://badge.fury.io/py/vk-url-scraper.svg)](https://badge.fury.io/py/vk-url-scraper)
[![PyPI download month](https://img.shields.io/pypi/dm/vk-url-scraper.svg)](https://pypi.python.org/pypi/vk-url-scraper/)
[![Documentation Status](https://readthedocs.org/projects/vk-url-scraper/badge/?version=latest)](https://vk-url-scraper.readthedocs.io/en/latest/?badge=latest)


You can use it via the [command line](#command-line-usage) or as a [python library](#python-library-usage); see the **[documentation](https://vk-url-scraper.readthedocs.io/en/latest/)** for details.

## Installation
You can install the most recent release from [pypi](https://pypi.org/project/vk-url-scraper/) via `pip install vk-url-scraper`.

To use the library you will need a valid username/password combination for vk.com. 
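For example, to install into a fresh virtual environment (the commands below assume a Unix-like shell; the venv step is optional):

```bash
# create and activate an isolated environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate

# install the latest release from PyPI
pip install vk-url-scraper

# or pin the version shown on this page
pip install vk-url-scraper==0.3.27
```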

## Command line usage
```bash
# run this to learn more about the parameters
vk_url_scraper --help

# scrape a URL and get the JSON result in the console
vk_url_scraper --username "username here" --password "password here" --urls https://vk.com/wall12345_6789
# OR
vk_url_scraper -u "username here" -p "password here" --urls https://vk.com/wall12345_6789
# you can also have multiple urls
vk_url_scraper -u "username here" -p "password here" --urls https://vk.com/wall12345_6789 https://vk.com/photo-12345_6789 https://vk.com/video12345_6789

# you can pass a token as well to avoid always authenticating 
# and possibly getting captcha prompts
# you can find the token in the generated vk_config.v2.json file by searching for "access_token"
vk_url_scraper -u "username" -p "password" -t "vktoken goes here" --urls https://vk.com/wall12345_6789

# save the JSON output into a file
vk_url_scraper -u "username here" -p "password here" --urls https://vk.com/wall12345_6789 > output.json

# download any photos or videos found in these URLS
# this will use or create an output/ folder and dump the files there
vk_url_scraper -u "username here" -p "password here" --download --urls https://vk.com/wall12345_6789
# or
vk_url_scraper -u "username here" -p "password here" -d --urls https://vk.com/wall12345_6789
```
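
The exact nesting of the generated `vk_config.v2.json` can vary, so a small helper like the one below (a sketch, not part of the package, assuming the file sits in the current working directory) can recursively search it for the `access_token` value instead of digging through the file by hand:

```python
import json
from typing import Any, Optional


def find_access_token(node: Any) -> Optional[str]:
    """Recursively search a parsed JSON structure for an "access_token" value."""
    if isinstance(node, dict):
        if "access_token" in node:
            return node["access_token"]
        for value in node.values():
            found = find_access_token(value)
            if found is not None:
                return found
    elif isinstance(node, list):
        for item in node:
            found = find_access_token(item)
            if found is not None:
                return found
    return None


# assumes vk_config.v2.json is in the current working directory
with open("vk_config.v2.json") as f:
    token = find_access_token(json.load(f))

print(token)  # pass this value to the -t option of the command line tool
```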

## Python library usage
```python
from vk_url_scraper import VkScraper

vks = VkScraper("username", "password")

# scrape any "photo" URL
res = vks.scrape("https://vk.com/photo1_278184324?rev=1")

# scrape any "wall" URL
res = vks.scrape("https://vk.com/wall-1_398461")

# scrape any "video" URL
res = vks.scrape("https://vk.com/video-6596301_145810025")
print(res[0]["text"]) # eg: -> to get the text from code
```

```python
# Every scrape* function returns a list of dicts shaped like:
{
	"id": "wall_id",
	"text": "text in this post",
	"datetime": "UTC datetime of the post",
	"attachments": {
		# only the keys that exist for this post: photo, video, link
		"photo": ["list of URLs at the highest available quality"],
		"video": ["list of URLs at the highest available quality"],
		"link": ["list of URLs at the highest available quality"],
	},
	"payload": "original JSON response converted to a dict, which you can parse for more data",
}
```
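
Because the attachment entries are plain HTTP(S) URLs, you can fetch them with any HTTP client. The sketch below uses `requests` (already a dependency of this package) to save every photo from a scrape result; the `output/` folder and file names are only illustrative:

```python
import os

import requests

from vk_url_scraper import VkScraper

vks = VkScraper("username", "password")
results = vks.scrape("https://vk.com/wall12345_6789")

os.makedirs("output", exist_ok=True)  # example folder name
for i, post in enumerate(results):
    for j, url in enumerate(post.get("attachments", {}).get("photo", [])):
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        # the .jpg extension is illustrative; adjust as needed
        with open(os.path.join("output", f"photo_{i}_{j}.jpg"), "wb") as f:
            f.write(response.content)
```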

See the [documentation](https://vk-url-scraper.readthedocs.io/en/latest/) for all available functions.

### TODO
* scrape album links
* scrape profile links
* docs online from sphinx

## Development
(More info in [CONTRIBUTING.md](CONTRIBUTING.md).)

1. Set up the dev environment with `pip install -r dev-requirements.txt` or `pipenv install -r dev-requirements.txt`.
2. Set up the runtime environment with `pip install -r requirements.txt` or `pipenv install -r requirements.txt`.
3. Run all checks with `make run-checks` (fixes style), or individually:
   1. To fix style: `black .` and `isort .` -> `flake8 .` to validate lint
   2. To do type checking: `mypy .`
   3. To test: `pytest .` (`pytest -v --color=yes --doctest-modules tests/ vk_url_scraper/` for verbose output, colors, and docstring examples)
4. Run `make docs` to generate the sphinx docs -> edit [conf.py](docs/source/conf.py) if needed.

To test the command line interface available in [__main__.py](vk_url_scraper/__main__.py) you need to pass the `-m` option to python, like so: `python -m vk_url_scraper -u "" -p "" --urls ...`


## Releasing a new version
1. Edit [version.py](vk_url_scraper/version.py) with the new version number.
2. Run `./scripts/release.sh` to create a tag and push it; alternatively:
   1. `git tag vx.y.z` to tag the version
   2. `git push origin vx.y.z` -> this triggers the release workflow and publishes the project on [pypi](https://pypi.org/project/vk-url-scraper/)
3. Go to https://readthedocs.org/ to deploy the new docs version (if the webhook is not set up).

### Fixing a failed release

If the GitHub Actions release workflow fails with an error that needs to be fixed, you'll have to delete both the tag and the corresponding release from GitHub. After you've pushed a fix, delete the tag from your local clone with

```bash
git tag -l | xargs git tag -d && git fetch -t
```
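
For the remote side, one way to delete the pushed tag and the corresponding release is sketched below; `gh` is the optional GitHub CLI, and the release can also be deleted from the GitHub web UI instead:

```bash
# delete the tag on the remote (replace vx.y.z with the failed version)
git push --delete origin vx.y.z

# delete the corresponding GitHub release (prompts for confirmation)
gh release delete vx.y.z
```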

Then repeat the steps above.
