cc2imgcap

Name: cc2imgcap
Version: 1.3.0
Home page: https://github.com/rom1504/cc2imgcap
Summary: Easily convert common crawl to image caption set using pyspark
Upload time: 2022-12-05 00:55:05
Author: Romain Beaumont
License: MIT
Keywords: machine learning
Requirements: pyspark, pysimdjson, fsspec, pandas, loguru, pyarrow, fastwarc, s3fs, fire, requests
# cc2imgcap
[![pypi](https://img.shields.io/pypi/v/cc2imgcap.svg)](https://pypi.python.org/pypi/cc2imgcap)
[![Try it on gitpod](https://img.shields.io/badge/try-on%20gitpod-brightgreen.svg)](https://gitpod.io/#https://github.com/rom1504/cc2imgcap)

Easily convert Common Crawl to an image caption set using pyspark.

Common Crawl provides [5M WAT files](https://commoncrawl.org/the-data/get-started/), which record the links of the web.
This simple tool lets you process one WAT file in about 50 seconds and extract each image link along with its alt text.

It also deduplicates records on url+text, which saves output space and speeds up the process.
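For intuition, this is what deduplication on the url+text pair looks like in PySpark. The column names and records here are illustrative, not the project's actual schema:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[2]").getOrCreate()
rows = [
    ("http://example.com/cat.jpg", "a cat"),
    ("http://example.com/cat.jpg", "a cat"),     # exact duplicate: dropped
    ("http://example.com/cat.jpg", "a kitten"),  # same url, new text: kept
]
df = spark.createDataFrame(rows, ["url", "text"])
print(df.dropDuplicates(["url", "text"]).count())  # 2
spark.stop()
```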

This makes it possible to do the first step of building a dataset like [laion5B](https://laion.ai/blog/laion-5b/) in about 70k CPU core hours (`5*10^6 * 50 / 3600`).
That's about `$2.8k` on AWS EC2 at $0.04 per core hour.
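The core-hour and cost figures follow directly from that arithmetic; here it is spelled out in Python:

```
# Reproduce the estimate above.
wat_files = 5_000_000            # WAT files in Common Crawl
seconds_per_wat = 50             # processing time per WAT file
core_hours = wat_files * seconds_per_wat / 3600
print(round(core_hours))         # 69444 -> ~70k cpu core hours
print(round(core_hours * 0.04))  # 2778  -> ~$2.8k at $0.04/core-hour
```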

## What hardware to pick?

`cpu128-dy-c6i-32xlarge` instances are advised. Spark stores the not-yet-deduplicated first stage on local disk, which should be an NVMe drive for speed during deduplication. At this first stage one WAT file produces about 20MB, so the total space over all workers must exceed 20MB times the WAT count: for the whole of Common Crawl, that means about 100TB. That fits, for example, in 150 instances with a 1TB NVMe drive each. 150 instances of 128 cores give 19200 cores, so the whole processing takes about 2h. Fewer instances with bigger drives can work too. If temporary disk space is an issue, the processing can also be done in multiple pieces by specifying `--multipart`.
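A minimal sanity check of that sizing arithmetic (the 150-instance/1TB figures are the example values from the paragraph above, not requirements):

```
# Back-of-the-envelope sizing for the whole of Common Crawl.
wat_files = 5_000_000              # total WAT files
first_stage_mb = 20                # per-WAT output before dedup
total_tb = wat_files * first_stage_mb / 1e6
print(f"{total_tb:.0f} TB")        # 100 TB of temporary disk needed

instances, cores_per_node, tb_per_node = 150, 128, 1
print(instances * tb_per_node >= total_tb)  # True: 150 x 1TB NVMe is enough
print(instances * cores_per_node)           # 19200 cores
```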

## Document type

This tool supports extracting several document types from CC:
* image/text: about 300B after dedup
* audio/text: about 3B after dedup

They can be selected with e.g. `--document_type audio`.
You may experiment with more document kinds by running `python examples/single_warc_example.py` and exploring the resulting output.parquet.
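For example, a quick look at that output with pandas (this assumes the example writes `output.parquet` to the working directory; the columns are whatever the extraction produced):

```
import pandas as pd

# Inspect what the single-WARC example extracted.
df = pd.read_parquet("output.parquet")
print(df.columns.tolist())  # which fields were extracted
print(df.head())
```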

## Install

```
pip install cc2imgcap
```

## Python examples

Check out these examples:
* [run_on_spark.py](examples/run_on_spark.py) shows how to bring your own spark session

If you have a slurm cluster, refer to https://gist.github.com/rom1504/67ada3dedbecc113ae2dbdfd9c642d83 to start a spark cluster there.
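As a rough sketch of what bringing your own session can look like (the builder settings are illustrative and the bucket name is a placeholder; see [run_on_spark.py](examples/run_on_spark.py) for the real example):

```
from pyspark.sql import SparkSession

from cc2imgcap import cc2imgcap

def build_spark():
    # Illustrative local session; tune master and memory for your cluster.
    return (
        SparkSession.builder.master("local[16]")
        .config("spark.driver.memory", "16g")
        .appName("cc2imgcap")
        .getOrCreate()
    )

cc2imgcap(output_path="s3://my-bucket/cc-output", spark_builder=build_spark)
```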

## API

This module exposes a single function `cc2imgcap` which takes the same arguments as the command line tool:
* **output_path** the output path, which should probably start with s3://. The output will be written to this path suffixed by the date (*required*)
* **wat_index_count** the number of WAT index files to read; can be None for all (*default 1*)
* **wat_count** the number of WAT files to read; can be None for all; randomly subsamples if set (*default 100*)
* **master** the spark master url (*default local*)
* **num_cores** the number of cores of each spark executor (*default 128*)
* **mem_gb** the memory of each spark executor in GB (*default 256*)
* **multipart** runs the processing in the specified number of parts, merged at the end (*default None*)
* **shuffle** randomly shuffles the output right before saving (*default True*)
* **resume** the specific output path to resume from (*default None*)
* **spark_builder** a function that creates a spark session; None defaults to the built-in method (*default None*)
* **document_type** the kind of document to extract (*default image*)
* **source_cc_protocol** get common crawl over http or s3 (*default s3*)
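For example, a small test run over 10 WAT files might look like this from the command line (assuming pip installs a `cc2imgcap` console script matching the function above; the bucket name is a placeholder):

```
cc2imgcap --output_path s3://my-bucket/cc-output --wat_index_count 1 --wat_count 10 --document_type image
```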

## For development

Work either locally or in [gitpod](https://gitpod.io/#https://github.com/rom1504/cc2imgcap) (do `export PIP_USER=false` there).

Set up a virtualenv:

```
python3 -m venv .env
source .env/bin/activate
pip install -e .
```

To run tests, first install the test requirements:
```
pip install -r requirements-test.txt
```
then:
```
make lint
make test
```

You can use `make black` to reformat the code.

Use `python -m pytest -x -s -v tests -k "dummy"` to run a specific test.



            
