llnl-scraper

Name: llnl-scraper
Version: 0.14.0
Home page: https://github.com/llnl/scraper
Summary: Package for extracting software repository metadata
Author: Ian Lee
Requires Python: >=3.6
Upload time: 2023-11-30 05:10:06
            # Scraper

Scraper is a tool for scraping and visualizing open source data from various
code hosting platforms, such as: GitHub.com, GitHub Enterprise, GitLab.com,
hosted GitLab, and Bitbucket Server.

## Getting Started: Code.gov

[Code.gov](https://code.gov) is a newly launched website of the US Federal
Government that allows the public to access metadata about the government's
custom-developed software. The site requires this metadata to function, and
this Python library can help produce it!

To get started, you will need a [GitHub Personal Auth
Token](https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/)
to make requests to the GitHub API. This should be set in your environment or
shell `rc` file with the name `GITHUB_API_TOKEN`:

```shell
    $ export GITHUB_API_TOKEN=XYZ

    $ echo "export GITHUB_API_TOKEN=XYZ" >> ~/.bashrc
```
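From Python, the same token can be read back through the environment. A minimal sketch (the `token` header format follows GitHub's usual token-auth convention; the `github_headers` helper is illustrative, not part of Scraper):

```python
import os

def github_headers(env=os.environ):
    """Build GitHub API auth headers from GITHUB_API_TOKEN, failing
    early with a clear message if the variable was never exported."""
    token = env.get("GITHUB_API_TOKEN")
    if not token:
        raise RuntimeError("GITHUB_API_TOKEN is not set; export it first")
    return {"Authorization": f"token {token}"}
```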

Additionally, to perform the labor hours estimation, you will need to install
`cloc` into your environment. This is typically done with a [Package
Manager](https://github.com/AlDanial/cloc#install-via-package-manager) such as
`npm` or `homebrew`.
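Before running the scraper, you can confirm `cloc` is actually on your `PATH`. A sketch using only the standard library (`have_cloc` is a hypothetical helper, not part of Scraper):

```python
import shutil

def have_cloc():
    """Return True if the cloc executable is found on PATH."""
    return shutil.which("cloc") is not None

if not have_cloc():
    print("cloc not found; labor hours estimation will not work")
```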

Then to generate a `code.json` file for your agency, you will need a
`config.json` file to coordinate the platforms you will connect to and scrape
data from. An example config file can be found in [demo.json](/demo.json). Once
you have your config file, you are ready to install and run the scraper!

```shell
    # Install Scraper from a local copy of this repository
    $ pip install -e .
    # OR
    # Install Scraper from PyPI
    $ pip install llnl-scraper

    # Run Scraper with your config file ``config.json``
    $ scraper --config config.json
```

A full example of the resulting `code.json` file can be [found
here](https://gist.github.com/IanLee1521/b7d7c0c2d8c24b10dd04edd5e8cab6c4).
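Once generated, the file can be sanity-checked from Python. A sketch assuming the code.gov schema's top-level `releases` list (`summarize_code_json` is an illustrative helper, not part of Scraper):

```python
import json

def summarize_code_json(path):
    """Load a generated code.json and return its top-level keys plus
    the number of inventoried releases (zero if the key is absent)."""
    with open(path) as fh:
        data = json.load(fh)
    return sorted(data), len(data.get("releases", []))
```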

## Config File Options

The configuration file is a json file that specifies what repository platforms
to pull projects from as well as some settings that can be used to override
incomplete or inaccurate data returned via the scraping.

The basic structure is:

```jsonc
{
    // REQUIRED
    "contact_email": "...",  // Used when the contact email cannot be found otherwise

    // OPTIONAL
    "agency": "...",         // Your agency abbreviation here
    "organization": "...",   // The organization within the agency
    "permissions": { ... },  // Object containing default values for usageType and exemptionText

    // Platform configurations, described in more detail below
    "GitHub": [ ... ],
    "GitLab": [ ... ],
    "Bitbucket": [ ... ],
}
```
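The config can also be written programmatically. The field names below mirror the structure above; the email, agency, and org values are placeholders:

```python
import json

# Illustrative values only; contact_email is the one required field.
config = {
    "contact_email": "opensource@example.gov",
    "agency": "DOE",
    "GitHub": [
        {
            "url": "https://github.com",
            "token": None,
            "public_only": True,
            "orgs": ["llnl"],
            "repos": [],
            "exclude": [],
        }
    ],
}

with open("config.json", "w") as fh:
    json.dump(config, fh, indent=4)
```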

```jsonc
"GitHub": [
    {
        "url": "https://github.com",  // GitHub.com or GitHub Enterprise URL to inventory
        "token": null,                // Private token for accessing this GitHub instance
        "public_only": true,          // Only inventory public repositories

        "connect_timeout": 4,  // The timeout in seconds for connecting to the server
        "read_timeout": 10,    // The timeout in seconds to wait for a response from the server

        "orgs": [ ... ],    // List of organizations to inventory
        "repos": [ ... ],   // List of single repositories to inventory
        "exclude": [ ... ]  // List of organizations / repositories to exclude from inventory
    }
],
```

```jsonc
"GitLab": [
    {
        "url": "https://gitlab.com",  // GitLab.com or hosted GitLab instance URL to inventory
        "token": null,                // Private token for accessing this GitHub instance
        "fetch_languages": false,     // Include individual calls to API for language metadata. Very slow, so defaults to false. (eg, for 191 projects on internal server, 5 seconds for False, 12 minutes, 38 seconds for True)

        "orgs": [ ... ],    // List of organizations to inventory
        "repos": [ ... ],   // List of single repositories to inventory
        "exclude": [ ... ]  // List of groups / repositories to exclude from inventory
    }
]
```

```jsonc
"Bitbucket": [
    {
        "url": "https://bitbucket.internal",  // Base URL for a Bitbucket Server instance
        "username": "",                       // Username to authenticate with
        "password": "",                       // Password to authenticate with
        "token": "",                          // Token to authenticate with, if supplied username and password are ignored

        "exclude": [ ... ]  // List of projects / repositories to exclude from inventory
    }
]
```

```jsonc
"TFS": [
    {
        "url": "https://tfs.internal",  // Base URL for a Team Foundation Server (TFS) or Visual Studio Team Services (VSTS) or Azure DevOps instance
        "token": null,                  // Private token for accessing this TFS instance

        "exclude": [ ... ]  // List of projects / repositories to exclude from inventory
    }
]
```
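Note that the snippets above use `//` comments for documentation, which Python's strict `json` parser rejects. A naive sketch for stripping them before parsing (assumes no `//` sequences, such as URLs, appear inside string values):

```python
import json
import re

def strip_jsonc(text):
    """Drop //-style line comments and trailing commas so annotated
    examples parse with json.loads (naive: breaks on '//' in strings)."""
    text = re.sub(r"//[^\n]*", "", text)        # remove line comments
    return re.sub(r",(\s*[}\]])", r"\1", text)  # remove trailing commas

snippet = '{\n "contact_email": "code@example.gov",  // required\n}'
print(json.loads(strip_jsonc(snippet)))
```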

## License

Scraper is released under an MIT license. For more details see the
[LICENSE](/LICENSE) file.

LLNL-CODE-705597
