# `goodcrap`
`goodcrap` is a python package that generates random data: it creates and fills data structures (tables, databases and `csv` files) with random data, generates [`Mage`](https://github.com/mage-ai/mage-ai) pipelines that the user can use to orchestrate filling the data structures, and generates random SQL queries.
## Motivation
This software enables data engineers to replicate the database schemas at their organisations and then generate fake data that resembles a random sample of the actual data. It also enables them to generate any number of random SQL queries for testing their analytics pipelines, as well as benchmarking their data platforms.
While public datasets, such as those hosted by Google or Kaggle, are a common starting point for people learning data analytics and machine learning, many of these datasets require extensive cleaning before they are usable in analytics pipelines. This makes them difficult for AI learners and practitioners to use.
Public datasets are also used by data engineers who want to test their ETL/ELT pipelines. They are particularly interested in data quantity, more than quality. Most public datasets are limited in size, which makes them less useful for testing pipelines or for benchmarking query execution times.
Nowadays, generating random data is increasingly a requirement for data teams. [It is a better alternative to using public datasets that require tedious cleaning](https://motherduck.com/blog/python-faker-duckdb-exploration/). While the Jaffle toy dataset provided by [`dbt`](https://docs.getdbt.com/docs/quickstarts/dbt-core/codespace) offers an easy way to generate a huge amount of data, its python package `jafgen` is limited to generating a dataset with a fixed set of tables.
`goodcrap` was developed to enable:
- AI learners to generate their own custom datasets
- AI practitioners to benchmark the scalability of their models and methods
- data engineers to test and benchmark their ETL/ELT data pipelines
- data engineers to benchmark query execution times against custom-made huge datasets
The data generated by `goodcrap` is, after all, crap. But it's good crap because:
- data values are configured based on `json` configuration files
- data can be generated to fill tables, databases, data warehouses and data lakes
- data values can be made totally random, or fulfill a certain distribution
- `goodcrap` server can generate time series data
## Installation
You can install `goodcrap` using the `pip` command as follows:
`pip install goodcrap`
## Basic usage
The simplest use-case scenario is generating a `csv` file with random data. `goodcrap` ships with a number of *template* tables that you can use. For example, let's generate 10,000 records in the `customers` table, using the random seed `3`:
`goodcrap --size 10000 --seed 3 --template_table customers --to_csv`
The file `customers.csv` will be generated.
`goodcrap` populates databases with random data, in addition to filling `csv` files. You can set `goodcrap` to connect to your database via a database configuration file, whose name is passed to `goodcrap` via the command line argument `--database_config`. This is a `json` file that looks like this (for a MySQL database):
```json
{
    "db_type": "mysql",
    "host": "localhost",
    "port": "3306",
    "user": "root",
    "passwd": "",
    "database": "goodcrap"
}
```
Here is an example command to create and fill the `customers` table in the database:
`goodcrap --size 1000 --seed 3 --database_config mysql_config --template_table customers --to_csv`
where `mysql_config` is the name of the configuration `json` file (`mysql_config.json`).
For every table `mytable` you want to fill with random values, you must provide either:
- a file `mytable.crap_labels.json`: this file tells `goodcrap` what sort of random values to generate for each record
- a sample database table or `csv` file with matching structure and with some values: `goodcrap` will learn how to generate new random values based on the sample values
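For illustration, a minimal `mytable.crap_labels.json` could look like the following (a hypothetical table; the provider names match the `customers` examples later in this README):

```json
{
    "first_name": "first_name",
    "city": "city",
    "credit_limit": {
        "type": "random_int",
        "min": 0,
        "max": 1000,
        "multiplier": 10
    }
}
```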
### Supported destinations
Currently, `goodcrap` can write data to the following destinations:
- MySQL
- SQLite
- Snowflake
- `csv` file
- `json` file
- `parquet` file
### Template data structures
`goodcrap` has a number of template tables and databases that you can use. They are in the `templates/` directory.
## The `crap_labels.json` settings file
For every table you want to generate, you have to provide a `crap_labels.json` file. If you are using the python library, you can pass the `crap_label` dictionary instead, as explained below.
The dictionary in `crap_labels.json` tells `goodcrap` how to fill each column in the table with random values. You can either use any of the `faker` providers there, or you can use the ones in `crappers`.
## How random data is made: `faker` and `crappers`
*in progress*
## Python library
The class `GoodCrap` is your `goodcrap` interface. You instantiate it with the key settings, and then generate the data by using the member functions `write_csv()`, `get_dataframe()` or `run()`.
- `write_csv()`: writes a `csv` file
- `get_dataframe()`: returns a `pandas` `DataFrame` object populated with the random data
- `run()`: the most generic function; it can create tables and databases and populate them
An example usage of the `goodcrap` library is as follows. Here we generate a `pandas` `DataFrame` for one of the template tables, `customers`:
```python
from goodcrap import GoodCrap

a = GoodCrap(seed=3, size=1000, template_table='customers')
df = a.get_dataframe()
```
The following example generates the data frame for some table, given its `crap_label` configuration object:
```python
from goodcrap import GoodCrap

gc = GoodCrap(size=10000, seed=123)
craplabels = {
    "customer_number": "ssn",
    "first_name": "first_name",
    "last_name": "last_name",
    "phone": "phone_number",
    "address_line": "street_address",
    "city": "city",
    "state": "state",
    "postalcode": "postalcode",
    "country": "current_country",
    "date_of_birth": "date",
    "credit_limit": {
        "type": "random_int",
        "min": 0,
        "max": 1000,
        "multiplier": 10
    },
    "income": {
        "type": "random_int",
        "min": 0,
        "max": 10000,
        "multiplier": 10
    }
}
df = gc.get_dataframe('customers', craplabels)
```
## How data for a foreign key column is generated
`goodcrap` will detect whether a column in a table is related to another table, and will fill that column with random selections of the related column. To demonstrate, run this command:
`goodcrap --size 1000 --seed 3 --database_config examples\mysql_config --template_database customers_orders`
This command will use the database settings in `examples\mysql_config.json` to generate the template database `customers_orders` and fill the tables with 1000 rows each. There are two tables here: `customers` and `orders`, and they are related: `orders` has a column `customer_number` that is tied to `customers` via the foreign key `customers.customer_number`. Therefore, that column is filled with random selections from `customers.customer_number`.
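Conceptually, the foreign key handling amounts to sampling the child column from the parent table's key values. The following standalone sketch illustrates the idea (a simplified illustration using only the standard library, not `goodcrap`'s actual implementation; `fill_foreign_key` is a hypothetical helper):

```python
import random

def fill_foreign_key(parent_keys, size, seed):
    """Fill a child column by sampling (with replacement) from the parent table's keys."""
    rng = random.Random(seed)
    return [rng.choice(parent_keys) for _ in range(size)]

# five parent keys, ten child rows referencing them
customer_numbers = [f"C{i:04d}" for i in range(1, 6)]
order_customer_numbers = fill_foreign_key(customer_numbers, size=10, seed=3)

# referential integrity holds: every child value exists in the parent
assert set(order_customer_numbers) <= set(customer_numbers)
```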
For a quick demo of generating the `orders` table: assuming you have set up the `customers_orders` database and filled it with some data, the following code will generate an `orders` `DataFrame` using column values from the `customers` table:
```python
from goodcrap import GoodCrap

a = GoodCrap(seed=3, size=1000, template_table='orders', database_config="../examples/mysql_config")
df = a.get_dataframe()
```
## `goodcrap` with `Mage`
`goodcrap` can be used in `Mage` as a `Data Loader`. You can generate as much data as you want from multiple random `goodcrap` Data Loaders into your pipelines for testing, such as for testing the convergence of data into a data warehouse. You can also schedule the generation of data from `goodcrap` sources to simulate time series data traffic. A typical test-case scenario here is to run an SQL query at the data destination while data is being continuously loaded.
Here is an example `goodcrap` source in `Mage`:
```python
if 'data_loader' not in globals():
    from mage_ai.data_preparation.decorators import data_loader
if 'test' not in globals():
    from mage_ai.data_preparation.decorators import test

from goodcrap import GoodCrap


@data_loader
def load_data(*args, **kwargs):
    gc = GoodCrap(size=10000, seed=123)
    craplabels = {
        "customer_number": "ssn",
        "first_name": "first_name",
        "last_name": "last_name",
        "phone": "phone_number",
        "address_line": "street_address",
        "city": "city",
        "state": "state",
        "postalcode": "postalcode",
        "country": "current_country",
        "date_of_birth": "date",
        "credit_limit": {
            "type": "random_int",
            "min": 0,
            "max": 1000,
            "multiplier": 10
        },
        "income": {
            "type": "random_int",
            "min": 0,
            "max": 10000,
            "multiplier": 10
        }
    }
    return gc.get_dataframe('customers', craplabels)


@test
def test_output(output, *args) -> None:
    """
    Template code for testing the output of the block.
    """
    assert output is not None, 'The output is undefined'
```
*Note:* If you plan to run a `Mage` pipeline multiple times, make sure it has no columns generated with the `faker.unique` provider: such values are only unique within a single run, whereas these columns should be universally unique across runs.
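The reason is that a fixed seed reproduces the same values on every run, so values that are unique within one run will collide with those from earlier runs. A minimal standard-library illustration of the effect (not using `faker` itself):

```python
import random

def generate_ids(seed, size):
    """Draw 'unique-looking' ids; the seed makes the sequence deterministic."""
    rng = random.Random(seed)
    return [rng.randint(0, 10**9) for _ in range(size)]

run1 = generate_ids(seed=3, size=5)
run2 = generate_ids(seed=3, size=5)  # a second pipeline run with the same seed

# the two runs repeat the same values, so the combined column is not unique
assert run1 == run2
```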
## `goodcrap` generates `Mage` pipelines
`Mage` python files are generated using `Jinja` templates. Given that Mage will always be backwards compatible (according to communication with its authors), files and folders generated by `goodcrap` will always be valid. Here is an example command to generate pipelines for each of the tables in the template database `customers_orders`:
`goodcrap --size 1000 --seed 3 --database_config examples\mysql_config --template_database customers_orders --mage_pipeline`
Note that `goodcrap` currently generates `Mage` projects only if the database configuration is defined.
## Writing to Snowflake
`goodcrap` supports writing your random table to Snowflake using two methods:
- row-by-row, which can be done by setting Snowflake as your database in the database configuration file
- bulk upload of the generated `pandas` `DataFrame`, which is enabled by the command line argument `--bulk_upload`
Bulk upload is generally preferred, as it is much faster than row-by-row inserts. Below is an example configuration settings file:
```json
{
    "db_type": "snowflake",
    "snowflake_database": "GOODCRAP",
    "snowflake_warehouse": "WH",
    "snowflake_user": "user",
    "snowflake_password": "password",
    "snowflake_account": "account",
    "snowflake_schema": "public",
    "snowflake_role": "role"
}
```
Suppose you want to create the `orders` table in Snowflake and fill it with 1,000,000 rows, and also get a few sample queries to try. You can get all that with the following command:
```
goodcrap --size 1000000 --seed 12 --database_config config --template_table orders --bulk_upload --queries 100
```
## Data warehouses
Some dimensions in data warehouses need to be filled as part of the testing exercise, but should not be filled with random data. These are the *conformed* dimensions with rigid data, such as the Date, Countries and Cities dimensions. `goodcrap` will be able to fill these dimensions using the `DimensionFiller` class, which provides several options for featurization. These tables are filled before any other table is populated.
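For example, a conformed Date dimension is deterministic rather than random: it is simply an enumeration of calendar days with derived attributes. A minimal sketch of how such a dimension could be built (illustrative only; `date_dimension` is a hypothetical helper, and `DimensionFiller`'s actual interface may differ):

```python
from datetime import date, timedelta

def date_dimension(start, end):
    """Enumerate one row per day between start and end (inclusive)."""
    rows = []
    d = start
    while d <= end:
        rows.append({
            "date_key": d.isoformat(),
            "year": d.year,
            "month": d.month,
            "day": d.day,
            "weekday": d.strftime("%A"),
        })
        d += timedelta(days=1)
    return rows

dim = date_dimension(date(2024, 1, 1), date(2024, 1, 7))  # one week of rows
```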
## Guessing the `crap_labels.json` settings
*in progress*
## Learning the values from a data sample
*in progress*
## Contributing to `goodcrap`
Contributions are much appreciated. See [CONTRIBUTING.rst](CONTRIBUTING.rst).
## License
`goodcrap` is licensed under the [GPL3](LICENSE) license.