gcp-pal

Name: gcp-pal
Version: 1.0.41
Home page: https://github.com/VitaminB16/gcp-pal
Summary: Set of utilities for interacting with Google Cloud Platform
Upload time: 2024-09-27 09:30:06
Author: VitaminB16
Requires Python: >=3.10
License: MIT
Keywords: gcp, google cloud, google cloud python, gcp api, gcp python api
            <!--
TODO:
[x] Firestore Module
[x] PubSub Module
[x] Request Module
[x] BigQuery Module
[x] Storage Module
[x] Parquet Module
[x] Schema Module
[x] Cloud Functions Module
[x] Docker Module
[x] Cloud Run Module
[x] Logging Module
[x] Secret Manager Module
[x] Cloud Scheduler Module
[x] Add examples
[x] Publish to PyPI
[x] Tests
[x] Project Module
[x] Dataplex Module
[x] Artifact Registry Module
[ ] Datastore Module
...
-->

# GCP Pal Library

[![Downloads](https://static.pepy.tech/badge/gcp-pal)](https://pepy.tech/project/gcp-pal)

The `gcp-pal` library provides a set of utilities for interacting with Google Cloud Platform (GCP) services, streamlining the process of implementing GCP functionalities within your Python applications.

The utilities are designed to work with the `google-cloud` Python libraries, providing a more user-friendly and intuitive interface for common tasks.
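
For example, listing Cloud Storage buckets with the raw `google-cloud-storage` client versus the `gcp_pal.Storage` wrapper (a rough sketch, assuming default credentials and a default project are configured):

```python
from google.cloud import storage

from gcp_pal import Storage

# Raw google-cloud-storage client: list bucket names in the default project
buckets = [bucket.name for bucket in storage.Client().list_buckets()]

# Equivalent gcp-pal call
buckets = Storage().ls()
```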

- Source code: **https://github.com/VitaminB16/gcp-pal**
- PyPI: **https://pypi.org/project/gcp-pal/**

---

## Table of Contents

| Module                                     | Python Class               |
| ------------------------------------------ | -------------------------- |
| [Firestore](#firestore-module)             | `gcp_pal.Firestore`        |
| [BigQuery](#bigquery-module)               | `gcp_pal.BigQuery`         |
| [Storage](#storage-module)                 | `gcp_pal.Storage`          |
| [Cloud Functions](#cloud-functions-module) | `gcp_pal.CloudFunctions`   |
| [Cloud Run](#cloud-run-module)             | `gcp_pal.CloudRun`         |
| [Docker](#docker-module)                   | `gcp_pal.Docker`           |
| [Logging](#logging-module)                 | `gcp_pal.Logging`          |
| [Secret Manager](#secret-manager-module)   | `gcp_pal.SecretManager`    |
| [Cloud Scheduler](#cloud-scheduler-module) | `gcp_pal.CloudScheduler`   |
| [Project](#project-module)                 | `gcp_pal.Project`          |
| [Dataplex](#dataplex-module)               | `gcp_pal.Dataplex`         |
| [Artifact Registry](#artifact-registry)    | `gcp_pal.ArtifactRegistry` |
| [PubSub](#pubsub-module)                   | `gcp_pal.PubSub`           |
| [Request](#request-module)                 | `gcp_pal.Request`          |
| [Schema](#schema-module)                   | `gcp_pal.Schema`           |
| [Parquet](#parquet-module)                 | `gcp_pal.Parquet`          |



---

## Installation

The package is available on PyPI as `gcp-pal`. To install with `pip`:

```bash
pip install gcp-pal
```

The library has module-specific dependencies. These can be installed via `pip install gcp-pal[ModuleName]`, e.g.:

```bash
pip install gcp-pal[BigQuery]
# Installing 'google-cloud-bigquery'
pip install gcp-pal[CloudRun]
# Installing 'google-cloud-run' and 'docker'
```

To install all optional dependencies:

```bash
pip install gcp-pal[all]
```

The modules are also set up to notify the user if any required libraries are missing. For example, when attempting to use the `Firestore` module:

```python
from gcp_pal import Firestore
Firestore()
# ImportError: Module 'Firestore' requires 'google.cloud.firestore' (PyPI: 'google-cloud-firestore') to be installed.
```

This lets the user know that the `google-cloud-firestore` package is required to use the `Firestore` module.

---

## Configuration

Before you start using the `gcp-pal` library with Firestore or any other GCP service, make sure your GCP credentials are set up properly and that you have the necessary permissions for the services you want to use:

```bash
gcloud auth application-default login
```

And specify the project ID to be used as the default for all API requests:

```bash
gcloud config set project PROJECT_ID
```

You can also specify defaults such as the project ID and location using environment variables. The reserved variables are `GCP_PROJECT_ID` and `GCP_LOCATION`:

```bash
export GCP_PROJECT_ID=project-id
export GCP_LOCATION=us-central1
```

The order of precedence is as follows:
```
1. Keyword arguments (e.g. BigQuery(project="project-id"))
2. Environmental variables (e.g. export GCP_PROJECT_ID=project-id)
3. Default project set in gcloud (e.g. gcloud config set project project-id)
4. None
```
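
For example, a minimal sketch of how this precedence plays out (the project names are hypothetical):

```python
import os

from gcp_pal import BigQuery

os.environ["GCP_PROJECT_ID"] = "env-project"

BigQuery(project="kwarg-project")  # uses "kwarg-project": the keyword argument wins
BigQuery()                         # falls back to "env-project" from the environment variable
```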

---

## Firestore Module

The Firestore module in the `gcp-pal` library allows you to perform read and write operations on Firestore documents and collections.

### Initializing Firestore

First, import the Firestore class from the `gcp_pal` module:

```python
from gcp_pal import Firestore
```

### Writing Data to Firestore

To write data to a Firestore document, create a dictionary with your data, specify the path to your document, and use the `write` method:

```python
data = {
    "field1": "value1",
    "field2": "value2"
}

path = "collection/document"
Firestore(path).write(data)
```

### Reading Data from Firestore

To read a single document from Firestore, specify the document's path and use the `read` method:

```python
path = "collection/document"
document = Firestore(path).read()
print(document)
# Output: {'field1': 'value1', 'field2': 'value2'}
```

### Reading All Documents in a Collection

To read all documents within a specific collection, specify the collection's path and use the `read` method:

```python
path = "collection"
documents = Firestore(path).read()
print(documents)
# Output: {'document': {'field1': 'value1', 'field2': 'value2'}}
```

### Working with Pandas DataFrames

The Firestore module also supports writing and reading Pandas DataFrames, preserving their structure and data types:

```python
import pandas as pd

# Example DataFrame
df = pd.DataFrame({
    "field1": ["value1"],
    "field2": ["value2"]
})

path = "collection/document"
Firestore(path).write(df)

read_df = Firestore(path).read()
print(read_df)
# Output:
#    field1 field2
# 0  value1 value2
```

### List the Firestore documents and collections

To list all documents and collections within a Firestore database, use the `ls` method (similar to `ls` in bash):

```python
colls = Firestore().ls()
print(colls)
# Output: ['collection']
docs = Firestore("collection").ls()
print(docs)
# Output: ['document1', 'document2']
```

---


## BigQuery Module

The BigQuery module in the `gcp-pal` library allows you to perform read and write operations on BigQuery datasets and tables.

### Initializing BigQuery

Import the BigQuery class from the `gcp_pal` module:

```python
from gcp_pal import BigQuery
```

### Listing objects

To list all objects (datasets and tables) within a BigQuery project, use the `ls` method (similar to `ls` in bash):

```python
datasets = BigQuery().ls()
print(datasets)
# Output: ['dataset1', 'dataset2']
tables = BigQuery(dataset="dataset1").ls()
print(tables)
# Output: ['table1', 'table2']
```

### Creating objects

To create an object (dataset or table) within a BigQuery project, initialize the BigQuery class with the object's path and use the `create` method:

```python
BigQuery(dataset="new-dataset").create()
# Output: Dataset "new-dataset" created
BigQuery("new-dataset2.new-table").create(schema=schema) 
# Output: Dataset "new-dataset2" created, table "new-dataset2.new-table" created
```

To create a table from a Pandas DataFrame, pass the DataFrame to the `create` method:

```python
import pandas as pd

df = pd.DataFrame({
    "field1": ["value1"],
    "field2": ["value2"]
})
BigQuery("new-dataset3.new-table").create(data=df)
# Output: Dataset "new-dataset3" created, table "new-dataset3.new-table" created, data inserted
```

### Deleting objects

Deleting objects is similar to creating them, but you use the `delete` method instead:

```python
BigQuery(dataset="dataset").delete()
# Output: Dataset "dataset" and all its tables deleted
BigQuery("dataset.table").delete()
# Output: Table "dataset.table" deleted
```

### Querying data

To read data from a BigQuery table, use the `query` method:

```python
query = "SELECT * FROM dataset.table"
data = BigQuery().query(query)
print(data)
# Output: [{'field1': 'value1', 'field2': 'value2'}]
```

Alternatively, there is a simpler `read` method that reads data from a table with the given `columns`, `filters` and `limit`:

```python
data = BigQuery("dataset.table").read(
    columns=["field1"],
    filters=[("field1", "=", "value1")],
    limit=1,
    to_dataframe=True,
)
print(data)
# Output: pd.DataFrame({'field1': ['value1']})
```

By default, the `read` method returns a Pandas DataFrame, but you can also get the data as a list of dictionaries by setting the `to_dataframe` parameter to `False`.
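
For instance, to read the same table as a list of dictionaries (the exact output depends on the table contents):

```python
rows = BigQuery("dataset.table").read(to_dataframe=False)
print(rows)
# Output: [{'field1': 'value1', 'field2': 'value2'}]
```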

### Inserting data

To insert data into a BigQuery table, use the `insert` method:

```python
data = {
    "field1": "value1",
    "field2": "value2"
}
BigQuery("dataset.table").insert(data)
# Output: Data inserted
```

### External tables

One can also create BigQuery external tables by specifying the file path:

```python
file_path = "gs://bucket/file.parquet"
BigQuery("dataset.external_table").create(file_path)
# Output: Dataset "dataset" created, external table "dataset.external_table" created
```

The allowed file formats are CSV, JSON, Avro, Parquet (single and partitioned), ORC.

---

## Storage Module

The Storage module in the `gcp-pal` library allows you to perform read and write operations on Google Cloud Storage buckets and objects.

### Initializing Storage

Import the Storage class from the `gcp_pal` module:

```python
from gcp_pal import Storage
```

### Listing objects

Similar to the other modules, listing objects in a bucket is done using the `ls` method:

```python
buckets = Storage().ls()
print(buckets)
# Output: ['bucket1', 'bucket2']
objects = Storage("bucket1").ls()
print(objects)
# Output: ['object1', 'object2']
```

### Creating buckets

To create a bucket, use the `create` method:

```python
Storage("new-bucket").create()
# Output: Bucket "new-bucket" created
```

### Deleting objects

Deleting objects is similar to creating them, but you use the `delete` method instead:

```python
Storage("bucket").delete()
# Output: Bucket "bucket" and all its objects deleted
Storage("bucket/object").delete()
# Output: Object "object" in bucket "bucket" deleted
```

### Uploading and downloading objects

To upload an object to a bucket, use the `upload` method:

```python
Storage("bucket/uploaded_file.txt").upload("local_file.txt")
# Output: File "local_file.txt" uploaded to "bucket/uploaded_file.txt"
```

To download an object from a bucket, use the `download` method:

```python
Storage("bucket/uploaded_file.txt").download("downloaded_file.txt")
# Output: File "bucket/uploaded_file.txt" downloaded to "downloaded_file.txt"
```

---


## Cloud Functions Module

The Cloud Functions module in the `gcp-pal` library allows you to deploy and manage Cloud Functions.

### Initializing Cloud Functions

Import the `CloudFunctions` class from the `gcp_pal` module:

```python
from gcp_pal import CloudFunctions
```

### Deploying Cloud Functions

To deploy a Cloud Function, specify the function's name, the source codebase, the entry point, and any other parameters to be passed to `BuildConfig`, `ServiceConfig` and `Function` (see [docs](https://cloud.google.com/python/docs/reference/cloudfunctions/latest/google.cloud.functions_v2.types)):

```python
CloudFunctions("function-name").deploy(
    path="path/to/function_codebase",
    entry_point="main",
    environment=2,
)
```

Deploying a Cloud Function from a local source depends on the `gcp_pal.Storage` module. By default, the source codebase is uploaded to the `gcf-v2-sources-{PROJECT_NUMBER}-{REGION}` bucket and is deployed from there. An alternative bucket can be specified via the `source_bucket` parameter:

```python
CloudFunctions("function-name").deploy(
    path="path/to/function_codebase",
    entry_point="main",
    environment=2,
    source_bucket="bucket-name",
)
```

### Listing Cloud Functions

To list all Cloud Functions within a project, use the `ls` method:

```python
functions = CloudFunctions().ls()
print(functions)
# Output: ['function1', 'function2']
```

### Deleting Cloud Functions

To delete a Cloud Function, use the `delete` method:

```python
CloudFunctions("function-name").delete()
# Output: Cloud Function "function-name" deleted
```

### Invoking Cloud Functions

To invoke a Cloud Function, use the `invoke` (or `call`) method:

```python
response = CloudFunctions("function-name").invoke({"key": "value"})
print(response)
# Output: {'output_key': 'output_value'}
```

### Getting Cloud Function details

To get the details of a Cloud Function, use the `get` method:

```python
details = CloudFunctions("function-name").get()
print(details)
# Output: {'name': 'projects/project-id/locations/region/functions/function-name', 
#          'build_config': {...}, 'service_config': {...}, 'state': {...}, ... }
```

### Using service accounts

Service account email can be specified either within the constructor or via the `service_account` parameter:

```python
CloudFunctions("function-name", service_account="account@email.com").deploy(**kwargs)
# or
CloudFunctions("function-name").deploy(service_account="account@email.com", **kwargs)
```

---

## Cloud Run Module

The Cloud Run module in the `gcp-pal` library allows you to deploy and manage Cloud Run services.

### Initializing Cloud Run

Import the `CloudRun` class from the `gcp_pal` module:

```python
from gcp_pal import CloudRun
```

### Deploying Cloud Run services

```python
CloudRun("test-app").deploy(path="samples/cloud_run")
# Output: 
# - Docker image "test-app" built based on "samples/cloud_run" codebase and "samples/cloud_run/Dockerfile".
# - Docker image "test-app" pushed to Google Container Registry as "gcr.io/{PROJECT_ID}/test-app:random_tag".
# - Cloud Run service "test-app" deployed from "gcr.io/{PROJECT_ID}/test-app:random_tag".
```

The default tag is a random string but can be specified via the `image_tag` parameter:

```python
CloudRun("test-app").deploy(path="samples/cloud_run", image_tag="5fbd72c")
# Output: Cloud Run service deployed
```

### Listing Cloud Run services

To list all Cloud Run services within a project, use the `ls` method:

```python
services = CloudRun().ls()
print(services)
# Output: ['service1', 'service2']
```

To list jobs instead, set the `job` parameter to `True`:

```python
jobs = CloudRun(job=True).ls()
print(jobs)
# Output: ['job1', 'job2']
```

### Deleting Cloud Run services

To delete a Cloud Run service, use the `delete` method:

```python
CloudRun("service-name").delete()
# Output: Cloud Run service "service-name" deleted
```

Similarly, to delete a job, set the `job` parameter to `True`:

```python
CloudRun("job-name", job=True).delete()
```

### Invoking Cloud Run services

To invoke a Cloud Run service, use the `invoke`/`call` method:

```python
response = CloudRun("service-name").invoke({"key": "value"})
print(response)
# Output: {'output_key': 'output_value'}
```

### Getting Cloud Run service details

To get the details of a Cloud Run service, use the `get` method:

```python
details = CloudRun("service-name").get()
print(details)
# Output: ...
```

To get the status of a Cloud Run service, use the `status`/`state` method:

```python
service_status = CloudRun("service-name").status()
print(service_status)
# Output: Active
job_status = CloudRun("job-name", job=True).status()
print(job_status)
# Output: Active
```

### Using service accounts

Service account email can be specified either within the constructor or via the `service_account` parameter:

```python
CloudRun("run-name", service_account="account@email.com").deploy(**kwargs)
# or
CloudRun("run-name").deploy(service_account="account@email.com", **kwargs)
```

---

## Docker Module

The Docker module in the `gcp-pal` library allows you to build and push Docker images to Google Container Registry.

### Initializing Docker

Import the Docker class from the `gcp_pal` module:

```python
from gcp_pal import Docker
```

### Building Docker images

```python
Docker("image-name").build(path="path/to/context", dockerfile="Dockerfile")
# Output: Docker image "image-name:latest" built based on "path/to/context" codebase and "path/to/context/Dockerfile".
```

The default `tag` is `"latest"` but can be specified via the `tag` parameter:

```python
Docker("image-name", tag="5fbd72c").build(path="path/to/context", dockerfile="Dockerfile")
# Output: Docker image "image-name:5fbd72c" built based on "path/to/context" codebase and "path/to/context/Dockerfile".
```

### Pushing Docker images

```python
Docker("image-name").push()
# Output: Docker image "image-name" pushed to Google Container Registry.
```

The default destination is `"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}"` but can be specified via the `destination` parameter:

```python
Docker("image-name").push(destination="gcr.io/my-project/image-name:5fbd72c")
# Output: Docker image "image-name" pushed to "gcr.io/my-project/image-name:5fbd72c".
```

---


## Logging Module

The Logging module in the `gcp-pal` library allows you to access and manage logs from Google Cloud Logging.

### Initializing Logging

Import the Logging class from the `gcp_pal` module:

```python
from gcp_pal import Logging
```

### Listing logs

To list all logs within a project, use the `ls` method:

```python
logs = Logging().ls(limit=2)
for log in logs:
    print(log)
# Output: LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}
```

Each entry is a `LogEntry` object with the following attributes: `project`, `log_name`, `resource`, `severity`, `message`, `timestamp`, `time_zone`, `timestamp_str`.

The `message` attribute is the main payload of the log entry.
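
For example, a minimal sketch of accessing these attributes (assuming they are exposed as plain attributes on each `LogEntry`):

```python
for log in Logging().ls(limit=2):
    # Print the severity and the main message payload of each entry
    print(log.severity, log.message)
```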

### Filtering logs

To filter logs based on a query, use the `filter` parameter of the `ls` method:

```python
logs = Logging().ls(filter="severity=ERROR")
# Output: [LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}, ...]
```

Some common filters are also supported natively: `severity` (str), `time_start` (str), `time_end` (str), `time_range` (int: hours). For example, the following are equivalent:

```python
# Time now: 2024-04-16 17:30:04.308 UTC
logs = Logging().ls(filter="severity=ERROR AND time_start=2024-04-16T16:30:04.308Z AND time_end=2024-04-16T17:30:04.308Z")
logs = Logging().ls(severity="ERROR", time_start="2024-04-16T16:30:04.308Z", time_end="2024-04-16T17:30:04.308Z")
logs = Logging().ls(severity="ERROR", time_range=1)
```

### Streaming logs

To stream logs in real-time, use the `stream` method:

```python
Logging().stream()
# LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}
# LogEntry - [2024-04-16 17:30:05.308 UTC] {Message payload}
# ...
```

---

## Secret Manager Module

The Secret Manager module in the `gcp-pal` library allows you to access and manage secrets from Google Cloud Secret Manager.

### Initializing Secret Manager

Import the SecretManager class from the `gcp_pal` module:

```python
from gcp_pal import SecretManager
```

### Creating secrets

To create a secret, specify the secret's name and value:

```python
SecretManager("secret1").create("value1", labels={"env": "dev"})
# Output: Secret 'secret1' created
```


### Listing secrets

To list all secrets within a project, use the `ls` method:

```python
secrets = SecretManager().ls()
print(secrets)
# Output: ['secret1', 'secret2']
```

The `ls` method also supports filtering secrets based on `filter` or `label` parameters:

```python
secrets = SecretManager().ls(filter="name:secret1")
print(secrets)
# Output: ['secret1']
secrets = SecretManager().ls(label="env:*")
print(secrets)
# Output: ['secret1', 'secret2']
```

### Accessing secrets

To access the value of a secret, use the `value` method:

```python
value = SecretManager("secret1").value()
print(value)
# Output: 'value1'
```

### Deleting secrets

To delete a secret, use the `delete` method:

```python
SecretManager("secret1").delete()
# Output: Secret 'secret1' deleted
```

---

## Cloud Scheduler Module

The Cloud Scheduler module in the `gcp-pal` library allows you to create and manage Cloud Scheduler jobs.

### Initializing Cloud Scheduler

Import the CloudScheduler class from the `gcp_pal` module:

```python
from gcp_pal import CloudScheduler
```

### Creating Cloud Scheduler jobs

To create a Cloud Scheduler job, specify the job's name in the constructor, and use the `create` method to set the schedule and target:

```python
CloudScheduler("job-name").create(
    schedule="* * * * *",
    time_zone="UTC",
    target="https://example.com/api",
    payload={"key": "value"},
)
# Output: Cloud Scheduler job "job-name" created with HTTP target "https://example.com/api"
```

If the `target` is not an HTTP endpoint, it will be treated as a PubSub topic:

```python
CloudScheduler("job-name").create(
    schedule="* * * * *",
    time_zone="UTC",
    target="pubsub-topic-name",
    payload={"key": "value"},
)
# Output: Cloud Scheduler job "job-name" created with PubSub target "pubsub-topic-name"
```

Additionally, `service_account` can be specified to add the OAuth and OIDC tokens to the request:

```python
CloudScheduler("job-name").create(
    schedule="* * * * *",
    time_zone="UTC",
    target="https://example.com/api",
    payload={"key": "value"},
    service_account="PROJECT@PROJECT.iam.gserviceaccount.com",
)
# Output: Cloud Scheduler job "job-name" created with HTTP target "https://example.com/api" and OAuth+OIDC tokens
```

### Listing Cloud Scheduler jobs

To list all Cloud Scheduler jobs within a project, use the `ls` method:

```python
jobs = CloudScheduler().ls()
print(jobs)
# Output: ['job1', 'job2']
```

### Deleting Cloud Scheduler jobs

To delete a Cloud Scheduler job, use the `delete` method:

```python
CloudScheduler("job-name").delete()
# Output: Cloud Scheduler job "job-name" deleted
```

### Managing Cloud Scheduler jobs

To pause or resume a Cloud Scheduler job, use the `pause` or `resume` methods:

```python
CloudScheduler("job-name").pause()
# Output: Cloud Scheduler job "job-name" paused
CloudScheduler("job-name").resume()
# Output: Cloud Scheduler job "job-name" resumed
```

To run a Cloud Scheduler job immediately, use the `run` method:

```python
CloudScheduler("job-name").run()
# Output: Cloud Scheduler job "job-name" run
```

If the job is paused, it will be resumed before running. To prevent this, set the `force` parameter to `False`:

```python
CloudScheduler("job-name").run(force=False)
# Output: Cloud Scheduler job "job-name" not run if it is paused
```

### Using service accounts

Service account email can be specified either within the constructor or via the `service_account` parameter:

```python
CloudScheduler("job-name", service_account="account@email.com").create(**kwargs)
# or
CloudScheduler("job-name").create(service_account="account@email.com", **kwargs)
```


---

## Project Module

The Project module in the `gcp-pal` library allows you to access and manage Google Cloud projects.

### Initializing Project

Import the Project class from the `gcp_pal` module:

```python
from gcp_pal import Project
```

### Listing projects

To list all projects available to the authenticated user, use the `ls` method:

```python
projects = Project().ls()
print(projects)
# Output: ['project1', 'project2']
```

### Creating projects

To create a new project, use the `create` method:

```python
Project("new-project").create()
# Output: Project "new-project" created
```

### Deleting projects

To delete a project, use the `delete` method:

```python
Project("project-name").delete()
# Output: Project "project-name" deleted
```

Google Cloud only fully deletes the project after 30 days. During this period, the project can be restored using the `undelete` method:

```python
Project("project-name").undelete()
# Output: Project "project-name" undeleted
```

### Getting project details

To get the details of a project, use the `get` method:

```python
details = Project("project-name").get()
print(details)
# Output: {'name': 'projects/project-id', 'project_id': 'project-id', ...}
```

To obtain the project number, use the `number` method:

```python
project_number = Project("project-name").number()
print(project_number)
# Output: "1234567890"
```


---

## Dataplex Module

The Dataplex module in the `gcp-pal` library allows you to interact with Dataplex services.

### Initializing Dataplex

Import the Dataplex class from the `gcp_pal` module:

```python
from gcp_pal import Dataplex
```

### Listing Dataplex objects

The Dataplex module supports listing all lakes, zones, and assets within a Dataplex instance:

```python
lakes = Dataplex().ls()
print(lakes)
# Output: ['lake1', 'lake2']
zones = Dataplex("lake1").ls()
print(zones)
# Output: ['zone1', 'zone2']
assets = Dataplex("lake1/zone1").ls()
print(assets)
# Output: ['asset1', 'asset2']
```

### Creating Dataplex objects

To create a lake, zone, or asset within a Dataplex instance, use the `create_lake`, `create_zone`, and `create_asset` methods.

To create a lake:

```python
Dataplex("lake1").create_lake()
# Output: Lake "lake1" created
```

To create a zone (zone type and location type are required):

```python
Dataplex("lake1/zone1").create_zone(zone_type="raw", location_type="single-region")
# Output: Zone "zone1" created in Lake "lake1"
```

To create an asset (asset source and asset type are required):

```python
Dataplex("lake1/zone1").create_asset(asset_source="dataset_name", asset_type="bigquery")
# Output: Asset "asset1" created in Zone "zone1" of Lake "lake1"
```

### Deleting Dataplex objects

Deleting objects can be done using a single `delete` method:

```python
Dataplex("lake1/zone1/asset1").delete()
# Output: Asset "asset1" deleted
Dataplax("lake1/zone1").delete()
# Output: Zone "zone1" and all its assets deleted
Dataplex("lake1").delete()
# Output: Lake "lake1" and all its zones and assets deleted
```


---


## Artifact Registry

The Artifact Registry module in the `gcp-pal` library allows you to interact with Artifact Registry services.

### Initializing Artifact Registry

Import the ArtifactRegistry class from the `gcp_pal` module:

```python
from gcp_pal import ArtifactRegistry
```

### Listing Artifact Registry objects

The objects within the Artifact Registry module follow the hierarchy: repositories > packages > versions > tags.

To list all repositories within a project, use the `ls` method:

```python
repositories = ArtifactRegistry().ls()
print(repositories)
# Output: ['repo1', 'repo2']
```

To list all packages (or "images") within a repository, use the `ls` method with the repository name:

```python
images = ArtifactRegistry("repo1").ls()
print(images)
# Output: ['image1', 'image2']
```

To list all versions of a package, use the `ls` method with the repository and package names:

```python
versions = ArtifactRegistry("repo1/image1").ls()
print(versions)
# Output: ['repo1/image1/sha256:version1', 'repo1/image1/sha256:version2']
```

To list all tags of a version, use the `ls` method with the repository, package, and version names:

```python
tags = ArtifactRegistry("repo1/image1/sha256:version1").ls()
print(tags)
# Output: ['repo1/image1/tag1', 'repo1/image1/tag2']
```

### Creating Artifact Registry objects

To create a repository, use the `create_repository` method with the repository name:

```python
ArtifactRegistry("repo1").create_repository()
# Output: Repository "repo1" created
```

Some additional parameters can be specified within the method, such as the format (`"docker"` or `"maven"`) and the mode (`"standard"`, `"remote"` or `"virtual"`), as sketched below.
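
A minimal sketch (the repository name is hypothetical, and the keyword names `format` and `mode` are assumed to match the parameters above):

```python
ArtifactRegistry("docker-repo").create_repository(format="docker", mode="standard")
# Output: Repository "docker-repo" created
```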

To create a tag, use the `create_tag` method with the repository, package, version, and tag names:

```python
ArtifactRegistry("repo1/image1/sha256:version1").create_tag("tag1")
# Output: Tag "tag1" created for version "version1" of package "image1" in repository "repo1"
```

### Deleting Artifact Registry objects

Deleting objects can be done using a single `delete` method:

```python
ArtifactRegistry("repo1/image1:tag1").delete()
# Output: Tag "tag1" deleted for package "image1" in repository "repo1"
ArtifactRegistry("repo1/image1/sha256:version1").delete()
# Output: Version "version1" deleted for package "image1" in repository "repo1"
ArtifactRegistry("repo1/image1").delete()
# Output: Package "image1" deleted in repository "repo1"
ArtifactRegistry("repo1").delete()
# Output: Repository "repo1" deleted
```

---

## PubSub Module

The PubSub module in the `gcp-pal` library allows you to publish and subscribe to PubSub topics.

### Initializing PubSub

First, import the PubSub class from the `gcp_pal` module:

```python
from gcp_pal import PubSub
```

The `PubSub` class prefers to take the `path` argument in the format `project/topic/subscription`:

```python
PubSub("my-project/my-topic/my-subscription")
```

Alternatively, you can specify the project and topic/subscription separately:

```python
PubSub(project="my-project", topic="my-topic", subscription="my-subscription")
```

### Listing objects

To list all topics within a project or all subscriptions within a topic, use the `ls` method:

```python
topics = PubSub("my-project").ls()
# Output: ['topic1', 'topic2']
subscriptions = PubSub("my-project/topic1").ls()
# Output: ['subscription1', 'subscription2']
```

Or to list all subscriptions within a project:

```python
subscriptions = PubSub("my-project").ls_subscriptions()
# Output: ['subscription1', 'subscription2', ...]
```

### Creating objects

To create a PubSub topic, use the `create` method:

```python
PubSub("my-project/new-topic").create()
# Output: PubSub topic "new-topic" created
```

To create a PubSub subscription, specify the topic (in the path or via the `topic` parameter) and use the `create` method:

```python
PubSub("my-project/my-topic/new-subscription").create()
```

### Deleting objects

To delete a PubSub topic or subscription, use the `delete` method:

```python
PubSub("my-project/topic/subscription").delete()
# Output: PubSub subscription "subscription" deleted
PubSub("my-project/topic").delete()
# Output: PubSub topic "topic" deleted
```

To delete a subscription without specifying the topic, use the `subscription` parameter:

```python
PubSub(subscription="my-project/subscription").delete()
# Output: PubSub subscription "subscription" deleted
```

### Publishing Messages to a Topic

To publish a message to a PubSub topic, specify the topic's name and the message you want to publish:

```python
topic = "topic-name"
message = "Hello, PubSub!"
PubSub(topic).publish(message)
```

---

## Request Module

The Request module in the `gcp-pal` library allows you to make authorized HTTP requests.

### Initializing Request

Import the Request class from the `gcp_pal` module:

```python
from gcp_pal import Request
```

### Making Authorized Get/Post/Put Requests

To make an authorized request, specify the URL you want to access and use the relevant method:

```python
url = "https://example.com/api"

get_response = Request(url).get()
print(get_response)
# Output: <Response [200]>
post_response = Request(url).post(data={"key": "value"})
print(post_response)
# Output: <Response [201]>
put_response = Request(url).put(data={"key": "value"})
print(put_response)
# Output: <Response [200]>
```

### Using service accounts

To make requests on behalf of a service account, specify the service account email in the constructor:

```python
Request(url, service_account="account@email.com").get()
```

---

## Schema Module

The Schema module is not strictly GCP-related, but it is a useful utility. It allows one to translate schemas between different formats, such as Python, PyArrow, BigQuery, and Pandas.

### Initializing Schema

Import the `Schema` class from the `gcp_pal` module:

```python
from gcp_pal.schema import Schema
```

### Translating schemas

To translate a schema from one format to another, use the respective methods:

```python
str_schema = {
    "a": "int",
    "b": "str",
    "c": "float",
    "d": {
        "d1": "datetime",
        "d2": "timestamp",
    },
}
python_schema = Schema(str_schema).str()
# {
#    "a": int,
#    "b": str,
#    "c": float,
#    "d": {
#        "d1": datetime,
#        "d2": datetime,
#    },
# }
pyarrow_schema = Schema(str_schema).pyarrow()
# pa.schema(
#    [
#        pa.field("a", pa.int64()),
#        pa.field("b", pa.string()),
#        pa.field("c", pa.float64()),
#        pa.field("d", pa.struct([
#            pa.field("d1", pa.timestamp("ns")),
#            pa.field("d2", pa.timestamp("ns")),
#        ])),
#    ]
# )
bigquery_schema = Schema(str_schema).bigquery()
# [
#     bigquery.SchemaField("a", "INTEGER"),
#     bigquery.SchemaField("b", "STRING"),
#     bigquery.SchemaField("c", "FLOAT"),
#     bigquery.SchemaField("d", "RECORD", fields=[
#        bigquery.SchemaField("d1", "DATETIME"),
#        bigquery.SchemaField("d2", "TIMESTAMP"),
#     ]),
# ]
pandas_schema = Schema(str_schema).pandas()
# {
#    "a": "int64",
#    "b": "object",
#    "c": "float64",
#    "d": {
#        "d1": "datetime64[ns]",
#        "d2": "datetime64[ns]",
#    },
# }
```

### Inferring schemas

To infer and translate a schema from a dictionary of data or a Pandas DataFrame, use the `is_data` parameter:

```python
import datetime

import pandas as pd

df = pd.DataFrame(
    {
        "a": [1, 2, 3],
        "b": ["a", "b", "c"],
        "c": [1.0, 2.0, 3.0],
        "date": [datetime.datetime.now() for _ in range(3)],
    }
)
inferred_schema = Schema(df, is_data=True).schema
# {
#   "a": "int",
#   "b": "str",
#   "c": "float",
#   "date": "datetime",
# }
pyarrow_schema = Schema(df, is_data=True).pyarrow()
# pa.schema(
#    [
#        pa.field("a", pa.int64()),
#        pa.field("b", pa.string()),
#        pa.field("c", pa.float64()),
#        pa.field("date", pa.timestamp("ns")),
#    ]
# )
```

---

## Parquet Module

The Parquet module in the `gcp-pal` library allows you to read and write Parquet files in Google Cloud Storage. The `gcp_pal.Storage` module uses this module to read and write Parquet files to and from Google Cloud Storage.

### Initializing Parquet

Import the Parquet class from the `gcp_pal` module:

```python
from gcp_pal import Parquet
```

### Reading Parquet files

To read a Parquet file from Google Cloud Storage, use the `read` method:

```python
data = Parquet("bucket/file.parquet").read()
print(data)
# Output: pd.DataFrame({'field1': ['value1'], 'field2': ['value2']})
```

### Writing Parquet files

To write a Pandas DataFrame to a Parquet file in Google Cloud Storage, use the `write` method:

```python
df = pd.DataFrame({
    "field1": ["value1"],
    "field2": ["value2"]
})
Parquet("bucket/file.parquet").write(df)
# Output: Parquet file "file.parquet" created in "bucket"
```

Partitioning can be specified via the `partition_cols` parameter:

```python
Parquet("bucket/file.parquet").write(df, partition_cols=["field1"])
# Output: Parquet file "file.parquet" created in "bucket" partitioned by "field1"
```


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/VitaminB16/gcp-pal",
    "name": "gcp-pal",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "gcp, google cloud, google cloud python, gcp api, gcp python api",
    "author": "VitaminB16",
    "author_email": "artemiy.nosov@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/64/ef/4b8d060f63472d7753589702e35194746bf1d1776c1d90259e626dca9778/gcp_pal-1.0.41.tar.gz",
    "platform": null,
    "description": "<!--\nTODO:\n[x] Firestore Module\n[x] PubSub Module\n[x] Request Module\n[x] BigQuery Module\n[x] Storage Module\n[x] Parquet Module\n[x] Schema Module\n[x] Cloud Functions Module\n[x] Docker Module\n[x] Cloud Run Module\n[x] Logging Module\n[x] Secret Manager Module\n[x] Cloud Scheduler Module\n[x] Add examples\n[x] Publish to PyPI\n[x] Tests\n[x] Project Module\n[x] Dataplex Module\n[x] Artifact Registry Module\n[ ] Datastore Module\n...\n-->\n\n# GCP Pal Library\n\n[![Downloads](https://static.pepy.tech/badge/gcp-pal)](https://pepy.tech/project/gcp-pal)\n\nThe `gcp-pal` library provides a set of utilities for interacting with Google Cloud Platform (GCP) services, streamlining the process of implementing GCP functionalities within your Python applications.\n\nThe utilities are designed to work with the `google-cloud` Python libraries, providing a more user-friendly and intuitive interface for common tasks.\n\n- Source code: **https://github.com/VitaminB16/gcp-pal**\n- PyPI: **https://pypi.org/project/gcp-pal/**\n\n---\n\n## Table of Contents\n\n| Module                                     | Python Class               |\n| ------------------------------------------ | -------------------------- |\n| [Firestore](#firestore-module)             | `gcp_pal.Firestore`        |\n| [BigQuery](#bigquery-module)               | `gcp_pal.BigQuery`         |\n| [Storage](#storage-module)                 | `gcp_pal.Storage`          |\n| [Cloud Functions](#cloud-functions-module) | `gcp_pal.CloudFunctions`   |\n| [Cloud Run](#cloud-run-module)             | `gcp_pal.CloudRun`         |\n| [Docker](#docker-module)                   | `gcp_pal.Docker`           |\n| [Logging](#logging-module)                 | `gcp_pal.Logging`          |\n| [Secret Manager](#secret-manager-module)   | `gcp_pal.SecretManager`    |\n| [Cloud Scheduler](#cloud-scheduler-module) | `gcp_pal.CloudScheduler`   |\n| [Project](#project-module)                 | `gcp_pal.Project`          |\n| [Dataplex](#dataplex-module)               | `gcp_pal.Dataplex`         |\n| [Artifact Registry](#artifact-registry)    | `gcp_pal.ArtifactRegistry` |\n| [PubSub](#pubsub-module)                   | `gcp_pal.PubSub`           |\n| [Request](#request-module)                 | `gcp_pal.Request`          |\n| [Schema](#schema-module)                   | `gcp_pal.Schema`           |\n| [Parquet](#parquet-module)                 | `gcp_pal.Parquet`          |\n\n\n\n---\n\n## Installation\n\nThe package is available on PyPI as `gcp-pal`. To install with `pip`:\n\n```bash\npip install gcp-pal\n```\n\nThe library has module-specific dependencies. These can be installed via `pip install gcp-pal[ModuleName]`, e.g.:\n\n```bash\npip install gcp-pal[BigQuery]\n# Installing 'google-cloud-bigquery'\npip install gcp-pal[CloudRun]\n# Installing 'google-cloud-run' and 'docker'\n```\n\nTo install all optional dependencies:\n\n```bash\npip install gcp-pal[all]\n```\n\nThe modules are also set up to notify the user if any required libraries are missing. 
For example, when attempting to use the `Firestore` module:\n\n```python\nfrom gcp_pal import Firestore\nFirestore()\n# ImportError: Module 'Firestore' requires 'google.cloud.firestore' (PyPI: 'google-cloud-firestore') to be installed.\n```\n\nWhich lets the user know that the `google-cloud-firestore` package is required to use the `Firestore` module.\n\n---\n\n## Configuration\n\nBefore you can start using the `gcp-pal` library with Firestore or any other GCP services, make sure you either have set up your GCP credentials properly or have the necessary permissions to access the services you want to use:\n\n```bash\ngcloud auth application-default login\n```\n\nAnd specify the project ID to be used as the default for all API requests:\n\n```bash\ngcloud config set project PROJECT_ID\n```\n\nYou can also specify the default variables such as project ID and location using environmental variables. The reserved variables are `GCP_PAL_PROJECT` and `GCP_PAL_PROJECT`:\n\n```bash\nexport GCP_PROJECT_ID=project-id\nexport GCP_LOCATION=us-central1\n```\n\nThe order of precendece is as follows:\n```\n1. Keyword arguments (e.g. BigQuery(project=\"project-id\"))\n2. Environmental variables (e.g. export GCP_PROJECT_ID=project-id)\n3. Default project set in gcloud (e.g. gcloud config set project project-id)\n4. None\n```\n\n---\n\n## Firestore Module\n\nThe Firestore module in the `gcp-pal` library allows you to perform read and write operations on Firestore documents and collections.\n\n### Initializing Firestore\n\nFirst, import the Firestore class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Firestore\n```\n\n### Writing Data to Firestore\n\nTo write data to a Firestore document, create a dictionary with your data, specify the path to your document, and use the `write` method:\n\n```python\ndata = {\n    \"field1\": \"value1\",\n    \"field2\": \"value2\"\n}\n\npath = \"collection/document\"\nFirestore(path).write(data)\n```\n\n### Reading Data from Firestore\n\nTo read a single document from Firestore, specify the document's path and use the `read` method:\n\n```python\npath = \"collection/document\"\ndocument = Firestore(path).read()\nprint(document)\n# Output: {'field1': 'value1', 'field2': 'value2'}\n```\n\n### Reading All Documents in a Collection\n\nTo read all documents within a specific collection, specify the collection's path and use the `read` method:\n\n```python\npath = \"collection\"\ndocuments = Firestore(path).read()\nprint(documents)\n# Output: {'document': {'field1': 'value1', 'field2': 'value2'}}\n```\n\n### Working with Pandas DataFrames\n\nThe Firestore module also supports writing and reading Pandas DataFrames, preserving their structure and data types:\n\n```python\nimport pandas as pd\n\n# Example DataFrame\ndf = pd.DataFrame({\n    \"field1\": [\"value1\"],\n    \"field2\": [\"value2\"]\n})\n\npath = \"collection/document\"\nFirestore(path).write(df)\n\nread_df = Firestore(path).read()\nprint(read_df)\n# Output:\n#    field1 field2\n# 0  value1 value2\n```\n\n### List the Firestore documents and collections\n\nTo list all documents and collections within a Firestore database, use the `ls` method similar to bash:\n\n```python\ncolls = Firestore().ls()\nprint(colls)\n# Output: ['collection']\ndocs = Firestore(\"collection\").ls()\nprint(docs)\n# Output: ['document1', 'document2']\n```\n\n---\n\n\n## BigQuery Module\n\nThe BigQuery module in the `gcp-pal` library allows you to perform read and write operations on BigQuery datasets and tables.\n\n### Initializing 
BigQuery\n\nImport the BigQuery class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import BigQuery\n```\n\n### Listing objects\n\nTo list all objects (datasets and tables) within a BigQuery project, use the `ls` method similar to bash:\n\n```python\ndatasets = BigQuery().ls()\nprint(datasets)\n# Output: ['dataset1', 'dataset2']\ntables = BigQuery(dataset=\"dataset1\").ls()\nprint(tables)\n# Output: ['table1', 'table2']\n```\n\n### Creating objects\n\nTo create an object (dataset or table) within a BigQuery project, initialize the BigQuery class with the object's path and use the `create` method:\n\n```python\nBigQuery(dataset=\"new-dataset\").create()\n# Output: Dataset \"new-dataset\" created\nBigQuery(\"new-dataset2.new-table\").create(schema=schema) \n# Output: Dataset \"new-dataset2\" created, table \"new-dataset2.new-table\" created\n```\n\nTo create a table from a Pandas DataFrame, pass the DataFrame to the `create` method:\n\n```python\ndf = pd.DataFrame({\n    \"field1\": [\"value1\"],\n    \"field2\": [\"value2\"]\n})\nBigQuery(\"new-dataset3.new-table\").create(data=df)\n# Output: Dataset \"new-dataset3\" created, table \"new-dataset3.new-table\" created, data inserted\n```\n\n### Deleting objects\n\nDeleting objects is similar to creating them, but you use the `delete` method instead:\n\n```python\nBigQuery(dataset=\"dataset\").delete()\n# Output: Dataset \"dataset\" and all its tables deleted\nBigQuery(\"dataset.table\").delete()\n# Output: Table \"dataset.table\" deleted\n```\n\n### Querying data\n\nTo read data from a BigQuery table, use the `query` method:\n\n```python\nquery = \"SELECT * FROM dataset.table\"\ndata = BigQuery().query(query)\nprint(data)\n# Output: [{'field1': 'value1', 'field2': 'value2'}]\n```\n\nAlternatively, there is a simple read method to read the data from a table with the given `columns`, `filters` and `limit`:\n\n```python\ndata = BigQuery(\"dataset.table\").read(\n    columns=[\"field1\"],\n    filters=[(\"field1\", \"=\", \"value1\")],\n    limit=1,\n    to_dataframe=True,\n)\nprint(data)\n# Output: pd.DataFrame({'field1': ['value1']})\n```\n\nBy default, the `read` method returns a Pandas DataFrame, but you can also get the data as a list of dictionaries by setting the `to_dataframe` parameter to `False`.\n\n### Inserting data\n\nTo insert data into a BigQuery table, use the `insert` method:\n\n```python\ndata = {\n    \"field1\": \"value1\",\n    \"field2\": \"value2\"\n}\nBigQuery(\"dataset.table\").insert(data)\n# Output: Data inserted\n```\n\n### External tables\n\nOne can also create BigQuery external tables by specifying the file path:\n\n```python\nfile_path = \"gs://bucket/file.parquet\"\nBigQuery(\"dataset.external_table\").create(file_path)\n# Output: Dataset \"dataset\" created, external table \"dataset.external_table\" created\n```\n\nThe allowed file formats are CSV, JSON, Avro, Parquet (single and partitioned), ORC.\n\n---\n\n## Storage Module\n\nThe Storage module in the `gcp-pal` library allows you to perform read and write operations on Google Cloud Storage buckets and objects.\n\n### Initializing Storage\n\nImport the Storage class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Storage\n```\n\n### Listing objects\n\nSimilar to the other modules, listing objects in a bucket is done using the `ls` method:\n\n```python\nbuckets = Storage().ls()\nprint(buckets)\n# Output: ['bucket1', 'bucket2']\nobjects = Storage(\"bucket1\").ls()\nprint(objects)\n# Output: ['object1', 'object2']\n```\n\n### Creating 
buckets\n\nTo create a bucket, use the `create` method:\n\n```python\nStorage(\"new-bucket\").create()\n# Output: Bucket \"new-bucket\" created\n```\n\n### Deleting objects\n\nDeleting objects is similar to creating them, but you use the `delete` method instead:\n\n```python\nStorage(\"bucket\").delete()\n# Output: Bucket \"bucket\" and all its objects deleted\nStorage(\"bucket/object\").delete()\n# Output: Object \"object\" in bucket \"bucket\" deleted\n```\n\n### Uploading and downloading objects\n\nTo upload an object to a bucket, use the `upload` method:\n\n```python\nStorage(\"bucket/uploaded_file.txt\").upload(\"local_file.txt\")\n# Output: File \"local_file.txt\" uploaded to \"bucket/uploaded_file.txt\"\n```\n\nTo download an object from a bucket, use the `download` method:\n\n```python\nStorage(\"bucket/uploaded_file.txt\").download(\"downloaded_file.txt\")\n# Output: File \"bucket/uploaded_file.txt\" downloaded to \"downloaded_file.txt\"\n```\n\n---\n\n\n## Cloud Functions Module\n\nThe Cloud Functions module in the `gcp-pal` library allows you to deploy and manage Cloud Functions.\n\n### Initializing Cloud Functions\n\nImport the `CloudFunctions` class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import CloudFunctions\n```\n\n### Deploying Cloud Functions\n\nTo deploy a Cloud Function, specifty the function's name, the source codebase, the entry point and any other parameters that are to be passed to `BuildConfig`, `ServiceConfig` and `Function` (see [docs](https://cloud.google.com/python/docs/reference/cloudfunctions/latest/google.cloud.functions_v2.types)):\n\n```python\nCloudFunctions(\"function-name\").deploy(\n    path=\"path/to/function_codebase\",\n    entry_point=\"main\",\n    environment=2,\n)\n```\n\nDeploying a Cloud Function from a local source depends on the `gcp_toole.Storage` module. By default, the source codebase is uploaded to the `gcf-v2-sources-{PROJECT_NUMBER}-{REGION}` bucket and is deployed from there. An alternative bucket can be specified via the `source_bucket` parameter:\n\n```python\nCloudFunctions(\"function-name\").deploy(\n    path=\"path/to/function_codebase\",\n    entry_point=\"main\",\n    environment=2,\n    source_bucket=\"bucket-name\",\n)\n```\n\n### Listing Cloud Functions\n\nTo list all Cloud Functions within a project, use the `ls` method:\n\n```python\nfunctions = CloudFunctions().ls()\nprint(functions)\n# Output: ['function1', 'function2']\n```\n\n### Deleting Cloud Functions\n\nTo delete a Cloud Function, use the `delete` method:\n\n```python\nCloudFunctions(\"function-name\").delete()\n# Output: Cloud Function \"function-name\" deleted\n```\n\n### Invoking Cloud Functions\n\nTo invoke a Cloud Function, use the `invoke` (or `call`) method:\n\n```python\nresponse = CloudFunctions(\"function-name\").invoke({\"key\": \"value\"})\nprint(response)\n# Output: {'output_key': 'output_value'}\n```\n\n### Getting Cloud Function details\n\nTo get the details of a Cloud Function, use the `get` method:\n\n```python\ndetails = CloudFunctions(\"function-name\").get()\nprint(details)\n# Output: {'name': 'projects/project-id/locations/region/functions/function-name', \n#          'build_config': {...}, 'service_config': {...}, 'state': {...}, ... 
}\n```\n\n### Using service accounts\n\nService account email can be specified either within the constructor or via the `service_account` parameter:\n\n```python\nCloudFunctions(\"function-name\", service_account=\"account@email.com\").deploy(**kwargs)\n# or\nCloudFunctions(\"function-name\").deploy(service_account=\"account@email.com\", **kwargs)\n```\n\n---\n\n## Cloud Run Module\n\nThe Cloud Run module in the `gcp-pal` library allows you to deploy and manage Cloud Run services.\n\n### Initializing Cloud Run\n\nImport the `CloudRun` class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import CloudRun\n```\n\n### Deploying Cloud Run services\n\n```python\nCloudRun(\"test-app\").deploy(path=\"samples/cloud_run\")\n# Output: \n# - Docker image \"test-app\" built based on \"samples/cloud_run\" codebase and \"samples/cloud_run/Dockerfile\".\n# - Docker image \"test-app\" pushed to Google Container Registry as \"gcr.io/{PROJECT_ID}/test-app:random_tag\".\n# - Cloud Run service \"test-app\" deployed from \"gcr.io/{PROJECT_ID}/test-app:random_tag\".\n```\n\nThe default tag is a random string but can be specified via the `image_tag` parameter:\n\n```python\nCloudRun(\"test-app\").deploy(path=\"samples/cloud_run\", image_tag=\"5fbd72c\")\n# Output: Cloud Run service deployed\n```\n\n### Listing Cloud Run services\n\nTo list all Cloud Run services within a project, use the `ls` method:\n\n```python\nservices = CloudRun().ls()\nprint(services)\n# Output: ['service1', 'service2']\n```\n\nTo list the job, set the `job` parameter to `True`:\n\n```python\njobs = CloudRun(job=True).ls()\nprint(jobs)\n# Output: ['job1', 'job2']\n```\n\n### Deleting Cloud Run services\n\nTo delete a Cloud Run service, use the `delete` method:\n\n```python\nCloudRun(\"service-name\").delete()\n# Output: Cloud Run service \"service-name\" deleted\n```\n\nSimilarly to delete a job, set the `job` parameter to `True`:\n\n```python\nCloudRun(\"job-name\", job=True).delete()\n```\n\n### Invoking Cloud Run services\n\nTo invoke a Cloud Run service, use the `invoke`/`call` method:\n\n```python\nresponse = CloudRun(\"service-name\").invoke({\"key\": \"value\"})\nprint(response)\n# Output: {'output_key': 'output_value'}\n```\n\n### Getting Cloud Run service details\n\nTo get the details of a Cloud Run service, use the `get` method:\n\n```python\ndetails = CloudRun(\"service-name\").get()\nprint(details)\n# Output: ...\n```\n\nTo get the status of a Cloud Run service, use the `status`/`state` method:\n\n```python\nservice_status = CloudRun(\"service-name\").status()\nprint(service_status)\n# Output: Active\njob_status = CloudRun(\"job-name\", job=True).status()\nprint(job_status)\n# Output: Active\n```\n\n### Using service accounts\n\nService account email can be specified either within the constructor or via the `service_account` parameter:\n\n```python\nCloudRun(\"run-name\", service_account=\"account@email.com\").deploy(**kwargs)\n# or\nCloudRun(\"run-name\").deploy(service_account=\"account@email.com\", **kwargs)\n```\n\n---\n\n## Docker Module\n\nThe Docker module in the `gcp-pal` library allows you to build and push Docker images to Google Container Registry.\n\n### Initializing Docker\n\nImport the Docker class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Docker\n```\n\n### Building Docker images\n\n```python\nDocker(\"image-name\").build(path=\"path/to/context\", dockerfile=\"Dockerfile\")\n# Output: Docker image \"image-name:latest\" built based on \"path/to/context\" codebase and 
\"path/to/context/Dockerfile\".\n```\n\nThe default `tag` is `\"latest\"` but can be specified via the `tag` parameter:\n\n```python\nDocker(\"image-name\", tag=\"5fbd72c\").build(path=\"path/to/context\", dockerfile=\"Dockerfile\")\n# Output: Docker image \"image-name:5fbd72c\" built based on \"path/to/context\" codebase and \"path/to/context/Dockerfile\".\n```\n\n### Pushing Docker images\n\n```python\nDocker(\"image-name\").push()\n# Output: Docker image \"image-name\" pushed to Google Container Registry.\n```\n\nThe default destination is `\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\"` but can be specified via the `destination` parameter:\n\n```python\nDocker(\"image-name\").push(destination=\"gcr.io/my-project/image-name:5fbd72c\")\n# Output: Docker image \"image-name\" pushed to \"gcr.io/my-project/image-name:5fbd72c\".\n```\n\n---\n\n\n## Logging Module\n\nThe Logging module in the `gcp-pal` library allows you to access and manage logs from Google Cloud Logging.\n\n### Initializing Logging\n\nImport the Logging class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Logging\n```\n\n### Listing logs\n\nTo list all logs within a project, use the `ls` method:\n\n```python\nlogs = Logging().ls(limit=2)\nfor log in logs:\n    print(log)\n# Output: LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}\n```\n\nWhere each entry is a `LogEntry` object with the following attributes: `project`, `log_name`, `resource`, `severity`, `message`, `timestamp`, `time_zone`, `timestamp_str`.\n\nThe `message` attribute is the main payload of the log entry.\n\n### Filtering logs\n\nTo filter logs based on a query, use the `filter` method:\n\n```python\nlogs = Logging().ls(filter=\"severity=ERROR\")\n# Output: [LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}, ...]\n```\n\nSome common filters are also supported natively: `severity` (str), `time_start` (str), `time_end` (str), `time_range` (int: hours). 
For example, the following are equivalent:\n\n```python\n# Time now: 2024-04-16 17:30:04.308 UTC\nlogs = Logging().ls(filter=\"severity=ERROR AND time_start=2024-04-16T16:30:04.308Z AND time_end=2024-04-16T17:30:04.308Z\")\nlogs = Logging().ls(severity=\"ERROR\", time_start=\"2024-04-16T16:30:04.308Z\", time_end=\"2024-04-16T17:30:04.308Z\")\nlogs = Logging().ls(severity=\"ERROR\", time_range=1)\n```\n\n### Streaming logs\n\nTo stream logs in real-time, use the `stream` method:\n\n```python\nLogging().stream()\n# LogEntry - [2024-04-16 17:30:04.308 UTC] {Message payload}\n# LogEntry - [2024-04-16 17:30:05.308 UTC] {Message payload}\n# ...\n```\n\n---\n\n## Secret Manager Module\n\nThe Secret Manager module in the `gcp-pal` library allows you to access and manage secrets from Google Cloud Secret Manager.\n\n### Initializing Secret Manager\n\nImport the SecretManager class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import SecretManager\n```\n\n### Creating secrets\n\nTo create a secret, specify the secret's name and value:\n\n```python\nSecretManager(\"secret1\").create(\"value1\", labels={\"env\": \"dev\"})\n# Output: Secret 'secret1' created\n```\n\n\n### Listing secrets\n\nTo list all secrets within a project, use the `ls` method:\n\n```python\nsecrets = SecretManager().ls()\nprint(secrets)\n# Output: ['secret1', 'secret2']\n```\n\nThe `ls` method also supports filtering secrets based on `filter` or `label` parameters:\n\n```python\nsecrets = SecretManager().ls(filter=\"name:secret1\")\nprint(secrets)\n# Output: ['secret1']\nsecrets = SecretManager().ls(label=\"env:*\")\nprint(secrets)\n# Output: ['secret1', 'secret2']\n```\n\n### Accessing secrets\n\nTo access the value of a secret, use the `value` method:\n\n```python\nvalue = SecretManager(\"secret1\").value()\nprint(value)\n# Output: 'value1'\n```\n\n### Deleting secrets\n\nTo delete a secret, use the `delete` method:\n\n```python\nSecretManager(\"secret1\").delete()\n# Output: Secret 'secret1' deleted\n```\n\n---\n\n## Cloud Scheduler Module\n\nThe Cloud Scheduler module in the `gcp-pal` library allows you to create and manage Cloud Scheduler jobs.\n\n### Initializing Cloud Scheduler\n\nImport the CloudScheduler class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import CloudScheduler\n```\n\n### Creating Cloud Scheduler jobs\n\nTo create a Cloud Scheduler job, specify the job's name in the constructor, and use the `create` method to set the schedule and target:\n\n```python\nCloudScheduler(\"job-name\").create(\n    schedule=\"* * * * *\",\n    time_zone=\"UTC\",\n    target=\"https://example.com/api\",\n    payload={\"key\": \"value\"},\n)\n# Output: Cloud Scheduler job \"job-name\" created with HTTP target \"https://example.com/api\"\n```\n\nIf the `target` is not an HTTP endpoint, it will be treated as a PubSub topic:\n\n```python\nCloudScheduler(\"job-name\").create(\n    schedule=\"* * * * *\",\n    time_zone=\"UTC\",\n    target=\"pubsub-topic-name\",\n    payload={\"key\": \"value\"},\n)\n# Output: Cloud Scheduler job \"job-name\" created with PubSub target \"pubsub-topic-name\"\n```\n\nAdditionally, `service_account` can be specified to add the OAuth and OIDC tokens to the request:\n\n```python\nCloudScheduler(\"job-name\").create(\n    schedule=\"* * * * *\",\n    time_zone=\"UTC\",\n    target=\"https://example.com/api\",\n    payload={\"key\": \"value\"},\n    service_account=\"PROJECT@PROJECT.iam.gserviceaccount.com\",\n)\n# Output: Cloud Scheduler job \"job-name\" created with HTTP target 
\"https://example.com/api\" and OAuth+OIDC tokens\n```\n\n### Listing Cloud Scheduler jobs\n\nTo list all Cloud Scheduler jobs within a project, use the `ls` method:\n\n```python\njobs = CloudScheduler().ls()\nprint(jobs)\n# Output: ['job1', 'job2']\n```\n\n### Deleting Cloud Scheduler jobs\n\nTo delete a Cloud Scheduler job, use the `delete` method:\n\n```python\nCloudScheduler(\"job-name\").delete()\n# Output: Cloud Scheduler job \"job-name\" deleted\n```\n\n### Managing Cloud Scheduler jobs\n\nTo pause or resume a Cloud Scheduler job, use the `pause` or `resume` methods:\n\n```python\nCloudScheduler(\"job-name\").pause()\n# Output: Cloud Scheduler job \"job-name\" paused\nCloudScheduler(\"job-name\").resume()\n# Output: Cloud Scheduler job \"job-name\" resumed\n```\n\nTo run a Cloud Scheduler job immediately, use the `run` method:\n\n```python\nCloudScheduler(\"job-name\").run()\n# Output: Cloud Scheduler job \"job-name\" run\n```\n\nIf the job is paused, it will be resumed before running. To prevent this, set the `force` parameter to `False`:\n\n```python\nCloudScheduler(\"job-name\").run(force=False)\n# Output: Cloud Scheduler job \"job-name\" not run if it is paused\n```\n\n### Using service accounts\n\nService account email can be specified either within the constructor or via the `service_account` parameter:\n\n```python\nCloudScheduler(\"job-name\", service_account=\"account@email.com\").create(**kwargs)\n# or\nCloudScheduler(\"job-name\").create(service_account=\"account@email.com\", **kwargs)\n```\n\n\n---\n\n## Project Module\n\nThe Project module in the `gcp-pal` library allows you to access and manage Google Cloud projects.\n\n### Initializing Project\n\nImport the Project class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Project\n```\n\n### Listing projects\n\nTo list all projects available to the authenticated user, use the `ls` method:\n\n```python\nprojects = Project().ls()\nprint(projects)\n# Output: ['project1', 'project2']\n```\n\n### Creating projects\n\nTo create a new project, use the `create` method:\n\n```python\nProject(\"new-project\").create()\n# Output: Project \"new-project\" created\n```\n\n### Deleting projects\n\nTo delete a project, use the `delete` method:\n\n```python\nProject(\"project-name\").delete()\n# Output: Project \"project-name\" deleted\n```\n\nGoogle Cloud will delete the project after 30 days. 
---

## Project Module

The Project module in the `gcp-pal` library allows you to access and manage Google Cloud projects.

### Initializing Project

Import the Project class from the `gcp_pal` module:

```python
from gcp_pal import Project
```

### Listing projects

To list all projects available to the authenticated user, use the `ls` method:

```python
projects = Project().ls()
print(projects)
# Output: ['project1', 'project2']
```

### Creating projects

To create a new project, use the `create` method:

```python
Project("new-project").create()
# Output: Project "new-project" created
```

### Deleting projects

To delete a project, use the `delete` method:

```python
Project("project-name").delete()
# Output: Project "project-name" deleted
```

Google Cloud only deletes the project permanently after 30 days. During this period, you can restore the project with the `undelete` method:

```python
Project("project-name").undelete()
# Output: Project "project-name" undeleted
```

### Getting project details

To get the details of a project, use the `get` method:

```python
details = Project("project-name").get()
print(details)
# Output: {'name': 'projects/project-id', 'project_id': 'project-id', ...}
```

To obtain the project number, use the `number` method:

```python
project_number = Project("project-name").number()
print(project_number)
# Output: "1234567890"
```
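As a small illustration combining `ls` and `number`, the following sketch prints the project number of every project visible to the authenticated user:

```python
from gcp_pal import Project

# Print "<project_id> <project_number>" for every accessible project.
for project_id in Project().ls():
    print(project_id, Project(project_id).number())
```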
---

## Dataplex Module

The Dataplex module in the `gcp-pal` library allows you to interact with Dataplex services.

### Initializing Dataplex

Import the Dataplex class from the `gcp_pal` module:

```python
from gcp_pal import Dataplex
```

### Listing Dataplex objects

The Dataplex module supports listing all lakes, zones, and assets within a Dataplex instance:

```python
lakes = Dataplex().ls()
print(lakes)
# Output: ['lake1', 'lake2']
zones = Dataplex("lake1").ls()
print(zones)
# Output: ['zone1', 'zone2']
assets = Dataplex("lake1/zone1").ls()
print(assets)
# Output: ['asset1', 'asset2']
```

### Creating Dataplex objects

To create a lake, zone, or asset within a Dataplex instance, use the `create_lake`, `create_zone`, and `create_asset` methods.

To create a lake:

```python
Dataplex("lake1").create_lake()
# Output: Lake "lake1" created
```

To create a zone (zone type and location type are required):

```python
Dataplex("lake1/zone1").create_zone(zone_type="raw", location_type="single-region")
# Output: Zone "zone1" created in Lake "lake1"
```

To create an asset (asset source and asset type are required):

```python
Dataplex("lake1/zone1").create_asset(asset_source="dataset_name", asset_type="bigquery")
# Output: Asset "asset1" created in Zone "zone1" of Lake "lake1"
```

### Deleting Dataplex objects

Deleting objects can be done using a single `delete` method:

```python
Dataplex("lake1/zone1/asset1").delete()
# Output: Asset "asset1" deleted
Dataplex("lake1/zone1").delete()
# Output: Zone "zone1" and all its assets deleted
Dataplex("lake1").delete()
# Output: Lake "lake1" and all its zones and assets deleted
```
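As an end-to-end sketch, the three `create_*` calls can be chained to stand up a small hierarchy from scratch; the lake, zone, and dataset names below are placeholders, and the keyword arguments are used exactly as documented above:

```python
from gcp_pal import Dataplex

# Create a lake, a raw single-region zone inside it, and a BigQuery-backed asset.
Dataplex("analytics-lake").create_lake()
Dataplex("analytics-lake/raw-zone").create_zone(zone_type="raw", location_type="single-region")
Dataplex("analytics-lake/raw-zone").create_asset(asset_source="events_dataset", asset_type="bigquery")
```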
---

## Artifact Registry

The Artifact Registry module in the `gcp-pal` library allows you to interact with Artifact Registry services.

### Initializing Artifact Registry

Import the ArtifactRegistry class from the `gcp_pal` module:

```python
from gcp_pal import ArtifactRegistry
```

### Listing Artifact Registry objects

The objects within the Artifact Registry module follow the hierarchy: repositories > packages > versions > tags.

To list all repositories within a project, use the `ls` method:

```python
repositories = ArtifactRegistry().ls()
print(repositories)
# Output: ['repo1', 'repo2']
```

To list all packages (or "images") within a repository, use the `ls` method with the repository name:

```python
images = ArtifactRegistry("repo1").ls()
print(images)
# Output: ['image1', 'image2']
```

To list all versions of a package, use the `ls` method with the repository and package names:

```python
versions = ArtifactRegistry("repo1/image1").ls()
print(versions)
# Output: ['repo1/image1/sha256:version1', 'repo1/image1/sha256:version2']
```

To list all tags of a version, use the `ls` method with the repository, package, and version names:

```python
tags = ArtifactRegistry("repo1/image1/sha256:version1").ls()
print(tags)
# Output: ['repo1/image1/tag1', 'repo1/image1/tag2']
```

### Creating Artifact Registry objects

To create a repository, use the `create_repository` method with the repository name:

```python
ArtifactRegistry("repo1").create_repository()
# Output: Repository "repo1" created
```

Some additional parameters can be specified within the method, such as the format (`"docker"` or `"maven"`) and the mode (`"standard"`, `"remote"` or `"virtual"`).

To create a tag, use the `create_tag` method with the repository, package, version, and tag names:

```python
ArtifactRegistry("repo1/image1/sha256:version1").create_tag("tag1")
# Output: Tag "tag1" created for version "version1" of package "image1" in repository "repo1"
```

### Deleting Artifact Registry objects

Deleting objects can be done using a single `delete` method:

```python
ArtifactRegistry("repo1/image1:tag1").delete()
# Output: Tag "tag1" deleted for package "image1" in repository "repo1"
ArtifactRegistry("repo1/image1/sha256:version1").delete()
# Output: Version "version1" deleted for package "image1" in repository "repo1"
ArtifactRegistry("repo1/image1").delete()
# Output: Package "image1" deleted in repository "repo1"
ArtifactRegistry("repo1").delete()
# Output: Repository "repo1" deleted
```
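Because `ls` accepts each level of the hierarchy as a path, it can be nested to walk everything under a project. The sketch below assumes, as the outputs above suggest, that repositories and packages are returned as bare names while versions are returned as full paths:

```python
from gcp_pal import ArtifactRegistry

# Walk repositories -> packages (images) -> versions and print each version path.
for repo in ArtifactRegistry().ls():
    for image in ArtifactRegistry(repo).ls():
        for version in ArtifactRegistry(f"{repo}/{image}").ls():
            print(version)
```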
    pa.field(\"c\", pa.float64()),\n#        pa.field(\"date\", pa.timestamp(\"ns\")),\n#    ]\n# )\n```\n\n---\n\n## Parquet Module\n\nThe Parquet module in the `gcp-pal` library allows you to read and write Parquet files in Google Cloud Storage. The `gcp_pal.Storage` module uses this module to read and write Parquet files to and from Google Cloud Storage.\n\n### Initializing Parquet\n\nImport the Parquet class from the `gcp_pal` module:\n\n```python\nfrom gcp_pal import Parquet\n```\n\n### Reading Parquet files\n\nTo read a Parquet file from Google Cloud Storage, use the `read` method:\n\n```python\ndata = Parquet(\"bucket/file.parquet\").read()\nprint(data)\n# Output: pd.DataFrame({'field1': ['value1'], 'field2': ['value2']})\n```\n\n### Writing Parquet files\n\nTo write a Pandas DataFrame to a Parquet file in Google Cloud Storage, use the `write` method:\n\n```python\ndf = pd.DataFrame({\n    \"field1\": [\"value1\"],\n    \"field2\": [\"value2\"]\n})\nParquet(\"bucket/file.parquet\").write(df)\n# Output: Parquet file \"file.parquet\" created in \"bucket\"\n```\n\nPartitioning can be specified via the `partition_cols` parameter:\n\n```python\nParquet(\"bucket/file.parquet\").write(df, partition_cols=[\"field1\"])\n# Output: Parquet file \"file.parquet\" created in \"bucket\" partitioned by \"field1\"\n```\n\n",
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Set of utilities for interacting with Google Cloud Platform",
    "version": "1.0.41",
    "project_urls": {
        "Homepage": "https://github.com/VitaminB16/gcp-pal"
    },
    "split_keywords": [
        "gcp",
        " google cloud",
        " google cloud python",
        " gcp api",
        " gcp python api"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "a2f817924018e3cdb9044889fcb1284034b228de166811f72dee679f2f247546",
                "md5": "b5d0604d3ccea89de8d6a96198581ac3",
                "sha256": "b74d698afd159f572f17015fc61557cec5dd534acd2f9d0464c15f798d098abb"
            },
            "downloads": -1,
            "filename": "gcp_pal-1.0.41-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "b5d0604d3ccea89de8d6a96198581ac3",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 72300,
            "upload_time": "2024-09-27T09:30:05",
            "upload_time_iso_8601": "2024-09-27T09:30:05.027365Z",
            "url": "https://files.pythonhosted.org/packages/a2/f8/17924018e3cdb9044889fcb1284034b228de166811f72dee679f2f247546/gcp_pal-1.0.41-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "64ef4b8d060f63472d7753589702e35194746bf1d1776c1d90259e626dca9778",
                "md5": "3d3cfa4548c1a6d9526566569654f9a0",
                "sha256": "506bd9d13e700c9c4bce763a1fab059c698a8601e874acafed83710f7fb030cc"
            },
            "downloads": -1,
            "filename": "gcp_pal-1.0.41.tar.gz",
            "has_sig": false,
            "md5_digest": "3d3cfa4548c1a6d9526566569654f9a0",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 68569,
            "upload_time": "2024-09-27T09:30:06",
            "upload_time_iso_8601": "2024-09-27T09:30:06.758493Z",
            "url": "https://files.pythonhosted.org/packages/64/ef/4b8d060f63472d7753589702e35194746bf1d1776c1d90259e626dca9778/gcp_pal-1.0.41.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-09-27 09:30:06",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "VitaminB16",
    "github_project": "gcp-pal",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "gcp-pal"
}
        
Elapsed time: 2.70414s