cloudspark

Name: cloudspark
Version: 1.0.13
Home page: https://github.com/muhammedrahil/cloudspark
Summary: CloudSpark is a powerful Python package designed to simplify the management of AWS S3 and Lambda services. Whether you're working on the frontend or backend, CloudSpark provides an intuitive interface to generate presigned URLs and handle file uploads seamlessly.
Upload time: 2024-08-14 11:07:53
Author: Muhammed Rahil M
Keywords: python, aws, s3, presigned urls, file uploads
Requirements: No requirements were recorded.
# CloudSpark





`cloudspark` is a Python package that provides a convenient way to manage AWS S3 buckets and generate presigned URLs using `boto3`. It supports bucket creation, CORS management, policy setting, and presigned URL generation.



## Features



- **S3 Management**: Effortlessly manage your S3 buckets, objects, and file uploads with built-in methods.



- **Presigned URL Generation**: Generate secure presigned URLs for your S3 objects, enabling users to upload files directly from the frontend without exposing your credentials.



- **Seamless Integration**: Designed to work smoothly with both frontend and backend applications, making file uploads and cloud function management more accessible.





This repository contains several components. For more details on each component, refer to the respective README files.



- [git repository](https://github.com/muhammedrahil/cloudspark)

- [Documentation](https://github.com/muhammedrahil/cloudspark/blob/main/README.md)

- [Js-Implementation](https://github.com/muhammedrahil/cloudspark/blob/main/js-implementation.md)





## Installation



You can install `cloudspark` via pip. Make sure you have `boto3` installed as well.



```bash 

pip install cloudspark 

```

## Usage

#### Importing the Library

```python

from cloudspark import S3Connection

```

#### Initializing the Connection

Create an instance of `S3Connection` by providing your AWS credentials and region name

```python

s3_conn = S3Connection(access_key='YOUR_ACCESS_KEY',

                       secret_access_key='YOUR_SECRET_ACCESS_KEY',

                       region_name='YOUR_REGION_NAME')

```
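To avoid hardcoding credentials, one option is to read them from environment variables (a minimal sketch; the variable names below are just conventional, and `S3Connection` only needs the three string arguments shown above):

```python
import os

from cloudspark import S3Connection

# Illustrative: pull credentials from the environment instead of hardcoding them.
s3_conn = S3Connection(
    access_key=os.environ['AWS_ACCESS_KEY_ID'],
    secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    region_name=os.environ.get('AWS_REGION', 'us-east-1')
)
```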



#### Connecting to a Bucket

Establish a connection to an S3 bucket: 

```python

s3_bucketclient = s3_conn.connect(bucket_name='your-bucket-name')

```

Returns an S3 client instance.



#### Get the Current Client Instance

```python

s3_bucketclient = s3_conn.get_instance()

```

Returns the current S3 client instance.



#### Creating a bucket

Creates an S3 bucket with the provided name.

```python

s3_client = s3_conn.connect()

s3_bucketclient = s3_conn.create_s3bucket(bucket_name='new-bucket-name')

```



#### Set Bucket CORS

Set the CORS configuration for the connected bucket



```python

s3_conn.set_bucket_cors()



# CORSRules: A list of dictionaries containing CORS rules.

# If None, a default CORS rule allowing all origins and methods is applied.

cors_rules = [

    {

        'AllowedHeaders': ['*'],

        'AllowedMethods': ['GET', 'POST'],

        'AllowedOrigins': ['*'],

        'ExposeHeaders': [],

        'MaxAgeSeconds': 3000

    }

]



s3_conn.set_bucket_cors(CORSRules=cors_rules)

```

#### Get Bucket CORS

Retrieve the CORS configuration for the connected bucket

```python

cors_config = s3_conn.get_bucket_cors()

```



#### Delete Bucket CORS

Delete the CORS configuration from the connected bucket

```python

s3_conn.delete_bucket_cors()

```

#### Set Bucket Policy

Set or update the bucket policy for the connected bucket

```python



s3_conn.set_bucket_policy()



# A JSON string or dictionary representing the bucket policy. 

# If None, a default public read policy is applied.



bucket_name = 'your-bucket-name'  # used in the Resource ARN below

policy = {

    "Version": "2012-10-17",

    "Statement": [

        {

            "Sid": "PublicReadGetObject",

            "Effect": "Allow",

            "Principal": "*",

            "Action": "s3:GetObject",

            "Resource": f"arn:aws:s3:::{bucket_name}/*"

        }

    ]

}

s3_conn.set_bucket_policy(bucket_policy=policy)

```

#### Policy Fields Explanation



#### 1. Version

- **Key**: `"Version"`
- **Value**: `"2012-10-17"`
- **Explanation**: This specifies the version of the policy language. `"2012-10-17"` is the latest version and is recommended for use.

#### 2. Id

- **Key**: `"Id"`
- **Value**: `"Policy1719828879302"`
- **Explanation**: The Id is an optional identifier for the policy. It helps in identifying the policy, especially when dealing with multiple policies. The value here is just a unique identifier that you can use to track or refer to this policy.

#### 3. Statement

- **Key**: `"Statement"`
- **Value**: `[ ... ]`
- **Explanation**: This is an array of statements that define what actions are allowed or denied. Each statement contains specific instructions for who can perform what actions on which resources.

Inside the Statement array:

- **Sid**
  - Value: `"Stmt1719828876812"`
  - The Sid (Statement ID) is an optional identifier for the specific statement. It helps in identifying and managing individual statements within the policy.

- **Effect**
  - Value: `"Allow"`
  - The Effect determines whether the statement allows or denies access. Here, `"Allow"` means that the actions specified in the Action field are permitted. If it were `"Deny"`, those actions would be explicitly forbidden.

- **Principal**
  - Value: `"*"`
  - The Principal specifies the entity that is allowed or denied the actions. `"*"` means any entity (i.e., any user, role, or service) is allowed to perform the specified actions. You could also restrict this to a specific AWS account or IAM user, as shown in the sketch after this list.

- **Action**
  - Value: `"s3:GetObject"`
  - The Action specifies what actions are allowed or denied. Here, `"s3:GetObject"` means that the entities specified in the Principal field are allowed to retrieve objects from the S3 bucket. `"s3:GetObject"` is the action used for downloading files from S3.

- **Resource**
  - Value: `"arn:aws:s3:::{{bucket_name}}/*"`
  - The Resource specifies the particular AWS resource that the policy applies to. Here, it is `"arn:aws:s3:::{{bucket_name}}/*"`, which covers all objects (files) within the `{{bucket_name}}` S3 bucket. This Amazon Resource Name (ARN) uniquely identifies the bucket and its objects, and the trailing `/*` means the policy applies to every object within the bucket.
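As a rough sketch of the restricted-`Principal` case mentioned above, a policy granting read access to a single IAM user could look like this (the account ID and user name are placeholders, and it assumes `set_bucket_policy` accepts a dictionary as in the earlier example):

```python
bucket_name = 'your-bucket-name'

# Illustrative only: allow a single IAM user (placeholder account ID and user name)
# to read objects from the bucket.
restricted_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSingleUserRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*"
        }
    ]
}

s3_conn.set_bucket_policy(bucket_policy=restricted_policy)
```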



#### Get Bucket Policy

Retrieves the bucket policy for the connected S3 bucket

```python

policy = s3_conn.get_bucket_policy()

```



#### Delete Bucket Policy

Delete the bucket policy from the connected bucket

```python

s3_conn.delete_bucket_policy()

```



#### List User Policies

List inline policies for an IAM user

```python

policies = s3_conn.list_user_policies(UserName='user-name')

```

#### Block or Allow Public Access

Block or allow public access to the bucket

```python

s3_conn.public_access(block=True)  # Block public access

s3_conn.public_access(block=False) # Allow public access

```



#### Generate Presigned Create URL

Generate a presigned URL for creating an object



```python

response = s3_conn.presigned_create_url(

    object_name='object_name',

    params={'key': 'value'}, # Or params= None

    fields={'field': 'value'}, # Or fields= None

    conditions=[{'condition': 'value'}], # Or conditions= None

    expiration=3600

)

```

`object_name` : The name of the object to be created in the S3 bucket.



`params`: (Optional) Additional request parameters to include in the presigned URL.



`fields` : (Optional) Pre-filled form fields to include in the presigned URL.



`conditions`: (Optional) Conditions to include in the presigned URL.



`expiration` : (Optional) Time in seconds for which the presigned URL should remain valid. Default is 3600 seconds (1 hour).
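For instance, `conditions` and `fields` typically mirror S3 POST policy conditions; a minimal sketch (values illustrative, assuming they pass straight through to the underlying boto3 presigned-POST call):

```python
# Illustrative: cap uploads at 10 MB and pin the content type.
conditions = [
    ["content-length-range", 0, 10 * 1024 * 1024],
    {"Content-Type": "image/png"}
]

response = s3_conn.presigned_create_url(
    object_name='uploads/avatar.png',
    fields={"Content-Type": "image/png"},   # must match the condition above
    conditions=conditions,
    expiration=3600
)
```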



##### Output



```python

{

  "url": "https://{{bucket_name}}.s3.amazonaws.com/",

  "fields": {

    "key": "{{object_name.extention}}",

    "x-amz-algorithm": "AWS4-HMAC-SHA256",

    "x-amz-credential": "{{access_key}}/20240814/us-east-1/s3/aws4_request",

    "x-amz-date": "20240814T092932Z",

    "x-amz-security-token": "{{sts_token}}",

    "policy": "{{policy}}",

    "x-amz-signature": "{{signature}}"

  }

}



```
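On the client side, a minimal sketch of uploading a file with this response using the `requests` library (the file name is illustrative; `response` is the dictionary returned above):

```python
import requests

# Post the file directly to S3 using the presigned URL and form fields.
with open('local-file.png', 'rb') as file_obj:
    upload = requests.post(
        response['url'],
        data=response['fields'],   # presigned form fields returned by the call above
        files={'file': file_obj}   # the actual file contents
    )

# S3 responds with 204 No Content when the upload succeeds.
print(upload.status_code)
```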



#### Generate Presigned Delete URL

Generate a presigned URL for deleting an object



```python

response = s3_conn.presigned_delete_url(object_name='object_name', expiration=3600)

```



##### Output



```python

"https://{{bucket_name}}.s3.amazonaws.com/hdfwf.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={{access_key}}%2F20240812%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20240812T151744Z&X-Amz-Expires=4600&X-Amz-SignedHeaders=host&X-Amz-Signature=a680ee638e9cc27f2ec21cae36ad807cffb2fc74e1e91fa55709f25f324d1e22"

```
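A minimal sketch of consuming this URL from a client, again with `requests` (assuming `response` holds the URL string returned above):

```python
import requests

# Issue the DELETE directly against the presigned URL.
result = requests.delete(response)

# S3 responds with 204 No Content when the object is deleted.
print(result.status_code)
```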



#### Upload an Object

Uploads a file to the connected S3 bucket.



```python

response = s3_conn.upload_object(file=file, key_name="object_name")

# example:
with open('file_name', 'rb') as file_obj:
    s3_conn.upload_object(file=file_obj, key_name="object_name")

```

`file`: Bytes of the file to upload



`key_name`: S3 object name (e.g., 'folder/filename.txt').



#### Retrieve an Object

Retrieves an object from the connected S3 bucket and returns the object metadata.



```python

key_object = s3_conn.get_object(key_name="object_name")



```



#### Delete an Object

Deletes an object from the connected S3 bucket.



```python

key_object = s3_conn.delete_object(key_name="object_name")



```



#### List Objects

Lists objects in the connected S3 bucket.



```python

key_object = s3_conn.get_objects()

```



```python

key_object = s3_conn.get_objects(only_objects=True)

```



`only_objects`: If True, returns a list of object metadata (excluding keys).



```python

key_object = s3_conn.get_objects(only_keys=True)

```



`only_keys`: If True, returns a list of object keys.
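Keys returned this way can be fed back into other calls; a rough sketch (the prefix is illustrative, and it assumes the list contains plain key strings as described above):

```python
# Illustrative: delete every object under a given prefix.
for key in s3_conn.get_objects(only_keys=True):
    if key.startswith('tmp/'):
        s3_conn.delete_object(key_name=key)
```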



#### policy_decode Function



The `policy_decode` function is designed to decode a Base64-encoded AWS S3 policy string and return it as a formatted JSON string. This is useful for inspecting and validating S3 presigned URL policies.



Decodes a Base64-encoded policy string and returns it as a formatted JSON string

```python



policy_dict = s3_conn.policy_decode(policy_encoded="policy_encoded string")



```
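For reference, the decoding itself is plain Base64 plus JSON; a standalone sketch (not the package's internal implementation) looks roughly like this:

```python
import base64
import json

def decode_policy(policy_encoded: str) -> str:
    """Decode a Base64-encoded S3 policy string into pretty-printed JSON."""
    decoded_bytes = base64.b64decode(policy_encoded)
    return json.dumps(json.loads(decoded_bytes), indent=2)
```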


            
