cjm-torchvision-tfms


Name: cjm-torchvision-tfms
Version: 0.0.11
Home page: https://github.com/cj-mills/cjm-torchvision-tfms
Summary: Some custom Torchvision transforms.
Upload time: 2024-02-23 00:54:38
Author: Christian Mills
Requires Python: >=3.10
License: Apache Software License 2.0
Keywords: nbdev, jupyter, notebook, python
# cjm-torchvision-tfms

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install cjm_torchvision_tfms
```

## How to use

``` python
from PIL import Image

img_path = './images/call-hand-gesture.png'

# Open the image file as an RGB image
sample_img = Image.open(img_path).convert('RGB')

# Print the dimensions of the image
print(f"Image Dims: {sample_img.size}")

# Show the image
sample_img
```

    Image Dims: (384, 512)

![](index_files/figure-commonmark/cell-2-output-2.png)

``` python
from cjm_torchvision_tfms.core import ResizeMax, PadSquare, CustomTrivialAugmentWide

import torch
from torchvision import transforms
from cjm_pytorch_utils.core import tensor_to_pil
from cjm_pil_utils.core import stack_imgs
```

``` python
target_sz = 384
```

``` python
print(f"Source image: {sample_img.size}")

# Create a `ResizeMax` object
resize_max = ResizeMax(max_sz=target_sz)

# Convert the sample image to a tensor and add a batch dimension
img_tensor = transforms.PILToTensor()(sample_img)[None]
print(f"Image tensor: {img_tensor.shape}")

# Resize the tensor
resized_tensor = resize_max(img_tensor)
print(f"Resized tensor: {resized_tensor.shape}")

# Display the updated image
tensor_to_pil(resized_tensor)
```

    Source image: (384, 512)
    Image tensor: torch.Size([1, 3, 512, 384])
    Resized tensor: torch.Size([1, 3, 384, 288])

![](index_files/figure-commonmark/cell-6-output-2.png)
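From the shapes printed above, `ResizeMax` appears to scale the image so its longest side equals `max_sz` while preserving the aspect ratio. A pure-Python sketch of that arithmetic (the helper name is hypothetical, not part of the library):

``` python
def resize_max_dims(height, width, max_sz):
    """Scale (height, width) so the longest side equals max_sz,
    preserving the aspect ratio."""
    scale = max_sz / max(height, width)
    return int(height * scale), int(width * scale)

# Matches the shapes printed above: 512x384 -> 384x288
print(resize_max_dims(512, 384, 384))  # → (384, 288)
```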

``` python
print(f"Resized tensor: {resized_tensor.shape}")

# Create a `PadSquare` object
pad_square = PadSquare(shift=True)

# Pad the tensor
padded_tensor = pad_square(resized_tensor)
print(f"Padded tensor: {padded_tensor.shape}")

# Display three padded variants (the padding offset is random with `shift=True`)
stack_imgs([tensor_to_pil(pad_square(resized_tensor)) for i in range(3)])
```

    Resized tensor: torch.Size([3, 384, 288])
    Padded tensor: torch.Size([3, 384, 384])

![](index_files/figure-commonmark/cell-8-output-2.png)
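Judging from the output shape, `PadSquare` pads the shorter side out to the longer one, and `shift=True` randomizes where the padding lands instead of splitting it evenly. A pure-Python sketch of that padding arithmetic (the helper name and even-split behavior for `shift=False` are assumptions, not the library's actual implementation):

``` python
import random

def pad_square_amounts(height, width, shift=True):
    """Return (left, right, top, bottom) padding that makes the image square.
    With shift=True, the split point is random rather than centered."""
    size = max(height, width)
    pad_h, pad_w = size - height, size - width
    if shift:
        left = random.randint(0, pad_w)
        top = random.randint(0, pad_h)
    else:
        left, top = pad_w // 2, pad_h // 2
    return left, pad_w - left, top, pad_h - top

# A 384x288 image needs 96 columns of padding to become 384x384
left, right, top, bottom = pad_square_amounts(384, 288)
print(288 + left + right, 384 + top + bottom)  # → 384 384
```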

``` python
num_bins = 31

custom_augmentation_space = {
    # Identity operation doesn't change the image
    "Identity": (torch.tensor(0.0), False),
            
    # Distort the image along the x or y axis, respectively.
    "ShearX": (torch.linspace(0.0, 0.25, num_bins), True),
    "ShearY": (torch.linspace(0.0, 0.25, num_bins), True),

    # Move the image along the x or y axis, respectively.
    "TranslateX": (torch.linspace(0.0, 32.0, num_bins), True),
    "TranslateY": (torch.linspace(0.0, 32.0, num_bins), True),

    # Rotate operation: rotates the image.
    "Rotate": (torch.linspace(0.0, 45.0, num_bins), True),

    # Adjust brightness, color, contrast, and sharpness, respectively.
    "Brightness": (torch.linspace(0.0, 0.75, num_bins), True),
    "Color": (torch.linspace(0.0, 0.99, num_bins), True),
    "Contrast": (torch.linspace(0.0, 0.99, num_bins), True),
    "Sharpness": (torch.linspace(0.0, 0.99, num_bins), True),

    # Reduce the number of bits used to express the color in each channel of the image.
    "Posterize": (8 - (torch.arange(num_bins) / ((num_bins - 1) / 6)).round().int(), False),

    # Invert all pixel values above a threshold.
    "Solarize": (torch.linspace(255.0, 0.0, num_bins), False),

    # Maximize the image contrast by setting the darkest color to black and the lightest to white.
    "AutoContrast": (torch.tensor(0.0), False),

    # Equalize the image histogram to improve its contrast.
    "Equalize": (torch.tensor(0.0), False),
}

# Create a `CustomTrivialAugmentWide` object
trivial_aug = CustomTrivialAugmentWide(op_meta=custom_augmentation_space)

# Apply a random augmentation from the space above
aug_tensor = trivial_aug(resized_tensor)
print(f"Augmented tensor: {aug_tensor.shape}")

# Display three randomly augmented variants
stack_imgs([tensor_to_pil(trivial_aug(resized_tensor)) for i in range(3)])
```

    Augmented tensor: torch.Size([3, 384, 288])

![](index_files/figure-commonmark/cell-10-output-2.png)
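The `Posterize` entry in the augmentation space maps the 31 bins to bit depths running from 8 down to 2. A pure-Python check of that arithmetic (Python's `round` uses the same half-to-even rounding as `torch.round`, so no torch is needed):

``` python
num_bins = 31

# Same formula as the "Posterize" entry: 8 - round(i / ((num_bins - 1) / 6))
posterize_bits = [8 - round(i / ((num_bins - 1) / 6)) for i in range(num_bins)]

print(posterize_bits[0], posterize_bits[-1])  # → 8 2
```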

            
