uttm


Name: uttm
Version: 1.0.5
Home page: None
Summary: UTTM (Unsupervised-Torch-Template-Matching) is a tool for robust 2D template matching based on torch unsupervised learning.
Upload time: 2025-02-21 06:52:28
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.10
License: MIT License. Copyright (c) 2025 RoboticPlus. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Keywords: template-matching, torch
Requirements: No requirements were recorded.
# Unsupervised-Torch-Template-Matching


A repository for robust 2D template matching based on unsupervised learning with torch.



## Installation
```
pip install -r requirements.txt
```
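
Since this package is published on PyPI as ```uttm```, it can presumably also be installed directly (the README itself only documents installing the requirements):
```
pip install uttm
```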

The main dependencies are:
- numpy
- torch
- torchvision
- opencv-python


## Pipeline
### 1. Preprocess templates and segmentations
By default, the binary template and segmentation images can be of arbitrary size.

The preprocessing step automatically turns each mask into a 512x512 image in which the center of the mask's minimum enclosing circle lies at the image center and the circle's radius is 128 pixels. The step records the padding, translation, rotation, and scaling that were applied, so that the templates can be restored to the original images.
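
The actual implementation ships with the package; the following is only a minimal OpenCV/NumPy sketch of the normalization described above, with an illustrative function name and parameter defaults:

```python
import cv2
import numpy as np

def normalize_mask(mask, out_size=512, target_radius=128):
    """Illustrative sketch: center a binary mask's minimum enclosing circle
    in an out_size x out_size image and rescale the circle to target_radius."""
    pts = cv2.findNonZero(mask)                      # Nx1x2 foreground pixels
    assert pts is not None, "empty mask"
    (cx, cy), radius = cv2.minEnclosingCircle(pts)   # circle in the original image
    scale = target_radius / max(radius, 1e-6)

    # Affine transform: scale about the circle center, then move that
    # center to the middle of the output image.
    t = out_size / 2.0
    M = np.array([[scale, 0.0, t - scale * cx],
                  [0.0, scale, t - scale * cy]], dtype=np.float32)
    normalized = cv2.warpAffine(mask, M, (out_size, out_size),
                                flags=cv2.INTER_NEAREST)

    # Keep the parameters so the result can be mapped back to the original image.
    params = {"center": (cx, cy), "radius": radius, "scale": scale}
    return normalized, params
```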

### 2. Compute statistics by unsupervised learning


### 3. Fine-tune the rotation by 2D ICP (optional)
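
The package's ICP step is not reproduced here; as a rough illustration only, a rotation-only 2D ICP over point sets (for example, contour points of the coarsely rotated template and of the segmentation) could look like the NumPy sketch below. The function name and convergence threshold are hypothetical.

```python
import numpy as np

def refine_rotation_icp(src_pts, dst_pts, n_iters=20):
    """Rotation-only 2D ICP sketch: alternate nearest-neighbour matching
    with a closed-form (Kabsch-style) rotation estimate about the centroid."""
    src = src_pts - src_pts.mean(axis=0)
    dst = dst_pts - dst_pts.mean(axis=0)
    total_angle = 0.0
    for _ in range(n_iters):
        # Brute-force nearest neighbours (fine for contour-sized point sets).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Closed-form optimal 2D rotation between the matched point sets.
        H = src.T @ matched
        angle = np.arctan2(H[0, 1] - H[1, 0], H[0, 0] + H[1, 1])
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s], [s, c]])
        src = src @ R.T
        total_angle += angle
        if abs(angle) < 1e-6:   # converged
            break
    return total_angle  # radians, to be added to the coarse rotation estimate
```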

### 4. Visualization for matching evaluation (optional)



## Data Preparation

We provide the [example data](https://drive.google.com/drive/folders/1m9idEbKWOyDbeqHgnHHvbdW2UAv4ANhC) used for template matching. The inputs mainly consist of:
- template images
- segmentation images
- (optional) original images before segmentation, used only for visualization

For custom data, a user can either extract foreground masks through online platforms such as https://www.fotor.com/features/background-remover/, or locally run a segmentation model such as Segment Anything (https://huggingface.co/docs/transformers/model_doc/sam) or BiRefNet (https://huggingface.co/ZhengPeng7/BiRefNet).
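
Background removers typically return an RGBA image with a transparent background; a binary mask in the format used here can then be obtained by thresholding the alpha channel. A minimal sketch (the helper name and threshold are illustrative):

```python
import cv2
import numpy as np

def rgba_to_binary_mask(rgba_path, out_path, alpha_threshold=10):
    """Convert a background-removed RGBA image into a binary (0/255) mask."""
    img = cv2.imread(rgba_path, cv2.IMREAD_UNCHANGED)
    if img is None or img.ndim != 3 or img.shape[2] != 4:
        raise ValueError("expected an RGBA image with an alpha channel")
    alpha = img[:, :, 3]
    mask = np.where(alpha > alpha_threshold, 255, 0).astype(np.uint8)
    cv2.imwrite(out_path, mask)
    return mask
```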






## Parameters for our Template_Matcher class
### Inputs
- ```angle_per_rotation```: the angular step used when precomputing rotations of each template (e.g. a value of 10 produces 36 preprocessed masks per template, one every 10 degrees). Set through class initialization or the ```reset_params()``` function
- templates: a list of binary template masks, provided through the ```get_templates``` function
- segmentation masks: a list of binary segmentation masks, provided through the ```get_masks``` function

### Outputs
- ```template_scores_for_segmentations```: an n x m score array for n segmentations and m templates
- ```matching_info```: matching information derived from the maximum score of each segmentation's m-dimensional score row, including the best-matched template index, the rotation with respect to the input template, and the translation and scale that map the template onto the original segmentation image (represented by the center position and radius of the minimum enclosing circle); see the usage sketch below
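
Putting the pieces together, a usage sketch might look like the following. The class and setter names come from this README; the import path, the constructor keyword, the file names, and the final matching call (```run()``` here) are assumptions, so check ```main.py``` for the actual entry point.

```python
import cv2
from uttm import Template_Matcher  # import path assumed; see main.py

# Load binary template and segmentation masks (file names are illustrative).
templates = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["templates/t0.png"]]
masks = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["segmentations/s0.png"]]

matcher = Template_Matcher(angle_per_rotation=10)  # 36 rotations per template
matcher.get_templates(templates)   # register template masks
matcher.get_masks(masks)           # register segmentation masks

# The name of the matching entry point is an assumption; main.py shows the
# real call. It is expected to yield the two outputs described above.
scores, matching_info = matcher.run()
print(scores.shape)  # (n_segmentations, n_templates)
```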


## Demo
An example is ```main.py```, which is a simple demonstration of how to use Template_Matcher.
The inputs are in ```./templates/``` and ```./segmentations/```.


            
