autocare-dlt

Name: autocare-dlt
Version: 0.2.6
Home page: https://github.com/snuailab/autocare_dlt
Summary: Autocare Tx Model
Upload time: 2023-10-25 07:36:13
Author: SNUAILAB
Requires Python: >=3.9
License: GPL-3.0
Keywords: machine-learning, deep-learning, vision, ml, dl, ai, yolo, snuailab
# Autocare DLT

Autocare DeepLearning Toolkit is a PyTorch-based deep learning toolkit from SNUAILAB that supports model development and training for Autocare T.
## Updates
- v0.2
    - Added HPO
    - Multi-GPU support
    - COCO input support in inference and data_selection

## Installation

### Prerequisite

- Python >= 3.9
- CUDA == 11.3
- pytorch >= 1.12.1 ([link](https://pytorch.org/get-started/locally/))
    - torchvision >= 0.13.1

### Install
- tx_model supports two installation methods: cloning the repository and using it via the CLI, or installing it as a Python package (*.whl file).

#### git clone
```bash
git clone git@github.com:snuailab/autocare_dlt.git
cd autocare_dlt
pip install -r requirements.txt
```
#### Package install
```bash
pip install autocare_dlt
```

## Usage

### Preparing the model config

- Basic templates are in ./models
- Modify the config values to match the model you want to use
    - Hyper-parameters vary by module, so it is recommended to consult the module's code when editing them

### Preparing the data config

- Basic templates are in ./datasets
- Modify the config values to match the dataset you want to use
    - workers_per_gpu (int): number of dataloader workers
    - batch_size_per_gpu (int): batch size per GPU
    - img_size (int): image size of the model (img_size, img_size) → to be updated later
    - train, val, test (dict): config for each dataset
        - type: dataset type
        - data_root: root path of the data
        - ann: path of the annotation file
        - augmentation: data augmentation settings
            - CV2 modules are applied first, then pytorch (torchvision) modules
            - applied in top-down order

### Supported package tools
- Import these tools to use them from Python, or run them from the CLI
> **autocare_dlt.tools.train.run**(*exp_name: str*, *model_cfg: str*, *data_cfg: str*, *gpus: str = '0'*, *ckpt: ~typing.Union[str*, *dict] = None*, *world_size: int = 1*, *output_dir: str = 'outputs'*, *resume: bool = False*, *fp16: bool = False*, *ema: bool = False*)**→ None**

Run training

**Parameters**

- **exp_name** (*str*) – experiment name. a folder with this name will be created in the `output_dir`, and the log files will be saved there.

- **model_cfg** (*str*) – path for model configuration file

- **data_cfg** (*str*) – path for dataset configuration file

- **gpus** (*str, optional*) – GPU IDs to use. Defaults to ‘0’.

- **ckpt** (*str, optional*) – path for checkpoint file. Defaults to None.

- **world_size** (*int, optional*) – world size for ddp. Defaults to 1.

- **output_dir** (*str, optional*) – log output directory. Defaults to ‘outputs’.

- **resume** (*bool, optional*) – whether to resume the previous training or not. Defaults to False.

- **fp16** (*bool, optional*) – whether to use float point 16 or not. Defaults to False.

- **ema** (*bool, optional*) – whether to use EMA(exponential moving average) or not. Defaults to False.
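The tools can also be called programmatically. Below is a minimal sketch of such a call; the config paths and experiment name are placeholders, and the import is guarded because the sketch assumes the package has been installed with `pip install autocare_dlt`.

```python
import importlib.util

# Guarded sketch: only invoke the tool if autocare_dlt is actually installed.
installed = importlib.util.find_spec("autocare_dlt") is not None

if installed:
    from autocare_dlt.tools import train

    train.run(
        exp_name="demo_exp",               # logs land in outputs/demo_exp
        model_cfg="models/my_model.json",  # placeholder path
        data_cfg="datasets/my_data.json",  # placeholder path
        gpus="0",
    )
else:
    print("autocare_dlt is not installed; run `pip install autocare_dlt` first")
```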

> **autocare_dlt.tools.inference.run**(*inputs: str*, *model_cfg: str*, *output_dir: str*, *gpus: str*, *ckpt: Union[str, dict]*, *input_size: list = None*, *letter_box: bool = None*, *vis: bool = False*, *save_imgs: bool = False*, *root_dir: str = ''*)**→ None**

Run inference

**Parameters**

- **inputs** (*str*) – path for input - image, directory, or json

- **model_cfg** (*str*) – path for model configuration file

- **output_dir** (*str*) – path for inference results

- **gpus** (*str*) – GPU IDs to use

- **ckpt** (*Union[str, dict]*) – path for checkpoint file or state dict

- **input_size** (*list, optional*) – input size of model inference. Defaults to [640].

- **letter_box** (*bool, optional*) – whether to use letter box or not. Defaults to False.

- **vis** (*bool, optional*) – whether to visualize inference in realtime or not. Defaults to False.

- **save_imgs** (*bool, optional*) – whether to draw and save inference results as images or not. Defaults to False.

- **root_dir** (*str, optional*) – path for input image when using json input. Defaults to “”.

> **autocare_dlt.tools.eval.run**(*model_cfg: str*, *data_cfg: str*, *gpus: str*, *ckpt: Union[str, dict]*)**→ None**

Evaluate a model

**Parameters**

- **model_cfg** (*str*) – path for model configuration file

- **data_cfg** (*str*) – path for dataset configuration file

- **gpus** (*str*) – GPU IDs to use

- **ckpt** (*Union[str, dict]*) – path for checkpoint file or state dict

> **autocare_dlt.tools.export_onnx.run**(*output_name: str*, *model_cfg: str*, *ckpt: Union[str, dict]*, *input_size: list = None*, *opset: int = 11*, *no_onnxsim: bool = False*)**→ None**

Export onnx file

**Parameters**

- **output_name** (*str*) – file name for onnx output (.onnx)

- **model_cfg** (*str*) – path for model configuration file

- **ckpt** (*Union[str, dict]*) – path for checkpoint file or state dict

- **input_size** (*list, optional*) – input size of the model. Uses the model config value if input_size is None. Defaults to None.

- **opset** (*int, optional*) – onnx opset version. Defaults to 11.

- **no_onnxsim** (*bool, optional*) – whether to use onnxsim or not. Defaults to False.

> **autocare_dlt.tools.data_selection.run**(*model_cfg: str*, *ckpt: Union[str, dict]*, *inputs: str*, *num_outputs: int*, *output_dir: str*, *gpus: str*, *input_size: list = None*, *letter_box: bool = None*, *copy_img: bool = False*, *root_dir: str = ''*)**→ None**

Select active learning data

**Parameters**

- **model_cfg** (*str*) – path for model configuration file

- **ckpt** (*Union[str, dict]*) – path for checkpoint file or state dict

- **inputs** (*str*) – path for input - image, directory, or json

- **num_outputs** (*int*) – number of images to select

- **output_dir** (*str*) – path for output result

- **gpus** (*str*) – GPU IDs to use

- **input_size** (*list, optional*) – input size of model inference. Defaults to [640].

- **letter_box** (*bool, optional*) – whether to use letter box or not. Defaults to False.

- **copy_img** (*bool, optional*) – whether to copy images to output. Defaults to False.

- **root_dir** (*str, optional*) – path for input image when using json input. Defaults to “”.

> **autocare_dlt.tools.hpo.run**(*exp_name: str*, *model_cfg: str*, *data_cfg: str*, *hpo_cfg: str = None*, *gpus: str = '0'*, *ckpt: ~typing.Union[str*, *dict] = None*, *world_size: int = 1*, *output_dir: str = 'outputs'*, *resume: bool = False*, *fp16: bool = False*, *ema: bool = False*)**→ None**

Run Hyperparameter Optimization

**Parameters**

- **exp_name** (*str*) – experiment name. a folder with this name will be created in the `output_dir`, and the log files will be saved there.

- **model_cfg** (*str*) – path for model configuration file

- **data_cfg** (*str*) – path for dataset configuration file

- **hpo_cfg** (*str, optional*) – path for hpo configuration file. Defaults to None.

- **gpus** (*str, optional*) – GPU IDs to use. Defaults to ‘0’.

- **ckpt** (*str, optional*) – path for checkpoint file. Defaults to None.

- **world_size** (*int, optional*) – world size for ddp. Defaults to 1.

- **output_dir** (*str, optional*) – log output directory. Defaults to ‘outputs’.

- **resume** (*bool, optional*) – whether to resume the previous training or not. Defaults to False.

- **fp16** (*bool, optional*) – whether to use float point 16 or not. Defaults to False.

- **ema** (*bool, optional*) – whether to use EMA(exponential moving average) or not. Defaults to False.

### CLI command examples
Supervised Learning
```bash
python autocare_dlt/tools/train.py --exp_name {your_exp} --model_cfg {path}/{model}.json --data_cfg {path}/{data}.json --ckpt {path}/{ckpt}.pth --gpus {gpu #}
```

### Distributed training (Multi-GPU training)
Multi-GPU training must be launched with 'torchrun' instead of 'python'.
```bash
torchrun autocare_dlt/tools/train.py --exp_name {your_exp} --model_cfg {path}/{model}.json --data_cfg {path}/{data}.json --ckpt {path}/{ckpt}.pth --gpus {gpu #,#,...} --multi_gpu True
```
[Recommended] To run multiple Multi-GPU trainings on the same server, use the command below.
```bash
torchrun --rdzv_backend=c10d --rdzv_endpoint=localhost:0 --nnodes=1 autocare_dlt/tools/train.py --exp_name {your_exp} --model_cfg {path}/{model}.json --data_cfg {path}/{data}.json --ckpt {path}/{ckpt}.pth --gpus {gpu #,#,...}
```
- Training results are saved under outputs/{your_exp}
### run evaluation

```bash
python autocare_dlt/tools/eval.py --model_cfg {path}/{model}.json --data_cfg {path}/{data}.json --ckpt {path}/{ckpt}.pth --gpus 0
```

### export onnx

```bash
python autocare_dlt/tools/export_onnx.py --output_name {path}/{model_name}.onnx --model_cfg {path}/{model}.json --batch_size 1 --ckpt {path}/{ckpt}.pth
```

### run inference
- OCR-related
	- Prerequisite: a Korean font file (e.g. NanumPen.ttf)

```bash
python tools/inference.py --inputs {path}/{input_dir, img, video, coco json} --model_cfg {path}/{model}.json --output_dir {path}/{output dir name} --ckpt {path}/{model_name}.pth --input_size {width} {height} --gpus {gpu_id} (optional) --root_dir {root path of coco}
```

### run data selection
```bash
python tools/data_selection.py --inputs {path}/{input_dir, cocojson} --model_cfg {path}/{model}.json --output_dir {path}/{output dir name} --ckpt {path}/{model_name}.pth --num_outputs {int} --input_size {width} {height} --letter_box {bool} --gpus {gpu_id} (optional) --root_dir {root path of coco}
```

# References
This code is based on and inspired by the following repositories (TBD):
- [YOLOv5](https://github.com/ultralytics/yolov5)
- [Detectron2](https://github.com/facebookresearch/detectron2)
- [MMCV](https://github.com/open-mmlab/mmcv)
- [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/main)

            
