# MedVision-Classification
MedVision-Classification is a medical image classification framework built on PyTorch Lightning that provides simple interfaces for training and inference.
## Features

- High-level interface built on PyTorch Lightning
- Support for common medical imaging formats (NIfTI, DICOM, etc.)
- Built-in classification architectures (ResNet, DenseNet, EfficientNet, etc.)
- Flexible data loading and preprocessing pipeline
- Modular design that is easy to extend
- Command-line interface for training and inference
- Support for binary and multi-class classification tasks
## Installation

### Requirements

- Python 3.8+
- PyTorch 2.0+
- CUDA (optional, for GPU acceleration)

### Basic installation

The simplest way to install is from PyPI:

```bash
pip install medvision-classification
```
### Install from source

```bash
git clone https://github.com/Hi-Zhipeng/MedVision-classification.git
cd medvision-classification
pip install -e .
```

### Using requirements files

```bash
# Base environment
pip install -r requirements.txt

# Development environment
pip install -r requirements-dev.txt
```
### Using a conda environment

Creating an isolated conda environment is recommended:

```bash
# Create and activate the environment
conda env create -f environment.yml
conda activate medvision-cls

# Install the project itself
pip install -e .
```
## Quick Start

### Train a 2D model

```bash
medvision-cls train configs/train_config.yml
```

### Train a 3D model

```bash
medvision-cls train configs/train_3d_resnet_config.yml
```

### Test a model

```bash
medvision-cls test configs/test_config.yml
```

### Inference

```bash
medvision-cls predict configs/inference_config.yml --input /path/to/image --output /path/to/output
```
## Configuration Format

### 2D classification training config example
```yaml
# 2D ResNet Training Configuration
seed: 42

task_dim: 2d

# Model configuration
model:
  type: "classification"
  network:
    name: "resnet50"
    pretrained: true
    num_classes: 2

  # Metrics to compute
  metrics:
    accuracy:
      type: "accuracy"
    f1:
      type: "f1"
    precision:
      type: "precision"
    recall:
      type: "recall"
    auc:
      type: "auroc"

  # Loss configuration
  loss:
    type: "cross_entropy"
    weight: null
    label_smoothing: 0.0

  # Optimizer configuration
  optimizer:
    type: "adam"
    lr: 0.001
    weight_decay: 0.0001

  # Scheduler configuration
  scheduler:
    type: "cosine"
    T_max: 100
    eta_min: 0.00001

# Data configuration
data:
  type: "medical"
  batch_size: 4
  num_workers: 4
  data_dir: "data/classification"
  image_format: "*.png"

  # Transform configuration for 2D data
  transforms:
    image_size: [224, 224]
    normalize: true
    augment: true

  # Data split configuration
  train_val_split: [0.8, 0.2]
  seed: 42

# Training configuration
training:
  max_epochs: 10
  accelerator: "gpu"
  devices: [0, 1, 2, 3]  # Multi-GPU training
  precision: 16
  save_metrics: true

  # Callbacks
  model_checkpoint:
    monitor: "val/accuracy"
    mode: "max"
    save_top_k: 3
    filename: "epoch_{epoch:02d}-val_acc_{val/accuracy:.3f}"

# Validation configuration
validation:
  check_val_every_n_epoch: 1

# Class names
class_names:
  - "Class_0"
  - "Class_1"

# Output paths
outputs:
  output_dir: "outputs"

# Logging
logging:
  log_every_n_steps: 10
  wandb:
    enabled: false
    project: "medvision-2d-classification"
    entity: null
```
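The `train_val_split` and `seed` fields above describe a deterministic 80/20 split. As a minimal sketch of how such a seeded split can be realized with `torch.utils.data.random_split` (an assumption about the mechanism, not the framework's actual code):

```python
# Sketch (assumption): a seeded 80/20 train/val split like the config's
# `train_val_split: [0.8, 0.2]` and `seed: 42`, using torch.utils.data.
import torch
from torch.utils.data import TensorDataset, random_split

# Toy dataset standing in for the real image dataset
dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))

train_frac, val_frac = 0.8, 0.2
n_train = int(len(dataset) * train_frac)
n_val = len(dataset) - n_train

generator = torch.Generator().manual_seed(42)  # matches `seed: 42`
train_set, val_set = random_split(dataset, [n_train, n_val], generator=generator)

print(len(train_set), len(val_set))  # 80 20
```

Re-seeding the generator with the same value reproduces the exact same split, which is what makes the configured seed useful for comparable runs.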
### 3D classification training config example
```yaml
# 3D ResNet Training Configuration
seed: 42

task_dim: 3D

# Model configuration
model:
  type: "classification"
  network:
    name: "resnet3d_18"  # Options: resnet3d_18, resnet3d_34, resnet3d_50
    pretrained: false    # No pretrained weights for 3D models
    in_channels: 3       # Input channels (set to 1 for single-channel medical volumes)
    dropout: 0.1
    num_classes: 2

  # Metrics to compute
  metrics:
    accuracy:
      type: "accuracy"
    f1:
      type: "f1"
    precision:
      type: "precision"
    recall:
      type: "recall"
    auc:
      type: "auroc"

  # Loss configuration
  loss:
    type: "cross_entropy"
    weight: null
    label_smoothing: 0.0

  # Optimizer configuration
  optimizer:
    type: "adam"
    lr: 0.001
    weight_decay: 0.0001

  # Scheduler configuration
  scheduler:
    type: "cosine"
    T_max: 100
    eta_min: 0.00001

# Data configuration
data:
  type: "medical"
  batch_size: 4             # Smaller batch size for 3D data
  num_workers: 4
  data_dir: "data/3D"
  image_format: "*.nii.gz"  # 3D medical image format

  # Transform configuration for 3D data
  transforms:
    image_size: [64, 64, 64]  # [D, H, W] for 3D volumes
    normalize: true
    augment: true

  # Data split configuration
  train_val_split: [0.8, 0.2]
  seed: 42

# Training configuration
training:
  max_epochs: 5
  accelerator: "gpu"
  devices: 1     # Single GPU for 3D (memory intensive)
  precision: 16  # Use mixed precision to save memory

  # Callbacks
  early_stopping:
    monitor: "val/loss"
    patience: 10
    mode: "min"

  model_checkpoint:
    monitor: "val/accuracy"
    mode: "max"
    save_top_k: 3
    filename: "epoch_{epoch:02d}-val_acc_{val/accuracy:.3f}"

# Validation configuration
validation:
  check_val_every_n_epoch: 1

# Output paths
outputs:
  output_dir: "outputs"

# Logging
logging:
  log_every_n_steps: 10
  wandb:
    enabled: false
    project: "medvision-3d-classification"
    entity: null
```
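The 3D transform settings (`image_size: [64, 64, 64]`, `normalize: true`) amount to resampling each volume to a fixed shape and rescaling its intensities. A minimal sketch in plain PyTorch, assuming trilinear resampling and min-max normalization (the framework's actual transforms may differ):

```python
# Sketch (assumption): resampling a 3D volume to the configured
# `image_size: [64, 64, 64]` and applying min-max intensity normalization.
import torch
import torch.nn.functional as F

# (N, C, D, H, W) — e.g. one single-channel MRI volume in MNI-like dimensions
volume = torch.rand(1, 1, 91, 109, 91)

# Trilinear interpolation to the target [D, H, W]
resized = F.interpolate(volume, size=(64, 64, 64),
                        mode="trilinear", align_corners=False)

# Min-max normalization to [0, 1]; epsilon guards against constant volumes
normalized = (resized - resized.min()) / (resized.max() - resized.min() + 1e-8)

print(tuple(normalized.shape))  # (1, 1, 64, 64, 64)
```

With volumes this size, the smaller `batch_size` and mixed precision in the config are what keep a single GPU within memory.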
### Inference config example
```yaml
# Model configuration
model:
  type: "classification"
  network:
    name: "resnet50"
    pretrained: false
    num_classes: 2
  checkpoint_path: "outputs/checkpoints/best_model.ckpt"

# Inference settings
inference:
  batch_size: 1
  device: "cuda:0"  # or "cpu"
  return_probabilities: true
  class_names: ["class0", "class1"]
  confidence_threshold: 0.5

# Preprocessing
preprocessing:
  image_size: [224, 224]
  normalize: true
  mean: [0.485, 0.456, 0.406]
  std: [0.229, 0.224, 0.225]

# Output settings
output:
  save_predictions: true
  include_probabilities: true
  format: "json"  # or "csv"
```
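The preprocessing block maps directly onto standard torchvision transforms (the mean/std values are the usual ImageNet statistics). A hedged sketch of the inference path, using a randomly initialized `resnet50` in place of the real checkpoint (checkpoint loading and the `inference` device settings are omitted):

```python
# Sketch (assumption): the preprocessing section above expressed with
# torchvision transforms, followed by a forward pass and softmax
# (`return_probabilities: true`). Checkpoint loading is omitted.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # image_size: [224, 224]
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # mean / std from the config
                         std=[0.229, 0.224, 0.225]),
])

# Randomly initialized stand-in; a real run would load `checkpoint_path`
model = models.resnet50(weights=None, num_classes=2)
model.eval()

image = Image.new("RGB", (256, 256))    # stand-in for a real input image
batch = preprocess(image).unsqueeze(0)  # (1, 3, 224, 224), batch_size: 1

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print(tuple(probs.shape))  # (1, 2) — one probability per class name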
## Data Format

### Folder structure
```
data/
└── classification/
    ├── train/
    │   ├── class1/
    │   │   ├── image1.png
    │   │   └── image2.png
    │   └── class2/
    │       ├── image3.png
    │       └── image4.png
    ├── val/
    │   ├── class1/
    │   └── class2/
    └── test/
        ├── class1/
        └── class2/
```
## Supported Models

- **ResNet family**: ResNet18, ResNet34, ResNet50, ResNet101, ResNet152
- **DenseNet family**: DenseNet121, DenseNet161, DenseNet169, DenseNet201
- **EfficientNet family**: EfficientNet-B0 through EfficientNet-B7
- **Vision Transformer**: ViT-Base, ViT-Large
- **ConvNeXt**: ConvNeXt-Tiny, ConvNeXt-Small, ConvNeXt-Base
- **Medical-specific**: MedNet, RadImageNet pretrained models
## License

This project is released under the MIT License.

## Contributing

Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for details.