# sit4onnxw

Simple Inference Test for ONNX Runtime Web

https://github.com/PINTO0309/simple-onnx-processing-tools

[![Downloads](https://static.pepy.tech/personalized-badge/sit4onnxw?period=total&units=none&left_color=grey&right_color=brightgreen&left_text=Downloads)](https://pepy.tech/project/sit4onnxw) ![GitHub](https://img.shields.io/github/license/PINTO0309/sit4onnxw?color=2BAF2B) [![PyPI](https://img.shields.io/pypi/v/sit4onnxw?color=2BAF2B)](https://pypi.org/project/sit4onnxw/) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/PINTO0309/sit4onnxw)

## Overview

sit4onnxw is a comprehensive Python tool for benchmarking ONNX models using ONNX Runtime Web with support for CPU, WebGL, and WebGPU execution providers. This tool is inspired by [sit4onnx](https://github.com/PINTO0309/sit4onnx) but specifically designed to work with onnxruntime-web through browser automation.

**Key Differentiators:**
- **100% sit4onnx Compatible**: Same CLI interface and parameter specifications
- **Multi-Input Model Support**: Full support for complex models with multiple inputs/outputs
- **Dynamic Tensor Intelligence**: Automatic shape inference for dynamic dimensions
- **Robust Error Handling**: Categorized errors with actionable solutions
- **Browser Automation**: Leverages Selenium for reliable ONNX Runtime Web execution

## Features

### Core Functionality
- **Multiple Execution Providers**: CPU, WebGL, and WebGPU support with automatic fallback
- **Multi-Input Model Support**: Full support for models with multiple inputs and outputs
- **Dynamic Tensor Support**: Intelligent handling of dynamic dimensions with smart defaults
- **Model Format Support**: Both .onnx and .ort model formats

### Advanced Features
- **sit4onnx Compatible Interface**: Same parameter specification as original sit4onnx
- **Automatic Fallback**: WebGL/WebGPU failures automatically fall back to CPU
- **Smart Error Handling**: Categorized error messages with user-friendly suggestions
- **Performance Benchmarking**: Configurable test loops with detailed timing analysis
- **External Input Support**: Use your own numpy arrays as model inputs

### Usability
- **Flexible Shape Specification**: Multiple ways to specify input shapes
- **Comprehensive CLI**: Full command-line interface with all sit4onnx options
- **Python API**: Direct programmatic access for integration
- **Debug Mode**: Browser debugging support for troubleshooting

## Installation

```bash
pip install sit4onnxw
```

## Usage

### 1. Quick Start Examples

```bash
# Basic inference (CPU)
sit4onnxw --input_onnx_file_path model.onnx

# WebGL with automatic CPU fallback
sit4onnxw --input_onnx_file_path model.onnx --execution_provider webgl --fallback_to_cpu

# Dynamic tensor model with custom batch size
sit4onnxw --input_onnx_file_path dynamic_model.onnx --batch_size 4

# Multi-input model with fixed shapes (sit4onnx style)
sit4onnxw \
--input_onnx_file_path multi_input_model.onnx \
--fixed_shapes 1 64 112 200 \
--fixed_shapes 1 3 112 200 \
--execution_provider cpu

# Using external numpy input files (see the numpy snippet after this block)
sit4onnxw \
--input_onnx_file_path multi_input_model.onnx \
--input_numpy_file_paths input1.npy \
--input_numpy_file_paths input2.npy \
--execution_provider cpu

# Performance benchmarking
sit4onnxw \
--input_onnx_file_path model.onnx \
--test_loop_count 100 \
--enable_profiling \
--output_numpy_file
```
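
The external-input example above expects `.npy` files to already exist on disk. A minimal way to create them with plain numpy (the shapes and dtype here are illustrative; match them to what your model actually expects):

```python
import numpy as np

# Illustrative shapes/dtype; substitute your model's real input shapes.
np.save("input1.npy", np.random.rand(1, 64, 112, 200).astype(np.float32))
np.save("input2.npy", np.random.rand(1, 3, 112, 200).astype(np.float32))
```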

### 2. CLI Usage

```bash
usage:
sit4onnxw [-h]
  --input_onnx_file_path INPUT_ONNX_FILE_PATH
  [--batch_size BATCH_SIZE]
  [--fixed_shapes FIXED_SHAPES]
  [--test_loop_count TEST_LOOP_COUNT]
  [--execution_provider {cpu,webgl,webgpu}]
  [--enable_profiling]
  [--output_numpy_file]
  [--numpy_seed NUMPY_SEED]
  [--input_numpy_file_paths INPUT_NUMPY_FILE_PATHS]
  [--debug]
  [--fallback_to_cpu]
  [--input_names INPUT_NAMES]
  [--output_names OUTPUT_NAMES]
  [--ort_model_path ORT_MODEL_PATH]
  [--headless/--no-headless]
  [--timeout TIMEOUT]

optional arguments:
  -h, --help
    show this help message and exit
  --input_onnx_file_path INPUT_ONNX_FILE_PATH
    Path to ONNX model file
  --batch_size BATCH_SIZE
    Batch size for inference (default: 1)
  --fixed_shapes FIXED_SHAPES
    Overrides input OPs that have undefined shapes with the specified shape. Can be specified multiple times, once per input OP.
  --test_loop_count TEST_LOOP_COUNT
    Number of times to run the test. The total execution time is divided by the number of runs to report the average inference time. (default: 10)
  --execution_provider {cpu,webgl,webgpu}
    ONNX Runtime Web Execution Provider. (default: cpu)
  --enable_profiling
    Outputs performance profiling results to a .json file.
  --output_numpy_file
    Outputs the last inference result to an .npy file.
  --numpy_seed NUMPY_SEED
    Random seed for input data generation.
  --input_numpy_file_paths INPUT_NUMPY_FILE_PATHS
    Use external numpy.ndarray files as test input data. Can be specified multiple times.
  --debug
    Enable debug mode (keep browser open on error).
  --fallback_to_cpu
    Automatically fall back to CPU if other execution providers fail.
  --input_names INPUT_NAMES
    Input tensor names (comma-separated)
  --output_names OUTPUT_NAMES
    Output tensor names (comma-separated)
  --ort_model_path ORT_MODEL_PATH
    Path to ORT format model file
  --headless/--no-headless
    Run browser in headless mode (default: True)
  --timeout TIMEOUT
    Browser timeout in seconds (default: 60)
```

### 3. In-script Usage

```python
from sit4onnxw import inference

# Single input model
results = inference(
    input_onnx_file_path="mobilenetv2-12.onnx",
    execution_provider="webgl",
    test_loop_count=10
)

# Multi-input model with list of shapes (sit4onnx style)
results = inference(
    input_onnx_file_path="multi_input_model.onnx",
    fixed_shapes=[[1, 64, 112, 200], [1, 3, 112, 200]],
    execution_provider="cpu",
    test_loop_count=10
)

# Multi-input model with external numpy files
results = inference(
    input_onnx_file_path="multi_input_model.onnx",
    input_numpy_file_paths=["input1.npy", "input2.npy"],
    execution_provider="cpu",
    test_loop_count=10
)
```
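
The output examples below print one array per model output, which suggests `inference` returns the outputs as numpy arrays. A sketch of inspecting and saving them under that assumption (the exact return type is not documented here):

```python
import numpy as np
from sit4onnxw import inference

results = inference(
    input_onnx_file_path="model.onnx",
    execution_provider="cpu",
)

# Assumption: `results` is a sequence of numpy arrays, one per model output.
for i, output in enumerate(results):
    print(f"Output {i}: shape={output.shape}, dtype={output.dtype}")
    np.save(f"output_{i}.npy", output)  # persist for offline comparison
```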

## Model Support

### Input Types
- **Fixed Shape Models**: Models with static input dimensions
- **Dynamic Shape Models**: Models with dynamic dimensions (batch_size, seq_len, etc.)
- **Multi-Input Models**: Models with multiple input tensors
- **Mixed Models**: Combination of fixed and dynamic inputs

### Automatic Shape Inference
sit4onnxw automatically infers appropriate shapes for dynamic tensors (a sketch of this heuristic appears after the list):
- `batch_size`, `N`, `batch` → Uses the `--batch_size` parameter (default: 1)
- `seq`, `sequence`, `seq_len`, `time` → Default: 90
- `features`, `hidden`, `embed`, `channels` → Default: 105
- Unknown dimensions by position → [batch_size, 90, 105, 1, ...]
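
A hypothetical reimplementation of that mapping, handy for predicting what shape sit4onnxw will generate for a given set of symbolic dimensions (the name lists mirror the bullets above; the tool's actual internals may differ):

```python
def infer_dynamic_shape(dim_params, batch_size=1):
    """Guess concrete sizes for symbolic dims, mirroring the heuristic above.

    dim_params: list of dimension names (str) or already-fixed sizes (int).
    """
    positional_defaults = [batch_size, 90, 105, 1]  # fallback by position
    shape = []
    for pos, dim in enumerate(dim_params):
        if isinstance(dim, int):  # dimension is already concrete
            shape.append(dim)
        elif dim in ("batch_size", "N", "batch"):
            shape.append(batch_size)
        elif dim in ("seq", "sequence", "seq_len", "time"):
            shape.append(90)
        elif dim in ("features", "hidden", "embed", "channels"):
            shape.append(105)
        else:  # unknown symbolic name: fall back to the positional default
            shape.append(positional_defaults[min(pos, len(positional_defaults) - 1)])
    return shape

print(infer_dynamic_shape(["batch_size", "seq", "features"]))  # -> [1, 90, 105]
```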

### Tested Model Examples
- **Single Input Fixed**: [1, 90, 105] → Runs as-is, no shape overrides needed
- **Single Input Dynamic**: [batch_size, seq, features] → Auto-inferred to [1, 90, 105]
- **Multi-Input Fixed**: [1, 64, 112, 200] + [1, 3, 112, 200] → 4 outputs
- **WebGL Large Models**: Works but may be slow (27+ seconds for large models)

## Performance Characteristics

| Execution Provider | Speed | Compatibility | Best Use Case |
|-------------------|-------|---------------|---------------|
| **CPU** | Fast (1-3s) | Excellent | Default choice, most reliable |
| **WebGL** | Variable | Good | When WebGL acceleration needed |
| **WebGPU** | Variable | Limited | Experimental, latest browsers |

## Error Handling

sit4onnxw provides intelligent error categorization:

- **Model Format**: Incompatible model format for execution provider
- **Input Tensor**: Shape/dimension mismatches with detailed error info
- **WebGL/WebGPU**: Provider-specific errors with fallback suggestions
- **WebAssembly**: WASM loading failures with troubleshooting hints

Use `--fallback_to_cpu` for automatic recovery from execution provider failures.
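
With the Python API the same recovery can be scripted by hand. A sketch that assumes `inference` raises an exception when a provider fails, which is what the CLI flag handles for you:

```python
from sit4onnxw import inference

# Try providers from most accelerated to most reliable; stop at the first success.
# Assumption: inference() raises on provider failure rather than returning None.
results = None
for provider in ("webgpu", "webgl", "cpu"):
    try:
        results = inference(
            input_onnx_file_path="model.onnx",
            execution_provider=provider,
        )
        print(f"Inference succeeded with {provider}")
        break
    except Exception as err:
        print(f"{provider} failed: {err}")
```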

## Troubleshooting

### Common Issues
1. **WebGL Errors**: Use `--fallback_to_cpu` or switch to `--execution_provider cpu`
2. **Shape Mismatches**: Check model requirements with `--debug` mode
3. **Large Models**: Consider using CPU provider for better reliability
4. **Browser Issues**: Ensure Chrome/Chromium is installed and up-to-date

### Debug Mode
```bash
sit4onnxw --input_onnx_file_path model.onnx --debug
```
Keeps browser open for manual inspection when errors occur.

## Output Examples

### 1. Single Input Model (CPU)
```bash
$ sit4onnxw \
--input_onnx_file_path model_optimized_dynamic.onnx \
--execution_provider cpu \
--test_loop_count 5

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_optimized_dynamic.onnx
Execution Provider: cpu
Batch Size: 1
Test Loop Count: 5
--------------------------------------------------
Model has 1 input(s):
  Input 0: input - shape: [batch_size, seq, features], type: 1
Input 'input': shape = [1, 90, 105], dtype = <class 'numpy.float32'>
Generated data shape: (1, 90, 105), size: 9450
Converting 'input': original shape=(1, 90, 105), size=9450
  Converted to list: length=1, type=<class 'list'>
Execution Provider: cpu
Test Loop Count: 5
Average Inference Time: 4.840 ms
Min Inference Time: 4.400 ms
Max Inference Time: 5.200 ms
--------------------------------------------------
Inference completed successfully!
Number of outputs: 1
Output 0: shape=(1, 2), dtype=float64
```

### 2. Multi-Input Model with Fixed Shapes
```bash
$ sit4onnxw \
--input_onnx_file_path model_multi_input_fix.onnx \
--fixed_shapes 1 64 112 200 \
--fixed_shapes 1 3 112 200 \
--test_loop_count 3

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_multi_input_fix.onnx
Execution Provider: cpu
Batch Size: 1
Test Loop Count: 3
Fixed Shapes: [[1, 64, 112, 200], [1, 3, 112, 200]]
--------------------------------------------------
Model has 2 input(s):
  Input 0: feat - shape: [1, 64, 112, 200], type: 1
  Input 1: pc_dep - shape: [1, 3, 112, 200], type: 1
Applied fixed shape for input 0: [1, 64, 112, 200]
Input 'feat': shape = [1, 64, 112, 200], dtype = <class 'numpy.float32'>
Generated data shape: (1, 64, 112, 200), size: 1433600
Applied fixed shape for input 1: [1, 3, 112, 200]
Input 'pc_dep': shape = [1, 3, 112, 200], dtype = <class 'numpy.float32'>
Generated data shape: (1, 3, 112, 200), size: 67200
Execution Provider: cpu
Test Loop Count: 3
Average Inference Time: 1578.500 ms
Min Inference Time: 1568.000 ms
Max Inference Time: 1589.000 ms
--------------------------------------------------
Inference completed successfully!
Number of outputs: 4
Output 0: shape=(1, 3, 112, 200), dtype=float64
Output 1: shape=(1, 8, 112, 200), dtype=float64
Output 2: shape=(1, 1, 112, 200), dtype=float64
Output 3: shape=(1, 8, 112, 200), dtype=float64
```

### 3. WebGL with CPU Fallback
```bash
$ sit4onnxw \
--input_onnx_file_path model_optimized_dynamic.onnx \
--execution_provider webgl \
--fallback_to_cpu \
--test_loop_count 3

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_optimized_dynamic.onnx
Execution Provider: webgl
Batch Size: 1
Test Loop Count: 3
--------------------------------------------------
Warning: webgl execution provider failed: Browser error: Error (Model Format): Model format incompatible with selected execution provider. Try CPU provider.
Falling back to cpu execution provider...
Model has 1 input(s):
  Input 0: input - shape: [batch_size, seq, features], type: 1
Input 'input': shape = [1, 90, 105], dtype = <class 'numpy.float32'>
Generated data shape: (1, 90, 105), size: 9450
Execution Provider: cpu
Test Loop Count: 3
Average Inference Time: 4.767 ms
Min Inference Time: 4.400 ms
Max Inference Time: 5.100 ms
--------------------------------------------------
Inference completed successfully!
Number of outputs: 1
Output 0: shape=(1, 2), dtype=float64
```

### 4. Error Case - Shape Mismatch
```bash
$ sit4onnxw \
--input_onnx_file_path model_multi_input_fix.onnx \
--fixed_shapes 1 32 112 200 \
--test_loop_count 1

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_multi_input_fix.onnx
Execution Provider: cpu
Batch Size: 1
Test Loop Count: 1
Fixed Shapes: [[1, 32, 112, 200]]
--------------------------------------------------
Model has 2 input(s):
  Input 0: feat - shape: [1, 64, 112, 200], type: 1
  Input 1: pc_dep - shape: [1, 3, 112, 200], type: 1
Applied fixed shape for input 0: [1, 32, 112, 200]
Warning: Single fixed shape provided but this is input 1. Using defaults.
Error: Browser error: Error (Unknown): failed to call OrtRun(). ERROR_CODE: 2, ERROR_MESSAGE: Got invalid dimensions for input: feat for the following indices index: 1 Got: 32 Expected: 64 Please fix either the inputs/outputs or the model.
```

### 5. Dynamic Tensor with Batch Size
```bash
$ sit4onnxw \
--input_onnx_file_path model_optimized_dynamic.onnx \
--batch_size 4 \
--test_loop_count 2

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_optimized_dynamic.onnx
Execution Provider: cpu
Batch Size: 4
Test Loop Count: 2
--------------------------------------------------
Model has 1 input(s):
  Input 0: input - shape: [batch_size, seq, features], type: 1
Input 'input': shape = [4, 90, 105], dtype = <class 'numpy.float32'>
Generated data shape: (4, 90, 105), size: 37800
Execution Provider: cpu
Test Loop Count: 2
Average Inference Time: 16.967 ms
Min Inference Time: 16.900 ms
Max Inference Time: 17.000 ms
--------------------------------------------------
Inference completed successfully!
Number of outputs: 1
Output 0: shape=(4, 2), dtype=float64
```

### 6. Using External Numpy Files
```bash
$ sit4onnxw \
--input_onnx_file_path model_multi_input_fix.onnx \
--input_numpy_file_paths test_feat.npy \
--input_numpy_file_paths test_pc_dep.npy \
--test_loop_count 1

sit4onnxw - Simple Inference Test for ONNX Runtime Web
Model: model_multi_input_fix.onnx
Execution Provider: cpu
Batch Size: 1
Test Loop Count: 1
--------------------------------------------------
Model has 2 input(s):
  Input 0: feat - shape: [1, 64, 112, 200], type: 1
  Input 1: pc_dep - shape: [1, 3, 112, 200], type: 1
Converting 'feat': original shape=(1, 64, 112, 200), size=1433600
Converting 'pc_dep': original shape=(1, 3, 112, 200), size=67200
Execution Provider: cpu
Test Loop Count: 1
Average Inference Time: 1561.200 ms
Min Inference Time: 1561.200 ms
Max Inference Time: 1561.200 ms
--------------------------------------------------
Inference completed successfully!
Number of outputs: 4
Output 0: shape=(1, 3, 112, 200), dtype=float64
Output 1: shape=(1, 8, 112, 200), dtype=float64
Output 2: shape=(1, 1, 112, 200), dtype=float64
Output 3: shape=(1, 8, 112, 200), dtype=float64
```

## Requirements

- Python 3.10+
- Chrome or Chromium browser
- WebDriver compatible browser setup
- Chrome WebDriver (automatically managed via webdriver-manager)

## License

MIT License