runlocal-hub 0.1.4

Summary: Python client for benchmarking and validating ML models on real devices via RunLocal API
Uploaded: 2025-07-28 19:06:22
Requires Python: >=3.8
Keywords: machine-learning, models, benchmarking, coreml, onnx, tflite, openvino, ml-ops, device-testing, model-optimization, inference, neural-networks
Requirements: pydantic, requests, rich, numpy, tqdm
            <h1 align="center">
    <a href="https://runlocal.ai">
        <picture>
            <source media="(prefers-color-scheme: dark)" srcset="./assets/logo_dark_mode.svg">
            <source media="(prefers-color-scheme: light)" srcset="./assets/logo_light_mode.svg">
            <img alt="runlocal_hub Logo" src="./assets/logo_dark_mode.svg" height="42" style="max-width: 100%;">
        </picture>
    </a>
</h1>

<p align="center">
    Python client for benchmarking and validating ML models on real devices via RunLocal API.
</p>

<p align="center">
    <a href="https://pypi.org/project/runlocal-hub/"><img src="https://img.shields.io/pypi/v/runlocal_hub?label=PyPI%20version" alt="PyPI version"></a>
    <a href="https://pypi.org/project/runlocal-hub/"><img src="https://img.shields.io/pypi/pyversions/runlocal-hub.svg" alt="Python Versions"></a>
    <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a>
</p>

<br/>

<div align="center">
  <img src="./assets/benchmark.gif" alt="RunLocal Benchmark Demo" width="800">
</div>

## 🎯 Key Features

- **⚡ Real Hardware Testing** - No simulators or emulators. Test on real devices maintained in our devices lab
- **🌍 Cross-Platform Coverage** - Access MacBooks, iPhones, iPads, Android, and Windows devices from a single API
- **🔧 Multiple ML Formats** - Support for CoreML, ONNX, OpenVINO, TensorFlow Lite, and GGUF models. More frameworks coming soon.
- **📊 Detailed Metrics** - Measure inference time, memory usage, and per-layer performance data
- **🚦 CI/CD Ready** - Integrate performance and accuracy testing into your deployment pipeline

## 🔍 Evaluate Results

All benchmarks run through the Python client can be reviewed on the web platform by logging into your account.
Check out our [public demo](https://edgemeter.runlocal.ai/public/pipelines) for comprehensive benchmark evaluation across different devices and model formats.

## 🛠 Installation

```bash
pip install runlocal-hub
```

### Development Installation

For development or to install from source:

```bash
git clone https://github.com/neuralize-ai/runlocal_hub.git
cd runlocal_hub
pip install -e .
```

## 🔑 Authentication

Get your API key from the [RunLocal dashboard](https://edgemeter.runlocal.ai):

1. Log in to [RunLocal](https://edgemeter.runlocal.ai)
2. Click your avatar → User Settings
3. Navigate to "API Keys"
4. Click "Create New API Key"
5. Save your key securely

```bash
export RUNLOCAL_API_KEY=<your_api_key>
```
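In scripts it can help to check for the key up front rather than failing deep inside an API call. A minimal sketch using only the standard library, assuming the client reads `RUNLOCAL_API_KEY` from the environment as the export above suggests (the placeholder value is illustrative):

```python
import os

# For quick experiments you can set the key for the current process only;
# prefer exporting it in your shell or using a secrets manager for anything
# long-lived. The placeholder below is illustrative.
os.environ.setdefault("RUNLOCAL_API_KEY", "<your_api_key>")

# Fail fast with a clear message if the key is still empty.
if not os.environ["RUNLOCAL_API_KEY"]:
    raise RuntimeError("RUNLOCAL_API_KEY is not set")
```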

## 🕹 Usage Guide

### Simple Benchmark

```python
from runlocal_hub import RunLocalClient, display_benchmark_results

client = RunLocalClient()

# Benchmark on any available device
results = client.benchmark("model.mlpackage")
display_benchmark_results(results)
```

### Device Filtering

Target specific devices with intuitive filters:

```python
from runlocal_hub import DeviceFilters, RunLocalClient

client = RunLocalClient()

# High-end MacBooks with M-series chips
mac_filters = DeviceFilters(
    device_name="MacBook",
    soc="Apple M",        # Matches M1, M2, M3, etc.
    ram_min=16,           # At least 16GB RAM
    year_min=2021         # Recent models only
)

# Latest iPhones with Neural Engine
iphone_filters = DeviceFilters(
    device_name="iPhone",
    year_min=2022,
    compute_units=["CPU_AND_NE"]
)

# Run benchmarks
results = client.benchmark(
    "model.mlpackage",
    device_filters=[mac_filters, iphone_filters],
    count=None  # Use all matching devices
)
```

### 🧮 Running Predictions

Test your model with real inputs:

```python
import numpy as np

from runlocal_hub import DeviceFilters, RunLocalClient

client = RunLocalClient()

# Prepare input
image = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = {"image": image}

# Run prediction on an iPhone's CPU + Neural Engine
outputs = client.predict(
    inputs=inputs,
    model_path="model.mlpackage",
    device_filters=DeviceFilters(device_name="iPhone 15", compute_units=["CPU_AND_NE"])
)

tensors = outputs["CPU_AND_NE"]
for name, tensor in tensors.items():
    print(f"  {name}: {tensor.shape} ({tensor.dtype})")
    print(f"  First values: {tensor.flatten()[:5]}")
```
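Outputs from accelerators such as the Neural Engine often deviate slightly from a local float32 reference run. One way to sanity-check them is to compare the two `{name: array}` output dicts element-wise; `max_abs_diff` below is a hypothetical helper for that, not part of `runlocal_hub`:

```python
import numpy as np


def max_abs_diff(reference: dict, device: dict) -> float:
    """Largest element-wise deviation between two {name: ndarray} output dicts.

    Hypothetical validation helper; compare a local reference run against the
    tensors returned by the device before trusting accuracy numbers.
    """
    assert reference.keys() == device.keys(), "output names must match"
    return max(
        float(np.max(np.abs(
            reference[name].astype(np.float64) - device[name].astype(np.float64)
        )))
        for name in reference
    )


# Example with two output dicts that differ slightly, as accelerator
# outputs typically do.
ref = {"logits": np.zeros((1, 10), dtype=np.float32)}
dev = {"logits": np.full((1, 10), 1e-4, dtype=np.float32)}
print(max_abs_diff(ref, dev) < 1e-3)  # True
```

A tolerance in the region of `1e-3` is a common starting point for float16/NE execution, but the right threshold depends on your model and task.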

## 📚 Examples

Check out the example scripts:

- [`bench_example.py`](./bench_example.py) - Simple benchmarking example
- [`predict_example.py`](./predict_example.py) - Prediction with custom inputs and serialised outputs

## 💠 Supported Formats

| Format          | Extension                   | Platforms       |
| --------------- | --------------------------- | --------------- |
| CoreML          | `.mlpackage`/`.mlmodel`     | macOS, iOS      |
| ONNX            | `.onnx`                     | Windows, macOS  |
| OpenVINO        | directory (`.xml` + `.bin`) | Windows (Intel) |
| TensorFlow Lite | `.tflite`                   | Android         |
| GGUF            | `.gguf`                     | All platforms   |

More frameworks coming soon.
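For pre-flight checks in your own scripts, the table above can be mirrored by a small extension lookup. This is a hypothetical helper — `runlocal_hub` performs its own format detection, which may differ:

```python
from pathlib import Path

# Hypothetical mapping mirroring the table above.
EXTENSION_TO_FORMAT = {
    ".mlpackage": "CoreML",
    ".mlmodel": "CoreML",
    ".onnx": "ONNX",
    ".tflite": "TensorFlow Lite",
    ".gguf": "GGUF",
}


def guess_format(model_path: str) -> str:
    """Guess the model format from a path, per the supported-formats table."""
    path = Path(model_path)
    # OpenVINO models are directories containing an .xml + .bin pair.
    if path.is_dir() and any(path.glob("*.xml")):
        return "OpenVINO"
    return EXTENSION_TO_FORMAT.get(path.suffix, "unknown")


print(guess_format("model.mlpackage"))  # CoreML
```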

## 📜 License

This project is licensed under the MIT License - see the [LICENSE.txt](LICENSE.txt) file for details.

            
