remoterl


Name: remoterl
Version: 1.1.3
Home page: https://remoterl.com
Summary: RemoteRL: high-performance command-line & Python client with pre-compiled native wheels for remote reinforcement-learning workloads.
Upload time: 2025-07-12 19:11:49
Author: JunHo Park
Requires Python: >=3.9
License: REMOTE RL COMMERCIAL LICENSE
Keywords: remote rl reinforcement-learning online-learning gymnasium ray rllib stable-baselines3
VCS: https://github.com/ccnets-team/remoterl
Requirements: remoterl
# RemoteRL: Cloud Service for Remote Reinforcement Learning

## Overview

**RemoteRL** is a cloud-based platform for running reinforcement-learning (RL) training remotely, decoupling the agent’s training process from environment (simulator) execution. It lets you **connect environments (simulators or robots) from anywhere**, stream live experience data, and train smarter models in real time. You always run both pieces yourself: the trainer on a laptop, on-prem cluster, or robot controller, and the simulators or robots wherever they already live, nearby or across the globe. RemoteRL’s relay nodes stream data between them over secure WebSockets and plug into popular RL frameworks, all without rewriting code.

**Key Characteristics:**

* **Separation of Concerns:** The **RL environment (“Simulator”)** keeps running wherever it already lives—on your PC, a robot, or another server—and connects to RemoteRL’s relay. The **learning algorithm (“Trainer”)** stays on your own machine or cluster, receiving observations and rewards through RemoteRL and sending back actions or updated policies. Because each process remains on hardware you control, you never have to install the simulator on the training host, simplifying setup and letting you leverage local compute when it makes sense.

* **Use Cases:** RemoteRL is especially useful for scenarios like **robotics and IoT** (where a physical robot or device provides real-time data to train a model remotely) and **distributed simulation** (where many simulators run in parallel on different machines to speed up training). It allows training "from anywhere" – whether the environment is on-premises or in the cloud – by handling the networking, synchronization, and scaling behind the scenes. For example, a robotics engineer could connect a robot’s control loop to RemoteRL and have a cloud-based RL agent train on the live data, or a researcher could run dozens of game simulations on local PCs all feeding into a single cloud RL learner.

## Core Features and Functionality

RemoteRL provides a number of features to make remote and distributed RL **easy to integrate and manage**:

* **Zero-Setup Integration:** It works out of the box with popular Python RL libraries. Users simply install the `remoterl` Python package and add a one-line initialization with their API key to an existing Gymnasium or RLlib script – *“Run `pip install remoterl` and add `remoterl.init(api_key="…")` to any Gymnasium / Ray RLlib script, and hit run — no rewrites, zero friction”*. This means **no extensive code refactoring or complex cluster setup** is required; RemoteRL hooks into the RL frameworks and handles remote-environment communication transparently (see the quick-start sketch after this list).

* **Framework and Tool Support:** The service **auto-integrates with popular RL frameworks**: **Gymnasium** (the maintained successor to OpenAI Gym) for environment interfaces and **Ray RLlib** for distributed training algorithms, plus libraries such as **Stable-Baselines3**. This broad support lets developers keep using familiar APIs and algorithms (e.g., RLlib’s trainers or Stable-Baselines3’s agents) while RemoteRL manages the remote execution of environments. Whether you use custom or standard Gym environments, and whether you train with RLlib’s scalable architecture or a single-agent approach, RemoteRL plugs in with minimal changes.

* **Live Web Dashboard & Monitoring:** RemoteRL provides a **web-based dashboard** that shows which simulators and trainers are online, alongside basic byte/step counters. When you add `remoterl.init(api_key="…")` to a process, it appears on the dashboard within seconds. The dashboard displays **connection status only**: RemoteRL does **not** view, store, or inspect observation, reward, or model payloads. Operators can disconnect a simulator or temporarily block new sessions; they cannot peek into in-flight data or pause individual training iterations.
  
* **Distributed & Scalable Training:** RemoteRL’s relay layer is **logically centralized**, but it feeds data just as well into a **single-process trainer** or a **distributed RL cluster** you run with frameworks such as Ray RLlib. Many simulators—or whole fleets of robots—can stream experience to one learner, while an RLlib-style trainer can shard its own work behind the relay. RemoteRL handles fan-in/fan-out, step ordering, and back-pressure, so you scale from one to hundreds of simulators without touching networking code or changing your training loop.

* **Real-Time Online RL:** RemoteRL streams observations and rewards to the trainer—and actions back to simulators—in real time. Each simulator automatically connects through the relay node in the nearest cloud region, so every `env.step()` returns with minimal round-trip delay even when trainer and environments are on different continents. With servers rolling out across major cities, you can run a trainer in Europe while simulators in Asia and the Americas each route to their closest node, delivering interactive, online RL at global scale.

* **Usage-Based Pricing and Tiers:** RemoteRL offers a **free tier** and a premium plan. The service will be *“available free for everyone”* for light use, meaning you can run training jobs without paying as long as data usage stays under a set limit; the **free tier includes 1 GB of data traffic**. For heavier usage, **premium service (pay-as-you-go)** is offered, *“charged per GB”* for data beyond the free quota. The premium tier is designed for those who need **“extensive data usage, multiple simulators, and large-scale training across global regions”**, so large experiments or enterprise deployments with many environments and high throughput incur a usage-based fee. Billing is **traffic- and duration-based** (bytes of data exchanged plus active connection minutes), so you only pay for what you use, and the premium tier imposes **no hard limit on speed or number of simulators** beyond practical scaling limits. This makes the service accessible to individual researchers on the free tier while scaling to industrial projects on the paid tier.
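
As a quick-start sketch of the zero-setup flow above, the snippet below combines the documented `remoterl.init` call with a standard Stable-Baselines3 training script; the environment ID, hyperparameters, and routing behaviour described in the comments are illustrative assumptions based on the description above, not verified API details.

```python
import gymnasium as gym
import remoterl
from stable_baselines3 import PPO

# One-line initialization; per the README, this is the only RemoteRL-specific change.
remoterl.init(api_key="YOUR_KEY", role="trainer")

# gym.make() is used as usual; RemoteRL routes the environment to a connected remote simulator.
env = gym.make("CartPole-v1")

# An ordinary Stable-Baselines3 training run, left unchanged.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
env.close()
```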

## Architecture and How It Works

At a high level, RemoteRL follows a **client–server architecture specialized for RL**. The **core components** are defined as follows:

* **Trainer:** The RL algorithm process that you run and manage (e.g., on your own cloud instances, on-prem servers, or a laptop). RemoteRL simply provides the networking bridge: it relays state/reward data from simulators to your trainer and returns actions. You can still scale the trainer across multiple machines—using Ray RLlib or similar frameworks—while RemoteRL abstracts away the connection details, not the computation itself.

* **Simulator:** The environment process that produces observations and receives actions—whether it’s a game engine, robotics simulator, custom code, or a real-world device such as a robot control loop or IoT sensor. It can run anywhere (local machine, edge device, or another cloud) and connects to RemoteRL over a secure WebSocket. Functionally it behaves like any Gym environment; the only difference is that these interactions travel across the network through RemoteRL’s bridge.

```python
import gymnasium as gym
import remoterl 

remoterl.init(api_key="YOUR_KEY", role="trainer") # the SDK treats this process as the trainer
env = gym.make("Humanoid-v5")  # Environment runs remotely in another city
```
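
The snippet above covers the trainer side only. A companion simulator process would also call `remoterl.init`; the role name and post-init behaviour in the sketch below are assumptions made by analogy with the trainer example, not confirmed API:

```python
import remoterl

# Hypothetical simulator-side counterpart; only role="trainer" appears in the
# official example, so the "simulator" role name here is an assumption.
remoterl.init(api_key="YOUR_KEY", role="simulator")

# Per the Architecture description, this process stays connected to the nearest
# relay and executes the environment steps requested by the remote trainer.
```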

**Communication:** When you integrate RemoteRL, the typical RL loop is split between the trainer and simulator via the network:

* The **trainer invokes** an environment step (e.g., calling `env.step(action)` in RL code). Instead of running locally, this call is routed through RemoteRL’s library, which sends the action to the remote simulator over the WebSocket connection.
* The **simulator receives** the action, applies it (advancing the simulation or executing on the robot), and returns the resulting observation, reward, and done flag back over the socket.
* The trainer then uses the received data to update the agent’s model (e.g., via policy gradient, Q-learning update, etc.), and the cycle repeats.
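
Putting these steps together, here is a minimal sketch of the trainer-side loop under the same assumptions as the Architecture example; the random action is a placeholder for a real policy and model update:

```python
import gymnasium as gym
import remoterl

remoterl.init(api_key="YOUR_KEY", role="trainer")
env = gym.make("Humanoid-v5")            # served by a remote simulator through the relay

obs, info = env.reset()                  # the reset request travels over the WebSocket bridge
for _ in range(1_000):
    action = env.action_space.sample()   # placeholder for the agent's policy
    # The step call is routed to the remote simulator, which applies the action and
    # returns the observation, reward, and termination flags over the socket.
    obs, reward, terminated, truncated, info = env.step(action)
    # ... update the agent's model from (obs, reward) here ...
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```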

            
