# RLay
RLay (pronounced like "relay") is a tool that enables building [Gymnasium](https://github.com/Farama-Foundation/Gymnasium) environments with nearly any language or software toolkit.

The main inspiration is interfacing with games built in powerful engines like Unity and Unreal.
Adding an RLay client or server to the environment code exposes it through the standard
Gymnasium API.
There are two possible paradigms -- the environment runs either as a server, or as a client.

`ClientEnv` has the more intuitive interpretation. The server maintains an instance of the environment
and calls its methods in response to incoming `MemServer` calls. The user (or the RL algorithm) calls methods on `ClientEnv`,
which in turn invokes the corresponding `MemServer` methods on the server.
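From the user's side, such an environment is driven exactly like any other Gymnasium environment. The sketch below illustrates that loop with a stand-in class (`ToyClientEnv` and its internals are hypothetical stand-ins, not RLay's actual API; the "server" is a local dict so the example runs without networking):

```python
class ToyClientEnv:
    """Stand-in for a ClientEnv: each method would forward a request to
    the server, which owns the real environment instance."""

    def __init__(self):
        # In RLay this would open a connection to the environment server.
        self._server_state = {"steps": 0}

    def reset(self, seed=None):
        # Forward the reset request; the server resets its env instance.
        self._server_state["steps"] = 0
        observation, info = 0.0, {}
        return observation, info

    def step(self, action):
        # Forward the action; the server steps its env and replies with
        # the standard Gymnasium 5-tuple.
        self._server_state["steps"] += 1
        observation = float(self._server_state["steps"])
        reward = 1.0 if action == 1 else 0.0
        terminated = self._server_state["steps"] >= 5
        truncated = False
        return observation, reward, terminated, truncated, {}

# The standard Gymnasium interaction loop, unchanged:
env = ToyClientEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = 1  # a real agent would choose based on obs
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(total_reward)  # → 5.0
```

The point of the sketch is that the network hop is hidden behind the usual `reset`/`step` interface, so existing Gymnasium-based training code needs no changes.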
`ServerEnv` works the other way around. It expects that the user creates a server which implements a policy,
and the environment lives in a client which can query that policy. At each step, the client sends the current observation to the server
and receives back the action to execute.
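The control flow is inverted: the environment drives the loop and queries the policy. A minimal in-process sketch of that pattern (function names and the toy dynamics are illustrative assumptions; in RLay the query would go over a connection rather than a direct call):

```python
def policy_server(observation):
    """Stand-in for the user's server-side policy: maps an observation
    to an action. In RLay this would answer requests from the client."""
    return 1 if observation < 3 else 0

def run_client_environment(num_steps=5):
    """Stand-in for the client holding the environment. The environment
    loop lives here and asks the policy server for each action."""
    observation = 0
    actions = []
    for _ in range(num_steps):
        action = policy_server(observation)  # send obs, receive action
        actions.append(action)
        observation += action                # toy environment dynamics
    return actions

print(run_client_environment())  # → [1, 1, 1, 0, 0]
```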
In summary, in `ClientEnv`:
- The underlying environment logic lives on the server
- The `Env` instance exists in the client
- The algorithmic logic is in the client

In `ServerEnv`:
- The underlying environment logic is in the client
- The `Env` instance exists on the server
- The algorithmic logic is on the server
The `ServerEnv` implementation is inspired by ML-Agents, but we generally recommend using `ClientEnv`.
## Protocol
`ClientBackend` - `ServerEnv`:
- Handshake -- server sends a message, client sends a message
- Server sends a message to hand over control
# TODO: finish this
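The handshake steps listed above can be sketched over a plain TCP socket. The message contents and framing below are placeholders chosen for illustration, not RLay's actual wire format:

```python
import socket
import threading

def server(port_holder, ready):
    """Toy server performing the three handshake steps on one client."""
    srv = socket.create_server(("127.0.0.1", 0))
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(b"HELLO\n")            # 1. server sends a message
    conn.recv(64)                       # 2. client sends a message back
    conn.sendall(b"YOUR_TURN\n")        # 3. server hands over control
    conn.close()
    srv.close()

port_holder, ready = [], threading.Event()
t = threading.Thread(target=server, args=(port_holder, ready))
t.start()
ready.wait()

cli = socket.create_connection(("127.0.0.1", port_holder[0]))
greeting = cli.recv(64)   # receive the server's opening message
cli.sendall(greeting)     # reply to complete the handshake
handoff = cli.recv(64)    # server signals that control is handed over
cli.close()
t.join()
print(handoff)  # → b'YOUR_TURN\n'
```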
## Package metadata

- Name: `rlay`, version 0.0.1 (released 2023-12-30)
- License: MIT
- Author: Farama Foundation (contact@farama.org)
- Requires Python: >=3.8
- Homepage: https://farama.org
- Documentation: https://rlay.farama.org
- Repository: https://github.com/Farama-Foundation/rlay
- Issues: https://github.com/Farama-Foundation/rlay/issues