# TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
[![PyPI tf-agents](https://badge.fury.io/py/tf-agents.svg)](https://badge.fury.io/py/tf-agents)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/tf-agents)
[TF-Agents](https://github.com/tensorflow/agents) makes implementing, deploying,
and testing new Bandits and RL algorithms easier. It provides well-tested,
modular components that can be modified and extended. It enables fast code
iteration, with good test integration and benchmarking.
To get started, we recommend checking out one of our Colab tutorials. If you
need an intro to RL (or a quick recap),
[start here](docs/tutorials/0_intro_rl.ipynb). Otherwise, check out our
[DQN tutorial](docs/tutorials/1_dqn_tutorial.ipynb) to get an agent up and
running in the Cartpole environment. API documentation for the current stable
release is on
[tensorflow.org](https://www.tensorflow.org/agents/api_docs/python/tf_agents).
TF-Agents is under active development and interfaces may change at any time.
Feedback and comments are welcome.
## Table of contents
<a href='#Agents'>Agents</a><br>
<a href='#Tutorials'>Tutorials</a><br>
<a href='#Multi-Armed Bandits'>Multi-Armed Bandits</a><br>
<a href='#Examples'>Examples</a><br>
<a href='#Installation'>Installation</a><br>
<a href='#Contributing'>Contributing</a><br>
<a href='#Releases'>Releases</a><br>
<a href='#Principles'>Principles</a><br>
<a href='#Contributors'>Contributors</a><br>
<a href='#Citation'>Citation</a><br>
<a href='#Disclaimer'>Disclaimer</a><br>
<a id='Agents'></a>
## Agents
In TF-Agents, the core elements of RL algorithms are implemented as `Agents`. An
agent encompasses two main responsibilities: defining a Policy to interact with
the Environment, and learning/training that Policy from collected experience.
Currently the following algorithms are available under TF-Agents:
* [DQN: __Human level control through deep reinforcement learning__ Mnih et
al., 2015](https://deepmind.com/research/dqn/)
* [DDQN: __Deep Reinforcement Learning with Double Q-learning__ Hasselt et
al., 2015](https://arxiv.org/abs/1509.06461)
* [DDPG: __Continuous control with deep reinforcement learning__ Lillicrap et
al., 2015](https://arxiv.org/abs/1509.02971)
* [TD3: __Addressing Function Approximation Error in Actor-Critic Methods__
Fujimoto et al., 2018](https://arxiv.org/abs/1802.09477)
* [REINFORCE: __Simple Statistical Gradient-Following Algorithms for
Connectionist Reinforcement Learning__ Williams,
1992](https://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf)
* [PPO: __Proximal Policy Optimization Algorithms__ Schulman et al., 2017](https://arxiv.org/abs/1707.06347)
* [SAC: __Soft Actor Critic__ Haarnoja et al., 2018](https://arxiv.org/abs/1812.05905)
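The Policy/train split described above can be illustrated with a deliberately tiny, library-free sketch. The `ToyQAgent` class and its methods are hypothetical and only mirror the structure of an agent, not the TF-Agents API: the agent owns a policy that selects actions and a train step that updates its estimates from collected experience.

```python
import random

class ToyQAgent:
    """Minimal illustration of the Agent idea: a policy for acting,
    plus a train step that learns from experience. Hypothetical class,
    not the TF-Agents API."""

    def __init__(self, num_actions, epsilon=0.2, lr=0.5):
        self.q = [0.0] * num_actions  # per-action value estimates
        self.epsilon = epsilon        # exploration rate
        self.lr = lr                  # learning rate

    def policy(self):
        # Epsilon-greedy: occasionally explore, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def train(self, action, reward):
        # Move the taken action's estimate toward the observed reward.
        self.q[action] += self.lr * (reward - self.q[action])

random.seed(0)
agent = ToyQAgent(num_actions=2)
for _ in range(500):
    a = agent.policy()
    r = 1.0 if a == 1 else 0.0  # action 1 is secretly better
    agent.train(a, r)
print(agent.q)  # action 1's estimate should approach 1.0
```

In TF-Agents the same two responsibilities are split across dedicated, swappable components (networks, policies, replay buffers), which is what makes the agents modular and extensible.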
<a id='Tutorials'></a>
## Tutorials
See [`docs/tutorials/`](docs/tutorials) for tutorials on the major components
provided.
<a id='Multi-Armed Bandits'></a>
## Multi-Armed Bandits
The TF-Agents library contains a comprehensive Multi-Armed Bandits suite,
including Bandits environments and agents. RL agents can also be used on Bandit
environments. There is a tutorial in
[`bandits_tutorial.ipynb`](https://github.com/tensorflow/agents/tree/master/docs/tutorials/bandits_tutorial.ipynb)
and ready-to-run examples in
[`tf_agents/bandits/agents/examples/v2`](https://github.com/tensorflow/agents/tree/master/tf_agents/bandits/agents/examples/v2).
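As a conceptual illustration of the bandit setting, here is a minimal UCB1 loop on simulated Bernoulli arms. The `ucb1` helper is hypothetical and independent of the TF-Agents Bandits suite; it only shows the explore/exploit trade-off that the suite's agents address.

```python
import math
import random

def ucb1(arm_means, steps=2000, seed=0):
    """Play simulated Bernoulli arms using the UCB1 rule.
    Illustrative helper only, not a TF-Agents API."""
    rng = random.Random(seed)
    n = len(arm_means)
    counts = [0] * n    # pulls per arm
    totals = [0.0] * n  # cumulative reward per arm
    for t in range(1, steps + 1):
        if t <= n:
            arm = t - 1  # play each arm once to initialize
        else:
            # Pick the arm with the highest upper confidence bound.
            arm = max(range(n), key=lambda a: totals[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

counts = ucb1([0.2, 0.5, 0.8])
print(counts)  # the 0.8 arm should receive most of the pulls
```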
<a id='Examples'></a>
## Examples
End-to-end examples that train agents can be found under each agent directory,
e.g.:
* DQN:
[`tf_agents/agents/dqn/examples/v2/train_eval.py`](https://github.com/tensorflow/agents/tree/master/tf_agents/agents/dqn/examples/v2/train_eval.py)
<a id='Installation'></a>
## Installation
TF-Agents publishes nightly and stable builds. For a list of releases read the
<a href='#Releases'>Releases</a> section. The commands below cover installing
TF-Agents stable and nightly from [pypi.org](https://pypi.org) as well as from a
GitHub clone.
> :warning: If using Reverb (replay buffer), which is very common, TF-Agents
only works on Linux.
> Note: Python 3.11 requires pygame 2.1.3+.
### Stable
Run the commands below to install the most recent stable release. API
documentation for the release is on
[tensorflow.org](https://www.tensorflow.org/agents/api_docs/python/tf_agents).
```shell
$ pip install --user tf-agents[reverb]
# Use this tag to get the matching examples and colabs.
$ git clone https://github.com/tensorflow/agents.git
$ cd agents
$ git checkout v0.18.0
```
If you want to install TF-Agents with versions of TensorFlow or
[Reverb](https://github.com/deepmind/reverb) that are flagged as incompatible
by the pip dependency check, use the following pattern at your own risk.
```shell
$ pip install --user tensorflow
$ pip install --user dm-reverb
$ pip install --user tf-agents
```
If you want to use TF-Agents with TensorFlow 1.15 or 2.0, install version 0.3.0:
```shell
# Newer versions of tensorflow-probability require newer versions of TensorFlow.
$ pip install tensorflow-probability==0.8.0
$ pip install tf-agents==0.3.0
```
### Nightly
Nightly builds include newer features, but may be less stable than the versioned
releases. The nightly build is pushed as `tf-agents-nightly`. We suggest
installing nightly versions of TensorFlow (`tf-nightly`) and TensorFlow
Probability (`tfp-nightly`) as those are the versions TF-Agents nightly are
tested against.
To install the nightly build version, run the following:
```shell
# `--force-reinstall` helps guarantee the right versions.
$ pip install --user --force-reinstall tf-nightly
$ pip install --user --force-reinstall tfp-nightly
$ pip install --user --force-reinstall dm-reverb-nightly
# Installing with the `--upgrade` flag ensures you'll get the latest version.
$ pip install --user --upgrade tf-agents-nightly
```
### From GitHub
After cloning the repository, the dependencies can be installed by running `pip
install -e .[tests]`. TensorFlow needs to be installed independently: `pip
install --user tf-nightly`.
<a id='Contributing'></a>
## Contributing
We're eager to collaborate with you! See [`CONTRIBUTING.md`](CONTRIBUTING.md)
for a guide on how to contribute. This project adheres to TensorFlow's
[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to
uphold this code.
<a id='Releases'></a>
## Releases
TF-Agents has stable and nightly releases. The nightly releases are often fine
but can have issues due to upstream libraries being in flux. The table below
lists the version(s) of TensorFlow that align with each TF-Agents release.
Release versions of interest:
* 0.18.0 dropped Python 3.8 support.
* 0.16.0 is the first version to support Python 3.11.
* 0.15.0 is the last release compatible with Python 3.7.
* If using numpy < 1.19, then use TF-Agents 0.15.0 or earlier.
* 0.9.0 is the last release compatible with Python 3.6.
* 0.3.0 is the last release compatible with Python 2.x.
Release | Branch / Tag | TensorFlow Version | dm-reverb Version
------- | ---------------------------------------------------------- | ------------------ | -----------
Nightly | [master](https://github.com/tensorflow/agents) | tf-nightly | dm-reverb-nightly
0.18.0 | [v0.18.0](https://github.com/tensorflow/agents/tree/v0.18.0) | 2.14.0 | 0.13.0
0.17.0 | [v0.17.0](https://github.com/tensorflow/agents/tree/v0.17.0) | 2.13.0 | 0.12.0
0.16.0 | [v0.16.0](https://github.com/tensorflow/agents/tree/v0.16.0) | 2.12.0 | 0.11.0
0.15.0 | [v0.15.0](https://github.com/tensorflow/agents/tree/v0.15.0) | 2.11.0 | 0.10.0
0.14.0 | [v0.14.0](https://github.com/tensorflow/agents/tree/v0.14.0) | 2.10.0 | 0.9.0
0.13.0 | [v0.13.0](https://github.com/tensorflow/agents/tree/v0.13.0) | 2.9.0 | 0.8.0
0.12.0 | [v0.12.0](https://github.com/tensorflow/agents/tree/v0.12.0) | 2.8.0 | 0.7.0
0.11.0 | [v0.11.0](https://github.com/tensorflow/agents/tree/v0.11.0) | 2.7.0 | 0.6.0
0.10.0 | [v0.10.0](https://github.com/tensorflow/agents/tree/v0.10.0) | 2.6.0 |
0.9.0 | [v0.9.0](https://github.com/tensorflow/agents/tree/v0.9.0) | 2.6.0 |
0.8.0 | [v0.8.0](https://github.com/tensorflow/agents/tree/v0.8.0) | 2.5.0 |
0.7.1 | [v0.7.1](https://github.com/tensorflow/agents/tree/v0.7.1) | 2.4.0 |
0.6.0 | [v0.6.0](https://github.com/tensorflow/agents/tree/v0.6.0) | 2.3.0 |
0.5.0 | [v0.5.0](https://github.com/tensorflow/agents/tree/v0.5.0) | 2.2.0 |
0.4.0 | [v0.4.0](https://github.com/tensorflow/agents/tree/v0.4.0) | 2.1.0 |
0.3.0 | [v0.3.0](https://github.com/tensorflow/agents/tree/v0.3.0) | 1.15.0 and 2.0.0 |
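The pairings in the table can be captured in a small lookup, for example to build a pinned install command; the dictionary and helper name below are illustrative, with values copied from the table.

```python
# TF-Agents release -> compatible TensorFlow release, from the table above.
TF_FOR_AGENTS = {
    "0.18.0": "2.14.0", "0.17.0": "2.13.0", "0.16.0": "2.12.0",
    "0.15.0": "2.11.0", "0.14.0": "2.10.0", "0.13.0": "2.9.0",
    "0.12.0": "2.8.0",  "0.11.0": "2.7.0",  "0.10.0": "2.6.0",
}

def pinned_install(agents_version):
    """Return a pip command pinning matching versions (illustrative helper)."""
    tf_version = TF_FOR_AGENTS[agents_version]
    return f"pip install tf-agents=={agents_version} tensorflow=={tf_version}"

print(pinned_install("0.16.0"))
# pip install tf-agents==0.16.0 tensorflow==2.12.0
```

Pinning both packages together avoids pip resolving a TensorFlow version the chosen TF-Agents release was not tested against.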
<a id='Principles'></a>
## Principles
This project adheres to [Google's AI principles](PRINCIPLES.md). By
participating, using or contributing to this project you are expected to adhere
to these principles.
<a id='Contributors'></a>
## Contributors
We would like to recognize the following individuals for their code
contributions, discussions, and other work that helped make the TF-Agents library.
* James Davidson
* Ethan Holly
* Toby Boyd
* Summer Yue
* Robert Ormandi
* Kuang-Huei Lee
* Alexa Greenberg
* Amir Yazdanbakhsh
* Yao Lu
* Gaurav Jain
* Christof Angermueller
* Mark Daoust
* Adam Wood
<a id='Citation'></a>
## Citation
If you use this code, please cite it as:
```
@misc{TFAgents,
title = {{TF-Agents}: A library for Reinforcement Learning in TensorFlow},
author = {Sergio Guadarrama and Anoop Korattikara and Oscar Ramirez and
Pablo Castro and Ethan Holly and Sam Fishman and Ke Wang and
Ekaterina Gonina and Neal Wu and Efi Kokiopoulou and Luciano Sbaiz and
Jamie Smith and Gábor Bartók and Jesse Berent and Chris Harris and
Vincent Vanhoucke and Eugene Brevdo},
howpublished = {\url{https://github.com/tensorflow/agents}},
url = "https://github.com/tensorflow/agents",
year = 2018,
note = "[Online; accessed 25-June-2019]"
}
```
<a id='Disclaimer'></a>
## Disclaimer
This is not an official Google product.