# Drone Swarm Search
## Quick Start
#### Install
`pip install DroneSwarmSearchEnvironment`
#### Use
`from DroneSwarmSearchEnvironment.env import DroneSwarmSearch`
## About
The Drone Swarm Search project is a PettingZoo-based environment to be used with multi-agent (or single-agent) reinforcement learning algorithms. In it, the agents (drones) have to find the targets (shipwrecked people). The agents do not know the position of the target and do not receive rewards related to their own distance to the target(s); instead, they receive the probabilities of the target(s) being in each cell of the map. The aim of this project is to aid the study of reinforcement learning algorithms that require dynamic probabilities as inputs. A visual representation of the environment is displayed below. To test the environment (without an algorithm), run `basic_env.py`.
<p align="center">
<img src="https://raw.githubusercontent.com/PFE-Embraer/drone-swarm-search/main/docs/gifs/render_with_grid_gradient.gif" width="400" height="400" align="center">
</p>
## Outcome
| If the person is found | If the person is not found |
|:-------------------------:|:-------------------------:|
| ![](https://raw.githubusercontent.com/PFE-Embraer/drone-swarm-search/main/docs/pics/victory_render.png) | ![](https://raw.githubusercontent.com/PFE-Embraer/drone-swarm-search/main/docs/pics/fail_render.png) |
## Basic Usage
```python
from DroneSwarmSearchEnvironment.env import DroneSwarmSearch

env = DroneSwarmSearch(
    grid_size=50,
    render_mode="human",
    render_grid=True,
    render_gradient=True,
    n_drones=11,
    vector=[0.5, 0.5],
    person_initial_position=[5, 10],
    disperse_constant=3,
)

def policy(obs, agents):
    # Toy policy: every drone moves right (action 1) on every step.
    return {agent: 1 for agent in agents}

observations = env.reset()
total_reward = 0
done = False

while not done:
    actions = policy(observations, env.get_agents())
    # step returns observation, reward, termination, truncation, info.
    observations, reward, terminations, truncations, info = env.step(actions)
    total_reward += reward["total_reward"]
    done = any(terminations.values()) or any(truncations.values())

print(total_reward)
```
### Installing Dependencies
The environment requires Python 3.10.5 or above.
To install the dependencies, run `pip install -r requirements.txt`.
### General Info
| Import | `from DroneSwarmSearchEnvironment.env import DroneSwarmSearch` |
| ------------- | ------------- |
| Action Space | Discrete(6) |
| Action Values | [0, 1, 2, 3, 4, 5] |
| Agents | N |
| Observation Space | `{droneN: {observation: ((x, y), probability_matrix)}}` |
### Action Space
| Value | Meaning |
| ------------- | ------------- |
| 0 | Move Left |
| 1 | Move Right |
| 2 | Move Up |
| 3 | Move Down |
| 4 | Search Cell |
| 5 | Idle |
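For readability, the integer actions can be wrapped in a small enum. The `Action` class below is a convenience sketch mirroring the table above; it is not part of the package API.

```python
from enum import IntEnum

class Action(IntEnum):
    # Mirrors the action table above.
    LEFT = 0
    RIGHT = 1
    UP = 2
    DOWN = 3
    SEARCH = 4
    IDLE = 5

# Example: send every drone to search its current cell.
actions = {f"drone{i}": int(Action.SEARCH) for i in range(3)}
```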
### Inputs
| Inputs | Possible Values | Default Values |
| ------------- | ------------- | ------------- |
| `grid_size` | `int(N)` | `7` |
| `render_mode` | `"ansi" or "human"` | `"ansi"` |
| `render_grid` | `bool` | `False` |
| `render_gradient` | `bool` | `True` |
| `n_drones` | `int(N)` | `1` |
| `vector` | `[float(x), float(y)]` | `(-0.5, -0.5)` |
| `person_initial_position` | `[int(x), int(y)]` | `[0, 0]` |
| `disperse_constant` | `float` | `10` |
| `timestep_limit` | `int` | `100` |
### `grid_size`:
The grid size defines the area in which the search will happen. It should always be an integer greater than one.
### `render_mode`:
There are two available render modes, *ansi* and *human*.
**Ansi**: This mode presents no visualization and is intended for training the reinforcement learning algorithm.
**Human**: This mode presents a visualization of the drones actively searching the target, as well as the visualization of the person moving according to the input vector.
### `render_grid`:
The *render_grid* variable is a boolean. If it is set to **True** together with `render_mode = "human"`, the visualization is rendered with a grid; if it is set to **False**, no grid is drawn.
### `render_gradient`:
The *render_gradient* variable is a boolean. If it is set to **True** together with `render_mode = "human"`, the colors in the visualization are interpolated according to the probability of each cell. Otherwise, each cell gets a solid color based on its value, with the matrix values normalized between 0 and 1: `1 > value >= 0.75` renders the cell *green*, `0.75 > value >= 0.25` renders it *yellow*, and `0.25 > value` renders it *red*. An illustrative sketch of this threshold logic follows.
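A minimal sketch of the solid-color thresholds described above (not the renderer's actual code):

```python
def solid_cell_color(value: float) -> str:
    """Map a normalized probability to the solid color described above."""
    if value >= 0.75:
        return "green"
    if value >= 0.25:
        return "yellow"
    return "red"

assert solid_cell_color(0.8) == "green"
assert solid_cell_color(0.5) == "yellow"
assert solid_cell_color(0.1) == "red"
```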
### `n_drones`:
The `n_drones` input defines the number of drones that will take part in the search. It must be an integer greater than or equal to one.
### `vector`:
The `vector` is a list with two values that defines the direction in which the person drifts over time. The first value is the displacement along the `x axis` and the second is the displacement along the `y axis`. A positive x value displaces the person to the right and a negative x value to the left; a positive y value displaces the person downward. A value of 1 results in a displacement of 1 cell per timestep, a value of 0.5 results in a displacement of 1 cell every 2 timesteps, and so on. The sketch below illustrates this cadence.
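The cadence can be sanity-checked with a small accumulator sketch. This is an illustration of the described behavior, not the environment's internal implementation:

```python
def drift_positions(start, vector, steps):
    # Accumulate fractional displacement; move a whole cell when it reaches 1.
    x, y = start
    acc_x = acc_y = 0.0
    positions = [(x, y)]
    for _ in range(steps):
        acc_x += vector[0]
        acc_y += vector[1]
        if abs(acc_x) >= 1:
            x += int(acc_x)       # positive x drifts right
            acc_x -= int(acc_x)
        if abs(acc_y) >= 1:
            y += int(acc_y)       # positive y drifts downward
            acc_y -= int(acc_y)
        positions.append((x, y))
    return positions

# vector=[0.5, 0.5]: one cell right and down every 2 timesteps.
print(drift_positions((5, 10), [0.5, 0.5], 4))
# [(5, 10), (5, 10), (6, 11), (6, 11), (7, 12)]
```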
### `person_initial_position`:
The `person_initial_position` defines the starting point of the target. It should be a list with two integers, where the first component is the `x axis` coordinate and the second is the `y axis` coordinate. The `y axis` is directed downward.
### `disperse_constant`:
The `disperse_constant` is a float that defines the dispersion of the probability matrix. The greater the number, the quicker the probability matrix disperses.
### `timestep_limit`:
The `timestep_limit` is an integer that defines the length of an episode: the maximum number of steps that can be taken before the environment ends or must be reset.
## Built-in Functions:
### `env.reset`:
`env.reset()` will reset the environment to its initial state. If you wish to choose the initial positions of the drones, an argument can be passed to the method using the following syntax: `env.reset(drones_positions=[[5, 5], [25, 5], [45, 5], [5, 15], [25, 15], [45, 15], [10, 35], [30, 35], [45, 25], [33, 45]])`
Each value of the list represents the `[x, y]` initial position of one drone. Make sure that the list has the same number of positions as the number of drones defined in the environment.
Additionally, to change the vector, a tuple (representing the vector) can be passed as an argument: `env.reset(vector=(0.3, 0.3))`. The person's movement will then follow the new vector.
When called without arguments, `env.reset()` allocates the drones from left to right, each in the next adjacent cell; once there are no more available cells in a row, it continues on the next row, again from left to right. The vector also remains unchanged when no argument is passed.
The method returns an observation dictionary with the observations of all drones.
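The positions list can also be generated programmatically. A sketch, assuming the `env` from Basic Usage (`grid_size=50`, `n_drones=11`); the spacing scheme here is just an example, not a requirement of the API:

```python
# Spread 11 drones five cells apart, wrapping to the next row at the edge.
n_drones = 11
positions = [[(i * 5) % 50, (i * 5) // 50] for i in range(n_drones)]
observations = env.reset(drones_positions=positions)
```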
### `env.step`:
The `env.step()` method executes the drones' next actions. It receives a dictionary with all the drone names as keys and their actions as values. For example, in an environment initialized with 10 drones: `env.step({'drone0': 2, 'drone1': 3, 'drone2': 2, 'drone3': 5, 'drone4': 1, 'drone5': 0, 'drone6': 2, 'drone7': 5, 'drone8': 0, 'drone9': 1})`. Every drone must appear in the dictionary with an action value on every step; otherwise an error will be raised.
The method returns the **observation**, the **reward**, the **termination** state, the **truncation** state and the **info** dictionary, in that order.
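Instead of writing the dictionary by hand, it can be built from `env.get_agents()`. The random policy below is just an illustration:

```python
import random

# One random action in [0, 5] per drone, keyed by agent name.
actions = {agent: random.randint(0, 5) for agent in env.get_agents()}
observations, reward, terminations, truncations, info = env.step(actions)
```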
### Person movement:
The person's movement is driven by the probability matrix and the vector. The vector shifts the probabilities, which in turn defines the position of the person: the chance of the person being in a cell is given by that cell's probability. Moreover, the person can only move one cell at a time, so in every step the person can only move to a cell adjacent to the one currently occupied. This was done to create a more realistic movement for the shipwrecked person.
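As a toy illustration of how a vector can shift probability mass (this is not the package's internal code), consider rolling a matrix with `numpy`:

```python
import numpy as np

prob = np.zeros((5, 5))
prob[2, 2] = 1.0                      # all probability mass at row 2, col 2
# Shift one cell down (axis 0) and one cell right (axis 1).
shifted = np.roll(prob, shift=(1, 1), axis=(0, 1))
print(np.argwhere(shifted == 1.0))    # [[3 3]] -> mass moved down and right
```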
#### Observation:
The observation is a dictionary with all the drone names as keys. Each drone maps to another dictionary with "observation" as its key and a tuple as its value. The tuple has the pattern `((x_position, y_position), probability_matrix)`. An output example can be seen below.
```python
{
'drone0':
{'observation': ((5, 5), array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]))
},
'drone1':
{'observation': ((25, 5), array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]))
},
'drone2':
{'observation': ((45, 5), array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]))
},
.................................
'drone9':
{'observation': ((33, 45), array([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]))
}
}
```
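Each entry can be unpacked directly. The shape printed below assumes the probability matrix is `grid_size` x `grid_size` with `grid_size=50`:

```python
# Unpack a single drone's observation into position and matrix.
(x, y), prob_matrix = observations["drone0"]["observation"]
print(x, y, prob_matrix.shape)  # e.g. 5 5 (50, 50)
```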
#### Reward:
The reward is a dictionary with the drone names as keys and their respective rewards as values, plus a `total_reward` key holding the sum of all agents' rewards. For example: `{'drone0': 1, 'drone1': 89.0, 'drone2': 1, 'total_reward': 91.0}`
The reward values are as follows (a sketch of the search-related cases follows the list):
- **1** for every action, by default
- **-100000** if a drone leaves the grid
- **(*sum_of_rewards* * -1) - 100000** if the drones do not find the person before the timestep exceeds `timestep_limit`
- **-100000** if drones collide
- ***(probability of cell * 10000) if (probability of cell * 100 > 1) else -100*** for searching a cell
- ***10000 + 10000 * (1 - timestep / timestep_limit)*** if the drone searches the cell in which the person is located
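A minimal sketch of the two search-related cases, under the assumption that the bullet formulas apply literally:

```python
def search_reward(p, found, timestep, timestep_limit):
    # Sketch of the search-cell reward cases above (assumed semantics,
    # not the package's internal code). p is the searched cell's probability.
    if found:
        return 10000 + 10000 * (1 - timestep / timestep_limit)
    return p * 10000 if p * 100 > 1 else -100

print(search_reward(0.05, False, 10, 100))   # 500.0  (0.05 * 100 > 1)
print(search_reward(0.001, False, 10, 100))  # -100   (low-probability cell)
print(search_reward(0.05, True, 10, 100))    # 19000.0
```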
#### Termination & Truncation:
The termination and truncation variables are dictionaries with all drone names as keys and booleans as values, for example `{'drone0': False, 'drone1': False, 'drone2': False}`. The booleans are False by default and turn True under any of the conditions below:
- two or more drones collide
- one of the drones leaves the grid
- the timestep exceeds `timestep_limit`
- a drone searches the cell in which the person is located
#### Info:
Info is a dictionary with a single key, "Found", holding a boolean value. The value starts as `False` and changes to `True` once any drone finds the shipwrecked person, so it serves as an indicator of whether the person was found. Before the person is found the dictionary is `{"Found": False}`; once the person is found it becomes `{"Found": True}`.
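Inside a step loop, the flag can be checked directly:

```python
# After a step, report success as soon as the person is found.
if info["Found"]:
    print("A drone has found the shipwrecked person.")
```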
### `env.get_agents`:
`env.get_agents()` returns a list of all the agents initialized in the scene; you can use it to confirm that all the drones exist in the environment. For example, `['drone0', 'drone1', 'drone2', 'drone3', 'drone4', 'drone5', 'drone6', 'drone7', 'drone8', 'drone9']` in an environment with 10 drones.
### `env.close`:
`env.close()` simply closes the render window. It is not required, but can be called once rendering is no longer needed.