pyqlearning


Namepyqlearning JSON
Version 1.2.7 PyPI version JSON
download
home_pagehttps://github.com/accel-brain/accel-brain-code/tree/master/Reinforcement-Learning
Summarypyqlearning is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.
upload_time2023-12-26 03:00:08
maintainer
docs_urlNone
authoraccel-brain
requires_python
licenseGPL2
keywords q-learning deep q-network dqn reinforcement learning boltzmann multi-agent lstm cnn convolution
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
            # Reinforcement Learning Library: pyqlearning

`pyqlearning` is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.

This library makes it possible to design the information search algorithm such as the Game AI, web crawlers, or Robotics. But this library provides components for designers, not for end-users of state-of-the-art black boxes. Briefly speaking the philosophy of this library, *give user hype-driven blackboxes and you feed him for a day; show him how to design algorithms and you feed him for a lifetime.* So algorithm is power.

<div align="center">
    <table style="border: none;">
        <tr>
            <td width="45%" align="center">
            <p><a href="https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_single_agent_goal_compressed-loop.gif" /></a></p>
            <p>Deep Reinforcement Learning (Deep Q-Network: DQN) to solve Maze.</p>
            </td>
            <td width="45%" align="center">
            <p><a href="https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_goal_enemy_2-compressed.gif" /></a></p>
            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game.</p>
            </td>
        </tr>
    </table>
</div>

## Installation

Install using pip:

```sh
pip install pyqlearning
```

### Source code

The source code is currently hosted on GitHub.

- [accel-brain-code/Reinforcement-Learning](https://github.com/chimera0/accel-brain-code/tree/master/Reinforcement-Learning)

### Python package index(PyPI)

Installers for the latest released version are available at the Python package index.

- [pyqlearning : Python Package Index](https://pypi.python.org/pypi/pyqlearning/)

### Dependencies

- [numpy](https://github.com/numpy/numpy): v1.13.3 or higher.
- [pandas](https://github.com/pandas-dev/pandas): v0.22.0 or higher.

#### Option

- [accel-brain-base](https://github.com/accel-brain/accel-brain-code/tree/master/Accel-Brain-Base): v1.0.0 or higher.
    * Only if you want to implement the *Deep* Reinforcement Learning.
- [mxnet](https://github.com/apache/incubator-mxnet) or [mxnet-cu*](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/crash-course/6-use_gpus.html): latest.
  * Only when building a model of this library using [Apache MXNet](https://mxnet.apache.org/).
- [torch](https://pytorch.org/get-started/locally/)
  * Only when building a model of this library using [PyTorch](https://pytorch.org/).

## Documentation

Full documentation is available on [https://code.accel-brain.com/Reinforcement-Learning/](https://code.accel-brain.com/Reinforcement-Learning/) . This document contains information on functionally reusability, functional scalability and functional extensibility.

## Description

`pyqlearning` is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.

This library provides components for designers, not for end-users of state-of-the-art black boxes. Reinforcement learning algorithms are highly variable because they must design single or multi-agent behavior depending on their problem setup. Designers of algorithms and architectures are required to design according to the situation at each occasion. Commonization and commoditization for end users who want easy-to-use tools is not easy. Nonetheless, commonality / variability analysis and object-oriented analysis are not impossible. I am convinced that a designer who can *practice* *abstraction* of concepts by *drawing a distinction* of concepts related to his/her *own concrete problem settings* makes it possible to distinguish commonality and variability of various Reinforcement Learning algorithms.

### The commonality/variability of Epsilon Greedy Q-Leanring and Boltzmann Q-Learning

According to the Reinforcement Learning problem settings, Q-Learning is a kind of **Temporal Difference learning(TD Learning)** that can be considered as hybrid of **Monte Carlo** method and **Dynamic Programming** method. As Monte Carlo method, TD Learning algorithm can learn by experience without model of environment. And this learning algorithm is functional extension of bootstrap method as Dynamic Programming Method.

In this library, Q-Learning can be distinguished into **Epsilon Greedy Q-Leanring** and **Boltzmann Q-Learning**. These algorithm is functionally equivalent but their structures should be conceptually distinguished.

Epsilon Greedy Q-Leanring algorithm is a typical off-policy algorithm. In this paradigm, *stochastic* searching and *deterministic* searching can coexist by hyperparameter <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/epsilon.gif" /> that is probability that agent searches greedy. Greedy searching is *deterministic* in the sense that policy of agent follows the selection that maximizes the Q-Value.

Boltzmann Q-Learning algorithm is based on Boltzmann action selection mechanism, where the probability
<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/x_i.gif" /> of selecting the action <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif" /> is given by

<!-- $$x_i(t) = \frac{e^{\frac{Q_i(t)}{T}}}{\sum_{k}^{ } e^{\frac{Q_i(t)}{T}}} \ \  (i = 1, 2, ..., n)$$ -->
<div><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/boltzmann_action_selection.gif" /></div>

where the temperature <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_gt_0.gif" /> controls exploration/exploitation tradeoff. For <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_to_0.gif" /> the agent always acts greedily and chooses the strategy corresponding to the maximum Q–value, so as to be pure *deterministic* exploitation, whereas for <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_to_infty.gif" /> the agent’s strategy is completely random, so as to be pure *stochastic* exploration.

### Commonality/variability of Q-learning models

Considering many variable parts and functional extensions in the Q-learning paradigm from perspective of *commonality/variability analysis* in order to practice object-oriented design, this library provides abstract class that defines the skeleton of a Q-Learning algorithm in an operation, deferring some steps in concrete variant algorithms such as Epsilon Greedy Q-Leanring and Boltzmann Q-Learning to client subclasses. The abstract class in this library lets subclasses redefine certain steps of a Q-Learning algorithm without changing the algorithm's structure.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/class_diagram_pyqleanring_QLearning.png" />

Typical concepts such as `State`, `Action`, `Reward`, and `Q-Value` in Q-learning models should be refered as viewpoints for distinguishing between *commonality* and *variability*. Among the functions related to these concepts, the class `QLearning` is responsible for more *common* attributes and behaviors. On the other hand, in relation to *your* concrete problem settings, more *variable* elements have to be implemented by subclasses such as `YourGreedyQLearning` or `YourBoltzmannQLearning`.

For more detailed specification of this template method, refer to API documentation: [pyqlearning.q_learning module](https://code.accel-brain.com/Reinforcement-Learning/pyqlearning.html#module-pyqlearning.q_learning). If you want to know the samples of implemented code, see [demo/](https://github.com/chimera0/accel-brain-code/tree/master/Reinforcement-Learning/demo). 

### Structural extension: Deep Reinforcement Learning

The Reinforcement learning theory presents several issues from a perspective of deep learning theory(Mnih, V., et al. 2013). Firstly, deep learning applications have required large amounts of hand-labelled training data. Reinforcement learning algorithms, on the other hand, must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed.

The difference between the two theories is not only the type of data but also the timing to be observed. The delay between taking actions and receiving rewards, which can be thousands of timesteps long, seems particularly daunting when compared to the direct association between inputs and targets found in supervised learning.

Another issue is that deep learning algorithms assume the data samples to be independent, while in reinforcement learning one typically encounters sequences of highly correlated states. Furthermore, in Reinforcement learning, the data distribution changes as the algorithm learns new behaviours, presenting aspects of *recursive learning*, which can be problematic for deep learning methods that assume a fixed underlying distribution.

#### Generalisation, or a function approximation

This library considers problem setteing in which an agent interacts with an environment <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mathcal_E.png" />, in a sequence of actions, observations and rewards. At each time-step the agent selects an action at from the set of possible actions, <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/A_1_K.png" />. The state/action-value function is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_s_a.png" />.

The goal of the agent is to interact with the <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mathcal_E.png" /> by selecting actions in a way that maximises future rewards. We can make the standard assumption that future rewards are discounted by a factor of $\gamma$ per time-step, and define the future discounted return at time <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t.png" /> as 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/r_t_sum_t_t_T_gamma.png" />, 

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Tt.png" /> is the time-step at which the agent will reach the goal. This library defines the optimal state/action-value function <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_a.png" /> as the maximum expected return achievable by following any strategy, after seeing some state <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/s.png" /> and then taking some action <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a.png" />, 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_a_max_pi_E.png" />, 

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pi.png" /> is a policy mapping sequences to actions (or distributions over actions). 

The optimal state/action-value function obeys an important identity known as the Bellman equation. This is based on the following *intuition*: if the optimal value <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_d_a_d.png" /> of the sequence <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/s_d.png" /> at the next time-step was known for all possible actions <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a_d.png" />, then the optimal strategy is to select the action <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a_d.png" /> maximising the expected value of 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/r_gamma_Q_ast_s_d_a_d.png" />, 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_d_a_d_mathbb_E_s_d_sim_mathcal_E.png" />.

The basic idea behind many reinforcement learning algorithms is to estimate the state/action-value function, by using the Bellman equation as an iterative update,

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_i_1_s_a_mathbb_E_r_gamma_max_a_d.png" />.

Such *value iteration algorithms* converge to the optimal state/action-value function, <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_i_rightarrow_Q_ast.png" /> as <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i_rightarrow_infty.png" />. 

But increasing the complexity of states/actions is equivalent to increasing the number of combinations of states/actions. If the value function is continuous and granularities of states/actions are extremely fine, the combinatorial explosion will be encountered. In other words, this basic approach is totally impractical, because the state/action-value function is estimated separately for each sequence, without any **generalisation**. Instead, it is common to use a **function approximator** to estimate the state/action-value function,

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_s_a_theta_approx_Q_ast_s_a.png" />

So the Reduction of complexities is required.

### Deep Q-Network

In this problem setting, the function of nerual network or deep learning is a function approximation with weights <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/theta.png" /> as a Q-Network. A Q-Network can be trained by minimising a loss functions <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i.png" /> that changes at each iteration <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.png" />,

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i_mathbb_E_s_a_sim_rho_cdot.png" />

where 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/y_i_mathbb_E_s_d_sim_mathcal_E_r_gamma_max_a_d.png" />

is the target for iteration <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.png" /> and <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/rho_cdot.png" /> is a so-called behaviour distribution. This is probability distribution over states and actions. The parameters from the previous iteration <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/theta_i_1.png" /> are held fixed when optimising the loss function <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i.png" />. Differentiating the loss function with respect to the weights we arrive at the following gradient,

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/nabla_theta_i_L_i_theta_i_mathbb_E_s_a_sim_rho_cdot.png" />

## Tutorial: Maze Solving and the pursuit-evasion game by Deep Q-Network (Jupyter notebook)

[demo/search_maze_by_deep_q_network.ipynb](https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb) is a Jupyter notebook which demonstrates a maze solving algorithm based on Deep Q-Network, rigidly coupled with Deep Convolutional Neural Networks(Deep CNNs). The function of the Deep Learning is **generalisation** and CNNs is-a **function approximator**. In this notebook, several functional equivalents such as CNN and LSTM can be compared from a functional point of view.

<div align="center">
    <p><a href="https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_single_agent_goal_compressed-loop.gif" /></a></p>
    <p>Deep Reinforcement Learning to solve the Maze.</p>
</div>

* Black squares represent a wall.
* Light gray squares represent passages.
* A dark gray square represents a start point.
* A white squeare represents a goal point.

### The pursuit-evasion game

Expanding the search problem of the maze makes it possible to describe the pursuit-evasion game that is a family of problems in mathematics and computer science in which one group attempts to track down members of another group in an environment.

This problem can be re-described as the multi-agent control problem, which involves decomposing the global system state into an image like representation with information encoded in separate channels. This reformulation allows us to use convolutional neural networks to efficiently extract important features from the image-like state.

Egorov, M. (2016) and Gupta, J. K. et al.(2017) proposed new algorithm which uses the image-like state representation of the multi-agent system as an input, and outputs the estimated Q-values for the agent in question. They described a number of implementation contributions that make training efficient and allow agents to learn directly from the behavior of other agents in the system.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/multi_agent_q_learning_and_channels_big.png" />
<p><cite><a href="https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf" target="_blank">Egorov, M. (2016). Multi-agent deep reinforcement learning., p4.</a></cite></p>

An important aspect of this data modeling is that by expressing each state of the multi-agent as channels, it is possible to enclose states of all the agents as **a target of convolution operation all at once**. By the affine transformation executed by the neural network, combinations of an enormous number of states of multi-agent can be computed in principle with an allowable range of memory.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/multi_agent_q_learning_and_cnn_model_big.png" />
<p><cite><a href="https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf" target="_blank">Egorov, M. (2016). Multi-agent deep reinforcement learning., p4.</a></cite></p>

[demo/multi_agent_maze_by_deep_q_network.ipynb](https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb) also prototypes Multi Agent Deep Q-Network to solve the pursuit-evasion game based on the image-like state representation of the multi-agent.

<div align="center">
    <table style="border: none;">
        <tr>
            <td width="45%" align="center">
            <p><a href="https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_crash_enemy_2-compressed.gif" /></a></p>
            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game. The player is caught by enemies.</p>
            </td>
            <td width="45%" align="center">
            <p><a href="https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_goal_enemy_2-compressed.gif" /></a></p>
            <p>
            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game. The player reaches the goal.</p>
            </td>
        </tr>
    </table>
</div>

* Black squares represent a wall.
* Light gray squares represent passages.
* A dark gray square represents a start point.
* Moving dark gray squares represent enemies.
* A white squeare represents a goal point.

## Tutorial: Complexity of Hyperparameters, or how can be hyperparameters decided?

There are many hyperparameters that we have to set before the actual searching and learning process begins. Each parameter should be decided in relation to Deep/Reinforcement Learning theory and it cause side effects in training model. Because of this complexity of hyperparameters, so-called the hyperparameter tuning must become a burden of Data scientists and R & D engineers from the perspective of not only a theoretical point of view but also implementation level.

### Combinatorial optimization problem and Simulated Annealing.

This issue can be considered as **Combinatorial optimization problem** which is an optimization problem, where an optimal solution has to be identified from a finite set of solutions. The solutions are normally discrete or can be converted into discrete. This is an important topic studied in operations research such as software engineering, artificial intelligence(AI), and machine learning. For instance, travelling sales man problem is one of the popular combinatorial optimization problem.

In this problem setting, this library provides an Annealing Model to search optimal combination of hyperparameters. For instance, **Simulated Annealing** is a probabilistic single solution based search method inspired by the annealing process in metallurgy. Annealing is a physical process referred to as tempering certain alloys of metal, glass, or crystal by heating above its melting point, holding its temperature, and then cooling it very slowly until it solidifies into a perfect crystalline structure. The simulation of this process is known as simulated annealing.

### Functional comparison.

[demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb) is a Jupyter notebook which demonstrates a very simple classification problem: Recognizing hand-written digits, in which the aim is to assign each input vector to one of a finite number of discrete categories, to learn observed data points from already labeled data how to predict the class of unlabeled data. In the usecase of hand-written digits dataset, the task is to predict, given an image, which digit it represents.

There are many structural extensions and functional equivalents of **Simulated Annealing**. For instance, **Adaptive Simulated Annealing**, also known as the very fast simulated reannealing, is a very efficient version of simulated annealing. And **Quantum Monte Carlo**, which is generally known a stochastic method to solve the Schrödinger equation, is one of the earliest types of solution in order to simulate the **Quantum Annealing** in classical computer. In summary, one of the function of this algorithm is to solve the ground state search problem which is known as logically equivalent to combinatorial optimization problem. Then this Jupyter notebook demonstrates functional comparison in the same problem setting.

## Demonstration: Epsilon Greedy Q-Learning and Simulated Annealing.

Import python modules.

```python
from pyqlearning.annealingmodel.costfunctionable.greedy_q_learning_cost import GreedyQLearningCost
from pyqlearning.annealingmodel.simulated_annealing import SimulatedAnnealing
# See demo/demo_maze_greedy_q_learning.py
from demo.demo_maze_greedy_q_learning import MazeGreedyQLearning
```

The class `GreedyQLearningCost` is implemented the interface `CostFunctionable` to be called by `AnnealingModel`. This cost function is defined by

<div><img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/q_cost.gif"></div>

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n_search.gif"> is the number of searching(learning) and L is a limit of <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n_search.gif">. 

Like Monte Carlo method, let us draw random samples from a normal (Gaussian) or unifrom distribution.

```python
# Epsilon-Greedy rate in Epsilon-Greedy-Q-Learning.
greedy_rate_arr = np.random.normal(loc=0.5, scale=0.1, size=100)
# Alpha value in Q-Learning.
alpha_value_arr = np.random.normal(loc=0.5, scale=0.1, size=100)
# Gamma value in Q-Learning.
gamma_value_arr = np.random.normal(loc=0.5, scale=0.1, size=100)
# Limit of the number of Learning(searching).
limit_arr = np.random.normal(loc=10, scale=1, size=100)

var_arr = np.c_[greedy_rate_arr, alpha_value_arr, gamma_value_arr, limit_arr]
```

Instantiate and initialize `MazeGreedyQLearning` which is-a `GreedyQLearning`.

```python
# Instantiation.
greedy_q_learning = MazeGreedyQLearning()
greedy_q_learning.initialize(hoge=fuga)
```

Instantiate `GreedyQLearningCost` which is implemented the interface `CostFunctionable` to be called by `AnnealingModel`.

```python
init_state_key = ("Some", "data")
cost_functionable = GreedyQLearningCost(
    greedy_q_learning, 
    init_state_key=init_state_key
)
```

Instantiate `SimulatedAnnealing` which is-a `AnnealingModel`.

```python
annealing_model = SimulatedAnnealing(
    # is-a `CostFunctionable`.
    cost_functionable=cost_functionable,
    # The number of annealing cycles.
    cycles_num=5,
    # The number of trials of searching per a cycle.
    trials_per_cycle=3
)
```

Fit the `var_arr` to `annealing_model`.

```python
annealing_model.var_arr = var_arr
```

Start annealing.

```python
annealing_model.annealing()
```

To extract result of searching, call the property `predicted_log_list` which is list of tuple: `(Cost, Delta energy, Mean of delta energy, probability in Boltzmann distribution, accept flag)`. And refer the property `x` which is `np.ndarray` that has combination of hyperparameters. The optimal combination can be extracted as follow.

```python
# Extract list: [(Cost, Delta energy, Mean of delta energy, probability, accept)]
predicted_log_arr = annealing_model.predicted_log_arr

# [greedy rate, Alpha value, Gamma value, Limit of the number of searching.]
min_e_v_arr = annealing_model.var_arr[np.argmin(predicted_log_arr[:, 2])]
```

### Contingency of definitions

The above definition of cost function is possible option: not necessity but contingent from the point of view of modal logic. You should questions the necessity of definition and re-define, for designing the implementation of interface `CostFunctionable`, in relation to *your* problem settings.

## Demonstration: Epsilon Greedy Q-Learning and Adaptive Simulated Annealing.

There are various Simulated Annealing such as Boltzmann Annealing, Adaptive Simulated Annealing(SAS), and Quantum Simulated Annealing. On the premise of Combinatorial optimization problem, these annealing methods can be considered as functionally equivalent. The *Commonality/Variability* in these methods are able to keep responsibility of objects all straight as the class diagram below indicates.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/class_diagram_annealing_model.png" />

### Code sample.

`AdaptiveSimulatedAnnealing` is-a subclass of `SimulatedAnnealing`. The *variability* is aggregated in the method `AdaptiveSimulatedAnnealing.adaptive_set()` which must be called before executing `AdaptiveSimulatedAnnealing.annealing()`.

```python
from pyqlearning.annealingmodel.simulatedannealing.adaptive_simulated_annealing import AdaptiveSimulatedAnnealing

annealing_model = AdaptiveSimulatedAnnealing(
    cost_functionable=cost_functionable,
    cycles_num=33,
    trials_per_cycle=3,
    accepted_sol_num=0.0,
    init_prob=0.7,
    final_prob=0.001,
    start_pos=0,
    move_range=3
)

# Variability part.
annealing_model.adaptive_set(
    # How often will this model reanneals there per cycles.
    reannealing_per=50,
    # Thermostat.
    thermostat=0.,
    # The minimum temperature.
    t_min=0.001,
    # The default temperature.
    t_default=1.0
)
annealing_model.var_arr = params_arr
annealing_model.annealing()
```

To extract result of searching, call the property like the case of using `SimulatedAnnealing`. If you want to know how to visualize the searching process, see my Jupyter notebook: [demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb).

## Demonstration: Epsilon Greedy Q-Learning and Quantum Monte Carlo.

Generally, Quantum Monte Carlo is a stochastic method to solve the Schrödinger equation. This algorithm is one of the earliest types of solution in order to simulate the Quantum Annealing in classical computer. In summary, one of the function of this algorithm is to solve the ground state search problem which is known as logically equivalent to combinatorial optimization problem.

According to theory of spin glasses, the ground state search problem can be described as minimization energy determined by the hamiltonian <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h_0.png" /> as follow

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/hamiltonian_in_ising_model.png" />

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pauli_z_i.png" /> refers to the Pauli spin matrix below for the spin-half particle at lattice point <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif" />. In spin glasses, random value is assigned to <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/j_i_j.png" />. The number of combinations is enormous. If this value is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n.png" />, a trial frequency is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/2_n.png" />. This computation complexity makes it impossible to solve the ground state search problem. Then, in theory of spin glasses, the standard hamiltonian is re-described in expanded form.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/hamiltonian_in_t_ising_model.png" />

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pauli_x_i.png" /> also refers to the Pauli spin matrix and <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/gamma.png" /> is so-called annealing coefficient, which is hyperparameter that contains vely high value. Ising model to follow this Hamiltonian is known as the Transverse Ising model.

In relation to this system, thermal equilibrium amount of a physical quantity <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/q.png?1" /> is as follow.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/langle_q_rangle.png" />

If <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png" /> is a diagonal matrix, then also <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h.png" /> is diagonal matrix. If diagonal element in <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png" /> is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_i.png" />, Each diagonal element is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h_ij_e_i.png" />. However if <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png" /> has off-diagonal elements, It is known that <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h_ij_e_i_neq.png" /> since for any of the exponent <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif" /> we must exponentiate the matrix as follow.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_matrix_infty.png" />

Therefore, a path integration based on Trotter-Suzuki decomposition has been introduced in Quantum Monte Carlo Method. This path integration makes it possible to obtain the partition function <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png" />.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model.png" />

where if <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/m.png" /> is large enough, relational expression below is established.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/exp_left_frac_1_m_beta_h_right.png" /></td></tr>

Then the partition function <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png" /> can be re-descibed as follow.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model_re_described.png" />

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mid_sigma_k_rangle.png" /> is <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/l.png" /> topological products (product spaces). Because <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h_0.png" /> is the diagonal matrix, <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/tilde_sigma_j_z_mid_sigma.png" />.

Therefore, 

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/langle_sigma_k_mid.png" />

The partition function <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png" /> can be re-descibed as follow.

<img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model_re_described_last.png" />

where <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/m.png" /> is the number of trotter.

This relational expression indicates that the quantum - mechanical Hamiltonian in <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/d.png" /> dimentional Tranverse Ising model is functional equivalence to classical Hamiltonian in <img src="https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/d_1.png" /> dimentional Ising model, which means that the state of the quantum - mechanical system can be approximate by the state of classical system.

### Code sample.

```python
from pyqlearning.annealingmodel.quantum_monte_carlo import QuantumMonteCarlo
from pyqlearning.annealingmodel.distancecomputable.cost_as_distance import CostAsDistance

# User defined function which is-a `CostFuntionable`.
cost_functionable = YourCostFunctions()

# Compute cost as distance for `QuantumMonteCarlo`.
distance_computable = CostAsDistance(params_arr, cost_functionable)

# Init.
annealing_model = QuantumMonteCarlo(
    distance_computable=distance_computable,

    # The number of annealing cycles.
    cycles_num=100,

    # Inverse temperature (Beta).
    inverse_temperature_beta=0.1,

    # Gamma. (so-called annealing coefficient.) 
    gammma=1.0,

    # Attenuation rate for simulated time.
    fractional_reduction=0.99,

    # The dimention of Trotter.
    trotter_dimention=10,

    # The number of Monte Carlo steps.
    mc_step=100,

    # The number of parameters which can be optimized.
    point_num=100,

    # Default `np.ndarray` of 2-D spin glass in Ising model.
    spin_arr=None,

    # Tolerance for the optimization.
    # When the ΔE is not improving by at least `tolerance_diff_e`
    # for two consecutive iterations, annealing will stops.
    tolerance_diff_e=0.01
)

# Execute annealing.
annealing_model.annealing()
```

To extract result of searching, call the property like the case of using `SimulatedAnnealing`. If you want to know how to visualize the searching process, see my Jupyter notebook: [demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb).

## References

The basic concepts, theories, and methods behind this library are described in the following books.

<div align="center"><a href="https://www.amazon.co.jp/dp/B08PV4ZQG5/" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/In-house_R_and_D_in_the_era_of_democratization_of_AI/book_cover.jpg" width="160px" /></a>
  <p>『<a href="https://www.amazon.co.jp/dp/B08PV4ZQG5/ref=sr_1_1?dchild=1&qid=1607343553&s=digital-text&sr=1-1&text=%E6%A0%AA%E5%BC%8F%E4%BC%9A%E7%A4%BEAccel+Brain" target="_blank">「AIの民主化」時代の企業内研究開発: 深層学習の「実学」としての機能分析</a>』(Japanese)</p></div>

<br />
  
<div align="center"><a href="https://www.amazon.co.jp/dp/B093Z533LK" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/AI_vs_Investors_as_Noise_Traders/book_cover.jpg" width="160px" /></a>
  <p>『<a href="https://www.amazon.co.jp/dp/B093Z533LK" target="_blank">AI vs. ノイズトレーダーとしての投資家たち: 「アルゴリズム戦争」時代の証券投資戦略</a>』(Japanese)</p></div>

<br />

<div align="center"><a href="https://www.amazon.co.jp/dp/B0994CH3CM" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/Babel_of_Natural_Language_Processing/book_cover.jpg" width="160px" /></a>
  <p>『<a href="https://www.amazon.co.jp/dp/B0994CH3CM" target="_blank">自然言語処理のバベル: 文書自動要約、文章生成AI、チャットボットの意味論</a>』(Japanese)</p></div>

<br />

<div align="center"><a href="https://www.amazon.co.jp/dp/B09C4KYZBX" target="_blank"><img src="https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/Origin_of_the_statistical_machine_learning/book_cover.jpg" width="160px" /></a>
  <p>『<a href="https://www.amazon.co.jp/dp/B09C4KYZBX" target="_blank">統計的機械学習の根源: 熱力学、量子力学、統計力学における天才物理学者たちの神学的な理念</a>』(Japanese)</p></div>


Specific references are the following papers and books.

### Q-Learning models.

- Agrawal, S., & Goyal, N. (2011). Analysis of Thompson sampling for the multi-armed bandit problem. arXiv preprint arXiv:1111.1797.
- Bubeck, S., & Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721.
- Chapelle, O., & Li, L. (2011). An empirical evaluation of thompson sampling. In Advances in neural information processing systems (pp. 2249-2257).
- Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics (p. 434). New York City: Springer.
- Kaufmann, E., Cappe, O., & Garivier, A. (2012). On Bayesian upper confidence bounds for bandit problems. In International Conference on Artificial Intelligence and Statistics (pp. 592-600).
- Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
- Richard Sutton and Andrew Barto (1998). Reinforcement Learning. MIT Press.
- Watkins, C. J. C. H. (1989). Learning from delayed rewards (Doctoral dissertation, University of Cambridge).
- Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine learning, 8(3-4), 279-292.
- White, J. (2012). Bandit algorithms for website optimization. ” O’Reilly Media, Inc.”.

### Deep Q-Network models.

- Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
- <a href="https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf" target="_blank">Egorov, M. (2016). Multi-agent deep reinforcement learning.</a>
- Gupta, J. K., Egorov, M., & Kochenderfer, M. (2017, May). Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (pp. 66-83). Springer, Cham.
- Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.
- Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.
- Sainath, T. N., Vinyals, O., Senior, A., & Sak, H. (2015, April). Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4580-4584). IEEE.
- Xingjian, S. H. I., Chen, Z., Wang, H., Yeung, D. Y., Wong, W. K., & Woo, W. C. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems (pp. 802-810).
- Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.

### Annealing models.

- Bektas, T. (2006). The multiple traveling salesman problem: an overview of formulations and solution procedures. Omega, 34(3), 209-219.
- Bertsimas, D., & Tsitsiklis, J. (1993). Simulated annealing. Statistical science, 8(1), 10-15.
- Das, A., & Chakrabarti, B. K. (Eds.). (2005). Quantum annealing and related optimization methods (Vol. 679). Springer Science & Business Media.
- Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics. New York City: Springer.
- Edwards, S. F., & Anderson, P. W. (1975). Theory of spin glasses. Journal of Physics F: Metal Physics, 5(5), 965.
- Facchi, P., & Pascazio, S. (2008). Quantum Zeno dynamics: mathematical and physical aspects. Journal of Physics A: Mathematical and Theoretical, 41(49), 493001.
- Heim, B., Rønnow, T. F., Isakov, S. V., & Troyer, M. (2015). Quantum versus classical annealing of Ising spin glasses. Science, 348(6231), 215-217.
- Heisenberg, W. (1925) Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen. Z. Phys. 33, pp.879—893.
- Heisenberg, W. (1927). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift fur Physik, 43, 172-198.
- Heisenberg, W. (1984). The development of quantum mechanics. In Scientific Review Papers, Talks, and Books -Wissenschaftliche Übersichtsartikel, Vorträge und Bücher (pp. 226-237). Springer Berlin Heidelberg.
Hilgevoord, Jan and Uffink, Jos, "The Uncertainty Principle", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/qt-uncertainty/>.
- Jarzynski, C. (1997). Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14), 2690.
- Messiah, A. (1966). Quantum mechanics. 2 (1966). North-Holland Publishing Company.
- Mezard, M., & Montanari, A. (2009). Information, physics, and computation. Oxford University Press.
- Nallusamy, R., Duraiswamy, K., Dhanalaksmi, R., & Parthiban, P. (2009). Optimization of non-linear multiple traveling salesman problem using k-means clustering, shrink wrap algorithm and meta-heuristics. International Journal of Nonlinear Science, 8(4), 480-487.
- Schrödinger, E. (1926). Quantisierung als eigenwertproblem. Annalen der physik, 385(13), S.437-490.
- Somma, R. D., Batista, C. D., & Ortiz, G. (2007). Quantum approach to classical statistical mechanics. Physical review letters, 99(3), 030603.
- 鈴木正. (2008). 「組み合わせ最適化問題と量子アニーリング: 量子断熱発展の理論と性能評価」.,『物性研究』, 90(4): pp598-676. 参照箇所はpp619-624.
- 西森秀稔、大関真之(2018) 『量子アニーリングの基礎』須藤 彰三、岡 真 監修、共立出版、参照箇所はpp9-46.

## Author

- accel-brain

## Author URI

- https://accel-brain.co.jp/
- https://accel-brain.com/

## License

- GNU General Public License v2.0

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/accel-brain/accel-brain-code/tree/master/Reinforcement-Learning",
    "name": "pyqlearning",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "Q-Learning Deep Q-Network DQN Reinforcement Learning Boltzmann Multi-agent LSTM CNN Convolution",
    "author": "accel-brain",
    "author_email": "info@accel-brain.co.jp",
    "download_url": "https://files.pythonhosted.org/packages/cd/6b/0f819a8ea4395fe6d5a00cf05d0465d12f786e04f23bd6f74537ee7c3b18/pyqlearning-1.2.7.tar.gz",
    "platform": null,
    "description": "# Reinforcement Learning Library: pyqlearning\r\n\r\n`pyqlearning` is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.\r\n\r\nThis library makes it possible to design the information search algorithm such as the Game AI, web crawlers, or Robotics. But this library provides components for designers, not for end-users of state-of-the-art black boxes. Briefly speaking the philosophy of this library, *give user hype-driven blackboxes and you feed him for a day; show him how to design algorithms and you feed him for a lifetime.* So algorithm is power.\r\n\r\n<div align=\"center\">\r\n    <table style=\"border: none;\">\r\n        <tr>\r\n            <td width=\"45%\" align=\"center\">\r\n            <p><a href=\"https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_single_agent_goal_compressed-loop.gif\" /></a></p>\r\n            <p>Deep Reinforcement Learning (Deep Q-Network: DQN) to solve Maze.</p>\r\n            </td>\r\n            <td width=\"45%\" align=\"center\">\r\n            <p><a href=\"https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_goal_enemy_2-compressed.gif\" /></a></p>\r\n            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game.</p>\r\n            </td>\r\n        </tr>\r\n    </table>\r\n</div>\r\n\r\n## Installation\r\n\r\nInstall using pip:\r\n\r\n```sh\r\npip install pyqlearning\r\n```\r\n\r\n### Source code\r\n\r\nThe source code is currently hosted on GitHub.\r\n\r\n- [accel-brain-code/Reinforcement-Learning](https://github.com/chimera0/accel-brain-code/tree/master/Reinforcement-Learning)\r\n\r\n### Python package index(PyPI)\r\n\r\nInstallers for the latest released version are available at the Python package index.\r\n\r\n- [pyqlearning : Python Package Index](https://pypi.python.org/pypi/pyqlearning/)\r\n\r\n### Dependencies\r\n\r\n- [numpy](https://github.com/numpy/numpy): v1.13.3 or higher.\r\n- [pandas](https://github.com/pandas-dev/pandas): v0.22.0 or higher.\r\n\r\n#### Option\r\n\r\n- [accel-brain-base](https://github.com/accel-brain/accel-brain-code/tree/master/Accel-Brain-Base): v1.0.0 or higher.\r\n    * Only if you want to implement the *Deep* Reinforcement Learning.\r\n- [mxnet](https://github.com/apache/incubator-mxnet) or [mxnet-cu*](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/crash-course/6-use_gpus.html): latest.\r\n  * Only when building a model of this library using [Apache MXNet](https://mxnet.apache.org/).\r\n- [torch](https://pytorch.org/get-started/locally/)\r\n  * Only when building a model of this library using [PyTorch](https://pytorch.org/).\r\n\r\n## Documentation\r\n\r\nFull documentation is available on [https://code.accel-brain.com/Reinforcement-Learning/](https://code.accel-brain.com/Reinforcement-Learning/) . This document contains information on functionally reusability, functional scalability and functional extensibility.\r\n\r\n## Description\r\n\r\n`pyqlearning` is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.\r\n\r\nThis library provides components for designers, not for end-users of state-of-the-art black boxes. Reinforcement learning algorithms are highly variable because they must design single or multi-agent behavior depending on their problem setup. Designers of algorithms and architectures are required to design according to the situation at each occasion. Commonization and commoditization for end users who want easy-to-use tools is not easy. Nonetheless, commonality / variability analysis and object-oriented analysis are not impossible. I am convinced that a designer who can *practice* *abstraction* of concepts by *drawing a distinction* of concepts related to his/her *own concrete problem settings* makes it possible to distinguish commonality and variability of various Reinforcement Learning algorithms.\r\n\r\n### The commonality/variability of Epsilon Greedy Q-Leanring and Boltzmann Q-Learning\r\n\r\nAccording to the Reinforcement Learning problem settings, Q-Learning is a kind of **Temporal Difference learning(TD Learning)** that can be considered as hybrid of **Monte Carlo** method and **Dynamic Programming** method. As Monte Carlo method, TD Learning algorithm can learn by experience without model of environment. And this learning algorithm is functional extension of bootstrap method as Dynamic Programming Method.\r\n\r\nIn this library, Q-Learning can be distinguished into **Epsilon Greedy Q-Leanring** and **Boltzmann Q-Learning**. These algorithm is functionally equivalent but their structures should be conceptually distinguished.\r\n\r\nEpsilon Greedy Q-Leanring algorithm is a typical off-policy algorithm. In this paradigm, *stochastic* searching and *deterministic* searching can coexist by hyperparameter <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/epsilon.gif\" /> that is probability that agent searches greedy. Greedy searching is *deterministic* in the sense that policy of agent follows the selection that maximizes the Q-Value.\r\n\r\nBoltzmann Q-Learning algorithm is based on Boltzmann action selection mechanism, where the probability\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/x_i.gif\" /> of selecting the action <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif\" /> is given by\r\n\r\n<!-- $$x_i(t) = \\frac{e^{\\frac{Q_i(t)}{T}}}{\\sum_{k}^{ } e^{\\frac{Q_i(t)}{T}}} \\ \\  (i = 1, 2, ..., n)$$ -->\r\n<div><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/boltzmann_action_selection.gif\" /></div>\r\n\r\nwhere the temperature <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_gt_0.gif\" /> controls exploration/exploitation tradeoff. For <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_to_0.gif\" /> the agent always acts greedily and chooses the strategy corresponding to the maximum Q\u2013value, so as to be pure *deterministic* exploitation, whereas for <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t_to_infty.gif\" /> the agent\u2019s strategy is completely random, so as to be pure *stochastic* exploration.\r\n\r\n### Commonality/variability of Q-learning models\r\n\r\nConsidering many variable parts and functional extensions in the Q-learning paradigm from perspective of *commonality/variability analysis* in order to practice object-oriented design, this library provides abstract class that defines the skeleton of a Q-Learning algorithm in an operation, deferring some steps in concrete variant algorithms such as Epsilon Greedy Q-Leanring and Boltzmann Q-Learning to client subclasses. The abstract class in this library lets subclasses redefine certain steps of a Q-Learning algorithm without changing the algorithm's structure.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/class_diagram_pyqleanring_QLearning.png\" />\r\n\r\nTypical concepts such as `State`, `Action`, `Reward`, and `Q-Value` in Q-learning models should be refered as viewpoints for distinguishing between *commonality* and *variability*. Among the functions related to these concepts, the class `QLearning` is responsible for more *common* attributes and behaviors. On the other hand, in relation to *your* concrete problem settings, more *variable* elements have to be implemented by subclasses such as `YourGreedyQLearning` or `YourBoltzmannQLearning`.\r\n\r\nFor more detailed specification of this template method, refer to API documentation: [pyqlearning.q_learning module](https://code.accel-brain.com/Reinforcement-Learning/pyqlearning.html#module-pyqlearning.q_learning). If you want to know the samples of implemented code, see [demo/](https://github.com/chimera0/accel-brain-code/tree/master/Reinforcement-Learning/demo). \r\n\r\n### Structural extension: Deep Reinforcement Learning\r\n\r\nThe Reinforcement learning theory presents several issues from a perspective of deep learning theory(Mnih, V., et al. 2013). Firstly, deep learning applications have required large amounts of hand-labelled training data. Reinforcement learning algorithms, on the other hand, must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed.\r\n\r\nThe difference between the two theories is not only the type of data but also the timing to be observed. The delay between taking actions and receiving rewards, which can be thousands of timesteps long, seems particularly daunting when compared to the direct association between inputs and targets found in supervised learning.\r\n\r\nAnother issue is that deep learning algorithms assume the data samples to be independent, while in reinforcement learning one typically encounters sequences of highly correlated states. Furthermore, in Reinforcement learning, the data distribution changes as the algorithm learns new behaviours, presenting aspects of *recursive learning*, which can be problematic for deep learning methods that assume a fixed underlying distribution.\r\n\r\n#### Generalisation, or a function approximation\r\n\r\nThis library considers problem setteing in which an agent interacts with an environment <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mathcal_E.png\" />, in a sequence of actions, observations and rewards. At each time-step the agent selects an action at from the set of possible actions, <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/A_1_K.png\" />. The state/action-value function is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_s_a.png\" />.\r\n\r\nThe goal of the agent is to interact with the <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mathcal_E.png\" /> by selecting actions in a way that maximises future rewards. We can make the standard assumption that future rewards are discounted by a factor of $\\gamma$ per time-step, and define the future discounted return at time <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/t.png\" /> as \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/r_t_sum_t_t_T_gamma.png\" />, \r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Tt.png\" /> is the time-step at which the agent will reach the goal. This library defines the optimal state/action-value function <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_a.png\" /> as the maximum expected return achievable by following any strategy, after seeing some state <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/s.png\" /> and then taking some action <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a.png\" />, \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_a_max_pi_E.png\" />, \r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pi.png\" /> is a policy mapping sequences to actions (or distributions over actions). \r\n\r\nThe optimal state/action-value function obeys an important identity known as the Bellman equation. This is based on the following *intuition*: if the optimal value <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_d_a_d.png\" /> of the sequence <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/s_d.png\" /> at the next time-step was known for all possible actions <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a_d.png\" />, then the optimal strategy is to select the action <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/a_d.png\" /> maximising the expected value of \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/r_gamma_Q_ast_s_d_a_d.png\" />, \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_ast_s_d_a_d_mathbb_E_s_d_sim_mathcal_E.png\" />.\r\n\r\nThe basic idea behind many reinforcement learning algorithms is to estimate the state/action-value function, by using the Bellman equation as an iterative update,\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_i_1_s_a_mathbb_E_r_gamma_max_a_d.png\" />.\r\n\r\nSuch *value iteration algorithms* converge to the optimal state/action-value function, <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_i_rightarrow_Q_ast.png\" /> as <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i_rightarrow_infty.png\" />. \r\n\r\nBut increasing the complexity of states/actions is equivalent to increasing the number of combinations of states/actions. If the value function is continuous and granularities of states/actions are extremely fine, the combinatorial explosion will be encountered. In other words, this basic approach is totally impractical, because the state/action-value function is estimated separately for each sequence, without any **generalisation**. Instead, it is common to use a **function approximator** to estimate the state/action-value function,\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/Q_s_a_theta_approx_Q_ast_s_a.png\" />\r\n\r\nSo the Reduction of complexities is required.\r\n\r\n### Deep Q-Network\r\n\r\nIn this problem setting, the function of nerual network or deep learning is a function approximation with weights <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/theta.png\" /> as a Q-Network. A Q-Network can be trained by minimising a loss functions <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i.png\" /> that changes at each iteration <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.png\" />,\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i_mathbb_E_s_a_sim_rho_cdot.png\" />\r\n\r\nwhere \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/y_i_mathbb_E_s_d_sim_mathcal_E_r_gamma_max_a_d.png\" />\r\n\r\nis the target for iteration <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.png\" /> and <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/rho_cdot.png\" /> is a so-called behaviour distribution. This is probability distribution over states and actions. The parameters from the previous iteration <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/theta_i_1.png\" /> are held fixed when optimising the loss function <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/L_i_theta_i.png\" />. Differentiating the loss function with respect to the weights we arrive at the following gradient,\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/nabla_theta_i_L_i_theta_i_mathbb_E_s_a_sim_rho_cdot.png\" />\r\n\r\n## Tutorial: Maze Solving and the pursuit-evasion game by Deep Q-Network (Jupyter notebook)\r\n\r\n[demo/search_maze_by_deep_q_network.ipynb](https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb) is a Jupyter notebook which demonstrates a maze solving algorithm based on Deep Q-Network, rigidly coupled with Deep Convolutional Neural Networks(Deep CNNs). The function of the Deep Learning is **generalisation** and CNNs is-a **function approximator**. In this notebook, several functional equivalents such as CNN and LSTM can be compared from a functional point of view.\r\n\r\n<div align=\"center\">\r\n    <p><a href=\"https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/search_maze_by_deep_q_network.ipynb\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_single_agent_goal_compressed-loop.gif\" /></a></p>\r\n    <p>Deep Reinforcement Learning to solve the Maze.</p>\r\n</div>\r\n\r\n* Black squares represent a wall.\r\n* Light gray squares represent passages.\r\n* A dark gray square represents a start point.\r\n* A white squeare represents a goal point.\r\n\r\n### The pursuit-evasion game\r\n\r\nExpanding the search problem of the maze makes it possible to describe the pursuit-evasion game that is a family of problems in mathematics and computer science in which one group attempts to track down members of another group in an environment.\r\n\r\nThis problem can be re-described as the multi-agent control problem, which involves decomposing the global system state into an image like representation with information encoded in separate channels. This reformulation allows us to use convolutional neural networks to efficiently extract important features from the image-like state.\r\n\r\nEgorov, M. (2016) and Gupta, J. K. et al.(2017) proposed new algorithm which uses the image-like state representation of the multi-agent system as an input, and outputs the estimated Q-values for the agent in question. They described a number of implementation contributions that make training efficient and allow agents to learn directly from the behavior of other agents in the system.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/multi_agent_q_learning_and_channels_big.png\" />\r\n<p><cite><a href=\"https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf\" target=\"_blank\">Egorov, M. (2016). Multi-agent deep reinforcement learning., p4.</a></cite></p>\r\n\r\nAn important aspect of this data modeling is that by expressing each state of the multi-agent as channels, it is possible to enclose states of all the agents as **a target of convolution operation all at once**. By the affine transformation executed by the neural network, combinations of an enormous number of states of multi-agent can be computed in principle with an allowable range of memory.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/multi_agent_q_learning_and_cnn_model_big.png\" />\r\n<p><cite><a href=\"https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf\" target=\"_blank\">Egorov, M. (2016). Multi-agent deep reinforcement learning., p4.</a></cite></p>\r\n\r\n[demo/multi_agent_maze_by_deep_q_network.ipynb](https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb) also prototypes Multi Agent Deep Q-Network to solve the pursuit-evasion game based on the image-like state representation of the multi-agent.\r\n\r\n<div align=\"center\">\r\n    <table style=\"border: none;\">\r\n        <tr>\r\n            <td width=\"45%\" align=\"center\">\r\n            <p><a href=\"https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_crash_enemy_2-compressed.gif\" /></a></p>\r\n            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game. The player is caught by enemies.</p>\r\n            </td>\r\n            <td width=\"45%\" align=\"center\">\r\n            <p><a href=\"https://github.com/accel-brain/accel-brain-code/blob/master/Reinforcement-Learning/demo/multi_agent_maze_by_deep_q_network.ipynb\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/DQN_multi_agent_demo_goal_enemy_2-compressed.gif\" /></a></p>\r\n            <p>\r\n            <p>Multi-agent Deep Reinforcement Learning to solve the pursuit-evasion game. The player reaches the goal.</p>\r\n            </td>\r\n        </tr>\r\n    </table>\r\n</div>\r\n\r\n* Black squares represent a wall.\r\n* Light gray squares represent passages.\r\n* A dark gray square represents a start point.\r\n* Moving dark gray squares represent enemies.\r\n* A white squeare represents a goal point.\r\n\r\n## Tutorial: Complexity of Hyperparameters, or how can be hyperparameters decided?\r\n\r\nThere are many hyperparameters that we have to set before the actual searching and learning process begins. Each parameter should be decided in relation to Deep/Reinforcement Learning theory and it cause side effects in training model. Because of this complexity of hyperparameters, so-called the hyperparameter tuning must become a burden of Data scientists and R & D engineers from the perspective of not only a theoretical point of view but also implementation level.\r\n\r\n### Combinatorial optimization problem and Simulated Annealing.\r\n\r\nThis issue can be considered as **Combinatorial optimization problem** which is an optimization problem, where an optimal solution has to be identified from a finite set of solutions. The solutions are normally discrete or can be converted into discrete. This is an important topic studied in operations research such as software engineering, artificial intelligence(AI), and machine learning. For instance, travelling sales man problem is one of the popular combinatorial optimization problem.\r\n\r\nIn this problem setting, this library provides an Annealing Model to search optimal combination of hyperparameters. For instance, **Simulated Annealing** is a probabilistic single solution based search method inspired by the annealing process in metallurgy. Annealing is a physical process referred to as tempering certain alloys of metal, glass, or crystal by heating above its melting point, holding its temperature, and then cooling it very slowly until it solidifies into a perfect crystalline structure. The simulation of this process is known as simulated annealing.\r\n\r\n### Functional comparison.\r\n\r\n[demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb) is a Jupyter notebook which demonstrates a very simple classification problem: Recognizing hand-written digits, in which the aim is to assign each input vector to one of a finite number of discrete categories, to learn observed data points from already labeled data how to predict the class of unlabeled data. In the usecase of hand-written digits dataset, the task is to predict, given an image, which digit it represents.\r\n\r\nThere are many structural extensions and functional equivalents of **Simulated Annealing**. For instance, **Adaptive Simulated Annealing**, also known as the very fast simulated reannealing, is a very efficient version of simulated annealing. And **Quantum Monte Carlo**, which is generally known a stochastic method to solve the Schr\u00f6dinger equation, is one of the earliest types of solution in order to simulate the **Quantum Annealing** in classical computer. In summary, one of the function of this algorithm is to solve the ground state search problem which is known as logically equivalent to combinatorial optimization problem. Then this Jupyter notebook demonstrates functional comparison in the same problem setting.\r\n\r\n## Demonstration: Epsilon Greedy Q-Learning and Simulated Annealing.\r\n\r\nImport python modules.\r\n\r\n```python\r\nfrom pyqlearning.annealingmodel.costfunctionable.greedy_q_learning_cost import GreedyQLearningCost\r\nfrom pyqlearning.annealingmodel.simulated_annealing import SimulatedAnnealing\r\n# See demo/demo_maze_greedy_q_learning.py\r\nfrom demo.demo_maze_greedy_q_learning import MazeGreedyQLearning\r\n```\r\n\r\nThe class `GreedyQLearningCost` is implemented the interface `CostFunctionable` to be called by `AnnealingModel`. This cost function is defined by\r\n\r\n<div><img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/q_cost.gif\"></div>\r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n_search.gif\"> is the number of searching(learning) and L is a limit of <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n_search.gif\">. \r\n\r\nLike Monte Carlo method, let us draw random samples from a normal (Gaussian) or unifrom distribution.\r\n\r\n```python\r\n# Epsilon-Greedy rate in Epsilon-Greedy-Q-Learning.\r\ngreedy_rate_arr = np.random.normal(loc=0.5, scale=0.1, size=100)\r\n# Alpha value in Q-Learning.\r\nalpha_value_arr = np.random.normal(loc=0.5, scale=0.1, size=100)\r\n# Gamma value in Q-Learning.\r\ngamma_value_arr = np.random.normal(loc=0.5, scale=0.1, size=100)\r\n# Limit of the number of Learning(searching).\r\nlimit_arr = np.random.normal(loc=10, scale=1, size=100)\r\n\r\nvar_arr = np.c_[greedy_rate_arr, alpha_value_arr, gamma_value_arr, limit_arr]\r\n```\r\n\r\nInstantiate and initialize `MazeGreedyQLearning` which is-a `GreedyQLearning`.\r\n\r\n```python\r\n# Instantiation.\r\ngreedy_q_learning = MazeGreedyQLearning()\r\ngreedy_q_learning.initialize(hoge=fuga)\r\n```\r\n\r\nInstantiate `GreedyQLearningCost` which is implemented the interface `CostFunctionable` to be called by `AnnealingModel`.\r\n\r\n```python\r\ninit_state_key = (\"Some\", \"data\")\r\ncost_functionable = GreedyQLearningCost(\r\n    greedy_q_learning, \r\n    init_state_key=init_state_key\r\n)\r\n```\r\n\r\nInstantiate `SimulatedAnnealing` which is-a `AnnealingModel`.\r\n\r\n```python\r\nannealing_model = SimulatedAnnealing(\r\n    # is-a `CostFunctionable`.\r\n    cost_functionable=cost_functionable,\r\n    # The number of annealing cycles.\r\n    cycles_num=5,\r\n    # The number of trials of searching per a cycle.\r\n    trials_per_cycle=3\r\n)\r\n```\r\n\r\nFit the `var_arr` to `annealing_model`.\r\n\r\n```python\r\nannealing_model.var_arr = var_arr\r\n```\r\n\r\nStart annealing.\r\n\r\n```python\r\nannealing_model.annealing()\r\n```\r\n\r\nTo extract result of searching, call the property `predicted_log_list` which is list of tuple: `(Cost, Delta energy, Mean of delta energy, probability in Boltzmann distribution, accept flag)`. And refer the property `x` which is `np.ndarray` that has combination of hyperparameters. The optimal combination can be extracted as follow.\r\n\r\n```python\r\n# Extract list: [(Cost, Delta energy, Mean of delta energy, probability, accept)]\r\npredicted_log_arr = annealing_model.predicted_log_arr\r\n\r\n# [greedy rate, Alpha value, Gamma value, Limit of the number of searching.]\r\nmin_e_v_arr = annealing_model.var_arr[np.argmin(predicted_log_arr[:, 2])]\r\n```\r\n\r\n### Contingency of definitions\r\n\r\nThe above definition of cost function is possible option: not necessity but contingent from the point of view of modal logic. You should questions the necessity of definition and re-define, for designing the implementation of interface `CostFunctionable`, in relation to *your* problem settings.\r\n\r\n## Demonstration: Epsilon Greedy Q-Learning and Adaptive Simulated Annealing.\r\n\r\nThere are various Simulated Annealing such as Boltzmann Annealing, Adaptive Simulated Annealing(SAS), and Quantum Simulated Annealing. On the premise of Combinatorial optimization problem, these annealing methods can be considered as functionally equivalent. The *Commonality/Variability* in these methods are able to keep responsibility of objects all straight as the class diagram below indicates.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/class_diagram_annealing_model.png\" />\r\n\r\n### Code sample.\r\n\r\n`AdaptiveSimulatedAnnealing` is-a subclass of `SimulatedAnnealing`. The *variability* is aggregated in the method `AdaptiveSimulatedAnnealing.adaptive_set()` which must be called before executing `AdaptiveSimulatedAnnealing.annealing()`.\r\n\r\n```python\r\nfrom pyqlearning.annealingmodel.simulatedannealing.adaptive_simulated_annealing import AdaptiveSimulatedAnnealing\r\n\r\nannealing_model = AdaptiveSimulatedAnnealing(\r\n    cost_functionable=cost_functionable,\r\n    cycles_num=33,\r\n    trials_per_cycle=3,\r\n    accepted_sol_num=0.0,\r\n    init_prob=0.7,\r\n    final_prob=0.001,\r\n    start_pos=0,\r\n    move_range=3\r\n)\r\n\r\n# Variability part.\r\nannealing_model.adaptive_set(\r\n    # How often will this model reanneals there per cycles.\r\n    reannealing_per=50,\r\n    # Thermostat.\r\n    thermostat=0.,\r\n    # The minimum temperature.\r\n    t_min=0.001,\r\n    # The default temperature.\r\n    t_default=1.0\r\n)\r\nannealing_model.var_arr = params_arr\r\nannealing_model.annealing()\r\n```\r\n\r\nTo extract result of searching, call the property like the case of using `SimulatedAnnealing`. If you want to know how to visualize the searching process, see my Jupyter notebook: [demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb).\r\n\r\n## Demonstration: Epsilon Greedy Q-Learning and Quantum Monte Carlo.\r\n\r\nGenerally, Quantum Monte Carlo is a stochastic method to solve the Schr\u00f6dinger equation. This algorithm is one of the earliest types of solution in order to simulate the Quantum Annealing in classical computer. In summary, one of the function of this algorithm is to solve the ground state search problem which is known as logically equivalent to combinatorial optimization problem.\r\n\r\nAccording to theory of spin glasses, the ground state search problem can be described as minimization energy determined by the hamiltonian <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h_0.png\" /> as follow\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/hamiltonian_in_ising_model.png\" />\r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pauli_z_i.png\" /> refers to the Pauli spin matrix below for the spin-half particle at lattice point <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif\" />. In spin glasses, random value is assigned to <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/j_i_j.png\" />. The number of combinations is enormous. If this value is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/n.png\" />, a trial frequency is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/2_n.png\" />. This computation complexity makes it impossible to solve the ground state search problem. Then, in theory of spin glasses, the standard hamiltonian is re-described in expanded form.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/hamiltonian_in_t_ising_model.png\" />\r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/pauli_x_i.png\" /> also refers to the Pauli spin matrix and <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/gamma.png\" /> is so-called annealing coefficient, which is hyperparameter that contains vely high value. Ising model to follow this Hamiltonian is known as the Transverse Ising model.\r\n\r\nIn relation to this system, thermal equilibrium amount of a physical quantity <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/q.png?1\" /> is as follow.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/langle_q_rangle.png\" />\r\n\r\nIf <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png\" /> is a diagonal matrix, then also <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h.png\" /> is diagonal matrix. If diagonal element in <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png\" /> is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_i.png\" />, Each diagonal element is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h_ij_e_i.png\" />. However if <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h.png\" /> has off-diagonal elements, It is known that <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_beta_h_ij_e_i_neq.png\" /> since for any of the exponent <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/i.gif\" /> we must exponentiate the matrix as follow.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/e_matrix_infty.png\" />\r\n\r\nTherefore, a path integration based on Trotter-Suzuki decomposition has been introduced in Quantum Monte Carlo Method. This path integration makes it possible to obtain the partition function <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png\" />.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model.png\" />\r\n\r\nwhere if <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/m.png\" /> is large enough, relational expression below is established.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/exp_left_frac_1_m_beta_h_right.png\" /></td></tr>\r\n\r\nThen the partition function <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png\" /> can be re-descibed as follow.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model_re_described.png\" />\r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/mid_sigma_k_rangle.png\" /> is <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/l.png\" /> topological products (product spaces). Because <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/h_0.png\" /> is the diagonal matrix, <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/tilde_sigma_j_z_mid_sigma.png\" />.\r\n\r\nTherefore, \r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/langle_sigma_k_mid.png\" />\r\n\r\nThe partition function <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z.png\" /> can be re-descibed as follow.\r\n\r\n<img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/z_in_t_ising_model_re_described_last.png\" />\r\n\r\nwhere <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/m.png\" /> is the number of trotter.\r\n\r\nThis relational expression indicates that the quantum - mechanical Hamiltonian in <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/d.png\" /> dimentional Tranverse Ising model is functional equivalence to classical Hamiltonian in <img src=\"https://storage.googleapis.com/accel-brain-code/Reinforcement-Learning/img/latex/d_1.png\" /> dimentional Ising model, which means that the state of the quantum - mechanical system can be approximate by the state of classical system.\r\n\r\n### Code sample.\r\n\r\n```python\r\nfrom pyqlearning.annealingmodel.quantum_monte_carlo import QuantumMonteCarlo\r\nfrom pyqlearning.annealingmodel.distancecomputable.cost_as_distance import CostAsDistance\r\n\r\n# User defined function which is-a `CostFuntionable`.\r\ncost_functionable = YourCostFunctions()\r\n\r\n# Compute cost as distance for `QuantumMonteCarlo`.\r\ndistance_computable = CostAsDistance(params_arr, cost_functionable)\r\n\r\n# Init.\r\nannealing_model = QuantumMonteCarlo(\r\n    distance_computable=distance_computable,\r\n\r\n    # The number of annealing cycles.\r\n    cycles_num=100,\r\n\r\n    # Inverse temperature (Beta).\r\n    inverse_temperature_beta=0.1,\r\n\r\n    # Gamma. (so-called annealing coefficient.) \r\n    gammma=1.0,\r\n\r\n    # Attenuation rate for simulated time.\r\n    fractional_reduction=0.99,\r\n\r\n    # The dimention of Trotter.\r\n    trotter_dimention=10,\r\n\r\n    # The number of Monte Carlo steps.\r\n    mc_step=100,\r\n\r\n    # The number of parameters which can be optimized.\r\n    point_num=100,\r\n\r\n    # Default `np.ndarray` of 2-D spin glass in Ising model.\r\n    spin_arr=None,\r\n\r\n    # Tolerance for the optimization.\r\n    # When the \u0394E is not improving by at least `tolerance_diff_e`\r\n    # for two consecutive iterations, annealing will stops.\r\n    tolerance_diff_e=0.01\r\n)\r\n\r\n# Execute annealing.\r\nannealing_model.annealing()\r\n```\r\n\r\nTo extract result of searching, call the property like the case of using `SimulatedAnnealing`. If you want to know how to visualize the searching process, see my Jupyter notebook: [demo/annealing_hand_written_digits.ipynb](https://github.com/chimera0/accel-brain-code/blob/master/Reinforcement-Learning/demo/annealing_hand_written_digits.ipynb).\r\n\r\n## References\r\n\r\nThe basic concepts, theories, and methods behind this library are described in the following books.\r\n\r\n<div align=\"center\"><a href=\"https://www.amazon.co.jp/dp/B08PV4ZQG5/\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/In-house_R_and_D_in_the_era_of_democratization_of_AI/book_cover.jpg\" width=\"160px\" /></a>\r\n  <p>\u300e<a href=\"https://www.amazon.co.jp/dp/B08PV4ZQG5/ref=sr_1_1?dchild=1&qid=1607343553&s=digital-text&sr=1-1&text=%E6%A0%AA%E5%BC%8F%E4%BC%9A%E7%A4%BEAccel+Brain\" target=\"_blank\">\u300cAI\u306e\u6c11\u4e3b\u5316\u300d\u6642\u4ee3\u306e\u4f01\u696d\u5185\u7814\u7a76\u958b\u767a: \u6df1\u5c64\u5b66\u7fd2\u306e\u300c\u5b9f\u5b66\u300d\u3068\u3057\u3066\u306e\u6a5f\u80fd\u5206\u6790</a>\u300f(Japanese)</p></div>\r\n\r\n<br />\r\n  \r\n<div align=\"center\"><a href=\"https://www.amazon.co.jp/dp/B093Z533LK\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/AI_vs_Investors_as_Noise_Traders/book_cover.jpg\" width=\"160px\" /></a>\r\n  <p>\u300e<a href=\"https://www.amazon.co.jp/dp/B093Z533LK\" target=\"_blank\">AI vs. \u30ce\u30a4\u30ba\u30c8\u30ec\u30fc\u30c0\u30fc\u3068\u3057\u3066\u306e\u6295\u8cc7\u5bb6\u305f\u3061: \u300c\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u6226\u4e89\u300d\u6642\u4ee3\u306e\u8a3c\u5238\u6295\u8cc7\u6226\u7565</a>\u300f(Japanese)</p></div>\r\n\r\n<br />\r\n\r\n<div align=\"center\"><a href=\"https://www.amazon.co.jp/dp/B0994CH3CM\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/Babel_of_Natural_Language_Processing/book_cover.jpg\" width=\"160px\" /></a>\r\n  <p>\u300e<a href=\"https://www.amazon.co.jp/dp/B0994CH3CM\" target=\"_blank\">\u81ea\u7136\u8a00\u8a9e\u51e6\u7406\u306e\u30d0\u30d9\u30eb: \u6587\u66f8\u81ea\u52d5\u8981\u7d04\u3001\u6587\u7ae0\u751f\u6210AI\u3001\u30c1\u30e3\u30c3\u30c8\u30dc\u30c3\u30c8\u306e\u610f\u5473\u8ad6</a>\u300f(Japanese)</p></div>\r\n\r\n<br />\r\n\r\n<div align=\"center\"><a href=\"https://www.amazon.co.jp/dp/B09C4KYZBX\" target=\"_blank\"><img src=\"https://storage.googleapis.com/accel-brain-code/Accel-Brain-Books/Origin_of_the_statistical_machine_learning/book_cover.jpg\" width=\"160px\" /></a>\r\n  <p>\u300e<a href=\"https://www.amazon.co.jp/dp/B09C4KYZBX\" target=\"_blank\">\u7d71\u8a08\u7684\u6a5f\u68b0\u5b66\u7fd2\u306e\u6839\u6e90: \u71b1\u529b\u5b66\u3001\u91cf\u5b50\u529b\u5b66\u3001\u7d71\u8a08\u529b\u5b66\u306b\u304a\u3051\u308b\u5929\u624d\u7269\u7406\u5b66\u8005\u305f\u3061\u306e\u795e\u5b66\u7684\u306a\u7406\u5ff5</a>\u300f(Japanese)</p></div>\r\n\r\n\r\nSpecific references are the following papers and books.\r\n\r\n### Q-Learning models.\r\n\r\n- Agrawal, S., & Goyal, N. (2011). Analysis of Thompson sampling for the multi-armed bandit problem. arXiv preprint arXiv:1111.1797.\r\n- Bubeck, S., & Cesa-Bianchi, N. (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721.\r\n- Chapelle, O., & Li, L. (2011). An empirical evaluation of thompson sampling. In Advances in neural information processing systems (pp. 2249-2257).\r\n- Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics (p. 434). New York City: Springer.\r\n- Kaufmann, E., Cappe, O., & Garivier, A. (2012). On Bayesian upper confidence bounds for bandit problems. In International Conference on Artificial Intelligence and Statistics (pp. 592-600).\r\n- Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.\r\n- Richard Sutton and Andrew Barto (1998). Reinforcement Learning. MIT Press.\r\n- Watkins, C. J. C. H. (1989). Learning from delayed rewards (Doctoral dissertation, University of Cambridge).\r\n- Watkins, C. J., & Dayan, P. (1992). Q-learning. Machine learning, 8(3-4), 279-292.\r\n- White, J. (2012). Bandit algorithms for website optimization. \u201d O\u2019Reilly Media, Inc.\u201d.\r\n\r\n### Deep Q-Network models.\r\n\r\n- Cho, K., Van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.\r\n- <a href=\"https://pdfs.semanticscholar.org/dd98/9d94613f439c05725bad958929357e365084.pdf\" target=\"_blank\">Egorov, M. (2016). Multi-agent deep reinforcement learning.</a>\r\n- Gupta, J. K., Egorov, M., & Kochenderfer, M. (2017, May). Cooperative multi-agent control using deep reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems (pp. 66-83). Springer, Cham.\r\n- Malhotra, P., Ramakrishnan, A., Anand, G., Vig, L., Agarwal, P., & Shroff, G. (2016). LSTM-based encoder-decoder for multi-sensor anomaly detection. arXiv preprint arXiv:1607.00148.\r\n- Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.\r\n- Sainath, T. N., Vinyals, O., Senior, A., & Sak, H. (2015, April). Convolutional, long short-term memory, fully connected deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (pp. 4580-4584). IEEE.\r\n- Xingjian, S. H. I., Chen, Z., Wang, H., Yeung, D. Y., Wong, W. K., & Woo, W. C. (2015). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems (pp. 802-810).\r\n- Zaremba, W., Sutskever, I., & Vinyals, O. (2014). Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.\r\n\r\n### Annealing models.\r\n\r\n- Bektas, T. (2006). The multiple traveling salesman problem: an overview of formulations and solution procedures. Omega, 34(3), 209-219.\r\n- Bertsimas, D., & Tsitsiklis, J. (1993). Simulated annealing. Statistical science, 8(1), 10-15.\r\n- Das, A., & Chakrabarti, B. K. (Eds.). (2005). Quantum annealing and related optimization methods (Vol. 679). Springer Science & Business Media.\r\n- Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics. New York City: Springer.\r\n- Edwards, S. F., & Anderson, P. W. (1975). Theory of spin glasses. Journal of Physics F: Metal Physics, 5(5), 965.\r\n- Facchi, P., & Pascazio, S. (2008). Quantum Zeno dynamics: mathematical and physical aspects. Journal of Physics A: Mathematical and Theoretical, 41(49), 493001.\r\n- Heim, B., R\u00f8nnow, T. F., Isakov, S. V., & Troyer, M. (2015). Quantum versus classical annealing of Ising spin glasses. Science, 348(6231), 215-217.\r\n- Heisenberg, W. (1925) \u00dcber quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen. Z. Phys. 33, pp.879\u2014893.\r\n- Heisenberg, W. (1927). \u00dcber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Zeitschrift fur Physik, 43, 172-198.\r\n- Heisenberg, W. (1984). The development of quantum mechanics. In Scientific Review Papers, Talks, and Books -Wissenschaftliche \u00dcbersichtsartikel, Vortr\u00e4ge und B\u00fccher (pp. 226-237). Springer Berlin Heidelberg.\r\nHilgevoord, Jan and Uffink, Jos, \"The Uncertainty Principle\", The Stanford Encyclopedia of Philosophy (Winter 2016 Edition), Edward N. Zalta (ed.), URL = \uff1chttps://plato.stanford.edu/archives/win2016/entries/qt-uncertainty/\uff1e.\r\n- Jarzynski, C. (1997). Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14), 2690.\r\n- Messiah, A. (1966). Quantum mechanics. 2 (1966). North-Holland Publishing Company.\r\n- Mezard, M., & Montanari, A. (2009). Information, physics, and computation. Oxford University Press.\r\n- Nallusamy, R., Duraiswamy, K., Dhanalaksmi, R., & Parthiban, P. (2009). Optimization of non-linear multiple traveling salesman problem using k-means clustering, shrink wrap algorithm and meta-heuristics. International Journal of Nonlinear Science, 8(4), 480-487.\r\n- Schr\u00f6dinger, E. (1926). Quantisierung als eigenwertproblem. Annalen der physik, 385(13), S.437-490.\r\n- Somma, R. D., Batista, C. D., & Ortiz, G. (2007). Quantum approach to classical statistical mechanics. Physical review letters, 99(3), 030603.\r\n- \u9234\u6728\u6b63. (2008). \u300c\u7d44\u307f\u5408\u308f\u305b\u6700\u9069\u5316\u554f\u984c\u3068\u91cf\u5b50\u30a2\u30cb\u30fc\u30ea\u30f3\u30b0: \u91cf\u5b50\u65ad\u71b1\u767a\u5c55\u306e\u7406\u8ad6\u3068\u6027\u80fd\u8a55\u4fa1\u300d.,\u300e\u7269\u6027\u7814\u7a76\u300f, 90(4): pp598-676. \u53c2\u7167\u7b87\u6240\u306fpp619-624.\r\n- \u897f\u68ee\u79c0\u7a14\u3001\u5927\u95a2\u771f\u4e4b(2018) \u300e\u91cf\u5b50\u30a2\u30cb\u30fc\u30ea\u30f3\u30b0\u306e\u57fa\u790e\u300f\u9808\u85e4 \u5f70\u4e09\u3001\u5ca1 \u771f \u76e3\u4fee\u3001\u5171\u7acb\u51fa\u7248\u3001\u53c2\u7167\u7b87\u6240\u306fpp9-46.\r\n\r\n## Author\r\n\r\n- accel-brain\r\n\r\n## Author URI\r\n\r\n- https://accel-brain.co.jp/\r\n- https://accel-brain.com/\r\n\r\n## License\r\n\r\n- GNU General Public License v2.0\r\n",
    "bugtrack_url": null,
    "license": "GPL2",
    "summary": "pyqlearning is Python library to implement Reinforcement Learning and Deep Reinforcement Learning, especially for Q-Learning, Deep Q-Network, and Multi-agent Deep Q-Network which can be optimized by Annealing models such as Simulated Annealing, Adaptive Simulated Annealing, and Quantum Monte Carlo Method.",
    "version": "1.2.7",
    "project_urls": {
        "Homepage": "https://github.com/accel-brain/accel-brain-code/tree/master/Reinforcement-Learning"
    },
    "split_keywords": [
        "q-learning",
        "deep",
        "q-network",
        "dqn",
        "reinforcement",
        "learning",
        "boltzmann",
        "multi-agent",
        "lstm",
        "cnn",
        "convolution"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "cd6b0f819a8ea4395fe6d5a00cf05d0465d12f786e04f23bd6f74537ee7c3b18",
                "md5": "9b4313205e33129bd9bb99a2d5fcbdfd",
                "sha256": "996a2444b9c121bccd4cf2681eb73fd2080a33355d7d85b1de039e3886bd0a13"
            },
            "downloads": -1,
            "filename": "pyqlearning-1.2.7.tar.gz",
            "has_sig": false,
            "md5_digest": "9b4313205e33129bd9bb99a2d5fcbdfd",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 59146,
            "upload_time": "2023-12-26T03:00:08",
            "upload_time_iso_8601": "2023-12-26T03:00:08.997561Z",
            "url": "https://files.pythonhosted.org/packages/cd/6b/0f819a8ea4395fe6d5a00cf05d0465d12f786e04f23bd6f74537ee7c3b18/pyqlearning-1.2.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-26 03:00:08",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "accel-brain",
    "github_project": "accel-brain-code",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "pyqlearning"
}
        
Elapsed time: 0.17331s