## XCS-RC
*Accuracy-based Learning Classifier Systems* with a **Rule Combining** mechanism (`XCS-RC` for short) for Python 3, loosely based on Martin Butz's XCS Java code (2001). Read my PhD thesis [here](https://publikationen.bibliothek.kit.edu/1000046880) for the complete algorithmic description.
*Rule Combining* is a novel function that employs inductive reasoning, replacing ~~all Darwinian genetic operations such as mutation and crossover~~. It handles both `binary` and `real-valued` inputs, reaching better *correctness rates* and smaller *population sizes* more quickly than several XCS variants. My earlier papers comparing them are available [here](https://link.springer.com/chapter/10.1007/978-3-642-17298-4_30) and [here](https://dl.acm.org/citation.cfm?id=2331009).
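For intuition only, here is a minimal sketch of the combining idea on binary conditions (ternary alphabet `0`/`1`/`#`, where `#` means *don't care*): two classifiers advocating the same action are merged into a more general rule when their predictions are close enough. The helper names and the averaging of predictions are illustrative assumptions, not the library's internal routine; see the thesis for the exact procedure.
```
# conceptual sketch only, not the library's internal code
# XCS conditions use the ternary alphabet {0, 1, #}, where '#' means "don't care"

def combine_conditions(cond1, cond2):
    """Generalize two binary conditions bit-wise: differing bits become '#'."""
    return "".join(a if a == b else "#" for a, b in zip(cond1, cond2))

def can_combine(cl1, cl2, predtol=20.0):
    """Combine only classifiers that advocate the same action and whose
    reward predictions differ by at most predtol."""
    return cl1["action"] == cl2["action"] and abs(cl1["pred"] - cl2["pred"]) <= predtol

cl_a = {"cond": "1100", "action": 1, "pred": 980.0}
cl_b = {"cond": "1101", "action": 1, "pred": 995.0}

if can_combine(cl_a, cl_b):
    parent = {"cond": combine_conditions(cl_a["cond"], cl_b["cond"]),  # "110#"
              "action": cl_a["action"],
              "pred": (cl_a["pred"] + cl_b["pred"]) / 2}  # averaging is an assumption
    print("combined rule:", parent)
```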
---
## Relevant links
* [PyPI](https://pypi.org/project/xcs-rc/)
* [GitHub repo](https://github.com/nuggfr/xcs-rc-python)
* Examples:
  * [Classic problems: multiplexer, Markov env](https://github.com/nuggfr/xcs-rc-python)
  * [Churn dataset](https://routing.nuggfr.com/churn)
  * [Flappy Bird](https://routing.nuggfr.com/flappy)
  * [OpenAI Gym](https://routing.nuggfr.com/openai)
---
**Installation**
```
pip install xcs-rc
```
**Initialization**
```
import xcs_rc
agent = xcs_rc.Agent()
```
**Classic Reinforcement Learning cycle**
```
# input: a binary string, e.g., "100110", or an array of reals
# here: a single random bit
from random import randint
state = str(randint(0, 1))

# pick methods: 0 = explore, 1 = exploit, 2 = explore_it
action = agent.next_action(state, pick_method=1)

# determine the reward and apply it, e.g.,
reward = agent.maxreward if action == int(state[0]) else 0.0
agent.apply_reward(reward)
```
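Putting the pieces together, here is a minimal training-loop sketch for the single-bit toy task above, using only the calls already shown (`Agent()`, `next_action`, `apply_reward`, `maxreward`); the cycle count, explore/exploit alternation, and logging window are arbitrary illustrative choices.
```
# minimal training loop for the single-bit toy task (illustrative)
import xcs_rc
from random import randint

agent = xcs_rc.Agent()
correct = []  # outcomes of exploit-mode decisions

for cycle in range(5000):
    state = str(randint(0, 1))

    # alternate: even cycles explore (0), odd cycles exploit (1)
    pick = cycle % 2
    action = agent.next_action(state, pick_method=pick)

    reward = agent.maxreward if action == int(state[0]) else 0.0
    agent.apply_reward(reward)

    if pick == 1:
        correct.append(reward > 0.0)

    if cycle % 500 == 0 and len(correct) >= 100:
        rate = sum(correct[-100:]) / 100
        print(f"cycle {cycle}: exploit correctness (last 100) = {rate:.2f}")
```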
**Partially Observable Markov Decision Process (POMDP) environment**
```
# create env and agent
env = xcs_rc.MarkovEnv('maze4') # maze4 is built-in
env.add_agents(num=1, tcomb=100, xmax=50)
agent = env.agents[0]
for episode in range(8000):
    steps = env.one_episode(pick_method=2)  # returns the number of steps taken
```
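As a usage note, tracking the steps per episode gives a quick view of learning progress in the maze; the sketch below reuses `env` from above and only assumes that `one_episode()` returns the step count, as stated. The reporting window is an arbitrary choice.
```
# track average steps-to-goal over a sliding window (illustrative)
steps_log = []

for episode in range(8000):
    steps = env.one_episode(pick_method=2)
    steps_log.append(steps)

    if (episode + 1) % 500 == 0:
        window = steps_log[-100:]
        print(f"episode {episode + 1}: avg steps (last {len(window)}) = {sum(window) / len(window):.1f}")
```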
**Print the population, save it to a CSV file, or use append mode**
```
agent.pop.print(title="Population")
agent.save('xcs_population.csv', title="Final XCS Population")
agent.save('xcs_pop_every_100_cycles.csv', title="Cycle: ###", save_mode='a')
```
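For example, appending a population snapshot every 100 cycles might look like the sketch below; the title string is composed by hand here, as an assumption about how the `"Cycle: ###"` placeholder above would be filled in, and the toy learning cycle reuses the single-bit task.
```
# append a population snapshot every 100 learning cycles (illustrative)
from random import randint

for cycle in range(1, 1001):
    # one toy learning cycle, as in the classic RL example above
    state = str(randint(0, 1))
    action = agent.next_action(state, pick_method=0)
    agent.apply_reward(agent.maxreward if action == int(state[0]) else 0.0)

    if cycle % 100 == 0:
        agent.save('xcs_pop_every_100_cycles.csv',
                   title=f"Cycle: {cycle}", save_mode='a')
```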
**Finally, inserting rules into the population**
```
# automatically loads the last saved set (important for files written in append mode)
agent.load("xcs_population.csv", empty_first=True)
agent.pop.add(my_list_of_rules)  # insert rules from a list of classifiers
```
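A typical follow-up is to reload a saved population in a later session and run it in exploit mode; the sketch below uses only the documented calls and the file name from above.
```
# reload a previously saved population and act greedily (illustrative)
import xcs_rc
from random import randint

agent = xcs_rc.Agent()
agent.load("xcs_population.csv", empty_first=True)

for _ in range(10):
    state = str(randint(0, 1))
    action = agent.next_action(state, pick_method=1)  # exploit only
    agent.apply_reward(agent.maxreward if action == int(state[0]) else 0.0)
    print(state, "->", action)
```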
---
## Main Parameters
**XCS-RC Parameters**
* `tcomb`: *combining period*, number of learning cycles before the next rule combining
* `predtol`: *prediction tolerance*, maximum difference between two classifiers to be combined
* `prederrtol`: *prediction error tolerance*, threshold for deletion of inappropriately combined rules
**How to Set**
```
agent.tcomb = 50        # perform rule combining every 50 cycles
agent.predtol = 20.0    # combine rules whose predictions differ by at most 20.0
agent.prederrtol = 10.0 # remove a combined rule if its prediction error exceeds 10.0 after previously staying below it
```
**Latest updates**
* ~~everything related to mutation and crossover has been removed~~
* ~~dependencies such as pandas and numpy have been removed, along with the data science features~~
---
## Results
**Classical Problems: `multiplexer` and `Markov environment`:**
![Binary MP11-HIGH](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-mp11-binary.png)
![Real MP6-HIGH](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-mp6-real.png)
![Markov Maze4](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-markov-maze4.png)
**Flappy Bird from PyGame Learning Environment:**
![Flappy Bird XCS-RC plot](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/flappy_plot.png)
[![Flappy Bird XCS-RC youtube](https://img.youtube.com/vi/Fz05s-stCbE/0.jpg)](https://youtu.be/Fz05s-stCbE)
**YouTube: CartPole-v0 benchmark from OpenAI Gym:**
[![CartPole XCS-RC](https://img.youtube.com/vi/mJoavWV80MM/0.jpg)](https://youtu.be/mJoavWV80MM)