## XCS-RC
*Accuracy-based Learning Classifier Systems* with a **Rule Combining** mechanism, `XCS-RC` for short, for Python 3, loosely based on Martin Butz's XCS Java code (2001). Read my PhD thesis [here](https://publikationen.bibliothek.kit.edu/1000046880) for the complete algorithmic description.
*Rule Combining* is a novel function that employs inductive reasoning, replacing ~~all Darwinian genetic operations such as mutation and crossover~~. It handles both `binary` and `real-valued` inputs, reaching a better *correctness rate* and a smaller *population size* more quickly than several XCS variants. My earlier papers comparing them are available [here](https://link.springer.com/chapter/10.1007/978-3-642-17298-4_30) and [here](https://dl.acm.org/citation.cfm?id=2331009).
---
## Relevant links
* [PyPI](https://pypi.org/project/xcs-rc/)
* [Github repo](https://github.com/nuggfr/xcs-rc-python)
* Examples:
  * [Classic problems: multiplexer, Markov env](https://github.com/nuggfr/xcs-rc-python)
  * [Churn dataset](https://routing.nuggfr.com/churn)
  * [Flappy Bird](https://routing.nuggfr.com/flappy)
  * [OpenAI Gym](https://routing.nuggfr.com/openai)
---
**Installation**
```
pip install xcs-rc
```
**Initialization**
```
import xcs_rc
agent = xcs_rc.Agent()
```
**Classic Reinforcement Learning cycle**
```
# input: binary string, e.g., "100110", or a decimal array
from random import randint
state = str(randint(0, 1))
# pick methods: 0 = explore, 1 = exploit, 2 = explore_it
action = agent.next_action(state, pick_method=1)
# determine reward and apply it, e.g.,
reward = agent.maxreward if action == int(state[0]) else 0.0
agent.apply_reward(reward)
```
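In XCS-style classifier systems, binary rule conditions conventionally use the don't-care symbol `#` to generalize over input bits. The sketch below illustrates that matching convention in plain Python; it is illustrative only, not the library's internal code:

```python
def matches(condition: str, state: str) -> bool:
    """Return True if a rule condition covers a binary state.
    '#' is the conventional don't-care symbol in LCS conditions."""
    return len(condition) == len(state) and all(
        c == '#' or c == s for c, s in zip(condition, state)
    )

# The condition '1##' covers every 3-bit state starting with '1'
print(matches('1##', '101'))  # True
print(matches('1##', '001'))  # False
```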
**Partially Observable Markov Decision Process (POMDP) environment**
```
# create env and agent
env = xcs_rc.MarkovEnv('maze4') # maze4 is built-in
env.add_agents(num=1, tcomb=100, xmax=50)
agent = env.agents[0]
for episode in range(8000):
    steps = env.one_episode(pick_method=2)  # returns the number of steps taken
```
**Data classification**
```
agent.train(X_train, y_train)
cm = agent.test(X_test, y_test) # returns the confusion matrix
preds, probs = agent.predict(X) # returns lists of predictions and probabilities
```
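For orientation, a confusion matrix like the one returned by `test()` tallies (actual, predicted) label pairs. A minimal plain-Python sketch of such a tally (the labels and the nested-dict structure here are illustrative, not necessarily the library's exact return format):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Count (actual, predicted) pairs into a nested dict:
    rows are actual labels, columns are predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return {a: {p: counts[(a, p)] for p in labels} for a in labels}

cm = confusion_matrix([0, 1, 1, 0], [0, 1, 0, 0], labels=[0, 1])
print(cm)  # {0: {0: 2, 1: 0}, 1: {0: 1, 1: 1}}
```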
**Print population, save it to CSV file, or use append mode**
```
agent.pop.print(title="Population")
agent.save('xcs_population.csv', title="Final XCS Population")
agent.save('xcs_pop_every_100_cycles.csv', title="Cycle: ###", save_mode='a')
```
**Finally, inserting rules into the population**
```
# automatically load the last set (important for append mode)
agent.load("xcs_population.csv", empty_first=True)
agent.pop.add(my_list_of_rules) # from a list of classifiers
```
---
## Main Parameters
**XCS-RC Parameters**
* `tcomb`: *combining period*, number of learning cycles before the next rule combining
* `predtol`: *prediction tolerance*, maximum difference between two classifiers to be combined
* `prederrtol`: *prediction error tolerance*, threshold for deletion of inappropriately combined rules
**How to Set**
```
agent.tcomb = 50         # perform rule combining every 50 cycles
agent.predtol = 20.0     # combine rules whose prediction difference is <= 20.0
agent.prederrtol = 10.0  # remove combined rules whose error exceeds 10.0 after previously being below it
```
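Conceptually, two classifiers whose predictions differ by at most `predtol` are candidates for combining into a single, more general rule, with differing condition bits generalized to `#`. A simplified sketch of that criterion for binary conditions (illustrative only; the actual mechanism described in the thesis involves further checks):

```python
def try_combine(cond1, pred1, cond2, pred2, predtol=20.0):
    """Merge two binary conditions into a generalized one
    if their predictions are within predtol; else return None.
    Bits that differ become the don't-care symbol '#'."""
    if abs(pred1 - pred2) > predtol:
        return None
    return ''.join(a if a == b else '#' for a, b in zip(cond1, cond2))

print(try_combine('1101', 890.0, '1001', 905.0))  # '1#01'
print(try_combine('1101', 890.0, '1001', 500.0))  # None
```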
**Parameters from original XCS**
* ~~all parameters related to mutation and crossover are removed~~
* the others are kept and accessible (e.g., `agent.alpha = 0.15`)
---
## Results
**Classical Problems: `multiplexer` and `Markov environment`:**
![Binary MP11-HIGH](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-mp11-binary.png)
![Real MP6-HIGH](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-mp6-real.png)
![Markov Maze4](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/xcs-rc-markov-maze4.png)
**Flappy Bird from PyGame Learning Environment:**
![Flappy Bird XCS-RC plot](https://raw.githubusercontent.com/nuggfr/xcs-rc-python/master/flappy_plot.png)
[![Flappy Bird XCS-RC youtube](https://img.youtube.com/vi/Fz05s-stCbE/0.jpg)](https://youtu.be/Fz05s-stCbE)
**YouTube: CartPole-v0 Benchmark from OpenAI Gym:**
[![CartPole XCS-RC](https://img.youtube.com/vi/mJoavWV80MM/0.jpg)](https://youtu.be/mJoavWV80MM)