# plAIer
A Python library that uses a reinforcement learning AI to play board games such as Chess, Tic Tac Toe, and Reversi.
You can also find the project on [PyPI][].
[PyPI]: https://pypi.org/project/plAIer/
## Example of use
```python
# The current game state:
gameState = """
 X | O | 
---+---+---
 O | O | X
---+---+---
 X | | O
"""
# Import the libraries (ast is only used below to parse the returned move string):
import ast
import plAIer
# Create the database if it does not exist yet:
plAIer.createDatabase("database_filename.json", "database_name", "description", ["outcomes", "list"])
# Initialize the AI:
outcomesRating = {"O won": 1, "tie": 0, "X won": -1} # Rating each outcome tells the AI which results to pursue and which to avoid.
AI = plAIer.Game("database_filename.json", outcomesRating)
# Find the best move in the current game position:
bestMove = AI.findBestMove([
{"move" : "[0, 2]",
"stateAfterMove": "\n X | O | O\n---+---+---\n O | O | X\n---+---+---\n X | | O\n"},
{"move" : "[2, 1]",
"stateAfterMove": "\n X | O | \n---+---+---\n O | O | X\n---+---+---\n X | O | O\n"}
])
print(ast.literal_eval(bestMove))  # Output: [2, 1]
# Tell the AI what the outcome of the game was:
AI.setOutcome("O won")
```
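To show how these calls can fit into a full training loop, here is a minimal self-play sketch. Everything besides the four library calls shown above is an illustrative assumption: the `tictactoe.json` filename, the board helpers (`render`, `winner`, `empty_cells`), the random opponent, and the idea that one `Game` object tracks a single match from creation to `setOutcome`.
```python
import ast
import os
import random

import plAIer

DB = "tictactoe.json"                            # hypothetical database file
RATINGS = {"O won": 1, "tie": 0, "X won": -1}    # the AI plays O

if not os.path.exists(DB):
    plAIer.createDatabase(DB, "tic-tac-toe", "3x3 training database", list(RATINGS))

def render(board):
    """Turn the 3x3 board into a string; any format works if it never changes."""
    rows = [" " + " | ".join(board[r][c] or " " for c in range(3)) for r in range(3)]
    return "\n" + "\n---+---+---\n".join(rows) + "\n"

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [[board[r][c] for c in range(3)] for r in range(3)]       # rows
    lines += [[board[r][c] for r in range(3)] for c in range(3)]      # columns
    lines += [[board[i][i] for i in range(3)], [board[i][2 - i] for i in range(3)]]
    for line in lines:
        if line[0] and line == [line[0]] * 3:
            return line[0]
    return None

def empty_cells(board):
    return [(r, c) for r in range(3) for c in range(3) if not board[r][c]]

for _ in range(200):                              # number of training games (arbitrary)
    AI = plAIer.Game(DB, RATINGS)                 # assumption: one Game object per match
    board = [["", "", ""] for _ in range(3)]
    player, outcome = "X", "tie"                  # the random opponent (X) starts
    while empty_cells(board):
        if player == "O":
            # Offer every legal move to the AI, exactly as in the example above.
            candidates = []
            for r, c in empty_cells(board):
                after = [row[:] for row in board]
                after[r][c] = "O"
                candidates.append({"move": str([r, c]), "stateAfterMove": render(after)})
            r, c = ast.literal_eval(AI.findBestMove(candidates))
        else:
            r, c = random.choice(empty_cells(board))
        board[r][c] = player
        if winner(board):
            outcome = player + " won"
            break
        player = "O" if player == "X" else "X"
    AI.setOutcome(outcome)                        # feed the result back so the AI learns
```
The guard around `createDatabase` is only there because this sketch does not assume how the function behaves when the file already exists.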
### Optimization
For better predictions, make sure a given position is always serialized in exactly the same format: game states are stored as strings (see the database format below), so two renderings of the same board are treated as unrelated states. Example with Tic Tac Toe (a small helper enforcing this follows the list):
- Board 1: OXO|OXX|XOO ✅
- Board 2: XXO|OOO|XOX ✅
- Board 3: OOX XOO XXO ❌
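Concretely, one way to guarantee this (a hypothetical helper, not part of plAIer) is to route every board through a single serialization function and never build the string by hand anywhere else:
```python
def serialize(rows):
    """The one canonical way a board (three 3-character strings) becomes a state key."""
    return "|".join(rows)

a = serialize(["OXO", "OXX", "XOO"])   # 'OXO|OXX|XOO'
b = " ".join(["OXO", "OXX", "XOO"])    # 'OXO OXX XOO' -- same position, different string
print(a == b)                          # False: the AI would treat them as unrelated states
```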
### Database format
Databases are JSON files that follow this format:
```json
{
"name": "name of the game",
"description": "additional informations",
"outcomes": ["list", "of", "outcomes"],
"moves": {"gameState1": {"outcome1": 1234, "outcome2": 2345, "outcomeN": 4567},
"gameState2": {"outcome1": 5678, "outcome2": 6789, "outcomeN": 7890},
"gameStateN": {"outcome1": 8901, "outcome2": 9011, "outcomeN": 0}
}
}
```
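Since the database is plain JSON, it can be inspected with the standard library. A sketch, assuming the `database_filename.json` produced by the example above exists; the exact meaning of the per-outcome numbers is not documented here, so they are simply shown as stored:
```python
import json

# Open a database written by plAIer and look at what it has recorded.
with open("database_filename.json") as f:        # filename from the example above
    db = json.load(f)

print(db["name"], "-", db["description"])
print("Known outcomes:", db["outcomes"])

# Show the three game states with the largest totals across their outcome figures.
top = sorted(db["moves"].items(), key=lambda kv: sum(kv[1].values()), reverse=True)[:3]
for state, stats in top:
    print(repr(state), stats)
```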
## How to get it
Run the following command in your terminal (bash, or Windows cmd/PowerShell):
```bash
pip install plAIer
```
## Contribute
Don't hesitate to contribute with pull requests:
1. Fork the repository
2. Commit your changes
3. Open a pull request on the repository describing what it does