# <img src="./misc/images/scrambledCube.png" width="50"> DeepXube <img src="./misc/images/solvedCube.png" width="50">

--------------------------------------------------------------------------------
DeepXube (pronounced "Deep Cube") aims to solve classical planning problems in an explainable manner using deep reinforcement learning,
heuristic search, and formal logic. The current project can:
1) Train a heuristic function to estimate the cost-to-go between state/goal pairs,
where a goal represents a set of states considered goal states. The goal representation can take
any form: e.g. a state, a set of ground atoms in first-order logic, natural language, an image/sketch, etc.
2) Specify goals with answer set programming, a robust form of logic programming, when goals are represented as sets of ground atoms in first-order logic.
DeepXube is a generalization of DeepCubeA ([code](https://github.com/forestagostinelli/DeepCubeA/),[paper](https://cse.sc.edu/~foresta/assets/files/SolvingTheRubiksCubeWithDeepReinforcementLearningAndSearch_Final.pdf)).
For any issues, you can create a GitHub issue or contact Forest Agostinelli (foresta@cse.sc.edu).
**Overview**:\
<img src="./misc/images/overview.png" width="500">
**Outline**:
- [Installation](#installation)
- [Environment](#environment-implementation)
- [Training Heuristic Function](#training-heuristic-function)
- [Heuristic Search](#heuristic-search)
- [Answer Set Programming Specification](#specifying-goals-with-answer-set-programming)
- [Examples](#examples)
## Installation
`pip install deepxube`
See [INSTALL.md](INSTALL.md) for more details.
## Environment
The environment includes a state object that defines states, a goal object that defines goals (a set of states considered goal states),
and an environment object that generates start states, defines state transitions, determines when a state is a goal state, and defines the neural network that takes states as input.
See [ENVIRONMENT.md](ENVIRONMENT.md) for more details.
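The three objects can be sketched on a toy "number line" puzzle. This is a minimal, hypothetical illustration of the concepts only; the class and method names below are assumptions and do not match deepxube's actual API (see ENVIRONMENT.md for that).

```python
from dataclasses import dataclass

# Hypothetical sketch of the three concepts above on a toy number-line puzzle.
# Names (State, Goal, Environment, next_states, is_solved) are illustrative,
# not deepxube's real classes.

@dataclass(frozen=True)
class State:
    pos: int  # position on the number line

@dataclass(frozen=True)
class Goal:
    targets: frozenset  # the set of positions considered goal states

class Environment:
    def get_start_states(self, n: int) -> list[State]:
        # generate n start states
        return [State(p) for p in range(n)]

    def next_states(self, state: State) -> list[State]:
        # state transitions: two actions, move left or move right by one
        return [State(state.pos - 1), State(state.pos + 1)]

    def is_solved(self, state: State, goal: Goal) -> bool:
        # a state is a goal state if it lies in the goal's set of states
        return state.pos in goal.targets

env = Environment()
goal = Goal(frozenset({0}))
print(env.is_solved(State(0), goal))  # True
print(env.is_solved(State(3), goal))  # False
```

Note that a `Goal` here is a *set* of states, matching the description above: any representation (ground atoms, natural language, an image) ultimately denotes such a set.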
## Training Heuristic Function
Once an environment has been implemented, a heuristic function can be trained to map states and goals to heuristic
values (estimates of the cost-to-go from a given start state to a given goal).
See [TRAIN.md](TRAIN.md) for more details.
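The idea behind this training can be sketched in tabular form: deepxube's predecessor DeepCubeA trains the heuristic with approximate value iteration, repeatedly updating the estimate toward the Bellman backup `h(s) <- min_a (1 + h(s'))`. The sketch below replaces the neural network with a lookup table on a toy 8-state ring puzzle; it illustrates the update rule only, not deepxube's training loop.

```python
# Tabular analogue of cost-to-go training: Bellman updates
# h(s) <- 1 + min over successors h(s'), with h(goal) = 0,
# on a ring of 8 states where the goal is state 0.
N = 8
GOAL = 0
h = [0.0] * N  # heuristic estimates, initialized to zero

for _ in range(N):  # enough in-place sweeps to converge on this ring
    for s in range(N):
        if s == GOAL:
            h[s] = 0.0
        else:
            neighbors = [(s - 1) % N, (s + 1) % N]
            h[s] = 1.0 + min(h[n] for n in neighbors)

print(h)  # converges to the true shortest distance to state 0 for each state
```

With a neural network in place of the table, the same backed-up values serve as regression targets, which is what lets the heuristic generalize across state/goal pairs.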
## Heuristic Search
Given a trained heuristic function, a start state, and a goal, heuristic search is used to find a path from the start state
to the goal.
See [HEURSEARCH.md](HEURSEARCH.md) for more details.
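The role of the heuristic in search can be shown with a minimal A* sketch on the same toy ring puzzle, where an exact distance function stands in for a trained network. This is a generic A* illustration under those assumptions, not deepxube's search implementation (see HEURSEARCH.md for that).

```python
import heapq

# Minimal A* on an 8-state ring with goal state 0. The heuristic h below is
# the exact ring distance, standing in for a trained cost-to-go estimator.
N = 8
GOAL = 0

def h(s: int) -> int:
    return min(s, N - s)

def astar(start: int) -> list[int]:
    # priority queue of (f = g + h, g, state, path)
    frontier = [(h(start), 0, start, [start])]
    best_g: dict[int, int] = {}
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if s == GOAL:
            return path
        if s in best_g and best_g[s] <= g:
            continue  # already expanded via a path at least as cheap
        best_g[s] = g
        for nxt in ((s - 1) % N, (s + 1) % N):
            heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return []

print(astar(6))  # [6, 7, 0]
```

Because the heuristic here never overestimates the true cost-to-go, A* returns a shortest path; a learned heuristic trades that guarantee for generality across goals.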
## Specifying Goals with Answer Set Programming
Coming soon.
## Examples