# arcsolver
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
This library contains tools for visualizing, analyzing and solving tasks
from the Abstraction and Reasoning Corpus (ARC) challenge dataset.
As this library was built using
[`nbdev`](https://github.com/AnswerDotAI/nbdev), the source code
can be found in the Jupyter notebooks directory
([nbs](https://github.com/agemoai/arcsolver/tree/main/nbs)).
Full documentation is available at https://agemoai.github.io/arcsolver.
## Installation
1. Install `claudette` from its GitHub
   [repository](https://github.com/AnswerDotAI/claudette) (the PyPI
   version lags behind):
``` sh
$ pip install git+https://github.com/AnswerDotAI/claudette.git@5ea3a59
```
2. Install `arcsolver`:
``` sh
$ pip install arcsolver
```
> [!NOTE]
>
> To use the automated description or solution generation features of
> this library, access to Anthropic’s Claude Sonnet 3.5 model is
> required. Set the `ANTHROPIC_API_KEY` environment variable or
> configure appropriate credentials for AWS Bedrock or Google Vertex AI.
## Key Features
- **Task Management:** Load and visualize ARC tasks with the
[`ArcTask`](https://agemoai.github.io/arcsolver/task.html#arctask)
class
- **Object-Centric Modelling:** A set of primitive classes for
representing grid objects and transformations
- **LLM Integration:** Designed to use Claude Sonnet 3.5 for automated
task analysis and solution generation
- **Extensible Architecture:** Easy to add new primitives and helper
functions to enhance solving capabilities
## Quick Start
### Task Representation
The `task` module provides classes for working with ARC tasks:
``` python
from arcsolver.task import ArcGrid, ArcPair, ArcTask
task = ArcTask('1e0a9b12'); task.plot()
```
![](index_files/figure-commonmark/cell-2-output-1.png)
An [`ArcTask`](https://agemoai.github.io/arcsolver/task.html#arctask)
comprises a list of input-output example
[`ArcPair`](https://agemoai.github.io/arcsolver/task.html#arcpair)s,
each of which holds two
[`ArcGrid`](https://agemoai.github.io/arcsolver/task.html#arcgrid)s.
Each class has a convenient `plot` method that can either display the
grid or return it as raw image bytes that can be passed to Claude.
``` python
print(f"Input grid 1 plot: {task.train[0].input.plot(to_base64=True)[:20]}...")
```
Input grid 1 plot: b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01H'...
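Under the hood, every ARC task is plain JSON: `train` and `test` lists of input/output pairs, where each grid is a list of rows of color indices (0-9). The classes above wrap this structure. A minimal sketch of the raw format (illustrative data, not the actual task `1e0a9b12`):

``` python
# Raw ARC task format: "train"/"test" lists of {"input", "output"} pairs,
# where each grid is a list of rows of color indices 0-9.
task_json = {
    "train": [
        {
            "input":  [[0, 4, 0],
                       [0, 0, 0],
                       [1, 0, 0]],
            "output": [[0, 0, 0],
                       [0, 0, 0],
                       [1, 4, 0]],
        },
    ],
    "test": [
        {"input": [[0, 0, 2],
                   [0, 0, 0],
                   [0, 0, 0]]},
    ],
}

first_pair = task_json["train"][0]
rows = len(first_pair["input"])
cols = len(first_pair["input"][0])
print(rows, cols)  # → 3 3
```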
### Object-centric Models
The `ocm` module provides a set of primitive classes for constructing
object-centric models of ARC grids. For example:
``` python
from arcsolver.ocm import Vector, Rectangle, Line, Grid, Color, Direction
grid = Grid(
size=Vector(8,8),
background_color=Color('grey'),
objects=[
Rectangle(position=Vector(1,1), size=Vector(2,2), color=Color('red')),
Line(position=Vector(6,1), direction=Direction.NE, length=6, color=Color('pink'))
]
)
ArcGrid(grid.to_array()).plot()
```
![](index_files/figure-commonmark/cell-4-output-1.png)
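Conceptually, `to_array` rasterizes each object onto the background grid in order. A rough numpy sketch of that idea, using assumed integer color codes and row/column coordinates (an illustration of the technique, not the library's implementation):

``` python
import numpy as np

GREY, RED, PINK = 5, 2, 6  # assumed color codes, for illustration only

def rasterize(size, background, rects, lines):
    """Paint rectangles, then diagonal lines, onto a background grid."""
    a = np.full(size, background, dtype=int)
    for (r, c), (h, w), color in rects:
        a[r:r + h, c:c + w] = color
    for (r, c), (dr, dc), length, color in lines:
        for i in range(length):
            rr, cc = r + i * dr, c + i * dc
            if 0 <= rr < size[0] and 0 <= cc < size[1]:
                a[rr, cc] = color
    return a

grid = rasterize(
    (8, 8), GREY,
    rects=[((1, 1), (2, 2), RED)],
    lines=[((6, 1), (-1, 1), 6, PINK)],  # NE: up one row, right one column
)
print(grid)
```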
### Task Descriptions
Use Claude to analyze and describe ARC tasks:
``` python
from arcsolver.describe import DescriptionGenerator
describer = DescriptionGenerator()
d = await describer.describe_task(task, 1); print(d[0].d)
```
> The input grids contain various colored squares arranged on a black
> background in different positions. In the transformation, all colored
> squares “fall” vertically to the bottom row while maintaining their
> relative horizontal positions and original colors. The rest of the
> grid becomes filled with black squares, resulting in an output where
> all non-black squares are aligned in the bottom row, preserving their
> left-to-right ordering from the input grid.

> [!WARNING]
>
> Depending on the task and the description strategy used (see
> [docs](https://agemoai.github.io/arcsolver/describe.html)),
> [`DescriptionGenerator`](https://agemoai.github.io/arcsolver/describe.html#descriptiongenerator)
> may decompose the task into multiple images, resulting in a
> token-intensive prompt (~\$0.10 using Sonnet 3.5).
### Solution Generation
Use Claude to construct a solution to an ARC task, automatically
refining its attempts based on execution and prediction error feedback.
``` python
from arcsolver.solve import ArcSolver
solver = ArcSolver()
solutions = await solver.solve(task)
```
Solving task: 1e0a9b12
Generating descriptions... | Attempts: 0/30 | Best Score: 0.000 | Cost: $0.000
Starting solution attempts... | Attempts: 0/30 | Best Score: 0.000 | Cost: $0.142
Generating initial solutions... | Attempts: 0/30 | Best Score: 0.000 | Cost: $0.142
Testing solutions... | Attempts: 0/30 | Best Score: 0.000 | Cost: $0.231
Continuing refinement... | Attempts: 2/30 | Best Score: 0.866 | Cost: $0.231
Refining previous solutions... | Attempts: 2/30 | Best Score: 0.866 | Cost: $0.231
Testing solutions... | Attempts: 2/30 | Best Score: 0.866 | Cost: $0.332
Continuing refinement... | Attempts: 4/30 | Best Score: 0.904 | Cost: $0.332
Refining previous solutions... | Attempts: 4/30 | Best Score: 0.904 | Cost: $0.332
Testing solutions... | Attempts: 4/30 | Best Score: 0.904 | Cost: $0.424
Continuing refinement... | Attempts: 6/30 | Best Score: 0.951 | Cost: $0.424
Refining previous solutions... | Attempts: 6/30 | Best Score: 0.951 | Cost: $0.424
Testing solutions... | Attempts: 6/30 | Best Score: 0.951 | Cost: $0.528
Continuing refinement... | Attempts: 8/30 | Best Score: 0.951 | Cost: $0.528
Refining previous solutions... | Attempts: 8/30 | Best Score: 0.951 | Cost: $0.528
Testing solutions... | Attempts: 8/30 | Best Score: 0.951 | Cost: $0.633
Continuing refinement... | Attempts: 10/30 | Best Score: 0.958 | Cost: $0.633
Refining previous solutions... | Attempts: 10/30 | Best Score: 0.958 | Cost: $0.633
Testing solutions... | Attempts: 10/30 | Best Score: 0.958 | Cost: $0.732
Continuing refinement... | Attempts: 12/30 | Best Score: 0.965 | Cost: $0.732
Refining previous solutions... | Attempts: 12/30 | Best Score: 0.965 | Cost: $0.732
Testing solutions... | Attempts: 12/30 | Best Score: 0.965 | Cost: $0.835
Found potential solution, validating... | Attempts: 12/30 | Best Score: 1.000 | Cost: $0.835
Solution found! | Attempts: 14/30 | Best Score: 1.000 | Cost: $0.835
Solution found! 🎉 | Attempts: 14/30 | Best Score: 1.000 | Cost: $0.835
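The `Best Score` column tracks how close a candidate solution's predicted grids are to the expected outputs (1.000 means every training pair matched exactly). The intermediate values like 0.951 suggest a per-cell accuracy; a plausible sketch of such a metric, as an illustration rather than the library's exact scoring function:

``` python
import numpy as np

def cell_accuracy(predicted, expected):
    """Fraction of matching cells; 0.0 if the shapes disagree."""
    predicted, expected = np.asarray(predicted), np.asarray(expected)
    if predicted.shape != expected.shape:
        return 0.0
    return float((predicted == expected).mean())

expected  = np.array([[0, 0, 0], [1, 4, 0]])
predicted = np.array([[0, 0, 0], [1, 4, 2]])
print(cell_accuracy(predicted, expected))  # 5 of 6 cells match
```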
## Contributing
Contributions are welcome: refined prompts, new OCM primitives, expanded
tool use, alternative retry strategies…
Feel free to raise an issue or submit a PR.
### Learn More
To read about the motivation for building this tool, check out our
[blog](https://agemo.ai/resources/summer-of-arc-agi) and watch out for
future posts.