# Dachi - AI Library for Building Intelligent Systems
**Dachi** is designed to provide flexibility and control when building intelligent systems using LLMs (Large Language Models). The library enables developers to interact with large language models and other AI models directly in Python code, to easily process responses from LLMs, and to construct complex behaviors that make use of parallelization and other decision processes through the behavior tree paradigm.
## Features
- **Flexible Interaction**: Easily read and interact with LLMs directly in Python code, allowing for more natural and controlled usage.
- **Behavior Trees**: Use behavior trees to build complex behaviors such as decision-making and parallel processes.
- **Customizable Workflows**: Define and customize workflows for LLM interaction, giving you fine-grained control over the AI's behavior.
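Behavior trees coordinate tasks through repeated `tick()` calls, with each node reporting a running, success, or failure status to its parent. As background for the examples below, here is a minimal, library-agnostic sketch of that ticking model (illustrative Python only, not dachi's API):

```python
from enum import Enum

class Status(Enum):
    RUNNING = "running"
    SUCCESS = "success"
    FAILURE = "failure"

class Sequence:
    """Minimal sequence node: ticks children in order, advancing on SUCCESS."""
    def __init__(self, children):
        self.children = children
        self.index = 0

    def tick(self) -> Status:
        while self.index < len(self.children):
            status = self.children[self.index]()
            if status is Status.RUNNING:
                return Status.RUNNING
            if status is Status.FAILURE:
                return Status.FAILURE
            self.index += 1  # child succeeded; move to the next one
        return Status.SUCCESS

# Two toy tasks that record their name and succeed immediately
log = []
seq = Sequence([
    lambda: (log.append("extract"), Status.SUCCESS)[1],
    lambda: (log.append("summarize"), Status.SUCCESS)[1],
])
assert seq.tick() is Status.SUCCESS
assert log == ["extract", "summarize"]
```

Dachi builds this pattern into decorators and node classes so the children can be LLM calls ticked in sequence or in parallel.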
## Installation
To install **Dachi**, use pip:
```bash
pip install dachi
```
## Getting Started
### Basic Usage
Below is a basic example of how to use Dachi to interact with an LLM:
```python
import dachi

# Create an engine that wraps the OpenAI chat API
engine = dachi.adapt.openai.OpenAIChatModel(model='gpt-4o-mini')

@dachi.signaturefunc(engine=engine)
def summarize(document: str, num_sentences: int) -> str:
    """
    Summarize the document in {num_sentences} sentences.

    # Document

    {document}
    """
    pass

# Call the decorated function to prompt the LLM and read its response
document = "..."  # the text to summarize
summary = summarize(document, 5)
print(summary)


# instructfunc is an alternative that builds the prompt in the function body
@dachi.instructfunc(engine=engine)
def summarize(document: str, num_sentences: int) -> str:
    cue = dachi.Cue('Summarize the document in {num_sentences} sentences')
    cue2 = dachi.Cue("""
    # Document

    {document}
    """)
    # Join the cues, then fill in the template fields
    cue = dachi.op.join([cue, cue2], '\n')
    return dachi.op.fill(cue, document=document, num_sentences=num_sentences)
```
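The `{num_sentences}` and `{document}` placeholders in the docstring above behave like standard Python format fields: the arguments you pass appear to be substituted into the template before it is sent to the LLM. As a plain-Python illustration of that templating idea (not dachi's internal implementation):

```python
# Illustrative only: filling a docstring-style template with the
# function's arguments before handing the prompt to an LLM
template = """
Summarize the document in {num_sentences} sentences.

# Document

{document}
"""

prompt = template.format(num_sentences=5, document="Dachi is an AI library...")
print(prompt)
```

Because the template is ordinary text, you can version it, test it, and compose it like any other string in your codebase.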
### Advanced Usage
You can leverage behavior trees to define more complex interactions and processes with the LLM:
```python
import dachi

# This is the standard way to create a behavior tree

engine = dachi.adapt.openai.OpenAIChatModel(model='gpt-4o-mini')

# Define behavior tree nodes
@dachi.signaturefunc(engine=engine)
def task1(document: str):
    """
    Extract keywords from the document passed in.

    # Document

    {document}
    """
    pass

@dachi.signaturefunc(engine=engine)
def task2(document: str, num_sentences: int):
    """
    Write a summary of the document below in {num_sentences} sentences.

    {document}
    """
    pass


# Create a behavior tree with parallel execution
# (Sango and Parallel are assumed importable from dachi.act)
tree = Sango(
    root=Parallel([
        dachi.act.taskf(task1),
        dachi.act.taskf(task2)
    ])
)

# Tick the tree until it is no longer running
status = tree.tick()
while status == dachi.act.RUNNING:
    status = tree.tick()


# You can also use function decorators to create
# a behavior tree. Task, ContextSpawner, Shared, TaskStatus and
# sequencefunc are assumed importable from dachi; SomeTask and
# SomeTask2 are placeholder task classes.

class Agent(Task):

    def __init__(self):
        self.context = ContextSpawner()
        self.data = Shared()
        self.task1 = SomeTask()
        self.task2 = SomeTask2(self.data)

    @sequencefunc('context.sequence')
    def sequence(self):
        # Yield each sub-task in order; the sequence
        # stops early if a task fails
        yield self.task1
        yield self.task2

    def tick(self) -> TaskStatus:
        return self.sequence.task()()

    def reset(self):
        self.context = ContextSpawner()
```
## Roadmap
- **Improve planning**: Add support for proactive planning and better integration of planning systems with LLMs.
- **Add more reading features**: Expand the options for reading and parsing responses from LLMs.
- **Add adapters**: Add adapters for a wider variety of LLM providers and AI models.
- **Add evaluation and learning capabilities**: Add the ability for the systems to evaluate the output and learn.
## Contributing
Contributions are welcome! Please follow these steps:
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Commit your changes and submit a pull request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.