| Field | Value |
| --- | --- |
| Name | chatthy |
| Version | 0.2.11 |
| home_page | None |
| Summary | A minimal LLM network chat server/client app. |
| upload_time | 2025-08-15 17:46:25 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.11 |
| license | None |
| keywords | hy, hylang, zeromq, llm, openai, anthropic |
| VCS | https://github.com/atisharma/chatthy |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# chatthy
An asynchronous terminal server/multiple-client setup for conducting and managing chats with LLMs.
This is the successor project to [llama-farm](https://github.com/atisharma/llama_farm).
The RAG/agent functionality should be split out into an API layer.
### network architecture
- [x] client/server RPC-type architecture
- [x] message signing (sketched below)
- [ ] ensure stream chunk ordering
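A minimal sketch of what the RPC-plus-signing pair could look like over ZeroMQ (the transport named in the package keywords), written in Python rather than the project's Hy; the wire format, key handling, and method names are assumptions, not chatthy's actual protocol:

```python
# Signed request/reply over ZeroMQ: each message travels as [signature, payload].
# Assumes pyzmq and a pre-shared HMAC key; illustrative only.
import hashlib
import hmac
import json

import zmq

SECRET = b"shared-secret"  # assumption: key distribution is out of scope here


def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()


def serve() -> None:
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind("tcp://*:5555")
    while True:
        sig, payload = sock.recv_multipart()
        if not hmac.compare_digest(sig, sign(payload)):
            reply = json.dumps({"error": "bad signature"}).encode()
        else:
            request = json.loads(payload)  # e.g. {"method": "chat", ...}
            reply = json.dumps({"ok": True, "echo": request}).encode()
        sock.send_multipart([sign(reply), reply])


def call(method: str, **params):
    sock = zmq.Context.instance().socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")
    payload = json.dumps({"method": method, **params}).encode()
    sock.send_multipart([sign(payload), payload])
    sig, reply = sock.recv_multipart()
    assert hmac.compare_digest(sig, sign(reply)), "bad server signature"
    return json.loads(reply)
```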
### chat management
- [x] basic chat persistence and management
- [x] set, switch to saved system prompts (personalities)
- [ ] manage prompts like chats (as files)
- [x] chat truncation to token length
- [x] rename chat
- [x] profiles (profile x personalities -> sets of chats)
- [ ] import/export chat to client-side file
- [x] remove text between `<think>` tags when saving (see the sketch after this list)
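Stripping think-tags can be pictured as a single regex pass before a chat hits disk; a hedged sketch, not the project's actual implementation:

```python
# Drop <think>...</think> reasoning blocks emitted by "thinking" models
# before persisting a chat. The regex is an assumption, not chatthy's code.
import re

THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think(text: str) -> str:
    return THINK_RE.sub("", text)


assert strip_think("<think>internal chain</think>Hello") == "Hello"
```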
### context workspace
- [x] context workspace (load/drop files)
- [x] client inject from file
- [x] client inject from other sources, e.g. youtube (trag)
- [x] templates for standard instruction requests (trag)
- [x] context workspace - bench/suspend files (hidden by filename; sketched below)
- [ ] local files / folders in transient workspace
- [ ] checkboxes for delete / show / hide
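One plausible reading of "hidden by filename" is a dotfile convention: benching a file renames it so the default listing skips it. The convention and function names below are assumptions:

```python
# Sketch of bench/suspend via filename, assuming a leading "." marks a
# workspace file as hidden; chatthy's real convention may differ.
from pathlib import Path


def workspace_files(root: str, include_hidden: bool = False):
    """Yield workspace files, skipping benched ones unless asked."""
    for p in sorted(Path(root).iterdir()):
        if p.is_file() and (include_hidden or not p.name.startswith(".")):
            yield p


def suspend(path: str) -> None:
    """Bench a file by renaming it so the default listing hides it."""
    p = Path(path)
    p.rename(p.with_name("." + p.name))
```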
### client interface
- [x] can switch between Anthropic, OpenAI, tabbyAPI providers and models
- [x] streaming
- [x] syntax highlighting
- [x] decent REPL
- [x] REPL command mode (see the sketch after this list)
- [x] cut/copy from output
- [x] client-side prompt editing
- [ ] vimish keys in output
- [ ] client-side chat/message editing (how? temporarily set the input field history? Fire up `$EDITOR` in client?)
  - edit via chat local import/export
- [ ] latex rendering (this is tricky in the context of prompt-toolkit, but see flatlatex).
- [ ] generation cancellation
- [ ] tkinter UI
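A command mode might dispatch on a leading `/`, along these lines; a sketch assuming `prompt_toolkit` (which the client is built on), with invented command names:

```python
# Minimal REPL loop with a "/command" mode. The commands shown are
# illustrative, not chatthy's actual command set.
from prompt_toolkit import PromptSession


def repl() -> None:
    session = PromptSession("chatthy> ")
    while True:
        try:
            line = session.prompt()
        except (EOFError, KeyboardInterrupt):
            break
        if line.startswith("/"):  # command mode
            cmd, _, arg = line[1:].partition(" ")
            if cmd == "quit":
                break
            print(f"(would run command {cmd!r} with {arg!r})")
        elif line.strip():
            print(f"(would send {line!r} to the server)")


if __name__ == "__main__":
    repl()
```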
### multimodal
- [ ] design with multimodal models in mind
- [ ] image sending and use
- [ ] image display
### miscellaneous / extensions
- [x] use proper config dir (group?)
- [ ] dump default conf if missing (sketched below)
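The two items could combine as below, assuming the `platformdirs` package for the proper config dir and a TOML config file; the location, package choice, and default contents are all assumptions:

```python
# Write a default config on first run, in the platform-appropriate
# config directory. File name and contents are assumed, not chatthy's.
from pathlib import Path

import platformdirs

DEFAULT_CONF = 'provider = "openai"\n'  # assumed placeholder contents


def config_path() -> Path:
    return Path(platformdirs.user_config_dir("chatthy")) / "config.toml"


def ensure_config() -> Path:
    path = config_path()
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(DEFAULT_CONF)
    return path
```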
### tool / agentic use
Use agents at the API level, which is to say, use an intelligent router.
This separates the chatthy system from the RAG/LLM logic.
- [ ] (auto) tools (evolve from llama-farm -> trag)
- [ ] user-defined tool plugins (see the sketch after this list)
- [ ] server use vdb context at LLM will (tool)
- [ ] iterative workflows (refer to llama-farm, consider smolagents)
- [ ] tool chains
- [ ] tool: workspace file write, delete
- [ ] tool: workspace file patch/diff
- [ ] tool: rag query tool
- [ ] MCP agents?
- [ ] smolagents / archgw?
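User-defined tool plugins might reduce to a registry that the router dispatches into; the decorator shape and tool names here are illustrative, not chatthy's API:

```python
# A tool registry: plugins register themselves by name, and the router
# dispatches model-requested tool calls into the registry.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}


def tool(name: str):
    """Decorator registering a function as a callable tool."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register


@tool("workspace_write")
def workspace_write(filename: str, text: str) -> str:
    # A real implementation would write into the context workspace.
    return f"wrote {len(text)} chars to {filename}"


def dispatch(name: str, **kwargs) -> str:
    """Route a tool call to its plugin, or report an unknown tool."""
    if name not in TOOLS:
        return f"unknown tool: {name}"
    return TOOLS[name](**kwargs)
```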
### RAG
- [x] summaries and standard client instructions (trag)
- [x] server use vdb context on request
- [x] set RAG provider client-side (e.g. Mistral Small, Phi-4)
- [ ] consider best method of pdf conversion / ingestion (fvdb), OOB (image models?)
- [ ] full arxiv paper ingestion (fvdb) - consolidate into one latex file OOB
- [ ] vdb result reranking with context, and winnowing (agent?; sketched below)
- [ ] vdb results -> workspace (agent?)
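Winnowing could be as simple as keeping hits that score close to the best one; the `(score, text)` result shape below is an assumption, not fvdb's actual return type:

```python
# Keep at most k vdb results whose score is within `ratio` of the best hit.
# Result shape and thresholds are assumptions for illustration.
def winnow(results: list[tuple[float, str]], ratio: float = 0.8, k: int = 5) -> list[str]:
    if not results:
        return []
    ranked = sorted(results, key=lambda r: r[0], reverse=True)
    best = ranked[0][0]
    return [text for score, text in ranked[:k] if score >= best * ratio]
```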
## unallocated / out of scope
- audio streaming? (see matatonic's servers)
- workflows (tree of instruction templates)
- tasks
## Raw data
{
  "_id": null,
  "home_page": null,
  "name": "chatthy",
  "maintainer": null,
  "docs_url": null,
  "requires_python": ">=3.11",
  "maintainer_email": null,
  "keywords": "hy, hylang, zeromq, llm, openai, anthropic",
  "author": null,
  "author_email": "Ati Sharma <ati+chatthy@agalmic.ltd>",
  "download_url": "https://files.pythonhosted.org/packages/7e/d1/b21023e96caadff1528d299028a4e41ebb0ddfcb54cfdd376e074fa09464/chatthy-0.2.11.tar.gz",
  "platform": null,
"description": "# chatthy\n\nAn asynchronous terminal server/multiple-client setup for conducting and managing chats with LLMs.\n\nThis is the successor project to [llama-farm](https://github.com/atisharma/llama_farm)\n\nThe RAG/agent functionality should be split out into an API layer.\n\n\n### network architecture\n\n- [x] client/server RPC-type architecture\n- [x] message signing\n- [ ] ensure stream chunk ordering\n\n\n### chat management\n\n- [x] basic chat persistence and management\n- [x] set, switch to saved system prompts (personalities)\n- [ ] manage prompts like chats (as files)\n- [x] chat truncation to token length\n- [x] rename chat\n- [x] profiles (profile x personalities -> sets of chats)\n- [ ] import/export chat to client-side file\n- [x] remove text between <think> tags when saving\n\n\n### context workspace\n\n- [x] context workspace (load/drop files)\n- [x] client inject from file\n- [x] client inject from other sources, e.g. youtube (trag)\n- [x] templates for standard instruction requests (trag)\n- [x] context workspace - bench/suspend files (hidden by filename)\n- [ ] local files / folders in transient workspace\n- [ ] checkboxes for delete / show / hide\n\n\n### client interface\n\n- [x] can switch between Anthropic, OpenAI, tabbyAPI providers and models\n- [x] streaming\n- [x] syntax highlighting\n- [x] decent REPL\n- [x] REPL command mode\n- [x] cut/copy from output\n- [x] client-side prompt editing\n- [ ] vimish keys in output\n- [ ] client-side chat/message editing (how? temporarily set the input field history? Fire up `$EDITOR` in client?)\n - edit via chat local import/export\n- [ ] latex rendering (this is tricky in the context of prompt-toolkit, but see flatlatex).\n- [ ] generation cancellation\n- [ ] tkinter UI\n\n\n### multimodal\n\n- [ ] design with multimodal models in mind\n- [ ] image sending and use\n- [ ] image display\n\n\n### miscellaneous / extensions\n\n- [x] use proper config dir (group?)\n- [ ] dump default conf if missing\n\n\n### tool / agentic use\n\nUse agents at the API level, which is to say, use an intelligent router.\nThis separates the chatthy system from the RAG/LLM logic.\n\n- [ ] (auto) tools (evolve from llama-farm -> trag)\n- [ ] user defined tool plugins\n- [ ] server use vdb context at LLM will (tool)\n- [ ] iterative workflows (refer to llama-farm, consider smolagents)\n- [ ] tool chains\n- [ ] tool: workspace file write, delete\n- [ ] tool: workspace file patch/diff\n- [ ] tool: rag query tool\n- [ ] MCP agents?\n- [ ] smolagents / archgw?\n\n\n### RAG\n\n- [x] summaries and standard client instructions (trag)\n- [x] server use vdb context on request\n- [x] set RAG provider client-side (e.g. Mistral Small, Phi-4)\n- [ ] consider best method of pdf conversion / ingestion (fvdb), OOB (image models?)\n- [ ] full arxiv paper ingestion (fvdb) - consolidate into one latex file OOB\n- [ ] vdb result reranking with context, and winnowing (agent?)\n- [ ] vdb results -> workspace (agent?)\n\n\n## unallocated / out of scope\n\naudio streaming ? - see matatonic's servers\nworkflows (tree of instruction templates)\ntasks\n",
"bugtrack_url": null,
"license": null,
"summary": "A minimal LLM network chat server/client app.",
"version": "0.2.11",
"project_urls": {
"Repository": "https://github.com/atisharma/chatthy"
},
"split_keywords": [
"hy",
" hylang",
" zeromq",
" llm",
" openai",
" anthropic"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "a5a288e6ae6ec2a9530ddfb3fcd52bcfd229f380ab5c9aa7feef38fff1ce8849",
"md5": "d38210348a38743edb4effea82189875",
"sha256": "5a466191b6143ce67ac38abb8a8d29fd4266ead9177f20865b9c0fb4655c2669"
},
"downloads": -1,
"filename": "chatthy-0.2.11-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d38210348a38743edb4effea82189875",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 40970,
"upload_time": "2025-08-15T17:46:24",
"upload_time_iso_8601": "2025-08-15T17:46:24.482355Z",
"url": "https://files.pythonhosted.org/packages/a5/a2/88e6ae6ec2a9530ddfb3fcd52bcfd229f380ab5c9aa7feef38fff1ce8849/chatthy-0.2.11-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "7ed1b21023e96caadff1528d299028a4e41ebb0ddfcb54cfdd376e074fa09464",
"md5": "5a9e55b5b5631c96fb8f5f1fd3467dc4",
"sha256": "df0210761d3c377e565e19f99a7f0b59ff80e4af534ae1bcc851b688036a2df2"
},
"downloads": -1,
"filename": "chatthy-0.2.11.tar.gz",
"has_sig": false,
"md5_digest": "5a9e55b5b5631c96fb8f5f1fd3467dc4",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 40187,
"upload_time": "2025-08-15T17:46:25",
"upload_time_iso_8601": "2025-08-15T17:46:25.656120Z",
"url": "https://files.pythonhosted.org/packages/7e/d1/b21023e96caadff1528d299028a4e41ebb0ddfcb54cfdd376e074fa09464/chatthy-0.2.11.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-15 17:46:25",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "atisharma",
"github_project": "chatthy",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "chatthy"
}
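The record above is what the PyPI JSON API serves; a short sketch (assuming the `requests` package) that fetches it and checks the sdist against the sha256 digest listed:

```python
# Fetch the chatthy 0.2.11 record from PyPI's JSON API and verify the
# sdist checksum against the digest in the record.
import hashlib

import requests

meta = requests.get("https://pypi.org/pypi/chatthy/0.2.11/json", timeout=30).json()
sdist = next(u for u in meta["urls"] if u["packagetype"] == "sdist")

data = requests.get(sdist["url"], timeout=30).content
digest = hashlib.sha256(data).hexdigest()
assert digest == sdist["digests"]["sha256"], "checksum mismatch"
print(f"verified {sdist['filename']}: sha256 {digest}")
```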