| Field | Value |
| --- | --- |
| Name | mchat-core |
| Version | 0.1.2 |
| Summary | Framework for building multi-agent chat applications with autogen |
| Upload time | 2025-08-23 16:18:50 |
| Requires Python | >=3.11 |
| License | MIT |
| Keywords | llm, agents, autogen, chat, tools |
# mchat_core
A collection of convenience functions for using LLM models and autogen agents driven by configuration files. Primarily used in [MChat](https://github.com/jspv/mchat) but written to be useful in a variety of use cases.
---
## Installation and Usage
Dependencies are declared in `pyproject.toml`. Development and dependency management are primarily done using `uv`.
The `[tools]` dependency group includes additional requirements for the bundled LLM module tools. You only need these if you plan to use the provided tools.
### Optional Tools Dependencies
- With uv (recommended): `uv sync --group tools`
- With pip (install only what you need; see `pyproject.toml`): `pip install tzlocal fredapi chromadb beautifulsoup4`
### Running Tests
- Run all tests: `pytest`
- Show skip reasons: `pytest -rs`
- Run only the tools tests: `pytest -m tools -rs`
- Exclude tools tests: `pytest -m "not tools"`
Note: Tool tests are marked with the `tools` marker and will auto-skip if the optional packages are not installed.
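The marker-plus-auto-skip pattern can be sketched as follows; the test name and the probed package are illustrative, not the project's actual tests:

```python
import pytest

# Tests for the optional tools carry the "tools" marker so they can be
# selected (pytest -m tools) or excluded (pytest -m "not tools").
@pytest.mark.tools
def test_uses_optional_dependency():
    # importorskip() skips this test (rather than erroring out) when the
    # optional package from the [tools] group is not installed.
    chromadb = pytest.importorskip("chromadb")
    assert chromadb is not None
```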
---
## Configuration
**Note:** This code is actively developed. Instructions and sample configs may become outdated.
**See the `examples.ipynb` notebook for more details.**
Configuration is managed in three files:
- `settings.toml`
- `.secrets.toml` *(optional but recommended)*
- `agents.yaml`
Edit `settings.toml` to configure your application. Here’s a guide to the available options:
---
### Models
Section names must start with `models.` (with a period) and contain no other periods.

Format: `models.type.model_id`, where `model_id` will appear in the list of available models.

```toml
[models.chat.gpt-4o]
api_key = "@format {this.openai_api_key}"
model = "gpt-4o"
api_type = "open_ai"
base_url = "https://api.openai.com/v1"
```
> NOTE: Image models and settings here are only for explicitly calling image models from prompts.
> The `generate_image` tool uses only the API key.
---
#### Required Fields
**Chat Models**
- `api_type`: "open_ai" or "azure"
- `model_type`: "chat"
- `model`: Name of the model
- `api_key`: Your key or Dynaconf lookup
- `base_url`: (if required by API)
**Azure Chat Models (additional)**
- `azure_endpoint`: URL for your endpoint
- `azure_deployment`: Deployment name for the model
- `api_version`: API version
**Image Models**
- `api_type`: "open_ai" or "azure"
- `model_type`: "image"
- `model`: Name of the model
- `size`: Size of images to generate
- `num_images`: Number of images to generate
- `api_key`: Your key or Dynaconf lookup
---
### Default Settings
- `default_model`: Default model to use
- `default_temperature`: Default temperature for generation
- `default_persona`: Default persona for generation
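In `settings.toml` these might look like the following sketch (the values are illustrative; `default_model` should match a `model_id` defined under `models`):

```toml
default_model = "gpt-4o"
default_temperature = 0.7
default_persona = "default"
```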
---
### Memory Model Configuration (Currently Disabled)
mchat can maintain conversational memory for long chats. When memory size exceeds model limits, conversations are summarized using a designated model (ideally, a cost-effective one).
Configurable properties:
- `memory_model`: Model ID used for memory (should match one in `models`)
- `memory_model_temperature`: Temperature for memory summarization
- `memory_model_max_tokens`: Token limit for memory model
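If re-enabled, the corresponding `settings.toml` entries might look like this sketch (model ID and values are illustrative):

```toml
memory_model = "gpt-4o-mini"
memory_model_temperature = 0.1
memory_model_max_tokens = 4096
```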
---
### Secrets Configuration
Some sensitive config settings (like API keys) should be in `.secrets.toml`:
```toml
# .secrets.toml
# dynaconf_merge = true

# Replace the following with your actual API keys
# openai_models_api_key = "oai_ai_api_key_goes_here"
# ms_models_api_key = "ms_openai_api_key_goes_here"
```
---
## Agents & Teams
mchat provides:
- A default persona
- Example agents: *linux computer* & *financial manager*
- Example teams: round-robin and selector
You can add more agents and teams at the top level in `agents.yaml` (same directory as this README), following the structure in `mchat/default_personas.yaml`.
When configuring personas, the `extra_context` list lets you define multi-shot prompts; see the `linux computer` persona in `mchat/default_personas.json` for an example.
## Contributing
Thank you for considering contributing to the project! To contribute, please follow these guidelines:
1. Fork the repository and clone it to your local machine.
2. Create a new branch for your feature or bug fix:
```shell
git checkout -b feature/your-feature-name
```
Replace `your-feature-name` with a descriptive name for your contribution.
3. Make the necessary changes and ensure that your code follows the project's coding conventions and style guidelines: currently PEP 8 for style and *black* for formatting.
4. Commit your changes with a clear and descriptive commit message:
```shell
git commit -m "Add your commit message here"
```
5. Push your branch to your forked repository:
```shell
git push origin feature/your-feature-name
```
6. Open a pull request from your forked repository to the main repository's `main` branch.
7. Provide a clear and detailed description of your changes in the pull request. Include any relevant information that would help reviewers understand your contribution.
## License
This project is licensed under the [MIT License](LICENSE).
## Contact
Feel free to reach out to @jspv on GitHub.
## Raw data
{
"_id": null,
"home_page": null,
"name": "mchat-core",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": "llm, agents, autogen, chat, tools",
"author": null,
"author_email": "jspv <jspvgithub@twinleaf.xyz>",
"download_url": "https://files.pythonhosted.org/packages/1f/da/d20345f71a72f78f79486afc2b9b552137197ba89cbbc0663a0ee9e5abde/mchat_core-0.1.2.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "MIT",
"summary": "Framework for building multi-agent chat applications with autogen",
"version": "0.1.2",
"project_urls": {
"Homepage": "https://github.com/jspv/mchat_core",
"Issues": "https://github.com/jspv/mchat_core/issues",
"Repository": "https://github.com/jspv/mchat_core"
},
"split_keywords": [
"llm",
" agents",
" autogen",
" chat",
" tools"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "a616e7003df414288035068ef9c0fc829b7cf8747ec9447111a8c96c72c73b4c",
"md5": "c3662dca758a8554af6f6124a2b6b495",
"sha256": "9f278ceebee1b81e1da9419a05123e8c27b501947e239595f1ce1a5f8d0b41b0"
},
"downloads": -1,
"filename": "mchat_core-0.1.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "c3662dca758a8554af6f6124a2b6b495",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 31193,
"upload_time": "2025-08-23T16:18:49",
"upload_time_iso_8601": "2025-08-23T16:18:49.137391Z",
"url": "https://files.pythonhosted.org/packages/a6/16/e7003df414288035068ef9c0fc829b7cf8747ec9447111a8c96c72c73b4c/mchat_core-0.1.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "1fdad20345f71a72f78f79486afc2b9b552137197ba89cbbc0663a0ee9e5abde",
"md5": "ed603772bc35a37cf30c1651796276e5",
"sha256": "07d267879f13eb3f2805359ce7a4ab6e4ba8c8ce6f90d38f0cc63a19e2e869a0"
},
"downloads": -1,
"filename": "mchat_core-0.1.2.tar.gz",
"has_sig": false,
"md5_digest": "ed603772bc35a37cf30c1651796276e5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 36930,
"upload_time": "2025-08-23T16:18:50",
"upload_time_iso_8601": "2025-08-23T16:18:50.385483Z",
"url": "https://files.pythonhosted.org/packages/1f/da/d20345f71a72f78f79486afc2b9b552137197ba89cbbc0663a0ee9e5abde/mchat_core-0.1.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-23 16:18:50",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "jspv",
"github_project": "mchat_core",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "mchat-core"
}