chain-of-thought-tool

Name: chain-of-thought-tool
Version: 0.1.0
Summary: A lightweight Chain of Thought reasoning tool for LLM function calling
Homepage: https://github.com/democratize-technology/chain-of-thought-tool
Author: Code Developer
Requires Python: >=3.8
License: not specified
Keywords: llm, function-calling, reasoning, ai, tools, chain-of-thought, cot
Requirements: none recorded
Upload time: 2025-09-03 02:48:08
# Chain of Thought Tool

A lightweight Python package that provides structured Chain of Thought reasoning capabilities for LLMs through function calling.

## Installation

```bash
pip install chain-of-thought-tool
```

Or install from source:
```bash
cd chain-of-thought-tool
pip install -e .
```

## Quick Start

```python
from chain_of_thought import TOOL_SPECS, HANDLERS

# Add to your LLM tools array
tools = [
    *TOOL_SPECS,  # Adds chain_of_thought_step, get_chain_summary, clear_chain
]

# In your tool handling logic
def handle_tool_call(tool_name, tool_args):
    if tool_name in HANDLERS:
        return HANDLERS[tool_name](**tool_args)
    # ... handle other tools
```

## Usage with AWS Bedrock Converse API

```python
import boto3
from chain_of_thought import TOOL_SPECS, HANDLERS

bedrock = boto3.client('bedrock-runtime')

# Your conversation with tools
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=[
        {
            "role": "user", 
            "content": [{"text": "Help me think through whether I should buy a house or keep renting."}]
        }
    ],
    toolConfig={
        "tools": TOOL_SPECS  # Just drop it in!
    }
)

# Handle tool calls
for content in response['output']['message']['content']:
    if content.get('toolUse'):
        tool_use = content['toolUse']
        tool_name = tool_use['name']
        tool_args = tool_use['input']
        
        # Execute the tool
        result = HANDLERS[tool_name](**tool_args)
        print(f"Tool {tool_name} result: {result}")
```
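
The loop above executes the tool but never returns the result to the model. Below is a minimal sketch of the follow-up turn, assuming the conversation so far is kept in a `messages` list and that the handler returns a JSON-serializable dict (otherwise wrap it in a `{"text": ...}` block):

```python
# Append the assistant turn that requested the tool, then the tool result.
messages.append(response['output']['message'])
messages.append({
    "role": "user",
    "content": [{
        "toolResult": {
            "toolUseId": tool_use['toolUseId'],
            "content": [{"json": result}]
        }
    }]
})

# Ask the model to continue reasoning now that the tool result is available.
follow_up = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",
    messages=messages,
    toolConfig={"tools": TOOL_SPECS}
)
```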

## Usage with OpenAI

```python
import json
import openai
from chain_of_thought import TOOL_SPECS, HANDLERS

# Convert to OpenAI format
openai_tools = []
for tool in TOOL_SPECS:
    openai_tools.append({
        "type": "function",
        "function": {
            "name": tool["toolSpec"]["name"],
            "description": tool["toolSpec"]["description"],
            "parameters": tool["toolSpec"]["inputSchema"]["json"]
        }
    })

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me think through a complex decision."}],
    tools=openai_tools
)

# Handle tool calls
if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        tool_args = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string; avoid eval
        result = HANDLERS[tool_call.function.name](**tool_args)
```
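
To let the model use the tool output, the results go back as `tool` role messages before the next call. A minimal sketch, assuming the request messages from the call above are kept in a `messages` list and each handler returns a JSON-serializable value:

```python
messages.append(response.choices[0].message)  # assistant turn containing the tool calls

for tool_call in response.choices[0].message.tool_calls:
    tool_args = json.loads(tool_call.function.arguments)
    result = HANDLERS[tool_call.function.name](**tool_args)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    })

follow_up = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    tools=openai_tools,
)
```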

## How It Works

The Chain of Thought tool provides three main functions:

### 1. `chain_of_thought_step`
Process individual thoughts in a structured sequence with confidence tracking:

```python
{
    "thought": "I need to consider the financial implications of buying vs renting",
    "step_number": 1,
    "total_steps": 5,
    "next_step_needed": true,
    "reasoning_stage": "Problem Definition",
    "confidence": 0.8,
    "evidence": ["Current market conditions", "Personal financial situation"],
    "assumptions": ["Interest rates will remain stable"]
}
```
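
Since the handlers are ordinary Python callables, the same arguments can be passed directly when testing outside an LLM loop. A minimal sketch; the exact shape of the returned value depends on the package's implementation:

```python
from chain_of_thought import HANDLERS

step_result = HANDLERS["chain_of_thought_step"](
    thought="I need to consider the financial implications of buying vs renting",
    step_number=1,
    total_steps=5,
    next_step_needed=True,
    reasoning_stage="Problem Definition",
    confidence=0.8,
    evidence=["Current market conditions", "Personal financial situation"],
    assumptions=["Interest rates will remain stable"],
)
print(step_result)  # structured feedback for this step, as produced by the tool
```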

### 2. `get_chain_summary`
Get a comprehensive summary of the thinking process:

```python
# No arguments needed
{}
```

### 3. `clear_chain`
Reset the thinking process:

```python
# No arguments needed  
{}
```
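
Both handlers can likewise be called directly; a minimal sketch (the summary's exact format depends on the package):

```python
summary = HANDLERS["get_chain_summary"]()   # takes no arguments
print(summary)

HANDLERS["clear_chain"]()                   # reset before starting a new problem
```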

## Advanced Features

### Confidence Tracking
Each step can include a confidence level (0.0-1.0) to indicate certainty:

```python
{
    "thought": "Based on my analysis, renting is more flexible",
    "confidence": 0.85,
    ...
}
```

### Dependencies and Contradictions
Track relationships between thoughts:

```python
{
    "thought": "This contradicts my earlier assumption",
    "dependencies": [1, 2],  # Depends on steps 1 and 2
    "contradicts": [3],      # Contradicts step 3
    ...
}
```

### Evidence and Assumptions
Make reasoning transparent:

```python
{
    "evidence": ["Market data shows 5% annual appreciation"],
    "assumptions": ["My job will remain stable"],
    ...
}
```

### Structured Stages
Guide thinking through defined stages (a usage sketch follows the list):
- `Problem Definition`
- `Research` 
- `Analysis`
- `Synthesis`
- `Conclusion`
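
A minimal sketch of moving a chain through these stages by setting `reasoning_stage` on each step; the thought texts here are placeholders, since in practice the model supplies them through tool calls:

```python
stages = ["Problem Definition", "Research", "Analysis", "Synthesis", "Conclusion"]

for i, stage in enumerate(stages, start=1):
    HANDLERS["chain_of_thought_step"](
        thought=f"Working through the {stage} stage...",
        step_number=i,
        total_steps=len(stages),
        next_step_needed=(i < len(stages)),
        reasoning_stage=stage,
    )
```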

## Why This Approach?

**Traditional Problems:**
- ❌ MCP tools require separate server processes
- ❌ Framework-specific tools lock you into a particular stack (LangChain, etc.)
- ❌ Complex infrastructure for simple functions

**Our Solution:**
- ✅ Simple `pip install` and import
- ✅ Works with any LLM API (OpenAI, Anthropic, etc.)
- ✅ Self-contained tool specs and implementations
- ✅ Zero infrastructure - just Python functions
- ✅ Structured reasoning with confidence tracking

## Thread Safety

For production use with multiple concurrent conversations:

```python
from chain_of_thought import ThreadAwareChainOfThought

# Create isolated instance per conversation
cot = ThreadAwareChainOfThought(conversation_id="user-123")
tools = cot.get_tool_specs()
handlers = cot.get_handlers()

# Use in your conversation
response = bedrock.converse(
    toolConfig={"tools": tools},
    # ...
)

# Handle with thread-specific handlers
result = handlers[tool_name](**tool_args)
```
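
One way to route many concurrent conversations is to keep one isolated instance per conversation id. A minimal sketch; the registry and helper below are illustrative and not part of the package:

```python
from chain_of_thought import ThreadAwareChainOfThought

_instances = {}  # hypothetical per-conversation registry

def get_cot(conversation_id):
    """Lazily create and cache the isolated instance for one conversation."""
    if conversation_id not in _instances:
        _instances[conversation_id] = ThreadAwareChainOfThought(conversation_id=conversation_id)
    return _instances[conversation_id]

def handle_tool_call(conversation_id, tool_name, tool_args):
    handlers = get_cot(conversation_id).get_handlers()
    if tool_name in handlers:
        return handlers[tool_name](**tool_args)
    return None  # not a chain-of-thought tool; fall through to other handlers
```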

## Contributing

This project demonstrates pluggable LLM tools. Contributions welcome for:
- Improved reasoning capabilities
- Additional metadata tracking
- Better summarization algorithms
- Integration helpers for more platforms

## License

MIT License

            
