# Reasoning Framework
A Python package that adds R1-style reasoning capabilities to any large language model (LLM). This framework enables step-by-step reasoning and verification of responses using a two-step process (sketched below):
1. Initial reasoning and response generation
2. Verification and refinement of the response
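A minimal illustrative sketch of that flow — the function name and prompt wording below are hypothetical, not part of this package's API (the real entry point is shown in the Quick Start):

```python
def answer_with_verification(question: str, reason_llm, verify_llm) -> str:
    """Hypothetical two-step pipeline: reason first, then verify."""
    # Step 1: initial reasoning and response generation
    draft = reason_llm(f"Think step by step, then answer:\n{question}")
    # Step 2: verification and refinement of the response
    return verify_llm(
        f"Question: {question}\nDraft answer: {draft}\n"
        "Check the reasoning and return a corrected answer."
    )
```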
## Installation
Basic installation:
```bash
pip install reasoning
```
With OpenAI support:
```bash
pip install "reasoning[openai]"
```
With Anthropic support:
```bash
pip install "reasoning[anthropic]"
```
With all supported APIs:
```bash
pip install "reasoning[openai,anthropic]"
```
## Environment Variables
Depending on which API you're using, you'll need to set the appropriate environment variables:
- For OpenAI: `OPENAI_API_KEY`
- For Anthropic: `ANTHROPIC_API_KEY`
- For OpenRouter: `OPENROUTER_API_KEY`
You can set these in your shell:
```bash
export OPENAI_API_KEY='your-api-key'
export ANTHROPIC_API_KEY='your-api-key'
export OPENROUTER_API_KEY='your-api-key'
```
Or in Python:
```python
import os
os.environ['OPENAI_API_KEY'] = 'your-api-key'
```
## Quick Start
### Using OpenAI
```python
from reasoning import ReasoningFramework
from reasoning.examples.openai_example import create_openai_call
# Create model-specific callers
gpt4_call = create_openai_call("gpt-4")
gpt35_call = create_openai_call("gpt-3.5-turbo")
# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=gpt4_call,
    verification_llm_call=gpt35_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)
print("Original Message:", response.message)
print("\nReasoning Process:", response.reasoning)
print("\nInitial Response:", response.initial_response)
print("\nVerified Response:", response.final_response)
```
### Using Anthropic
```python
from reasoning import ReasoningFramework
from reasoning.examples.anthropic_example import create_anthropic_call
# Create model-specific callers
sonnet_call = create_anthropic_call("claude-3-sonnet")
opus_call = create_anthropic_call("claude-3-opus")
# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=sonnet_call,
    verification_llm_call=opus_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)
```
### Using OpenRouter
```python
from reasoning import ReasoningFramework
from reasoning.examples.openrouter_example import create_openrouter_call
# Create model-specific callers
r1_call = create_openrouter_call("deepseek/deepseek-r1")
claude_call = create_openrouter_call("anthropic/claude-3-sonnet")
# Initialize the framework
framework = ReasoningFramework(
    reasoning_llm_call=r1_call,
    verification_llm_call=claude_call
)

# Process a question
response = framework.process(
    "What would be the implications of achieving AGI?",
    reasoning_kwargs={"temperature": 0.7},
    verification_kwargs={"temperature": 0.5}
)
```
## Features
- Flexible integration with any LLM through callback functions
- Built-in support for OpenAI, Anthropic, and OpenRouter APIs
- Structured reasoning process with verification
- Customizable system prompts for both reasoning and verification
- Type-safe implementation using Pydantic models
- Comprehensive logging for debugging
## Advanced Usage
### Custom System Prompts
```python
framework = ReasoningFramework(
    reasoning_llm_call=my_llm_call,
    verification_llm_call=my_verification_call,
    reasoning_system_prompt="You are an expert at breaking down complex problems...",
    verification_system_prompt="You are a critical thinker who verifies conclusions..."
)
```
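Here `my_llm_call` and `my_verification_call` are user-defined callables — the same callback hook listed under Features. A minimal sketch of one such caller; the exact signature expected by `ReasoningFramework` is an assumption based on the Quick Start helpers, and the HTTP endpoint is purely hypothetical:

```python
import requests  # any HTTP client works; requests is used here for brevity

def my_llm_call(prompt: str, **kwargs) -> str:
    """Hypothetical caller for a local model server.

    Accepts the prompt plus optional generation kwargs (e.g. temperature)
    and returns the model's text response as a string.
    """
    resp = requests.post(
        "http://localhost:8000/generate",   # hypothetical endpoint
        json={"prompt": prompt, **kwargs},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["text"]              # assumed response shape
```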
### Error Handling
The framework includes built-in error handling and logging:
```python
import logging
logging.basicConfig(level=logging.DEBUG) # Set to see detailed logs
```
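Because each step ultimately calls an external API, a run can still fail at request time. A minimal sketch of guarding a call, reusing `framework` from the Quick Start — the exact exception types raised by `process` are an assumption, so this catches broadly:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

try:
    response = framework.process("What would be the implications of achieving AGI?")
except Exception:  # provider/network failures surface here; specific types are an assumption
    logger.exception("Reasoning run failed")
    raise
```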
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.