
# autologic

autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper ["SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures"](https://arxiv.org/abs/2402.03620).

It provides a way for large language models (LLMs) to automatically compose modular reasoning structures to tackle complex reasoning tasks, without the need for training data or labels.

## Key Features

- Implements the full SELF-DISCOVER pipeline enabling LLMs to self-discover reasoning structures
- Works with Gemini Pro and local GGUF models via llama-cpp-python
- Flexible integration powered by a simple LLMConfig
- Interactive prompts or standalone execution
- CLI and Python API access

## Framework Overview

The SELF-DISCOVER framework consists of two key stages:

Stage 1: Self-discover a reasoning structure for the task from a set of seed "reasoning modules"

Stage 2: Solve instances by following the composed structure

Stage 1 consists of three steps, each guided by meta-prompting:

1. SELECT relevant reasoning modules
2. ADAPT modules to be task-specific
3. IMPLEMENT structure with adapted modules
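
The pipeline can be pictured roughly as the sketch below. It is illustrative only; the function and argument names are not part of the autologic API, and `llm` stands for any callable that takes a prompt string and returns the model's text response.

```python
# Illustrative sketch of the SELF-DISCOVER pipeline (not the autologic API).
def self_discover(task, llm, seed_modules):
    # Stage 1, Step 1: SELECT reasoning modules relevant to the task
    selected = llm(
        f"Select the reasoning modules useful for this task: {task}\n"
        f"Candidate modules: {seed_modules}"
    )
    # Stage 1, Step 2: ADAPT the selected modules to be task-specific
    adapted = llm(f"Rephrase these modules so they are specific to '{task}': {selected}")
    # Stage 1, Step 3: IMPLEMENT a step-by-step reasoning structure from them
    structure = llm(f"Turn these adapted modules into a JSON reasoning structure: {adapted}")
    # Stage 2: solve the task instance by following the composed structure
    return llm(f"Follow this reasoning structure to solve '{task}': {structure}")
```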

## Getting Started

### Install from PyPI

```bash
python3 -m venv venv
source venv/bin/activate
CMAKE_ARGS="-DLLAMA_METAL=on" python3 -m pip install autologic
```

### Install as an editable package

The following instructions show the CMake arguments for compiling the llama-cpp-python dependency with Metal support.
Refer to https://github.com/abetlen/llama-cpp-python for instructions for other GPUs.

```bash
git clone https://github.com/waszumteufel/autologic.git
cd autologic
python3 -m venv venv
source venv/bin/activate
CMAKE_ARGS="-DLLAMA_METAL=on" python3 -m pip install -e .
```

### Install directly from GitHub

```bash
python3 -m venv venv
source venv/bin/activate
CMAKE_ARGS="-DLLAMA_METAL=on" python3 -m pip install git+https://github.com/waszumteufel/autologic@main#egg=autologic
```

#### Note on CMAKE_ARGS

The CMAKE_ARGS environment variable directs llama-cpp-python to be compiled with support for a specific hardware-acceleration backend.
Below is a non-exhaustive list of common values.

**Metal (MPS)**

For Macs with Apple M-series processors.

```
CMAKE_ARGS="-DLLAMA_METAL=on" 
```

**Vulkan**

```
CMAKE_ARGS="-DLLAMA_VULKAN=on" 
```

**OpenBLAS**

```
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"
```

**cuBLAS**

For NVIDIA cards.

```
CMAKE_ARGS="-DLLAMA_CUBLAS=on"
```

**hipBLAS**

For AMD cards.

```
CMAKE_ARGS="-DLLAMA_HIPBLAS=on"
```
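
For example, an installation with cuBLAS support combines one of these values with the install command shown earlier (assuming an NVIDIA GPU and the CUDA toolkit are available):

```bash
CMAKE_ARGS="-DLLAMA_CUBLAS=on" python3 -m pip install autologic
```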

## Configuration

### Gemini API Key

To use Gemini Pro, set the `GEMINI_PRO_API_KEY` environment variable:

```
GEMINI_PRO_API_KEY="sk-..."
```

If the key is not passed explicitly via `LLMConfig`, `solve()` will automatically read it from a `.env` file in the working directory or from the environment.

### OpenAI API Key

To use OpenAI models, set the `AUTOLOGIC_OPENAI_API_KEY` environment variable:

```
AUTOLOGIC_OPENAI_API_KEY="sk-..."
```

If the key is not passed explicitly via `LLMConfig`, `solve()` will automatically read it from a `.env` file in the working directory or from the environment.

### Environment Variables 

It is recommended to keep sensitive values such as API keys in a `.env` file.

If present, the `.env` file is loaded automatically from the current working directory and used to populate any unset configuration values with matching environment variable names (i.e. `GEMINI_PRO_API_KEY`, `AUTOLOGIC_OPENAI_API_KEY`). See the "Configuration" section of the documentation for the full set of options.

This keeps credentials out of your code: the `LLMConfig` can be initialized without passing these values explicitly every time.
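
For example, a `.env` file in the working directory might contain entries like these (placeholder values):

```
GEMINI_PRO_API_KEY="sk-..."
AUTOLOGIC_OPENAI_API_KEY="sk-..."
```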

## Usage

### Import and call the top-level `solve()` method:

#### Mixtral Example with the Python API

The code block below shows how to run SELF-DISCOVER against a local Mixtral GGUF model through the Python API.

```python
from autologic import reasoningEngine

my_config = reasoningEngine.LLMConfig(
        gguf_path="/tmp/mixtral-8x7b-instruct-v0.1.Q8_0.gguf",
        context_length=8000,
        model_type=reasoningEngine.ModelType.LOCAL,
        temp=0.2,
        chat_template=reasoningEngine.ChatTemplate.MIXTRAL_INSTRUCT,
        threads=12
    )

result = reasoningEngine.solve(task = "What is 2 + 2?", discover_config=my_config)
print(result)
```
#### Gemini Pro Example with the Python API

The API key for Gemini Pro is read from the `GEMINI_PRO_API_KEY` environment variable or a `.env` file. It can also be passed explicitly via the `autologic.reasoningEngine.LLMConfig.api_key` field.

```python
from autologic import reasoningEngine

my_config = reasoningEngine.LLMConfig(
        context_length=8000,
        model_type=reasoningEngine.ModelType.GEMINI,
        temp=0.2,
    )
result = reasoningEngine.solve(task = "What is 2 + 2?", discover_config=my_config)
print(result)
```
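
If you prefer not to rely on the environment, the key can also be supplied through the documented `api_key` field. A minimal sketch, assuming the field accepts the raw key string:

```python
import os

from autologic import reasoningEngine

my_config = reasoningEngine.LLMConfig(
    context_length=8000,
    model_type=reasoningEngine.ModelType.GEMINI,
    temp=0.2,
    api_key=os.environ["GEMINI_PRO_API_KEY"],  # passed explicitly instead of being read from env/.env
)
result = reasoningEngine.solve(task="What is 2 + 2?", discover_config=my_config)
print(result)
```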

#### OpenAI Example with the Python API

The API key for OpenAI is read from the `AUTOLOGIC_OPENAI_API_KEY` environment variable or a `.env` file. It can also be passed explicitly via the `autologic.reasoningEngine.LLMConfig.api_key` field.

```python
from autologic import reasoningEngine

llmConfig = reasoningEngine.LLMConfig(
    model_type = reasoningEngine.ModelType.OPENAI,
    model_name = "gpt-3.5-turbo-0125", # You can use any model name available to you through openai - omitting the model_name will default to gpt-3.5-turbo-0125
    temp = 0.2,
    context_length = 2000
)

problem_task = "Beth and Sam are 500 miles apart. If Beth travels at 60mph and leaves her house at 1pm, what time will she arrive at Sam's house?" # 9:20PM
answer = reasoningEngine.solve(task = problem_task,verbose=True,discover_config=llmConfig)
```
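
For reference, the expected answer noted in the comment follows from 500 miles ÷ 60 mph ≈ 8 hours 20 minutes of travel time, so a 1 pm departure gives an arrival of about 9:20 pm.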

#### Mixtral Solving a Prompt with a Reasoning Structure Generated by OpenAI's GPT-4

```python 
from autologic import reasoningEngine

solve_config = reasoningEngine.LLMConfig(
        gguf_path="/tmp/mixtral-8x7b-instruct-v0.1.Q8_0.gguf",
        context_length=8000,
        model_type=reasoningEngine.ModelType.LOCAL,
        temp=0.2,
        chat_template=reasoningEngine.ChatTemplate.MIXTRAL_INSTRUCT,
        threads=12
    )

discover_config = reasoningEngine.LLMConfig(
    model_type = reasoningEngine.ModelType.OPENAI,
    # You can use any model name available to you through openai - omitting the model_name will default to gpt-3.5-turbo-0125
    model_name = "gpt-4-turbo-preview", 
    temp = 0.2,
    context_length = 2000
)

prompt = "Beth and Sam are 500 miles apart. If Beth travels at 60mph and leaves her house at 1pm, what time will she arrive at Sam's house?" # 9:20PM

result = reasoningEngine.solve(
    task = prompt, 
    discover_config=discover_config,
    solve_config=solve_config,
    verbose=True
)
print(result)
```

#### Mixtral Solving a Prompt with a Reasoning Structure Generated by Google's Gemini Pro

```python
from autologic import reasoningEngine

solve_config = reasoningEngine.LLMConfig(
        gguf_path="/Users/kerekovskik/.cache/lm-studio/models/misc/manual_dl/mixtral-8x7b-instruct-v0.1.Q8_0.gguf",
        context_length=8000,
        model_type=reasoningEngine.ModelType.LOCAL,
        temp=0.2,
        chat_template=reasoningEngine.ChatTemplate.MIXTRAL_INSTRUCT,
        threads=12
    )

discover_config = reasoningEngine.LLMConfig(
    model_type = reasoningEngine.ModelType.GEMINI,
    temp = 0.2,
    context_length = 2000
)

prompt = "Beth and Sam are 500 miles apart. If Beth travels at 60mph and leaves her house at 1pm, what time will she arrive at Sam's house?" # 9:20PM

result = reasoningEngine.solve(
    task = prompt, 
    discover_config=discover_config,
    solve_config=solve_config,
    verbose=True
)
print(result)
```

### Or use the CLI:

#### CLI Usage with a prompt

##### Gemini Pro example with CLI

Example with a prompt 

```bash
autologic gemini --temp 0.2 \
--retries 3 \
--prompt "What weighs more? A pound of feathers or a pound of lead?"
```

Example output 


```
Thinking...
2024-02-14 18:50:34.054248 | Starting SELECT Phase
2024-02-14 18:50:37.008261 | SELECT Phase Complete
2024-02-14 18:50:37.008367 | Starting ADAPT Phase
2024-02-14 18:50:40.322604 | ADAPT Phase Complete
2024-02-14 18:50:40.322641 | Starting IMPLEMENT Phase
2024-02-14 18:50:45.567405 | IMPLEMENT Phase Complete
2024-02-14 18:50:45.567738 | Starting to Solve Problem using Reasoning Structure
2024-02-14 18:50:52.775653 | Solution has been found.


ANSWER: The weight of a pound of feathers is equal to the weight of a pound of lead.
```


##### OpenAI example with CLI

Example with a prompt 

```bash
autologic openai --temp 0.2 \
--retries 3 \
--prompt "What weighs more? A pound of feathers or a pound of lead?"
```

Example output 


```
Thinking...
2024-02-14 18:50:34.054248 | Starting SELECT Phase
2024-02-14 18:50:37.008261 | SELECT Phase Complete
2024-02-14 18:50:37.008367 | Starting ADAPT Phase
2024-02-14 18:50:40.322604 | ADAPT Phase Complete
2024-02-14 18:50:40.322641 | Starting IMPLEMENT Phase
2024-02-14 18:50:45.567405 | IMPLEMENT Phase Complete
2024-02-14 18:50:45.567738 | Starting to Solve Problem using Reasoning Structure
2024-02-14 18:50:52.775653 | Solution has been found.


ANSWER: The weight of a pound of feathers is equal to the weight of a pound of lead.
```

##### Qwen1.5 72B example with CLI

The example below shows how to point the CLI at a GGUF model with `--gguf_path` and how to specify the model's prompt format with the `--format` flag.
It also uses the `--verbose` flag, which prints detailed reasoning information, including the reasoning structure the LLM self-discovers for the given problem.

```bash
autologic local --temp 0.2 \
--retries 5 \
--prompt "What weighs more? A pound of feathers or a pound of lead?" \
--gguf_path /tmp/qwen1_5-72b-chat-q5_k_m.gguf \
--threads 12 \
--layers -1 \
--format chatml \
--verbose  
```

Example output:

```
Thinking...
2024-02-14 20:05:23.116629 | Starting SELECT Phase
2024-02-14 20:06:41.595659 | SELECT Phase Complete
2024-02-14 20:06:41.595711 | Reasoning Modules Picked:
- 3. How can I simplify the problem so that it is easier to solve?
- 9. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.
- 16. What are the underlying causes or factors contributing to the problem?
- 22. How can progress or success in solving the problem be measured or evaluated?
- 23. What indicators or metrics can be used?

2024-02-14 20:06:41.595719 | Starting ADAPT Phase
2024-02-14 20:07:22.370683 | ADAPT Phase Complete
2024-02-14 20:07:22.370713 | Task-specific Reasoning Module verbiage:
- Clarify the units and equalities: Both items are measured in pounds, which means they weigh the same amount by definition; a pound is a unit of weight or mass.
- Understand the context and potential misconceptions: The question is a classic example of a trick question, as some may think lead is denser and therefore heavier, but in reality, both weigh the same.
- Address the physical properties without affecting the weight: While lead is denser than feathers, density does not affect the weight for the same volume; both weigh one pound, regardless of their density.

2024-02-14 20:07:22.370720 | Starting IMPLEMENT Phase
2024-02-14 20:08:46.753718 | IMPLEMENT Phase Complete
2024-02-14 20:08:46.753805 | Reasoning Structure:
{
  "Reasoning Structure": {
    "Step 1: Understand the Weighing Units": {
      "Description": "Both items are measured in the same unit.",
      "Action": "Confirm that both 'pound of feathers' and 'pound of lead' use the same unit (pound).",
      "Unit": "pound"
    },
    "Step 2: Recognize the Trick Question": {
      "Description": "The question is designed to prompt a misconception.",
      "Action": "Identify that some might think density affects weight, but it doesn't for the same volume.",
      "Misconception": "Density affects weight"
    },
    "Step 3: Define Density": {
      "Description": "Density is mass per unit volume.",
      "Action": "Explain that density does not change the weight for the same amount of mass.",
      "Density": "mass / volume"
    },
    "Step 4: Compare Weights Without Density": {
      "Description": "Focus on the weight aspect.",
      "Action": "Since both weigh one pound, compare them without considering density.",
      "Comparison": "Equal weight"
    },
    "Step 5: Conclusion": {
      "Description": "Both weigh the same amount.",
      "Action": "Both a pound of feathers and a pound of lead weigh the same.",
      "Final Conclusion": ""
    },
    "FINAL_ANSWER": ""
  }
}
2024-02-14 20:08:46.753961 | Starting to Solve Problem using Reasoning Structure
2024-02-14 20:10:07.283210 | Problem Solved
Completed Reasoning Structure:
{
  "Reasoning Structure": {
    "Step 1: Understand the Weighing Units": {
      "Description": "Both items are measured in the same unit.",
      "Action": "Confirm that both 'pound of feathers' and 'pound of lead' use the same unit (pound).",
      "Unit": "pound"
    },
    "Step 2: Recognize the Trick Question": {
      "Description": "The question is designed to prompt a misconception.",
      "Action": "Identify that some might think density affects weight, but it doesn't for the same volume.",
      "Misconception": "Density affects weight"
    },
    "Step 3: Define Density": {
      "Description": "Density is mass per unit volume.",
      "Action": "Explain that density does not change the weight for the same amount of mass.",
      "Density": "mass / volume"
    },
    "Step 4: Compare Weights Without Density": {
      "Description": "Focus on the weight aspect.",
      "Action": "Since both weigh one pound, compare them without considering density.",
      "Comparison": "Equal weight"
    },
    "Step 5: Conclusion": {
      "Description": "Both weigh the same amount.",
      "Action": "Both a pound of feathers and a pound of lead weigh the same.",
      "Final Conclusion": "A pound of feathers and a pound of lead weigh equally."
    },
    "FINAL_ANSWER": "A pound of feathers and a pound of lead weigh equally."
  }
}
2024-02-14 20:10:07.283302 | Solution has been found.


ANSWER: A pound of feathers and a pound of lead weigh equally.
```

#### Mixed Inference mode via the CLI - Qwen1.5 solving a prompt with a Gemini Pro Reasoning Structure

```bash
autologic mixed \
--solve-temp 0.2 \
--solve-gguf_path $HOME/hf/qwen1_5-72b-chat-q5_k_m.gguf \
--solve-threads 12 \
--solve-layers -1 \
--solve-format chatml \
--solve-model_type local \
--discover-model_type gemini \
--discover-temp 0.2 \
--prompt "What weighs more? A pound of feathers or a pound of lead?" \
--verbose  \
--retries 5
```

#### Mixed Inference mode via the CLI - Qwen1.5 solving a prompt with a GPT-4 Reasoning Structure

```bash
autologic mixed \
--solve-temp 0.2 \
--solve-gguf_path $HOME/hf/qwen1_5-72b-chat-q5_k_m.gguf \
--solve-threads 12 \
--solve-layers -1 \
--solve-format chatml \
--solve-model_type local \
--discover-model_type openai \
--discover-model_name 'gpt-4-turbo-preview' \
--discover-temp 0.2 \
--prompt "What weighs more? A pound of feathers or a pound of lead?" \
--verbose  \
--retries 5
```


#### Interactive CLI Mode


The CLI can be invoked without passing a `--prompt` argument to enter an interactive mode. This allows conveniently sending multi-line prompts for the model to reason over:

```bash
% autologic gemini --temp 0.2

Entering Interactive Mode: CTRL + C to send multi-line input. CTRL + D to exit the program.
> 
```

After starting interactive mode, you will be prompted to enter text:

```bash
Entering Interactive Mode: CTRL + C to send multi-line input. CTRL + D to exit the program.
> What weighs more?  
> A pound of feathers?
> Or a pound of lead?
> 
```
Enter your reasoning prompt spanning multiple lines.

To submit the prompt, press **CTRL + C** to send the input while staying in interactive mode.

This will trigger the model to start reasoning over the prompt:

```
> What weighs more?
> a pound of feathers or 
> a pound of lead?
> ^C
Thinking...
2024-02-14 20:17:38.307588 | Starting SELECT Phase
2024-02-14 20:17:42.403366 | SELECT Phase Complete
2024-02-14 20:17:42.403473 | Starting ADAPT Phase
2024-02-14 20:17:45.680310 | ADAPT Phase Complete
2024-02-14 20:17:45.680418 | Starting IMPLEMENT Phase
2024-02-14 20:17:52.029561 | IMPLEMENT Phase Complete
2024-02-14 20:17:52.029897 | Starting to Solve Problem using Reasoning Structure
2024-02-14 20:17:57.064476 | Solution has been found.


ANSWER: A pound of feathers weighs the same as a pound of lead.
```

Once you are done, send the **CTRL + D** signal to exit interactive mode and quit the CLI program.

```
> Goodbye!
```

This interactive workflow lets you conveniently test long, complex reasoning prompts without having to wrap the entire prompt in quotes or escape newlines.

## TODO
- Expose information on Reasoning Structure and Reasoning Module selection via the Python API. Currently this information is only visible with `verbose=True` in the CLI and API.
- Add support for other prompt formats (Llama2, airoboros, etc.)
- Add REST support via Flask

            
