synth_machine

Name: synth_machine
Version: 0.6.1 (PyPI)
Author: David
Requires-Python: <3.13,>=3.10
License: GPL-3.0-or-later
Upload time: 2024-07-08 12:39:02
# Synth Machine

[![python](https://img.shields.io/badge/Python-3.12-3776AB.svg?style=flat&logo=python&logoColor=white)](https://www.python.org)
[![tests](https://github.com/HireSynth/synth_machine/actions/workflows/tests.yaml/badge.svg)](https://github.com/HireSynth/synth_machine/actions/workflows/tests.yaml)
[![pre-commit](https://github.com/HireSynth/synth_machine/actions/workflows/precommit.yaml/badge.svg)](https://github.com/HireSynth/synth_machine/actions/workflows/precommit.yaml)


**AI Agents are State Machines not DAGs**

Synth Machine lets users create and run AI agent state machines (`Synth`) by providing a `SynthDefinition` that defines a structured AI workflow.  
State machines are a powerful construct because they let a domain expert deconstruct a problem into sets of states and transitions.  
Transitions between states can then call an LLM, a tool, a data process, or a mixture of outputs.  

### Installation

#### API Models
Install the package:
`pip install "synth_machine[openai,togetherai,anthropic]"`
or
`poetry add "synth_machine[openai,togetherai,anthropic]"`

Then set up the environment keys for your API providers:

```
# You only need to set the API providers you want to use.
export OPENAI_API_KEY=secret
export ANTHROPIC_API_KEY=secret
export TOGETHER_API_KEY=secret
```

#### (soon) Local Models
`pip install "synth_machine[vllm,llamacpp]"`
or
`poetry add "synth_machine[vllm,llamacpp]"`

You will likely need to set up CUDA, vLLM, or llama.cpp for local use.

Helpful links:
- https://docs.vllm.ai/en/latest/getting_started/installation.html
- https://developer.nvidia.com/cuda-toolkit
- https://github.com/ggerganov/llama.cpp

### Define a Synth
```
agent = Synth(
    config=synth_definition,  # SynthDefinition: states, transitions and prompts
    tools=[],  # list of tools the agent will use
    memory={},  # existing memory, added on top of any model_config.initial_memory
    rag_runner=None,  # optional RAG integration for your agent
    postprocess_functions=[],  # any glue-code functions
    store=ObjectStore(":memory:"),  # files created by tools automatically go to your object store
)
```

The `SynthDefinition` spec can be found in [SynthDefinition Docs](./synth_definition.md) or [synth_machine/synth_definition.py](synth_machine/synth_definition.py). The Pydantic BaseModels that make up `SynthDefinition` are the most accurate representation of a `Synth`.  
We expect the specification to receive updates between major versions.

### Agent state and possible triggers

**At any point, you can check the current state and next triggers**
```
# Check state
agent.current_state()

# Triggers
agent.interfaces_for_available_triggers()
```


### Run a Synth


#### Batch
```
await agent.trigger(
    "[trigger_name]",
    params={
        "input_1": "hello"
    }
)

```
A batch trigger call returns every output variable generated during that transition.
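
For example (a minimal sketch; the `summarise` trigger and the returned keys are hypothetical and depend on your `SynthDefinition`):
```
outputs = await agent.trigger(
    "summarise",
    params={"input_1": "hello"}
)
# outputs -> {"summary": "..."}  (keys match the transition's declared outputs)
```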

#### Streaming
```
await agent.streaming_trigger(
    "[trigger_name]",
    params={
        "input_1": "hello"
    }
)
```

Streaming responses yield any of the following events:
```
class YieldTasks(StrEnum):
    CHUNK = "CHUNK"
    MODEL_CONFIG = "MODEL_CONFIG"
    SET_MEMORY = "SET_MEMORY"
    SET_ACTIVE_OUTPUT = "SET_ACTIVE_OUTPUT"

```

- `CHUNK` : LLM generations are streamed chunk by chunk, one token at a time.
- `MODEL_CONFIG` : Yields which executor is currently in use, for any provider-specific frontend interfaces.
- `SET_MEMORY` : Sends events setting new memory variables.
- `SET_ACTIVE_OUTPUT` : Yields the current transition output trigger.

This lets users experiment with `trigger`, then integrate `streaming_trigger` to stream LLM generations to users in real time via Server-Sent Events (SSE).
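
A minimal consumption sketch, assuming `streaming_trigger` is an async generator of the events listed above (the exact event payload shape is not documented here):
```
async for event in agent.streaming_trigger(
    "summarise",  # hypothetical trigger name
    params={"input_1": "hello"}
):
    # Forward each event to the client, e.g. as an SSE `data:` line.
    print(event)
```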

### LLMs

We offer multiple executors to generate local or API-driven LLM chat completions.

#### API Models
- `openai` : https://openai.com/api/pricing/
- `togetherai` : https://docs.together.ai/docs/inference-models
- `anthropic` : https://docs.anthropic.com/en/docs/models-overview
- (soon) `google` : https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/overview

#### Local (soon)
- `VLLM` : https://github.com/vllm-project/vllm
- `Llama-CPP` : https://github.com/ggerganov/llama.cpp

#### `Model Config`
You can specify the provider and model either in `default-model-config` at the synth base or in `model_config` on a transition output.

```
ModelConfig:
  ...
  executor: [openai|togetherai|anthropic|vllm|llamacpp]
  llm_name: [model_name]
```
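
For example, a transition output could pin a specific provider and model like this (a hedged sketch; the `outputs`/`key` shape follows the transition config shown later in this README, and the model name is illustrative):
```
outputs:
  - key: summary
    model_config:
      executor: anthropic
      llm_name: claude-3-haiku-20240307
```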

### Memory

Agent memory is a dictionary containing all interim variables created in previous states, along with human / system inputs.

```
agent.memory
# -> {
#   "[memory_key]": [memory_value]
# }
```

### Tools

Postprocess functions should only be used for basic glue code; all major functionality should be built into Tools.

#### Tools are REST APIs and can be added by providing an OpenAPI JSON schema

Go to `"./tools/tofuTool/api.py` to view the functionality.

**Start API**
```
cd tools/tofuTool
poetry install
poetry run uvicorn api:app --port=5001 --reload

```

**Retrieve API spec**
```
curl -X GET http://localhost:5001/openapi.json > openapi_schema.json
```
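
To hand the schema to a Tool, load it from the saved file (a minimal sketch producing the `tool_spec` variable used below):
```
import json

with open("openapi_schema.json") as f:
    tool_spec = json.load(f)
```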

**Define Tool**

You can define a Tool with only a name, the API endpoint, and the tool's OpenAPI schema.
```
tofu_tool = Tool(
    name="tofu_tool",
    api_endpoint="http://localhost:5001",
    api_spec=tool_spec
)
```
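
The tool can then be passed to the agent via the `tools` argument shown earlier:
```
agent = Synth(
    ...
    tools=[tofu_tool]
)
```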

### Synth Machine RAG

Retrieval augmented generation is a powerful technique to improve LLM responses by providing examples or excerpts semantically similar to the material the LLM is attempting to generate.

`synth_machine` is flexible: as long as you inherit from `synth_machine.RAG` and implement:
- `embed(documents: List[str])` and
- `query(prompt: str, rag_config: Optional[synth_machine.RAGConfig])`

it is easy to integrate multiple providers and vector databases. Over time there will be supported and community RAG implementations across a wide variety of embedding providers and vector databases.

#### RAG Example Qdrant & FastEmbed
The following RAG class is ideal for experimenting with local RAG setups on CPU. 
```
pip install qdrant-client fastembed
```
**Define RAG class**
```
from typing import List, Optional

from synth_machine import RAGConfig  # assumed import path, per `synth_machine.RAGConfig` above
from synth_machine.rag import RAG
from qdrant_client import AsyncQdrantClient
from qdrant_client.models import Distance, VectorParams, PointStruct
from fastembed import TextEmbedding


class Qdrant(RAG):
    """
    VectorDB: Qdrant - https://github.com/qdrant/qdrant
    Embeddings: FastEmbed - https://github.com/qdrant/fastembed

    This provides fast and lightweight on-device CPU embedding creation and
    similarity search using Qdrant in memory.
    """

    def __init__(
        self,
        collection_name: str,
        embedding_model: str = "BAAI/bge-small-en-v1.5",
        embedding_dimensions: int = 384,
        embedding_threads: int = -1,
        qdrant_location: str = ":memory:",
    ):
        self.embedding_model = TextEmbedding(
            model_name=embedding_model,
            threads=embedding_threads
        )
        self.embedding_dimensions = embedding_dimensions
        self.qdrant = AsyncQdrantClient(qdrant_location)
        self.collection_name = collection_name

    async def create_collection(self) -> bool:
        if await self.qdrant.collection_exists(self.collection_name):
            return True
        return await self.qdrant.create_collection(
            collection_name=self.collection_name,
            vectors_config=VectorParams(
                size=self.embedding_dimensions,  # matches the 'BAAI/bge-small-en-v1.5' model dimensions
                distance=Distance.COSINE
            )
        )

    async def embed(self, documents: List[str], metadata: Optional[List[dict]] = None):
        if metadata and len(documents) != len(metadata):
            raise ValueError("documents and metadata must be the same length")
        embedding_list = list(self.embedding_model.embed(documents))
        upsert_response = await self.qdrant.upsert(
            collection_name=self.collection_name,
            points=[
                PointStruct(
                    id=i,
                    vector=list(vector),
                    payload=metadata[i] if metadata else None  # guard against missing metadata
                )
                for i, vector in enumerate(embedding_list)
            ]
        )
        return upsert_response.status

    async def query(self, prompt: str, rag_config: RAGConfig) -> List[dict]:
        embedding = next(self.embedding_model.embed([prompt]))
        similar_responses = await self.qdrant.search(
            collection_name=self.collection_name,
            query_vector=list(embedding),
            limit=rag_config.n
        )
        return [point.payload for point in similar_responses]
```

**Now instantiate the Qdrant class and provide it when defining `Synth`.**

```
qdrant = Qdrant(collection_name="tofu_examples")
await qdrant.create_collection()

agent = Synth(
    ...
    rag_runner=qdrant
)
```
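
With the collection in place, you can seed it using the `embed` method defined above (the document text and metadata here are illustrative):
```
await qdrant.embed(
    documents=["Tofu is a protein-rich soy product."],
    metadata=[{"text": "Tofu is a protein-rich soy product."}]
)
```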

#### **Store**  

Tools can return a variety of different objects. Any file created by a tool will automatically go to your `agent.store`.
We use [ObjectStore](https://pypi.org/project/object-store-python/) for file storage, with `ObjectStore(":memory:")` as the default.

To retrieve a file: `agent.store.get(file_name)`

ObjectStore allows easy integration with:
- Local file store
- S3
- GCS
- Azure

#### Example GCS object store
```
from synth_machine.machine import ObjectStore

agent = Synth(
    ...
    store=ObjectStore("gs://[bucket_name]/[prefix]")
)
```

### User Defined Functions

Any custom functionality can be defined as a user defined function (UDF).  
These take `Synth.memory` as input and allow you to run custom functionality as part of the `synth-machine`.  

```
# Define postprocess function

from synth_machine.user_defined_functions import udf

@udf
def abc_postprocess(memory):
    ...
    return memory["variable_key"]

agent = Synth(
  ...
  user_defined_functions = {
    "abc": abc_postprocess
  }
)
```
 
#### Example UDF Transition Config
```
...
- key: trigger_udf
  inputs:
    - key: variable_key
  outputs:
    - key: example_udf
      udf: abc
```

**Note:** Any non-trivial functionality should be a tool, not a UDF.
