<p align="center">
<a href="https://www.getdynamiq.ai/"><img src="https://github.com/dynamiq-ai/dynamiq/blob/main/docs/img/Dynamiq_Logo_Universal_Github.png?raw=true" alt="Dynamiq"></a>
</p>
<p align="center">
<em>Dynamiq is an orchestration framework for agentic AI and LLM applications</em>
</p>
<p align="center">
<a href="https://getdynamiq.ai">
<img src="https://img.shields.io/website?label=website&up_message=online&url=https%3A%2F%2Fgetdynamiq.ai" alt="Website">
</a>
<a href="https://github.com/dynamiq-ai/dynamiq/releases" target="_blank">
<img src="https://img.shields.io/github/release/dynamiq-ai/dynamiq" alt="Release Notes">
</a>
<a href="#" target="_blank">
<img src="https://img.shields.io/badge/Python-3.10%2B-brightgreen.svg" alt="Python 3.10+">
</a>
<a href="https://github.com/dynamiq-ai/dynamiq/blob/main/LICENSE" target="_blank">
<img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
</a>
<a href="https://dynamiq-ai.github.io/dynamiq" target="_blank">
<img src="https://img.shields.io/website?label=documentation&up_message=online&url=https%3A%2F%2Fdynamiq-ai.github.io%2Fdynamiq" alt="Documentation">
</a>
</p>
Welcome to Dynamiq! 🤖
Dynamiq is your all-in-one Gen AI framework, designed to streamline the development of AI-powered applications. It specializes in orchestrating retrieval-augmented generation (RAG) pipelines and large language model (LLM) agents.
## Getting Started
Ready to dive in? Here's how you can get started with Dynamiq:
### Installation
First, let's get Dynamiq installed. You'll need Python 3.10 or newer, so make sure that's set up on your machine. Then run:
```sh
pip install dynamiq
```
Or install from source with Poetry:
```sh
git clone https://github.com/dynamiq-ai/dynamiq.git
cd dynamiq
poetry install
```
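The examples below pass provider keys as `"$OPENAI_API_KEY"`-style placeholders. Assuming you keep the real keys in environment variables, you can export them in a POSIX shell like this (the values are placeholders, not real keys):
```sh
export OPENAI_API_KEY="sk-..."   # used by most examples
export E2B_API_KEY="..."         # E2B Code Interpreter examples
export PINECONE_API_KEY="..."    # RAG examples
export SCALESERP_API_KEY="..."   # web-search agent example
```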
## Documentation
For more examples and detailed guides, please refer to our [documentation](https://dynamiq-ai.github.io/dynamiq).
## Examples
### Simple LLM Flow
Here's a simple example to get you started with Dynamiq:
```python
from dynamiq.nodes.llms.openai import OpenAI
from dynamiq.connections import OpenAI as OpenAIConnection
from dynamiq import Workflow
from dynamiq.prompts import Prompt, Message
# Define the prompt template for translation
prompt_template = """
Translate the following text into English: {{ text }}
"""
# Create a Prompt object with the defined template
prompt = Prompt(messages=[Message(content=prompt_template, role="user")])
# Setup your LLM (Large Language Model) Node
llm = OpenAI(
    id="openai",  # Unique identifier for the node
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),  # Connection using API key
    model="gpt-4o",  # Model to be used
    temperature=0.3,  # Sampling temperature for the model
    max_tokens=1000,  # Maximum number of tokens in the output
    prompt=prompt,  # Prompt to be used for the model
)
# Create a Workflow object
workflow = Workflow()
# Add the LLM node to the workflow
workflow.flow.add_nodes(llm)
# Run the workflow with the input data
result = workflow.run(
    input_data={
        "text": "Hola Mundo!"  # Text to be translated
    }
)
# Print the result of the translation
print(result.output)
```
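`result.output` is keyed by node id (the RAG retrieval example below uses the same pattern); a minimal sketch for extracting just the generated text, assuming that output shape:
```python
# Assumes the node-id-keyed output shape used in the RAG retrieval example below.
content = result.output.get(llm.id, {}).get("output", {}).get("content")
print(content)  # e.g. "Hello World!"
```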
### Simple ReAct Agent
An agent with access to the E2B Code Interpreter, capable of solving complex coding tasks.
```python
from dynamiq.nodes.llms.openai import OpenAI
from dynamiq.connections import OpenAI as OpenAIConnection, E2B as E2BConnection
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool
# Initialize the E2B tool
e2b_tool = E2BInterpreterTool(
    connection=E2BConnection(api_key="$E2B_API_KEY")
)
# Setup your LLM
llm = OpenAI(
    id="openai",
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.3,
    max_tokens=1000,
)
# Create the ReAct agent
agent = ReActAgent(
    name="react-agent",
    llm=llm,
    tools=[e2b_tool],
    role="Senior Data Scientist",
    max_loops=10,
)
# Run the agent with an input
result = agent.run(
    input_data={
        "input": "Add the first 10 numbers and tell if the result is prime.",
    }
)
print(result.output.get("content"))
```
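Agents are regular workflow nodes (note the `dynamiq.nodes.agents` module path), so the same agent can presumably also run inside a `Workflow`. A minimal sketch, assuming the standard node wiring shown in the first example:
```python
from dynamiq import Workflow

# Sketch: run the ReAct agent as a workflow node instead of standalone.
wf = Workflow()
wf.flow.add_nodes(agent)  # reuse the agent defined above
result = wf.run(
    input_data={"input": "Add the first 10 numbers and tell if the result is prime."}
)
print(result.output)  # workflow outputs are keyed by node id
```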
### Multi-agent orchestration
```python
from dynamiq.connections import (
    OpenAI as OpenAIConnection,
    ScaleSerp as ScaleSerpConnection,
    E2B as E2BConnection,
)
from dynamiq.nodes.llms import OpenAI
from dynamiq.nodes.agents.orchestrators.adaptive import AdaptiveOrchestrator
from dynamiq.nodes.agents.orchestrators.adaptive_manager import AdaptiveAgentManager
from dynamiq.nodes.agents.react import ReActAgent
from dynamiq.nodes.agents.reflection import ReflectionAgent
from dynamiq.nodes.tools.e2b_sandbox import E2BInterpreterTool
from dynamiq.nodes.tools.scale_serp import ScaleSerpTool
# Initialize tools
python_tool = E2BInterpreterTool(
    connection=E2BConnection(api_key="$E2B_API_KEY")
)
search_tool = ScaleSerpTool(
    connection=ScaleSerpConnection(api_key="$SCALESERP_API_KEY")
)
# Initialize LLM
llm = OpenAI(
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.1,
)
# Define agents
coding_agent = ReActAgent(
    name="coding-agent",
    llm=llm,
    tools=[python_tool],
    role=(
        "Expert agent with coding skills. "
        "Goal is to provide the solution to the input task "
        "using Python software engineering skills."
    ),
    max_loops=15,
)
planner_agent = ReflectionAgent(
    name="planner-agent",
    llm=llm,
    role=(
        "Expert agent with planning skills. "
        "Goal is to analyze complex requests "
        "and provide a detailed action plan."
    ),
)
search_agent = ReActAgent(
    name="search-agent",
    llm=llm,
    tools=[search_tool],
    role=(
        "Expert agent with web search skills. "
        "Goal is to provide the solution to the input task "
        "using web search and summarization skills."
    ),
    max_loops=10,
)
# Initialize the adaptive agent manager
agent_manager = AdaptiveAgentManager(llm=llm)
# Create the orchestrator
orchestrator = AdaptiveOrchestrator(
    name="adaptive-orchestrator",
    agents=[coding_agent, planner_agent, search_agent],
    manager=agent_manager,
)
# Define the input task
input_task = (
    "Use coding skills to gather data about Nvidia and Intel stock prices for the last 10 years, "
    "calculate the average per year for each company, and create a table. Then craft a report "
    "and add a conclusion: what would have been better if I had invested $100 ten years ago?"
)
# Run the orchestrator
result = orchestrator.run(
    input_data={"input": input_task},
)
# Print the result
print(result.output.get("content"))
```
### RAG - document indexing flow
This workflow takes input PDF files, pre-processes them, converts them to vector embeddings, and stores them in the Pinecone vector database.
The example provided is for an existing index in Pinecone. You can find examples for index creation on the `docs/tutorials/rag` page.
```python
from io import BytesIO
from dynamiq import Workflow
from dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection
from dynamiq.nodes.converters import PyPDFConverter
from dynamiq.nodes.splitters.document import DocumentSplitter
from dynamiq.nodes.embedders import OpenAIDocumentEmbedder
from dynamiq.nodes.writers import PineconeDocumentWriter
rag_wf = Workflow()
# PyPDF document converter
converter = PyPDFConverter(document_creation_mode="one-doc-per-page")
rag_wf.flow.add_nodes(converter) # add node to the DAG
# Document splitter
document_splitter = (
    DocumentSplitter(
        split_by="sentence",
        split_length=10,
        split_overlap=1,
    )
    .inputs(documents=converter.outputs.documents)  # map converter output to this node's input
    .depends_on(converter)
)
rag_wf.flow.add_nodes(document_splitter)
# OpenAI vector embeddings
embedder = (
    OpenAIDocumentEmbedder(
        connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
        model="text-embedding-3-small",
    )
    .inputs(documents=document_splitter.outputs.documents)
    .depends_on(document_splitter)
)
rag_wf.flow.add_nodes(embedder)
# Pinecone vector storage
vector_store = (
    PineconeDocumentWriter(
        connection=PineconeConnection(api_key="$PINECONE_API_KEY"),
        index_name="default",
        dimension=1536,
    )
    .inputs(documents=embedder.outputs.documents)
    .depends_on(embedder)
)
rag_wf.flow.add_nodes(vector_store)
# Prepare input PDF files
file_paths = ["example.pdf"]
files = []
for path in file_paths:
    with open(path, "rb") as f:  # read and close each file promptly
        files.append(BytesIO(f.read()))
input_data = {
    "files": files,
    "metadata": [{"filename": path} for path in file_paths],
}
# Run RAG indexing flow
rag_wf.run(input_data=input_data)
```
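The same input shape scales to many files; a minimal sketch for indexing a whole folder of PDFs with the standard library (the `invoices/` directory is illustrative):
```python
from glob import glob
from io import BytesIO

file_paths = sorted(glob("invoices/*.pdf"))  # hypothetical folder of PDFs
files = []
for path in file_paths:
    with open(path, "rb") as f:
        files.append(BytesIO(f.read()))

rag_wf.run(input_data={
    "files": files,
    "metadata": [{"filename": path} for path in file_paths],
})
```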
### RAG - document retrieval flow
A simple RAG retrieval flow that searches for relevant documents and answers the original user question using the retrieved documents.
```python
from dynamiq import Workflow
from dynamiq.connections import OpenAI as OpenAIConnection, Pinecone as PineconeConnection
from dynamiq.nodes.embedders import OpenAITextEmbedder
from dynamiq.nodes.retrievers import PineconeDocumentRetriever
from dynamiq.nodes.llms import OpenAI
from dynamiq.prompts import Message, Prompt
# Initialize the RAG retrieval workflow
retrieval_wf = Workflow()
# Shared OpenAI connection
openai_connection = OpenAIConnection(api_key="$OPENAI_API_KEY")
# OpenAI text embedder for query embedding
embedder = OpenAITextEmbedder(
    connection=openai_connection,
    model="text-embedding-3-small",
)
retrieval_wf.flow.add_nodes(embedder)
# Pinecone document retriever
document_retriever = (
    PineconeDocumentRetriever(
        connection=PineconeConnection(api_key="$PINECONE_API_KEY"),
        index_name="default",
        dimension=1536,
        top_k=5,
    )
    .inputs(embedding=embedder.outputs.embedding)
    .depends_on(embedder)
)
retrieval_wf.flow.add_nodes(document_retriever)
# Define the prompt template
prompt_template = """
Please answer the question based on the provided context.
Question: {{ query }}
Context:
{% for document in documents %}
- {{ document.content }}
{% endfor %}
"""
# OpenAI LLM for answer generation
prompt = Prompt(messages=[Message(content=prompt_template, role="user")])
answer_generator = (
    OpenAI(
        connection=openai_connection,
        model="gpt-4o",
        prompt=prompt,
    )
    .inputs(
        documents=document_retriever.outputs.documents,
        query=embedder.outputs.query,
    )  # documents come from the retriever, the query from the embedder
    .depends_on([document_retriever, embedder])
)
retrieval_wf.flow.add_nodes(answer_generator)
# Run the RAG retrieval flow
question = "What are the line items provided in the invoice?"
result = retrieval_wf.run(input_data={"query": question})
answer = result.output.get(answer_generator.id).get("output", {}).get("content")
print(answer)
```
### Simple Chatbot with Memory
A simple chatbot that uses the `Memory` module to store and retrieve conversation history.
```python
from dynamiq.connections import OpenAI as OpenAIConnection
from dynamiq.memory import Memory
from dynamiq.memory.backends.in_memory import InMemory
from dynamiq.nodes.agents.simple import SimpleAgent
from dynamiq.nodes.llms import OpenAI
AGENT_ROLE = "helpful assistant, goal is to provide useful information and answer questions"
llm = OpenAI(
    connection=OpenAIConnection(api_key="$OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.1,
)
memory = Memory(backend=InMemory())
agent = SimpleAgent(
    name="Agent",
    llm=llm,
    role=AGENT_ROLE,
    id="agent",
    memory=memory,
)
def main():
    print("Welcome to the AI Chat! (Type 'exit' to end)")
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        response = agent.run({"input": user_input})
        response_content = response.output.get("content")
        print(f"AI: {response_content}")


if __name__ == "__main__":
    main()
```
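Since each turn is stored in `Memory`, later turns can draw on earlier ones; a quick illustrative check (the exact wording of the reply will vary):
```python
# Illustrative: the second turn is answerable only via stored conversation history.
agent.run({"input": "My name is Ada."})
response = agent.run({"input": "What is my name?"})
print(response.output.get("content"))  # expected to recall "Ada" from memory
```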
## Contributing
We love contributions! Whether it's bug reports, feature requests, or pull requests, head over to our [CONTRIBUTING.md](CONTRIBUTING.md) to see how you can help.
## License
Dynamiq is open-source and available under the [Apache 2.0 License](LICENSE).
Happy coding! 🚀