# Maxim SDK
<div style="display: flex; justify-content: center; align-items: center;margin-bottom:20px;">
<img src="https://cdn.getmaxim.ai/third-party/sdk.png">
</div>
This is the Python SDK for enabling Maxim observability. [Maxim](https://www.getmaxim.ai?ref=npm) is an enterprise-grade evaluation and observability platform.
## How to integrate
### Install
```
pip install maxim-py
```
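This README documents v2.0.0 of the package; to pin that exact version:
```
pip install maxim-py==2.0.0
```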
### Initialize Maxim logger
```python
from maxim.maxim import Config, Maxim

# apiKey is your Maxim API key
maxim = Maxim(Config(apiKey=apiKey))
```
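In practice you may want to load the key from the environment and, per the v1.5.10 note in the changelog below, opt into `raise_exceptions` so SDK errors surface during development. A minimal sketch, assuming `raise_exceptions` is a `Config` field and using an illustrative `MAXIM_API_KEY` variable name:
```python
import os

from maxim.maxim import Config, Maxim

# MAXIM_API_KEY is an illustrative variable name; raise_exceptions is
# assumed to be a Config field (off by default, see v1.5.10 below)
maxim = Maxim(Config(apiKey=os.environ["MAXIM_API_KEY"], raise_exceptions=True))
```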
### Start sending traces
```python
from uuid import uuid4

from maxim.logger.logger import LoggerConfig
# TraceConfig and GenerationConfig ship with the same maxim.logger package

# Initializing the logger against a log repository
logger = maxim.logger(LoggerConfig(id="log-repository-id"))
# Initializing a new trace
trace = logger.trace(TraceConfig(id="trace-id", name="trace-name", tags={"key": "value"}))
# Creating the generation
generation = trace.generation(GenerationConfig(
    id=str(uuid4()),
    model="text-davinci-002",
    provider="azure",
    model_parameters={"temperature": 0.7, "max_tokens": 100},
))
# Making the LLM call (client is your configured LLM client, e.g. Azure OpenAI)
completion = client.completions.create(
    model="text-davinci-002",
    prompt="Translate the following English text to French: 'Hello, how are you?'",
    max_tokens=100,
    temperature=0.7,
)
# Recording the completion on the generation
generation.result(completion)
# Ending the trace
trace.end()
```
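To control what the logs dashboard displays for a trace, the SDK also exposes `trace.set_input` and `trace.set_output` (added in v1.2.6, see the changelog). A minimal sketch, assuming both accept plain strings:
```python
# Override what the dashboard shows as this trace's input and output
# (assumes set_input/set_output accept plain strings)
trace.set_input("Translate 'Hello, how are you?' to French")
trace.set_output(completion.choices[0].text)
```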
## Integrations with other frameworks
### Langchain
We have built-in Langchain tracer support.
```python
from uuid import uuid4

from langchain_openai import ChatOpenAI

# Import path for the tracer may vary across SDK versions
from maxim.logger.langchain import MaximLangchainTracer

logger = maxim.logger(LoggerConfig(id=repoId))
trace_id = str(uuid4())
trace = logger.trace(TraceConfig(id=trace_id, name="pre-defined-trace"))

# Chat-style messages need a chat model class such as ChatOpenAI
model = ChatOpenAI(callbacks=[MaximLangchainTracer(logger)], api_key=openAIKey)
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
model.invoke(messages, config={
    "metadata": {
        "maxim": {
            "trace_id": trace_id,
            "generation_name": "get-answer",
            "generation_tags": {
                "test": "123"
            }
        }
    }
})
trace.event(id=str(uuid4()), name="test event")
trace.end()
```
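The tracer does not have to be bound at model construction. Langchain's standard per-call config also accepts callbacks, so the tracer can be attached only where it is needed:
```python
# Alternative: pass the Maxim tracer per invocation via Langchain's
# standard callbacks config instead of binding it to the model
model = ChatOpenAI(api_key=openAIKey)
model.invoke(
    messages,
    config={
        "callbacks": [MaximLangchainTracer(logger)],
        "metadata": {"maxim": {"trace_id": trace_id}},
    },
)
```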
#### Langchain module compatibility
| | Anthropic | Bedrock Anthropic | Bedrock Meta | OpenAI | Azure |
|---|---|---|---|---|---|
| Chat (0.3.x) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chat (0.1.x) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Tool call (0.3.x) | ✅ | ✅ | ❓ | ✅ | ✅ |
| Tool call (0.1.x) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chain (via LLM) (0.3.x) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chain (via LLM) (0.1.x) | ✅ | ✅ | ✅ | ✅ | ✅ |
| Streaming (0.3.x) | ✅ | ✅ | ✅ | ✅ | ✳️ |
| Streaming (0.1.x) | ✳️ | ✳️ | ✳️ | ✳️ | ✳️ |
| Agent (0.3.x) | ⛔️ | ⛔️ | ⛔️ | ⛔️ | ⛔️ |
| Agent (0.1.x) | ⛔️ | ⛔️ | ⛔️ | ⛔️ | ⛔️ |

✳️ Works, but token usage is not reported because Langchain does not surface it for these streaming responses.
> Please reach out to us if you need support for any other package, provider, or class combination.
## Version changelog
### v2.0.0
- Feat: Jinja 2.0 variables support
### v1.5.13
- Fix: Fixes an issue where the model was None for some prompt versions.
### v1.5.12
- Fix: Fixes a race-condition edge case when fetching prompts, prompt chains, and folders.
### v1.5.11
- Fix: Fixes import of dataclasses
### v1.5.10
- Feat: Adds new config called `raise_exceptions`. Unless this is set to `True`, the SDK will not raise any exceptions.
### v1.5.9
- Chore - No longer raises an alert when the repo is not found
### v1.5.8
- Fix - Removes a no-op command for retrieval
- Fix - Fixes the retrieval output command
### v1.5.7
- Feature - Supports Langchain 0.1.x
### v1.5.6
- Improvement - Improved langchain support
### v1.5.5
- Improvement - Improves log writer cleanup for quicker returns.
### v1.5.4
- Improvement - Improved filesystem access checks.
- Improvement - Fixes threading locks for periodic syncs in Python 3.9.
### v1.5.3
- Improvement - Adds Lambda environment support for deployments with no filesystem access.
### v1.5.2
- Feature - Adds support for the new `langchain_openai.AzureChatOpenAI` class in the Langchain tracer
### v1.5.1
- Fix - Adds Python 3.9 compatibility
### v1.5.0
- Improvement - Updates the connection pool to use a session that enforces reconnects before making API calls.
### v1.4.5
- Improvement - Adds backoff retries to failed REST calls.
### v1.4.4
- Improvement - langchain becomes an optional dependency
### v1.4.3
- Fix - connection pooling for network calls.
- Fix - connection close issue.
### v1.4.2 (🚧 Yanked)
- Fix - connection close issue
### v1.4.1
- Adds validation for provider in generation
### v1.4.0
- Now `generation.result` accepts:
  - OpenAI chat completion object
  - Azure OpenAI chat completion object
  - Langchain LLMResult and AIMessage objects
### v1.3.4
- Fixes `message_parser`
### v1.3.2
- Fixes the Langchain utility function that parses AIMessage into a Maxim logger completion result
### v1.3.1
- Adds tool call parsing support for Langchain tracer
### v1.3.0
- Adds support for ChatCompletion in generations
- Adds type safety for retrieval results
### v1.2.7
- Fixes a bug where input sent via the trace config was overridden with None
### v1.2.6
- Adds `trace.set_input` and `trace.set_output` methods to control what to show in logs dashboard
### v1.2.5
- Removes a no-op command while creating spans
- Minor bug fixes
### v1.2.1
- Fixed MaximLangchainTracer error logging flow.
### v1.2.0
- Adds langchain support
- Adds local parsers to validate payloads on client side
### v1.1.0
- Minor bug fixes around log writer cleanup
### v1.0.0
- Public release