# maxim-py

- Version: 3.4.1
- Summary: Maxim Python Library
- Author: Maxim Engineering
- Keywords: python, prompts, logs, workflow, testing
- Upload time: 2025-02-20 10:59:30
# Maxim SDK

<div style="display: flex; justify-content: center; align-items: center;margin-bottom:20px;">
<img src="https://cdn.getmaxim.ai/third-party/sdk.png">
</div>

This is the Python SDK for enabling Maxim observability. [Maxim](https://www.getmaxim.ai?ref=npm) is an enterprise-grade evaluation and observability platform.

## How to integrate

### Install

```bash
pip install maxim-py
```

### Initialize Maxim logger

```python
from maxim import Maxim, Config

maxim = Maxim(Config(api_key=api_key))  # api_key: your Maxim API key
```
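
In practice you may not want to hard-code the key. A minimal sketch, assuming the key lives in an environment variable (the variable name `MAXIM_API_KEY` is our choice, not one mandated by the SDK):

```python
import os

from maxim import Maxim, Config

# Read the Maxim API key from the environment instead of hard-coding it.
# MAXIM_API_KEY is an assumed variable name, not required by the SDK.
maxim = Maxim(Config(api_key=os.environ["MAXIM_API_KEY"]))
```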

### Start sending traces

```python
from uuid import uuid4

from maxim.logger import LoggerConfig, TraceConfig, GenerationConfig

# Initializing the logger
logger = maxim.logger(LoggerConfig(id="log-repository-id"))
# Initializing a new trace
trace = logger.trace(TraceConfig(id="trace-id", name="trace-name", tags={"key": "value"}))
# Creating the generation
generation = trace.generation(GenerationConfig(
    id=str(uuid4()),
    model="text-davinci-002",
    provider="azure",
    model_parameters={"temperature": 0.7, "max_tokens": 100},
))
# Making the LLM call (client is your OpenAI/Azure OpenAI client)
completion = client.completions.create(
    model="text-davinci-002",
    prompt="Translate the following English text to French: 'Hello, how are you?'",
    max_tokens=100,
    temperature=0.7,
)
# Updating the generation with the result
generation.result(completion)
# Ending the trace
trace.end()
```
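
Since `trace.end()` is what closes and flushes the trace, it is worth guarding the LLM call so the trace is closed even when the call fails. A minimal sketch using only the calls shown above:

```python
from uuid import uuid4

from maxim.logger import TraceConfig, GenerationConfig

# Close the trace even if the LLM call raises.
trace = logger.trace(TraceConfig(id=str(uuid4()), name="guarded-trace"))
try:
    generation = trace.generation(GenerationConfig(
        id=str(uuid4()),
        model="text-davinci-002",
        provider="azure",
        model_parameters={"temperature": 0.7},
    ))
    completion = client.completions.create(
        model="text-davinci-002",
        prompt="Say hello in French.",
        temperature=0.7,
    )
    generation.result(completion)
finally:
    trace.end()
```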

## Integrations with other frameworks

### Langchain

We have built-in Langchain tracer support.

```python
from uuid import uuid4

from langchain_openai import ChatOpenAI

from maxim.logger import LoggerConfig, TraceConfig
from maxim.logger.langchain import MaximLangchainTracer

logger = maxim.logger(LoggerConfig(id="log-repository-id"))
trace_id = str(uuid4())
trace = logger.trace(TraceConfig(id=trace_id, name="pre-defined-trace"))

# Attach the Maxim tracer as a Langchain callback (chat model, since we send chat messages)
model = ChatOpenAI(callbacks=[MaximLangchainTracer(logger)], api_key=openai_api_key)
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
model.invoke(messages, config={
    "metadata": {
        "maxim": {
            "trace_id": trace_id,
            "generation_name": "get-answer",
            "generation_tags": {
                "test": "123"
            },
        }
    }
})
trace.event(id=str(uuid4()), name="test event")
trace.end()
```

#### Langchain module compatibility

|                         | Anthropic | Bedrock Anthropic | Bedrock Meta | OpenAI | Azure |
| ----------------------- | --------- | ----------------- | ------------ | ------ | ----- |
| Chat (0.3.x)            | ✅        | ✅                | ✅           | ✅     | ✅    |
| Chat (0.1.x)            | ✅        | ✅                | ✅           | ✅     | ✅    |
| Tool call (0.3.x)       | ✅        | ✅                | ❓           | ✅     | ✅    |
| Tool call (0.1.x)       | ✅        | ✅                | ✅           | ✅     | ✅    |
| Chain (via LLM) (0.3.x) | ✅        | ✅                | ✅           | ✅     | ✅    |
| Chain (via LLM) (0.1.x) | ✅        | ✅                | ✅           | ✅     | ✅    |
| Streaming (0.3.x)       | ✅        | ✅                | ✅           | ✅     | ✳️    |
| Streaming (0.1.x)       | ✳️        | ✳️                | ✳️           | ✳️     | ✳️    |
| Agent (0.3.x)           | ⛔️        | ⛔️                | ⛔️           | ⛔️     | ⛔️    |
| Agent (0.1.x)           | ⛔️        | ⛔️                | ⛔️           | ⛔️     | ⛔️    |

✅ Supported · ✳️ Supported, but Langchain does not report token usage · ❓ Unverified · ⛔️ Not supported

> Please reach out to us if you need support for any other package, provider, or class combination.

### Litellm (Beta)

| completion | acompletion | fallback | Prompt Management |
| ---------- | ----------- | -------- | ----------------- |
| ✅         | ✅          | ⛔️      | ⛔️               |
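
Under the hood, litellm integrations hang off litellm's callback hooks (the `pre_api_call` fix in v3.3.9 below refers to one of them). As a rough illustration of that wiring, and not the SDK's actual implementation, here is a hedged sketch that forwards successful litellm calls into a Maxim trace through litellm's `CustomLogger` interface; `MaximForwarder` is a hypothetical helper name:

```python
from uuid import uuid4

import litellm
from litellm.integrations.custom_logger import CustomLogger

from maxim.logger import GenerationConfig


class MaximForwarder(CustomLogger):  # hypothetical helper, not part of the SDK
    """Forward successful litellm completions into an existing Maxim trace."""

    def __init__(self, trace):
        self.trace = trace

    def log_success_event(self, kwargs, response_obj, start_time, end_time):
        # Record the call as a generation, then attach the result; litellm
        # responses mimic OpenAI's, which generation.result accepts (see v1.4.0).
        generation = self.trace.generation(GenerationConfig(
            id=str(uuid4()),
            model=kwargs.get("model", "unknown"),
            provider="openai",
        ))
        generation.result(response_obj)


litellm.callbacks = [MaximForwarder(trace)]
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```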

## Version changelog

### v3.4.1

- fix: Resolved duplicate serialization of metadata entries

### v3.4.0

- Breaking change: Prompt and prompt chain object properties now use snake_case
- fix: Prompt chain nodes are properly parsed in all cases

### v3.3.9

- fix: Fixes litellm `pre_api_call` message parsing

### v3.3.8

- fix: Updates the create test run API to use the v2 API

### v3.3.7

- fix: Marks a test run as failed if it raises an error at any point after being created on the platform.
- feat: Adds support for `context_to_evaluate` in `with_prompt_version_id` and `with_workflow_id` (passed as the second parameter), so you can choose any variable or dataset column as the context to evaluate, instead of only a dataset column via the `CONTEXT_TO_EVALUATE` data-structure mapping; see the sketch below.
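
A hedged sketch of that second parameter, assuming the test-run builder is reached via `maxim.create_test_run(name, in_workspace_id)` (treat the entry point and argument names as assumptions):

```python
# Hedged sketch: the second argument selects which variable or dataset column
# is used as the context to evaluate (the v3.3.7 feature described above).
test_run = (
    maxim.create_test_run(name="my-test-run", in_workspace_id="workspace-id")  # assumed entry point
    .with_workflow_id("workflow-id", "context_column")  # second arg: context_to_evaluate
)
```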

### v3.3.6

- fix: Fixes garbled message formatting when an invalid test run config is passed to the `TestRunBuilder`

### v3.3.5

- chore: The SDK now propagates system errors in formatted structures (specifically for test runs)

### v3.3.4

- fix: Adds missing dependencies to the requirements

### v3.3.3

- chore: minor bug fixes

### v3.3.2

- feat: Adds support for Gemini outputs
- feat: Adds local evaluator support for test runs

### v3.3.1

- chore: Litellm failure exceptions will be sent to the default logger.

### v3.3.0

- feat: Adds litellm support (Beta)

### v3.2.3

- fix: Fixes duplicate container IDs for the langchain tracer

### v3.2.2

- fix: Langgraph capture fixes
- chore: Adds missing docstrings

### v3.2.1

- fix: Adds support for `dict` as an output of the `yields_output` function during test runs.

### v3.2.0

- fix: Fixed dependency issues

### v3.1.0 (🚧 Yanked)

- feat: Adds new flow to trigger test runs via Python SDK
- fix: Minor bug fixes

### v3.0.1 [Breaking changes](https://www.getmaxim.ai/docs/sdk/observability/python/upgrading-to-v3)

- Beta release
- feat: New decorator support for tracing, langchain, and langgraph

### v3.0.0rc6

- feat: Adds a new decorator for langgraph: `@langgraph_agent`
- feat: Adds support for chains in langchain tracer
- fix: Some minor bug fixes

### v3.0.0rc5

- chore: Keeps the logger alive while the function call context is present

### v3.0.0rc4

- fix: Fixes automatic retrieval capture from vector DBs

### v3.0.0rc3

- fix: Fixes langchain_llm_call to handle chat models

### v3.0.0rc2

- fix: Minor bug fixes

### v3.0.0rc1

- See the [upgrade steps](https://www.getmaxim.ai/docs/sdk/observability/python/upgrading-to-v3)
- feat: Adds a new decorator flow to simplify tracing
- chore: The `apiKey` and `baseUrl` parameters in `MaximConfig` are now `api_key` and `base_url`, respectively.

### v2.0.0 (Breaking changes)

- feat: Jinja 2.0 variables support

### v1.5.13

- fix: Fixes an issue where `model` was `None` for some prompt versions.

### v1.5.12

- fix: Fixes edge case of race condition while fetching prompts, prompt chains and folders.

### v1.5.11

- fix: Fixes import of `dataclasses`

### v1.5.10

- feat: Adds new config called `raise_exceptions`. Unless this is set to `True`, the SDK will not raise any exceptions.
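
A minimal sketch of the flag, assuming it is passed through `Config` like the other options shown above:

```python
# With raise_exceptions left at its default (False), SDK errors are swallowed;
# set it to True to surface them during development.
maxim = Maxim(Config(api_key=api_key, raise_exceptions=True))
```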

### v1.5.9

- chore: Removes raising an alert when the repo is not found

### v1.5.8

- fix: Removes a no-op command for retrieval
- fix: Fixes the retrieval output command

### v1.5.7

- feat: Supports 0.1.x langchain

### v1.5.6

- chore: Improved langchain support

### v1.5.5

- chore: Improves cleanups for the log writer for quick returns.

### v1.5.4

- chore: Improved fs access checks.
- chore: Fixes threading locks for periodic syncs in Python 3.9

### v1.5.3

- chore: Adds Lambda environment support for SDK use with no filesystem access.

### v1.5.2

- feat: Adds support for the new `langchain_openai.AzureChatOpenAI` class in the langchain tracer

### v1.5.1

- fix: Adds Python 3.9 compatibility

### v1.5.0

- chore: Updates the connection pool to use a session that enforces re-connects before making API calls.

### v1.4.5

- chore: Adds backoff retries to failed REST calls.

### v1.4.4

- chore: langchain becomes an optional dependency

### v1.4.3

- fix: Connection pooling for network calls.
- fix: Connection close issue.

### v1.4.2 (🚧 Yanked)

- fix: Connection close issue

### v1.4.1

- Adds validation for provider in generation

### v1.4.0

- Now `generation.result` accepts
  - OpenAI chat completion object
  - Azure OpenAI chat completion object
  - Langchain `LLMResult` and `AIMessage` objects
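
For example, a sketch reusing the `generation` object from the quick-start, with a modern OpenAI client assumed:

```python
# Provider-native response objects can be handed to generation.result directly.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hi"}],
)
generation.result(completion)  # OpenAI chat completion object, per this entry
```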

### v1.3.4

- fix: Fixes `message_parser`

### v1.3.2

- fix: Fixes the utility function for langchain that parses `AIMessage` into a Maxim logger completion result

### v1.3.1

- feat: Adds tool call parsing support for Langchain tracer

### v1.3.0

- feat: Adds support for ChatCompletion in generations
- feat: Adds type safety for retrieval results

### v1.2.7

- fix: Fixes a bug where the input sent with `trace.config` was getting overridden with `None`

### v1.2.6

- chore: Adds `trace.set_input` and `trace.set_output` methods to control what to show in logs dashboard
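
For example:

```python
# Override what the logs dashboard displays for this trace.
trace.set_input("What is the capital of France?")
trace.set_output("Paris")
```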

### v1.2.5

- chore: Removes one `no_op` command while creating spans
- fix: Minor bug fixes

### v1.2.1

- fix: Fixed the `MaximLangchainTracer` error logging flow.

### v1.2.0

- feat: Adds langchain support
- chore: Adds local parsers to validate payloads on client side

### v1.1.0

- fix: Minor bug fixes around log writer cleanup

### v1.0.0

- Public release

            
