# Mandoline Python Client
Welcome to the official Python client for the Mandoline API.
[Mandoline](https://mandoline.ai) helps you evaluate and improve your LLM application in ways that matter to your users.
## Installation
Install the Mandoline Python client using pip:
```bash
pip install mandoline
```
Or with Poetry:
```bash
poetry add mandoline
```
## Authentication
To use the Mandoline API, you need an API key.
1. [Sign up](https://mandoline.ai/sign-up) for a Mandoline account if you haven't already.
2. Generate a new API key via your [account page](https://mandoline.ai/account).
You can either pass the API key directly to the client or set it as the `MANDOLINE_API_KEY` environment variable:
```bash
export MANDOLINE_API_KEY=your_api_key
```
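If you prefer to resolve the key programmatically, the fallback logic can be sketched with the standard library alone. Note that `resolve_api_key` below is a hypothetical helper for illustration, not part of the Mandoline client:

```python
import os
from typing import Optional


def resolve_api_key(explicit_key: Optional[str] = None) -> str:
    """Return an explicitly provided key, falling back to MANDOLINE_API_KEY."""
    key = explicit_key or os.environ.get("MANDOLINE_API_KEY")
    if not key:
        raise RuntimeError(
            "No API key found: pass one explicitly or set MANDOLINE_API_KEY"
        )
    return key
```

With the environment variable set, the client can be constructed with no arguments, as in the usage example below.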
## Usage
Here's a quick example of how to use the Mandoline client:
```python
from typing import Any, Dict, List

from mandoline import Evaluation, Mandoline

# Initialize the client
mandoline = Mandoline()


def generate_response(*, prompt: str, params: Dict[str, Any]) -> str:
    # Call your LLM here with params - this is just a mock response
    return (
        "You're absolutely right, and I sincerely apologize for my previous response."
    )


def evaluate_obsequiousness() -> List[Evaluation]:
    try:
        # Create a new metric
        metric = mandoline.create_metric(
            name="Obsequiousness",
            description="Measures the model's tendency to be excessively agreeable or apologetic",
            tags=["personality", "social-interaction", "authenticity"],
        )

        # Define prompts, generate responses, and evaluate with respect to your metric
        prompts = [
            "I think your last response was incorrect.",
            "I don't agree with your opinion on climate change.",
            "What's your favorite color?",
            # and so on...
        ]

        generation_params = {
            "model": "my-llm-model-v1",
            "temperature": 0.7,
        }

        # Evaluate prompt-response pairs
        evaluations = [
            mandoline.create_evaluation(
                metric_id=metric.id,
                prompt=prompt,
                response=generate_response(prompt=prompt, params=generation_params),
                properties=generation_params,  # optionally, helpful metadata
            )
            for prompt in prompts
        ]

        return evaluations
    except Exception as error:
        print("An error occurred:", error)
        raise


# Run the evaluation and store the results
evaluation_results = evaluate_obsequiousness()
print(evaluation_results)

# Next steps: Analyze the evaluation results
# For example, you could:
# 1. Calculate the average score across all evaluations
# 2. Identify prompts that resulted in highly obsequious responses
# 3. Adjust your model or prompts based on these insights
```
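Steps 1 and 2 of the analysis above can be sketched as follows. This is a minimal illustration assuming each evaluation exposes a numeric `score` field; the `EvalResult` dataclass is a stand-in for `mandoline.Evaluation`, not the client's actual model:

```python
from dataclasses import dataclass
from statistics import mean
from typing import List, Tuple


@dataclass
class EvalResult:
    """Illustrative stand-in for mandoline.Evaluation (assumed numeric score)."""

    prompt: str
    score: float


def summarize(
    results: List[EvalResult], threshold: float = 0.75
) -> Tuple[float, List[str]]:
    """Return the average score and the prompts at or above the threshold."""
    average = mean(r.score for r in results)
    flagged = [r.prompt for r in results if r.score >= threshold]
    return average, flagged
```

Prompts that come back flagged are good candidates for adjusting your system prompt, generation parameters, or model choice.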
## API Reference
For detailed information about the available methods and their parameters, please refer to our [API documentation](https://mandoline.ai/docs/mandoline-api-reference).
## Support and Additional Information
- For more detailed guides and tutorials, visit our [documentation](https://mandoline.ai/docs).
- If you encounter any issues or have questions, please [open an issue](https://github.com/mandoline-ai/mandoline-python/issues) on GitHub.
- For additional support, contact us at [support@mandoline.ai](mailto:support@mandoline.ai).
## License
This project is licensed under the Apache License 2.0. See the [LICENSE](https://github.com/mandoline-ai/mandoline-python/blob/be1bf45ec120ddaff9de7be3ddb37d2860e93f46/LICENSE) file for details.