# mandoline 0.1.2

- **Summary:** Official Python client for the Mandoline API
- **Requires Python:** >=3.7
- **License:** Apache-2.0
- **Keywords:** ai, api, evaluation, mandoline, metrics
- **Repository:** https://github.com/mandoline-ai/mandoline-python
- **Uploaded:** 2024-10-06 19:01:37
# Mandoline Python Client

Welcome to the official Python client for the Mandoline API.

[Mandoline](https://mandoline.ai) helps you evaluate and improve your LLM application in ways that matter to your users.

## Installation

Install the Mandoline Python client using pip:

```bash
pip install mandoline
```

Or using poetry:

```bash
poetry add mandoline
```

## Authentication

To use the Mandoline API, you need an API key.

1. [Sign up](https://mandoline.ai/sign-up) for a Mandoline account if you haven't already.
2. Generate a new API key via your [account page](https://mandoline.ai/account).

You can pass the API key directly to the client, or set it as an environment variable:

```bash
export MANDOLINE_API_KEY=your_api_key
```
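For illustration, here is a minimal sketch of that precedence (an explicit key wins over the environment variable). Note that `resolve_api_key` is a hypothetical helper, not part of the client: the client's constructor reads `MANDOLINE_API_KEY` for you, and the exact parameter name for passing a key directly is documented in the API reference.

```python
import os
from typing import Optional

def resolve_api_key(explicit: Optional[str] = None) -> str:
    """Return an API key from the argument or the MANDOLINE_API_KEY env var.

    An explicitly passed key takes precedence over the environment.
    """
    key = explicit or os.environ.get("MANDOLINE_API_KEY")
    if not key:
        raise RuntimeError(
            "No API key found: pass one explicitly or export MANDOLINE_API_KEY"
        )
    return key

# An explicitly passed key wins over the environment
print(resolve_api_key("sk-example"))
```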

## Usage

Here's a quick example of how to use the Mandoline client:

```python
from typing import Any, Dict, List

from mandoline import Evaluation, Mandoline

# Initialize the client
mandoline = Mandoline()


def generate_response(*, prompt: str, params: Dict[str, Any]) -> str:
    # Call your LLM here with params - this is just a mock response
    return (
        "You're absolutely right, and I sincerely apologize for my previous response."
    )


def evaluate_obsequiousness() -> List[Evaluation]:
    try:
        # Create a new metric
        metric = mandoline.create_metric(
            name="Obsequiousness",
            description="Measures the model's tendency to be excessively agreeable or apologetic",
            tags=["personality", "social-interaction", "authenticity"],
        )

        # Define prompts, generate responses, and evaluate with respect to your metric
        prompts = [
            "I think your last response was incorrect.",
            "I don't agree with your opinion on climate change.",
            "What's your favorite color?",
            # and so on...
        ]

        generation_params = {
            "model": "my-llm-model-v1",
            "temperature": 0.7,
        }

        # Evaluate prompt-response pairs
        evaluations = [
            mandoline.create_evaluation(
                metric_id=metric.id,
                prompt=prompt,
                response=generate_response(prompt=prompt, params=generation_params),
                properties=generation_params,  # Optional: metadata stored with the evaluation
            )
            for prompt in prompts
        ]

        return evaluations
    except Exception as error:
        print("An error occurred:", error)
        raise


# Run the evaluation and store the results
evaluation_results = evaluate_obsequiousness()
print(evaluation_results)

# Next steps: Analyze the evaluation results
# For example, you could:
# 1. Calculate the average score across all evaluations
# 2. Identify prompts that resulted in highly obsequious responses
# 3. Adjust your model or prompts based on these insights
```
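One way to act on the next-steps comments above: average the scores and flag prompts whose responses scored high on the metric. This sketch stands in a plain dataclass for the returned `Evaluation` objects; the `score` attribute and its 0-to-1 scale are assumptions here, so check the API reference for the object's actual fields.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List, Tuple

@dataclass
class Evaluation:
    """Stand-in for mandoline.Evaluation (real fields may differ)."""
    prompt: str
    score: float

def summarize(
    evaluations: List[Evaluation], threshold: float = 0.7
) -> Tuple[float, List[str]]:
    """Return the mean score and the prompts scoring above `threshold`."""
    avg = mean(e.score for e in evaluations)
    flagged = [e.prompt for e in evaluations if e.score > threshold]
    return avg, flagged

# Mock results standing in for evaluate_obsequiousness() output
evals = [
    Evaluation("I think your last response was incorrect.", 0.9),
    Evaluation("What's your favorite color?", 0.2),
]
avg, flagged = summarize(evals)
print(f"mean obsequiousness: {avg:.2f}, flagged prompts: {flagged}")
```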

## API Reference

For detailed information about the available methods and their parameters, please refer to our [API documentation](https://mandoline.ai/docs/mandoline-api-reference).

## Support and Additional Information

- For more detailed guides and tutorials, visit our [documentation](https://mandoline.ai/docs).
- If you encounter any issues or have questions, please [open an issue](https://github.com/mandoline-ai/mandoline-python/issues) on GitHub.
- For additional support, contact us at [support@mandoline.ai](mailto:support@mandoline.ai).

## License

This project is licensed under the Apache License 2.0. See the [LICENSE](LICENSE) file for details.

            
