HAPI-SDK

Name: HAPI-SDK
Version: 0.2.23
Home page: https://github.com/cyprienn967/HAPI_SDK
Summary: A plug-and-play SDK for de-hallucinating outputs from LLMs using semantic entropy and trained classifiers
Upload time: 2025-07-12 02:44:18
Requires Python: >=3.7
Keywords: llm, hallucination, detection, ai, nlp, semantic-entropy, machine-learning
Requirements: No requirements were recorded.

# HAPI SDK

A plug-and-play SDK for detecting and reducing hallucinations in Large Language Model outputs using semantic entropy analysis and trained classifiers.

## Features

- **Hallucination Detection**: Uses trained classifiers to detect hallucinations in LLM outputs
- **Semantic Entropy Analysis**: Advanced semantic analysis to identify uncertain or inconsistent outputs
- **Easy Integration**: Simple API that works with any Hugging Face model
- **Multiple Detection Methods**: Combines classifier-based and semantic entropy-based approaches
- **Real-time Analysis**: Generate and analyze outputs in real-time

## Installation

```bash
pip install HAPI-SDK
```

## Quick Start

```python
from dehallucinate_sdk import DeHallucinationClient

# Initialize the client
client = DeHallucinationClient(
    model_id="meta-llama/Llama-2-7b-chat-hf",
    license_key="your-license-key"
)

# Generate and analyze output
prompt = "Explain quantum entanglement in simple terms."
output, flagged_sentences = client.generate_output(prompt)

print("Generated Output:", output)
print("Flagged Sentences:", flagged_sentences)
```
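
The exact structure of `flagged_sentences` isn't documented here; assuming it is a list of the flagged sentence strings (an assumption, not a confirmed return type), a minimal post-processing sketch might look like this:

```python
# Minimal sketch: assumes flagged_sentences is a list of flagged sentence strings.
if flagged_sentences:
    print(f"{len(flagged_sentences)} sentence(s) may contain hallucinations:")
    for sentence in flagged_sentences:
        print(" -", sentence)
else:
    print("No sentences were flagged.")
```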

## Available Methods

The `DeHallucinationClient` exposes the following main methods:

### 1. `generate_output(prompt, max_tokens=512)`
Generates a response and performs hallucination analysis in one step.

```python
output, flagged_sentences = client.generate_output(
    "What is the capital of France?", 
    max_tokens=100
)
```

### 2. `semantic_entropy_check(prompt, num_generations=5)`
Analyzes semantic entropy to detect potential hallucinations.

```python
entropy_score, is_hallucinated = client.semantic_entropy_check(
    "Tell me about the history of artificial intelligence"
)
```
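
The method returns an entropy score and a boolean flag; a hedged sketch of acting on those values follows (the retry strategy is illustrative, not part of the SDK):

```python
# Illustrative follow-up: fall back to a full analysis pass if the entropy check flags the answer.
entropy_score, is_hallucinated = client.semantic_entropy_check(
    "Tell me about the history of artificial intelligence",
    num_generations=5,
)
print(f"Semantic entropy: {entropy_score:.3f}")
if is_hallucinated:
    # Re-run generation with hallucination analysis so flagged sentences are reported.
    output, flagged = client.generate_output(
        "Tell me about the history of artificial intelligence"
    )
    print("Flagged on retry:", flagged)
```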

### 3. `sentence_contains_hallucination(sentence, context="")`
Checks if a specific sentence contains hallucinations.

```python
is_hallucinated = client.sentence_contains_hallucination(
    "The capital of France is Berlin.",
    context="Geography facts"
)
```
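
For longer drafts, the same check can be applied sentence by sentence. The splitting below is a naive illustration; a real pipeline would use a proper sentence tokenizer:

```python
# Naive per-sentence sweep over a generated draft (splitting on ". " is illustrative only).
draft = client.generate("Summarize the history of the Apollo program", max_tokens=150)
for sentence in draft.split(". "):
    sentence = sentence.strip()
    if sentence and client.sentence_contains_hallucination(sentence, context="Apollo program"):
        print("Possible hallucination:", sentence)
```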

### 4. `generate(prompt, max_tokens=512)`
Simple text generation without hallucination analysis.

```python
output = client.generate("Write a story about space exploration", max_tokens=200)
```

### 5. `generate_sentence(prompt)`
Generates a single sentence response.

```python
sentence = client.generate_sentence("Complete this: The best way to learn programming is")
```

## Supported Models

- Llama-2-7b-chat (meta-llama/Llama-2-7b-chat-hf)
- Falcon-40b (tiiuae/falcon-40b)  
- Llama-2-7b (meta-llama/Llama-2-7b-hf)
- MPT-7b (mosaicml/mpt-7b)
- More models coming soon!
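
Any of the identifiers above can be passed as `model_id` when constructing the client, for example:

```python
# Same constructor as in Quick Start, pointed at a different supported model.
falcon_client = DeHallucinationClient(
    model_id="tiiuae/falcon-40b",
    license_key="your-license-key",
)
```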

## Configuration

### Environment Variables

```bash
export OPENAI_API_KEY="your_openai_api_key"
export HUGGINGFACE_API_KEY="your_hf_token"
```
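
How the SDK consumes these variables isn't shown here; a small defensive sketch that simply verifies they are set before creating a client:

```python
import os

# Fail early if either key is missing from the environment (assumes the SDK reads
# both variables; adjust the list to match your setup).
for var in ("OPENAI_API_KEY", "HUGGINGFACE_API_KEY"):
    if not os.environ.get(var):
        raise RuntimeError(f"Missing required environment variable: {var}")
```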

### Advanced Usage

```python
# Configure with custom parameters
client = DeHallucinationClient(
    model_id="meta-llama/Llama-2-7b-chat-hf",
    license_key="your-license-key",
    device="cuda",  # or "cpu"
    temperature=0.7,
    use_semantic_entropy=True,
    confidence_threshold=0.8
)

# Batch processing
prompts = [
    "What causes climate change?",
    "How do vaccines work?", 
    "Explain machine learning basics"
]

results = []
for prompt in prompts:
    output, flagged = client.generate_output(prompt)
    results.append({"prompt": prompt, "output": output, "flagged": flagged})
```
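
A small follow-up to the batch loop above, assuming each `flagged` value is a list of flagged sentences:

```python
# Summarize how many sentences were flagged per prompt (assumes `flagged` is a list).
for result in results:
    print(f"{result['prompt'][:40]:<40}  flagged: {len(result['flagged'])}")
```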

## Error Handling

```python
try:
    output, flagged = client.generate_output("Your prompt here")
except Exception as e:
    print(f"Error during generation: {e}")
```
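
The SDK's own exception types aren't documented here, so the snippet above catches `Exception`; an illustrative retry wrapper built on the same assumption:

```python
import time

# Illustrative retry loop with simple exponential backoff; not part of the SDK itself.
for attempt in range(3):
    try:
        output, flagged = client.generate_output("Your prompt here")
        break
    except Exception as e:
        print(f"Attempt {attempt + 1} failed: {e}")
        time.sleep(2 ** attempt)
else:
    raise RuntimeError("All generation attempts failed")
```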

## Contributing

We welcome contributions! Please see our [GitHub repository](https://github.com/cyprienn967/HAPI_SDK) for more information.

## License

MIT License - see LICENSE file for details.

## Support

For questions, suggestions, or issues, please contact us or open an issue on GitHub.