# 🛡️ Trustra — Trust-First AutoML Framework
> **"One `fit()`. Full trust."**
Trustra is a **next-generation, open-source AutoML framework** that doesn't just maximize accuracy — it **ensures model integrity** by automatically detecting **data leakage, bias, drift, and instability**, and by generating **auditable trust reports**.
Unlike traditional AutoML tools that optimize only for performance, **Trustra enforces responsibility by design**.
---
## 🚀 Why Trustra?
Most AutoML tools (like H2O, AutoGluon, or SageMaker) focus on **"How accurate is the model?"**
Trustra asks:
> β **"Can we trust this model?"**
> β **"Is it fair?"**
> β **"Is it safe for production?"**
We built Trustra because:
- Real-world models fail due to **hidden data issues**, not poor algorithms.
- Bias goes undetected until it harms users.
- Drift creeps in silently.
- Teams waste weeks on manual validation.
👉 **Trustra automates trust.**
---
## ✨ Key Features
| Feature | Description |
|-------|-------------|
| 🔍 **Data Quality Checks** | Detects missing values, duplicates, class imbalance, and **data leakage** (e.g., target leakage) |
| ⚖️ **Fairness Audit** | Automatically audits bias across sensitive features (e.g., gender, race) using **Demographic Parity & Equalized Odds** |
| 📉 **Drift Detection** | Flags feature drift between train/validation using the KS test & PSI (see the sketch below) |
| 🧠 **Auto Model Selection** | Uses **Optuna** to find the best model (Logistic Regression, Random Forest, Gradient Boosting) and hyperparameters |
| 📊 **Trust Report** | Generates a **self-contained HTML report** with model performance, fairness metrics, and detected issues |
| 🚀 **Simple API** | Just `model.fit(X_train, y_train)` — no complex pipelines |
| 💡 **Explainability Ready** | Designed for integration with SHAP/LIME (coming soon) |
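
The drift checks named above are standard statistical tests that can be reproduced outside the framework. Here is a minimal sketch of the technique, assuming `numpy` and `scipy` are available; the binning scheme, epsilon, and thresholds are illustrative choices, not Trustra's documented internals:

```python
# Illustrative drift check: KS test + Population Stability Index (PSI).
# This sketches the general technique; Trustra's own binning and
# thresholds may differ.
import numpy as np
from scipy.stats import ks_2samp

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two 1-D samples."""
    # Bin edges come from the expected (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) and division by zero.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_col = rng.normal(0.0, 1.0, 5_000)
val_col = rng.normal(0.3, 1.0, 5_000)   # mean shift -> drift

stat, p_value = ks_2samp(train_col, val_col)
print(f"KS={stat:.3f}, p={p_value:.3g}, PSI={psi(train_col, val_col):.3f}")
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant.
```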
---
## 🏆 Results on Synthetic Data
| Metric | Result |
|-------|--------|
| **CV AUC** | 0.960 |
| **Bias (DPD)** | 0.051 (Low) |
| **Data Issues Found** | 0 |
| **Training Time** | < 10 seconds |
| **Fairness Audit** | ✅ Passed |

> ✅ Generated fully automatically — no manual checks.
---
## 🌟 What Makes Trustra Unique?
| Trustra | Traditional AutoML |
|--------|-------------------|
| Built-in **fairness** | Fairness? You code it. |
| Auto **data leakage** detection | Silent failure risk |
| **Trust report** generated | Just predictions |
| **Drift & imbalance** checks | Ignored |
| One `fit()` → full audit | Manual validation needed |
| **Open, transparent, auditable** | Black-box models |
> Trustra is **not just AutoML — it's Responsible AI automation**.
---
## 🧩 How It Works
```python
from trustra import TrustRA
# Initialize with target and sensitive features
model = TrustRA(target="income", sensitive_features=["gender"])
# Run full trust-first pipeline
model.fit(X_train, y_train, X_val, y_val)
# Get predictions
preds = model.predict(X_val)
# Report saved as: trustra_report.html
```
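
For context, here is a hedged end-to-end sketch with synthetic data. The column names, the placement of the sensitive `gender` column inside `X`, and the use of scikit-learn's `make_classification` are illustrative assumptions, not part of Trustra's documented API:

```python
# Minimal end-to-end sketch on synthetic data. The DataFrame layout
# (features plus a "gender" column inside X) is an assumption about
# how TrustRA expects its inputs; adapt to your own schema.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from trustra import TrustRA

X_arr, y = make_classification(n_samples=2_000, n_features=5, random_state=42)
X = pd.DataFrame(X_arr, columns=[f"f{i}" for i in range(5)])
rng = np.random.default_rng(42)
X["gender"] = rng.choice(["M", "F"], size=len(X))  # synthetic sensitive feature

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Target name mirrors the README example; labels are passed separately.
model = TrustRA(target="income", sensitive_features=["gender"])
model.fit(X_train, y_train, X_val, y_val)
print(model.predict(X_val)[:10])
```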
---
## Pipeline Stages
1. **Data Validation** → check quality, leakage, duplicates
2. **Fairness Audit** → measure DPD/EOD (see the sketch below)
3. **Model Training** → Optuna + cross-validation
4. **Report Generation** → interactive HTML with Plotly
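
DPD (Demographic Parity Difference) and EOD (Equalized Odds Difference) are standard fairness metrics, and the `fairlearn` library exposes them directly. The sketch below shows how they can be computed in isolation; whether Trustra calls fairlearn internally is an implementation detail not confirmed here:

```python
# Standalone computation of the two fairness metrics named above,
# using fairlearn. Values closer to 0 indicate more parity between
# groups defined by the sensitive feature.
import numpy as np
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
)

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1_000)
y_pred = rng.integers(0, 2, 1_000)
gender = rng.choice(["M", "F"], size=1_000)  # synthetic sensitive feature

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
eod = equalized_odds_difference(y_true, y_pred, sensitive_features=gender)
print(f"DPD={dpd:.3f}, EOD={eod:.3f}")
```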
---
## 📦 Installation
```bash
# Clone the repo
git clone https://github.com/Devansh-567/Trustra---Trust-First-AutoML-Framework.git
cd Trustra---Trust-First-AutoML-Framework
# Install in editable mode
pip install -e .
# Optional: Install dependencies
pip install -r requirements.txt
```
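
Trustra is also published on PyPI under the name `trustra`, so a released version can be installed without cloning:

```bash
pip install trustra
```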
---
## 🧪 Example Usage
```bash
python examples/binary_classification.py
```
### Generates:
> ✅ `trustra_report.html`
> ✅ Console metrics (AUC, fairness, issues)
---
## 📄 License
MIT License
Copyright © 2025 Devansh
---
## 👤 Author
> Devansh Singh
> devansh.jay.singh@gmail.com
> "Built Trustra to make AI trustworthy, one model at a time."