fairsight

Name: fairsight
Version: 1.0.0
Summary: Comprehensive AI Ethics and Bias Detection Toolkit with SAP Integration
Upload time: 2025-09-08 08:34:14
Author: Abhay Pratap Singh
Requires Python: >=3.8
License: MIT
Keywords: ai ethics, bias detection, fairness, machine learning, audit, sap hana, explainability, responsible ai, illegal data detection, perceptual hashing, image similarity
Requirements: pandas, numpy, scikit-learn, shap, lime, matplotlib, seaborn, plotly, markdown, fpdf, hdbcli, scipy, joblib, pytest, pytest-cov, black, flake8, jupyter, notebook, xgboost, lightgbm, catboost, asyncio-throttle, numba, imagehash, requests, sentence-transformers, faiss-cpu, torch, diffusers, transformers, accelerate
# 🧠 Fairsight Toolkit

> **Comprehensive AI Ethics and Bias Detection Toolkit with SAP Integration**

[![Python 3.8+](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![SAP HANA](https://img.shields.io/badge/SAP%20HANA-Cloud-blue)](https://www.sap.com/products/hana.html)

Fairsight is a production-ready Python toolkit for detecting bias, ensuring fairness, and maintaining ethical standards in machine learning models and datasets. Built with enterprise integration in mind, it features seamless SAP HANA Cloud and SAP Analytics Cloud connectivity.

## 🌟 Key Features

- **🔍 Comprehensive Bias Detection**: Statistical parity, disparate impact, equal opportunity, and more
- **⚖️ Fairness Metrics**: Demographic parity, equalized odds, predictive parity
- **🔮 Model Explainability**: SHAP and LIME integration for interpretable AI
- **📊 Enterprise Integration**: Native SAP HANA Cloud and SAP Analytics Cloud support
- **📋 Justified Attributes**: Smart handling of business-justified discriminatory features
- **🚀 Easy to Use**: Simple API for both datasets and trained models
- **📈 Automated Reporting**: Beautiful, actionable audit reports
- **🏢 Production Ready**: Enterprise-grade logging, error handling, and scalability

## 🛠️ Installation

### Basic Installation
```bash
pip install fairsight
```

### With SAP Integration
```bash
pip install fairsight[sap]
```

### Development Installation
```bash
git clone https://github.com/KS-Vijay/fairsight.git
cd fairsight
pip install -e .[dev,sap]
```

## 🚀 Quick Start

### Basic Dataset Audit
```python
from fairsight import FSAuditor

# Simple dataset audit
auditor = FSAuditor(
    dataset="data/hiring_data.csv",
    sensitive_features=["gender", "race"],
    target="hired",
    justified_attributes=["experience_years"]  # Job-relevant factors
)

results = auditor.run_audit()
print(f"Ethical Score: {results['ethical_score']}/100")
```

### Model + Dataset Audit
```python
from fairsight import FSAuditor
from sklearn.ensemble import RandomForestClassifier

# Train your model (X_train and y_train come from your own train/test split)
model = RandomForestClassifier()
model.fit(X_train, y_train)

# Comprehensive audit
auditor = FSAuditor(
    dataset="data/loan_data.csv",
    model=model,
    sensitive_features=["gender", "race", "age"],
    target="loan_approved",
    justified_attributes=["credit_score", "income"],  # Financially relevant
    fairness_threshold=0.8
)

# Run complete audit
audit_results = auditor.run_audit()

# Export results
auditor.export_results("audit_report.json")
```

### Handling "Justified" Attributes

The key innovation of Fairsight is its handling of **justified attributes** - features that may appear discriminatory but are justified by legitimate business requirements:

```python
# Example: House loan approval
auditor = FSAuditor(
    dataset="house_loans.csv",
    sensitive_features=["gender", "race", "job"],  
    justified_attributes=["job"],  # Job status is legally justified for loans
    target="approved"
)

results = auditor.run_audit()

# Job-related disparities won't be flagged as bias
# Gender/race disparities will still be detected
```

## 🚀 Quick Start: One-liner Wrappers

Fairsight provides convenient wrapper functions for the most common bias and fairness analysis tasks. These wrappers let you run a full analysis in a single line of code.

### Dataset Bias Detection (Wrapper)
```python
from fairsight import detect_dataset_bias
import pandas as pd

df = pd.read_csv('data.csv')
results = detect_dataset_bias(df, protected_attributes=['gender', 'race'], target_column='outcome')
for r in results:
    print(r)
```
**Output:**
```
BiasResult(gender.Disparate Impact: 0.82 [FAIR])
BiasResult(gender.Statistical Parity Difference: 0.05 [FAIR])
BiasResult(race.Disparate Impact: 0.76 [BIASED])
BiasResult(race.Statistical Parity Difference: 0.18 [BIASED])
```

### Model Bias Detection (Wrapper)
```python
from fairsight import detect_model_bias
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

df = pd.read_csv('data.csv')
model = RandomForestClassifier().fit(df.drop('outcome', axis=1), df['outcome'])
results = detect_model_bias(model, df, protected_attributes=['gender'], target_column='outcome')
for r in results:
    print(r)
```

### Full Dataset Audit (Wrapper)
```python
from fairsight import audit_dataset
import pandas as pd

df = pd.read_csv('data.csv')
results = audit_dataset(df, protected_attributes=['gender'], target_column='outcome')
print(results['bias_detection'])
print(results['fairness_metrics'])
```

### Full Model Audit (Wrapper)
```python
from fairsight import audit_model
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

df = pd.read_csv('data.csv')
X = df.drop('outcome', axis=1)
y = df['outcome']
model = RandomForestClassifier().fit(X, y)
results = audit_model(model, X, y, protected_attributes=['gender'])
print(results['bias_detection'])
print(results['fairness_metrics'])
```

## 🏗️ Architecture

```
fairsight/
├── __init__.py              # Main package exports  
├── auditor.py              # FSAuditor main class
├── bias_detection.py       # Enhanced bias detection with justified attributes
├── dataset_audit.py        # Comprehensive dataset auditing
├── model_audit.py          # Model performance and bias auditing  
├── explainability.py       # SHAP/LIME model explanations
├── fairness_metrics.py     # Fairness metric computations
├── report_generator.py     # Automated report generation
├── dashboard_push.py       # SAP HANA Cloud integration
└── utils.py               # Utility functions
```
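
The usage examples throughout this README import these components from the top-level package. Assuming `fairsight/__init__.py` re-exports them from the modules above (which the examples suggest, though the exact export list is not shown here), the rough mapping looks like this:

```python
# Illustrative mapping of top-level imports to modules, inferred from the
# examples in this README; the authoritative list lives in fairsight/__init__.py.
from fairsight import (
    FSAuditor,             # auditor.py
    BiasDetector,          # bias_detection.py
    DatasetAuditor,        # dataset_audit.py
    ModelAuditor,          # model_audit.py
    ExplainabilityEngine,  # explainability.py
    FairnessMetrics,       # fairness_metrics.py
    Dashboard,             # dashboard_push.py
)
```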

## 📊 SAP Integration

### SAP HANA Cloud Setup
```python
from fairsight import Dashboard

# Configure SAP HANA connection
dashboard = Dashboard({
    "host": "your-hana-instance.hanacloud.ondemand.com",
    "port": 443,
    "user": "DBADMIN", 
    "password": "your_password",
    "encrypt": True
})

# Audit results automatically pushed to HANA
auditor = FSAuditor(
    dataset="data.csv",
    sensitive_features=["gender"],
    enable_sap_integration=True
)

results = auditor.run_audit()  # Automatically pushes to HANA
```

### SAP Analytics Cloud Dashboard
```python
# Generate SAP Analytics Cloud configuration
dashboard_config = dashboard.create_sac_dashboard_config()

# Use this configuration to set up your SAC dashboard
print(dashboard_config)
```

## 🔍 Comprehensive Example

```python
import pandas as pd
from fairsight import FSAuditor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load data
df = pd.read_csv("hiring_dataset.csv")

# Define protected and justified attributes
protected_attrs = ["gender", "race", "age"]
justified_attrs = ["years_experience", "education_level"]  # Job-relevant

# Split data
X = df.drop("hired", axis=1)
y = df["hired"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train model
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Comprehensive audit
auditor = FSAuditor(
    model=model,
    X_test=X_test,
    y_test=y_test,
    sensitive_features=protected_attrs,
    justified_attributes=justified_attrs,
    fairness_threshold=0.8,
    enable_sap_integration=True
)

# Run audit with all components
results = auditor.run_audit(
    include_dataset=True,
    include_model=True,
    include_bias_detection=True,
    generate_report=True,
    push_to_dashboard=True
)

# Print summary
print("=" * 50)
print(f"🏆 ETHICAL SCORE: {results['ethical_score']}/100")
print(f"📊 OVERALL ASSESSMENT: {results['executive_summary']['overall_assessment']}")
print("=" * 50)

# Key findings
for finding in results['executive_summary']['key_findings']:
    print(f"✅ {finding}")

# Critical issues  
for issue in results['executive_summary']['critical_issues']:
    print(f"🚨 {issue}")

# Recommendations
for rec in results['executive_summary']['recommendations']:
    print(f"💡 {rec}")

# Export detailed results
auditor.export_results("detailed_audit_results.json")

# View audit history
history = auditor.get_audit_history(limit=5)
print(history)
```

## 📋 Key Metrics

### Bias Detection Metrics
- **Disparate Impact**: 80% rule compliance
- **Statistical Parity Difference**: Difference in positive rates
- **Equal Opportunity Difference**: Difference in TPR across groups  
- **Predictive Parity**: Difference in precision across groups
- **Equalized Odds**: Both TPR and FPR differences

### Fairness Metrics  
- **Demographic Parity**: Equal positive prediction rates
- **Equal Opportunity**: Equal TPR for qualified individuals
- **Predictive Equality**: Equal FPR across groups
- **Overall Accuracy Equality**: Equal accuracy across groups
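
For reference, the sketch below hand-computes three of these quantities (disparate impact, statistical parity difference, and the equal opportunity difference) with plain NumPy on hypothetical toy arrays. It follows the standard definitions listed above; it is an independent illustration, not a call into Fairsight's own API.

```python
import numpy as np

# Hypothetical toy labels, predictions, and a binary privileged-group mask
y_true     = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred     = np.array([1, 0, 1, 0, 0, 1, 1, 0])
privileged = np.array([1, 1, 1, 1, 0, 0, 0, 0]).astype(bool)

# Positive prediction rate per group
rate_priv   = y_pred[privileged].mean()
rate_unpriv = y_pred[~privileged].mean()

# Disparate impact: unprivileged rate / privileged rate (80% rule => flag if < 0.8)
disparate_impact = rate_unpriv / rate_priv

# Statistical parity difference: gap in positive prediction rates
spd = rate_priv - rate_unpriv

# Equal opportunity difference: gap in true positive rates (TPR)
def tpr(y_t, y_p):
    positives = y_t == 1
    return y_p[positives].mean() if positives.any() else np.nan

eod = tpr(y_true[privileged], y_pred[privileged]) - tpr(y_true[~privileged], y_pred[~privileged])

print(f"Disparate impact: {disparate_impact:.2f} ({'FAIR' if disparate_impact >= 0.8 else 'BIASED'})")
print(f"Statistical parity difference: {spd:.2f}")
print(f"Equal opportunity difference: {eod:.2f}")
```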

## 🎯 Use Cases

### 1. **Hiring & Recruitment**
```python
# Audit hiring algorithms
auditor = FSAuditor(
    dataset="hiring_data.csv",
    sensitive_features=["gender", "race", "age"],
    justified_attributes=["experience", "education", "skills_score"],
    target="hired"
)
```

### 2. **Financial Services**
```python  
# Audit loan approval models
auditor = FSAuditor(
    model=loan_model,
    sensitive_features=["gender", "race", "marital_status"], 
    justified_attributes=["credit_score", "income", "debt_ratio"],
    target="loan_approved"
)
```

### 3. **Healthcare**
```python
# Audit medical diagnosis systems
auditor = FSAuditor(
    model=diagnosis_model,
    sensitive_features=["gender", "race", "age"],
    justified_attributes=["symptoms", "medical_history", "test_results"],
    target="diagnosis"
)
```

## 📊 Example Output

```
🧠 AI Fairness & Bias Audit Report
===================================

**Ethical Score**: 87/100

🔍 Attribute-wise Bias Analysis
--------------------------------

➤ Gender
- Disparate Impact: 0.85
- Equal Opportunity Difference: 0.08  
- Statistical Parity Difference: 0.12
- **Interpretation**: Minor disparity detected, within acceptable range.

➤ Job (justified attribute)  
- Disparate Impact: 0.62
- Equal Opportunity Difference: 0.28
- **Interpretation**: This feature is justified for decision-making per business requirements.

📊 Fairness Metric Gaps
------------------------

| Attribute | Precision Gap | Recall Gap | F1 Score Gap |
|-----------|---------------|-----------|-------------|
| Gender    | 0.05          | 0.07      | 0.06        |
| Job       | 0.15          | 0.18      | 0.16        |

📌 Final Ethical Assessment
----------------------------

✅ The model demonstrates strong ethical integrity with low bias across protected groups.

📋 Note: job is marked as a justified attribute and disparities here are acceptable per business configuration.
```

## 🔧 Advanced Configuration

### Custom Fairness Thresholds
```python
auditor = FSAuditor(
    dataset="data.csv",
    fairness_threshold=0.85,  # Stricter 85% rule
    sensitive_features=["gender", "race"]
)
```

### Custom Privileged Groups
```python
auditor = FSAuditor(
    dataset="data.csv",
    sensitive_features=["gender", "race"],
    privileged_groups={
        "gender": "male",      # Specify privileged group
        "race": "white"
    }
)
```

## 🧑‍💻 Advanced: Core Class Usage

For advanced users, Fairsight exposes all core classes for maximum flexibility and custom workflows.

### BiasDetector (Direct Use)
```python
from fairsight import BiasDetector
import pandas as pd

df = pd.read_csv('data.csv')
detector = BiasDetector(dataset=df, sensitive_features=['gender'], target='outcome')
results = detector.detect_bias_on_dataset()
for r in results:
    print(r)
```

### DatasetAuditor (Direct Use)
```python
from fairsight import DatasetAuditor
import pandas as pd

df = pd.read_csv('data.csv')
auditor = DatasetAuditor(dataset=df, protected_attributes=['gender'], target_column='outcome')
results = auditor.audit()
print(results['bias_detection'])
print(results['fairness_metrics'])
```

### ModelAuditor (Direct Use)
```python
from fairsight import ModelAuditor
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

df = pd.read_csv('data.csv')
X = df.drop('outcome', axis=1)
y = df['outcome']
model = RandomForestClassifier().fit(X, y)
auditor = ModelAuditor(model=model, X_test=X, y_test=y, protected_attributes=['gender'], target_column='outcome')
results = auditor.audit()
print(results['bias_detection'])
print(results['fairness_metrics'])
```

### FairnessMetrics (Direct Use)
```python
from fairsight import FairnessMetrics
import numpy as np

y_true = np.array([1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0])
protected = np.array([0, 1, 0, 1])
fm = FairnessMetrics(y_true, y_pred, protected_attr=protected, privileged_group=0)
print(fm.demographic_parity())
print(fm.equalized_odds())
print(fm.predictive_parity())
```

### ExplainabilityEngine (Direct Use)
```python
from fairsight import ExplainabilityEngine
from sklearn.linear_model import LogisticRegression
import pandas as pd

df = pd.read_csv('data.csv')
X = df.drop('outcome', axis=1)
y = df['outcome']
model = LogisticRegression().fit(X, y)
engine = ExplainabilityEngine(model=model, training_data=X, feature_names=list(X.columns))
shap_result = engine.explain_with_shap(X)
print(shap_result)
```

## 🧩 Standalone Utilities (Quick Use)

Fairsight exposes key utilities as standalone functions for maximum flexibility. You can use these independently of the main pipeline:

```python
from fairsight import (
    explain_with_shap, explain_with_lime, detect_illegal_data,
    preprocess_data, calculate_privilege_groups, generate_html_report
)
from sklearn.linear_model import LogisticRegression
import pandas as pd

# Example data and model (same 'data.csv' layout as the snippets above)
df = pd.read_csv('data.csv')
X = df.drop('outcome', axis=1)
feature_names = list(X.columns)
model = LogisticRegression().fit(X, df['outcome'])

# Preprocessing
df_processed, encoders = preprocess_data(df, target_column='outcome', protected_attributes=['gender'])

# Privilege group calculation
priv_groups = calculate_privilege_groups(df, ['gender'])

# Illegal data detection
illegal_results = detect_illegal_data(df)

# Explainability (SHAP & LIME)
shap_result = explain_with_shap(model, X, feature_names)
lime_result = explain_with_lime(model, X, feature_names)

# Quick HTML report
dummy_bias = {'gender': {'statistical_parity': 0.1}}
dummy_fairness = {'gender': {'demographic_parity': 0.12}}
report_path = generate_html_report(dummy_bias, dummy_fairness, model_name='DemoModel')
print(f"HTML report at: {report_path}")
```

---

## 📚 References & Citations

### Algorithms & Metrics
- **Reweighing (Bias Mitigation)** (illustrated in the sketch after this list):
  - Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1-33. [Springer Link](https://link.springer.com/article/10.1007/s10115-011-0463-8)
- **Fairness Metrics:**
  - Demographic Parity, Equalized Odds, Equal Opportunity, Predictive Parity, Disparate Impact, Statistical Parity, etc. are based on open academic literature, e.g.:
    - Hardt, M., Price, E., & Srebro, N. (2016). Equality of Opportunity in Supervised Learning. NeurIPS. [arXiv](https://arxiv.org/abs/1610.02413)
    - Feldman, M., et al. (2015). Certifying and removing disparate impact. KDD. [arXiv](https://arxiv.org/abs/1412.3756)
    - Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. [fairmlbook.org](https://fairmlbook.org/)
- **Explainability:**
  - SHAP: Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS. [arXiv](https://arxiv.org/abs/1705.07874)
  - LIME: Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD. [arXiv](https://arxiv.org/abs/1602.04938)
- **Generalized Entropy Index:**
  - Speicher, T., et al. (2018). A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices. KDD. [arXiv](https://arxiv.org/abs/1807.00799)
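
The reweighing scheme cited above (Kamiran & Calders, 2012) can be summarized in a few lines. The sketch below is a minimal, self-contained rendering of the weighting rule on a hypothetical toy frame; it is not Fairsight's internal implementation.

```python
import pandas as pd

# Minimal sketch of the Kamiran & Calders (2012) reweighing rule, for illustration only.
# Each (group, label) cell gets weight P(group) * P(label) / P(group, label), so that
# group membership and the label become statistically independent under the
# reweighted distribution.
def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Hypothetical toy data
toy = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f", "f", "m"],
    "hired":  [1,   1,   0,   0,   0,   1,   0,   1],
})
toy["sample_weight"] = reweighing_weights(toy, "gender", "hired")
print(toy)
```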

### Libraries Used
- **scikit-learn** (BSD-3-Clause License): Machine learning models and utilities
- **pandas** (BSD-3-Clause License): Data processing
- **numpy** (BSD License): Numerical computing
- **SHAP** (MIT License): Model explainability
- **LIME** (MIT License): Model explainability
- **matplotlib, seaborn** (matplotlib: PSF License, seaborn: BSD): Visualization

All algorithms and metrics are implemented based on open academic literature and open-source libraries. No proprietary or closed-source code is used.

---

## 🤝 Contributing

We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`) 
5. Open a Pull Request

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- **SAP HANA Cloud** for enterprise data integration
- **SHAP & LIME** for model explainability  
- **scikit-learn** for machine learning utilities
- **pandas & numpy** for data processing

## 📞 Support

- 📧 Email: support@fairsight.com
- 💬 GitHub Issues: [Create an issue](https://github.com/KS-Vijay/fairsight/issues)
- 📖 Documentation: [fairsight.readthedocs.io](https://fairsight.readthedocs.io/)

---

**Made with ❤️ for Ethical AI**

*Fairsight Toolkit - Making AI Fair, Transparent, and Accountable*

            
