# Ethical AI (eai)
A comprehensive Python package for ethical AI validation and auditing, designed with a modular structure similar to scikit-learn.
## Features
- **Bias Detection**: Identify and measure bias in AI models across different demographic groups
- **Fairness Assessment**: Evaluate model fairness using various metrics and statistical tests
- **GDPR Compliance**: Check for data privacy and consent requirements
- **AI Act Compliance**: Validate compliance with EU AI Act regulations
- **Comprehensive Reporting**: Generate detailed audit reports with visualizations
- **Multiple Model Support**: Works with scikit-learn, TensorFlow, and PyTorch models
## Installation
### From PyPI (Recommended)
```bash
pip install whis-ethical-ai
```
### From Source
```bash
git clone https://github.com/whis-19/ethical-ai.git
cd ethical-ai
pip install -e .
```
## Quick Start
### Basic Usage
```python
from ethical_ai_validator import EthicalAIValidator
import numpy as np

# Create sample data
predictions = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 1])
true_labels = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 1])
protected_attributes = {
    'gender': ['male', 'female', 'male', 'female', 'male',
               'female', 'male', 'female', 'male', 'female'],
    'race': ['white', 'black', 'white', 'black', 'white',
             'black', 'white', 'black', 'white', 'black']
}
# Initialize validator
validator = EthicalAIValidator()
# Detect bias
bias_report = validator.audit_bias(predictions, true_labels, protected_attributes)
print("Bias Report:")
print(bias_report)
# Calculate fairness metrics
fairness_metrics = validator.calculate_fairness_metrics(predictions, protected_attributes)
print("Fairness Metrics:")
print(fairness_metrics)
```
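The structure of the returned reports is defined by the package, but the kind of quantity a group-fairness metric captures can be illustrated directly. The sketch below uses plain NumPy (no `eai` API) to compute the positive-prediction rate per gender group and their gap, commonly called the statistical parity difference:

```python
import numpy as np

predictions = np.array([1, 0, 1, 0, 1, 0, 1, 1, 0, 1])
gender = np.array(['male', 'female', 'male', 'female', 'male',
                   'female', 'male', 'female', 'male', 'female'])

# P(prediction = 1 | group) for each value of the protected attribute
rates = {g: float(predictions[gender == g].mean()) for g in np.unique(gender)}

# Statistical parity difference: gap in positive-prediction rates
parity_diff = abs(rates['male'] - rates['female'])
print(rates)        # {'female': 0.4, 'male': 0.8}
print(parity_diff)
```

A value near zero indicates that both groups receive positive predictions at similar rates; thresholds such as the `bias_threshold` shown later are typically compared against gaps like this one.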
### Advanced Usage
```python
# Generate compliance report
metadata = {'model_name': 'RandomForest', 'version': '1.0'}
audit_criteria = {'bias_threshold': 0.1, 'fairness_threshold': 0.8}
report_path = validator.generate_compliance_report(metadata, audit_criteria)
# Real-time monitoring
predictions_stream = [
    np.random.choice([0, 1], size=1000),
    np.random.choice([0, 1], size=1000)
]
alerts = validator.monitor_realtime(predictions_stream)
# Suggest mitigations
mitigations = validator.suggest_mitigations(bias_report)
print("Mitigation Suggestions:")
print(mitigations)
```
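The alerting rule inside `monitor_realtime` belongs to the package, but the general idea of batch-level drift monitoring can be sketched without it. Here the baseline rate and threshold are illustrative assumptions, not the package's actual defaults:

```python
import numpy as np

# Two incoming batches of binary predictions (deterministic for illustration)
predictions_stream = [
    np.array([1, 1, 1, 1, 0, 1, 1, 1]),  # 87.5% positive
    np.array([1, 0, 1, 0, 1, 0, 1, 0]),  # 50% positive
]

baseline_rate = 0.5   # expected positive-prediction rate (assumption)
threshold = 0.05      # allowed absolute deviation before alerting (assumption)

alerts = []
for i, batch in enumerate(predictions_stream):
    rate = float(batch.mean())
    if abs(rate - baseline_rate) > threshold:
        alerts.append({'batch': i, 'positive_rate': rate})

print(alerts)  # only the first batch deviates enough to trigger an alert
```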
### Using Convenience Functions
```python
from ethical_ai_validator import (
    audit_bias, calculate_fairness_metrics, generate_compliance_report,
    monitor_realtime, suggest_mitigations
)
# Direct function calls
bias_report = audit_bias(predictions, true_labels, protected_attributes)
fairness_metrics = calculate_fairness_metrics(predictions, protected_attributes)
report_path = generate_compliance_report(metadata, audit_criteria)
alerts = monitor_realtime(predictions_stream)
mitigations = suggest_mitigations(bias_report)
```
## Development Setup
### Prerequisites
- Python 3.7 or higher (the pinned dependencies, e.g. scikit-learn >= 1.0, no longer support Python 2)
- Git
- pip
### Step-by-Step Setup
1. **Clone the repository**
```bash
git clone https://github.com/whis-19/ethical-ai.git
cd ethical-ai
```
2. **Create a virtual environment**
```bash
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```
3. **Install dependencies**
```bash
pip install -r requirements.txt
pip install -e .[dev]
```
4. **Run tests**
```bash
pytest
```
5. **Check code coverage**
```bash
pytest --cov=ethical_ai_validator --cov-report=html
```
### VS Code Setup
1. Install recommended extensions:
- Python (ms-python.python)
- Pylance (ms-python.vscode-pylance)
- Python Test Explorer (littlefoxteam.vscode-python-test-adapter)
2. Configure settings in `.vscode/settings.json` (already included)
## Project Structure
```
ethical-ai/
├── src/
│   └── ethical_ai_validator/
│       ├── __init__.py
│       ├── core/
│       ├── validators/
│       ├── metrics/
│       └── reporting/
├── tests/
├── docs/
├── requirements.txt
├── pyproject.toml
├── README.md
└── SETUP.md
```
## Testing
Run the test suite:
```bash
# Run all tests
pytest
# Run with coverage
pytest --cov=ethical_ai_validator --cov-report=html
# Run specific test categories
pytest -m unit
pytest -m integration
pytest -m "not slow"
```
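The `-m unit` / `-m integration` selectors above only work if the markers are registered with pytest. If this repository's `pyproject.toml` does not already declare them, a typical registration (illustrative, not necessarily this project's exact configuration) looks like:

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests that exercise multiple components together",
    "slow: long-running tests (deselect with -m 'not slow')",
]
```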
## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Code Style
This project uses:
- **Black** for code formatting
- **Flake8** for linting
- **MyPy** for type checking
- **Pre-commit** hooks for automated checks
Run pre-commit hooks:
```bash
pre-commit install
pre-commit run --all-files
```
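A matching `.pre-commit-config.yaml` could wire Black, Flake8, and MyPy together; the `rev` pins below are placeholders, not the versions this repository actually uses:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0
    hooks:
      - id: mypy
```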
## Documentation
- [User Guide](https://whis-19.github.io/ethical-ai/)
- [Contributing Guide](CONTRIBUTING.md)
## License
This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for complete details.
### License Summary
**✅ Permitted:**
- Commercial use
- Modification and distribution
- Private and public use
- Patent use
**❌ Limitations:**
- No warranty provided
- No liability for damages
**📋 Requirements:**
- Include copyright notice
- Include license text
- State any modifications
### License Compatibility
The MIT License is compatible with:
- GPL (v2 and v3)
- Apache License 2.0
- BSD Licenses
- Most other open-source licenses
This makes it suitable for use in both open-source and commercial projects.
### Third-Party Dependencies
All runtime dependencies are released under BSD-style licenses compatible with MIT:
- numpy, pandas, scikit-learn, reportlab
For detailed license information, see the [LICENSE](LICENSE) file.
## Support
- **Issues**: [GitHub Issues](https://github.com/whis-19/ethical-ai/issues)
- **Discussions**: [GitHub Discussions](https://github.com/whis-19/ethical-ai/discussions)
- **Email**: muhammadabdullahinbox@gmail.com
## Acknowledgments
- Inspired by the need for ethical AI development
- Built with support from the open-source community
- Special thanks to contributors and maintainers ([WHIS-19](https://github.com/whis-19/))