fl-healthcare-client

Name: fl-healthcare-client
Version: 0.2.0
Summary: A self-contained CLI tool for federated learning with healthcare data - works out of the box without external service setup
Upload time: 2025-07-16 10:32:55
Requires Python: >=3.8
License: MIT
Keywords: federated-learning, machine-learning, healthcare, cli, privacy
Requirements: No requirements were recorded.
# Federated Learning Client CLI with Supabase Integration

A comprehensive command-line interface for federated learning, built on scikit-learn's MLPClassifier with Supabase Storage and Firebase Authentication.

## 🚀 Features

- **🤖 Train**: Train MLP models locally on CSV datasets using scikit-learn
- **📊 Evaluate**: Comprehensive model evaluation with metrics and comparisons
- **☁️ Supabase Storage**: Secure model storage and retrieval with signed URLs
- **🔐 Firebase Auth**: Secure authentication for protected operations
- **🔄 Sync**: Download/upload models with automatic fallback to HTTP
- **📈 Compare**: Compare multiple models on the same test dataset
- **🎯 Lightweight**: No PyTorch/TensorFlow dependencies, works in <100MB environments

## Installation

### From PyPI (Recommended)
```bash
pip install fl-healthcare-client
```

### From Source
1. Clone the repository:
```bash
git clone https://github.com/yourusername/federated-learning-client.git
cd federated-learning-client
```

2. Install in development mode:
```bash
pip install -e .
```

## Quick Start

### 1. Train a Model
```bash
fl-client train ./data/sample_diabetes_client1.csv --rounds 10 --model ./models/client1_model.pth
```

### 2. Evaluate the Model
```bash
fl-client evaluate ./models/client1_model.pth ./data/sample_diabetes_test.csv --save
```

### 3. Sync with Server
```bash
fl-client sync https://server.com/global_model.pth --model ./models/global_model.pth
```

### 4. Upload Local Model
```bash
fl-client upload ./models/client1_model.pth https://server.com/upload --round 5
```

## Commands

### `train`
Train an MLP model locally on CSV data.

**Usage:**
```bash
python cli.py train [DATA_PATH] [OPTIONS]
```

**Options:**
- `--model, -m`: Path to save/load model (default: `./models/local_model.pth`)
- `--rounds, -r`: Number of training epochs (default: 10)
- `--batch-size, -b`: Batch size for training (default: 32)
- `--lr`: Learning rate (default: 0.001)
- `--target, -t`: Name of target column (default: last column)
- `--log, -l`: Path to save training log
- `--round-num`: Federated learning round number

**Example:**
```bash
python cli.py train ./data/client1.csv --rounds 15 --batch-size 64 --lr 0.01
```
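For context, here is a minimal sketch of what a local training step like this might do under the hood, assuming pandas, scikit-learn's `MLPClassifier`, and `joblib` for serialization. This is an illustration only; the CLI's actual module layout and save format may differ.

```python
# Hypothetical sketch of a local training round (not the CLI's actual code).
import joblib
import pandas as pd
from sklearn.neural_network import MLPClassifier

def train_local(data_path, model_path, rounds=10, batch_size=32, lr=0.001, target=None):
    df = pd.read_csv(data_path)
    target = target or df.columns[-1]      # default: last column is the label
    X, y = df.drop(columns=[target]), df[target]

    model = MLPClassifier(
        hidden_layer_sizes=(64, 32),
        batch_size=batch_size,
        learning_rate_init=lr,
        max_iter=rounds,                   # assumption: "--rounds" maps to training epochs
    )
    model.fit(X, y)
    joblib.dump(model, model_path)         # persist the fitted model to disk
    return model
```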

### `evaluate`
Evaluate a trained model on test data.

**Usage:**
```bash
python cli.py evaluate [MODEL_PATH] [TEST_DATA_PATH] [OPTIONS]
```

**Options:**
- `--target, -t`: Name of target column
- `--batch-size, -b`: Batch size for evaluation (default: 32)
- `--save, -s`: Save evaluation results to file
- `--output, -o`: Path to save results (default: `./results/evaluation_results.txt`)

**Example:**
```bash
python cli.py evaluate ./models/model.pth ./data/test.csv --save --output ./results/eval.txt
```
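As a rough illustration of the kind of metrics an evaluation step can report, here is a hedged sketch using scikit-learn's metrics API; the CLI's actual output format and metric set may differ.

```python
# Hypothetical evaluation sketch (illustrative only).
import joblib
import pandas as pd
from sklearn.metrics import accuracy_score, classification_report

def evaluate(model_path, test_path, target=None):
    model = joblib.load(model_path)
    df = pd.read_csv(test_path)
    target = target or df.columns[-1]
    X, y = df.drop(columns=[target]), df[target]

    preds = model.predict(X)
    print(f"Accuracy: {accuracy_score(y, preds):.4f}")
    print(classification_report(y, preds))  # precision / recall / F1 per class
```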

### `sync`
Download the latest global model from a server URL.

**Usage:**
```bash
python cli.py sync [URL] [OPTIONS]
```

**Options:**
- `--model, -m`: Local path to save downloaded model (default: `./models/global_model.pth`)
- `--client-id, -c`: Client identifier (default: `client_001`)
- `--timeout`: Download timeout in seconds (default: 30)

**Example:**
```bash
python cli.py sync https://federated-server.com/global_model.pth --client-id client_hospital_1
```
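Conceptually, `sync` boils down to an HTTP download with a timeout. Below is a minimal sketch using the `requests` library; the CLI may instead fetch from Supabase Storage via signed URLs, as noted in the features list.

```python
# Hypothetical download sketch for the sync step.
import os
import requests

def download_global_model(url, model_path="./models/global_model.pth", timeout=30):
    os.makedirs(os.path.dirname(model_path) or ".", exist_ok=True)
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()              # fail loudly on HTTP errors
    with open(model_path, "wb") as f:
        f.write(response.content)
    return model_path
```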

### `upload`
Upload local model weights to a server endpoint.

**Usage:**
```bash
python cli.py upload [MODEL_PATH] [SERVER_URL] [OPTIONS]
```

**Options:**
- `--client-id, -c`: Client identifier (default: `client_001`)
- `--round, -r`: Current federated learning round number
- `--timeout`: Upload timeout in seconds (default: 30)

**Example:**
```bash
python cli.py upload ./models/local_model.pth https://server.com/upload --round 3
```
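Similarly, the upload step can be pictured as a multipart POST of the model file plus client metadata. A hedged sketch with `requests` follows; the form-field names (`model`, `client_id`, `round`) are assumptions for illustration, not the server's documented API.

```python
# Hypothetical upload sketch; form-field names are illustrative assumptions.
import requests

def upload_model(model_path, server_url, client_id="client_001", round_num=None, timeout=30):
    payload = {"client_id": client_id}
    if round_num is not None:
        payload["round"] = str(round_num)
    with open(model_path, "rb") as f:
        response = requests.post(
            server_url,
            files={"model": f},              # model weights as a multipart file field
            data=payload,                    # client metadata as form fields
            timeout=timeout,
        )
    response.raise_for_status()
    return response.status_code
```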

### `full-sync`
Perform a complete synchronization cycle with the federated learning server.

**Usage:**
```bash
python cli.py full-sync [SERVER_URL] [OPTIONS]
```

**Options:**
- `--model, -m`: Local model path (default: `./models/federated_model.pth`)
- `--client-id, -c`: Client identifier (default: `client_001`)
- `--round, -r`: Current round number
- `--upload-after`: Upload local model after downloading global model

**Example:**
```bash
python cli.py full-sync https://federated-server.com --round 5 --upload-after
```

### `compare`
Compare multiple models on the same test dataset.

**Usage:**
```bash
python cli.py compare [MODEL_PATHS...] --test-data [TEST_DATA_PATH] [OPTIONS]
```

**Options:**
- `--test-data, -d`: Path to test CSV file (required)
- `--target, -t`: Name of target column
- `--batch-size, -b`: Batch size for evaluation (default: 32)

**Example:**
```bash
python cli.py compare model1.pth model2.pth model3.pth --test-data ./data/test.csv
```

### `info`
Display information about the federated learning client.

**Usage:**
```bash
python cli.py info
```

## Data Format

The CLI expects CSV files with the following characteristics:

- **Headers**: First row should contain column names
- **Features**: Numerical or categorical features (categorical will be automatically encoded)
- **Target**: Binary classification target (0/1 or categorical labels)
- **Missing Values**: Will be automatically filled with mean values for numerical columns

### Example CSV Format:
```csv
pregnancies,glucose,blood_pressure,skin_thickness,insulin,bmi,diabetes_pedigree,age,outcome
6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1
```
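To make the preprocessing rules above concrete, here is a hedged sketch of how a CSV like this could be prepared, assuming pandas and scikit-learn; the CLI's internal preprocessing may differ in detail.

```python
# Hypothetical preprocessing sketch matching the rules described above.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

def load_dataset(csv_path, target=None):
    df = pd.read_csv(csv_path)               # first row is treated as the header
    target = target or df.columns[-1]        # default target: last column

    # Fill missing numeric values with the column mean.
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())

    # Encode categorical columns (including a categorical target) as integers.
    for col in df.select_dtypes(exclude="number").columns:
        df[col] = LabelEncoder().fit_transform(df[col].astype(str))

    X, y = df.drop(columns=[target]), df[target]
    return X, y
```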

## Model Architecture

The MLP model uses the following architecture:

- **Input Layer**: Matches the number of features in your dataset
- **Hidden Layers**: Configurable (default: [64, 32] neurons)
- **Activation**: ReLU activation functions
- **Regularization**: Dropout (0.2) between layers
- **Output Layer**: 2 neurons for binary classification
- **Initialization**: Xavier uniform weight initialization
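The configurable parts of this architecture map naturally onto scikit-learn's `MLPClassifier`; a hedged configuration sketch is shown below. Note that dropout and explicit Xavier initialization are not `MLPClassifier` parameters, so those points are taken to describe the model design rather than sklearn settings.

```python
# Illustrative sketch: an MLPClassifier configured with the documented hidden sizes.
# Dropout and Xavier initialization are described above but are not MLPClassifier
# parameters, so they are omitted here.
from sklearn.neural_network import MLPClassifier

model = MLPClassifier(
    hidden_layer_sizes=(64, 32),   # default hidden layers from the architecture above
    activation="relu",             # ReLU activations
    max_iter=10,                   # matches the default number of training rounds
)
```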

## Configuration

You can customize default settings by editing `config.json`:

```json
{
  "federated_learning": {
    "server_base_url": "https://your-server.com",
    "client_id": "your_client_id"
  },
  "model": {
    "hidden_sizes": [128, 64, 32],
    "dropout_rate": 0.3
  },
  "training": {
    "default_epochs": 20,
    "default_batch_size": 64,
    "default_learning_rate": 0.001
  }
}
```
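A hedged sketch of how such a `config.json` might be read and merged with built-in defaults is shown below; the CLI's actual loading logic is not documented here, and the default values simply mirror the CLI option defaults listed above.

```python
# Hypothetical config loader; key names follow the example config.json above.
import json
from pathlib import Path

DEFAULTS = {
    "training": {
        "default_epochs": 10,
        "default_batch_size": 32,
        "default_learning_rate": 0.001,
    },
}

def load_config(path="config.json"):
    config = {section: dict(values) for section, values in DEFAULTS.items()}
    if Path(path).exists():
        user_config = json.loads(Path(path).read_text())
        for section, values in user_config.items():
            config.setdefault(section, {}).update(values)  # user values override defaults
    return config
```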

## Logging

Training and evaluation activities are automatically logged to:
- Console output with rich formatting
- Log files (when specified)
- Training history for federated learning rounds
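A minimal sketch of a console-plus-file logging setup of the kind described, using the standard `logging` module; whether the CLI uses this module or the `rich` library for its console formatting is not specified here.

```python
# Hypothetical logging setup: console output plus an optional log file.
import logging

def setup_logging(log_file=None):
    handlers = [logging.StreamHandler()]           # always log to the console
    if log_file:
        handlers.append(logging.FileHandler(log_file))
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        handlers=handlers,
    )
    return logging.getLogger("fl-client")
```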

## File Structure

```
client_cli/
├── cli.py              # Main CLI entry point
├── model.py            # MLP model architecture
├── train.py            # Training logic
├── evaluate.py         # Evaluation logic
├── sync.py             # Synchronization with server
├── requirements.txt    # Python dependencies
├── config.json         # Configuration file
├── data/               # Sample data files
│   ├── sample_diabetes_client1.csv
│   └── sample_diabetes_test.csv
├── models/             # Saved models (created automatically)
├── logs/               # Training logs (created automatically)
└── results/            # Evaluation results (created automatically)
```

## Example Workflow

Here's a complete federated learning workflow:

```bash
# 1. Train local model
python cli.py train ./data/sample_diabetes_client1.csv --rounds 10 --log ./logs/round1.log

# 2. Evaluate local model
python cli.py evaluate ./models/local_model.pth ./data/sample_diabetes_test.csv --save

# 3. Download global model from server
python cli.py sync https://federated-server.com/global_model.pth

# 4. Upload local model weights
python cli.py upload ./models/local_model.pth https://federated-server.com/upload --round 1

# 5. Compare models
python cli.py compare ./models/local_model.pth ./models/global_model.pth --test-data ./data/sample_diabetes_test.csv
```

## Troubleshooting

### Common Issues

1. **Import Errors**: Make sure all dependencies are installed with `pip install -r requirements.txt`

2. **Hardware**: Training runs on the CPU via scikit-learn, so no GPU or CUDA setup is required

3. **File Not Found**: Ensure data files exist and paths are correct

4. **Model Loading Errors**: Check that model files were saved by this CLI and are not corrupted

5. **Network Issues**: For sync operations, ensure server URLs are accessible

### Getting Help

For detailed help on any command:
```bash
python cli.py [COMMAND] --help
```

For general information:
```bash
python cli.py info
```

## License

This federated learning client is designed for educational and research purposes in healthcare ML applications.

            
