# candlefl

- Version: 0.2.2
- Summary: A Python library for rapid prototyping, experimenting, and logging of federated learning using state-of-the-art models and datasets. Built using PyTorch and PyTorch Lightning.
- Home page: https://candlefl.readthedocs.io/en/latest/
- Author: slothrabbit77
- Requires Python: >=3.8, <4.0
- License: GNU General Public License v3
- Keywords: federated-learning, pytorch, pytorch-lightning, candlefl
- Uploaded: 2024-03-26

## Table of Contents

- [Features](#features)
- [Installation](#installation)
- [Examples and Usage](#examples-and-usage)
- [Available Models](#available-models)
- [Available Datasets](#available-datasets)
- [Contributing](#contributing)

## Features

- Python 3.8+ support. Built using ```torch-1.10.1```, ```torchvision-0.11.2```, and ```pytorch-lightning-1.5.7```.
- Customizable implementations of state-of-the-art deep learning [models](#available-models), which can be trained in federated or non-federated settings.
- Supports finetuning of pre-trained deep learning models, allowing for faster training via transfer learning.
- PyTorch LightningDataModule wrappers for the most commonly used [datasets](#available-datasets), reducing boilerplate before experiments.
- Datamodules and models are built bottom-up, providing clean abstractions while still allowing customization.
- Provides implementations of federated learning (FL) samplers, aggregators, and wrappers to prototype FL experiments on the go.
- Backwards compatible with PyTorch Lightning's LightningDataModule, LightningModule, loggers, and DevOps tools.
- More details about the examples and usage can be found [below](#examples-and-usage).

## Installation
### Stable Release
As of now, ```candlefl``` is available on PyPI and can be installed using the following command in your terminal:
```
$ pip install candlefl
```
This is the preferred method to install ```candlefl```, as it always fetches the most recent stable release.
If you don't have [pip](https://pip.pypa.io/en/stable/) installed, this [Python installation guide](http://docs.python-guide.org/en/latest/starting/installation/) can guide you through the process.
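
To confirm that the package was installed correctly, you can query pip for its metadata:
```
$ pip show candlefl
```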

## Examples and Usage
Although ```candlefl``` is primarily built for quick prototyping of federated learning experiments, its models, datasets, and abstractions can also speed up non-federated learning experiments. This section walks through examples and usage in both settings.

### Non-Federated Learning
At a high level, the following steps are needed to run a non-federated learning experiment. We use the ```EMNIST (MNIST)``` dataset and ```densenet121``` for this example.

1. Import the relevant modules.
	```python
	from candlefl.datamodules.emnist import EMNISTDataModule
	from candlefl.models.wrapper.emnist import MNISTEMNIST
	```

	```python
	import pytorch_lightning as pl
	from pytorch_lightning.loggers import TensorBoardLogger
	from pytorch_lightning.callbacks import (
		ModelCheckpoint,
		LearningRateMonitor,
		DeviceStatsMonitor,
		ModelSummary,
		ProgressBar,
		...
	)
	```
	For more details, view the full list of PyTorch Lightning [callbacks](https://pytorch-lightning.readthedocs.io/en/stable/extensions/callbacks.html#callback) and [loggers](https://pytorch-lightning.readthedocs.io/en/latest/common/loggers.html#loggers) on the official website.
2. Set up the PyTorch Lightning trainer.
	```python
	trainer = pl.Trainer(
		...
		logger=[
			TensorBoardLogger(
				name=experiment_name,
				save_dir=os.path.join(checkpoint_save_path, experiment_name),
			)
		],
		callbacks=[
			ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"),
			LearningRateMonitor("epoch"),
			DeviceStatsMonitor(),
			ModelSummary(),
			ProgressBar(),
		],
		...
	)
	```
	More details about the PyTorch Lightning [Trainer API](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#) can be found on their official website.
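	Step 6 below reads the experiment artifacts from the trainer's ```default_root_dir```, so it is worth setting it explicitly in the ```Trainer``` call above; a minimal sketch, reusing the ```checkpoint_save_path``` variable assumed by the logger:
	```python
	trainer = pl.Trainer(
		default_root_dir=checkpoint_save_path,  # checkpoints and logs land here
		# ... logger and callbacks as above ...
	)
	```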

3. Prepare the dataset using the wrappers provided by ```candlefl.datamodules```.
	```python
	datamodule = EMNISTDataModule(dataset_name="mnist")
	datamodule.prepare_data()
	datamodule.setup()
	```

4. Initialize the model using the wrappers provided by ```candlefl.models.wrapper```.
	```python
	# check if the model can be loaded from a given checkpoint
	if (checkpoint_load_path) and os.path.isfile(checkpoint_load_path):
		model = MNISTEMNIST(
			"densenet121", "adam", {"lr": 0.001}
			).load_from_checkpoint(checkpoint_load_path)
	else:
		pl.seed_everything(42)
		model = MNISTEMNIST("densenet121", "adam", {"lr": 0.001})
		trainer.fit(model, datamodule.train_dataloader(), datamodule.val_dataloader())
	```
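	Since ```load_from_checkpoint``` is a standard ```LightningModule``` classmethod, the checkpoint branch can also be written without constructing a throwaway instance, assuming the wrapper saves its hyperparameters (as Lightning modules typically do):
	```python
	# equivalent load, relying on the hyperparameters stored in the checkpoint
	model = MNISTEMNIST.load_from_checkpoint(checkpoint_load_path)
	```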

5. Collect the results.
	```python
	val_result = trainer.test(
		model, test_dataloaders=datamodule.val_dataloader(), verbose=True
	)
	test_result = trainer.test(
		model, test_dataloaders=datamodule.test_dataloader(), verbose=True
	)
	```
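	```trainer.test``` returns a list with one metrics dictionary per dataloader, so the results can be inspected or logged directly. The exact keys depend on what the model logs, e.g. the ```val_acc``` monitored in Step 2:
	```python
	# each result is a list like [{"val_acc": ..., "val_loss": ...}]
	print(f"validation: {val_result}")
	print(f"test: {test_result}")
	```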

6. The corresponding files for the experiment (model checkpoints and logger metadata) are stored under the ```default_root_dir``` argument given to the PyTorch Lightning ```Trainer``` object in Step 2. For this experiment, we use the [Tensorboard](https://www.tensorflow.org/tensorboard) logger. To view the logs (and the related plots and metrics), go to the ```default_root_dir``` path and find the Tensorboard log files. Upload the files to the Tensorboard Development portal following the instructions [here](https://tensorboard.dev/#get-started). Once the log files are uploaded, a unique URL to your experiment will be generated, which can be shared with ease! An example can be found [here](https://tensorboard.dev/experiment/Q1tw19FySLSjLN6CW5DaUw/).
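	You can also inspect the logs locally without uploading them by pointing TensorBoard at the log directory:
	```
	$ tensorboard --logdir <path-to-default_root_dir>
	```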

7. Note that ```candlefl``` is compatible with all the loggers supported by PyTorch Lightning. More information about the PyTorch Lightning loggers can be found [here](https://pytorch-lightning.readthedocs.io/en/latest/common/loggers.html#loggers).
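	For example, a CSV logger can be swapped in with no other changes; a minimal sketch using PyTorch Lightning's built-in ```CSVLogger```:
	```python
	from pytorch_lightning.loggers import CSVLogger

	trainer = pl.Trainer(
		logger=CSVLogger(save_dir="logs/", name=experiment_name),
	)
	```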


### Federated Learning
At a high level, the following steps are needed to run a federated learning experiment.
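
The snippets in the steps below also assume that the FL utilities and enums have been imported. The exact module paths here are an assumption based on the package layout and may differ; check the ```candlefl``` source for the authoritative locations:
```python
from typing import Dict, List

from torch.utils.data import DataLoader

from candlefl.datamodules.emnist import EMNISTDataModule
from candlefl.models.wrapper.emnist import MNISTEMNIST

# NOTE: the import paths below are assumptions and may not match the package
# layout exactly
from candlefl.compatibility import OPTIMIZERS_TYPE
from candlefl.datamodules.emnist import SUPPORTED_DATASETS_TYPE
from candlefl.models.emnist import EMNIST_MODELS_ENUM
from candlefl.federated import (
    Entrypoint,
    FLParams,
    FedAvgAggregator,
    RandomSampler,
    V1Agent,
)
```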

1. Pick a dataset and use the ```datamodules``` module to create federated data shards with an IID or non-IID distribution.
	```python
	def get_datamodule() -> EMNISTDataModule:
		datamodule: EMNISTDataModule = EMNISTDataModule(
			dataset_name=SUPPORTED_DATASETS_TYPE.MNIST, train_batch_size=10
		)
		datamodule.prepare_data()
		datamodule.setup()
		return datamodule

	# fl_params is the FLParams object constructed in Step 3 below
	agent_data_shard_map = get_datamodule().federated_iid_dataloader(
		num_workers=fl_params.num_agents,
		workers_batch_size=fl_params.local_train_batch_size,
	)
	```
2. Use ```candlefl```'s ```agents``` module and ```models``` module to initialize the global model and the agents, and to distribute models to the agents.
	```python
	def initialize_agents(
		fl_params: FLParams, agent_data_shard_map: Dict[int, DataLoader]
	) -> List[V1Agent]:
		"""Initialize agents."""
		agents = []
		for agent_id in range(fl_params.num_agents):
			agent = V1Agent(
				id=agent_id,
				model=MNISTEMNIST(
					model_name=EMNIST_MODELS_ENUM.MOBILENETV3SMALL,
					optimizer_name=OPTIMIZERS_TYPE.ADAM,
					optimizer_hparams={"lr": 0.001},
					model_hparams={"pre_trained": True, "feature_extract": True},
					fl_hparams=fl_params,
				),
				data_shard=agent_data_shard_map[agent_id],
			)
			agents.append(agent)
		return agents

	global_model = MNISTEMNIST(
		model_name=EMNIST_MODELS_ENUM.MOBILENETV3SMALL,
		optimizer_name=OPTIMIZERS_TYPE.ADAM,
		optimizer_hparams={"lr": 0.001},
		model_hparams={"pre_trained": True, "feature_extract": True},
		fl_hparams=fl_params,
	)

	all_agents = initialize_agents(fl_params, agent_data_shard_map)
	```
3. Initialize an ```FLParams``` object with the desired FL hyperparameters and pass it to the ```Entrypoint``` object, which abstracts away the training.
	```python
	# note: fl_params is referenced by Steps 1 and 2 above, so construct it first
	fl_params = FLParams(
		experiment_name="iid_mnist_fedavg_10_agents_5_sampled_50_epochs_mobilenetv3small_latest",
		num_agents=10,
		global_epochs=10,
		local_epochs=2,
		sampling_ratio=0.5,
	)
	entrypoint = Entrypoint(
		global_model=global_model,
		global_datamodule=get_datamodule(),
		fl_hparams=fl_params,
		agents=all_agents,
		aggregator=FedAvgAggregator(all_agents=all_agents),
		sampler=RandomSampler(all_agents=all_agents),
	)
	entrypoint.run()
	```
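
For intuition, the ```FedAvgAggregator``` used above amounts to a (weighted) average of the agents' model parameters after each round of local training. Below is a minimal illustrative sketch of uniform parameter averaging, not ```candlefl```'s actual implementation:
```python
import torch

def fedavg(state_dicts):
    """Average a list of model state_dicts element-wise (uniform weights)."""
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# The global model would then adopt the averaged weights, e.g.:
# global_model.load_state_dict(fedavg([a.model.state_dict() for a in agents]))
```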


## Available Models
For the initial release, ```candlefl``` only supports state-of-the-art computer vision models. The following table summarizes the available models, their support for pre-training, and whether feature extraction is possible. Note that the models have been tested with all the available datasets, and links to the tests are provided in the next section.

## Available Datasets
The following datasets have been wrapped inside a ```LightningDataModule``` and made available for the initial release of ```candlefl```. To add a new dataset, check the source code in ```candlefl.datamodules```, add tests, and create a PR with the ```Features``` tag.
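
A new datamodule generally follows the standard ```LightningDataModule``` contract (```prepare_data```, ```setup```, and the dataloader hooks). A minimal sketch of the shape such a wrapper takes, using torchvision's FashionMNIST; the class name and defaults are illustrative, not ```candlefl```'s API:
```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms


class FashionMNISTDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str = "./data", batch_size: int = 32):
        super().__init__()
        self.data_dir = data_dir
        self.batch_size = batch_size

    def prepare_data(self):
        # download once; runs on a single process
        datasets.FashionMNIST(self.data_dir, train=True, download=True)
        datasets.FashionMNIST(self.data_dir, train=False, download=True)

    def setup(self, stage=None):
        transform = transforms.ToTensor()
        full = datasets.FashionMNIST(self.data_dir, train=True, transform=transform)
        # FashionMNIST's training split has 60,000 samples
        self.train_set, self.val_set = random_split(full, [55000, 5000])
        self.test_set = datasets.FashionMNIST(self.data_dir, train=False, transform=transform)

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=self.batch_size, shuffle=True)

    def val_dataloader(self):
        return DataLoader(self.val_set, batch_size=self.batch_size)

    def test_dataloader(self):
        return DataLoader(self.test_set, batch_size=self.batch_size)
```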

## Contributing
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

You can contribute in many ways:

### Types of Contributions
#### Report Bugs

If you are reporting a bug, please include:
- Your operating system name and version.
- Any details about your local setup that might be helpful in troubleshooting.
- Detailed steps to reproduce the bug.

#### Fix Bugs
Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it.

#### Implement Features
Look through the GitHub issues for features. Anything tagged with "enhancement", "help wanted", or "feature" is open to whoever wants to implement it.

#### Write Documentation
```candlefl``` could always use more documentation, whether as part of the official candlefl docs, in docstrings, or even on the web in blog posts, articles, and such.

#### Submit Feedback
If you are proposing a feature:
- Explain in detail how it would work.
- Keep the scope as narrow as possible, to make it easier to implement.
- Remember that this is a volunteer-driven project, and that contributions are welcome :)

### Get Started
Ready to contribute? Here's how to set up candlefl for local development.
1. Fork the candlefl repo on GitHub.

2. Clone your fork locally:
	```
	$ git clone git@github.com:<your_username_here>/candlefl.git
	```

3. Install Poetry to manage dependencies and virtual environments from https://python-poetry.org/docs/.

4. Install the project dependencies using:
	```
	$ poetry install
	```

5. To add a new dependency to the project, use:
	```
	$ poetry add <dependency_name>
	```

6. Create a branch for local development:
	```
	$ git checkout -b name-of-your-bugfix-or-feature
	```
	Now you can make your changes locally and maintain them on your own branch.

7. When you're done making changes, check that your changes pass the tests:
	```
	$ poetry run pytest tests
	```
	If you want to run a specific test file, use:
	```
	$ poetry run pytest <path-to-the-file>
	```
	If your changes are not covered by the tests, please add tests.

8. The pre-commit hooks will be run before every commit. If you want to run them manually, use:
	```
	$ pre-commit run --all-files
	```
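	If the hooks don't run on commit, they probably haven't been installed into your local clone yet:
	```
	$ pre-commit install
	```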

9. Commit your changes and push your branch to GitHub:
	```
	$ git add --all
	$ git commit -m "Your detailed description of your changes."
	$ git push origin <name-of-your-bugfix-or-feature>
	```
10. Submit a pull request through the GitHub web interface.
11. Once the pull request has been submitted, the continuous integration pipelines on GitHub Actions will be triggered. Ensure that all of them pass before requesting a review from one of the maintainers.

### Pull Request Guidelines
Before you submit a pull request, check that it meets these guidelines:
1. The pull request should include tests.
	- Try adding new test cases for new features or enhancements and make changes to the CI pipelines accordingly.
	- Modify the existing tests (if required) for the bug fixes.
2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in ```README.md```.
3. The pull request should pass all the existing CI pipelines (GitHub Actions), and new or modified workflows should be added as required.


            
