concrete-ml

Name: concrete-ml
Version: 1.7.0
Home page: https://zama.ai/concrete-ml/
Summary: Concrete ML is an open-source set of tools which aims to simplify the use of fully homomorphic encryption (FHE) for data scientists.
Upload time: 2024-09-30 09:05:42
Author: Zama
Requires Python: <3.12,>=3.8.1
License: BSD-3-Clause-Clear
Keywords: FHE, homomorphic encryption, privacy, security

<p align="center">
<!-- product name logo -->
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/zama-ai/concrete-ml/assets/157474013/5ed658d7-0abd-4444-9063-99d8b76c2602">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/zama-ai/concrete-ml/assets/157474013/7c67e594-5e2c-483e-858f-ce473a36e37f">
  <img width=600 alt="Zama Concrete ML">
</picture>
</p>

<hr>

<p align="center">
  <a href="https://docs.zama.ai/concrete-ml"> 📒 Documentation</a> | <a href="https://zama.ai/community"> 💛 Community support</a> | <a href="https://github.com/zama-ai/awesome-zama"> 📚 FHE resources by Zama</a>
</p>

<p align="center">
  <a href="https://github.com/zama-ai/concrete-ml/releases"><img src="https://img.shields.io/github/v/release/zama-ai/concrete-ml?style=flat-square"></a>
  <a href="LICENSE"><img src="https://img.shields.io/badge/License-BSD--3--Clause--Clear-%23ffb243?style=flat-square"></a>
  <a href="https://github.com/zama-ai/bounty-program"><img src="https://img.shields.io/badge/Contribute-Zama%20Bounty%20Program-%23ffd208?style=flat-square"></a>
  <a href="https://slsa.dev"><img alt="SLSA 3" src="https://slsa.dev/images/gh-badge-level3.svg" /></a>
</p>

## About

### What is Concrete ML

**Concrete ML** is an open-source set of tools for Privacy-Preserving Machine Learning (PPML), built on top of [Concrete](https://github.com/zama-ai/concrete) by [Zama](https://github.com/zama-ai).

It simplifies the use of fully homomorphic encryption (FHE) for data scientists so that they can automatically turn machine learning models into their homomorphic equivalents, and use them without knowledge of cryptography.

Concrete ML is designed with ease of use in mind. Data scientists can use models through APIs that are close to the frameworks they already know, while additional options on those models let them run inference or training on encrypted data with FHE. The Concrete ML model classes are similar to those of scikit-learn, and it is also possible to convert PyTorch models to FHE.
<br></br>

### Main features

- **Built-in models**: Ready-to-use FHE-friendly models with a user interface that is equivalent to their scikit-learn and XGBoost counterparts
- **Custom models**: Concrete ML supports custom models, which can use quantization-aware training. These are developed by the user in PyTorch or Keras/TensorFlow and imported into Concrete ML through ONNX (see the sketch below)
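
As an illustration, here is a minimal sketch of importing a custom PyTorch network with `compile_torch_model`; the tiny MLP, the random calibration set, and `n_bits=6` are illustrative choices, not a recommended configuration:

```python
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

# A tiny illustrative network; real models should be sized with FHE constraints in mind
class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# A representative input-set is used to calibrate quantization
calibration_set = numpy.random.uniform(-1, 1, size=(100, 10))

# Post-training quantization import; models trained with quantization-aware
# training are imported with compile_brevitas_qat_model instead
quantized_module = compile_torch_model(TinyMLP(), calibration_set, n_bits=6)
```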

*Learn more about Concrete ML features in the [documentation](https://docs.zama.ai/concrete-ml).*
<br></br>

### Use cases

By leveraging FHE, Concrete ML can unlock a myriad of new use cases for machine learning, such as enabling secure and private data collaboration, protecting sensitive data while still allowing analysis, and facilitating machine learning on data-sets that are subject to strict data privacy regulations. For instance:

- **Healthcare data analysis**: Improve patient care while maintaining privacy by allowing secure, confidential data sharing between healthcare providers.
- **Financial services**: Facilitate secure financial data analysis for risk management and fraud detection, keeping client information encrypted and safe.
- **Ad campaign tracking**: Create targeted advertising and campaign insights in a post-cookie era, ensuring user privacy through encrypted data analysis.
- **Industries:** Enable predictive maintenance in the cloud while keeping sensitive data confidential, enhancing efficiency and data security.
- **Biometrics:** Build user authentication applications without requiring users to reveal their identities.
- **Government:** Enable governments to create digitized versions of their services without having to trust cloud providers.

*See more use cases in the list of [demos](#demos).*
<br></br>

## Table of Contents

- **[Getting Started](#getting-started)**
  - [Installation](#installation)
  - [A simple example](#a-simple-example)
- **[Resources](#resources)**
  - [Demos](#demos)
  - [Tutorials](#tutorials)
  - [Documentation](#documentation)
- **[Working with Concrete ML](#working-with-concrete-ml)**
  - [Citations](#citations)
  - [Contributing](#contributing)
  - [License](#license)
- **[Support](#support)**
  <br></br>

## Getting Started

### Installation

Depending on your OS, Concrete ML may be installed with Docker or with pip:

|                 OS / HW                 | Available on Docker | Available on pip |
| :-------------------------------------: | :-----------------: | :--------------: |
|                  Linux                  |         Yes         |       Yes        |
|                 Windows                 |         Yes         |        No        |
|       Windows Subsystem for Linux       |         Yes         |       Yes        |
|            macOS 11+ (Intel)            |         Yes         |       Yes        |
| macOS 11+ (Apple Silicon: M1, M2, etc.) |     Coming soon     |       Yes        |

Note: Concrete ML only supports Python `3.8`, `3.9`, `3.10` and `3.11`.
Concrete ML can be installed on Kaggle ([see this question on the community for more details](https://community.zama.ai/t/how-do-we-use-concrete-ml-on-kaggle/332)) and on Google Colab.

#### Docker

To install with Docker, pull the `concrete-ml` image as follows:
`docker pull zamafhe/concrete-ml:latest`
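
Then start the container; an illustrative invocation (the entry point and any volume mounts depend on your setup) is `docker run --rm -it zamafhe/concrete-ml:latest`.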

#### Pip

To install Concrete ML from PyPI, run the following:

```
pip install -U pip wheel setuptools
pip install concrete-ml
```
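
As a quick sanity check of the installation, `python -c "import concrete.ml"` should exit without errors.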

*Find more detailed installation instructions in [this part of the documentation](https://docs.zama.ai/concrete-ml/getting-started/pip_installing)*

<p align="right">
  <a href="#about" > ↑ Back to top </a>
</p>

### A simple example

Here is a simple example of logistic regression, with an API very close to scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

# Let's create a synthetic data-set
x, y = make_classification(n_samples=100, class_sep=2, n_features=30, random_state=42)

# Split the data-set into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)

# Now we train in the clear and quantize the weights
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# We can predict in the clear with the quantized model
y_pred_clear = model.predict(X_test)

# We then compile on a representative set
model.compile(X_train)

# Finally we run the inference on encrypted inputs!
y_pred_fhe = model.predict(X_test, fhe="execute")

print("In clear  :", y_pred_clear)
print("In FHE    :", y_pred_fhe)
print(f"Similarity: {int((y_pred_fhe == y_pred_clear).mean()*100)}%")

# Output:
#   In clear  : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
#   In FHE    : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
#   Similarity: 100%
```
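
Before paying the cost of encrypted execution, the FHE behavior of a compiled model can also be checked quickly with simulation; here is a minimal sketch continuing the example above, assuming the `fhe="simulate"` option of recent Concrete ML versions:

<!--pytest-codeblocks:cont-->

```python
# Simulate FHE execution on the compiled model: fast, no encryption involved
y_pred_simulate = model.predict(X_test, fhe="simulate")

print("Simulated :", y_pred_simulate)
```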

<br></br>
It is also possible to call encryption, model prediction, and decryption functions separately as follows.
Executing these steps separately is equivalent to calling `predict_proba` on the model instance.

<!--pytest-codeblocks:cont-->

```python
# Predict probability for a single example
y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute")

# Quantize an original float input
q_input = model.quantize_input(X_test[[0]])

# Encrypt the input
q_input_enc = model.fhe_circuit.encrypt(q_input)

# Execute the linear product in FHE
q_y_enc = model.fhe_circuit.run(q_input_enc)

# Decrypt the result (integer)
q_y = model.fhe_circuit.decrypt(q_y_enc)

# De-quantize and post-process the result
y0 = model.post_processing(model.dequantize_output(q_y))

print("Probability with `predict_proba`: ", y_proba_fhe)
print("Probability with encrypt/run/decrypt calls: ", y0)
```

*This example is explained in more detail in the [linear model documentation](https://docs.zama.ai/concrete-ml/built-in-models/linear).*

Concrete ML built-in models have APIs that are almost identical to their scikit-learn counterparts. It is also possible to convert PyTorch networks to FHE with the Concrete ML conversion APIs. Please refer to the [linear models](docs/built-in-models/linear.md), [tree-based models](docs/built-in-models/tree.md) and [neural networks](docs/built-in-models/neural-networks.md) documentation for more examples showing the scikit-learn-like API of the built-in models.
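
As an illustration of that scikit-learn-like API, here is a minimal sketch of a tree-based built-in model, reusing the `X_train`/`X_test` split from the example above; the hyper-parameters are illustrative:

<!--pytest-codeblocks:cont-->

```python
# XGBClassifier from Concrete ML mirrors the XGBoost/scikit-learn API,
# with an extra n_bits quantization parameter
from concrete.ml.sklearn import XGBClassifier

tree_model = XGBClassifier(n_bits=6, n_estimators=10, max_depth=3)
tree_model.fit(X_train, y_train)

# Compile on a representative set, then predict on encrypted data
tree_model.compile(X_train)
y_pred_tree = tree_model.predict(X_test, fhe="execute")
```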

<p align="right">
  <a href="#about" > ↑ Back to top </a>
</p>

> \[!Note\]
> **Zama 5-Question Developer Survey**
>
> We want to hear from you! Take 1 minute to share your thoughts and help us enhance our documentation and libraries. 👉 **[Click here](https://www.zama.ai/developer-survey)** to participate.

## Resources

### Demos

#### Live demos on Hugging Face

- [Credit card approval](https://huggingface.co/spaces/zama-fhe/credit_card_approval_prediction): predicting credit card approval on encrypted data, so that sensitive information is never exposed to the parties involved or to the server processing it.
  - Check the code [here](https://huggingface.co/spaces/zama-fhe/credit_card_approval_prediction/tree/main)
- [Sentiment analysis with transformers](https://huggingface.co/blog/sentiment-analysis-fhe): predicting if an encrypted tweet / short message is positive, negative or neutral, using FHE.
  - Check the code [here](https://huggingface.co/spaces/zama-fhe/encrypted_sentiment_analysis/tree/main) and the [blog post](https://huggingface.co/blog/sentiment-analysis-fhe)
- [Health diagnosis](https://huggingface.co/spaces/zama-fhe/encrypted_health_prediction): producing a diagnosis from a patient's symptoms, history, and other health factors, using FHE to preserve the patient's privacy.
  - Check the code [here](https://huggingface.co/spaces/zama-fhe/encrypted_health_prediction/tree/main)
- [Encrypted image filtering](https://huggingface.co/spaces/zama-fhe/encrypted_image_filtering): filtering encrypted images by applying filters such as black-and-white, ridge detection, or your own filter.
  - Check the code [here](https://huggingface.co/spaces/zama-fhe/encrypted_image_filtering/tree/main)

#### Other demos

- [Encrypted Large Language Model](use_case_examples/llm/): converting a user-defined part of a Large Language Model for encrypted text generation. This demo shows the trade-off between quantization and accuracy for text generation and shows how to run the model in FHE.
- [Private inference for federated learned models](use_case_examples/federated_learning/): training a Logistic Regression model privately with federated learning, then importing it into Concrete ML to perform encrypted prediction.
- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): solving the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). Implemented with XGBoost from Concrete ML, this example is a companion to the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning) and was the subject of a blog post on [KDnuggets](https://www.kdnuggets.com/2022/08/machine-learning-encrypted-data.html).
- [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): training a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~4 minutes per image and shows an accuracy of 88.7%.
- [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): a series of three notebooks that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML, and evaluated using FHE simulation. For CIFAR10 and CIFAR100 respectively, our simulations show accuracies of 90.2% and 68.2%.

*If you have built awesome projects using Concrete ML, please let us know and we will be happy to showcase them here!*
<br></br>

### Tutorials

- [\[Video tutorial\] Train a linear classifier on encrypted data using Concrete ML and Fully Homomorphic Encryption (FHE)](https://www.youtube.com/watch?v=QVsZ33jBlq4)
- [\[Video tutorial\] How To Convert a Scikit-learn Model Into Its Homomorphic Equivalent](https://www.zama.ai/post/how-to-convert-a-scikit-learn-model-into-its-homomorphic-equivalent)
- [Linear Regression Over Encrypted Data With Homomorphic Encryption](https://www.zama.ai/post/linear-regression-using-linear-svr-and-concrete-ml-homomorphic-encryption)
- [How to Deploy a Machine Learning Model With Concrete ML](https://www.zama.ai/post/how-to-deploy-machine-learning-models-with-concrete-ml) (see the deployment sketch after this list)
- More [Built-in models tutorials](docs/tutorials/ml_examples.md) and [Deep learning tutorials](docs/tutorials/dl_examples.md)
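
To give a feel for the deployment workflow covered in the tutorial above, here is a minimal sketch of the client/server split with Concrete ML's `FHEModelDev`/`FHEModelClient`/`FHEModelServer` deployment classes; it assumes a `model` already fitted and compiled as in the earlier example, and the directory paths are illustrative:

<!--pytest-codeblocks:cont-->

```python
# Client/server deployment sketch: the server only ever sees encrypted data
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Developer side: save the compiled model's deployment artifacts
FHEModelDev(path_dir="./deployment", model=model).save()

# Client side: generate keys, then quantize, encrypt and serialize an input
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X_test[[0]])

# Server side: load the model and run it on the encrypted input
server = FHEModelServer(path_dir="./deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client side: decrypt and de-quantize the result
decrypted_prediction = client.deserialize_decrypt_dequantize(encrypted_result)
```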

*Explore more useful resources in [Awesome Zama repo](https://github.com/zama-ai/awesome-zama)*
<br></br>

### Documentation

Full, comprehensive documentation is available here: [https://docs.zama.ai/concrete-ml](https://docs.zama.ai/concrete-ml).

<p align="right">
  <a href="#about" > ↑ Back to top </a>
</p>

## Working with Concrete ML

### Citations

To cite Concrete ML in academic papers, please use the following entry:

```text
@Misc{ConcreteML,
  title={Concrete {ML}: a Privacy-Preserving Machine Learning Library using Fully Homomorphic Encryption for Data Scientists},
  author={Zama},
  year={2022},
  note={\url{https://github.com/zama-ai/concrete-ml}},
}
```

### Contributing

To contribute to Concrete ML, please refer to [this section of the documentation](docs/developer/contributing.md).
<br></br>

### License

This software is distributed under the **BSD-3-Clause-Clear** license. Read [this](LICENSE) for more details.

#### FAQ

**Is Zama’s technology free to use?**

> Zama’s libraries are free to use under the BSD 3-Clause Clear license only for development, research, prototyping, and experimentation purposes. However, for any commercial use of Zama's open source code, companies must purchase Zama’s commercial patent license.
>
> All our work is open source and we strive for full transparency about Zama's IP strategy. To know more about what this means for Zama product users, read about how we monetize our open source products in [this blog post](https://www.zama.ai/post/open-source).

**What do I need to do if I want to use Zama’s technology for commercial purposes?**

> To commercially use Zama’s technology you need to be granted Zama’s patent license. Please contact us at hello@zama.ai for more information.

**Do you file IP on your technology?**

> Yes, all of Zama’s technologies are patented.

**Can you customize a solution for my specific use case?**

> We are open to collaborating and advancing the FHE space with our partners. If you have specific needs, please email us at hello@zama.ai.

<p align="right">
  <a href="#about" > ↑ Back to top </a>
</p>

## Support

<a target="_blank" href="https://zama.ai/community-channels">
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/zama-ai/concrete-ml/assets/157474013/86502167-4ea4-49e9-a881-0cf97d141818">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/zama-ai/concrete-ml/assets/157474013/3dcf41e2-1c00-471b-be53-2c804879b8cb">
  <img alt="Support">
</picture>
</a>

🌟 If you find this project helpful or interesting, please consider giving it a star on GitHub! Your support helps to grow the community and motivates further development.

<p align="right">
  <a href="#about" > ↑ Back to top </a>
</p>


            
