⭐️ Star the repo on GitHub | 🗣 Community support forum | 📁 Contribute to the project
Concrete-ML is an open-source, privacy-preserving machine learning inference framework based on fully homomorphic encryption (FHE). It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalents, using familiar APIs from Scikit-learn and PyTorch (see how it looks for linear models, tree-based models, and neural networks).
Fully Homomorphic Encryption (FHE) is an encryption technique that allows computing directly on encrypted data, without needing to decrypt it. With FHE, you can build private-by-design applications without compromising on features. You can learn more about FHE in this introduction or by joining the FHE.org community.
Here is a simple example of classification on encrypted data using logistic regression. More examples can be found here.
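A minimal sketch of such an example, assuming Concrete-ML's scikit-learn-compatible `LogisticRegression`; exact parameter names (such as `n_bits` for quantization precision and `execute_in_fhe` at prediction time) may differ between versions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create a small synthetic dataset and split it into train/test sets
X, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train on plaintext data; n_bits sets the integer precision used after quantization
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Compile the quantized model to its FHE equivalent, using a
# representative input set to calibrate the quantization
model.compile(X_train)

# Run inference on encrypted data: inputs are encrypted, computed on
# homomorphically, and results decrypted, all under the hood
y_pred_fhe = model.predict(X_test, execute_in_fhe=True)
```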
This example shows the typical flow of a Concrete-ML model:
1. The model is trained on unencrypted (plaintext) data using scikit-learn. As FHE operates over integers, Concrete-ML quantizes the model to use only integers during inference.
2. The quantized model is compiled to an FHE equivalent. Under the hood, the model is first converted to a Concrete-Numpy program, then compiled.
3. Inference can then be done on encrypted data. The example above shows encrypted inference during the model-development phase. Alternatively, in a deployed client/server setting, the data is encrypted by the client, processed securely by the server, and then decrypted by the client; a sketch of this flow follows the list.
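As an illustration of that client/server split, here is a rough sketch assuming the `concrete.ml.deployment` helpers (`FHEModelDev`, `FHEModelClient`, `FHEModelServer`); the method names below are indicative and may vary across versions. It reuses the compiled `model` and `X_test` from the example above:

```python
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Development machine: package the compiled model for deployment
FHEModelDev(path_dir="./fhe_model", model=model).save()

# Client side: generate keys and encrypt the input
client = FHEModelClient(path_dir="./fhe_model", key_dir="./keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X_test[:1])

# Server side: compute on the encrypted data without ever seeing the plaintext
server = FHEModelServer(path_dir="./fhe_model")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client side: decrypt and de-quantize the result
y_pred = client.deserialize_decrypt_dequantize(encrypted_result)
```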
To make a model work with FHE, the only constraint is that it must run within the supported precision limitations of Concrete-ML (currently 16-bit integers). Machine learning models therefore have to be quantized, which sometimes costs some accuracy compared to the original model operating on plaintext.
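To measure that accuracy gap, one can compare the quantized model to its plaintext scikit-learn counterpart; a quick sketch, reusing `model`, `X_train`, `y_train`, `X_test`, and `y_test` from the example above:

```python
from sklearn.linear_model import LogisticRegression as SklearnLogisticRegression
from sklearn.metrics import accuracy_score

# Reference model: floating-point, operating on plaintext
float_model = SklearnLogisticRegression()
float_model.fit(X_train, y_train)

# The difference between the two scores is the cost of quantization
print("Float accuracy:    ", accuracy_score(y_test, float_model.predict(X_test)))
print("Quantized accuracy:", accuracy_score(y_test, model.predict(X_test)))
```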
Additionally, Concrete-ML currently supports only FHE inference: training has to be done on unencrypted data, producing a model that is then converted to an FHE equivalent able to perform encrypted inference, i.e., prediction over encrypted data.
Finally, Concrete-ML currently has no support for pre-processing model inputs or post-processing model outputs. Such stages may involve text-to-numerical feature transformation, dimensionality reduction, KNN or clustering, featurization, normalization, and mixing the results of ensemble models.
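Until then, one possible workaround is to run such stages in the clear, outside the FHE computation; a minimal sketch, assuming scikit-learn's `StandardScaler` and the same Concrete-ML API as above:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=100, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Pre-processing runs in the clear, outside of FHE
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

model = LogisticRegression(n_bits=8)
model.fit(X_train_scaled, y_train)
model.compile(X_train_scaled)

# At inference time, the client scales its input in the clear before encryption
X_test_scaled = scaler.transform(X_test)
y_pred = model.predict(X_test_scaled, execute_in_fhe=True)
```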
All of these issues are currently being addressed and significant improvements are expected to be released in the coming months.
Concrete-ML is built on top of Zama's Concrete framework. It uses Concrete-Numpy, which itself uses the Concrete-Compiler and the Concrete-Library. To use these libraries directly, refer to the Concrete-Numpy and Concrete-Framework documentation.
Various tutorials are available for the built-in models and for deep learning. In addition, several standalone demos for use cases can be found in the Demos and Tutorials section.
If you have built awesome projects using Concrete-ML, feel free to let us know and we'll link to your work!
- Support forum: https://community.zama.ai (we answer in less than 24 hours).
- Live discussion on the FHE.org Discord server: https://discord.fhe.org (inside the #concrete channel).
- Do you have a question about Zama? Write to us on Twitter or send an email to hello@zama.ai.