This document explains how to perform encryption, FHE execution, and decryption with the Concrete ML API, either through a single function call or through separate calls for each step.
All Concrete ML built-in models have a single predict method that performs the encryption, FHE execution, and decryption with only one function call.
The following example shows how to create a synthetic data-set and how to use it to train a LogisticRegression model from Concrete ML.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression
import numpy

# Create a synthetic data-set for a classification problem
x, y = make_classification(n_samples=100, class_sep=2, n_features=3, n_informative=3, n_redundant=0, random_state=42)

# Split the data-set into a train and test set
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Instantiate and train the model
model = LogisticRegression()
model.fit(x_train, y_train)

# Run the predictions in the clear (optional)
y_pred_clear = model.predict(x_test)

# Compile the model on a representative set
fhe_circuit = model.compile(x_train)
Concrete ML models follow the same API as scikit-learn models, transparently performing the steps related to encryption for convenience.
# Predict in FHE
y_pred_fhe = model.predict(x_test, fhe="execute")
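The fhe argument selects how the prediction is performed. Besides "execute", which runs on encrypted data, built-in models also accept "simulate" and "disable"; the snippet below is a minimal sketch of how these modes would be selected on the same fitted model (the exact set of accepted values may vary with the installed Concrete ML version).

# FHE execution on encrypted data (as above)
y_pred_fhe = model.predict(x_test, fhe="execute")

# Simulation: runs the quantized circuit without encryption, useful for fast iteration
y_pred_simulate = model.predict(x_test, fhe="simulate")

# Run the quantized model in the clear, without any FHE-related steps
y_pred_clear_quantized = model.predict(x_test, fhe="disable")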
For this LogisticRegression model, as with scikit-learn, it is also possible to predict the logits or the class probabilities by using the decision_function or predict_proba methods respectively, instead of predict.
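As a sketch, assuming the same fitted model as above and that these methods accept the fhe argument in the same way as predict, the logits and probabilities would be obtained as follows:

# Class probabilities (sigmoid applied to the logits for this binary task)
y_proba_fhe = model.predict_proba(x_test, fhe="execute")

# Raw logits, before the sigmoid / softmax is applied
y_logits_fhe = model.decision_function(x_test, fhe="execute")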
Using separate functions
Alternatively, you can execute key generation, quantization, encryption, FHE execution and decryption separately.
# Generate the keys (set force to True in order to generate new keys at each execution)
fhe_circuit.keygen(force=True)

y_pred_fhe_step = []

for f_input in x_test:
    # Quantize an input (float)
    q_input = model.quantize_input([f_input])

    # Encrypt the input
    q_input_enc = fhe_circuit.encrypt(q_input)

    # Execute the linear product in FHE
    q_y_enc = fhe_circuit.run(q_input_enc)

    # Decrypt the result (integer)
    q_y = fhe_circuit.decrypt(q_y_enc)

    # De-quantize the result
    y = model.dequantize_output(q_y)

    # Apply either the sigmoid if it is a binary classification task, which is the case in this
    # example, or a softmax function in order to get the probabilities (in the clear)
    y_proba = model.post_processing(y)

    # Since this model does classification, apply the argmax to get the class predictions (in the clear)
    # Note that regression models won't need the following line
    y_class = numpy.argmax(y_proba, axis=1)

    y_pred_fhe_step += list(y_class)

y_pred_fhe_step = numpy.array(y_pred_fhe_step)

print("Predictions in clear:", y_pred_clear)
print("Predictions in FHE  :", y_pred_fhe_step)
print(f"Similarity: {int((y_pred_fhe_step == y_pred_clear).mean()*100)}%")
Custom models
For custom models, the API to execute inference in FHE or simulation is as follows:
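For illustration, here is a minimal sketch of compiling a small PyTorch model with compile_torch_model and running it in simulation or FHE. The TinyMLP architecture, the calibration data, and the n_bits setting below are assumptions made for this example, not part of the original text.

import torch
import numpy
from concrete.ml.torch.compile import compile_torch_model

# A small example network (hypothetical, for illustration only)
class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(3, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

torch_model = TinyMLP()

# Representative (calibration) inputs used for quantization and compilation
x_calibration = numpy.random.uniform(-1, 1, size=(100, 3))

# Compile the model into an FHE-friendly quantized module
quantized_module = compile_torch_model(torch_model, x_calibration, n_bits=6)

# Run inference in simulation (fast, no encryption) or in FHE on encrypted data
y_simulated = quantized_module.forward(x_calibration[:2], fhe="simulate")
y_fhe = quantized_module.forward(x_calibration[:2], fhe="execute")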