Linear Models

Concrete-ML provides several of the most popular linear models for regression and classification that can be found in scikit-learn.

Using these models in FHE is very similar to using scikit-learn's API, making it easy for data scientists who are familiar with this framework to get started with Concrete-ML.

Models are also compatible with some of scikit-learn's main workflows, such as Pipeline() or GridSearchCV().
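As a minimal sketch of that compatibility, the snippet below runs a GridSearchCV over a Pipeline using scikit-learn's own LogisticRegression as a stand-in; with Concrete-ML, the model class (and its extra `n_bits` parameter) could be swapped into the same param_grid. The dataset and hyperparameter values are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative dataset
X, y = make_classification(n_samples=100, random_state=42)

# Standard scikit-learn pipeline: scaling followed by a linear classifier
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("clf", LogisticRegression()),
])

# Grid-search over the regularization strength of the classifier step
grid = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)
```

The same `fit`/`predict` interface is what lets these scikit-learn tools accept Concrete-ML estimators.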


Here's an example of how to use a LogisticRegression model in FHE on a simple dataset. A more complete example can be found in the LogisticRegression notebook.

import numpy
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification
X, y = make_classification(n_samples=100, n_features=20, random_state=42)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# Instantiate the model
model = LogisticRegression(n_bits=2)

# Fit the model
model.fit(X_train, y_train)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test)

# Compile the model on a representative input set
model.compile(X_train)

# Number of test samples to run in FHE
N_TEST_FHE = 3

# Perform the inference in FHE
# Warning: this will take a while. It is recommended to run this with a very small batch of
# examples first (e.g., N_TEST_FHE = 1)
# Note that here the encryption and decryption are done behind the scenes.
y_pred_fhe = model.predict(X_test[:N_TEST_FHE], execute_in_fhe=True)

# Assert that FHE predictions are the same as the clear predictions
print(f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
      f"examples over {N_TEST_FHE} have an FHE inference equal to the clear inference.")

# Output:
#  3 examples over 3 have an FHE inference equal to the clear inference

We can then plot how the model classifies the inputs and compare the results with a scikit-learn model executed in the clear. The complete code can be found in the LogisticRegression notebook.
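A matplotlib-free sketch of how such a classification plot is built: train a model on a 2D dataset, evaluate it over a grid of the input plane, and (with matplotlib) render the resulting label grid as decision regions. Scikit-learn's LogisticRegression stands in for the models here; dataset parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train on a 2D dataset so the decision boundary can be rasterized
X, y = make_classification(n_samples=100, n_features=2, n_redundant=0,
                           n_informative=2, random_state=42)
model = LogisticRegression().fit(X, y)

# Predict the label at every point of a 50x50 grid covering the input plane;
# with matplotlib, plt.contourf(xx, yy, zz) would display the decision regions
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 50),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 50))
zz = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
print(zz.shape)
```

Running the same grid through a quantized FHE model would show the step-like boundaries discussed below.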

We can clearly observe the impact of quantization on the decision boundaries of the FHE model: the initially straight boundaries become step-like segments. However, this does not change the overall score, as both models reach the same accuracy (90%).

In fact, the quantization process may sometimes create artifacts that lead to a decrease in performance. Still, the impact of these artifacts is often minor for linear models, allowing FHE models to reach scores similar to their clear equivalents.
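To illustrate where these artifacts come from, here is a toy uniform quantizer (not Concrete-ML code): mapping values onto 2**n_bits levels perturbs them, and the perturbation shrinks quickly as the bit width grows. The weight values are made up for the example.

```python
import numpy as np

def quantize(values, n_bits):
    """Uniformly quantize an array onto 2**n_bits levels over its range."""
    v_min, v_max = values.min(), values.max()
    scale = (v_max - v_min) / (2**n_bits - 1)
    return np.round((values - v_min) / scale) * scale + v_min

# Illustrative "model weights"
weights = np.array([0.83, -1.42, 0.27, 2.05])

# The maximum quantization error is large at 2 bits, tiny at 8 bits
for n_bits in (2, 8):
    error = np.abs(quantize(weights, n_bits) - weights).max()
    print(f"{n_bits} bits: max error = {error:.4f}")
```

With few bits (as in the `n_bits=2` example above), the quantized decision function can differ visibly from the clear one; linear models tolerate this well because the error stays bounded and small relative to the decision margin.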
