Concrete-ML provides several of the most popular classification and regression tree models that can be found in Scikit-learn:
(Table: the available Concrete-ML tree models and their scikit-learn equivalents.)
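These models share the familiar scikit-learn interface, with an added n_bits quantization parameter and a compile step before FHE execution. Below is a minimal sketch of that common workflow; the DecisionTreeClassifier import path and the synthetic data-set are illustrative assumptions, while the fully documented workflow is the XGBClassifier example further down.

# Minimal sketch (not from the documentation): a Concrete-ML tree model with the
# usual scikit-learn fit/predict API. The import path is an assumption.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn.tree import DecisionTreeClassifier

# Build a small synthetic classification task
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# n_bits controls the quantization bit-width, as for the other tree models
model = DecisionTreeClassifier(n_bits=6)
model.fit(X_train, y_train)

# Compile on representative clear data, then run a single encrypted inference
model.compile(X_train)
y_pred_fhe = model.predict(X_test[:1], execute_in_fhe=True)
print(f"FHE prediction: {y_pred_fhe[0]}")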
In addition to support for scikit-learn, Concrete-ML also supports XGBoost's XGBClassifier:
(Table: the Concrete-ML XGBClassifier and its XGBoost equivalent.)
Example
Here's an example of how to use this model in FHE on a popular data-set using some of scikit-learn's pre-processing tools. A more complete example can be found in the XGBClassifier notebook.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn.xgb import XGBClassifier

# Get data-set and split into train and test
X, y = load_breast_cancer(return_X_y=True)

# Split the train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Define our model
model = XGBClassifier(n_jobs=1, n_bits=3)

# Define the pipeline
# We will normalize the data and apply a PCA before fitting the model
pipeline = Pipeline(
    [("standard_scaler", StandardScaler()), ("pca", PCA(random_state=0)), ("model", model)]
)

# Define the parameters to tune
param_grid = {
    "pca__n_components": [2, 5, 10, 15],
    "model__max_depth": [2, 3, 5],
    "model__n_estimators": [5, 10, 20],
}

# Instantiate the grid search with 5-fold cross validation on all available cores
grid = GridSearchCV(pipeline, param_grid, cv=5, n_jobs=-1, scoring="accuracy")

# Launch the grid search
grid.fit(X_train, y_train)

# Print the best parameters found
print(f"Best parameters found: {grid.best_params_}")

# Output:
# Best parameters found: {'model__max_depth': 5, 'model__n_estimators': 10, 'pca__n_components': 5}

# Currently we only focus on model inference in FHE
# The data transformation will be done in clear (client machine)
# while the model inference will be done in FHE on a server.
# The pipeline can be split into 2 parts:
#   1. data transformation
#   2. estimator
best_pipeline = grid.best_estimator_
data_transformation_pipeline = best_pipeline[:-1]
model = best_pipeline[-1]

# Transform the train and test sets
X_train_transformed = data_transformation_pipeline.transform(X_train)
X_test_transformed = data_transformation_pipeline.transform(X_test)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test_transformed)
print(f"Test accuracy in clear: {(y_pred_clear == y_test).mean():0.2f}")

# In the output, the test accuracy in clear should be > 0.9

# Compile the model to FHE
model.compile(X_train_transformed)

# Perform the inference in FHE
# Warning: this will take a while. It is recommended to run this with a very small batch of
# examples first (e.g. N_TEST_FHE = 1)
# Note that here the encryption and decryption are done behind the scenes.
N_TEST_FHE = 1
y_pred_fhe = model.predict(X_test_transformed[:N_TEST_FHE], execute_in_fhe=True)

# Assert that FHE predictions are the same as the clear predictions
print(
    f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
    f"examples over {N_TEST_FHE} have a FHE inference equal to the clear inference."
)

# Output:
# 1 examples over 1 have a FHE inference equal to the clear inference
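Since encrypted inference is much slower than clear inference, it can be instructive to time both. The sketch below reuses the model and X_test_transformed variables exactly as defined in the example above:

# Sketch: compare the latency of clear and FHE inference for a single example,
# reusing `model` and `X_test_transformed` from the example above.
import time

start = time.time()
model.predict(X_test_transformed[:1])
print(f"Clear inference took {time.time() - start:.4f} seconds")

start = time.time()
model.predict(X_test_transformed[:1], execute_in_fhe=True)
print(f"FHE inference took {time.time() - start:.2f} seconds")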
In a similar example, the decision boundaries of the Concrete-ML model can be plotted and then compared to the results of the classical XGBoost model executed in the clear. A 6-bit model is shown in order to illustrate the impact of quantization on classification. Similar plots can be found in the Classifier Comparison notebook.
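A comparison of this kind can be sketched as follows; the 2D synthetic data-set, the hyper-parameters and the plotting code are illustrative choices and are not taken from the notebook:

# Sketch: decision boundaries of a quantized Concrete-ML XGBClassifier next to
# the floating point XGBoost model, on an illustrative 2D data-set.
import numpy as np
import matplotlib.pyplot as plt
import xgboost
from sklearn.datasets import make_moons
from concrete.ml.sklearn.xgb import XGBClassifier

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Quantized Concrete-ML model (predictions run in the clear here, no FHE needed for plotting)
quantized_model = XGBClassifier(n_bits=6, n_estimators=10, max_depth=3)
quantized_model.fit(X, y)

# Floating point XGBoost baseline
clear_model = xgboost.XGBClassifier(n_estimators=10, max_depth=3)
clear_model.fit(X, y)

# Evaluate both models on a grid of points to draw the decision boundaries
xx, yy = np.meshgrid(np.linspace(-2, 3, 100), np.linspace(-2, 2, 100))
grid = np.c_[xx.ravel(), yy.ravel()]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, model, title in zip(
    axes, [clear_model, quantized_model], ["XGBoost (floating point)", "Concrete-ML (6 bits)"]
):
    zz = model.predict(grid).reshape(xx.shape)
    ax.contourf(xx, yy, zz, alpha=0.3)
    ax.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k", s=15)
    ax.set_title(title)
plt.show()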
Quantization parameters
The graph above shows that, when a sufficiently high bit-width is used, quantization has little impact on the decision boundaries of the Concrete-ML FHE decision tree models. Since quantization is performed individually on each input feature, its impact is strongly reduced, and FHE tree-based models therefore reach accuracy similar to that of their floating point equivalents. Using 6 bits for quantization allows the Concrete-ML model to reach or exceed the floating point accuracy. The number of bits used for quantization can be adjusted through the n_bits parameter.
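For instance, with the XGBClassifier used above, the bit-width is simply passed at construction time:

# The quantization bit-width is chosen when the model is constructed
from concrete.ml.sklearn.xgb import XGBClassifier

# 6 bits per input feature: close to floating point accuracy in practice
model_6b = XGBClassifier(n_bits=6, n_estimators=10, max_depth=3)

# 3 bits: a smaller, faster FHE circuit, but coarser decision boundaries
model_3b = XGBClassifier(n_bits=3, n_estimators=10, max_depth=3)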
When n_bits is set low, the quantization process may create artifacts that lead to a decrease in accuracy, but the FHE execution time also decreases. It is thus possible to adjust the accuracy/speed trade-off, and some of the lost accuracy can be recovered by increasing n_estimators.
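One way to explore this trade-off is to sweep n_bits and n_estimators and measure the accuracy of the quantized model, run here in the clear so the sweep stays fast; the grid of values below is an illustrative choice:

# Sketch: accuracy sweep over n_bits and n_estimators (quantized inference in the clear).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn.xgb import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n_bits in [3, 4, 5, 6]:
    for n_estimators in [10, 20, 50]:
        model = XGBClassifier(n_bits=n_bits, n_estimators=n_estimators, max_depth=3)
        model.fit(X_train, y_train)
        accuracy = (model.predict(X_test) == y_test).mean()
        print(f"n_bits={n_bits}, n_estimators={n_estimators}: accuracy={accuracy:.3f}")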
The following graph shows that using 5-6 bits of quantization is usually sufficient to reach the performance of a non-quantized XGBoost model on floating point data. The metrics plotted are accuracy and F1-score on the spambase data-set.
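A comparison of this kind can be reproduced along the following lines; the breast cancer data-set and the hyper-parameters are illustrative stand-ins for the spambase setup used in the plot:

# Sketch: accuracy and F1-score of a 6-bit Concrete-ML model vs. floating point XGBoost.
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn.xgb import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clear_model = xgboost.XGBClassifier(n_estimators=20, max_depth=3)
clear_model.fit(X_train, y_train)

quantized_model = XGBClassifier(n_bits=6, n_estimators=20, max_depth=3)
quantized_model.fit(X_train, y_train)

for name, model in [("XGBoost (float)", clear_model), ("Concrete-ML (6 bits)", quantized_model)]:
    y_pred = model.predict(X_test)
    print(
        f"{name}: accuracy={accuracy_score(y_test, y_pred):.3f}, "
        f"F1-score={f1_score(y_test, y_pred):.3f}"
    )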