What is Concrete ML?

Concrete ML is an open source, privacy-preserving, machine learning inference framework based on Fully Homomorphic Encryption (FHE). It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalent, using familiar APIs from scikit-learn and PyTorch (see how it looks for linear models, tree-based models, and neural networks).

Fully Homomorphic Encryption is an encryption technique that allows computing directly on encrypted data, without needing to decrypt it. With FHE, you can build private-by-design applications without compromising on features. You can learn more about FHE in this introduction or by joining the FHE.org community.

Example usage

Here is a simple example of classification on encrypted data using logistic regression. More examples can be found in the Demos and Tutorials section.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

# Let's create a synthetic data-set
x, y = make_classification(n_samples=100, class_sep=2, n_features=30, random_state=42)

# Split the data-set into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)

# Now we train in the clear and quantize the weights
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Run the predictions in the clear
y_pred_clear = model.predict(X_test)

# We then compile on a representative set
model.compile(X_train)

# Finally we run the inference on encrypted inputs
y_pred_fhe = model.predict(X_test, fhe="execute")

print(f"In clear  : {y_pred_clear}")
print(f"In FHE    : {y_pred_fhe}")
print(f"Similarity: {(y_pred_fhe == y_pred_clear).mean():.1%}")

# Output:
    # In clear  : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
    # In FHE    : [0 0 0 0 1 0 1 0 1 1 0 0 1 0 0 1 1 1 0 0]
    # Similarity: 100.0%

It is also possible to call encryption, model prediction, and decryption functions separately as follows. Executing these steps separately is equivalent to calling predict_proba on the model instance.

# Predict probability for a single example
y_proba_fhe = model.predict_proba(X_test[[0]], fhe="execute")

# Quantize an original float input
q_input = model.quantize_input(X_test[[0]])

# Encrypt the input
q_input_enc = model.fhe_circuit.encrypt(q_input)

# Execute the linear product in FHE 
q_y_enc = model.fhe_circuit.run(q_input_enc)

# Decrypt the result (integer)
q_y = model.fhe_circuit.decrypt(q_y_enc)

# De-quantize and post-process the result
y0 = model.post_processing(model.dequantize_output(q_y))

print("Probability with `predict_proba`: ", y_proba_fhe)
print("Probability with encrypt/run/decrypt calls: ", y0)

This example shows the typical flow of a Concrete ML model:

  • The model is trained on unencrypted (plaintext) data using scikit-learn. As FHE operates over integers, Concrete ML quantizes the model to use only integers during inference.

  • The quantized model is compiled to an FHE equivalent. Under the hood, the model is first converted to a Concrete Python program, then compiled.

  • Inference can then be done on encrypted data. The above example shows encrypted inference in the model-development phase. Alternatively, during deployment in a client/server setting, the data is encrypted by the client, processed securely by the server, and then decrypted by the client.

Current limitations

To make a model work with FHE, the only constraint is to make it run within the supported precision limitations of Concrete ML (currently 16-bit integers). Thus, machine learning models must be quantized, which sometimes leads to a loss of accuracy versus the original model, which operates on plaintext.

Additionally, Concrete ML currently only supports FHE inference. Training has to be done on unencrypted data, producing a model which is then converted to an FHE equivalent that can perform encrypted inference (i.e., prediction over encrypted data).

Finally, there is currently no support for pre-processing model inputs and post-processing model outputs. These processing stages may involve text-to-numerical feature transformation, dimensionality reduction, KNN or clustering, featurization, normalization, and the mixing of results of ensemble models.

These issues are currently being addressed, and significant improvements are expected to be released in the coming months.

Concrete stack

Concrete ML is built on top of Zama's Concrete.

Online demos and tutorials

Various tutorials are available for built-in models and deep learning. Several stand-alone demos for use cases can be found in the Demos and Tutorials section.

If you have built awesome projects using Concrete ML, feel free to let us know and we'll link to your work!

Additional resources

  • Dedicated Concrete ML community support
  • Zama's blog
  • FHE.org community

Support

Support forum: https://community.zama.ai (we answer in less than 24 hours).

Live discussion on the FHE.org Discord server: https://discord.fhe.org (inside the #concrete channel).

Do you have a question about Zama? Write us on Twitter or send us an email at: hello@zama.ai

⭐️ Star the repo on Github
🗣 Community support forum
📁 Contribute to the project

Demos and Tutorials

This section lists several demos that apply Concrete ML to some popular machine learning problems. They show how to build ML models that perform well under FHE constraints, and then how to perform the conversion to FHE.

Simpler tutorials that discuss only model usage and compilation are also available for built-in models and deep learning.

Key Concepts

Concrete ML is built on top of Concrete, which enables NumPy programs to be converted into FHE circuits.

Lifecycle of a Concrete ML model

I. Model development

  1. training: A model is trained using plaintext, non-encrypted, training data.

  2. inference: The compiled model can then be executed on encrypted data, once the proper keys have been generated. The model can also be deployed to a server and used to run private inference on encrypted inputs.

II. Model deployment

  1. client/server deployment: In a client/server setting, the model can be exported in a way that:

    • allows the client to generate keys, encrypt, and decrypt.

    • provides a compiled model that can run on the server to perform inference on encrypted data.

  2. key generation: The data owner (client) needs to generate a set of keys: a private key (to encrypt/decrypt their data and results) and a public evaluation key (for the model's FHE evaluation on the server).

Cryptography concepts

Concrete ML and Concrete are tools that hide away the details of the underlying cryptography scheme, called TFHE. However, some cryptography concepts are still useful when using these two toolkits:

  1. encryption/decryption: These operations transform plaintext (i.e., human-readable information) into ciphertext (i.e., data that contains a form of the original plaintext that is unreadable by a human or computer without the proper key to decrypt it). Encryption takes plaintext and an encryption key and produces ciphertext, while decryption is the inverse operation.

  2. encrypted inference: FHE allows a third party to execute (i.e., run inference or predict) a machine learning model on encrypted data (a ciphertext). The result of the inference is also encrypted and can only be read by the person who receives the decryption key.

  3. key generation: Cryptographic keys need to be generated using random number generators. Their size may be large and key generation may take a long time. However, keys only need to be generated once for each model used by a client.

  4. private key: A private key is a series of bits used within an encryption algorithm for encrypting data so that the corresponding ciphertext appears random.

  5. public evaluation key: A public evaluation key is used to perform homomorphic operations on encrypted data, typically by a server.

  6. guaranteed correctness of encrypted computations: To achieve security, TFHE, the underlying encryption scheme, adds random noise to ciphertexts. This can induce errors during processing of encrypted data, depending on noise parameters. By default, Concrete ML uses parameters that ensure the correctness of the encrypted computation, so there is no need to account for noise parametrization. Therefore, the results on encrypted data will be the same as the results of simulation on clear data.

Model accuracy considerations under FHE constraints

To respect FHE constraints, all numerical programs that include non-linear operations over encrypted data must have all inputs, constants, and intermediate values represented with integers of a maximum of 16 bits.

Tree-based Models

Concrete ML provides tree-based classification and regression models with the same interface as their scikit-learn and XGBoost counterparts.

As increasing the maximum depth parameter of decision trees and tree-ensemble models strongly increases the number of nodes in the trees, we recommend using the XGBoost models, which achieve better performance with lower depth.

Example

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn.xgb import XGBClassifier


# Get data-set and split into train and test
X, y = load_breast_cancer(return_X_y=True)

# Split the train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Define our model
model = XGBClassifier(n_jobs=1, n_bits=3)

# Define the pipeline
# We normalize the data and apply a PCA before fitting the model
pipeline = Pipeline(
    [("standard_scaler", StandardScaler()), ("pca", PCA(random_state=0)), ("model", model)]
)

# Define the parameters to tune
param_grid = {
    "pca__n_components": [2, 5, 10, 15],
    "model__max_depth": [2, 3, 5],
    "model__n_estimators": [5, 10, 20],
}

# Instantiate the grid search with 5-fold cross validation on all available cores
grid = GridSearchCV(pipeline, param_grid, cv=5, n_jobs=-1, scoring="accuracy")

# Launch the grid search
grid.fit(X_train, y_train)

# Print the best parameters found
print(f"Best parameters found: {grid.best_params_}")

# Output:
#  Best parameters found: {'model__max_depth': 5, 'model__n_estimators': 10, 'pca__n_components': 5}

# Currently we only focus on model inference in FHE
# The data transformation is done in clear (client machine)
# while the model inference is done in FHE on a server.
# The pipeline can be split into 2 parts:
#   1. data transformation
#   2. estimator
best_pipeline = grid.best_estimator_
data_transformation_pipeline = best_pipeline[:-1]
model = best_pipeline[-1]

# Transform test set
X_train_transformed = data_transformation_pipeline.transform(X_train)
X_test_transformed = data_transformation_pipeline.transform(X_test)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test_transformed)
print(f"Test accuracy in clear: {(y_pred_clear == y_test).mean():0.2f}")

# In the output, the Test accuracy in clear should be > 0.9

# Compile the model to FHE
model.compile(X_train_transformed)

# Perform the inference in FHE
# Warning: this will take a while. It is recommended to run this on a very small batch of
# examples first (e.g., N_TEST_FHE = 1)
# Note that here the encryption and decryption is done behind the scene.
N_TEST_FHE = 1
y_pred_fhe = model.predict(X_test_transformed[:N_TEST_FHE], fhe="execute")

# Assert that FHE predictions are the same as the clear predictions
print(f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
      f"examples over {N_TEST_FHE} have an FHE inference equal to the clear inference.")

# Output:
#  1 examples over 1 have an FHE inference equal to the clear inference

Quantization parameters

The graph above shows that, when using a sufficiently high bit-width, quantization has little impact on the decision boundaries of the Concrete ML FHE decision tree models. As quantization is done individually on each input feature, the impact of quantization is strongly reduced. This means that FHE tree-based models reach a similar level of accuracy as their floating point equivalents. Using 6 bits for quantization means that the Concrete ML model reaches, or exceeds, the floating point accuracy. The number of bits for quantization can be adjusted through the n_bits parameter.

When n_bits is set to a low value, the quantization process may sometimes create some artifacts that could lead to a decrease in accuracy. At the same time, the execution speed in FHE could improve. In this way, it is possible to adjust the accuracy/speed trade-off, and some accuracy can be recovered by increasing the n_estimators parameter.
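For illustration, here is a minimal sketch (not part of the original example; the data-set and hyper-parameter values are arbitrary) of how the n_bits/n_estimators trade-off can be explored on clear, quantized predictions before compiling to FHE:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn.xgb import XGBClassifier

# Synthetic data, for illustration only
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare clear (quantized) accuracy for two bit-widths; more estimators can
# recover accuracy lost to aggressive quantization
for n_bits, n_estimators in [(3, 10), (6, 10), (3, 50)]:
    model = XGBClassifier(n_bits=n_bits, n_estimators=n_estimators, max_depth=3, n_jobs=1)
    model.fit(X_train, y_train)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"n_bits={n_bits}, n_estimators={n_estimators}: accuracy {accuracy:.2f}")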

The following graph shows that using 5-6 bits of quantization is usually sufficient to reach the performance of a non-quantized XGBoost model on floating point data. The metrics plotted are accuracy and F1-score on the spambase data-set.

FHE Inference time considerations

The inference time in FHE is strongly dependent on the maximum circuit bit-width. For trees, in most cases, the quantization bit-width will be the same as the circuit bit-width. Therefore, reducing the quantization bit-width to 4 or less will result in fast inference times. Adding more bits will increase FHE inference time exponentially.

In some rare cases, the bit-width of the circuit can be higher than the quantization bit-width. This could happen when the quantization bit-width is low but the tree-depth is high. In such cases, the circuit bit-width is upper bounded by ceil(log2(max_depth + 1) + 1). For example, with max_depth = 5, this bound is ceil(log2(6) + 1) = 4 bits.

Installation

Not all hardware/OS combinations are supported. Determine your platform, OS version, and Python version before referencing the table below.

Depending on your OS, Concrete ML may be installed with Docker or with pip:

| OS / HW                                 | Available on Docker | Available on pip |
| --------------------------------------- | ------------------- | ---------------- |
| Linux                                   | Yes                 | Yes              |
| Windows                                 | Yes                 | Not currently    |
| Windows Subsystem for Linux             | Yes                 | Yes              |
| macOS 11+ (Intel)                       | Yes                 | Yes              |
| macOS 11+ (Apple Silicon: M1, M2, etc.) | Yes                 | Yes              |
Only some versions of Python are supported: in the current release, these are 3.8, 3.9, and 3.10. The Concrete ML Python package requires glibc >= 2.28. On Linux, you can check your glibc version by running ldd --version.

Most of these limits are shared with the rest of the Concrete stack (namely Concrete-Python). Support for more platforms will be added in the future.

Using PyPi

Requirements

Installing on Windows can be done using Docker or WSL. On WSL, Concrete ML will work as long as the package is not installed in the /mnt/c/ directory, which corresponds to the host OS filesystem.

Installation

To install Concrete ML from PyPi, run the following:

pip install -U pip wheel setuptools
pip install concrete-ml

This will automatically install all dependencies, notably Concrete.

Using Docker

Concrete ML can be installed using Docker by either pulling the latest image or a specific version:

docker pull zamafhe/concrete-ml:latest
# or
docker pull zamafhe/concrete-ml:v0.4.0

The image can then be used via the following command:

# Without local volume:
docker run --rm -it -p 8888:8888 zamafhe/concrete-ml

# With local volume to save notebooks on host:
docker run --rm -it -p 8888:8888 -v /host/path:/data zamafhe/concrete-ml

This will launch a Concrete ML enabled Jupyter server in Docker that can be accessed directly from a browser.

Alternatively, a shell can be launched in Docker, with or without volumes:

docker run --rm -it zamafhe/concrete-ml /bin/bash

Pandas

Concrete ML fully supports Pandas, allowing built-in models such as linear and tree-based models to use Pandas dataframes and series just as they would be used with NumPy arrays.

The table below summarizes current compatibility:

| Methods                  | Support Pandas dataframe |
| ------------------------ | ------------------------ |
| fit                      | ✓                        |
| compile                  | ✓                        |
| predict (fhe="simulate") | ✓                        |
| predict (fhe="execute")  | ✓                        |

Example

import numpy as np
import pandas as pd
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Create the data-set as a Pandas dataframe
X, y = make_classification(
    n_samples=250,
    n_features=30,
    n_redundant=0,
    random_state=2,
)
X, y = pd.DataFrame(X), pd.DataFrame(y)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# Instantiate the model
model = LogisticRegression(n_bits=8)

# Fit the model
model.fit(X_train, y_train)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test)

# Compile the model
model.compile(X_train)

# Perform the inference in FHE
y_pred_fhe = model.predict(X_test, fhe="execute")

# Assert that FHE predictions are the same as the clear predictions
print(
    f"{(y_pred_fhe == y_pred_clear).sum()} "
    f"examples over {len(y_pred_fhe)} have an FHE inference equal to the clear inference."
)

# Output:
    # 100 examples over 100 have an FHE inference equal to the clear inference.

Inference in the Cloud

Concrete ML models can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.

Keys are generated by the user once for each service they use, based on the model the service provides and its cryptographic parameters.

The overall communications protocol that enables cloud deployment of machine learning services involves the following steps:

  1. The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference.

  2. The client requests the cryptographic parameters (also called "client specs"). Once it receives them from the server, the secret and evaluation keys are generated.

  3. The client sends the evaluation key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data.

  4. The server uses the evaluation key to securely run inference on the user's data and sends back the encrypted result.

  5. The client now decrypts the result and can send back new requests.
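As a minimal sketch of this protocol, assuming `model` is an already compiled Concrete ML model and X_test holds the client's clear data, the client/server helpers look roughly like this (class and method names follow recent concrete.ml.deployment versions; paths are illustrative, so check your installed version):

from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# 1. Model developer: save the compiled model (client specs + server circuit)
fhe_directory = "/tmp/fhe_model"
FHEModelDev(path_dir=fhe_directory, model=model).save()

# 2. Client: generate the secret and evaluation keys from the client specs
client = FHEModelClient(path_dir=fhe_directory, key_dir="/tmp/fhe_keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# 3. Client: encrypt the input and send it (with the evaluation key) to the server
encrypted_input = client.quantize_encrypt_serialize(X_test[[0]])

# 4. Server: run the inference on encrypted data using the evaluation key
server = FHEModelServer(path_dir=fhe_directory)
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# 5. Client: decrypt and de-quantize the result
y0 = client.deserialize_decrypt_dequantize(encrypted_result)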

Neural Networks

Concrete ML provides simple built-in neural network models with a scikit-learn interface through the NeuralNetClassifier and NeuralNetRegressor classes.

Concrete ML models are multi-layer, fully-connected networks with customizable activation functions and a configurable number of neurons in each layer. This approach is similar to what is available in scikit-learn when using the MLPClassifier/MLPRegressor classes. The built-in models train easily with a single call to .fit(), which will automatically quantize weights and activations. These models use Quantization Aware Training, allowing good performance for low precision (down to 2-3 bits) weights and activations.

Example usage

To create an instance of a Fully Connected Neural Network (FCNN), you need to instantiate one of the NeuralNetClassifier and NeuralNetRegressor classes and configure a number of parameters that are passed to their constructor. Note that some parameters need to be prefixed by module__, while others don't. The parameters related to the model (i.e., the underlying nn.Module), must have the prefix. The parameters related to training options do not require the prefix.
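A minimal sketch of this configuration (the data-set and parameter values are illustrative and not taken from the original docs; skorch-based models typically expect float32 inputs):

import numpy as np
import torch.nn as nn
from sklearn.datasets import make_classification
from concrete.ml.sklearn import NeuralNetClassifier

# Synthetic data, cast to the types skorch-based models usually expect
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X = X.astype(np.float32)
y = y.astype(np.int64)

params = {
    "module__n_layers": 3,                   # model (nn.Module) parameters use the module__ prefix
    "module__activation_function": nn.ReLU,
    "n_w_bits": 3,                           # quantization parameters, no prefix
    "n_a_bits": 3,
    "max_epochs": 10,                        # training parameters, no prefix
    "verbose": 0,
}

model = NeuralNetClassifier(**params)
model.fit(X, y)

# Compile on a representative input set, then run encrypted inference
model.compile(X)
y_pred_fhe = model.predict(X[:1], fhe="execute")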

The figure above right shows the Concrete ML neural network, trained with Quantization Aware Training in an FHE-compatible configuration. The figure compares this network to the floating-point equivalent, trained with scikit-learn.

Architecture parameters

  • module__n_layers: number of layers in the FCNN, must be at least 1. Note that this is the total number of layers. For a single, hidden layer NN model, set module__n_layers=2

Quantization parameters

  • n_w_bits (default 3): number of bits for weights

  • n_a_bits (default 3): number of bits for activations and inputs

Training parameters (from skorch)

  • max_epochs: The number of epochs to train the network (default 10)

  • verbose: Whether to log loss/metrics during training (default: False)

  • lr: Learning rate (default 0.001)

Advanced parameters

Class weights

You can give weights to each class to use in training. Note that this must be supported by the underlying PyTorch loss function.

Overflow errors

The n_accum_bits parameter influences training accuracy as it controls the number of non-zero neurons that are allowed in each layer. Increasing n_accum_bits improves accuracy, but should take into account precision limitations to avoid an overflow in the accumulator. The default value is a good compromise that avoids an overflow in most cases, but you may want to change the value of this parameter to reduce the breadth of the network if you have overflow errors.

Furthermore, the number of neurons on intermediate layers is controlled through the n_hidden_neurons_multiplier parameter - a value of 1 will make intermediate layers have the same number of neurons as the number of dimensions of the input data.

Linear Models

Linear models are also compatible with some of scikit-learn's main workflows, such as Pipeline() and GridSearchCV().

Quantization parameters

The n_bits parameter controls the bit-width of the inputs and weights of the linear models. When non-linear mapping is applied by the model, such as exp or sigmoid, Concrete ML applies it on the client-side, on clear-text values that are the decrypted output of the linear part of the model. Thus, Linear Models do not use table lookups, and can, therefore, use high precision integers for weight and inputs.

The n_bits parameter can be set to 8 or more bits for models with up to 300 input dimensions. When the input has more dimensions, n_bits must be reduced to 6-7. All performance metrics are preserved down to n_bits=6, compared to the non-quantized float models from scikit-learn.

Example

The overall accuracy scores are identical (93%) between the scikit-learn model (executed in the clear) and the Concrete ML one (executed in FHE). In fact, quantization has little impact on the decision boundaries, as linear models are able to consider large precision numbers when quantizing inputs and weights in Concrete ML. Additionally, as the linear models do not use PBS, the FHE computations are always exact. This means that the FHE predictions are always identical to the quantized clear ones.

Loading a pre-trained model

An alternative to the example above is to train a scikit-learn model in a separate step and then to convert it to Concrete ML.

Nearest Neighbors

Concrete ML offers nearest neighbors non-parametric classification models with a scikit-learn interface through the KNeighborsClassifier class.

Example usage

The predict method of the KNeighborsClassifier performs the following steps:

  • quantization of the test vectors, performed in the clear

  • computation of the top-k class indices of the closest training set vectors, on encrypted data

  • majority vote of the top-k class labels to find the class for each test vector, performed in the clear
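A small sketch of how these steps are driven through the scikit-learn-style API (the data-set and the n_bits/n_neighbors values are illustrative, not from the original docs):

from sklearn.datasets import make_classification
from concrete.ml.sklearn import KNeighborsClassifier

# Small synthetic data-set, for illustration only
X, y = make_classification(n_samples=100, n_features=4, random_state=0)

knn = KNeighborsClassifier(n_bits=2, n_neighbors=3)
knn.fit(X, y)

# Compile on a representative input set, then classify an encrypted test vector
knn.compile(X)
y_pred_fhe = knn.predict(X[:1], fhe="execute")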

Inference time considerations

The FHE inference latency of this model is heavily influenced by n_bits and the dimensionality of the data. Furthermore, the size of the data-set has a linear impact on the inference complexity, and the number of nearest neighbors, n_neighbors, also plays a role.

Built-in Model Examples

FHE constraints

In Concrete ML, built-in linear models are exact equivalents to their scikit-learn counterparts. As they do not apply any non-linearity during inference, these models are very fast (~1ms FHE inference time) and can use high-precision integers (between 20-25 bits).

Tree-based models apply non-linear functions that enable comparisons of inputs and trained thresholds. Thus, they are limited with respect to the number of bits used to represent the inputs. But as these examples show, in practice 5-6 bits are sufficient to exactly reproduce the behavior of their scikit-learn counterpart models.

In the examples below, built-in neural networks can be configured to work with user-specified accumulator sizes, which allow the user to adjust the speed/accuracy trade-off.

List of examples

1. Linear models

These examples show how to use the built-in linear models on synthetic data, which allows for easy visualization of the decision boundaries or trend lines. Executing these 1D and 2D models in FHE takes around 1 millisecond.

2. Generalized linear models

3. Decision tree

4. XGBoost and Random Forest classifier

5. XGBoost regression

6. Fully connected neural network

7. Comparison of models

Based on three different synthetic data-sets, all the built-in classifiers are demonstrated in this notebook, showing accuracies, inference times, accumulator bit-widths, and decision boundaries.

Using ONNX

ONNX models can be compiled by directly importing models that are already quantized with Quantization Aware Training (QAT) or by performing Post-Training Quantization (PTQ) with Concrete ML.

Simple example

The following example shows how to compile an ONNX model using PTQ. The model was initially trained using Keras before being exported to ONNX. The training code is not shown here.

While Keras was used in this example, it is not officially supported. Additional work is needed to test all of Keras's types of layers and models.
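As a minimal sketch of the PTQ import flow (assuming an ONNX file "model.onnx" is available; the calibration set shape and bit-width are illustrative):

import numpy
import onnx
from concrete.ml.torch.compile import compile_onnx_model

onnx_model = onnx.load("model.onnx")

# Representative calibration data; its shape must match the model's input
input_set = numpy.random.uniform(-1, 1, size=(100, 10))

quantized_module = compile_onnx_model(onnx_model, input_set, n_bits=6)

# Run the compiled module: in the clear (quantized) or in FHE
y_clear = quantized_module.forward(input_set[:1], fhe="disable")
y_fhe = quantized_module.forward(input_set[:1], fhe="execute")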

Quantization Aware Training

Supported operators

The following operators are supported for evaluation and conversion to an equivalent FHE circuit. Other operators were not implemented, either due to FHE constraints or because they are rarely used in PyTorch activations or scikit-learn models.

  • Abs

  • Acos

  • Acosh

  • Add

  • Asin

  • Asinh

  • Atan

  • Atanh

  • AveragePool

  • BatchNormalization

  • Cast

  • Celu

  • Clip

  • Concat

  • Constant

  • ConstantOfShape

  • Conv

  • Cos

  • Cosh

  • Div

  • Elu

  • Equal

  • Erf

  • Exp

  • Flatten

  • Floor

  • Gather

  • Gemm

  • Greater

  • GreaterOrEqual

  • HardSigmoid

  • HardSwish

  • Identity

  • LeakyRelu

  • Less

  • LessOrEqual

  • Log

  • MatMul

  • Max

  • MaxPool

  • Min

  • Mul

  • Neg

  • Not

  • Or

  • PRelu

  • Pad

  • Pow

  • ReduceSum

  • Relu

  • Reshape

  • Round

  • Selu

  • Shape

  • Sigmoid

  • Sign

  • Sin

  • Sinh

  • Slice

  • Softplus

  • Squeeze

  • Sub

  • Tan

  • Tanh

  • ThresholdedRelu

  • Transpose

  • Unsqueeze

  • Where

  • onnx.brevitas.Quant

Step-by-step Guide

This guide provides a complete example of converting a PyTorch neural network into its FHE-friendly, quantized counterpart. It focuses on Quantization Aware Training a simple network on a synthetic data-set.

In general, quantization can be carried out in two different ways: either during Quantization Aware Training (QAT) or after the training phase with Post-Training Quantization (PTQ).

For a formal explanation of the mechanisms that enable FHE-compatible neural networks, please see the following paper.

Baseline PyTorch model

In PyTorch, using standard layers, a fully connected neural network (FCNN) would look like this:

The network was trained using different numbers of neurons in the hidden layers, and quantized using 3-bit weights and activations. The mean accumulator size shown below is measured as the mean over 10 runs of the experiment. An accumulator size of 6.6 means that 4 times out of 10 the accumulator measured was 6 bits, while 6 times out of 10 it was 7 bits.

This shows that the fp32 accuracy and accumulator size increase with the number of hidden neurons, while the 3-bit accuracy remains low irrespective of the number of neurons. While all the configurations tried here were FHE-compatible (accumulator < 16 bits), it is often preferable to have a lower accumulator size in order to speed up inference time.

Accumulator size is determined by Concrete as being the maximum bit-width encountered anywhere in the encrypted circuit.

Quantization Aware Training:

Brevitas provides a quantized version of almost all PyTorch layers (a Linear layer becomes QuantLinear, a ReLU layer becomes QuantReLU, and so on), plus some extra quantization parameters, such as:

  • bit_width: precision quantization bits for activations

  • act_quant: quantization protocol for the activations

  • weight_bit_width: precision quantization bits for weights

  • weight_quant: quantization protocol for the weights

In order to use FHE, the network must be quantized from end to end. Thanks to Brevitas's QuantIdentity layer, it is possible to quantize the input by placing it at the entry point of the network. Moreover, it is also possible to combine PyTorch and Brevitas layers, provided that a QuantIdentity is placed after the PyTorch layer. The following table gives the replacements to be made to convert a PyTorch NN for Concrete ML compatibility.

Some PyTorch operators (from the PyTorch functional API), require a brevitas.quant.QuantIdentity to be applied on their inputs.

The QAT import tool in Concrete ML is a work in progress. While it has been tested with some networks built with Brevitas, it is possible to use other tools to obtain QAT networks.

With Brevitas, the network above becomes:

In the network above, biases are used for linear layers but are not quantized ("bias": True, "bias_quant": None). The addition of the bias is a univariate operation and is fused into the activation function.

Training this network with pruning (see below) with 30 out of 100 total non-zero neurons gives good accuracy while keeping the accumulator size low.

The PyTorch QAT training loop is the same as the standard floating point training loop, but hyper-parameters such as learning rate might need to be adjusted.

Quantization Aware Training is somewhat slower than normal training. QAT introduces quantization during both the forward and backward passes. The quantization process is inefficient on GPUs as its computational intensity is low with respect to data transfer time.

Pruning using Torch

Considering that FHE only works with limited integer precision, there is a risk of overflowing in the accumulator, which will make Concrete ML raise an error.

The following code shows how to use pruning in the previous example:

Results with PrunedQuantNet, a pruned version of the QuantSimpleNet with 100 neurons on the hidden layers, are given below, showing a mean accumulator size measured over 10 runs of the experiment:

This shows that the fp32 accuracy has been improved while maintaining constant mean accumulator size.

When pruning a larger neural network during training, it is easier to obtain a low bit-width accumulator while maintaining better final accuracy. Thus, pruning is more robust than training a similar, smaller network.

Using Torch

The following example uses a simple QAT PyTorch model that implements a fully connected neural network with two hidden layers. Due to its small size, making this model respect FHE constraints is relatively easy.

Configuring quantization parameters

The PyTorch/Brevitas models, created following the example above, require the user to configure quantization parameters such as bit_width (activation bit-width) and weight_bit_width. The quantization parameters, along with the number of neurons on each layer, will determine the accumulator bit-width of the network. Larger accumulator bit-widths result in higher accuracy but slower FHE inference time.

The following configurations were determined through experimentation for convolutional and dense layers.

Using the templates above, the probability of obtaining the target accumulator bit-width, for a single layer, was determined experimentally by training 10 models for each of the following data-sets.

Note that the accuracy on larger data-sets, when the accumulator size is low, is also reduced strongly.

Running encrypted inference

The model can now perform encrypted inference.

In this example, the input values x_test and the predicted values y_pred are floating points. The quantization (resp. de-quantization) step is done in the clear within the forward method, before (resp. after) any FHE computations.
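A sketch of such a call, assuming quantized_module is the compiled module from the previous step and N_FEAT is the input dimension used when building the network:

import numpy

# Run the compiled module on a single (random, illustrative) input, in FHE
x_test = numpy.array([numpy.random.randn(N_FEAT)])
y_pred = quantized_module.forward(x_test, fhe="execute")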

Simulated FHE Inference in the clear

The user can also perform the inference on clear data. Two approaches exist:

  • quantized_module.forward(quantized_x, fhe="simulate"): simulates FHE execution taking into account Table Lookup errors. De-quantization must be done in a second step as for actual FHE execution. Simulation takes into account the p_error/global_p_error parameters

  • quantized_module.forward(quantized_x, fhe="disable"): computes predictions in the clear on quantized data, and then de-quantize the result. The return value of this function contains the de-quantized (float) output of running the model in the clear. Calling this function on clear data is useful when debugging, but this does not perform actual FHE simulation.

Generic Quantization Aware Training import

While the example above shows how to import a Brevitas/PyTorch model, Concrete ML also provides an option to import generic QAT models implemented in PyTorch or through ONNX. Deep learning models made with TensorFlow or Keras should be usable by first converting them to ONNX.

QAT models contain quantizers in the PyTorch graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized.

When importing QAT models using this generic pipeline, a representative calibration set should be given as quantization parameters in the model need to be inferred from the statistics of the values encountered during inference.
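A sketch of this generic import path (torch_model is a QAT network containing quantizers and calibration_set is a representative NumPy input set; the import_qat flag and bit-width are assumed to match how the network was trained, so check your Concrete ML version):

from concrete.ml.torch.compile import compile_torch_model

quantized_module = compile_torch_model(
    torch_model,       # a QAT network that already contains quantizers
    calibration_set,   # representative inputs used to infer quantization parameters
    import_qat=True,
    n_bits=3,
)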

Supported operators and activations

Concrete ML supports a variety of PyTorch operators that can be used to build fully connected or convolutional neural networks, with normalization and activation layers. Moreover, many element-wise operators are supported.

Operators

univariate operators

shape modifying operators

operators that take an encrypted input and unencrypted constants

Concrete ML supports these operators but also the QAT equivalents from Brevitas.

  • brevitas.nn.QuantLinear

  • brevitas.nn.QuantConv2d

operators that can take both encrypted+unencrypted and encrypted+encrypted inputs

Quantizers

  • brevitas.nn.QuantIdentity

Activations

The equivalent versions from torch.functional are also supported.

quantization: The model is converted into an integer equivalent using quantization. Concrete ML performs this step either during training (Quantization Aware Training) or after training (Post-Training Quantization), depending on model type. Quantization converts inputs, model weights, and all intermediate values of the inference computation to integers. More information is available in the quantization documentation.

simulation: Testing FHE models on very large data-sets can take a long time. Furthermore, not all models are compatible with FHE constraints out of the box. Simulation allows you to execute a model that was quantized, to measure the accuracy it would have in FHE, but also to determine the modifications required to make it FHE compatible. Simulation is described in more detail in the dedicated documentation section.

compilation: Once the model is quantized, simulation can confirm it has good accuracy in FHE. The model then needs to be compiled using Concrete's FHE Compiler to produce an equivalent FHE circuit. This circuit is represented as an MLIR program consisting of low-level cryptographic operations. You can read more about FHE compilation, MLIR, and the low-level Concrete library in the Concrete documentation.

Examples of the model development and deployment workflows can be found in the demos and tutorials.

While Concrete ML users only need to understand the cryptography concepts above, for a deeper understanding of the cryptography behind the Concrete stack, please see the whitepaper on TFHE and Programmable Bootstrapping or this series of blogs.

Concrete ML quantizes the input data and model outputs in the same way as weights and activations. The main levers to control accumulator bit-width are the number of bits used for the inputs, weights, and activations of the model. These parameters are crucial to comply with the constraint on accumulator bit-widths. Please refer to the quantization documentation for more details about how to develop models with quantization in Concrete ML.

These methods may cause a reduction in the accuracy of the model since its representative power is diminished. Carefully choosing a quantization approach can alleviate accuracy loss, all the while allowing compilation to FHE. Concrete ML offers built-in models that include quantization algorithms, and users only need to configure some of their parameters, such as the number of bits, discussed above. See the built-in model documentation for information about configuring these parameters for various models.

Additional specific methods can help to make models compatible with FHE constraints. For instance, dimensionality reduction can reduce the number of input features and, thus, the maximum accumulator bit-width reached within a circuit. Similarly, sparsity-inducing training methods, such as pruning, deactivate some features during inference. For now, dimensionality reduction is considered as a pre-processing step, while pruning is used in the built-in neural networks.

The configuration of model quantization parameters is illustrated in the advanced examples for Linear and Logistic Regressions, and dimensionality reduction is shown in the Poisson regression example.

Concrete ML provides several of the most popular classification and regression tree models that can be found in scikit-learn.

Concrete ML also supports XGBoost's XGBClassifier and XGBRegressor.

For a formal explanation of the mechanisms that enable FHE-compatible decision trees, please see the following paper: Privacy-Preserving Tree-Based Inference with Fully Homomorphic Encryption, arXiv:2303.01254.

Here's an example of how to use this model in FHE on a popular data-set using some of scikit-learn's pre-processing tools. A more complete example can be found in the XGBClassifier notebook.

Similarly, the decision boundaries of the Concrete ML model can be plotted and compared to the results of the classical XGBoost model executed in the clear. A 6-bit model is shown in order to illustrate the impact of quantization on classification. Similar plots can be found in the Classifier Comparison notebook.

For more information on the inference time of FHE decision trees and tree-ensemble models, please see Privacy-Preserving Tree-Based Inference with Fully Homomorphic Encryption, arXiv:2303.01254.

Concrete ML can be installed on Kaggle (see the question on the community forum for more details) and on Google Colab.

Installing Concrete ML using PyPi requires a Linux-based OS or macOS running on an x86 CPU. For Apple Silicon, Docker is the only currently supported option (see below).

The image can be used with Docker volumes; see the Docker documentation for more details.

The following example considers a LogisticRegression model on a simple classification problem. A more advanced example can be found in the Titanic use case notebook, which considers an XGBClassifier.

As seen in the concepts section, once compiled to FHE, a Concrete ML model generates machine code that performs the inference on private data. Secret encryption keys are needed so that the user can securely encrypt their data and decrypt the inference result. An evaluation key is also needed for the server to securely process the user's encrypted data.

For more information on how to implement this basic secure inference protocol, refer to the deployment documentation.

The neural network models are implemented with skorch, which provides a scikit-learn-like interface to Torch models.

While NeuralNetClassifier and NeuralNetRegressor provide scikit-learn-like models, their architecture is somewhat restricted to make training easy and robust. If you need more advanced models, you can convert custom neural networks as described in the deep learning documentation.

Good quantization parameter values are critical to make models respect FHE constraints. Weights and activations should be quantized to low precision (e.g., 2-4 bits). The sparsity of the network can be tuned to avoid accumulator overflow.

Using nn.ReLU as the activation function benefits from an optimization where quantization scales can be rounded to powers of two. This results in much faster inference times in FHE, thanks to a TFHE primitive that performs fast division by powers of two.

The Classifier Comparison notebook shows the behavior of built-in neural networks on several synthetic data-sets.

module__activation_function: can be one of the Torch activations (e.g., nn.ReLU; see the full list in the PyTorch documentation). Neural networks with nn.ReLU activation benefit from specific optimizations that make them around 10x faster than networks with other activation functions.

n_accum_bits: maximum accumulator bit-width that is desired. By default, this is unbounded. When used, the implementation will attempt to keep accumulators under this bit-width through pruning (i.e., setting some weights to zero).

power_of_two_scaling (default True): forces quantization scales to be powers-of-two, which, when coupled with the ReLU activation, benefits from strong FHE inference time optimization. See the corresponding section in the quantization documentation for more details.

Other parameters from skorch can be found in the skorch documentation.

module__n_hidden_neurons_multiplier: The number of hidden neurons will be automatically set proportional to the dimensionality of the input. This parameter controls the proportionality factor and is set to 4 by default. This value gives good accuracy while avoiding accumulator overflow.

Concrete ML provides several of the most popular linear models for regression and classification that can be found in scikit-learn.

Using these models in FHE is extremely similar to what can be done with scikit-learn's API, making it easy for data scientists who have used this framework to get started with Concrete ML.

It is possible to convert an already trained scikit-learn linear model to a Concrete ML one by using the from_sklearn_model method. This functionality is only available for linear models.

The following snippet gives an example about training a LogisticRegression model on a simple data-set followed by inference on encrypted data with FHE. A more complete example can be found in the LogisticRegression notebook.

We can then plot the decision boundary of the classifier and compare those results with a scikit-learn model executed in the clear. The complete code can be found in the LogisticRegression notebook.

The KNeighborsClassifier class quantizes the training data-set that is given to .fit with the specified number of bits, n_bits. As this value must be kept low to comply with FHE constraints, the accuracy of the model will depend heavily on a well-chosen value of n_bits and the dimensionality of the data.

The KNN computation executes in FHE in O(N·log²(k)) steps, where N is the training data-set size and k is n_neighbors. Each step requires several PBS, but the run-time of each of these PBS is influenced by the factors listed above. These factors combine to give the precision required to represent the distances between test vectors and the training data-set vectors. The PBS input precision required by the circuit is related to the one of the distance values.

These examples illustrate the basic usage of built-in Concrete ML models. For more examples showing how to train high-accuracy models on more complex data-sets, see the section.

It is recommended to use to configure the speed/accuracy trade-off for tree-based models and neural networks, using grid-search or your own heuristics.

These two examples show generalized linear models (GLM) on the real-world data-set. As the non-linear, inverse-link functions are computed, these models do not use , and are, thus, very fast (~1ms execution time).

Using the data-set, this example shows how to train a classifier that detects spam, based on features extracted from email messages. A grid-search is performed over decision-tree hyper-parameters to find the best ones.

Using the data-set, this example shows how to train regressor that predicts house prices.

This example shows how to train tree-ensemble models (either XGBoost or Random Forest), first on a synthetic data-set, and then on the data-set. Grid-search is used to find the best number of trees in the ensemble.

Privacy-preserving prediction of house prices is shown in this example, using the data-set. Using 50 trees in the ensemble, with 5 bits of precision for the input features, the FHE regressor obtains an score of 0.90 and an execution time of 7-8 seconds.

Two different configurations of the built-in, fully-connected neural networks are shown. First, a small bit-width accumulator network is trained on and compared to a PyTorch floating point network. Second, a larger accumulator (>8 bits) is demonstrated on .

In addition to Concrete ML models and custom models in Torch, it is also possible to directly compile ONNX models. This can be particularly appealing, notably to import models trained with Keras.

This example uses Post-Training Quantization, i.e., the quantization is not performed during training. This model would not have good performance in FHE. Quantization Aware Training should be added by the model developer. Additionally, importing QAT ONNX models can be done as shown below.

Models trained using QAT contain quantizers in the ONNX graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized. Since these QAT models have quantizers that are configured during training to a specific number of bits, the ONNX graph will need to be imported using the same settings:

Regarding FHE-friendly neural networks, QAT is the best way to reach optimal accuracy under FHE constraints. This technique allows weights and activations to be reduced to very low bit-widths (e.g., 2-3 bits), which, combined with pruning, can keep accumulator bit-widths low.

Concrete ML uses the third-party library Brevitas to perform QAT for PyTorch NNs, but options exist for other frameworks such as Keras/Tensorflow.

Several use-case examples that use Brevitas are available in the Concrete ML library.

This guide is based on a notebook tutorial, from which some code blocks are documented.

For a more formal description of the usage of Brevitas to build FHE-compatible neural networks, please see the associated research paper.

This example shows how to train an FCNN, similar to the one above, on a synthetic 2D data-set with a checkerboard grid pattern of 100 x 100 points. The data is split into 9500 training and 500 test samples.

Once trained, this PyTorch network can be imported using the compile_torch_model function. This function uses simple PTQ.


Using Quantization Aware Training (QAT) is the best way to guarantee good accuracy for Concrete ML compatible neural networks.


To understand how to overcome this limitation, consider a scenario where 2 bits are used for weights and layer inputs/outputs. The Linear layer computes a dot product between weights and inputs, y = sum_i(w_i * x_i). With 2 bits, no overflow can occur during the computation of the Linear layer as long as the number of neurons does not exceed 14, since the sum of 14 products of 2-bit numbers does not exceed 7 bits.

By default, Concrete ML uses symmetric quantization for model weights, with values in the interval [-2^(n_bits-1), 2^(n_bits-1) - 1]. For example, for n_bits = 2 the possible values are [-2, -1, 0, 1]; for n_bits = 3, the values can be [-4, -3, -2, -1, 0, 1, 2, 3].

In a typical setting, the weights will not all have the maximum or minimum values (e.g., -2^(n_bits-1)). Weights typically have a normal distribution around 0, which is one of the motivating factors for their symmetric quantization. A symmetric distribution and many zero-valued weights are desirable because opposite sign weights can cancel each other out and zero weights do not increase the accumulator size.

This fact can be leveraged to train a network with more neurons, while not overflowing the accumulator, using a technique called pruning, where the developer can impose a number of zero-valued weights. Torch supports pruning out of the box.


In addition to the built-in models, Concrete ML supports generic machine learning models implemented with Torch, or exported as ONNX graphs.

As Quantization Aware Training (QAT) is the most appropriate method of training neural networks that are compatible with FHE constraints, Concrete ML works with Brevitas, a library providing QAT support for PyTorch.

Once the model is trained, calling compile_brevitas_qat_model from Concrete ML will automatically perform conversion and compilation of a QAT network. Here, 3-bit quantization is used for both the weights and activations. The compile_brevitas_qat_model function automatically identifies the number of quantization bits used in the Brevitas model.


FHE simulation allows to measure the impact of the Table Lookup error on the model accuracy. The Table Lookup error can be adjusted using p_error/global_p_error, as described in the section.

Suppose that n_bits_qat is the bit-width of activations and weights during the QAT process. To import a PyTorch QAT network, you can use the library function, passing import_qat=True:

Alternatively, if you want to import an ONNX model directly, please see the ONNX documentation. The compile_onnx_model function also supports the import_qat parameter.

from concrete.ml.sklearn import NeuralNetClassifier
import torch.nn as nn

n_inputs = 10
n_outputs = 2
params = {
    "module__n_layers": 2,
    "max_epochs": 10,
}

concrete_classifier = NeuralNetClassifier(**params)
# Optionally, give a weight to each class during training (requires `classes` and `y_train`):
from sklearn.utils.class_weight import compute_class_weight
params["criterion__weight"] = compute_class_weight("balanced", classes=classes, y=y_train)
import numpy
from tqdm import tqdm
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification:
X, y = make_classification(
    n_features=30,
    n_redundant=0,
    n_informative=2,
    random_state=2,
    n_clusters_per_class=1,
    n_samples=250,
)

# Retrieve train and test sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# Instantiate the model:
model = LogisticRegression(n_bits=8)

# Fit the model:
model.fit(X_train, y_train)

# Evaluate the model on the test set in clear:
y_pred_clear = model.predict(X_test)

# Compile the model:
model.compile(X_train)

# Perform the inference in FHE:
y_pred_fhe = model.predict(X_test, fhe="execute")

# Assert that FHE predictions are the same as the clear predictions:
print(
    f"{(y_pred_fhe == y_pred_clear).sum()} examples over {len(y_pred_fhe)} "
    "have an FHE inference equal to the clear inference."
)

# Output:
#  100 examples over 100 have an FHE inference equal to the clear inference
from sklearn.linear_model import LogisticRegression as SKlearnLogisticRegression

# Instantiate the model:
model = SKlearnLogisticRegression()

# Fit the model:
model.fit(X_train, y_train)

cml_model = LogisticRegression.from_sklearn_model(model, X_train, n_bits=8)

# Compile the model:
cml_model.compile(X_train)

# Perform the inference in FHE:
y_pred_fhe = cml_model.predict(X_test, fhe="execute")

from concrete.ml.sklearn import KNeighborsClassifier

concrete_classifier = KNeighborsClassifier(n_bits=2, n_neighbors=3)
import numpy
import onnx
import tensorflow
import tf2onnx

from concrete.ml.torch.compile import compile_onnx_model
from concrete.fhe.compilation import Configuration


class FC(tensorflow.keras.Model):
    """A fully-connected model."""

    def __init__(self):
        super().__init__()
        hidden_layer_size = 10
        output_size = 5

        self.dense1 = tensorflow.keras.layers.Dense(
            hidden_layer_size,
            activation=tensorflow.nn.relu,
        )
        self.dense2 = tensorflow.keras.layers.Dense(output_size, activation=tensorflow.nn.relu6)
        self.flatten = tensorflow.keras.layers.Flatten()

    def call(self, inputs):
        """Forward function."""
        x = self.flatten(inputs)
        x = self.dense1(x)
        x = self.dense2(x)
        return self.flatten(x)


n_bits = 6
input_output_feature = 2
input_shape = (input_output_feature,)
num_inputs = 1
n_examples = 5000

# Define the Keras model
keras_model = FC()
keras_model.build((None,) + input_shape)
keras_model.compute_output_shape(input_shape=(None, input_output_feature))

# Create random input
input_set = numpy.random.uniform(-100, 100, size=(n_examples, *input_shape))

# Convert to ONNX
tf2onnx.convert.from_keras(keras_model, opset=14, output_path="tmp.model.onnx")

onnx_model = onnx.load("tmp.model.onnx")
onnx.checker.check_model(onnx_model)

# Compile
quantized_module = compile_onnx_model(
    onnx_model, input_set, n_bits=2
)

# Create test data from the same distribution and quantize using
# learned quantization parameters during compilation
x_test = tuple(numpy.random.uniform(-100, 100, size=(1, *input_shape)) for _ in range(num_inputs))

y_clear = quantized_module.forward(*x_test, fhe="disable")
y_fhe = quantized_module.forward(*x_test, fhe="execute")

print("Execution in clear: ", y_clear)
print("Execution in FHE:   ", y_fhe)
print("Equality:           ", numpy.sum(y_clear == y_fhe), "over", numpy.size(y_fhe), "values")
# Define the number of bits to use for quantizing weights and activations during training
n_bits_qat = 3  

quantized_numpy_module = compile_onnx_model(
    onnx_model,
    input_set,
    import_qat=True,
    n_bits=n_bits_qat,
)
import torch
from torch import nn

IN_FEAT = 2
OUT_FEAT = 2

class SimpleNet(nn.Module):
    """Simple MLP with PyTorch"""

    def __init__(self, n_hidden = 30):
        super().__init__()
        self.fc1 = nn.Linear(in_features=IN_FEAT, out_features=n_hidden)
        self.fc2 = nn.Linear(in_features=n_hidden, out_features=n_hidden)
        self.fc3 = nn.Linear(in_features=n_hidden, out_features=OUT_FEAT)


    def forward(self, x):
        """Forward pass."""
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

| neurons               | 10     | 30     | 100    |
| --------------------- | ------ | ------ | ------ |
| fp32 accuracy         | 68.70% | 83.32% | 88.06% |
| 3-bit accuracy        | 56.44% | 55.54% | 56.50% |
| mean accumulator size | 6.6    | 6.9    | 7.4    |

| PyTorch fp32 layer | Concrete ML model with PyTorch/Brevitas           |
| ------------------ | ------------------------------------------------- |
| torch.nn.Linear    | brevitas.quant.QuantLinear                        |
| torch.nn.Conv2d    | brevitas.quant.Conv2d                             |
| torch.nn.AvgPool2d | torch.nn.AvgPool2d + brevitas.quant.QuantIdentity |
| torch.nn.ReLU      | brevitas.quant.QuantReLU                          |

PyTorch ops that require QuantIdentity:

  • torch.transpose

  • torch.add (between two activation tensors)

  • torch.reshape

  • torch.flatten

from brevitas import nn as qnn
from brevitas.core.quant import QuantType
from brevitas.quant import Int8ActPerTensorFloat, Int8WeightPerTensorFloat

N_BITS = 3
IN_FEAT = 2
OUT_FEAT = 2

class QuantSimpleNet(nn.Module):
    def __init__(
        self,
        n_hidden,
        qlinear_args={
            "weight_bit_width": N_BITS,
            "weight_quant": Int8WeightPerTensorFloat,
            "bias": True,
            "bias_quant": None,
            "narrow_range": True
        },
        qidentity_args={"bit_width": N_BITS, "act_quant": Int8ActPerTensorFloat},
    ):
        super().__init__()

        self.quant_inp = qnn.QuantIdentity(**qidentity_args)
        self.fc1 = qnn.QuantLinear(IN_FEAT, n_hidden, **qlinear_args)
        self.relu1 = qnn.QuantReLU(bit_width=qidentity_args["bit_width"])
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, **qlinear_args)
        self.relu2 = qnn.QuantReLU(bit_width=qidentity_args["bit_width"])
        self.fc3 = qnn.QuantLinear(n_hidden, OUT_FEAT, **qlinear_args)

        for m in self.modules():
            if isinstance(m, qnn.QuantLinear):
                torch.nn.init.uniform_(m.weight.data, -1, 1)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.relu1(self.fc1(x))
        x = self.relu2(self.fc2(x))
        x = self.fc3(x)
        return x       

| 3-bit accuracy (Brevitas) | 3-bit accuracy (Concrete ML) | Accumulator size |
|---|---|---|
| 95.4% | 95.4% | 7 |

In a neuron, the weighted sum $y = \sum_i w_i x_i$ is computed over quantized weights. With $n_{bits}$-bit signed quantization, weight values lie in the range $\left[-2^{n_{bits}-1}, 2^{n_{bits}-1}-1\right]$: for $n_{bits}=2$ the possible values are $[-2, -1, 0, 1]$, and for $n_{bits}=3$ they are $[-4, -3, -2, -1, 0, 1, 2, 3]$, the most negative value being $-2^{n_{bits}-1}$.
import torch.nn.utils.prune as prune

class PrunedQuantNet(SimpleNet):
    """Simple MLP with PyTorch"""

    pruned_layers = set()

    def prune(self, max_non_zero):
        # Linear layer weight has dimensions NumOutputs x NumInputs
        for name, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                print(name, layer)
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue
                print(f"Pruning layer {name} factor {num_zero_weights}")
                prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                self.pruned_layers.add(name)

    def unprune(self):
        for name, layer in self.named_modules():
            if name in self.pruned_layers:
                prune.remove(layer, "weight")
                self.pruned_layers.remove(name)

| | | |
|---|---|---|
| 3-bit accuracy | 82.50% | 88.06% |
| Mean accumulator size | 6.6 | 6.8 |

import brevitas.nn as qnn
import torch.nn as nn
import torch

N_FEAT = 12
n_bits = 3

class QATSimpleNet(nn.Module):
    def __init__(self, n_hidden):
        super().__init__()

        self.quant_inp = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(N_FEAT, n_hidden, True, weight_bit_width=n_bits, bias_quant=None)
        self.quant2 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, True, weight_bit_width=n_bits, bias_quant=None)
        self.quant3 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc3 = qnn.QuantLinear(n_hidden, 2, True, weight_bit_width=n_bits, bias_quant=None)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.quant2(torch.relu(self.fc1(x)))
        x = self.quant3(torch.relu(self.fc2(x)))
        x = self.fc3(x)
        return x
from concrete.ml.torch.compile import compile_brevitas_qat_model
import numpy

torch_input = torch.randn(100, N_FEAT)
torch_model = QATSimpleNet(30)
quantized_module = compile_brevitas_qat_model(
    torch_model, # our model
    torch_input, # a representative input-set to be used for both quantization and compilation
)

| Accumulator bit-width | | | |
|---|---|---|---|
| 8 | 3 | 3 | 80 |
| 10 | 4 | 3 | 90 |
| 12 | 5 | 5 | 110 |
| 14 | 6 | 6 | 110 |
| 16 | 7 | 6 | 120 |

| probability of obtaining the accumulator bit-width | 8 | 10 | 12 | 14 | 16 |
|---|---|---|---|---|---|
| mnist,fashion | 72% | 100% | 72% | 85% | 100% |
| cifar10 | 88% | 88% | 75% | 75% | 88% |
| cifar100 | 73% | 88% | 61% | 66% | 100% |

| accuracy for target accumulator bit-width | 8 | 10 | 12 | 14 | 16 |
|---|---|---|---|---|---|
| cifar10 | 20% | 37% | 89% | 90% | 90% |
| cifar100 | 6% | 30% | 67% | 69% | 69% |

x_test = numpy.array([numpy.random.randn(N_FEAT)])

y_pred = quantized_module.forward(x_test, fhe="execute")
from concrete.ml.torch.compile import compile_torch_model
n_bits_qat = 3

quantized_module = compile_torch_model(
    torch_model,
    torch_input,
    import_qat=True,
    n_bits=n_bits_qat,
)
DecisionTreeClassifier
DecisionTreeRegressor
RandomForestClassifier
RandomForestRegressor
XGBClassifier
XGBRegressor
torch.abs
torch.clip
torch.exp
torch.log
torch.gt
torch.clamp
torch.mul, torch.Tensor operator *
torch.div, torch.Tensor operator /
torch.nn.identity
torch.nn.BatchNorm2d
torch.reshape
torch.Tensor.view
torch.flatten
torch.transpose
torch.conv2d, torch.nn.Conv2D
torch.matmul
torch.nn.Linear
torch.add, torch.Tensor operator +
torch.sub, torch.Tensor operator -
torch.nn.Celu
torch.nn.Elu
torch.nn.GELU
torch.nn.Hardshrink
torch.nn.HardSigmoid
torch.nn.Hardswish
torch.nn.HardTanh
torch.nn.LeakyRelu
torch.nn.LogSigmoid
torch.nn.Mish
torch.nn.PReLU
torch.nn.ReLU6
torch.nn.ReLU
torch.nn.Selu
torch.nn.Sigmoid
torch.nn.SiLU
torch.nn.Softplus
torch.nn.Softshrink
torch.nn.Softsign
torch.nn.Tanh
torch.nn.Tanhshrink
torch.nn.Threshold

Deep Learning Examples

FHE constraints considerations

Some examples constrain accumulators to 7-8 bits, which can be sufficient for simple data-sets. Up to 16-bit accumulators can be used, but this introduces a slowdown of 4-5x compared to 8-bit accumulators.

List of Examples

1. Step-by-step guide to building a custom NN

This shows how to use Quantization Aware Training and pruning when starting out from a classical PyTorch network. This example uses a simple data-set and a small NN, which achieves good accuracy with low accumulator size.

Prediction with FHE

Concrete ML has APIs that make it easy, during model development and testing, to perform encryption, execution in FHE, and decryption in a single step. For more control, these individual steps can be executed separately. The APIs used to accomplish this are different for:

Built-in models

The following example shows how to create a synthetic data-set and how to use it to train a LogisticRegression model from Concrete ML. Next, we will discuss the dedicated functions for encryption, inference, and decryption.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression
import numpy

# Create a synthetic data-set for a classification problem
x, y = make_classification(n_samples=100, class_sep=2, n_features=3, n_informative=3, n_redundant=0, random_state=42)

# Split the data-set into a train and test set
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

# Instantiate and train the model
model = LogisticRegression()
model.fit(x_train,y_train)

# Simulate the predictions in the clear (optional)
y_pred_clear = model.predict(x_test)

# Compile the model on a representative set
fhe_circuit = model.compile(x_train)

All Concrete ML built-in models have a monolithic predict method that performs the encryption, FHE execution, and decryption with a single function call. Concrete ML models follow the same API as scikit-learn models, transparently performing the steps related to encryption for convenience.

# Predict in FHE
y_pred_fhe = model.predict(x_test, fhe="execute")

Regarding this LogisticRegression model, as with scikit-learn, it is possible to predict the logits as well as the class probabilities by respectively using the decision_function or predict_proba methods instead.
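For instance, a minimal sketch reusing the model and test set defined above (assuming these methods accept the same fhe argument as predict):

# Logits (raw scores, before the sigmoid), computed in FHE
y_logits_fhe = model.decision_function(x_test, fhe="execute")

# Class probabilities, computed in FHE
y_proba_fhe = model.predict_proba(x_test, fhe="execute")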

Alternatively, it is possible to execute all main steps (key generation, quantization, encryption, FHE execution, decryption) separately.

# Generate the keys (set force to True in order to generate new keys at each execution)
fhe_circuit.keygen(force=True)

y_pred_fhe_step = []

for f_input in x_test:
    # Quantize an input (float)
    q_input = model.quantize_input([f_input])
    
    # Encrypt the input
    q_input_enc = fhe_circuit.encrypt(q_input)

    # Execute the linear product in FHE 
    q_y_enc = fhe_circuit.run(q_input_enc)

    # Decrypt the result (integer)
    q_y = fhe_circuit.decrypt(q_y_enc)

    # De-quantize the result
    y = model.dequantize_output(q_y)

    # Apply either the sigmoid if it is a binary classification task, which is the case in this 
    # example, or a softmax function in order to get the probabilities (in the clear)
    y_proba = model.post_processing(y)

    # Since this model does classification, apply the argmax to get the class predictions (in the clear)
    # Note that regression models won't need the following line
    y_class = numpy.argmax(y_proba, axis=1)

    y_pred_fhe_step += list(y_class)

y_pred_fhe_step = numpy.array(y_pred_fhe_step)

print("Predictions in clear:", y_pred_clear)
print("Predictions in FHE  :", y_pred_fhe_step)
print(f"Similarity: {int((y_pred_fhe_step == y_pred_clear).mean()*100)}%")

Custom models

For custom models, the API to execute inference in FHE or simulation is illustrated as:

from torch import nn
from brevitas import nn as qnn
from concrete.ml.torch.compile import compile_brevitas_qat_model

class FCSmall(nn.Module):
    """A small QAT NN."""

    def __init__(self, input_output):
        super().__init__()
        self.quant_input = qnn.QuantIdentity(bit_width=3)
        self.fc1 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)
        self.act_f = nn.ReLU()
        self.fc2 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)

    def forward(self, x):
        return self.fc2(self.act_f(self.fc1(self.quant_input(x))))

torch_model = FCSmall(3)

quantized_module = compile_brevitas_qat_model(
    torch_model,
    x_train,
)

x_test_q = quantized_module.quantize_input(x_test)
y_pred = quantized_module.quantized_forward(x_test_q, fhe="simulate")
y_pred = quantized_module.dequantize_output(y_pred)

y_pred = numpy.argmax(y_pred, axis=1)

Optimizing Inference

Neural networks pose unique challenges with regards to encrypted inference. Each neuron in a network applies an activation function that requires a PBS operation. The latency of a single PBS depends on the bit-width of the input of the PBS.

Several approaches can be used to reduce the overall latency of a neural network.

Circuit bit-width optimization

Structured pruning

Rounded activations and quantizers

TLU error probability adjustment

Production Deployment

Concrete ML provides functionality to deploy FHE machine learning models in a client/server setting. The deployment workflow and model serving pattern is as follows:

Deployment

The diagram above shows the steps that a developer goes through to prepare a model for encrypted inference in a client/server setting. The training of the model and its compilation to FHE are performed on a development machine. Two different files are created when saving the model:

  • client.zip contains client.specs.json which lists the secure cryptographic parameters needed for the client to generate private and evaluation keys. It also contains serialized_processing.json which describes the pre-processing and post-processing required by the machine learning model, such as quantization parameters to quantize the input and de-quantize the output.

  • server.zip contains the compiled model. This file is sufficient to run the model on a server. The compiled model is machine-architecture specific (i.e., a model compiled on x86 cannot run on ARM).

The compiled model (server.zip) is deployed to a server and the cryptographic parameters (client.zip) are shared with the clients. In some settings, such as a phone application, the client.zip can be directly deployed on the client device and the server does not need to host it.
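As an illustration, here is a minimal sketch of how these files can be produced with the FHEModelDev class; the directory path is a placeholder and model stands for any fitted and compiled built-in model:

from concrete.ml.deployment import FHEModelDev

# Save the compiled model for deployment: this writes client.zip and server.zip
# into the given directory
dev = FHEModelDev(path_dir="./deployment_files", model=model)
dev.save()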

Note that for built-in models, the server output + post-processing adheres to the following guidelines: if the model is a regressor, the output follows the format of the scikit-learn .predict() method; if the model is a classifier, the output follows the format of the scikit-learn .predict_proba() method.

Serving

The client-side deployment of a secured inference machine learning model follows the schema above. First, the client obtains the cryptographic parameters (stored in client.zip) and generates a private encryption/decryption key as well as a set of public evaluation keys. The public evaluation keys are then sent to the server, while the secret key remains on the client.

The private data is then encrypted by the client as described in the serialized_processing.json file in client.zip, and it is then sent to the server. Server-side, the FHE model inference is run on encrypted inputs using the public evaluation keys.

The encrypted result is then returned by the server to the client, which decrypts it using its private key. Finally, the client performs any necessary post-processing of the decrypted result as specified in serialized_processing.json (part of client.zip).

The server-side implementation of a Concrete ML model follows the diagram above. The public evaluation keys sent by clients are stored. They are then retrieved for the client that is querying the service and used to evaluate the machine learning model stored in server.zip. Finally, the server sends the encrypted result of the computation back to the client.
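The sketch below illustrates these exchanges with the FHEModelClient and FHEModelServer classes; the directory paths and the example input are placeholders, and the in-process calls stand in for actual network transfers:

import numpy
from concrete.ml.deployment import FHEModelClient, FHEModelServer

# Client side: load client.zip, generate keys and serialize the evaluation keys
client = FHEModelClient(path_dir="./deployment_files", key_dir="./keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# Client side: quantize, encrypt and serialize a clear input
# (placeholder input; its shape must match the model's expected input)
clear_input = numpy.random.rand(1, 3)
encrypted_input = client.quantize_encrypt_serialize(clear_input)

# Server side: load server.zip and run the model on the encrypted input
server = FHEModelServer(path_dir="./deployment_files")
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# Client side: decrypt, de-quantize and post-process the result
result = client.deserialize_decrypt_dequantize(encrypted_result)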

Example notebook

AWS

Once this first setup is done you can launch python src/concrete/ml/deployment/deploy_to_aws.py --path-to-model <path_to_your_serialized_model> from the root of the repository to create an instance that runs a FastAPI server serving the model.

Docker

Running Docker with the latest version of Concrete ML will require you to build a Docker image. To do this, run the following command: poetry build && mkdir pkg && cp dist/* pkg/ && make release_docker. You will need to have make, poetry and docker installed on your system. To test locally there is a dedicated script: python src/concrete/ml/deployment/deploy_to_docker.py --path-to-model <path_to_your_serialized_model>, which should be run from the root of the repository in order to create a Docker container that runs a FastAPI server serving the model.

Hybrid models

FHE enables cloud applications to process private user data without running the risk of data leaks. Furthermore, deploying ML models in the cloud is advantageous as it eases model updates, allows to scale to large numbers of users by using large amounts of compute power, and protects model IP by keeping the model on a trusted server instead of the client device.

However, not all applications can be easily converted to FHE computation and the computation cost of FHE may make a full conversion exceed latency requirements.

Hybrid models provide a balance between on-device deployment and cloud-based deployment. This approach entails executing parts of the model directly on the client side, while other parts are securely processed with FHE on the server side. Concrete ML facilitates the hybrid deployment of various neural network models, including MLP (multilayer perceptron), CNN (convolutional neural network), and Large Language Models.

If model IP protection is important, care must be taken in choosing the parts of a model to be executed on the cloud. Some black-box model stealing attacks rely on knowledge distillation or on differential methods. As a general rule, the difficulty to steal a machine learning model is proportional to the size of the model, in terms of numbers of parameters and model depth.

Compilation

To use hybrid model deployment, the first step is to define what part of the PyTorch neural network model must be executed in FHE. The model part must be a nn.Module and is identified by its key in the original model's .named_modules().

import numpy as np
import os
import torch

from pathlib import Path
from torch import nn

from concrete.ml.torch.hybrid_model import HybridFHEMode, HybridFHEModel, tuple_to_underscore_str
from concrete.ml.deployment import FHEModelServer


class FCSmall(nn.Module):
    """Torch model for the tests."""

    def __init__(self, dim):
        super().__init__()
        self.seq = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.seq(x)

model = FCSmall(10)
model_name = "FCSmall"
submodule_name = "seq.0"

inputs = torch.Tensor(np.random.uniform(size=(10, 10)))
# Prints ['', 'seq', 'seq.0', 'seq.1', 'seq.2']
print([k for (k, _) in model.named_modules()])

# Create a hybrid model
hybrid_model = HybridFHEModel(model, [submodule_name])
hybrid_model.compile_model(
    inputs,
    n_bits=8,
)


models_dir = Path(os.path.abspath('')) / "compiled_models"
models_dir.mkdir(exist_ok=True)
model_dir = models_dir / model_name
hybrid_model.save_and_clear_private_info(model_dir, via_mlir=True)

Server Side Deployment

input_shape_subdir = tuple_to_underscore_str( (1,) + inputs.shape[1:] )
MODULES = { model_name: { submodule_name: {"path":  model_dir / submodule_name / input_shape_subdir }}}
server =  FHEModelServer(str(MODULES[model_name][submodule_name]["path"]))

Client Side

A client application that deploys a model with hybrid deployment can be developed in a very similar manner to on-premise deployment: the model is loaded normally with PyTorch, but an extra step is required to specify the remote endpoint and the model parts that are to be executed remotely.

# Modify model to use remote FHE server instead of local weights
hybrid_model = HybridFHEModel(
    model,
    submodule_name,
    server_remote_address="http://0.0.0.0:8000",
    model_name=f"{model_name}",
    verbose=False,
)
path_to_clients = Path(__file__).parent / "clients"
hybrid_model.init_client(path_to_clients=path_to_clients)

When the client application is ready to make inference requests to the server, it must set the operation mode of the HybridFHEModel instance to HybridFHEMode.REMOTE:

for module in hybrid_model.remote_modules.values():
    module.fhe_local_mode = HybridFHEMode.REMOTE    

When performing inference with the HybridFHEModel instance, hybrid_model, only the regular forward method is called, as if the model was fully deployed locally:

hybrid_model.forward(torch.randn((10,)))

When calling forward, the HybridFHEModel handles, for each model part that is deployed remotely, all the necessary intermediate steps: quantizing the data, encrypting it, making the request to the server using the requests Python module, and decrypting and de-quantizing the result.

Debugging Models

This section provides a set of tools and guidelines to help users build optimized FHE-compatible models. It discusses FHE simulation, the key-cache functionality that helps speed-up FHE result debugging, and gives a guide to evaluate circuit complexity.

Simulation

The simulation mode can be useful when developing and iterating on an ML model implementation. As FHE non-linear models work with integers up to 16 bits, with a trade-off between the number of bits and the FHE execution speed, the simulation can help to find the optimal model design.

The following example shows how to use the simulation mode in Concrete ML.

from sklearn.datasets import fetch_openml, make_circles
from concrete.ml.sklearn import RandomForestClassifier

n_bits = 2
X, y = make_circles(n_samples=1000, noise=0.1, factor=0.6, random_state=0)
concrete_clf = RandomForestClassifier(
    n_bits=n_bits, n_estimators=10, max_depth=5
)
concrete_clf.fit(X, y)

concrete_clf.compile(X)

# Running the model using FHE-simulation
y_preds_clear = concrete_clf.predict(X, fhe="simulate")

Caching keys during debugging

It is possible to avoid re-generating the keys of the models you are debugging. This feature is unsafe and should not be used in production. Here is an example that shows how to enable key-caching:

from sklearn.datasets import fetch_openml, make_circles
from concrete.ml.sklearn import RandomForestClassifier
from concrete.fhe import Configuration
debug_config = Configuration(
    enable_unsafe_features=True,
    use_insecure_key_cache=True,
    insecure_key_cache_location="~/.cml_keycache",
)

n_bits = 2
X, y = make_circles(n_samples=1000, noise=0.1, factor=0.6, random_state=0)
concrete_clf = RandomForestClassifier(
    n_bits=n_bits, n_estimators=10, max_depth=5
)
concrete_clf.fit(X, y)

concrete_clf.compile(X, debug_config)

Compilation debugging

The following produces a neural network that is not FHE-compatible:

import numpy
import torch

from torch import nn
from concrete.ml.torch.compile import compile_torch_model

N_FEAT = 2
class SimpleNet(nn.Module):
    """Simple MLP with PyTorch"""

    def __init__(self, n_hidden=30):
        super().__init__()
        self.fc1 = nn.Linear(in_features=N_FEAT, out_features=n_hidden)
        self.fc2 = nn.Linear(in_features=n_hidden, out_features=n_hidden)
        self.fc3 = nn.Linear(in_features=n_hidden, out_features=2)


    def forward(self, x):
        """Forward pass."""
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x


torch_input = torch.randn(100, N_FEAT)
torch_model = SimpleNet(120)
try:
    quantized_numpy_module = compile_torch_model(
        torch_model,
        torch_input,
        n_bits=7,
    )
except RuntimeError as err:
    print(err)

Upon execution, the Compiler will raise the following error within the graph representation:

Function you are trying to compile cannot be converted to MLIR:

%0 = _onnx__Gemm_0                    # EncryptedTensor<int7, shape=(1, 2)>        ∈ [-64, 63]
%1 = [[ 33 -27  ...   22 -29]]        # ClearTensor<int7, shape=(2, 120)>          ∈ [-63, 62]
%2 = matmul(%0, %1)                   # EncryptedTensor<int14, shape=(1, 120)>     ∈ [-4973, 4828]
%3 = subgraph(%2)                     # EncryptedTensor<uint7, shape=(1, 120)>     ∈ [0, 126]
%4 = [[ 16   6  ...   10  54]]        # ClearTensor<int7, shape=(120, 120)>        ∈ [-63, 63]
%5 = matmul(%3, %4)                   # EncryptedTensor<int17, shape=(1, 120)>     ∈ [-45632, 43208]
%6 = subgraph(%5)                     # EncryptedTensor<uint7, shape=(1, 120)>     ∈ [0, 126]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ table lookups are only supported on circuits with up to 16-bit integers
%7 = [[ -7 -52] ... [-12  62]]        # ClearTensor<int7, shape=(120, 2)>          ∈ [-63, 62]
%8 = matmul(%6, %7)                   # EncryptedTensor<int16, shape=(1, 2)>       ∈ [-26971, 29843]
return %8
quantized_numpy_module = compile_torch_model(
    torch_model,
    torch_input,
    n_bits=7,
    use_virtual_lib=True
)

res = quantized_numpy_module.bitwidth_and_range_report()
print(res)
{
    '/fc1/Gemm': {'range': (-6180, 6840), 'bitwidth': 14}, 
    '/fc2/Gemm': {'range': (-45051, 43090), 'bitwidth': 17}, 
    '/fc3/Gemm': {'range': (-17351, 13868), 'bitwidth': 16}
}

To make this network FHE-compatible one can reduce the bit-width of the second layer named fc2. To do this, a simple solution is to reduce the number of neurons, as it is proportional to the bit-width.

Reducing the number of neurons in this layer resolves the error and makes the network FHE-compatible:

torch_model = SimpleNet(10)

quantized_numpy_module = compile_torch_model(
    torch_model,
    torch_input,
    n_bits=7,
)

Complexity analysis

In FHE, univariate functions are encoded as table lookups, which are then implemented using Programmable Bootstrapping (PBS). PBS is a powerful technique but will require significantly more computing resources, and thus time, compared to simpler encrypted operations such as matrix multiplications, convolution, or additions.

Furthermore, the cost of PBS will depend on the bit-width of the compiled circuit. Every additional bit in the maximum bit-width raises the complexity of the PBS by a significant factor. It may be of interest to the model developer, then, to determine the bit-width of the circuit and the amount of PBS it performs.

This can be done by inspecting the MLIR code produced by the Compiler:

print(quantized_numpy_module.fhe_circuit.mlir)
MLIR
--------------------------------------------------------------------------------
module {
  func.func @main(%arg0: tensor<1x2x!FHE.eint<15>>) -> tensor<1x2x!FHE.eint<15>> {
    %cst = arith.constant dense<16384> : tensor<1xi16>
    %0 = "FHELinalg.sub_eint_int"(%arg0, %cst) : (tensor<1x2x!FHE.eint<15>>, tensor<1xi16>) -> tensor<1x2x!FHE.eint<15>>
    %cst_0 = arith.constant dense<[[-13, 43], [-31, 63], [1, -44], [-61, 20], [31, 2]]> : tensor<5x2xi16>
    %cst_1 = arith.constant dense<[[-45, 57, 19, 50, -63], [32, 37, 2, 52, -60], [-41, 25, -1, 31, -26], [-51, -40, -53, 0, 4], [20, -25, 56, 54, -23]]> : tensor<5x5xi16>
    %cst_2 = arith.constant dense<[[-56, -50, 57, 37, -22], [14, -1, 57, -63, 3]]> : tensor<2x5xi16>
    %c16384_i16 = arith.constant 16384 : i16
    %1 = "FHELinalg.matmul_eint_int"(%0, %cst_2) : (tensor<1x2x!FHE.eint<15>>, tensor<2x5xi16>) -> tensor<1x5x!FHE.eint<15>>
    %cst_3 = tensor.from_elements %c16384_i16 : tensor<1xi16>
    %cst_4 = tensor.from_elements %c16384_i16 : tensor<1xi16>
    %2 = "FHELinalg.add_eint_int"(%1, %cst_4) : (tensor<1x5x!FHE.eint<15>>, tensor<1xi16>) -> tensor<1x5x!FHE.eint<15>>
    %cst_5 = arith.constant

: tensor<5x32768xi64>
    %cst_6 = arith.constant dense<[[0, 1, 2, 3, 4]]> : tensor<1x5xindex>
    %3 = "FHELinalg.apply_mapped_lookup_table"(%2, %cst_5, %cst_6) : (tensor<1x5x!FHE.eint<15>>, tensor<5x32768xi64>, tensor<1x5xindex>) -> tensor<1x5x!FHE.eint<15>>
    %4 = "FHELinalg.matmul_eint_int"(%3, %cst_1) : (tensor<1x5x!FHE.eint<15>>, tensor<5x5xi16>) -> tensor<1x5x!FHE.eint<15>>
    %5 = "FHELinalg.add_eint_int"(%4, %cst_3) : (tensor<1x5x!FHE.eint<15>>, tensor<1xi16>) -> tensor<1x5x!FHE.eint<15>>
    %cst_7 = arith.constant

: tensor<5x32768xi64>
    %6 = "FHELinalg.apply_mapped_lookup_table"(%5, %cst_7, %cst_6) : (tensor<1x5x!FHE.eint<15>>, tensor<5x32768xi64>, tensor<1x5xindex>) -> tensor<1x5x!FHE.eint<15>>
    %7 = "FHELinalg.matmul_eint_int"(%6, %cst_0) : (tensor<1x5x!FHE.eint<15>>, tensor<5x2xi16>) -> tensor<1x2x!FHE.eint<15>>
    return %7 : tensor<1x2x!FHE.eint<15>>

  }
}
--------------------------------------------------------------------------------

There are several calls to FHELinalg.apply_mapped_lookup_table and FHELinalg.apply_lookup_table. These calls apply PBS to the cells of their input tensors. In the listing above, there are two such calls, both taking a tensor<1x5x!FHE.eint<15>> input, so PBS is applied 10 times.
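A rough way to count these table-lookup operations from Python, as a sketch rather than an official API, is to search the MLIR string for the corresponding operation names:

mlir = quantized_numpy_module.fhe_circuit.mlir

# Each of these MLIR operations applies a PBS to every cell of its input tensor
n_tlu_ops = mlir.count("FHELinalg.apply_mapped_lookup_table") + mlir.count(
    "FHELinalg.apply_lookup_table"
)
print("Number of table-lookup operations:", n_tlu_ops)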

Retrieving the bit-width of the circuit is then simply:

print(quantized_numpy_module.fhe_circuit.graph.maximum_integer_bit_width())

Decreasing the number of bits and the number of PBS applications induces large reductions in the computation time of the compiled circuit.

Serialization

Concrete ML has support for serializing all available built-in models. Using this feature, one can dump a fitted and compiled model into a JSON string or file. The estimator can then be loaded back using the JSON object.

Saving Models

All built-in models provide the following methods:

  • dumps: dumps the model as a string.

  • dump: dumps the model into a file.

For example, a logistic regression model can be dumped in a string as below.

Similarly, it can be dumped into a file.

Alternatively, Concrete ML provides two equivalent global functions.

Some parameters used for instantiating Quantized Neural Network models are not supported for serialization. In particular, one cannot serialize a model that was instantiated using callable objects for the train_split and predict_nonlinearity parameters or with callbacks being enabled.

Loading Models

Loading a built-in model is possible through the following functions:

  • loads: loads the model from a string.

  • load: loads the model from a file.

A loaded model is required to be compiled once again in order for a user to be able to execute the inference in FHE or with simulation. This is because the underlying FHE circuit is currently not serialized. This step is not required when FHE mode is disabled.

The above logistic regression model can therefore be loaded as below.

Quantization

Quantization is the process of constraining an input from a continuous or otherwise large set of values (such as real numbers) to a discrete set (such as integers).

This means that some accuracy in the representation is lost (e.g., a simple approach is to eliminate least-significant bits). In many cases in machine learning, it is possible to adapt the models to give meaningful results while using these smaller data types. This significantly reduces the number of bits necessary for intermediary results during the execution of these machine learning models.

Since FHE is currently limited to 16-bit integers, it is necessary to quantize models to make them compatible. As a general rule, the smaller the bit-width of integer values used in models, the better the FHE performance. This trade-off should be taken into account when designing models, especially neural networks.

Overview of quantization in Concrete ML

Quantization implemented in Concrete ML is applied in two ways:

  1. Built-in models apply quantization internally and the user only needs to configure some quantization parameters. This approach requires little work by the user but may not be a one-size-fits-all solution for all types of models. The final quantized model is FHE-friendly and ready to predict over encrypted data. In this setting, Post-Training Quantization (PTQ) is used for linear models, data quantization is used for tree-based models and, finally, Quantization Aware Training (QAT) is included in the built-in neural network models.

While Concrete ML quantizes machine learning models, the data that the client has is often in floating point. Concrete ML models provide APIs to quantize inputs and de-quantize outputs.

Note that the floating point input is quantized in the clear, meaning it is converted to integers before being encrypted. The model's outputs are also integers and decrypted before de-quantization.

Basics of quantization

Quantization special cases

Machine learning acceleration solutions are often based on integer computation of activations. To make quantization computations hardware-friendly, a popular approach is to ensure that scales are powers-of-two, which allows the replacement of the division in the equations above with a shift-right operation. TFHE also has a fast primitive for right bit-shift that enables acceleration in the special case of power-of-two scales.

Configuring model quantization parameters

Built-in models provide a simple interface for configuring quantization parameters, most notably the number of bits used for inputs, model weights, intermediary values, and output values.

For linear models, n_bits is used to quantize both model inputs and weights. Depending on the number of features, you can use a single integer value for the n_bits parameter (e.g., a value between 2 and 7). When the number of features is high, the n_bits parameter should be decreased if you encounter compilation errors. It is also possible to quantize inputs and weights with different numbers of bits by passing a dictionary to n_bits containing the op_inputs and op_weights keys.
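For instance, a short sketch of both options (the bit-width values are arbitrary):

from concrete.ml.sklearn import LogisticRegression

# A single integer: the same bit-width is used for inputs and weights
model_single = LogisticRegression(n_bits=3)

# A dictionary: inputs and weights are quantized with different bit-widths
model_mixed = LogisticRegression(n_bits={"op_inputs": 4, "op_weights": 2})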

Tree-based models can directly control the accumulator bit-width used. If 6 or 7 bits are not sufficient to obtain good accuracy on your data-set, one option is to use an ensemble model (RandomForest or XGBoost) and increase the number of trees in the ensemble. This, however, will have a detrimental impact on FHE execution speed.

For built-in neural networks, the maximum accumulator bit-width cannot be precisely controlled. Using many input features and a high number of bits is beneficial for model accuracy, but it can conflict with the 16-bit accumulator constraint. Finding the best quantization parameters to maximize accuracy, while keeping the accumulator size down, can only be accomplished through experimentation.

Quantizing model inputs and outputs

The models implemented in Concrete ML provide features to let the user quantize the input data and de-quantize the output data.

Here is a simple example showing how to perform inference, starting from float values and ending up with float values. The FHE engine that is compiled for ML models does not support data batching.
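A minimal sketch, assuming quantized_module is the result of one of the compile functions shown earlier and that a single (un-batched) example is used:

import numpy

# A single float example; the number of features must match the
# representative input-set used at compilation time (placeholder here)
x = numpy.random.randn(1, 12)

# Quantize the float input to integers
q_x = quantized_module.quantize_input(x)

# Run the integer computation, here in FHE
q_y = quantized_module.quantized_forward(q_x, fhe="execute")

# De-quantize the integer result back to floats
y = quantized_module.dequantize_output(q_y)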

Alternatively, the forward method groups the quantization, FHE execution and de-quantization steps all together.
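For instance, the single call below is a sketch equivalent to the three steps above:

# Quantization, FHE execution and de-quantization in a single call
y = quantized_module.forward(x, fhe="execute")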

Resources

Set Up the Project

Concrete ML is a Python library, so Python should be installed to develop Concrete ML. v3.8 and v3.9 are the only supported versions. Concrete ML also uses Poetry and Make.

First of all, you need to git clone the project:

Automatic installation

For Windows users, the setup_os_deps.sh script does not install dependencies because of how many different installation methods there are due to the lack of a single package manager.

Manual installation

Python

Poetry

make

The dev tools use make to launch various commands.

On Linux, you can install make from your distribution's preferred package manager.

On macOS, you can install a more recent version of make via brew:

In the following sections, be sure to use the proper make tool for your system: make, gmake, or other.

Cloning the repository

To get the source code of Concrete ML, clone the code repository using the link for your favorite communication protocol (ssh or https).

Setting up environment on your host OS

We are going to make use of virtual environments. This helps to keep the project isolated from other Python projects in the system. The following commands will create a new virtual environment under the project directory and install dependencies to it.

The following command will not work on Windows if you don't have Poetry >= 1.2.

Activating the environment

Finally, activate the newly created environment using the following command:

macOS or Linux

Windows

Setting up environment on Docker

Docker automatically creates and sources a venv in ~/dev_venv/

The venv persists thanks to volumes. It also creates a volume for ~/.cache to speedup later reinstallations. You can check which Docker volumes exist with:

You can still run all make commands inside Docker (to update the venv, for example). Be mindful of the current venv being used (the name in parentheses at the beginning of your command prompt).

Leaving the environment

After your work is done, you can simply run the following command to leave the environment:

Syncing environment with the latest changes

From time to time, new dependencies will be added to the project or the old ones will be removed. The command below will make sure the project has the proper environment, so run it regularly!

Troubleshooting your environment

in your OS

If you are having issues, consider using the dev Docker exclusively (unless you are working on OS-specific bug fixes or features).

Here are the steps you can take on your OS to try and fix issues:

in Docker

Here are the steps you can take in your Docker to try and fix issues:

If the problem persists at this point, you should ask for help. We're here and ready to assist!

Pruning

Overview of pruning in Concrete ML

Pruning is used in Concrete ML for two types of neural networks:

Basics of pruning

In neural networks, a neuron computes a linear combination of inputs and learned weights, then applies an activation function.

The neuron computes:
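$$y = f\left(\sum_i w_i x_i\right)$$

where, with notation assumed here for illustration, $x_i$ are the inputs, $w_i$ the learned weights, and $f$ the activation function.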

When building a full neural network, each layer will contain multiple neurons, which are connected to the inputs or to the neuron outputs of a previous layer.

Fixing some of the weights to 0 makes the network graph look more similar to the following:

Pruning in practice

Advanced Features

Concrete ML provides features for advanced users to adjust cryptographic parameters generated by the Concrete stack. This allows users to identify the best trade-off between latency and performance for their specific machine learning models.

Approximate computations

Concrete ML makes use of table lookups (TLUs) to represent any non-linear operation (e.g., a sigmoid). TLUs are implemented through the Programmable Bootstrapping (PBS) operation, which applies a non-linear operation in the cryptographic realm.

The result of TLU operations is obtained with a specific error probability. Concrete ML offers the possibility to set this error probability, which influences the cryptographic parameters. The higher the success rate, the more restrictive the parameters become. This can affect both key generation and, more significantly, FHE execution time.

Concrete ML has a simulation mode where the impact of approximate computation of TLUs on the model accuracy can be determined. The simulation is much faster, speeding up model development significantly. The behavior in simulation mode is representative of the behavior of the model on encrypted data.

In Concrete ML, there are three different ways to define the error probability:

p_error and global_p_error are two competing parameters, in the sense that they both have an impact on the choice of cryptographic parameters. It is forbidden in Concrete ML to set both p_error and global_p_error simultaneously.

An error probability for an individual TLU

The first way to set error probabilities in Concrete ML is at the local level, by directly setting the probability of error of each individual TLU. This probability is referred to as p_error. A given PBS operation has a 1 - p_error chance of being successful. The successful evaluation here means that the value decrypted after FHE evaluation is exactly the same as the one that would be computed in the clear.

Here is a visualization of the effect of the p_error on a neural network model with a p_error = 0.1 compared to execution in the clear (i.e., no error):

Varying p_error in the one hidden-layer neural network above produces the following inference times. Increasing p_error to 0.1 halves the inference time with respect to a p_error of 0.001. In the graph above, the decision boundary becomes noisier with a higher p_error.

Users have the possibility to change this p_error by passing an argument to the compile function of any of the models. Here is an example:
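A minimal sketch (the data-set, the model and the p_error value are placeholders):

from sklearn.datasets import make_circles
from concrete.ml.sklearn import RandomForestClassifier

X, y = make_circles(n_samples=1000, noise=0.1, factor=0.6, random_state=0)
clf = RandomForestClassifier(n_bits=2, n_estimators=10, max_depth=5)
clf.fit(X, y)

# Compile with a custom error probability for each individual TLU
clf.compile(X, p_error=0.1)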

A global error probability for the entire model

A global_p_error is also available and defines the probability of success for the entire model. Here, the p_error for every PBS is computed internally in Concrete such that the global_p_error is reached.

There might be cases where the user encounters a No cryptography parameter found error message. Increasing the p_error or the global_p_error in this case might help.

Usage is similar to the p_error parameter:
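A sketch with the built-in XGBClassifier (the data-set and hyper-parameters are placeholders):

from sklearn.datasets import make_classification
from concrete.ml.sklearn import XGBClassifier

X, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)
clf = XGBClassifier(n_bits=3, n_estimators=10)
clf.fit(X, y)

# Compile with an error probability defined over the entire circuit
clf.compile(X, global_p_error=0.1)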

In the above example, the XGBClassifier run in FHE has a 1/10 probability of having a shifted output value compared to the expected value. The shift is relative to the expected value, so even if the result is different, it should be around the expected value.

Using default error probability

If neither p_error nor global_p_error is set, Concrete ML employs p_error = 2^-40 by default.

Searching for the best error probability

Currently, finding a good p_error value a priori is not possible, as it is difficult to determine the impact of the TLU error on the output of a neural network. Concrete ML provides a tool to find a good p_error value that improves inference speed while maintaining accuracy. The method is based on binary search and evaluates the latency/accuracy trade-off iteratively.

With this optimal p_error, accuracy is maintained while execution time is improved by a factor of 1.51.

Please note that the default setting for the search interval is restricted to a range of 0.0 to 0.9. Increasing the upper bound beyond this range may result in longer execution times, especially when p_error≈1.

Rounded activations and quantizers

The rounding operation removes the least-significant bits of the accumulator before the activation and quantization table lookup is applied. The number of bits removed and the rounding itself can be written as follows (a sketch, with notation assumed here for illustration):
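$$\mathsf{lsbs\_to\_remove} = P - P'$$

$$\mathrm{round}(x) = \left\lfloor \frac{x}{2^{\mathsf{lsbs\_to\_remove}}} \right\rceil \cdot 2^{\mathsf{lsbs\_to\_remove}}$$

where $P$ is the bit-width of the accumulator, $P'$ is the desired TLU input bit-width, $x$ is the accumulator value, and $\lfloor \cdot \rceil$ denotes rounding to the nearest integer.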

In Concrete ML, this feature is currently implemented for custom neural networks through the compile functions, including

  • concrete.ml.torch.compile_torch_model,

  • concrete.ml.torch.compile_onnx_model and

  • concrete.ml.torch.compile_brevitas_qat_model.

The rounding_threshold_bits argument can be set to a specific bit-width. It is important to choose an appropriate bit-width threshold to balance the trade-off between speed and accuracy. By reducing the bit-width of intermediate tensors, it is possible to speed-up computations while maintaining accuracy.

To find the best trade-off between speed and accuracy, it is recommended to experiment with different thresholds and check the accuracy on an evaluation set after compiling the model.
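For example, a sketch reusing torch_model and torch_input from the snippets above (the 6-bit threshold is arbitrary):

from concrete.ml.torch.compile import compile_torch_model

quantized_module = compile_torch_model(
    torch_model,
    torch_input,
    n_bits=3,
    rounding_threshold_bits=6,
)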

In practice, the process looks like this:

  1. Set a rounding_threshold_bits to a relatively high P. Say, 8 bits.

  2. Check the accuracy

  3. Update P = P - 1

  4. repeat steps 2 and 3 until the accuracy loss is above a certain, acceptable threshold.

Seeing compilation information

By using verbose = True and show_mlir = True during compilation, the user receives a lot of information from Concrete. These options are, however, mainly meant for power-users, so they may be hard to understand.

Here, one will see:

  • the computation graph (typically):

  • the MLIR, produced by Concrete:

  • information from the optimizer (including cryptographic parameters):

In this latter optimization, the following information will be provided:

  • The bit-width ("6-bit integers") used in the program: for the moment, the compiler only supports a single precision (i.e., that all PBS are promoted to the same bit-width - the largest one). Therefore, this bit-width predominantly drives the speed of the program, and it is essential to reduce it as much as possible for faster execution.

  • The maximal norm2 ("7 manp"), which has an impact on the crypto parameters: The larger this norm2, the slower PBS will be. The norm2 is related to the norm of some constants appearing in your program, in a way which will be clarified in the Concrete documentation.

  • The probability of error of an individual PBS, which was requested by the user ("3.300000e-02 error per pbs call" in User Config).

  • The probability of error of the full circuit, which was requested by the user ("1.000000e+00 error per circuit call" in User Config). Here, the probability 1 stands for "not used", since we had set the individual probability via p_error.

  • The probability of error of an individual PBS, which is found by the optimizer ("1/30 errors (3.234529e-02)").

  • The probability of error of the full circuit which is found by the optimizer ("1/10 errors (9.390887e-02)").

  • An estimation of the cost of the circuit ("4.214000e+02 Millions Operations"): Large values indicate a circuit that will execute more slowly.

Here is some further information about cryptographic parameters:

  • 1x glwe_dimension

  • 2**11 polynomial (2048)

  • 762 lwe dimension

  • keyswitch l,b=5,3

  • blindrota l,b=2,15

  • wopPbs : false

This optimizer feedback is a work in progress and will be modified and improved in future releases.

Linear Regression example
Logistic Regression example
Linear Support Vector Regression example
Linear SVM classification
Poisson Regression example
Generalized Linear Models comparison
Decision Tree Classifier
Decision Tree Regressor
XGBoost/Random Forest example
XGBoost Regression example
NN Iris example
NN MNIST example
Classifier comparison
Regressor comparison

These examples illustrate the basic usage of Concrete ML to build various types of neural networks. They use simple data-sets, focusing on the syntax and usage of Concrete ML. For examples showing how to train high-accuracy models on more complex data-sets, see the Demos and Tutorials section.

The examples listed here make use of simulation to perform evaluation over large test sets. Since FHE execution can be slow, only a few FHE executions can be performed. The correctness guarantees of Concrete ML ensure that accuracy measured with simulation is the same as that which will be obtained during FHE execution.

2. Custom convolutional NN on the Digits data-set

Following the step-by-step guide, this notebook implements a Quantization Aware Training convolutional neural network on the MNIST data-set. It uses 3-bit weights and activations, giving a 7-bit accumulator.

Quantization Aware Training and pruning introduce specific hyper-parameters that influence the accumulator sizes. It is possible to choose quantization and pruning configurations that reduce the accumulator size. A trade-off between latency and accuracy can be obtained by varying these hyper-parameters as described in the deep learning design guide.

While un-structured pruning is used to ensure the accumulator bit-width stays low, structured pruning can eliminate entire neurons from the network. Many neural networks are over-parametrized (since this enables easier training) and some neurons can be removed. Structured pruning, applied to a trained network as a fine-tuning step, can be applied to built-in neural networks using the prune helper function. To apply structured pruning to custom models, it is recommended to use the torch-pruning package.

Reducing the bit-width of the inputs to the Table Lookup (TLU) operations is a major source of improvements in the latency. Post-training, it is possible to leverage some properties of the fused activation and quantization functions expressed in the TLUs to further reduce the accumulator. This is achieved through the rounded PBS feature as described in the rounded activations and quantizers section. Adjusting the rounding amount, relative to the initial accumulator size, can bring large improvements in latency while maintaining accuracy.

Finally, the TFHE scheme exposes a TLU error probability parameter that has an impact on crypto-system parameters that influence latency. A higher probability of TLU error results in faster computations but may reduce accuracy. One can think of the error of obtaining $T[x]$ as a Gaussian distribution centered on $x$: $T[x]$ is obtained with probability 1 - p_error, while $T[x-1]$ and $T[x+1]$ are obtained with much lower probability, etc. In deep NNs, these types of errors can be tolerated up to some point. See the approximate computations section above, and more specifically the usage example of the p_error search.

For a complete example, see the CIFAR classification tutorial.

We provide scripts that leverage boto3 to deploy any Concrete ML model to AWS. The first required step is to properly set up AWS CLI on your system, which can be done by following the instructions in the AWS Documentation. To create Access keys to configure AWS CLI, go to the appropriate panel on the AWS website.

No code is required to run the server but each client is specific to the use-case, even if the workflow stays the same. To see how to create your client, refer to our examples or the client-server notebook.

The hybrid model deployment API provides an easy way to integrate the standard deployment procedure into neural network style models that are compiled with compile_brevitas_qat_model or compile_torch_model.

The save_and_clear_private_info function serializes the FHE circuits corresponding to the various parts of the model that were chosen to be moved server-side. It also saves the client-side model, removing the weights of the layers that are transferred server-side. Furthermore, it saves all necessary information required to serve these sub-models with FHE, using the FHEModelDev class.

The FHEModelServer class should be used to create a server application that creates end-points to serve these sub-models:

For more information about serving FHE models, see the client/server section.

Next, the client application must obtain the parameters necessary to encrypt and quantize data, as detailed in the client/server documentation.

The simulation mode of Concrete ML provides a way to evaluate, using clear data, the results that ML models produce on encrypted data. The simulation includes any probabilistic behavior FHE may induce. The simulation is implemented with Concrete's simulation feature.

Simulation is much faster than FHE execution. This allows for faster debugging and model optimization. For example, this was used for the red/blue contours in the Classifier Comparison notebook, as computing in FHE for the whole grid and all the classifiers would take significant time.

The error table lookups are only supported on circuits with up to 16-bit integers indicates that the 16-bit limit on the input of the Table Lookup operation has been exceeded. To pinpoint the model layer that causes the error, Concrete ML provides the bitwidth_and_range_report helper function. First, the model must be compiled so that it can be simulated. Then, calling the function on the module above returns the following:

For custom neural networks with more complex topology, obtaining FHE-compatible models with good accuracy requires QAT. Concrete ML offers the possibility for the user to perform quantization before compiling to FHE. This can be achieved through a third-party library that offers QAT tools, such as Brevitas for PyTorch. In this approach, the user is responsible for implementing a full-integer model, respecting FHE constraints. Please refer to the FHE-friendly models documentation for tips on designing FHE neural networks.

Let $[\alpha, \beta]$ be the range of a value to quantize where $\alpha$ is the minimum and $\beta$ is the maximum. To quantize a range of floating point values (in $\mathbb{R}$) to integer values (in $\mathbb{Z}$), the first step is to choose the data type that is going to be used. Many ML models work with weights and activations represented as 8-bit integers, so this will be the value used in this example. Knowing the number of bits that can be used for a value in the range $[\alpha, \beta]$, the scale $S$ can be computed:

$$S = \frac{\beta - \alpha}{2^n - 1}$$

where $n$ is the number of bits ($n \leq 8$). In the following, $n = 8$ is assumed.

In practice, the quantization scale is then $S = \frac{\beta - \alpha}{255}$. This means the gap between consecutive representable values cannot be smaller than $S$, which, in turn, means there can be a substantial loss of precision. Every interval of length $S$ will be represented by a value within the range $[0..255]$.
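As an illustration, here is a small NumPy sketch of this quantization scheme; the value range and the sample data are arbitrary and chosen only for the example.

import numpy

# Arbitrary float values; alpha and beta are the minimum and maximum of the range
values = numpy.array([-1.0, -0.3, 0.0, 0.5, 2.0])
alpha, beta = values.min(), values.max()

n = 8  # number of bits
scale = (beta - alpha) / (2**n - 1)

# Quantize to integers in [0..255], then de-quantize back to floats
q_values = numpy.round((values - alpha) / scale).astype(numpy.int64)
dq_values = q_values * scale + alpha

print(q_values)   # integers in [0..255]
print(dq_values)  # approximations of the original float values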

The other important parameter from this quantization schema is the zero point value. This essentially brings the 0 floating point value to a specific integer. If the quantization scheme is asymmetric (quantized values are not centered in 0), the resulting zero point will be in $\mathbb{Z}$.

When using quantized values in a matrix multiplication or convolution, the equations for computing the result become more complex. The IntelLabs Distiller documentation provides a more detailed explanation of the maths used to quantize values and how to keep computations consistent.

For linear models, the quantization is done post-training. Thus, the model is trained in floating point, and then, the best integer weight representations are found, depending on the distribution of inputs and weights. For these models, the user selects the value of the n_bits parameter.

For tree-based models, the training and test data is quantized. The maximum accumulator bit-width for a model trained with n_bits=n for this type of model is known beforehand: it will need n+1 bits. Through experimentation, it was determined that, in many cases, a value of 5 or 6 bits gives the same accuracy as training in floating point and values above n=7 do not increase model performance (but rather induce a strong slowdown).

For built-in neural networks, several linear layers are used. Thus, the outputs of a layer are used as inputs to a new layer. Built-in neural networks use Quantization Aware Training. The parameters controlling the maximum accumulator bit-width are the number of weights and activation bits (module__n_w_bits, module__n_a_bits), but also the pruning factor. This factor is determined automatically by specifying a desired accumulator bit-width module__n_accum_bits and, optionally, a multiplier factor, module__n_hidden_neurons_multiplier.
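As an illustration, here is a sketch of how these hyper-parameters can be passed to a built-in neural network; the values below are placeholders and other constructor arguments may be needed depending on the task:

from concrete.ml.sklearn import NeuralNetClassifier

params = {
    "module__n_layers": 3,
    "module__n_w_bits": 3,        # weight bit-width
    "module__n_a_bits": 3,        # activation bit-width
    "module__n_accum_bits": 8,    # desired accumulator bit-width, drives pruning
    "module__n_hidden_neurons_multiplier": 4,
    "max_epochs": 10,
}

model = NeuralNetClassifier(**params)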

In a client/server setting, the client is responsible for quantizing inputs before sending them, encrypted, to the server. The client must then de-quantize the decrypted integer results received from the server. See the client/server section for more details.


Several files are tracked by Git LFS. While a few are required for running some tests, most of them are used for benchmarking and use case examples. By default, git clone downloads all LFS files, which can add up to several hundreds of MB to the directory. It is however possible to disable this behavior by running the following command instead:

A simple way to have everything installed is to use the development Docker (see the Docker setup section of this guide). On Linux and macOS, you have to run the script in ./script/make_utils/setup_os_deps.sh. Specify the --linux-install-python flag if you want to install python3.8 as well on apt-enabled Linux distributions. The script should install everything you need for Docker and bare OS development (you can first review the content of the file to check what it will do).

The first step is to install Python (as some of the dev tools depend on it), then Poetry. In addition to installing Python, you are still going to need the following software available on path on Windows, as some of the basic dev tools depend on them:

git

jq

make

Development on Windows only works with the Docker environment. Follow the Docker setup described in this guide.

To manually install Python, you can follow the official guide (alternatively, you can google how to install Python 3.8 (or 3.9)).

Poetry is used as the package manager. It drastically simplifies dependency and environment management. You can follow the official guide to install it.

It is possible to install gmake as make. Check this for more info.

On Windows, check the corresponding installation instructions.

At this point, you should consider using Docker as nobody will have the exact same setup as you. If, however, you need to develop on your OS directly, you can set up the environment on your host OS as described in this guide.

Pruning is a method to reduce neural network complexity, usually applied in order to reduce the computation cost or memory size. Pruning is used in Concrete ML to control the size of accumulators in neural networks, thus making them FHE-compatible. See the FHE constraints section for an explanation of accumulator bit-width constraints.

Built-in neural networks include a pruning mechanism that can be parameterized by the user. The pruning type is based on L1-norm. To comply with FHE constraints, Concrete ML uses unstructured pruning, as the aim is not to eliminate neurons or convolutional filters completely, but to decrease their accumulator bit-width.

Custom neural networks, to work well under FHE constraints, should include pruning. When implemented with PyTorch, you can use the framework's pruning mechanism (e.g., L1-Unstructured) to good effect.

For every neuron shown in each layer of the figure above, the linear combinations of inputs and learned weights are computed. Depending on the values of the inputs and weights, the sum - which for Concrete ML neural networks is computed with integers - can take a range of different values.

To respect the bit-width constraint of the FHE table lookups, the values of the accumulator must remain small to be representable using a maximum of 16 bits. In other words, the values must be between 0 and $2^{16}-1$.

Pruning a neural network entails fixing some of the weights to be zero during training. This is advantageous to meet FHE constraints, as irrespective of the distribution of the inputs, multiplying these input values by 0 does not increase the accumulator value.

While pruning weights can reduce the prediction performance of the neural network, studies show that a high level of pruning (above 50%) can often be applied. This is how Concrete ML uses pruning in its built-in fully connected neural networks.

In the formula above, in the worst case, the maximum number of inputs and weights that can be summed without the result exceeding the maximum allowed bit-width (the maximum precision allowed) can be derived from the weight bit-width. The worst case scenario occurs when all inputs and weights are equal to their maximal value, which bounds the number of elements that can appear in the multi-sums.

The distribution of the weights of a neural network is Gaussian, with many weights either 0 or having a small value. This enables exceeding the worst case number of active neurons without having to risk overflowing the bit-width. In built-in neural networks, the parameter n_hidden_neurons_multiplier is multiplied with this worst-case bound to determine the total number of non-zero weights that should be kept in a neuron.

setting p_error, the error probability of an individual TLU (see above)

setting global_p_error, the error probability of the full circuit (see above)

not setting p_error nor global_p_error, and using default parameters (see above)

For simplicity, it is best to use the default error probability, irrespective of the type of model. Especially for deep neural networks, default values may be too pessimistic, reducing computation speed without any improvement in accuracy. For deep neural networks, some TLU errors might not affect the accuracy of the network, so p_error can be safely increased (e.g., see the CIFAR classification examples).

Table: p_error vs. inference time (ms).

The speedup depends on model complexity, but, in an iterative approach, it is possible to search for a good value of p_error to obtain a speedup while maintaining good accuracy. Concrete ML provides a tool to find a good value for p_error based on binary search.

If the p_error value is specified and simulation is enabled, the run will take into account the randomness induced by the choice of p_error. This results in statistical similarity to the FHE evaluation.

To speed up neural networks, a rounding operator can be applied on the accumulators of linear and convolution layers to retain only the most significant bits, on which the activation and quantization are applied. The accumulator is represented using $L$ bits, and $P$ is the desired input bit-width of the TLU operation that computes the activation and quantization.

First, compute $t$ as the difference between $L$, the actual bit-width of the accumulator, and $P$:

$$t = L - P$$

Then, the rounding operation is defined as:

$$\mathrm{round\_to\_t\_bits}(x, t) = \left\lfloor \frac{x}{2^t} \right\rceil \cdot 2^t$$

where $x$ is the input number, and $\lfloor \cdot \rceil$ denotes the operation that rounds to the nearest integer.
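The following is a small standalone sketch of this rounding operation (illustrative NumPy code, not the Concrete ML implementation):

import numpy

def round_to_t_bits(x, t):
    # Round x to the nearest multiple of 2**t
    return int(numpy.rint(x / 2**t)) * 2**t

# With an accumulator of L = 8 bits and a desired TLU input of P = 5 bits, t = 3
print(round_to_t_bits(37, 3))  # 40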

The rounding_threshold_bits parameter only works in FHE for TLU input bit-widths ($P$) less than or equal to 8 bits.
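As a usage sketch, the parameter is passed at compile time; the model, data, and the value of 6 bits below are illustrative and assume the compile_torch_model entry point described in the deep learning section:

import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# A small example network (illustrative only)
torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
)
X = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

# Keep only the most significant bits of the accumulators before each TLU
quantized_module = compile_torch_model(
    torch_model, X, n_bits=6, rounding_threshold_bits=6
)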

An example of such an implementation is available in evaluate_torch_cml.py and CifarInFheWithSmallerAccumulators.ipynb.

MLPClassifier
MLPRegressor
LinearRegression
LogisticRegression
LinearSVC
LinearSVR
PoissonRegressor
TweedieRegressor
GammaRegressor
Lasso
Ridge
ElasticNet
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification:
X, y = make_classification()

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

# Instantiate, train and compile the model
model = LogisticRegression()
model.fit(X_train, y_train)
model.compile(X_train)

# Run the inference in FHE
y_pred_fhe = model.predict(X_test, fhe="execute")

# Dump the model in a string
dumped_model_str = model.dumps()
from pathlib import Path

dumped_model_path = Path("logistic_regression_model.json")

# Any kind of file-like object can be used 
with dumped_model_path.open("w") as f:

    # Dump the model in a file
    model.dump(f)
from concrete.ml.common.serialization.dumpers import dump, dumps

# Dump the model in a string
dumped_model_str = dumps(model)

# Any kind of file-like object can be used 
with dumped_model_path.open("w") as f:

    # Dump the model in a file
    dump(model, f)
import numpy
from concrete.ml.common.serialization.loaders import load, loads

# Load the model from a string
loaded_model = loads(dumped_model_str)

# Any kind of file-like object can be used 
with dumped_model_path.open("r") as f:

    # Load the model from a file
    loaded_model = load(f)

# Compile the model
loaded_model.compile(X_train)

# Run the inference in FHE using the loaded model
y_pred_fhe_loaded = loaded_model.predict(X_test, fhe="execute")

print("Predictions are equal:", numpy.array_equal(y_pred_fhe, y_pred_fhe_loaded))

# Output:
#   Predictions are equal: True
Quantization maps a range of real values $[\alpha, \beta] \subset \mathbb{R}$ to integers in $\mathbb{Z}$, using a scale $S$:

$$S = \frac{\beta - \alpha}{2^n - 1}$$

where $n$ is the number of quantization bits ($n \leq 8$). For example, with $n = 8$, $S = \frac{\beta - \alpha}{255}$ and the quantized values lie in $[0..255]$. The zero-point $Z_p \in \mathbb{Z}$ is:

$$Z_p = \mathtt{round} \left(- \frac{\alpha}{S} \right)$$
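As a quick numerical illustration of these formulas (a standalone sketch, not Concrete ML code):

import numpy

values = numpy.array([-1.2, 0.3, 2.5, 0.7])
alpha, beta = values.min(), values.max()
n = 8

# Scale and zero-point from the formulas above
S = (beta - alpha) / (2**n - 1)
Z_p = round(-alpha / S)

# Quantize to integers, then de-quantize back to floats
q_values = numpy.clip(numpy.rint(values / S) + Z_p, 0, 2**n - 1).astype(numpy.int64)
dequantized = (q_values - Z_p) * S
print(q_values, dequantized)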
# Assume 
#   quantized_module : QuantizedModule
#   x: numpy.ndarray (of float)

# Quantization is done in the clear
x_q = quantized_module.quantize_input(x)

# Forward in FHE (here with simulation)
q_y_proba = quantized_module.quantized_forward(x_q, fhe="simulate")

# De-quantization is done in the clear
y_proba = quantized_module.dequantize_output(q_y_proba)

# For classifiers with multi-class outputs, the arg max is done in the clear
y_pred = np.argmax(y_proba, 1)
# Assume 
#   quantized_module : QuantizedModule
#   x: numpy.ndarray (of float)

# Forward in FHE (here with simulation). Quantization and de-quantization steps are still done in 
# the clear 
y_proba = quantized_module.forward(x, fhe="simulate")

# For classifiers with multi-class outputs, the arg max is done in the clear
y_pred = np.argmax(y_proba, 1)
the advanced quantization guide
quantization uses powers-of-two scales
section
git clone https://github.com/zama-ai/concrete-ml
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/zama-ai/concrete-ml
# check for gmake
which gmake

# If you don't have it, it will error out, install gmake
brew install make

# recheck, now you should have gmake
which gmake
cd concrete-ml
make setup_env
source .venv/bin/activate
source .venv/Scripts/activate
docker volume ls
# Here we have dev_venv sourced
(dev_venv) dev_user@8e299b32283c:/src$ make setup_env
deactivate
make sync_env
# Try to install the env normally
make setup_env

# If you are still having issues, sync the environment
make sync_env

# If you are still having issues on your OS, delete the venv:
rm -rf .venv

# And re-run the env setup
make setup_env
# Try to install the env normally
make setup_env

# If you are still having issues, sync the environment
make sync_env

# If you are still having issues in Docker, delete the venv:
rm -rf ~/dev_venv/*

# Disconnect from Docker
exit

# And relaunch, the venv will be reinstalled
make docker_start

# If you are still out of luck, force a rebuild which will also delete the volumes
make docker_rebuild

# And start Docker, which will reinstall the venv
make docker_start


from concrete.ml.sklearn import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

x, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)

# Retrieve train and test sets
X_train, _, y_train, _ = train_test_split(x, y, test_size=10, random_state=42)

clf = XGBClassifier()
clf.fit(X_train, y_train)

# Here we set the p_error parameter
clf.compile(X_train, p_error=0.1)
# Here we set the global_p_error parameter
clf.compile(X_train, global_p_error=0.1)
from time import time

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from concrete.ml.search_parameters import BinarySearch
from concrete.ml.sklearn import DecisionTreeClassifier

x, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)

# Retrieve train and test sets
X_train, _, y_train, _ = train_test_split(x, y, test_size=10, random_state=42)

clf = DecisionTreeClassifier(random_state=42)

# Fit the model
clf.fit(X_train, y_train)

# Compile the model with the default `p_error`
fhe_circuit = clf.compile(X_train)

# Key Generation
fhe_circuit.client.keygen(force=False)

start_time = time()
y_pred = clf.predict(X_train, fhe="execute")
end_time = time()

print(f"With the default p_error≈0, the inference time is {(end_time - start_time) / 60:.2f} s")
# Output: With the default p_error≈0, the inference time is 0.89 s
print(f"Accuracy = {accuracy_score(y_pred, y_train):.2%}")
# Output: Accuracy = 100.00%

# Search for the largest `p_error` that provides
# the best compromise between accuracy and computational efficiency in FHE
search = BinarySearch(estimator=clf, predict="predict", metric=accuracy_score)
p_error = search.run(x=X_train, ground_truth=y_train, max_iter=10)

# Compile the model with the optimal `p_error`
fhe_circuit = clf.compile(X_train, p_error=p_error)

# Key Generation
fhe_circuit.client.keygen(force=False)

start_time = time()
y_pred = clf.predict(X_train, fhe="execute")
end_time = time()

print(
    f"With p_error={p_error:.5f}, the inference time becomes {(end_time - start_time) / 60:.2f} s"
)
# Output: With p_error=0.00043, the inference time becomes 0.56 s
print(f"Accuracy = {accuracy_score(y_pred, y_train): .2%}")
# Output: Accuracy = 100.00%
from concrete.ml.sklearn import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

x, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)

# Retrieve train and test sets
X_train, _, y_train, _ = train_test_split(x, y, test_size=10, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

clf.compile(X_train, verbose=True, show_mlir=True, p_error=0.033)
Computation Graph
-------------------------------------------------------------------------------------------------------------------------------
 %0 = _inputs                                  # EncryptedTensor<uint6, shape=(1, 4)>           ∈ [0, 63]
 %1 = transpose(%0)                            # EncryptedTensor<uint6, shape=(4, 1)>           ∈ [0, 63]
 %2 = [[0 0 0 1]]                              # ClearTensor<uint1, shape=(1, 4)>               ∈ [0, 1]
 %3 = matmul(%2, %1)                           # EncryptedTensor<uint6, shape=(1, 1)>           ∈ [0, 63]
 %4 = [[32]]                                   # ClearTensor<uint6, shape=(1, 1)>               ∈ [32, 32]
 %5 = less_equal(%3, %4)                       # EncryptedTensor<uint1, shape=(1, 1)>           ∈ [False, True]
 %6 = reshape(%5, newshape=[ 1  1 -1])         # EncryptedTensor<uint1, shape=(1, 1, 1)>        ∈ [False, True]
 %7 = [[[ 1]  [-1]]]                           # ClearTensor<int2, shape=(1, 2, 1)>             ∈ [-1, 1]
 %8 = matmul(%7, %6)                           # EncryptedTensor<int2, shape=(1, 2, 1)>         ∈ [-1, 1]
 %9 = reshape(%8, newshape=[ 2 -1])            # EncryptedTensor<int2, shape=(2, 1)>            ∈ [-1, 1]
%10 = [[1] [0]]                                # ClearTensor<uint1, shape=(2, 1)>               ∈ [0, 1]
%11 = equal(%10, %9)                           # EncryptedTensor<uint1, shape=(2, 1)>           ∈ [False, True]
%12 = reshape(%11, newshape=[ 1  2 -1])        # EncryptedTensor<uint1, shape=(1, 2, 1)>        ∈ [False, True]
%13 = [[[63  0]  [ 0 63]]]                     # ClearTensor<uint6, shape=(1, 2, 2)>            ∈ [0, 63]
%14 = matmul(%13, %12)                         # EncryptedTensor<uint6, shape=(1, 2, 1)>        ∈ [0, 63]
%15 = reshape(%14, newshape=[ 1  2 -1])        # EncryptedTensor<uint6, shape=(1, 2, 1)>        ∈ [0, 63]
return %15
MLIR
-------------------------------------------------------------------------------------------------------------------------------
module {
  func.func @main(%arg0: tensor<1x4x!FHE.eint<6>>) -> tensor<1x2x1x!FHE.eint<6>> {
    %cst = arith.constant dense<[[[63, 0], [0, 63]]]> : tensor<1x2x2xi7>
    %cst_0 = arith.constant dense<[[1], [0]]> : tensor<2x1xi7>
    %cst_1 = arith.constant dense<[[[1], [-1]]]> : tensor<1x2x1xi7>
    %cst_2 = arith.constant dense<32> : tensor<1x1xi7>
    %cst_3 = arith.constant dense<[[0, 0, 0, 1]]> : tensor<1x4xi7>
    %c32_i7 = arith.constant 32 : i7
    %0 = "FHELinalg.transpose"(%arg0) {axes = []} : (tensor<1x4x!FHE.eint<6>>) -> tensor<4x1x!FHE.eint<6>>
    %cst_4 = tensor.from_elements %c32_i7 : tensor<1xi7>
    %1 = "FHELinalg.matmul_int_eint"(%cst_3, %0) : (tensor<1x4xi7>, tensor<4x1x!FHE.eint<6>>) -> tensor<1x1x!FHE.eint<6>>
    %cst_5 = arith.constant dense<[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]> : tensor<64xi64>
    %2 = "FHELinalg.apply_lookup_table"(%1, %cst_5) : (tensor<1x1x!FHE.eint<6>>, tensor<64xi64>) -> tensor<1x1x!FHE.eint<6>>
    %3 = tensor.expand_shape %2 [[0], [1, 2]] : tensor<1x1x!FHE.eint<6>> into tensor<1x1x1x!FHE.eint<6>>
    %4 = "FHELinalg.matmul_int_eint"(%cst_1, %3) : (tensor<1x2x1xi7>, tensor<1x1x1x!FHE.eint<6>>) -> tensor<1x2x1x!FHE.eint<6>>
    %5 = tensor.collapse_shape %4 [[0, 1], [2]] : tensor<1x2x1x!FHE.eint<6>> into tensor<2x1x!FHE.eint<6>>
    %6 = "FHELinalg.add_eint_int"(%5, %cst_4) : (tensor<2x1x!FHE.eint<6>>, tensor<1xi7>) -> tensor<2x1x!FHE.eint<6>>
    %cst_6 = arith.constant dense<"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"> : tensor<2x64xi64>
    %cst_7 = arith.constant dense<[[0], [1]]> : tensor<2x1xindex>
    %7 = "FHELinalg.apply_mapped_lookup_table"(%6, %cst_6, %cst_7) : (tensor<2x1x!FHE.eint<6>>, tensor<2x64xi64>, tensor<2x1xindex>) -> tensor<2x1x!FHE.eint<6>>
    %8 = tensor.expand_shape %7 [[0, 1], [2]] : tensor<2x1x!FHE.eint<6>> into tensor<1x2x1x!FHE.eint<6>>
    %9 = "FHELinalg.matmul_int_eint"(%cst, %8) : (tensor<1x2x2xi7>, tensor<1x2x1x!FHE.eint<6>>) -> tensor<1x2x1x!FHE.eint<6>>
    return %9 : tensor<1x2x1x!FHE.eint<6>>
  }
}
Optimizer
-------------------------------------------------------------------------------------------------------------------------------
--- Circuit
  6 bits integers
  7 manp (maxi log2 norm2)
  388ms to solve
--- User config
  3.300000e-02 error per pbs call
  1.000000e+00 error per circuit call
--- Complexity for the full circuit
  4.214000e+02 Millions Operations
--- Correctness for each Pbs call
  1/30 errors (3.234529e-02)
--- Correctness for the full circuit
  1/10 errors (9.390887e-02)
--- Parameters resolution
  1x glwe_dimension
  2**11 polynomial (2048)
  762 lwe dimension
  keyswitch l,b=5,3
  blindrota l,b=2,15
  wopPbs : false
---

Documentation

Using GitBook

Documentation with GitBook is done mainly by pushing content on GitHub. GitBook then pulls the docs from the repository and publishes. In most cases, GitBook is just a mirror of what is available in GitHub.

There are, however, some use-cases where documentation can be modified directly in GitBook (and, then, push the modifications to GitHub), for example when the documentation is modified by a person outside of Zama. In this case, a GitHub branch is created, and a GitHub space is associated to it: modifications are done in this space and automatically pushed to the branch. Once the modifications have been completed, one can simply create a pull-request, to finally merge modifications on the main branch.

Using Sphinx

Documentation can alternatively be built using Sphinx:

make docs

The documentation contains both files written by hand by developers (the .md files) and files automatically created by parsing the source files.

Then, to open the docs, go to docs/_build/html/index.html or use the following command:

make open_docs

To build and open the docs at the same time, use:

make docs_and_open

Set Up Docker

Building the image

Once the image is built (with make docker_build), you can get inside the Docker environment using the following command:

make docker_start

# or build and start at the same time
make docker_build_and_start

# or equivalently but shorter
make docker_bas

After you finish your work, you can leave Docker by using the exit command or by pressing CTRL + D.

Importing ONNX

ONNX is becoming the standard exchange format for neural networks, which allows Concrete ML to be flexible while also making model representation manipulation easy. In addition, it allows for a straightforward mapping to NumPy operators, which are supported by Concrete, in order to use the Concrete stack's FHE-conversion capabilities.

Torch to NumPy conversion using ONNX

The diagram below gives an overview of the steps involved in the conversion of an ONNX graph to an FHE-compatible format (i.e., a format that can be compiled to FHE through Concrete).

All Concrete ML built-in models follow the same pattern for FHE conversion:

  1. The models are trained with sklearn or PyTorch.

  2. The Concrete ML ONNX parser checks that all the operations in the ONNX graph are supported and assigns reference NumPy operations to them. This step produces a NumpyModule.

  3. Quantization is performed on the NumpyModule, producing a QuantizedModule.

  4. Once the QuantizedModule is built, Concrete is used to trace the ._forward() function of the QuantizedModule.

Once an ONNX model is imported, it is converted to a NumpyModule, then to a QuantizedModule and, finally, to an FHE circuit. However, as the diagram shows, it is perfectly possible to stop at the NumpyModule level if you just want to run the PyTorch model as NumPy code without doing quantization.
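As an illustration of this pipeline, a small PyTorch model can be converted end-to-end with compile_torch_model (Post-Training Quantization); the model, data, and n_bits value below are illustrative:

import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

torch_model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU())
X = numpy.random.uniform(-1, 1, size=(100, 4)).astype(numpy.float32)

# ONNX export, NumpyModule construction and quantization happen under the hood
quantized_module = compile_torch_model(torch_model, X, n_bits=6)

# Run the quantized module, here with simulation
y = quantized_module.forward(X[:1], fhe="simulate")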

Inspecting the ONNX models

import onnx
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification
x, y = make_classification(n_samples=250, class_sep=2, n_features=30, random_state=42)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.4, random_state=42
)

# Fix the number of bits to use for quantization
model = LogisticRegression(n_bits=8)

# Fit the model
model.fit(X_train, y_train)

# Access to the model
onnx_model = model.onnx_model

# Print the model
print(onnx.helper.printable_graph(onnx_model.graph))

# Save the model
onnx.save(onnx_model, "tmp.onnx")

# And then visualize it with Netron

Quantization Tools

Quantizing data

Concrete ML has support for quantized ML models and also provides quantization tools for Quantization Aware Training and Post-Training Quantization. The core of this functionality is the conversion of floating point values to integers and back. This is done using QuantizedArray in concrete.ml.quantization.

  • n_bits defines the precision used in quantization

  • values are floating point values that will be converted to integers

  • is_signed determines if the quantized integer values should allow negative values

  • is_symmetric determines if the range of floating point values to be quantized should be taken as symmetric around zero

from concrete.ml.quantization import QuantizedArray
import numpy
numpy.random.seed(0)
A = numpy.random.uniform(-2, 2, 10)
print("A = ", A)
# array([ 0.19525402,  0.86075747,  0.4110535,  0.17953273, -0.3053808,
#         0.58357645, -0.24965115,  1.567092 ,  1.85465104, -0.46623392])
q_A = QuantizedArray(7, A)
print("q_A.qvalues = ", q_A.qvalues)
# array([ 37,          73,          48,         36,          9,
#         58,          12,          112,        127,         0])
# the quantized integers values from A.
print("q_A.quantizer.scale = ", q_A.quantizer.scale)
# 0.018274684777173276, the scale S.
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# 26, the zero point Z.
print("q_A.dequant() = ", q_A.dequant())
# array([ 0.20102153,  0.85891018,  0.40204307,  0.18274685, -0.31066964,
#         0.58478991, -0.25584559,  1.57162289,  1.84574316, -0.4751418 ])
# Dequantized values.

It is also possible to use symmetric quantization, where the integer values are centered around 0:

q_A = QuantizedArray(3, A)
print("Unsigned: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Unsigned: q_A.qvalues =  [2 4 2 2 0 3 0 6 7 0]
# q_A.quantizer.zero_point =  1

q_A = QuantizedArray(3, A, is_signed=True, is_symmetric=True)
print("Signed Symmetric: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Signed Symmetric: q_A.qvalues =  [ 0  1  1  0  0  1  0  3  3 -1]
# q_A.quantizer.zero_point =  0

In the following example, showing the de-quantization of model outputs, the QuantizedArray class is used in a different way. Here it uses pre-quantized integer values and has the scale and zero-point set explicitly. Once the QuantizedArray is constructed, calling dequant() will compute the floating point values corresponding to the integer values qvalues, which are the output of the fhe_circuit.encrypt_run_decrypt(..) call.

import numpy
from concrete.ml.quantization import QuantizedArray
from concrete.ml.quantization.quantizers import QuantizationOptions

q_values = [0, 0, 1, 2, 3, -1]
QuantizedArray(
        q_A.quantizer.n_bits,
        q_values,
        value_is_float=False,
        options=q_A.quantizer.quant_options,
        stats=q_A.quantizer.quant_stats,
        params=q_A.quantizer.quant_params,
).dequant()

Quantized modules

Machine learning models are implemented with a diverse set of operations, such as convolution, linear transformations, activation functions, and element-wise operations. When working with quantized values, these operations cannot be carried out in an equivalent way to floating point values. With quantization, it is necessary to re-scale the input and output values of each operation to fit in the quantization domain.

In Concrete ML, the quantized equivalent of a scikit-learn model or a PyTorch nn.Module is the QuantizedModule. Note that only inference is implemented in the QuantizedModule, and it is built through a conversion of the inference function of the corresponding scikit-learn or PyTorch module.

Built-in neural networks expose the quantized_module member, while a QuantizedModule is also the result of the compilation of custom models through compile_torch_model and compile_brevitas_qat_model.

Calibration is the process of determining the typical distributions of the intermediate values of a model during inference.
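The idea behind calibration can be sketched in a few lines (an illustration, not Concrete ML's internal API): a representative input-set is passed through the floating point function, and the observed value range determines the quantization parameters.

import numpy

def calibrate(float_fn, inputset, n_bits=8):
    # Record the range of values produced on the representative input-set
    outputs = numpy.concatenate([numpy.ravel(float_fn(x)) for x in inputset])
    alpha, beta = outputs.min(), outputs.max()

    # Derive the quantization scale and zero-point from that range
    scale = (beta - alpha) / (2**n_bits - 1)
    zero_point = round(-alpha / scale)
    return scale, zero_point

scale, zero_point = calibrate(lambda x: 3.0 * x + 1.0, [numpy.array([0.5]), numpy.array([-0.2])])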

Resources

Support and Issues

Concrete ML is a constant work-in-progress, and thus may contain bugs or suboptimal APIs.

Furthermore, undefined behavior may occur if the input-set, which is internally used by the compilation core to set the bit-widths of some intermediate data, is not sufficiently representative of future user inputs. With all the inputs in the input-set, the intermediate data may appear to fit in an n-bit integer, but a particular user input may later require additional bits. The FHE execution for such an input will produce an incorrect output, just as integer overflow does in classical programs.

Submitting an issue

  • the reproducibility rate you see on your side

  • any insight you might have on the bug

  • any workaround you have been able to find

Contributing

There are three ways to contribute to Concrete ML:

  • You can open issues to report bugs and typos and to suggest ideas.

  • You can become an official contributor, but you need to sign our Contributor License Agreement (CLA) on your first contribution. Our CLA-bot will guide you through the process when you open a Pull Request on GitHub.

  • You can also provide new tutorials or use-cases, showing what can be done with the library. The more examples we have, the better and clearer it is for the other users.

1. Creating a new branch

To create your branch, you have to use the issue ID somewhere in the branch name:

git checkout -b {feat|fix|refactor|test|benchmark|doc|style|chore}/short-description_$issue_id
git checkout -b short-description_$issue_id
git checkout -b $issue_id_short-description

For example:

git checkout -b feat/explicit-tlu_11
git checkout -b tracing_indexing_42
git checkout -b 42_tracing_indexing

2. Before committing

2.1 Conformance

Each commit to Concrete ML should conform to the standards of the project. You can let the development tools fix some issues automatically with the following command:

make conformance

Conformance can be checked using the following command:

make pcc

2.2 Testing

Your code must be well documented, containing tests and not breaking other tests:

make pytest

You need to make sure you get 100% code coverage. The make pytest command checks that by default and will fail with a coverage report at the end should some lines of your code not be executed during testing.

If your coverage is below 100%, you should write more tests and then create the pull request. If you ignore this warning and create the PR, GitHub actions will fail and your PR will not be merged.

There may be cases where covering your code is not possible (an exception that cannot be triggered in normal execution circumstances). In those cases, you may be allowed to disable coverage for some specific lines. This should be the exception rather than the rule, and reviewers will ask why some lines are not covered. If it appears they can be covered, then the PR won't be accepted in that state.

3. Committing

Concrete ML uses a consistent commit naming scheme, and you are expected to follow it as well (the CI will make sure you do). The accepted format can be printed to your terminal by running:

make show_scope

For example:

git commit -m "feat: implement bounds checking"
git commit -m "feat(debugging): add an helper function to draw intermediate representation"
git commit -m "fix(tracing): fix a bug that crashed PyTorch tracer"

4. Rebasing

You should rebase on top of the main branch before you create your pull request. Merge commits are not allowed, so rebasing on main before pushing gives you the best chance of avoiding the need to rewrite parts of your PR later if conflicts arise with other merged PRs. After you commit changes to your new branch, you can use the following commands to rebase:

# fetch the list of active remote branches
git fetch --all --prune

# checkout to main
git checkout main

# pull the latest changes to main (--ff-only is there to prevent accidental commits to main)
git pull --ff-only

# checkout back to your branch
git checkout $YOUR_BRANCH

# rebase on top of main branch
git rebase main

# If there are conflicts during the rebase, resolve them
# and continue the rebase with the following command
git rebase --continue

# push the latest version of the local branch to remote
git push --force

Compilation

Compilation of a model produces machine code that executes the model on encrypted data. In some cases, notably in the client/server setting, the compilation can be done by the server when loading the model for serving.

As FHE execution is much slower than execution on non-encrypted data, Concrete ML has a simulation mode which can help to quickly evaluate the impact of FHE execution on models.

Compilation to FHE

From the perspective of the Concrete ML user, the compilation process performed by Concrete can be broken up into 3 steps:

  1. tracing the NumPy program and creating a Concrete op-graph

  2. checking the op-graph for FHE compatibility

  3. producing machine code for the op-graph (this step automatically determines cryptographic parameters)

Built-in models

Compilation is performed for built-in models with the compile method:

    clf.compile(X_train)

scikit-learn pipelines

When using a pipeline, the Concrete ML model can predict with FHE during the pipeline execution, but it needs to be compiled beforehand. The compile function must be called on the Concrete ML model:

import numpy
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline

# Create the data for classification:
X, y = make_classification(
    n_features=30,
    n_redundant=0,
    n_informative=2,
    random_state=2,
    n_clusters_per_class=1,
    n_samples=250,
)

# Retrieve train and test sets:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

model_pca = Pipeline(
    [
        ("preprocessor", PCA()),
        ("cml_model", LogisticRegression(n_bits=8))
    ]
)

model_pca.fit(X_train, y_train)

# Compile the Concrete ML model
model_pca["cml_model"].compile(X_train)

model_pca.predict(X_test[[0]], fhe="execute")

Custom models

For custom models, compilation is performed with one of the compile_brevitas_qat_model (for Brevitas models trained with Quantization Aware Training) or compile_torch_model (for PyTorch models using Post-Training Quantization) functions:

    quantized_numpy_module = compile_brevitas_qat_model(torch_model, X_train)

FHE simulation

The op-graph resulting from the first step of the compilation pipeline allows the:

  • execution of the op-graph, which includes TLUs, on clear non-encrypted data. This is not secure, but it is much faster than executing in FHE. This mode is useful for debugging, especially when looking for appropriate model hyper-parameters

  • verification of the maximum bit-width of the op-graph and the intermediary bit-widths of model layers, to evaluate their impact on FHE execution latency

Simulation is enabled for all Concrete ML models once they are compiled as shown above. Obtaining the simulated predictions of the models is done by setting the fhe="simulate" argument to prediction methods:

    Z = clf.predict_proba(X, fhe="simulate")

Moreover, the maximum accumulator bit-width is determined as follows:

    bit_width = clf.quantized_module_.fhe_circuit.graph.maximum_integer_bit_width()

A simple Concrete example

import numpy
from concrete.fhe import compiler

# Assume Quantization has been applied and we are left with integers only. This is essentially the work of Concrete ML

# Some parameters (weight and bias) for our model taking a single feature
w = [2]
b = 2

# The function that implements our model
@compiler({"x": "encrypted"})
def linear_model(x):
    return w @ x + b

# A representative input-set is needed to compile the function (used for tracing)
n_bits_input = 2
inputset = numpy.arange(0, 2**n_bits_input).reshape(-1, 1)
circuit = linear_model.compile(inputset)

# Use the API to get the maximum bit-width in the circuit
max_bit_width = circuit.graph.maximum_integer_bit_width()
print("Max bit_width = ", max_bit_width)
# Max bit_width = 4

# Test our FHE inference
circuit.encrypt_run_decrypt(numpy.array([3]))
# 8

# Print the graph of the circuit
print(circuit)
# %0 = 2                     # ClearScalar<uint2>
# %1 = [2]                   # ClearTensor<uint2, shape=(1,)>
# %2 = x                     # EncryptedTensor<uint2, shape=(1,)>
# %3 = matmul(%1, %2)        # EncryptedScalar<uint3>
# %4 = add(%3, %0)           # EncryptedScalar<uint4>
# return %4

concrete.ml.common.debugging.md

module concrete.ml.common.debugging

Module for debugging.

Global Variables

  • custom_assert

concrete.ml.common.check_inputs.md

module concrete.ml.common.check_inputs

Check and conversion tools.

Utils that are used to check (including convert) some data types which are compatible with scikit-learn to numpy types.


function check_array_and_assert

check_array_and_assert(X, *args, **kwargs)

sklearn.utils.check_array with an assert.

Equivalent of sklearn.utils.check_array, with a final assert that the type is one which is supported by Concrete ML.

Args:

  • X (object): Input object to check / convert

  • *args: The arguments to pass to check_array

  • **kwargs: The keyword arguments to pass to check_array

Returns: The converted and validated array


function check_X_y_and_assert

check_X_y_and_assert(X, y, *args, **kwargs)

sklearn.utils.check_X_y with an assert.

Equivalent of sklearn.utils.check_X_y, with a final assert that the type is one which is supported by Concrete ML.

Args:

  • X (ndarray, list, sparse matrix): Input data

  • y (ndarray, list, sparse matrix): Labels

  • *args: The arguments to pass to check_X_y

  • **kwargs: The keyword arguments to pass to check_X_y

Returns: The converted and validated arrays


function check_X_y_and_assert_multi_output

check_X_y_and_assert_multi_output(X, y, *args, **kwargs)

sklearn.utils.check_X_y with an assert and multi-output handling.

Equivalent of sklearn.utils.check_X_y, with a final assert that the type is one which is supported by Concrete ML. If y is 2D, allows multi-output.

Args:

  • X (ndarray, list, sparse matrix): Input data

  • y (ndarray, list, sparse matrix): Labels

  • *args: The arguments to pass to check_X_y

  • **kwargs: The keyword arguments to pass to check_X_y

Returns: The converted and validated arrays with multi-output targets.

concrete.ml.common.debugging.custom_assert.md

module concrete.ml.common.debugging.custom_assert

Provide some variants of assert.


function assert_true

assert_true(
    condition: bool,
    on_error_msg: str = '',
    error_type: Type[Exception] = <class 'AssertionError'>
)

Provide a custom assert to check that the condition is True.

Args:

  • condition (bool): the condition. If False, raise AssertionError

  • on_error_msg (str): optional message for precising the error, in case of error

  • error_type (Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError


function assert_false

assert_false(
    condition: bool,
    on_error_msg: str = '',
    error_type: Type[Exception] = <class 'AssertionError'>
)

Provide a custom assert to check that the condition is False.

Args:

  • condition (bool): the condition. If True, raise AssertionError

  • on_error_msg (str): optional message for precising the error, in case of error

  • error_type (Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError


function assert_not_reached

assert_not_reached(
    on_error_msg: str,
    error_type: Type[Exception] = <class 'AssertionError'>
)

Provide a custom assert to check that a piece of code is never reached.

Args:

  • on_error_msg (str): message for precising the error

  • error_type (Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError

External Libraries

Hummingbird

Concrete ML allows the conversion of an ONNX inference to NumPy inference (note that NumPy is always the entry point to run models in FHE with Concrete ML).

Hummingbird exposes a convert function that can be imported as follows from the hummingbird.ml package:

# Disable Hummingbird warnings for pytest.
import warnings
warnings.filterwarnings("ignore")
from hummingbird.ml import convert

This function can be used to convert a machine learning model to an ONNX as follows:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Instantiate the logistic regression from sklearn
model = LogisticRegression()

# Create synthetic data
X, y = make_classification(
    n_samples=100, n_features=20, n_classes=2
)

# Fit the model
model.fit(X, y)

# Convert the model to ONNX
onnx_model = convert(model, backend="onnx", test_input=X).model

In theory, the resulting onnx_model could be passed directly to Concrete ML's get_equivalent_numpy_forward method (as long as all operators present in the ONNX model are implemented in NumPy) to obtain the NumPy inference.

In practice, there are some steps needed to clean the ONNX output and make the graph compatible with Concrete ML, such as applying quantization where needed or deleting/replacing non-FHE friendly ONNX operators (such as Softmax and ArgMax).

skorch

This wrapper implements Torch training boilerplate code, lessening the work required of the user. It is possible to add hooks during the training phase, for example once an epoch is finished.
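For instance, a plain skorch classifier can register a callback that fires once an epoch is finished (a minimal sketch of the skorch mechanism, unrelated to Concrete ML's own wrapper; the module and hyper-parameters are illustrative):

import torch.nn as nn
from skorch import NeuralNetClassifier
from skorch.callbacks import Callback

class PrintEpoch(Callback):
    # Hook called by skorch at the end of each training epoch
    def on_epoch_end(self, net, **kwargs):
        print(f"Finished epoch {len(net.history)}")

# A toy classifier wrapping a small torch module (illustrative only)
net = NeuralNetClassifier(
    module=nn.Sequential(nn.Linear(10, 2)),
    max_epochs=3,
    callbacks=[PrintEpoch()],
)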

import torch.nn as nn

class SparseQuantNeuralNetImpl(nn.Module):
    """Sparse Quantized Neural Network classifier."""

Brevitas

While Brevitas provides many types of quantization, for Concrete ML, a custom "mixed integer" quantization applies. This "mixed integer" quantization is much simpler than the "integer only" mode of Brevitas. The "mixed integer" network design is defined as:

  • all weights and activations of convolutional, linear and pooling layers must be quantized (e.g., using Brevitas layers, QuantConv2D, QuantAvgPool2D, QuantLinear)

For "mixed integer" quantization to work, the first layer of a Brevitas nn.Module must be a QuantIdentity layer. However, you can then use functions such as torch.sigmoid on the result of such a quantizing operation.

import torch
import torch.nn as nn
import brevitas.nn as qnn

class QATnetwork(nn.Module):
    def __init__(self):
        super(QATnetwork, self).__init__()
        self.quant_inp = qnn.QuantIdentity(
            bit_width=4, return_quant_tensor=True)
        # ...

    def forward(self, x):
        out = self.quant_inp(x)
        return torch.sigmoid(out)
        # ...

For examples of such a "mixed integer" network design, please see the Quantization Aware Training examples:

FHE Op-graph Design

Float vs. quantized operations

Concrete, the underlying implementation of TFHE that powers Concrete ML, enables two types of operations on integers:

  1. arithmetic operations: the addition of two encrypted values and multiplication of encrypted values with clear scalars. These are used, for example, in dot-products, matrix multiplication (linear layers), and convolution.

  2. table lookup operations (TLU): using an encrypted value as an index, return the value of a lookup table at that index. This is implemented using Programmable Bootstrapping. This operation is used to perform any non-linear computation such as activation functions, quantization, and normalization.

Alternatively, it is possible to use a table lookup to avoid the quantization of the entire graph, by converting floating-point ONNX subgraphs into lambdas and computing their corresponding lookup tables to be evaluated directly in FHE. This operator-fusion technique only requires the input and output of the lambdas to be integers.

For example, in the following graph there is a single input, which must be an encrypted integer tensor. The following series of univariate functions is then fed into a matrix multiplication (MatMul) and fused into a single table lookup with integer inputs and outputs.
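The sketch below shows this fusion with plain Concrete (not Concrete ML): the chain of floating-point univariate operations has an integer input and an integer output, so it is compiled into a single table lookup; the function and input-set are illustrative:

import numpy
from concrete.fhe import compiler

@compiler({"x": "encrypted"})
def fused_univariate(x):
    # Floating point intermediate values, integer input and output: fused into one TLU
    return (10 * numpy.sin(x)).round().astype(numpy.int64)

circuit = fused_univariate.compile(range(8))
print(circuit.encrypt_run_decrypt(3))  # round(10 * sin(3)) = 1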

ONNX operations

Concrete ML implements ONNX operations using Concrete, which can handle floating point operations, as long as they can be fused to an integer lookup table. The ONNX operations implementations are based on the QuantizedOp class.

There are two modes of creation of a single table lookup for a chain of ONNX operations:

  1. float mode: when the operation can be fused

  2. mixed float/integer: when the ONNX operation needs to perform arithmetic operations

Thus, QuantizedOp instances may need to quantize their inputs or the result of their computation, depending on their position in the graph.

The QuantizedOp class provides a generic implementation of an ONNX operation, including the quantization of inputs and outputs, with the computation implemented in NumPy in ops_impl.py. It is possible to picture the architecture of the QuantizedOp as the following structure:

Operations that can fuse to a TLU

Depending on the position of the op in the graph and its inputs, the QuantizedOp can be fully fused to a TLU.

Many ONNX ops are trivially univariate, as they multiply variable inputs with constants or apply univariate functions such as ReLU, Sigmoid, etc. This includes operations between the input and the MatMul in the graph above (subtraction, comparison, multiplication, etc. between inputs and constants).

Operations that work on integers

Operations, such as matrix multiplication of encrypted inputs with a constant matrix or convolution with constant weights, require that the encrypted inputs be integers. In this case, the input quantizer of the QuantizedOp is applied. These types of operations are implemented with a class that derives from QuantizedOp and implements q_impl, such as QuantizedGemm and QuantizedConv.

Operations that produce graph outputs

Finally, some operations produce graph outputs, which must be integers. These operations need to quantize their outputs as follows:

The diagram above shows that both float ops and integer ops need to quantize their outputs to integers when placed at the end of the graph.

Putting it all together

To chain the operation types described above following the ONNX graph, Concrete ML constructs a function that calls the q_impl of the QuantizedOp instances in the graph in sequence, and uses Concrete to trace the execution and compile to FHE. Thus, in this chain of function calls, all groups of instructions that operate in floating point will be fused into TLUs. In FHE, each such lookup table is computed with a PBS.

The red contours show the groups of elementary Concrete instructions that will be converted to TLUs.

Note that the input is slightly different from the QuantizedOp. Since the encrypted function takes integers as inputs, the input needs to be de-quantized first.

Implementing a QuantizedOp

QuantizedOp is the base class for all ONNX-quantized operators. It abstracts away many things to allow easy implementation of new quantized ops.

Determining if the operation can be fused

The QuantizedOp class exposes a function can_fuse that:

  • helps to determine the type of implementation that will be traced.

  • determines whether operations further in the graph, that depend on the results of this operation, can fuse.

In most cases, ONNX ops have a single variable input and one or more constant inputs.

When the op implements element-wise operations between the inputs and constants (addition, subtraction, multiplication, etc.), the operation can be fused to a TLU. Thus, by default in QuantizedOp, the can_fuse function returns True.

When the op implements operations that mix the various scalars in the input encrypted tensor, the operation cannot fuse, as table lookups are univariate. Thus, operations such as QuantizedGemm and QuantizedConv return False in can_fuse.

Some operations may be found in both settings above. A mechanism is implemented in Concrete ML to determine if the inputs of a QuantizedOp are produced by a unique integer tensor. Therefore, the can_fuse function of some QuantizedOp types (addition, subtraction) will allow fusion to take place if both operands are produced by a unique integer tensor:

def can_fuse(self) -> bool:
    return len(self._int_input_names) == 1

Case 1: A floating point version of the op is sufficient

You can check ops_impl.py to see how some operations are implemented in NumPy. The declaration convention for these operations is as follows:

  • The required inputs should be positional arguments only before the /, which marks the limit of the positional arguments.

  • The optional inputs should be positional or keyword arguments between the / and *, which marks the limits of positional or keyword arguments.

  • The operator attributes should be keyword arguments only after the *.

The proper use of positional/keyword arguments is required to allow the QuantizedOp class to properly populate metadata automatically. It uses Python's inspect module and stores relevant information for each argument related to its positional/keyword status. This allows using the Concrete implementation as the specification for QuantizedOp, which removes some data duplication and provides a single source of truth for the QuantizedOp and ONNX-NumPy implementations.
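For illustration, a hypothetical operator following this convention could be declared as below (this is not an actual Concrete ML op):

import numpy

def numpy_scaled_add(x, /, bias=None, *, scale=1.0):
    # x is a required ONNX input, bias an optional input, scale an ONNX attribute
    result = x * scale if bias is None else x * scale + bias

    # The result is returned as a tuple, since an ONNX node may have several outputs
    return (result,)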

In that case (unless the quantized implementation requires special handling like QuantizedGemm), you can just set _impl_for_op_named to the name of the ONNX op for which the quantized class is implemented (this uses the mapping ONNX_OPS_TO_NUMPY_IMPL in onnx_utils.py to get the correct implementation).

Case 2: An integer implementation of the op is necessary

Providing an integer implementation requires sub-classing QuantizedOp to create a new operation. This sub-class must override q_impl in order to provide an integer implementation. QuantizedGemm is an example of such a case where quantized matrix multiplication requires proper handling of scales and zero points. The q_impl of that class reflects this.

In the body of q_impl, you can use the _prepare_inputs_with_constants function in order to obtain quantized integer values:

from concrete.ml.quantization import QuantizedArray

def q_impl(
    self,
    *q_inputs: QuantizedArray,
    **attrs,
) -> QuantizedArray:

    # Retrieve the quantized inputs
    prepared_inputs = self._prepare_inputs_with_constants(
        *q_inputs, calibrate=False, quantize_actual_values=True
    )

Here, prepared_inputs will contain one or more QuantizedArray, of which the qvalues are the quantized integers.

Once the required integer processing code is implemented, the output of the q_impl function must be implemented as a single QuantizedArray. Most commonly, this is built using the de-quantized results of the processing done in q_impl.

    result = (
        sum_result.astype(numpy.float32) - q_input.quantizer.zero_point
    ) * q_input.quantizer.scale

    return QuantizedArray(
        self.n_bits,
        result,
        value_is_float=True,
        options=self.input_quant_opts,
        stats=self.output_quant_stats,
        params=self.output_quant_params,
    )

Case 3: Both a floating point and an integer implementation are necessary

In this case, in q_impl you can check whether the current operation can be fused by calling self.can_fuse(). You can then have both a floating-point and an integer implementation. The traced execution path will depend on can_fuse():


def q_impl(
    self,
    *q_inputs: QuantizedArray,
    **attrs,
) -> QuantizedArray:

    execute_in_float = len(self.constant_inputs) > 0 or self.can_fuse()

    # a floating point implementation that can fuse
    if execute_in_float:
        prepared_inputs = self._prepare_inputs_with_constants(
            *q_inputs, calibrate=False, quantize_actual_values=False
        )

        result = prepared_inputs[0] + self.b_sign * prepared_inputs[1]
        return QuantizedArray(
            self.n_bits,
            result,
            # ......
        )
    else:
        prepared_inputs = self._prepare_inputs_with_constants(
            *q_inputs, calibrate=False, quantize_actual_values=True
        )
        # an integer implementation follows, see Case 2
        # ....

API

Modules

Classes

Functions

concrete.ml.common.serialization.loaders.md

module concrete.ml.common.serialization.loaders

Load functions for serialization.


function loads

Load any Concrete ML object that provides a dump_dict method.

Arguments:

  • content (Union[str, bytes]): A serialized object.

Returns:

  • Any: The object itself.


function load

Load any Concrete ML object that provides a load_dict method.

Arguments:

  • file (Union[IO[str], IO[bytes]]): The file containing the serialized object.

Returns:

  • Any: The object itself.

concrete.ml.common.serialization.dumpers.md

module concrete.ml.common.serialization.dumpers

Dump functions for serialization.


function dumps

Dump any object as a string.

Arguments:

  • obj (Any): Object to dump.

Returns:

  • str: A string representation of the object.


function dump

Dump any Concrete ML object in a file.

Arguments:

  • obj (Any): The object to dump.

  • file (TextIO): The file to dump the serialized object into.

concrete.ml.common.serialization.encoder.md

module concrete.ml.common.serialization.encoder

Custom encoder for serialization.

Global Variables

  • INFINITY

  • USE_SKOPS


function dump_name_and_value

Dump the value into a custom dict format.

Args:

  • name (str): The custom name to use. This name should be unique for each type to encode, as it is used in the ConcreteDecoder class to detect the initial type and apply the proper load method to the serialized object.

  • value (Any): The serialized value to dump.

  • **kwargs (dict): Additional arguments to dump.

Returns:

  • Dict: The serialized custom format that includes both the serialized value and its type name.


class ConcreteEncoder

Custom json encoder to handle non-native types found in serialized Concrete ML objects.

Non-native types are serialized manually and dumped in a custom dict format that stores both the serialization value of the object and its associated type name.

The name should be unique for each type, as it is used in the ConcreteDecoder class to detect the initial type and apply the proper load method to the serialized object. The serialized value is the value that was serialized manually in a native type. Additional arguments such as a numpy array's dtype are also properly serialized. If an object has an unexpected type or is not serializable, an error is thrown.

The ConcreteEncoder is only meant to encode Concrete-ML's built-in models and therefore only supports the necessary types. For example, torch.Tensor objects are not serializable using this encoder as built-in models only use numpy arrays. However, the list of supported types might expand in future releases if new models are added and need new types.


method default

Define a custom default method that enables dumping any supported serialized values.

Arguments:

  • o (Any): The object to serialize.

Returns:

  • Any: The serialized object. Non-native types are returned as a dict of a specific format.

Raises:

  • NotImplementedError: If an FHE.Circuit, a Callable or a Generator object is given.


method isinstance

Define a custom isinstance method.

Natively, among other types, the JSONEncoder handles integers, floating points and tuples. However, a numpy.integer (resp. numpy.floating) object is automatically cast to a built-in int (resp. float) object, without keeping its dtype information. Similarly, a tuple is cast to a list, meaning that it will then be loaded as a list, which notably does not have the uniqueness property and therefore might cause issues in complex structures such as QuantizedModule instances. This is an issue, as JSONEncoder only calls its customizable default method at the end of the parsing. We thus need to provide this custom isinstance method in order to make the encoder avoid handling these specific types until default is reached (where they are properly serialized using our custom format).

Args:

  • o (Any): The object to serialize.

  • cls (Type): The type to compare the object with.

Returns:

  • bool: If the object is of the given type. False if it is a numpy.floating, numpy.integer or a tuple.


method iterencode

Encode the given object and yield each string representation as available.

This method overrides the JSONEncoder's native iterencode method in order to pass our custom isinstance method to the _make_iterencode function. More information can be found in isinstance's docstring. For simplicity, iterencode does not give the ability to use the initial c_make_encoder function, as that would require overriding it in C.

Args:

  • o (Any): The object to serialize.

  • _one_shot (bool): This parameter is not used since the _make_iterencode function has been removed from the method.

Returns:

  • Generator: Yield each string representation as available.

concrete.ml.common.md

module concrete.ml.common

Module for shared data structures and code.

Global Variables

  • debugging

  • check_inputs

  • utils

concrete.ml.common.serialization.md

module concrete.ml.common.serialization

Serialization module.

Global Variables

  • USE_SKOPS

  • SUPPORTED_TORCH_ACTIVATIONS

  • UNSUPPORTED_TORCH_ACTIVATIONS

concrete.ml.common.serialization.decoder.md

module concrete.ml.common.serialization.decoder

Custom decoder for serialization.

Global Variables

  • ALL_QUANTIZED_OPS

  • SUPPORTED_TORCH_ACTIVATIONS

  • USE_SKOPS

  • TRUSTED_SKOPS

  • SERIALIZABLE_CLASSES


function object_hook

Define a custom object hook that enables loading any supported serialized values.

If the input's type is non-native, then we expect it to have the following format. More information is available in the ConcreteEncoder class.

Args:

  • d (Any): The serialized value to load.

Returns:

  • Any: The loaded value.

Raises:

  • NotImplementedError: If the serialized object does not provide a dump_dict method as expected.


class ConcreteDecoder

Custom json decoder to handle non-native types found in serialized Concrete ML objects.

method __init__


Before you start this section, you must install Docker by following the official guide.

Once you have access to this repository and the dev environment is installed on your host OS (via make setup_env), you should be able to launch the commands to build the dev Docker image with make docker_build.

Internally, Concrete ML uses ONNX operators as an intermediate representation (or IR) for manipulating machine learning models produced through export for PyTorch, Hummingbird, and skorch.

All models have a PyTorch implementation for inference. This implementation is provided either by a third-party tool such as Hummingbird, or implemented directly in Concrete ML.

The PyTorch model is exported to ONNX. For more information on the use of ONNX in Concrete ML, see the Importing ONNX section.

Quantization is performed on the NumpyModule, producing a QuantizedModule. Two steps are performed: calibration, and assignment of equivalent QuantizedOp objects to each ONNX operation. The QuantizedModule class is the quantized counterpart of the NumpyModule.

Moreover, by passing a user-provided nn.Module to step 2 of the above process, Concrete ML supports custom user models. See the associated deep learning documentation for instructions about working with such models.

Note that the NumpyModule interpreter currently supports only a subset of ONNX operators.

In order to better understand how Concrete ML works under the hood, it is possible to access each model in its ONNX format and then either print it or visualize it by importing the associated file in Netron. For example, with LogisticRegression:

The QuantizedArray class takes several arguments that determine how float values are quantized:

See also the reference for more information:
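As a purely illustrative sketch of uniform quantization (independent of the exact class and arguments referenced above), the scale and zero-point can be derived from the range observed during calibration:

import numpy

def uniform_quantize(values, n_bits):
    """Toy n_bits uniform quantizer: derive scale and zero-point from the value range."""
    v_min, v_max = values.min(), values.max()
    scale = (v_max - v_min) / (2**n_bits - 1)
    zero_point = int(numpy.round(-v_min / scale))
    q_values = numpy.clip(numpy.round(values / scale) + zero_point, 0, 2**n_bits - 1)
    return q_values.astype(numpy.int64), scale, zero_point

q, scale, zero_point = uniform_quantize(numpy.array([-1.0, -0.2, 0.4, 1.0]), n_bits=3)
# De-quantization approximately recovers the floats: (q - zero_point) * scale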

The quantized versions of floating point model operations are stored in the QuantizedModule. The ONNX_OPS_TO_QUANTIZED_IMPL dictionary maps ONNX floating point operators (e.g., Gemm) to their quantized equivalent (e.g., QuantizedGemm). For more information on implementing these operations, please see the FHE-compatible op-graph section.

The computation graph is taken from the corresponding floating point ONNX graph exported from scikit-learn using Hummingbird, or from the ONNX graph exported by PyTorch. Calibration is used to obtain quantized parameters for the operations in the QuantizedModule. Parameters are also determined for the quantization of inputs during model deployment.

To perform calibration, an interpreter goes through the ONNX graph in topological order and stores the intermediate results as it goes. The statistics of these values determine quantization parameters.

That QuantizedModule generates the Concrete function that is compiled to FHE. The compilation will succeed if the intermediate values conform to the 16-bit precision limit of the Concrete stack. See the compilation section for details.
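For example, a custom PyTorch module can be turned into such a QuantizedModule and compiled with the torch compilation helpers listed in the API reference (a minimal sketch; arguments such as n_bits are shown for illustration):

import torch
from torch import nn
from concrete.ml.torch.compile import compile_torch_model

class TinyMLP(nn.Module):
    """A very small fully-connected network used only for this sketch."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# A representative input-set is used to calibrate the quantization parameters
input_set = torch.randn(100, 4)
quantized_module = compile_torch_model(TinyMLP(), input_set, n_bits=6)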

Lei Mao's blog on quantization: Quantization for Neural Networks

Google paper on neural network quantization and integer-only inference: Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

Before opening an issue or asking for support, please read this documentation to understand common issues and limitations of Concrete ML. You can also check the outstanding issues on GitHub.

If you didn't find an answer, you can ask a question on the Zama forum or in the FHE.org Discord.

When submitting an issue, ideally include as much information as possible. In addition to the Python script, the following information is useful:

If you would like to contribute to a project and send pull requests, take a look at the contributor guide.

Just a reminder that commit messages are checked in the conformance step and are rejected if they don't follow the rules. To learn more about conventional commits, check the conventional commits page.

You can learn more about rebasing here.

Concrete ML implements model inference using Concrete as a backend. In order to execute in FHE, a numerical program written in Concrete needs to be compiled. This functionality is described in the Concrete documentation, and Concrete ML hides away most of the complexity of this step, completing the entire compilation process itself.

Additionally, the client/server API packages the result of the last step in a way that allows the deployment of the encrypted circuit to a server, as well as key generation, encryption, and decryption on the client side.

The first step in the list above takes a Python function implemented using the Concrete supported operation set and transforms it into an executable operation graph.

While Concrete ML hides away all the Concrete code that performs model inference, it can be useful to understand how Concrete code works. Here is a toy example for a simple linear regression model on integers to illustrate compilation concepts. Generally, it is recommended to use the built-in models, which provide linear regression out of the box.
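A sketch of what such a toy program could look like with Concrete Python (the integer weights and the input-set are chosen arbitrarily for illustration):

from concrete import fhe

# Toy integer "linear regression": y = w * x + b with fixed integer parameters
w, b = 2, 1

@fhe.compiler({"x": "encrypted"})
def linear_model(x):
    return w * x + b

# Compile on a representative input-set, then run on one encrypted sample
input_set = range(16)
circuit = linear_model.compile(input_set)
assert circuit.encrypt_run_decrypt(3) == 2 * 3 + 1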

Hummingbird is a third-party, open-source library that converts machine learning models into tensor computations, and it can export these models to ONNX. The list of supported models can be found in the Hummingbird documentation.
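As an illustration of this kind of conversion with Hummingbird used directly (Concrete ML performs an equivalent step internally; the snippet is only a sketch):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from hummingbird.ml import convert

X, y = load_iris(return_X_y=True)
sk_model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Convert the scikit-learn tree into tensor computations exported as ONNX
onnx_container = convert(sk_model, "onnx", X)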

Concrete ML uses skorch to implement multi-layer, fully-connected PyTorch neural networks in a way that is compatible with the scikit-learn API.

skorch allows the user to easily create a classifier or regressor around a neural network (NN), implemented in Torch as an nn.Module, which is used by Concrete ML to provide a fully-connected, multi-layer NN with a configurable number of layers and optional pruning (see pruning and the neural network documentation for more information).

Under the hood, Concrete ML uses a skorch wrapper around a single PyTorch module, SparseQuantNeuralNetwork. More information can be found in the API reference.
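The skorch-style configuration surfaces in the built-in estimators through module__ parameters, roughly as sketched below (parameter names are given for illustration):

import torch.nn as nn
from concrete.ml.sklearn import NeuralNetworkClassifier

# "module__" prefixed arguments configure the underlying SparseQuantNeuralNetwork,
# while the remaining ones are standard skorch training parameters
model = NeuralNetworkClassifier(
    module__n_layers=3,
    module__activation_function=nn.ReLU,
    max_epochs=10,
)
# After model.fit(X_train, y_train), the object behaves like a scikit-learn classifier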

Brevitas is a quantization-aware learning toolkit built on top of PyTorch. It provides quantization layers that are one-to-one equivalents to PyTorch layers, but also contain operations that perform the quantization during training.

PyTorch floating-point versions of univariate functions can be used (e.g., torch.relu, nn.BatchNorm2d, torch.max (encrypted vs. constant), torch.add, torch.exp). See the PyTorch supported layers page for a full list.

The "mixed integer" mode used in Concrete ML neural networks is based on the that makes both weights and activations representable as integers during training. However, through the use of lookup tables in Concrete ML, floating point univariate PyTorch functions are supported.

You can also refer to the SparseQuantNeuralNetwork class, which is the basis of the built-in NeuralNetworkClassifier.
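A minimal sketch of such a Brevitas quantization-aware module (layer choices and bit-widths are illustrative):

import torch.nn as nn
import brevitas.nn as qnn

class TinyQATNet(nn.Module):
    """Quantize the input, then apply a quantized linear layer."""
    def __init__(self, n_bits=3):
        super().__init__()
        self.quant_input = qnn.QuantIdentity(bit_width=n_bits)
        self.fc = qnn.QuantLinear(4, 2, weight_bit_width=n_bits, bias=True)

    def forward(self, x):
        return self.fc(self.quant_input(x))

# Such a model is trained with a standard PyTorch loop and can then be compiled
# with the Brevitas QAT compilation helper listed in the API reference.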

The ONNX import section gave an overview of the conversion of a generic ONNX graph to an FHE-compatible Concrete ML op-graph. This section describes the implementation of operations in the Concrete ML op-graph and the way floating point can be used in some parts of the op-graphs through table lookup operations.

Since machine learning models use floating point inputs and weights, they first need to be converted to integers using quantization.

This figure shows that the QuantizedOp has a body that implements the computation of the operation, following the ONNX spec. The operation's body can take either integer or float inputs and can output float or integer values. Two quantizers are attached to the operation: one that takes float inputs and produces integer inputs and one that does the same for the output.

: Module for shared data structures and code.

: Check and conversion tools.

: Module for debugging.

: Provide some variants of assert.

: Serialization module.

: Custom decoder for serialization.

: Dump functions for serialization.

: Custom encoder for serialization.

: Load functions for serialization.

: Utils that can be re-used by other pieces of code in the module.

: Module for deployment of the FHE model.

: Methods to deploy a client/server to AWS.

: Methods to deploy a server using Docker.

: APIs for FHE deployment.

: Deployment server.

: Utils.

: ONNX module.

: ONNX conversion related code.

: Utility functions for onnx operator implementations.

: Some code to manipulate models.

: Utils to interpret an ONNX model with numpy.

: ONNX ops implementation in Python + NumPy.

: Module which is used to contain common functions for pytest.

: Torch modules for our pytests.

: Common functions or lists for test files, which can't be put in fixtures.

: Modules for quantization.

: Base Quantized Op class that implements quantization for a float numpy op.

: Post Training Quantization methods.

: QuantizedModule API.

: Optimization passes for QuantizedModules.

: Quantized versions of the ONNX operators for post training quantization.

: Quantization utilities for a numpy array/tensor.

: Modules for p_error search.

: p_error binary search for classification and regression tasks.

: Import sklearn models.

: Base classes for all estimators.

: Implement sklearn's Generalized Linear Models (GLM).

: Implement sklearn linear model.

: Implement sklearn linear model.

: Scikit-learn interface for fully-connected quantized neural networks.

: Sparse Quantized Neural Network torch module.

: Implement RandomForest models.

: Implement Support Vector Machine.

: Implement DecisionTree models.

: Implements the conversion of a tree model to a numpy function.

: Implements XGBoost models.

: Modules for torch to numpy conversion.

: torch compilation function.

: Implement the conversion of a torch model to a hybrid fhe/torch inference.

: A torch to numpy module.

: File to manage the version of the package.

: Custom json decoder to handle non-native types found in serialized Concrete ML objects.

: Custom json encoder to handle non-native types found in serialized Concrete ML objects.

: Enum representing the execution mode.

: AWSInstance.

: Client API to encrypt and decrypt FHE data.

: Dev API to save the model and then load and run the FHE circuit.

: Server API to load and run the FHE circuit.

: A mixed quantized-raw valued onnx function.

: Type construct that marks an ndarray as a raw output of a quantized op.

: Torch model with some branching and skip connections.

: Torch model with some branching and skip connections.

: Torch CNN model for the tests.

: Torch CNN model with grouped convolution for compile torch tests.

: Torch CNN model for the tests.

: Torch CNN model for the tests with a max pool.

: Torch CNN model for the tests.

: Concat with fancy indexing.

: Torch model with two different quantizers on the input.

: Torch model for the tests.

: Torch model that should generate MatMul->Add ONNX patterns.

: Torch model that should generate MatMul->Add ONNX patterns.

: Torch model for the tests.

: Torch model to test multiple inputs forward.

: Torch model to test multiple inputs forward.

: Torch model to test multiple inputs with different shape in the forward pass.

: Network that applies two quantized operations on a single input.

: Torch model to test the concat and unsqueeze operators.

: Torch QAT model that does not quantize the inputs.

: Torch model, where we reuse some elements in a loop.

: Torch QAT model that applies various padding patterns.

: A model with a QAT Module.

: Torch model that implements a simple non-uniform quantizer.

: A small quantized network with Brevitas, trained on make_classification.

: Torch QAT model that reshapes the input.

: Fake torch model used to generate some onnx.

: Torch model implements a step function that needs Greater, Cast and Where.

: Torch model with a single conv layer that produces the output, e.g., a blur filter.

: Torch model implements a step function that needs Greater, Cast and Where.

: A very small CNN.

: A very small QAT CNN to classify the sklearn digits data-set.

: A small network with Brevitas, trained on make_classification.

: Torch model to test the ReduceSum ONNX operator in a leveled circuit.

: Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS.

: Torch model that calls univariate and shape functions of torch.

: An operator that mixes (adds or multiplies) together encrypted inputs.

: Base class for quantized ONNX ops implemented in numpy.

: A univariate operator of an encrypted value.

: Base ONNX to Concrete ML computation graph conversion class.

: Post-training Affine Quantization.

: Converter of Quantization Aware Training networks.

: Inference for a quantized model.

: Detect neural network patterns that can be optimized with round PBS.

: ConstantOfShape operator.

: Gather operator.

: Shape operator.

: Slice operator.

: Quantized Abs op.

: Quantized Addition operator.

: Quantized Average Pooling op.

: Quantized Batch normalization with encrypted input and in-the-clear normalization params.

: Brevitas uniform quantization with encrypted input.

: Cast the input to the required data type.

: Quantized Celu op.

: Quantized clip op.

: Concatenate operator.

: Quantized Conv op.

: Div operator /.

: Quantized Elu op.

: Quantized erf op.

: Quantized Exp op.

: Quantized flatten for encrypted inputs.

: Quantized Floor op.

: Quantized Gemm op.

: Comparison operator >.

: Comparison operator >=.

: Quantized HardSigmoid op.

: Quantized Hardswish op.

: Quantized Identity op.

: Quantized LeakyRelu op.

: Comparison operator <.

: Comparison operator <=.

: Quantized Log op.

: Quantized MatMul op.

: Quantized Max op.

: Quantized Max Pooling op.

: Quantized Min op.

: Multiplication operator.

: Quantized Neg op.

: Quantized Not op.

: Or operator ||.

: Quantized PRelu op.

: Quantized Padding op.

: Quantized pow op.

: ReduceSum with encrypted input.

: Quantized Relu op.

: Quantized Reshape op.

: Quantized round op.

: Quantized Selu op.

: Quantized sigmoid op.

: Quantized Sign op.

: Quantized Softplus op.

: Squeeze operator.

: Subtraction operator.

: Quantized Tanh op.

: Transpose operator for quantized inputs.

: Unsqueeze operator.

: Where operator on quantized arrays.

: Calibration set statistics.

: Options for quantization.

: Abstraction of quantized array.

: Quantization parameters for uniform quantization.

: Uniform quantizer.

: Class for p_error hyper-parameter search for classification and regression tasks.

: Base class for linear and tree-based classifiers in Concrete ML.

: Base class for all estimators in Concrete ML.

: Mixin class for tree-based classifiers.

: Mixin class for tree-based estimators.

: Mixin class for tree-based regressors.

: Mixin that provides quantization for a torch module and follows the Estimator API.

: A Mixin class for sklearn KNeighbors classifiers with FHE.

: A Mixin class for sklearn KNeighbors models with FHE.

: A Mixin class for sklearn linear classifiers with FHE.

: A Mixin class for sklearn linear models with FHE.

: A Mixin class for sklearn linear regressors with FHE.

: A Gamma regression model with FHE.

: A Poisson regression model with FHE.

: A Tweedie regression model with FHE.

: An ElasticNet regression model with FHE.

: A Lasso regression model with FHE.

: A linear regression model with FHE.

: A logistic regression model with FHE.

: A Ridge regression model with FHE.

: A k-nearest classifier model with FHE.

: A Fully-Connected Neural Network classifier with FHE.

: A Fully-Connected Neural Network regressor with FHE.

: Sparse Quantized Neural Network.

: Implements the RandomForest classifier.

: Implements the RandomForest regressor.

: A Classification Support Vector Machine (SVM).

: A Regression Support Vector Machine (SVM).

: Implements the sklearn DecisionTreeClassifier.

: Implements the sklearn DecisionTreeRegressor.

: Implements the XGBoost classifier.

: Implements the XGBoost regressor.

: Simple enum for different modes of execution of HybridModel.

: Convert a model to a hybrid model.

: Hybrid FHE Model Server.

: Placeholder type for a typical logger like the one from loguru.

: A wrapper class for the modules to be done remotely with FHE.

: General interface to transform a torch.nn.Module to numpy module.

: sklearn.utils.check_X_y with an assert.

: sklearn.utils.check_X_y with an assert and multi-output handling.

: sklearn.utils.check_array with an assert.

: Provide a custom assert to check that the condition is False.

: Provide a custom assert to check that a piece of code is never reached.

: Provide a custom assert to check that the condition is True.

: Define a custom object hook that enables loading any supported serialized values.

: Dump any Concrete ML object in a file.

: Dump any object as a string.

: Dump the value into a custom dict format.

: Load any Concrete ML object that provides a load_dict method.

: Load any Concrete ML object that provides a dump_dict method.

: Indicate if all unpacked values are of a supported float dtype.

: Indicate if all unpacked values are of a supported integer dtype.

: Indicate if all unpacked values are of the specified dtype(s).

: Convert any allowed type into an array and cast it if required.

: Check the user did not set p_error or global_p_error in configuration.

: Compute the number of bits required to represent x.

: Generate a proxy function for a function accepting only *args type arguments.

: Return the class of the model (instantiated or not), which can be a partial() instance.

: Return the name of the model, which can be a partial() instance.

: Return the ONNX opset_version.

: Check if a model is a Brevitas type.

: Indicate if the model class represents a classifier.

: Indicate if a model class, which can be a partial() instance, is an element of a_list.

: Indicate if the input container is a Pandas DataFrame.

: Indicate if the input container is a Pandas Series.

: Indicate if the input container is a Pandas DataFrame or Series.

: Indicate if the model class represents a regressor.

: Return (p_error, global_p_error) that we want to give to Concrete.

: Sanitize arg_name, replacing invalid chars by _.

: Make the input a tuple if it is not already the case.

: Create an EC2 instance.

: Terminate an AWS EC2 instance.

: Deploy a model to an AWS EC2 instance.

: Deploy a model.

: Terminate an AWS EC2 instance.

: Wait for AWS EC2 instance termination.

: Build server Docker image.

: Delete a Docker image.

: Deploy function.

: Kill all containers that use a given image.

: Check that current versions match the ones used in development.

: Filter logs based on previous logs.

: Check if ssh connection is available.

: Wait for connection to be available.

: Fuse sequence of matmul -> add into a gemm node.

: Get the numpy equivalent forward of the provided ONNX model.

: Get the numpy equivalent forward of the provided torch Module.

: Compute the output shape of a pool or conv operation.

: Compute any additional padding needed to compute pooling layers.

: Pad a tensor according to ONNX spec, using an optional custom pad value.

: Compute the average pooling normalization constant.

: Clean the graph of the onnx model by removing nodes after the given node type.

: Clean the graph of the onnx model by removing nodes at the given node type.

: Keep the outputs given in outputs_to_keep and remove the others from the model.

: Remove identity nodes from a model.

: Remove unnecessary nodes from the ONNX graph.

: Remove unused Constant nodes in the provided onnx model.

: Simplify an ONNX model, removes unused Constant nodes and Identity nodes.

: Execute the provided ONNX graph on the given inputs.

: Get the attribute from an ONNX AttributeProto.

: Construct the qualified type name of the ONNX operator.

: Remove initializers from model inputs.

: Cast values to floating points.

: Compute abs in numpy according to ONNX spec.

: Compute acos in numpy according to ONNX spec.

: Compute acosh in numpy according to ONNX spec.

: Compute add in numpy according to ONNX spec.

: Compute asin in numpy according to ONNX spec.

: Compute asinh in numpy according to ONNX spec.

: Compute atan in numpy according to ONNX spec.

: Compute atanh in numpy according to ONNX spec.

: Compute Average Pooling using Torch.

: Compute the batch normalization of the input tensor.

: Execute ONNX cast in Numpy.

: Compute celu in numpy according to ONNX spec.

: Apply concatenate in numpy according to ONNX spec.

: Return the constant passed as a kwarg.

: Compute N-D convolution using Torch.

: Compute cos in numpy according to ONNX spec.

: Compute cosh in numpy according to ONNX spec.

: Compute div in numpy according to ONNX spec.

: Compute elu in numpy according to ONNX spec.

: Compute equal in numpy according to ONNX spec.

: Compute erf in numpy according to ONNX spec.

: Compute exponential in numpy according to ONNX spec.

: Flatten a tensor into a 2d array.

: Compute Floor in numpy according to ONNX spec.

: Compute Gemm in numpy according to ONNX spec.

: Compute greater in numpy according to ONNX spec.

: Compute greater in numpy according to ONNX spec and cast outputs to floats.

: Compute greater or equal in numpy according to ONNX spec.

: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.

: Compute hardsigmoid in numpy according to ONNX spec.

: Compute hardswish in numpy according to ONNX spec.

: Compute identity in numpy according to ONNX spec.

: Compute leakyrelu in numpy according to ONNX spec.

: Compute less in numpy according to ONNX spec.

: Compute less in numpy according to ONNX spec and cast outputs to floats.

: Compute less or equal in numpy according to ONNX spec.

: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.

: Compute log in numpy according to ONNX spec.

: Compute matmul in numpy according to ONNX spec.

: Compute Max in numpy according to ONNX spec.

: Compute Max Pooling using Torch.

: Compute Min in numpy according to ONNX spec.

: Compute mul in numpy according to ONNX spec.

: Compute Negative in numpy according to ONNX spec.

: Compute not in numpy according to ONNX spec.

: Compute not in numpy according to ONNX spec and cast outputs to floats.

: Compute or in numpy according to ONNX spec.

: Compute or in numpy according to ONNX spec and cast outputs to floats.

: Compute pow in numpy according to ONNX spec.

: Compute relu in numpy according to ONNX spec.

: Compute round in numpy according to ONNX spec.

: Compute selu in numpy according to ONNX spec.

: Compute sigmoid in numpy according to ONNX spec.

: Compute Sign in numpy according to ONNX spec.

: Compute sin in numpy according to ONNX spec.

: Compute sinh in numpy according to ONNX spec.

: Compute softmax in numpy according to ONNX spec.

: Compute softplus in numpy according to ONNX spec.

: Compute sub in numpy according to ONNX spec.

: Compute tan in numpy according to ONNX spec.

: Compute tanh in numpy according to ONNX spec.

: Compute thresholdedrelu in numpy according to ONNX spec.

: Transpose in numpy according to ONNX spec.

: Compute the equivalent of numpy.where.

: Compute the equivalent of numpy.where.

: Decorate a numpy onnx function to flag the raw/non quantized inputs.

: Check that the given object can properly be serialized.

: Reduce size of the given data-set.

: Get the pytest parameters to use for testing all models available in Concrete ML.

: Get the pytest parameters to use for testing linear models.

: Get the pytest parameters to use for testing neighbor models.

: Get the pytest parameters to use for testing neural network models.

: Get the pytest parameters to use for testing tree-based models.

: Instantiate any Concrete ML model type.

: Load an object saved with torch.save() from a file or dict.

: Indicate if two values are equal.

: Convert the n_bits parameter into a proper dictionary.

: Fill a parameter set structure from kwargs parameters.

: Get the quantized module of a given model in FHE, simulated or not.

: Add transpose after last node.

: Assert if an Add node with a specific constant exists in the ONNX graph.

: Create ONNX model with Hummingbird convert method.

: Apply post-processing from the graph.

: Apply pre-processing onto the ONNX graph.

: Convert the tree inference to a numpy functions using Hummingbird.

: Pre-process tree values.

: Workaround to fix torch issue that does not export the proper axis in the ONNX squeeze node.

: Build a quantized module from a Torch or ONNX model.

: Compile a Brevitas Quantization Aware Training model.

: Compile a torch module into an FHE equivalent.

: Compile a torch module into an FHE equivalent.

: Convert a torch tensor or a numpy array to a numpy array.

: Check if a torch model has QNN layers.

: Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear.

: Convert a tuple to a string representation.

: Convert a string representation of a tuple to a tuple.

loads(content: Union[str, bytes]) → Any
load(file: Union[IO[str], IO[bytes]])
dumps(obj: Any) → str
dump(obj: Any, file: <class 'TextIO'>)
dump_name_and_value(name: str, value: Any, **kwargs) → Dict
default(o: Any) → Any
isinstance(o: Any, cls: Type) → bool
iterencode(o: Any, _one_shot: bool = False) → Generator
object_hook(d: Any) → Any
__init__(*args, **kwargs)
concrete.ml.common
concrete.ml.common.check_inputs
check_inputs.check_X_y_and_assert
check_inputs.check_X_y_and_assert_multi_output
check_inputs.check_array_and_assert
concrete.ml.common.debugging
concrete.ml.common.debugging.custom_assert
custom_assert.assert_false
custom_assert.assert_not_reached
custom_assert.assert_true
concrete.ml.common.serialization
concrete.ml.common.serialization.decoder
decoder.ConcreteDecoder
decoder.object_hook
concrete.ml.common.serialization.dumpers
dumpers.dump
dumpers.dumps
concrete.ml.common.serialization.encoder
encoder.ConcreteEncoder
encoder.dump_name_and_value
concrete.ml.common.serialization.loaders
loaders.load
loaders.loads
concrete.ml.deployment.deploy_to_aws
deploy_to_aws.AWSInstance
deploy_to_aws.create_instance
deploy_to_aws.delete_security_group
deploy_to_aws.deploy_to_aws
deploy_to_aws.main
deploy_to_aws.terminate_instance
deploy_to_aws.wait_instance_termination
concrete.ml.common.utils
utils.FheMode
utils.all_values_are_floats
utils.all_values_are_integers
utils.all_values_are_of_dtype
utils.check_dtype_and_cast
utils.check_there_is_no_p_error_options_in_configuration
utils.compute_bits_precision
utils.generate_proxy_function
utils.get_model_class
utils.get_model_name
utils.get_onnx_opset_version
utils.is_brevitas_model
utils.is_classifier_or_partial_classifier
utils.is_model_class_in_a_list
utils.is_pandas_dataframe
utils.is_pandas_series
utils.is_pandas_type
utils.is_regressor_or_partial_regressor
utils.manage_parameters_for_pbs_errors
utils.replace_invalid_arg_name_chars
utils.to_tuple
concrete.ml.deployment.server
concrete.ml.deployment
FHEModelServer
concrete.ml.deployment.fhe_client_server
fhe_client_server.FHEModelClient
fhe_client_server.FHEModelDev
fhe_client_server.FHEModelServer
fhe_client_server.check_concrete_versions
concrete.ml.deployment.deploy_to_docker
deploy_to_docker.build_docker_image
deploy_to_docker.delete_image
deploy_to_docker.main
deploy_to_docker.stop_container
concrete.ml.onnx
concrete.ml.deployment.utils
utils.filter_logs
utils.is_connection_available
utils.wait_for_connection_to_be_available
concrete.ml.onnx.convert
convert.fuse_matmul_bias_to_gemm
convert.get_equivalent_numpy_forward_from_onnx
convert.get_equivalent_numpy_forward_from_torch
concrete.ml.onnx.onnx_impl_utils
onnx_impl_utils.compute_conv_output_dims
onnx_impl_utils.compute_onnx_pool_padding
onnx_impl_utils.numpy_onnx_pad
onnx_impl_utils.onnx_avgpool_compute_norm_const
concrete.ml.onnx.onnx_model_manipulations
onnx_model_manipulations.clean_graph_after_node_op_type
onnx_model_manipulations.clean_graph_at_node_op_type
onnx_model_manipulations.keep_following_outputs_discard_others
onnx_model_manipulations.remove_identity_nodes
onnx_model_manipulations.remove_node_types
onnx_model_manipulations.remove_unused_constant_nodes
onnx_model_manipulations.simplify_onnx_model
concrete.ml.onnx.onnx_utils
onnx_utils.execute_onnx_with_numpy
onnx_utils.get_attribute
onnx_utils.get_op_type
onnx_utils.remove_initializer_from_input

concrete.ml.deployment.deploy_to_aws.md

module concrete.ml.deployment.deploy_to_aws

Methods to deploy a client/server to AWS.

It takes as input a folder with:

  • client.zip

  • server.zip

  • processing.json

It spawns an AWS EC2 instance with proper security groups, then SSHs to it to rsync the files and update Python dependencies. It then launches the server.

Global Variables

  • DATE_FORMAT

  • DEFAULT_CML_AMI_ID


function create_instance

create_instance(
    instance_type: str = 'c5.large',
    open_port=5000,
    instance_name: Optional[str] = None,
    verbose: bool = False,
    region_name: Optional[str] = None,
    ami_id='ami-0d7427e883fa00ff3'
) → Dict[str, Any]

Create an EC2 instance.

Arguments:

  • instance_type (str): the type of AWS EC2 instance.

  • open_port (int): the port to open.

  • instance_name (Optional[str]): the name to use for AWS created objects

  • verbose (bool): show logs or not

  • region_name (Optional[str]): AWS region

  • ami_id (str): AMI to use

Returns:

  • Dict[str, Any]: some information about the newly created instance: ip, private_key, instance_id, key_path, ip_address, port.


function deploy_to_aws

deploy_to_aws(
    instance_metadata: Dict[str, Any],
    path_to_model: Path,
    number_of_ssh_retries: int = -1,
    wait_bar: bool = False,
    verbose: bool = False
)

Deploy a model to an AWS EC2 instance.

Arguments:

  • instance_metadata (Dict[str, Any]): the metadata of AWS EC2 instance created using AWSInstance or create_instance

  • path_to_model (Path): the path to the serialized model

  • number_of_ssh_retries (int): the number of ssh retries (-1 is no limit)

  • wait_bar (bool): whether to show a wait bar when waiting for ssh connection to be available

  • verbose (bool): whether to show logs

Returns: instance_metadata (Dict[str, Any])

Raises:

  • RuntimeError: if launching the server crashed


function wait_instance_termination

wait_instance_termination(instance_id: str, region_name: Optional[str] = None)

Wait for AWS EC2 instance termination.

Arguments:

  • instance_id (str): the id of the AWS EC2 instance to terminate.

  • region_name (Optional[str]): AWS region (Optional)


function terminate_instance

terminate_instance(instance_id: str, region_name: Optional[str] = None)

Terminate an AWS EC2 instance.

Arguments:

  • instance_id (str): the id of the AWS EC2 instance to terminate.

  • region_name (Optional[str]): AWS region (Optional)


function delete_security_group

delete_security_group(security_group_id: str, region_name: Optional[str] = None)

Delete an AWS EC2 security group.

Arguments:

  • security_group_id (str): the id of the security group to delete.

  • region_name (Optional[str]): AWS region (Optional)


function main

main(
    path_to_model: Path,
    port: int = 5000,
    instance_type: str = 'c5.large',
    instance_name: Optional[str] = None,
    verbose: bool = False,
    wait_bar: bool = False,
    terminate_on_shutdown: bool = True
)

Deploy a model.

Arguments:

  • path_to_model (Path): path to serialized model to serve.

  • port (int): port to use.

  • instance_type (str): type of AWS EC2 instance to use.

  • instance_name (Optional[str]): the name to use for AWS created objects

  • verbose (bool): show logs or not

  • wait_bar (bool): show progress bar when waiting for ssh connection

  • terminate_on_shutdown (bool): terminate instance when script is over


class AWSInstance

AWSInstance.

Context manager for AWS instance that supports ssh and http over one port.

method __init__

__init__(
    instance_type: str = 'c5.large',
    open_port=5000,
    instance_name: Optional[str] = None,
    verbose: bool = False,
    terminate_on_shutdown: bool = True,
    region_name: Optional[str] = None,
    ami_id: str = 'ami-0d7427e883fa00ff3'
)
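Based on the functions documented above, a deployment script could look roughly like the sketch below (AWS credentials and a folder containing the serialized model are assumed to exist):

from pathlib import Path
from concrete.ml.deployment.deploy_to_aws import (
    create_instance,
    deploy_to_aws,
    terminate_instance,
)

# Spawn the EC2 instance, push the model to it, and clean up afterwards
instance_metadata = create_instance(instance_type="c5.large", open_port=5000)
try:
    deploy_to_aws(instance_metadata, Path("./serialized_model"), wait_bar=True)
finally:
    terminate_instance(instance_metadata["instance_id"])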

concrete.ml.common.utils.md

module concrete.ml.common.utils

Utils that can be re-used by other pieces of code in the module.

Global Variables

  • SUPPORTED_FLOAT_TYPES

  • SUPPORTED_INT_TYPES

  • SUPPORTED_TYPES

  • MAX_BITWIDTH_BACKWARD_COMPATIBLE

  • USE_OLD_VL

  • QUANT_ROUND_LIKE_ROUND_PBS


function replace_invalid_arg_name_chars

replace_invalid_arg_name_chars(arg_name: str) → str

Sanitize arg_name, replacing invalid chars by _.

This does not check that the starting character of arg_name is valid.

Args:

  • arg_name (str): the arg name to sanitize.

Returns:

  • str: the sanitized arg name, with only chars in _VALID_ARG_CHARS.


function generate_proxy_function

generate_proxy_function(
    function_to_proxy: Callable,
    desired_functions_arg_names: Iterable[str]
) → Tuple[Callable, Dict[str, str]]

Generate a proxy function for a function accepting only *args type arguments.

This returns a runtime compiled function with the sanitized argument names passed in desired_functions_arg_names as the arguments to the function.

Args:

  • function_to_proxy (Callable): the function defined like def f(*args) for which to return a function like f_proxy(arg_1, arg_2) for any number of arguments.

  • desired_functions_arg_names (Iterable[str]): the argument names to use, these names are sanitized and the mapping between the original argument name to the sanitized one is returned in a dictionary. Only the sanitized names will work for a call to the proxy function.

Returns:

  • Tuple[Callable, Dict[str, str]]: the proxy function and the mapping of the original arg name to the new and sanitized arg names.


function get_onnx_opset_version

get_onnx_opset_version(onnx_model: ModelProto) → int

Return the ONNX opset_version.

Args:

  • onnx_model (onnx.ModelProto): the model.

Returns:

  • int: the version of the model


function manage_parameters_for_pbs_errors

manage_parameters_for_pbs_errors(
    p_error: Optional[float] = None,
    global_p_error: Optional[float] = None
)

Return (p_error, global_p_error) that we want to give to Concrete.

The returned (p_error, global_p_error) depends on user's parameters and the way we want to manage defaults in Concrete ML, which may be different from the way defaults are managed in Concrete.

Principle:

  • if none are set, we set global_p_error to a default value of our choice

  • if both are set, we raise an error

  • if one is set, we use it and forward it to Concrete

Note that global_p_error is currently set to 0 in the FHE simulation mode.

Args:

  • p_error (Optional[float]): probability of error of a single PBS.

  • global_p_error (Optional[float]): probability of error of the full circuit.

Returns:

  • (p_error, global_p_error): parameters to give to the compiler

Raises:

  • ValueError: if both parameters are set (this differs from Concrete-Python)


function check_there_is_no_p_error_options_in_configuration

check_there_is_no_p_error_options_in_configuration(configuration)

Check the user did not set p_error or global_p_error in configuration.

It would be dangerous, since we set them in direct arguments in our calls to Concrete-Python.

Args:

  • configuration: Configuration object to use during compilation


function get_model_class

get_model_class(model_class)

Return the class of the model (instantiated or not), which can be a partial() instance.

Args:

  • model_class: The model, which can be a partial() instance.

Returns: The model's class.


function is_model_class_in_a_list

is_model_class_in_a_list(model_class, a_list)

Indicate if a model class, which can be a partial() instance, is an element of a_list.

Args:

  • model_class: The model, which can be a partial() instance.

  • a_list: The list in which to look into.

Returns: If the model's class is in the list or not.


function get_model_name

get_model_name(model_class)

Return the name of the model, which can be a partial() instance.

Args:

  • model_class: The model, which can be a partial() instance.

Returns: the model's name.


function is_classifier_or_partial_classifier

is_classifier_or_partial_classifier(model_class)

Indicate if the model class represents a classifier.

Args:

  • model_class: The model class, which can be a functool's partial class.

Returns:

  • bool: If the model class represents a classifier.


function is_regressor_or_partial_regressor

is_regressor_or_partial_regressor(model_class)

Indicate if the model class represents a regressor.

Args:

  • model_class: The model class, which can be a functool's partial class.

Returns:

  • bool: If the model class represents a regressor.


function is_pandas_dataframe

is_pandas_dataframe(input_container: Any) → bool

Indicate if the input container is a Pandas DataFrame.

This function is inspired by Scikit-Learn's test validation tools and avoids the need to add and import Pandas as an additional dependency to the project. See https://github.com/scikit-learn/scikit-learn/blob/98cf537f5/sklearn/utils/validation.py#L629

Args:

  • input_container (Any): The input container to consider

Returns:

  • bool: If the input container is a DataFrame


function is_pandas_series

is_pandas_series(input_container: Any) → bool

Indicate if the input container is a Pandas Series.

This function is inspired by Scikit-Learn's test validation tools and avoids the need to add and import Pandas as an additional dependency to the project. See https://github.com/scikit-learn/scikit-learn/blob/98cf537f5/sklearn/utils/validation.py#L629

Args:

  • input_container (Any): The input container to consider

Returns:

  • bool: If the input container is a Series


function is_pandas_type

is_pandas_type(input_container: Any) → bool

Indicate if the input container is a Pandas DataFrame or Series.

Args:

  • input_container (Any): The input container to consider

Returns:

  • bool: If the input container is a DataFrame or Series


function check_dtype_and_cast

check_dtype_and_cast(
    values: Any,
    expected_dtype: str,
    error_information: Optional[str] = ''
)

Convert any allowed type into an array and cast it if required.

If the values' types don't match any supported type or the expected dtype, a ValueError is raised.

Args:

  • values (Any): The values to consider

  • expected_dtype (str): The expected dtype, either "float32" or "int64"

  • error_information (str): Additional information to put in front of the error message when raising a ValueError. Default to None.

Returns:

  • (Union[numpy.ndarray, torch.utils.data.dataset.Subset]): The values with proper dtype.

Raises:

  • ValueError: If the values' dtype doesn't match the expected one or casting is not possible.


function compute_bits_precision

compute_bits_precision(x: ndarray) → int

Compute the number of bits required to represent x.

Args:

  • x (numpy.ndarray): Integer data

Returns:

  • int: the number of bits required to represent x


function is_brevitas_model

is_brevitas_model(model: Module) → bool

Check if a model is a Brevitas type.

Args:

  • model: PyTorch model.

Returns:

  • bool: True if model is a Brevitas network.


function to_tuple

to_tuple(x: Any) → tuple

Make the input a tuple if it is not already the case.

Args:

  • x (Any): The input to consider. It can already be a tuple.

Returns:

  • tuple: The input as a tuple.


function all_values_are_integers

all_values_are_integers(*values: Any) → bool

Indicate if all unpacked values are of a supported integer dtype.

Args:

  • *values (Any): The values to consider.

Returns:

  • bool: Whether all values are supported integers or not.


function all_values_are_floats

all_values_are_floats(*values: Any) → bool

Indicate if all unpacked values are of a supported float dtype.

Args:

  • *values (Any): The values to consider.

Returns:

  • bool: Whether all values are supported floating points or not.


function all_values_are_of_dtype

all_values_are_of_dtype(*values: Any, dtypes: Union[str, List[str]]) → bool

Indicate if all unpacked values are of the specified dtype(s).

Args:

  • *values (Any): The values to consider.

  • dtypes (Union[str, List[str]]): The dtype(s) to consider.

Returns:

  • bool: Whether all values are of the specified dtype(s) or not.


class FheMode

Enum representing the execution mode.

This enum inherits from str in order to be able to easily compare a string parameter to its equivalent Enum attribute.

Examples:

 >>> fhe_disable = FheMode.DISABLE
 >>> fhe_disable == "disable"
 True

 >>> fhe_disable == "execute"
 False

 >>> FheMode.is_valid("simulate")
 True

 >>> FheMode.is_valid(FheMode.EXECUTE)
 True

 >>> FheMode.is_valid("predict_in_fhe")
 False

concrete.ml.deployment.server.md

module concrete.ml.deployment.server

Deployment server.

Routes:

  • Get client.zip

  • Add a key

  • Compute

concrete.ml.deployment.md

module concrete.ml.deployment

Module for deployment of the FHE model.

Global Variables

  • fhe_client_server

concrete.ml.deployment.fhe_client_server.md

module concrete.ml.deployment.fhe_client_server

APIs for FHE deployment.

Global Variables

  • CML_VERSION


function check_concrete_versions

check_concrete_versions(zip_path: Path)

Check that current versions match the ones used in development.

This function loads the version JSON file found in client.zip or server.zip files and then checks that current package versions (Concrete Python, Concrete ML) as well as the Python current version all match the ones that are currently installed.

Args:

  • zip_path (Path): The path to the client or server zip file that contains the version.json file to check.

Raises:

  • ValueError: If at least one version mismatch is found.


class FHEModelServer

Server API to load and run the FHE circuit.

method __init__

__init__(path_dir: str)

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved


method load

load()

Load the circuit.


method run

run(
    serialized_encrypted_quantized_data: bytes,
    serialized_evaluation_keys: bytes
) → bytes

Run the model on the server over encrypted data.

Args:

  • serialized_encrypted_quantized_data (bytes): the encrypted, quantized and serialized data

  • serialized_evaluation_keys (bytes): the serialized evaluation keys

Returns:

  • bytes: the result of the model


class FHEModelDev

Dev API to save the model and then load and run the FHE circuit.

method __init__

__init__(path_dir: str, model: Any = None)

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved

  • model (Any): the model to use for the FHE API


method save

save(via_mlir: bool = False)

Export all needed artifacts for the client and server.

Arguments:

  • via_mlir (bool): serialize with via_mlir option from Concrete-Python. For more details on the topic please refer to Concrete-Python's documentation.

Raises:

  • Exception: path_dir is not empty


class FHEModelClient

Client API to encrypt and decrypt FHE data.

method __init__

__init__(path_dir: str, key_dir: Optional[str] = None)

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved

  • key_dir (str): the path to the directory where the keys are stored


method deserialize_decrypt

deserialize_decrypt(serialized_encrypted_quantized_result: bytes) → ndarray

Deserialize and decrypt the values.

Args:

  • serialized_encrypted_quantized_result (bytes): the serialized, encrypted and quantized result

Returns:

  • numpy.ndarray: the decrypted and deserialized values


method deserialize_decrypt_dequantize

deserialize_decrypt_dequantize(
    serialized_encrypted_quantized_result: bytes
) → ndarray

Deserialize, decrypt and de-quantize the values.

Args:

  • serialized_encrypted_quantized_result (bytes): the serialized, encrypted and quantized result

Returns:

  • numpy.ndarray: the decrypted (de-quantized) values


method generate_private_and_evaluation_keys

generate_private_and_evaluation_keys(force=False)

Generate the private and evaluation keys.

Args:

  • force (bool): if True, regenerate the keys even if they already exist


method get_serialized_evaluation_keys

get_serialized_evaluation_keys() → bytes

Get the serialized evaluation keys.

Returns:

  • bytes: the evaluation keys


method load

load()

Load the quantizers along with the FHE specs.


method quantize_encrypt_serialize

quantize_encrypt_serialize(x: ndarray) → bytes

Quantize, encrypt and serialize the values.

Args:

  • x (numpy.ndarray): the values to quantize, encrypt and serialize

Returns:

  • bytes: the quantized, encrypted and serialized values
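Putting the three classes together, a typical save/serve/encrypt flow could look like the sketch below, based on the methods documented above (compiled_model and x_test are placeholders for an already compiled Concrete ML estimator and a test input):

from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Developer side: export the artifacts (client.zip, server.zip) of a compiled model
FHEModelDev(path_dir="./deployment", model=compiled_model).save()

# Server side: load the FHE circuit
server = FHEModelServer(path_dir="./deployment")
server.load()

# Client side: generate keys, then quantize, encrypt and serialize an input
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x_test)

# The server computes over encrypted data; the client decrypts and de-quantizes
encrypted_result = server.run(encrypted_input, evaluation_keys)
result = client.deserialize_decrypt_dequantize(encrypted_result)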

concrete.ml.deployment.deploy_to_docker.md

module concrete.ml.deployment.deploy_to_docker

Methods to deploy a server using Docker.

It takes as input a folder with:

  • client.zip

  • server.zip

  • processing.json

It builds a Docker image and spawns a Docker container that runs the server.

This module is untested as it would require building the release Docker image first. FIXME: https://github.com/zama-ai/concrete-ml-internal/issues/3347

Global Variables

  • DATE_FORMAT


function delete_image

delete_image(image_name: str)

Delete a Docker image.

Arguments:

  • image_name (str): the name of the image to delete.


function stop_container

stop_container(image_name: str)

Kill all containers that use a given image.

Arguments:

  • image_name (str): name of Docker image for which to stop Docker containers.


function build_docker_image

build_docker_image(path_to_model: Path, image_name: str)

Build server Docker image.

Arguments:

  • path_to_model (Path): path to serialized model to serve.

  • image_name (str): name to give to the image.


function main

main(path_to_model: Path, image_name: str)

Deploy function.

  • Builds the Docker image.

  • Runs the Docker server.

  • Stops the container and deletes the image.

Arguments:

  • path_to_model (Path): path to the model to serve

  • image_name (str): name of the Docker image

concrete.ml.onnx.md

module concrete.ml.onnx

ONNX module.

Global Variables

  • onnx_impl_utils

  • ops_impl

  • onnx_utils

  • convert

  • onnx_model_manipulations

concrete.ml.deployment.utils.md

module concrete.ml.deployment.utils

Utils.

  • Check if connection possible

  • Wait for connection to be available (with timeout)


function filter_logs

filter_logs(previous_logs: str, current_logs: str) → str

Filter logs based on previous logs.

Arguments:

  • previous_logs (str): previous logs

  • current_logs (str): current logs

Returns:

  • str: filtered logs


function wait_for_connection_to_be_available

wait_for_connection_to_be_available(
    hostname: str,
    ip_address: str,
    path_to_private_key: Path,
    timeout: int = 1,
    wait_time: int = 1,
    max_retries: int = 20,
    wait_bar: bool = False
)

Wait for connection to be available.

Arguments:

  • hostname (str): host name

  • ip_address (str): ip address

  • path_to_private_key (Path): path to private key

  • timeout (int): ssh timeout option

  • wait_time (int): time to wait between retries

  • max_retries (int): number of retries, if < 0 unlimited retries

  • wait_bar (bool): tqdm progress bar of retries

Raises:

  • TimeoutError: if it wasn't able to connect to ssh with the given constraints


function is_connection_available

is_connection_available(
    hostname: str,
    ip_address: str,
    path_to_private_key: Path,
    timeout: int = 1
)

Check if ssh connection is available.

Arguments:

  • hostname (str): host name

  • ip_address (str): ip address

  • path_to_private_key (Path): path to private key

  • timeout: ssh timeout option

Returns:

  • bool: True if connection succeeded

concrete.ml.onnx.convert.md

module concrete.ml.onnx.convert

ONNX conversion related code.

Global Variables

  • IMPLEMENTED_ONNX_OPS

  • OPSET_VERSION_FOR_ONNX_EXPORT


function fuse_matmul_bias_to_gemm

fuse_matmul_bias_to_gemm(onnx_model: ModelProto)

Fuse sequence of matmul -> add into a gemm node.

Args:

  • onnx_model (onnx.ModelProto): An ONNX model to optimize by fusing MatMul + Add into Gemm

Returns:

  • onnx.ModelProto: the optimized onnx model


function get_equivalent_numpy_forward_from_torch

get_equivalent_numpy_forward_from_torch(
    torch_module: Module,
    dummy_input: Union[Tensor, Tuple[Tensor, ...]],
    output_onnx_file: Union[NoneType, Path, str] = None
) → Tuple[Callable[..., Tuple[ndarray, ...]], ModelProto]

Get the numpy equivalent forward of the provided torch Module.

Args:

  • torch_module (torch.nn.Module): the torch Module for which to get the equivalent numpy forward.

  • dummy_input (Union[torch.Tensor, Tuple[torch.Tensor, ...]]): dummy inputs for ONNX export.

  • output_onnx_file (Optional[Union[Path, str]]): Path to save the ONNX file to. Will use a temp file if not provided. Defaults to None.

Returns:

  • Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.GraphProto]: The function that will execute the equivalent numpy code to the passed torch_module and the generated ONNX model.


function get_equivalent_numpy_forward_from_onnx

get_equivalent_numpy_forward_from_onnx(
    onnx_model: ModelProto,
    check_model: bool = True
) → Tuple[Callable[..., Tuple[ndarray, ...]], ModelProto]

Get the numpy equivalent forward of the provided ONNX model.

Args:

  • onnx_model (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward.

  • check_model (bool): set to True to run the onnx checker on the model. Defaults to True.

Raises:

  • ValueError: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy.

Returns:

  • Callable[..., Tuple[numpy.ndarray, ...]]: The function that will execute the equivalent numpy function.

concrete.ml.onnx.onnx_impl_utils.md

module concrete.ml.onnx.onnx_impl_utils

Utility functions for onnx operator implementations.


function numpy_onnx_pad

numpy_onnx_pad(
    x: ndarray,
    pads: Tuple[int, ...],
    pad_value: Union[float, int, ndarray] = 0,
    int_only: bool = False
) → ndarray

Pad a tensor according to ONNX spec, using an optional custom pad value.

Args:

  • x (numpy.ndarray): input tensor to pad

  • pads (List[int]): padding values according to ONNX spec

  • pad_value (Optional[Union[float, int]]): value used to fill in padding, default 0

  • int_only (bool): set to True to generate integer only code with Concrete

Returns:

  • res (numpy.ndarray): the input tensor with padding applied


function compute_conv_output_dims

compute_conv_output_dims(
    input_shape: Tuple[int, ...],
    kernel_shape: Tuple[int, ...],
    pads: Tuple[int, ...],
    strides: Tuple[int, ...],
    ceil_mode: int
) → Tuple[int, ...]

Compute the output shape of a pool or conv operation.

See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html for details on the computation of the output shape.

Args:

  • input_shape (Tuple[int, ...]): shape of the input to be padded as N x C x H x W

  • kernel_shape (Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)

  • pads (Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W

  • strides (Tuple[int, ...]): strides for each dimension

  • ceil_mode (int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc

Returns:

  • res (Tuple[int, ...]): shape of the output of a conv or pool operator with given parameters
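For reference, the output size along one spatial dimension follows the usual PyTorch-style formula, sketched below (helper name and example values are illustrative):

import math

def pool_output_size(in_size, kernel, pad_start, pad_end, stride, ceil_mode):
    """Output size along one dimension for a conv or pool operation."""
    size = (in_size + pad_start + pad_end - kernel) / stride + 1
    return math.ceil(size) if ceil_mode else math.floor(size)

# Example: a width-5 input, kernel 2, stride 2, no padding
assert pool_output_size(5, 2, 0, 0, 2, ceil_mode=0) == 2
assert pool_output_size(5, 2, 0, 0, 2, ceil_mode=1) == 3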


function compute_onnx_pool_padding

compute_onnx_pool_padding(
    input_shape: Tuple[int, ...],
    kernel_shape: Tuple[int, ...],
    pads: Tuple[int, ...],
    strides: Tuple[int, ...],
    ceil_mode: int
) → Tuple[int, ...]

Compute any additional padding needed to compute pooling layers.

The ONNX standard uses ceil_mode=1 to match TensorFlow style pooling output computation. In this setting, the kernel can be placed at a valid position even though it contains values outside of the input shape including padding. The ceil_mode parameter controls whether this mode is enabled. If the mode is not enabled, the output shape follows PyTorch rules.

Args:

  • input_shape (Tuple[int, ...]): shape of the input to be padded as N x C x H x W

  • kernel_shape (Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)

  • pads (Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W

  • strides (Tuple[int, ...]): strides for each dimension

  • ceil_mode (int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc

Returns:

  • res (Tuple[int, ...]): shape of the output of a conv or pool operator with given parameters


function onnx_avgpool_compute_norm_const

onnx_avgpool_compute_norm_const(
    input_shape: Tuple[int, ...],
    kernel_shape: Tuple[int, ...],
    pads: Tuple[int, ...],
    strides: Tuple[int, ...],
    ceil_mode: int
) → Union[ndarray, float]

Compute the average pooling normalization constant.

This constant can be a tensor of the same shape as the input or a scalar.

Args:

  • input_shape (Tuple[int, ...]): shape of the input to be padded as N x C x H x W

  • kernel_shape (Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)

  • pads (Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W

  • strides (Tuple[int, ...]): strides for each dimension

  • ceil_mode (int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc

Returns:

  • res (float): tensor or scalar, corresponding to normalization factors to apply for the average pool computation for each valid kernel position

concrete.ml.onnx.onnx_model_manipulations.md

module concrete.ml.onnx.onnx_model_manipulations

Some code to manipulate models.


function simplify_onnx_model

simplify_onnx_model(onnx_model: ModelProto)

Simplify an ONNX model, removes unused Constant nodes and Identity nodes.

Args:

  • onnx_model (onnx.ModelProto): the model to simplify.


function remove_unused_constant_nodes

remove_unused_constant_nodes(onnx_model: ModelProto)

Remove unused Constant nodes in the provided onnx model.

Args:

  • onnx_model (onnx.ModelProto): the model for which we want to remove unused Constant nodes.


function remove_identity_nodes

remove_identity_nodes(onnx_model: ModelProto)

Remove identity nodes from a model.

Args:

  • onnx_model (onnx.ModelProto): the model for which we want to remove Identity nodes.


function keep_following_outputs_discard_others

keep_following_outputs_discard_others(
    onnx_model: ModelProto,
    outputs_to_keep: Iterable[str]
)

Keep the outputs given in outputs_to_keep and remove the others from the model.

Args:

  • onnx_model (onnx.ModelProto): the ONNX model to modify.

  • outputs_to_keep (Iterable[str]): the outputs to keep by name.


function remove_node_types

remove_node_types(onnx_model: ModelProto, op_types_to_remove: List[str])

Remove unnecessary nodes from the ONNX graph.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model to modify.

  • op_types_to_remove (List[str]): The node types to remove from the graph.

Raises:

  • ValueError: Wrong replacement by an Identity node.


function clean_graph_at_node_op_type

clean_graph_at_node_op_type(
    onnx_model: ModelProto,
    node_op_type: str,
    fail_if_not_found: bool = True
)

Clean the graph of the onnx model by removing nodes at the given node type.

Note: the specified node_type is also removed.

Args:

  • onnx_model (onnx.ModelProto): The onnx model.

  • node_op_type (str): The node's op_type whose following nodes will be removed.

  • fail_if_not_found (bool): If true, abort if the node op_type is not found

Raises:

  • ValueError: if the node op_type is not found and fail_if_not_found is set


function clean_graph_after_node_op_type

clean_graph_after_node_op_type(
    onnx_model: ModelProto,
    node_op_type: str,
    fail_if_not_found: bool = True
)

Clean the graph of the onnx model by removing nodes after the given node type.

Args:

  • onnx_model (onnx.ModelProto): The onnx model.

  • node_op_type (str): The node's op_type whose following nodes will be removed.

  • fail_if_not_found (bool): If true, abort if the node op_type is not found

Raises:

  • ValueError: if the node op_type is not found and if fail_if_not_found is set

concrete.ml.onnx.onnx_utils.md

module concrete.ml.onnx.onnx_utils

Utils to interpret an ONNX model with numpy.

Global Variables

  • ATTR_TYPES

  • ATTR_GETTERS

  • ONNX_OPS_TO_NUMPY_IMPL

  • ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_FLOAT

  • ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_BOOL

  • ONNX_OPS_TO_NUMPY_IMPL_BOOL

  • IMPLEMENTED_ONNX_OPS


function get_attribute

get_attribute(attribute: AttributeProto) → Any

Get the attribute from an ONNX AttributeProto.

Args:

  • attribute (onnx.AttributeProto): The attribute to retrieve the value from.

Returns:

  • Any: The stored attribute value.


function get_op_type

get_op_type(node)

Construct the qualified type name of the ONNX operator.

Args:

  • node (Any): ONNX graph node

Returns:

  • result (str): qualified name


function execute_onnx_with_numpy

execute_onnx_with_numpy(graph: GraphProto, *inputs: ndarray) → Tuple[ndarray, ...]

Execute the provided ONNX graph on the given inputs.

Args:

  • graph (onnx.GraphProto): The ONNX graph to execute.

  • *inputs: The inputs of the graph.

Returns:

  • Tuple[numpy.ndarray]: The result of the graph's execution.


function remove_initializer_from_input

remove_initializer_from_input(model: ModelProto)

Remove initializers from model inputs.

In some cases, ONNX initializers may appear, erroneously, as graph inputs. This function searches all model inputs and removes those that are initializers.

Args:

  • model (onnx.ModelProto): the model to clean

Returns:

  • onnx.ModelProto: the cleaned model

concrete.ml.quantization.md

module concrete.ml.quantization

Modules for quantization.

Global Variables

  • quantizers

  • base_quantized_op

  • quantized_module

  • quantized_ops

  • quantized_module_passes

  • post_training

  • qat_quantizers

concrete.ml.pytest.md

module concrete.ml.pytest

Module containing common functions used for pytest.

Global Variables

  • torch_models

  • utils

concrete.ml.pytest.utils.md

module concrete.ml.pytest.utils

Common functions or lists for test files, which can't be put in fixtures.

Global Variables

  • MODELS_AND_DATASETS

  • UNIQUE_MODELS_AND_DATASETS


function get_sklearn_linear_models_and_datasets

Get the pytest parameters to use for testing linear models.

Args:

  • regressor (bool): If regressors should be selected.

  • classifier (bool): If classifiers should be selected.

  • unique_models (bool): If each model should be represented only once.

  • select (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) match the given string or list of strings. Default to None.

  • ignore (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) do not match the given string or list of strings. Default to None.

Returns:

  • List: The pytest parameters to use for testing linear models.
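
For illustration, a sketch of collecting these parameters in a test file (the filter string below is arbitrary):

from concrete.ml.pytest.utils import get_sklearn_linear_models_and_datasets

# Only linear classifiers, skipping any model whose name contains "SGD"
linear_classifier_params = get_sklearn_linear_models_and_datasets(
    regressor=False,
    classifier=True,
    ignore="SGD",
)

# The returned list is meant to be given to pytest.mark.parametrize in test files
print(len(linear_classifier_params))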


function get_sklearn_tree_models_and_datasets

Get the pytest parameters to use for testing tree-based models.

Args:

  • regressor (bool): If regressors should be selected.

  • classifier (bool): If classifiers should be selected.

  • unique_models (bool): If each model should be represented only once.

  • select (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) match the given string or list of strings. Default to None.

  • ignore (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) do not match the given string or list of strings. Default to None.

Returns:

  • List: The pytest parameters to use for testing tree-based models.


function get_sklearn_neural_net_models_and_datasets

Get the pytest parameters to use for testing neural network models.

Args:

  • regressor (bool): If regressors should be selected.

  • classifier (bool): If classifiers should be selected.

  • unique_models (bool): If each model should be represented only once.

  • select (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) match the given string or list of strings. Default to None.

  • ignore (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) do not match the given string or list of strings. Default to None.

Returns:

  • List: The pytest parameters to use for testing neural network models.


function get_sklearn_neighbors_models_and_datasets

Get the pytest parameters to use for testing neighbor models.

Args:

  • regressor (bool): If regressors should be selected.

  • classifier (bool): If classifiers should be selected.

  • unique_models (bool): If each model should be represented only once.

  • select (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) match the given string or list of strings. Default to None.

  • ignore (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) do not match the given string or list of strings. Default to None.

Returns:

  • List: The pytest parameters to use for testing neighbor models.


function get_sklearn_all_models_and_datasets

Get the pytest parameters to use for testing all models available in Concrete ML.

Args:

  • regressor (bool): If regressors should be selected.

  • classifier (bool): If classifiers should be selected.

  • unique_models (bool): If each model should be represented only once.

  • select (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) match the given string or list of strings. Default to None.

  • ignore (Optional[Union[str, List[str]]]): If not None, only return models whose names (or part of their names) do not match the given string or list of strings. Default to None.

Returns:

  • List: The pytest parameters to use for testing all models available in Concrete ML.


function instantiate_model_generic

Instantiate any Concrete ML model type.

Args:

  • model_class (class): The type of the model to instantiate.

  • n_bits (int): The number of quantization bits to use when initializing the model. For QNNs, default parameters are used based on whether n_bits is greater or smaller than 8.

  • parameters (dict): Hyper-parameters for the model instantiation. For QNNs, these parameters will override the matching default ones.

Returns:

  • model_name (str): The type of the model as a string.

  • model (object): The model instance.


function data_calibration_processing

Reduce the size of the given data-set.

Args:

  • data: The input container to consider

  • n_sample (int): Number of samples to keep from the given data-set.

  • targets: If data is a torch.utils.data.Dataset, it typically contains both the data and the corresponding targets, and targets must then be set to None. If data is an instance of torch.Tensor or numpy.ndarray, targets is expected.

Returns:

  • Tuple[numpy.ndarray, numpy.ndarray]: The input data and the target (respectively x and y).

Raises:

  • TypeError: If the 'data-set' does not match any expected type.
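
A small sketch of calibration data sub-sampling with arbitrary synthetic data:

import numpy
from concrete.ml.pytest.utils import data_calibration_processing

# Arbitrary synthetic data: 1000 examples with 10 features and binary targets
x = numpy.random.rand(1000, 10)
y = numpy.random.randint(0, 2, size=1000)

# Keep 100 samples for calibration
x_calib, y_calib = data_calibration_processing(x, n_sample=100, targets=y)
print(x_calib.shape, y_calib.shape)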


function load_torch_model

Load an object saved with torch.save() from a file or dict.

Args:

  • model_class (torch.nn.Module): A PyTorch or Brevitas network.

  • state_dict_or_path (Optional[Union[str, Path, Dict[str, Any]]]): Path or state_dict

  • params (Dict): Model's parameters

  • device (str): Device type.

Returns:

  • torch.nn.Module: A PyTorch or Brevitas network.


function values_are_equal

Indicate if two values are equal.

This method takes into account objects of type None, numpy.ndarray, numpy.floating, numpy.integer, numpy.random.RandomState or any instance that provides a __eq__ method.

Args:

  • value_1 (Any): The first value to consider.

  • value_2 (Any): The second value to consider.

Returns:

  • bool: If the two values are equal.


function check_serialization

Check that the given object can properly be serialized.

This function serializes all objects using the dump, dumps, load and loads functions from Concrete ML. If the given object provides a dump and dumps method, they are also serialized using these.

Args:

  • object_to_serialize (Any): The object to serialize.

  • expected_type (Type): The object's expected type.

  • equal_method (Optional[Callable]): The function to use to compare the two loaded objects. Default to values_are_equal.

  • check_str (bool): If the JSON strings should also be checked. Default to True.
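
A sketch of how this check is typically called, here on a plain numpy array, which is one of the object types that values_are_equal knows how to compare (Concrete ML objects such as quantizers or fitted models can be checked the same way, possibly with a custom equal_method):

import numpy
from concrete.ml.pytest.utils import check_serialization

# Serialize and de-serialize the array, then compare the round-tripped values
values = numpy.arange(10, dtype=numpy.int64)
check_serialization(values, numpy.ndarray)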

concrete.ml.pytest.torch_models.md

module concrete.ml.pytest.torch_models

Torch modules for our pytests.


class SimpleNet

Fake torch model used to generate some onnx.

method __init__


method forward

Forward function.

Arguments:

  • inputs: the inputs of the model.

Returns:

  • torch.Tensor: the result of the computation


class FCSmall

Torch model for the tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class FC

Torch model for the tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class CNN

Torch CNN model for the tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class CNNMaxPool

Torch CNN model for the tests with a max pool.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class CNNOther

Torch CNN model for the tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class CNNInvalid

Torch CNN model for the tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class CNNGrouped

Torch CNN model with grouped convolution for compile torch tests.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class NetWithLoops

Torch model, where we reuse some elements in a loop.

Torch model, where we reuse some elements in a loop in the forward and don't expect the user to define these elements in a particular order.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class MultiInputNN

Torch model to test multiple inputs forward.

method __init__


method forward

Forward pass.

Args:

  • x: the first input of the NN

  • y: the second input of the NN

Returns: the output of the NN


class MultiInputNNConfigurable

Torch model to test multiple inputs forward.

method __init__


method forward

Forward pass.

Args:

  • x: the first input of the NN

  • y: the second input of the NN

Returns: the output of the NN


class MultiInputNNDifferentSize

Torch model to test multiple inputs with different shape in the forward pass.

method __init__


method forward

Forward pass.

Args:

  • x: The first input of the NN.

  • y: The second input of the NN.

Returns: The output of the NN.


class BranchingModule

Torch model with some branching and skip connections.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class BranchingGemmModule

Torch model with some branching and skip connections.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class UnivariateModule

Torch model that calls univariate and shape functions of torch.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class StepActivationModule

Torch model implements a step function that needs Greater, Cast and Where.

method __init__


method forward

Forward pass with a quantizer built into the computation graph.

Args:

  • x: the input of the NN

Returns: the output of the NN


class NetWithConcatUnsqueeze

Torch model to test the concat and unsqueeze operators.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class MultiOpOnSingleInputConvNN

Network that applies two quantized operations on a single input.

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class FCSeq

Torch model that should generate MatMul->Add ONNX patterns.

This network generates additions with a constant scalar

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class FCSeqAddBiasVec

Torch model that should generate MatMul->Add ONNX patterns.

This network tests the addition with a constant vector

method __init__


method forward

Forward pass.

Args:

  • x: the input of the NN

Returns: the output of the NN


class TinyCNN

A very small CNN.

method __init__

Create the tiny CNN with two conv layers.

Args:

  • n_classes: number of classes

  • act: the activation


method forward

Forward the two layers with the chosen activation function.

Args:

  • x: the input of the NN

Returns: the output of the NN


class TinyQATCNN

A very small QAT CNN to classify the sklearn digits data-set.

This class also allows pruning to a maximum of 10 active neurons, which should help keep the accumulator bit-width low.

method __init__

Construct the CNN with a configurable number of classes.

Args:

  • n_classes (int): number of outputs of the neural net

  • n_bits (int): number of weight and activation bits for quantization

  • n_active (int): number of active (non-zero weight) neurons to keep

  • signed (bool): whether quantized integer values are signed

  • narrow (bool): whether the range of quantized integer values is narrow/symmetric

  • power_of_two_scaling (bool): whether to use power-of-two scaling quantizers, which makes it possible to test the round PBS optimization when the scales are powers of two


method forward

Run inference on the tiny CNN, apply the decision layer on the reshaped conv output.

Args:

  • x: the input to the NN

Returns: the output of the NN


method toggle_pruning

Enable or remove pruning.

Args:

  • enable: if we enable the pruning or not


class SimpleQAT

Torch model implements a step function that needs Greater, Cast and Where.

method __init__


method forward

Forward pass with a quantizer built into the computation graph.

Args:

  • x: the input of the NN

Returns: the output of the NN


class QATTestModule

Torch model that implements a simple non-uniform quantizer.

method __init__


method forward

Forward pass with a quantizer built into the computation graph.

Args:

  • x: the input of the NN

Returns: the output of the NN


class SingleMixNet

Torch model with a single conv layer that produces the output, e.g., a blur filter.

method __init__


method forward

Execute the single convolution.

Args:

  • x: the input of the NN

Returns: the output of the NN


class DoubleQuantQATMixNet

Torch model with two different quantizers on the input.

Used to test that it keeps the input TLU.

method __init__


method forward

Execute the single convolution.

Args:

  • x: the input of the NN

Returns: the output of the NN


class TorchSum

Torch model to test the ReduceSum ONNX operator in a leveled circuit.

method __init__

Initialize the module.

Args:

  • dim (Tuple[int]): The axis along which the sum should be executed

  • keepdim (bool): If the output should keep the same dimension as the input or not


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model

Returns:

  • torch_sum (torch.tensor): The sum of the input's tensor elements along the given axis


class TorchSumMod

Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS.

method __init__

Initialize the module.

Args:

  • dim (Tuple[int]): The axis along which the sum should be executed

  • keepdim (bool): If the output should keep the same dimension as the input or not


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model

Returns:

  • torch_sum (torch.tensor): The sum of the input's tensor elements along the given axis


class NetWithConstantsFoldedBeforeOps

Torch QAT model that does not quantize the inputs.

method __init__


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model

Returns:

  • torch.tensor: Output of the network


class ShapeOperationsNet

Torch QAT model that reshapes the input.

method __init__


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model

Returns:

  • torch.tensor: Output of the network


class PaddingNet

Torch QAT model that applies various padding patterns.

method __init__


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model

Returns:

  • torch.tensor: Output of the network


class QuantCustomModel

A small quantized network with Brevitas, trained on make_classification.

method __init__

Quantized Torch Model with Brevitas.

Args:

  • input_shape (int): Input size

  • output_shape (int): Output size

  • hidden_shape (int): Hidden size

  • n_bits (int): Number of quantization bits

  • weight_quant (brevitas.quant): Quantization protocol of weights

  • act_quant (brevitas.quant): Quantization protocol of activations.

  • bias_quant (brevitas.quant): Quantizer for the linear layer bias


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model.

Returns:

  • torch.tensor: Output of the network.


class TorchCustomModel

A small network with Brevitas, trained on make_classification.

method __init__

Torch Model.

Args:

  • input_shape (int): Input size

  • output_shape (int): Output size

  • hidden_shape (int): Hidden size


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model.

Returns:

  • torch.tensor: Output of the network.


class ConcatFancyIndexing

Concat with fancy indexing.

method __init__

Torch Model.

Args:

  • input_shape (int): Input size

  • output_shape (int): Output size

  • hidden_shape (int): Hidden size

  • n_bits (int): Number of bits

  • n_blocks (int): Number of blocks


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model.

Returns:

  • torch.tensor: Output of the network.


class PartialQATModel

A model with a QAT Module.

method __init__


method forward

Forward pass.

Args:

  • x (torch.tensor): The input of the model.

Returns:

  • torch.tensor: Output of the network.

concrete.ml.quantization.base_quantized_op.md

module concrete.ml.quantization.base_quantized_op

Base Quantized Op class that implements quantization for a float numpy op.

Global Variables

  • ONNX_OPS_TO_NUMPY_IMPL

  • ALL_QUANTIZED_OPS

  • ONNX_OPS_TO_QUANTIZED_IMPL

  • DEFAULT_MODEL_BITS


class QuantizedOp

Base class for quantized ONNX ops implemented in numpy.

Args:

  • n_bits_output (int): The number of bits to use for the quantization of the output

  • op_instance_name (str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.

  • int_input_names (Set[str]): The set of names of integer tensors that are inputs to this op

  • constant_inputs (Optional[Union[Dict[str, Any], Dict[int, Any]]]): The constant tensors that are inputs to this op

  • input_quant_opts (QuantizationOptions): Input quantizer options, determine the quantization that is applied to input tensors (that are not constants)

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method calibrate

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method call_impl

Call self.impl to centralize mypy bug workaround.

Args:

  • *inputs (numpy.ndarray): real valued inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • numpy.ndarray: return value of self.impl


method can_fuse

Determine if the operator impedes graph fusion.

This function shall be overloaded by inheriting classes to test self._int_input_names, to determine whether the operation can be fused to a TLU or not. For example an operation that takes inputs produced by a unique integer tensor can be fused to a TLU. Example: f(x) = x * (x + 1) can be fused. A function that does f(x) = x * (x @ w + 1) can't be fused.

Returns:

  • bool: whether this QuantizedOp instance produces Concrete code that can be fused to TLUs
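
To illustrate the fusion criterion with plain PyTorch modules (these two toy models only mirror the f(x) examples above):

import torch

class Fusable(torch.nn.Module):
    def forward(self, x):
        # f(x) = x * (x + 1): both operands derive from the same integer tensor,
        # so the whole expression can be evaluated in a single TLU
        return x * (x + 1)

class NotFusable(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 10)

    def forward(self, x):
        # f(x) = x * (x @ w + 1): the right operand mixes several encrypted values
        # through the Linear layer, so the product cannot be fused into a TLU
        return x * (self.fc(x) + 1)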


method dump

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizedOp: The loaded object.


classmethod must_quantize_input

Determine if an input must be quantized.

Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.

Args:

  • input_name_or_idx (int): Index of the input to check.

Returns:

  • result (bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.


classmethod op_type

Get the type of this operation.

Returns:

  • op_type (str): The type of this operation, in the ONNX referential


method prepare_output

Quantize the output of the activation function.

The calibrate method needs to be called with sample data before using this function.

Args:

  • qoutput_activation (numpy.ndarray): Output of the activation function.

Returns:

  • QuantizedArray: Quantized output.


method q_impl

Execute the quantized forward.

Args:

  • *q_inputs (ONNXOpInputOutputType): Quantized inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • ONNXOpInputOutputType: The returned quantized value.


class QuantizedOpUnivariateOfEncrypted

A univariate operator applied to an encrypted value.

This operation does not really operate as a quantized operation. It is useful when the computations get fused into a TLU, as in, e.g., Act(x) = x || (x + 42).

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method calibrate

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method call_impl

Call self.impl to centralize mypy bug workaround.

Args:

  • *inputs (numpy.ndarray): real valued inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • numpy.ndarray: return value of self.impl


method can_fuse

Determine if this op can be fused.

This operation can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.

Returns:

  • bool: Can fuse


method dump

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizedOp: The loaded object.


classmethod must_quantize_input

Determine if an input must be quantized.

Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.

Args:

  • input_name_or_idx (int): Index of the input to check.

Returns:

  • result (bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.


classmethod op_type

Get the type of this operation.

Returns:

  • op_type (str): The type of this operation, in the ONNX referential


method prepare_output

Quantize the output of the activation function.

The calibrate method needs to be called with sample data before using this function.

Args:

  • qoutput_activation (numpy.ndarray): Output of the activation function.

Returns:

  • QuantizedArray: Quantized output.


method q_impl

Execute the quantized forward.

Args:

  • *q_inputs (ONNXOpInputOutputType): Quantized inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • ONNXOpInputOutputType: The returned quantized value.


class QuantizedMixingOp

An operator that mixes (adds or multiplies) together encrypted inputs.

Mixing operators cannot be fused to TLUs.

method __init__

Initialize quantized ops parameters plus specific parameters.

Args:

  • rounding_threshold_bits (Optional[int]): Number of bits to round to.

  • *args: positional argument to pass to the parent class.

  • **kwargs: named argument to pass to the parent class.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method calibrate

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method call_impl

Call self.impl to centralize mypy bug workaround.

Args:

  • *inputs (numpy.ndarray): real valued inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • numpy.ndarray: return value of self.impl


method can_fuse

Determine if this op can be fused.

Mixing operations cannot be fused since they must be performed over integer tensors and they combine different encrypted elements of the input tensors. Mixing operations are Conv, MatMul, etc.

Returns:

  • bool: False, this operation cannot be fused as it adds different encrypted integers


method cnp_round

Round the input array to the specified number of bits.

Args:

  • x (Union[numpy.ndarray, fhe.tracing.Tracer]): The input array to be rounded.

  • calibrate_rounding (bool): Whether to calibrate the rounding (compute the lsbs_to_remove)

Returns:

  • numpy.ndarray: The rounded array.


method dump

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizedOp: The loaded object.


method make_output_quant_parameters

Build a quantized array from quantized integer results of the op and quantization params.

Args:

  • q_values (Union[numpy.ndarray, Any]): the quantized integer values to wrap in the QuantizedArray

  • scale (float): the pre-computed scale of the quantized values

  • zero_point (Union[int, float, numpy.ndarray]): the pre-computed zero_point of the q_values

Returns:

  • QuantizedArray: the quantized array that will be passed to the QuantizedModule output.


classmethod must_quantize_input

Determine if an input must be quantized.

Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.

Args:

  • input_name_or_idx (int): Index of the input to check.

Returns:

  • result (bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.


classmethod op_type

Get the type of this operation.

Returns:

  • op_type (str): The type of this operation, in the ONNX referential


method prepare_output

Quantize the output of the activation function.

The calibrate method needs to be called with sample data before using this function.

Args:

  • qoutput_activation (numpy.ndarray): Output of the activation function.

Returns:

  • QuantizedArray: Quantized output.


method q_impl

Execute the quantized forward.

Args:

  • *q_inputs (ONNXOpInputOutputType): Quantized inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • ONNXOpInputOutputType: The returned quantized value.

Signatures of the concrete.ml.pytest.utils functions documented above:

get_sklearn_linear_models_and_datasets(
    regressor: bool = True,
    classifier: bool = True,
    unique_models: bool = False,
    select: Optional[str, List[str]] = None,
    ignore: Optional[str, List[str]] = None
) → List
get_sklearn_tree_models_and_datasets(
    regressor: bool = True,
    classifier: bool = True,
    unique_models: bool = False,
    select: Optional[str, List[str]] = None,
    ignore: Optional[str, List[str]] = None
) → List
get_sklearn_neural_net_models_and_datasets(
    regressor: bool = True,
    classifier: bool = True,
    unique_models: bool = False,
    select: Optional[str, List[str]] = None,
    ignore: Optional[str, List[str]] = None
) → List
get_sklearn_neighbors_models_and_datasets(
    regressor: bool = True,
    classifier: bool = True,
    unique_models: bool = False,
    select: Optional[str, List[str]] = None,
    ignore: Optional[str, List[str]] = None
) → List
get_sklearn_all_models_and_datasets(
    regressor: bool = True,
    classifier: bool = True,
    unique_models: bool = False,
    select: Optional[str, List[str]] = None,
    ignore: Optional[str, List[str]] = None
) → List
instantiate_model_generic(model_class, n_bits, **parameters)
data_calibration_processing(data, n_sample: int, targets=None)
load_torch_model(
    model_class: Module,
    state_dict_or_path: Optional[str, Path, Dict[str, Any]],
    params: Dict,
    device: str = 'cpu'
) → Module
values_are_equal(value_1: Any, value_2: Any) → bool
check_serialization(
    object_to_serialize: Any,
    expected_type: Type,
    equal_method: Optional[Callable] = None,
    check_str: bool = True
)

Signatures of the __init__ and forward methods of the concrete.ml.pytest.torch_models classes documented above, in the same order as the classes:

__init__() → None
forward(inputs)
__init__(input_output, activation_function)
forward(x)
__init__(activation_function, input_output=3072)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(activation_function, groups)
forward(x)
__init__(input_output, activation_function, groups)
forward(x)
__init__(activation_function, input_output, n_fc_layers)
forward(x)
__init__(input_output, activation_function)
forward(x, y)
__init__(use_conv, use_qat, input_output, n_bits)
forward(x, y)
__init__(
    input_output,
    activation_function=None,
    is_brevitas_qat=False,
    n_bits=3
)
forward(x, y)
__init__(input_output, activation_function)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(input_output, activation_function)
forward(x)
__init__(activation_function, input_output, n_fc_layers)
forward(x)
__init__(can_remove_input_tlu: bool)
forward(x)
__init__(input_output, act)
forward(x)
__init__(input_output, act)
forward(x)
__init__(n_classes, act) → None
forward(x)
__init__(
    n_classes,
    n_bits,
    n_active,
    signed,
    narrow,
    power_of_two_scaling
) → None
forward(x)
toggle_pruning(enable)
__init__(input_output, activation_function, n_bits=2, disable_bit_check=False)
forward(x)
__init__(activation_function)
forward(x)
__init__(use_conv, use_qat, inp_size, n_bits)
forward(x)
__init__(use_conv, use_qat, inp_size, n_bits)
forward(x)
__init__(dim=(0,), keepdim=True)
forward(x)
__init__(dim=(0,), keepdim=True)
forward(x)
__init__(
    hparams: dict,
    bits: int,
    act_quant=<class 'brevitas.quant.scaled_int.Int8ActPerTensorFloat'>,
    weight_quant=<class 'brevitas.quant.scaled_int.Int8WeightPerTensorFloat'>
)
forward(x)
__init__(is_qat)
forward(x)
__init__()
forward(x)
__init__(
    input_shape: int,
    output_shape: int,
    hidden_shape: int = 100,
    n_bits: int = 5,
    act_quant=<class 'brevitas.quant.scaled_int.Int8ActPerTensorFloat'>,
    weight_quant=<class 'brevitas.quant.scaled_int.Int8WeightPerTensorFloat'>,
    bias_quant=None
)
forward(x)
__init__(input_shape, hidden_shape, output_shape)
forward(x)
__init__(
    input_shape,
    hidden_shape,
    output_shape,
    n_bits: int = 4,
    n_blocks: int = 3
) → None
forward(x)
__init__(input_shape: int, output_shape: int, n_bits: int)
forward(x)

Signatures of the methods of QuantizedOp, QuantizedOpUnivariateOfEncrypted and QuantizedMixingOp (documented above), one group per class in that order:

__init__(
    n_bits_output: int,
    op_instance_name: str,
    int_input_names: Optional[Set[str]] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None
calibrate(*inputs: ndarray) → ndarray
call_impl(*inputs: Optional[ndarray, QuantizedArray], **attrs) → ndarray
can_fuse() → bool
dump(file: <class 'TextIO'>) → None
dump_dict() → Dict
dumps() → str
load_dict(metadata: Dict)
must_quantize_input(input_name_or_idx: int) → bool
op_type()
prepare_output(qoutput_activation: ndarray) → QuantizedArray
q_impl(
    *q_inputs: Optional[ndarray, QuantizedArray],
    **attrs
) → Union[ndarray, QuantizedArray, NoneType]
__init__(
    n_bits_output: int,
    op_instance_name: str,
    int_input_names: Optional[Set[str]] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None
calibrate(*inputs: ndarray) → ndarray
call_impl(*inputs: Optional[ndarray, QuantizedArray], **attrs) → ndarray
can_fuse() → bool
dump(file: <class 'TextIO'>) → None
dump_dict() → Dict
dumps() → str
load_dict(metadata: Dict)
must_quantize_input(input_name_or_idx: int) → bool
op_type()
prepare_output(qoutput_activation: ndarray) → QuantizedArray
q_impl(
    *q_inputs: Optional[ndarray, QuantizedArray],
    **attrs
) → Union[ndarray, QuantizedArray, NoneType]
__init__(*args, rounding_threshold_bits: Optional[int] = None, **kwargs) → None
calibrate(*inputs: ndarray) → ndarray
call_impl(*inputs: Optional[ndarray, QuantizedArray], **attrs) → ndarray
can_fuse() → bool
cnp_round(x: Union[ndarray, Tracer], calibrate_rounding: bool) → ndarray
dump(file: <class 'TextIO'>) → None
dump_dict() → Dict
dumps() → str
load_dict(metadata: Dict)
make_output_quant_parameters(
    q_values: Union[ndarray, Any],
    scale: float64,
    zero_point: Union[int, float, ndarray]
) → QuantizedArray
must_quantize_input(input_name_or_idx: int) → bool
op_type()
prepare_output(qoutput_activation: ndarray) → QuantizedArray
q_impl(
    *q_inputs: Optional[ndarray, QuantizedArray],
    **attrs
) → Union[ndarray, QuantizedArray, NoneType]

concrete.ml.quantization.post_training.md

module concrete.ml.quantization.post_training

Post Training Quantization methods.

Global Variables

  • ONNX_OPS_TO_NUMPY_IMPL

  • DEFAULT_MODEL_BITS

  • ONNX_OPS_TO_QUANTIZED_IMPL


function get_n_bits_dict

get_n_bits_dict(n_bits: Union[int, Dict[str, int]]) → Dict[str, int]

Convert the n_bits parameter into a proper dictionary.

Args:

  • n_bits (int, Dict[str, int]): number of bits for quantization, can be a single value or a dictionary with the following keys : - "op_inputs" and "op_weights" (mandatory) - "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.

Returns:

  • n_bits_dict (Dict[str, int]): A dictionary properly representing the number of bits to use for quantization.
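
For illustration, both accepted forms of the n_bits argument:

from concrete.ml.quantization.post_training import get_n_bits_dict

# A single integer sets "op_inputs" and "op_weights"; "model_inputs" and
# "model_outputs" then use the maximum of this value and the default (5)
n_bits_from_int = get_n_bits_dict(3)

# A dictionary with the mandatory keys can also be given directly
n_bits_from_dict = get_n_bits_dict(
    {"op_inputs": 4, "op_weights": 4, "model_inputs": 5, "model_outputs": 5}
)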


class ONNXConverter

Base ONNX to Concrete ML computation graph conversion class.

This class provides a method to parse an ONNX graph and apply several transformations. First, it creates QuantizedOps for each ONNX graph op. These quantized ops have calibrated quantizers that are useful when the operators work on integer data or when the output of the ops is the output of the encrypted program. For operators that compute in float and will be merged to TLUs, these quantizers are not used. Second, this converter creates quantized tensors for initializer and weights stored in the graph.

This class should be sub-classed to provide specific calibration and quantization options depending on the usage (Post-training quantization vs Quantization Aware training).

Arguments:

  • n_bits (int, Dict[str, int]): number of bits for quantization, can be a single value or a dictionary with the following keys : - "op_inputs" and "op_weights" (mandatory) - "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.

  • numpy_model (NumpyModule): Model in numpy.

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    rounding_threshold_bits: Optional[int] = None
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module


class PostTrainingAffineQuantization

Post-training Affine Quantization.

Create the quantized version of the passed numpy module.

Args:

  • n_bits (int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, then it should contain "model_inputs", "op_inputs", "op_weights" and "model_outputs" keys with corresponding number of quantization bits for: - model_inputs : number of bits for model input - op_inputs : number of bits to quantize layer input values - op_weights: learned parameters or constants in the network - model_outputs: final model output quantization bits

  • numpy_model (NumpyModule): Model in numpy.

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

  • is_signed: Whether the weights of the layers can be signed. Currently, only the weights can be signed.

Returns:

  • QuantizedModule: A quantized version of the numpy model.

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    rounding_threshold_bits: Optional[int] = None
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module
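
A minimal post-training quantization sketch; it assumes that NumpyModule (from concrete.ml.torch.numpy_module) can be built from a torch module and a dummy input, and it uses an arbitrary small network with random calibration data:

import numpy
import torch
from concrete.ml.quantization.post_training import PostTrainingAffineQuantization
from concrete.ml.torch.numpy_module import NumpyModule

# Arbitrary small torch network
torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
    torch.nn.Linear(5, 2),
)

# Assumption: NumpyModule traces the torch module to ONNX using a dummy input
numpy_model = NumpyModule(torch_model, torch.randn(1, 10))

# Calibrate the quantizers on representative data and build the QuantizedModule
calibration_data = numpy.random.rand(100, 10).astype(numpy.float32)
post_training = PostTrainingAffineQuantization(n_bits=4, numpy_model=numpy_model)
quantized_module = post_training.quantize_module(calibration_data)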


class PostTrainingQATImporter

Converter of Quantization Aware Training networks.

This class provides specific configuration for QAT networks during ONNX network conversion to Concrete ML computation graphs.

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    rounding_threshold_bits: Optional[int] = None
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module

concrete.ml.quantization.quantized_module.md

module concrete.ml.quantization.quantized_module

QuantizedModule API.

Global Variables

  • SUPPORTED_FLOAT_TYPES

  • SUPPORTED_INT_TYPES

  • USE_OLD_VL


class QuantizedModule

Inference for a quantized model.

method __init__

__init__(
    ordered_module_input_names: Iterable[str] = None,
    ordered_module_output_names: Iterable[str] = None,
    quant_layers_dict: Dict[str, Tuple[Tuple[str, ], QuantizedOp]] = None,
    onnx_model: ModelProto = None
)

property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property onnx_model

Get the ONNX model.


Returns:

  • _onnx_model (onnx.ModelProto): the ONNX model


property post_processing_params

Get the post-processing parameters.

Returns:

  • Dict[str, Any]: the post-processing parameters


method bitwidth_and_range_report

bitwidth_and_range_report() → Union[Dict[str, Dict[str, Union[Tuple[int, ], int]]], NoneType]

Report the ranges and bit-widths for layers that mix encrypted integer values.

Returns:

  • op_names_to_report (Dict): a dictionary with operation names as keys. For each operation, (e.g., conv/gemm/add/avgpool ops), a range and a bit-width are returned. The range contains the min/max values encountered when computing the operation and the bit-width gives the number of bits needed to represent this range.


method check_model_is_compiled

check_model_is_compiled()

Check if the quantized module is compiled.

Raises:

  • AttributeError: If the quantized module is not compiled.


method compile

compile(
    inputs: Union[Tuple[ndarray, ], ndarray],
    configuration: Optional[Configuration] = None,
    artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    p_error: Optional[float] = None,
    global_p_error: Optional[float] = None,
    verbose: bool = False,
    inputs_encryption_status: Optional[Sequence[str]] = None
) → Circuit

Compile the module's forward function.

Args:

  • inputs (numpy.ndarray): A representative set of input values used for building cryptographic parameters.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during simulation, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

  • inputs_encryption_status (Optional[Sequence[str]]): encryption status ('clear', 'encrypted') for each input.

Returns:

  • Circuit: The compiled Circuit.

Raises:

  • ValueError: if inputs_encryption_status does not match with the parameters of the quantized module


method dequantize_output

dequantize_output(q_y_preds: ndarray) → ndarray

Take the last layer q_out and use its de-quant function.

Args:

  • q_y_preds (numpy.ndarray): Quantized output values of the last layer.

Returns:

  • numpy.ndarray: De-quantized output values of the last layer.


method dump

dump(file: <class 'TextIO'>) → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method forward

forward(
    *x: ndarray,
    fhe: Union[FheMode, str] = <FheMode.DISABLE: 'disable'>,
    debug: bool = False
) → Union[ndarray, Tuple[ndarray, Union[Dict[Any, Any], NoneType]]]

Forward pass with numpy function only on floating points.

This method executes the forward pass in the clear, with simulation or in FHE. Input values are expected to be floating points, as the method handles the quantization step. The returned values are floating points as well.

Args:

  • *x (numpy.ndarray): Input float values to consider.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

  • debug (bool): In debug mode, returns quantized intermediary values of the computation. This is useful when a model's intermediary values in Concrete ML need to be compared with the intermediary values obtained in pytorch/onnx. When set, the second return value is a dictionary containing ONNX operation names as keys and, as values, their input QuantizedArray or ndarray. The user can thus extract the quantized or float values of quantized inputs. This feature is only available in FheMode.DISABLE mode. Default to False.

Returns:

  • numpy.ndarray: Predictions of the quantized model, in floating points.
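
In practice, a QuantizedModule is usually obtained by compiling a torch model; a minimal sketch, assuming compile_torch_model from concrete.ml.torch.compile and an arbitrary small network:

import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Arbitrary small network and representative input set
torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
    torch.nn.Linear(5, 2),
)
inputset = numpy.random.rand(100, 10).astype(numpy.float32)

# compile_torch_model returns a QuantizedModule whose FHE circuit is already compiled
quantized_module = compile_torch_model(torch_model, inputset, n_bits=4)

# forward() takes float inputs and handles quantization/de-quantization internally
y_simulated = quantized_module.forward(inputset[:2], fhe="simulate")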


method load_dict

load_dict(metadata: Dict)

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizedModule: The loaded object.


method post_processing

post_processing(values: ndarray) → ndarray

Apply post-processing to the de-quantized values.

For quantized modules, there is no post-processing step, but the method is kept to make the API consistent with the client-server API.

Args:

  • values (numpy.ndarray): The de-quantized values to post-process.

Returns:

  • numpy.ndarray: The post-processed values.


method quantize_input

quantize_input(*x: ndarray) → Union[ndarray, Tuple[ndarray, ]]

Take the inputs in fp32 and quantize them using the learned quantization parameters.

Args:

  • x (numpy.ndarray): Floating point x.

Returns:

  • Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]: Quantized (numpy.int64) x.


method quantized_forward

quantized_forward(
    *q_x: ndarray,
    fhe: Union[FheMode, str] = <FheMode.DISABLE: 'disable'>
) → ndarray

Forward function for the FHE circuit.

Args:

  • *q_x (numpy.ndarray): Input integer values to consider.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • (numpy.ndarray): Predictions of the quantized model, with integer values.


method set_inputs_quantization_parameters

set_inputs_quantization_parameters(*input_q_params: UniformQuantizer)

Set the quantization parameters for the module's inputs.

Args:

  • *input_q_params (UniformQuantizer): The quantizer(s) for the module.

concrete.ml.quantization.quantized_module_passes.md

module concrete.ml.quantization.quantized_module_passes

Optimization passes for QuantizedModules.


class PowerOfTwoScalingRoundPBSAdapter

Detect neural network patterns that can be optimized with round PBS.

method __init__

__init__(qmodule: QuantizedModule) → None

property num_ignored_valid_patterns

Get the number of optimizable patterns that were ignored.

Patterns could be ignored since a number of rounding bits was set manually through the compilation function.

Returns:

  • result (int): number of patterns that could be optimized but were not


method compute_op_predecessors

compute_op_predecessors() → DefaultDict[Union[QuantizedOp, NoneType], List[Tuple[Union[QuantizedOp, NoneType], str]]]

Compute the predecessors for each QuantizedOp in a QuantizedModule.

Stores, for each quantized op, a list of quantized ops that produce its inputs. Currently only the first input of the operations is considered as it is, usually, the encrypted input.

Returns:

  • result (PredecessorsType): a dictionary containing a hierarchy of op predecessors


method detect_patterns

detect_patterns(
    predecessors: DefaultDict[Optional[QuantizedOp], List[Tuple[Optional[QuantizedOp], str]]]
) → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]]

Detect the patterns that can be optimized with roundPBS in the QuantizedModule.

Args:

  • predecessors (PredecessorsType): Module predecessor operation list

Returns:

  • result (PatternDict): list of optimizable patterns


method match_path_pattern

match_path_pattern(
    predecessors: DefaultDict[Optional[QuantizedOp], List[Tuple[Optional[QuantizedOp], str]]],
    nodes_in_path: List[Optional[QuantizedOp]],
    input_producer_of_path: Optional[QuantizedOp]
) → bool

Determine if a pattern has the structure that makes it viable for roundPBS.

Args:

  • predecessors (PredecessorsType): Module predecessor operation list

  • nodes_in_path (List[QuantizedOp]): list of quantized ops in the pattern

  • input_producer_of_path (Optional[QuantizedOp]): operation that produces the input

Returns:

  • result (bool): whether the pattern can be optimized


method process

process() → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]]

Analyze an ONNX graph and detect Gemm/Conv patterns that can use RoundPBS.

We want to detect a gemm/conv node whose weights/bias are Brevitas QAT, and whose input is produced by a Brevitas QAT node that is applied on the output of another Gemm/conv node. Optionally a Relu can be placed before this input quantization node.

Nothing will be done if rounding is already specified.

Returns:

  • result (PatternDict): a dictionary containing, for each Conv/Gemm node to which round PBS can be applied based on power-of-two scaling factors, the corresponding detected pattern


method process_patterns

process_patterns(
    valid_paths: Dict[QuantizedMixingOp, Tuple[List[Optional[QuantizedOp]], Optional[QuantizedOp]]]
) → Dict[QuantizedMixingOp, Tuple[List[Union[QuantizedOp, NoneType]], Union[QuantizedOp, NoneType]]]

Configure the rounding bits of roundPBS for the optimizable operations.

Args:

  • valid_paths (PatternDict): list of optimizable patterns

Returns:

  • result (PatternDict): list of patterns actually optimized with roundPBS
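
A short sketch of running this pass manually, reusing the quantized_module built in the earlier post-training quantization sketch; note that patterns are only detected for Brevitas QAT networks with power-of-two scales, so a plain float network would simply yield no optimizable pattern:

from concrete.ml.quantization.quantized_module_passes import PowerOfTwoScalingRoundPBSAdapter

# Assumption: quantized_module is a QuantizedModule (see the earlier sketch)
adapter = PowerOfTwoScalingRoundPBSAdapter(quantized_module)
optimized_patterns = adapter.process()

# Number of optimizable patterns skipped because rounding was already set manually
print(adapter.num_ignored_valid_patterns)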

concrete.ml.search_parameters.md

module concrete.ml.search_parameters

Modules for p_error search.

Global Variables

  • p_error_search

concrete.ml.sklearn.base.md

module concrete.ml.sklearn.base

Base classes for all estimators.

Global Variables

  • USE_OLD_VL

  • OPSET_VERSION_FOR_ONNX_EXPORT

  • QNN_AUTO_KWARGS


class BaseEstimator

Base class for all estimators in Concrete ML.

This class does not inherit from sklearn.base.BaseEstimator as it creates some conflicts with skorch in QuantizedTorchEstimatorMixin's subclasses (more specifically, the get_params method is not properly inherited).

Attributes:

  • _is_a_public_cml_model (bool): Private attribute indicating if the class is a public model (as opposed to base or mixin classes).

method __init__

__init__()

Initialize the base class with common attributes used in all estimators.

An underscore "_" is appended to attributes that were created while fitting the model. This is done in order to follow scikit-Learn's standard format. More information available in their documentation: https://scikit-learn.org/stable/developers/develop.html#:~:text=Estimated%20Attributes%C2%B6


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

Compile the model.

Args:

  • X (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging. Default to None.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation. Default to False.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

Returns:

  • Circuit: The compiled Circuit.
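
A minimal sketch of a compilation call, assuming `model` is a fitted Concrete ML estimator and `X_train` is a representative input set:

```python
# Sketch: compile a fitted estimator with a custom per-PBS error probability.
# Note that p_error and global_p_error are mutually exclusive.
circuit = model.compile(
    X_train,
    p_error=1e-5,
    verbose=False,
)
assert model.is_compiled
```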


method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

De-quantize the output.

This step ensures that the fit method has been called.

Args:

  • q_y_preds (numpy.ndarray): The quantized output values to de-quantize.

Returns:

  • numpy.ndarray: The de-quantized output values.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

Fit the estimator.

This method trains a scikit-learn estimator, computes its ONNX graph and defines the quantization parameters needed for proper FHE inference.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The fitted estimator.


method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.
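
A short sketch of how fit_benchmark can be used to measure the impact of quantization, assuming `model` is an unfitted Concrete ML classifier or regressor and that train/test splits are available:

```python
# The Concrete ML estimator is returned first, the float scikit-learn equivalent second
concrete_model, sklearn_model = model.fit_benchmark(X_train, y_train, random_state=42)

# Public classifier/regressor models expose scikit-learn's score method,
# so both estimators can be evaluated on the same held-out data
print("Concrete ML score :", concrete_model.score(X_test, y_test))
print("scikit-learn score:", sklearn_model.score(X_test, y_test))
```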


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.
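
A sketch of a serialization round trip using only the dump/load methods documented above, assuming `model` is a fitted estimator and that dump_dict and load_dict are symmetric:

```python
# String and file based serialization
serialized_str = model.dumps()
with open("model.json", "w") as f:
    model.dump(f)

# Dict based serialization round trip (assumed symmetric)
metadata = model.dump_dict()
restored_model = type(model).load_dict(metadata)
```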


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Apply post-processing to the de-quantized predictions.

This post-processing step can include operations such as applying the sigmoid or softmax function for classifiers, or summing an ensemble's outputs. These steps are done in the clear because of current technical constraints. They most likely will be integrated in the FHE computations in the future.

For some simple models such as linear regression, there is no post-processing step but the method is kept to make the API consistent for the client-server API. Other models might need to use attributes stored in post_processing_params.

Args:

  • y_preds (numpy.ndarray): The de-quantized predictions to post-process.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict values for X, in FHE or in the clear.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • np.ndarray: The predicted values for X.
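
A sketch of the three prediction modes, assuming `model` is fitted (and compiled for the simulation and execution modes):

```python
y_clear = model.predict(X_test)                      # fhe="disable": quantized inference in Python
y_simulated = model.predict(X_test, fhe="simulate")  # fast simulation of the FHE circuit
y_encrypted = model.predict(X_test, fhe="execute")   # actual FHE execution on encrypted inputs
```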


method quantize_input

quantize_input(X: 'ndarray') → ndarray

Quantize the input.

This step ensures that the fit method has been called.

Args:

  • X (numpy.ndarray): The input values to quantize.

Returns:

  • numpy.ndarray: The quantized input values.


class BaseClassifier

Base class for linear and tree-based classifiers in Concrete ML.

This class inherits from BaseEstimator and modifies some of its methods in order to align them with classifier behaviors. This notably includes applying a sigmoid/softmax post-processing to the predicted values as well as handling a mapping of classes in case they are not ordered.

method __init__

__init__()

Initialize the base class with common attributes used in all estimators.

An underscore "_" is appended to attributes that were created while fitting the model. This is done in order to follow scikit-Learn's standard format. More information available in their documentation: https://scikit-learn.org/stable/developers/develop.html#:~:text=Estimated%20Attributes%C2%B6


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

Compile the model.

Args:

  • X (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging. Default to None.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation. Default to False.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

Returns:

  • Circuit: The compiled Circuit.


method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

De-quantize the output.

This step ensures that the fit method has been called.

Args:

  • q_y_preds (numpy.ndarray): The quantized output values to de-quantize.

Returns:

  • numpy.ndarray: The de-quantized output values.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method predict_proba

predict_proba(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict class probabilities.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • numpy.ndarray: The predicted class probabilities.


method quantize_input

quantize_input(X: 'ndarray') → ndarray

Quantize the input.

This step ensures that the fit method has been called.

Args:

  • X (numpy.ndarray): The input values to quantize.

Returns:

  • numpy.ndarray: The quantized input values.


class QuantizedTorchEstimatorMixin

Mixin that provides quantization for a torch module and follows the Estimator API.

method __init__

__init__()

property base_module

Get the Torch module.

Returns:

  • SparseQuantNeuralNetwork: The fitted underlying module.


property fhe_circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[UniformQuantizer]: The input quantizers.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property output_quantizers

Get the output quantizers.

Returns:

  • List[UniformQuantizer]: The output quantizers.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

Fit the estimator.

If the module was already initialized, the module will be re-initialized unless warm_start is set to True. In addition to the torch training step, this method performs quantization of the trained Torch model using Quantization Aware Training (QAT).

Values of dtype float64 are not supported and will be cast to float32.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • **fit_parameters: Keyword arguments to pass to skorch's fit method.

Returns: The fitted estimator.


method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit the quantized estimator as well as its equivalent float estimator.

This function returns both the quantized estimator (itself) and its non-quantized (float) equivalent, which are both trained separately. This method differs from BaseEstimator's fit_benchmark method as QNNs use QAT instead of PTQ. Hence, here, the float model is only topologically equivalent, since we have less control over the influence of QAT on the weights.

Values of dtype float64 are not supported and will be cast to float32.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. However, skorch does not handle such a parameter and setting it will have no effect. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to skorch's fit method.

Returns: The Concrete ML and equivalent skorch fitted estimators.


method get_params

get_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is overloaded in order to make sure that auto-computed parameters are not considered when cloning the model (e.g. during a GridSearchCV call).

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

  • params (dict): Parameter names mapped to their values.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → Dict

classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict values for X, in FHE or in the clear.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • np.ndarray: The predicted values for X.


method prune

prune(X: 'Data', y: 'Target', n_prune_neurons_percentage: 'float', **fit_params)

Prune a copy of this Neural Network model.

This can be used when the number of neurons in the hidden layers is too high. For example, when creating a Neural Network model with a high n_hidden_neurons_multiplier (3-4), pruning can speed up model inference in FHE. Often, up to 50% of the neurons can be pruned without losing accuracy when this function is used to fine-tune an already trained model with good accuracy. This method should therefore be used once good accuracy is obtained.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • n_prune_neurons_percentage (float): The percentage of neurons to remove. A value of 0 (resp. 1.0) means no (resp. all) neurons will be removed.

  • fit_params: Additional parameters to pass to the underlying nn.Module's forward method.

Returns: A new pruned copy of the Neural Network model.

Raises:

  • ValueError: If the model has not been trained or has already been pruned.
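
A sketch of a pruning workflow, assuming `qnn_model` is an already trained Concrete ML neural network (e.g. a NeuralNetClassifier) with good accuracy:

```python
# Prune 50% of the hidden neurons; the pruned copy is fine-tuned on (X_train, y_train)
# while the original model is left unchanged
pruned_model = qnn_model.prune(X_train, y_train, n_prune_neurons_percentage=0.5)

# The pruned copy can then be compiled and evaluated like any other estimator
pruned_model.compile(X_train)
y_pred = pruned_model.predict(X_test, fhe="simulate")
```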


method quantize_input

quantize_input(X: 'ndarray') → ndarray

class BaseTreeEstimatorMixin

Mixin class for tree-based estimators.

This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's get_params and set_params methods.

method __init__

__init__(n_bits: 'int')

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): The number of bits used for quantization.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(*args, **kwargs) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method quantize_input

quantize_input(X: 'ndarray') → ndarray

class BaseTreeRegressorMixin

Mixin class for tree-based regressors.

This class is used to create a tree-based regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's score method for regressors.

method __init__

__init__(n_bits: 'int')

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): The number of bits used for quantization.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(*args, **kwargs) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method quantize_input

quantize_input(X: 'ndarray') → ndarray

class BaseTreeClassifierMixin

Mixin class for tree-based classifiers.

This class is used to create a tree-based classifier class that inherits from sklearn.base.ClassifierMixin, which essentially gives access to scikit-learn's score method for classifiers.

Additionally, this class adjusts some of the tree-based base class's methods in order to make them compliant with classification workflows.

method __init__

__init__(n_bits: 'int')

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): The number of bits used for quantization.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(*args, **kwargs) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method predict_proba

predict_proba(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict class probabilities.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • numpy.ndarray: The predicted class probabilities.


method quantize_input

quantize_input(X: 'ndarray') → ndarray

class SklearnLinearModelMixin

A Mixin class for sklearn linear models with FHE.

This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's get_params and set_params methods.

method __init__

__init__(n_bits: 'Union[int, Dict[str, int]]' = 8)

Initialize the FHE linear model.

Args:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.
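
A sketch of the two accepted forms of the n_bits argument above, using LogisticRegression as an example:

```python
from concrete.ml.sklearn import LogisticRegression

# A single int applies the same bit-width to inputs and weights
model_uniform = LogisticRegression(n_bits=8)

# A dict sets the input and weight bit-widths separately
model_mixed = LogisticRegression(n_bits={"op_inputs": 8, "op_weights": 4})
```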


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

Compile the model.

Args:

  • X (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging. Default to None.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation. Default to False.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

Returns:

  • Circuit: The compiled Circuit.


method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


classmethod from_sklearn_model

from_sklearn_model(
    sklearn_model: 'BaseEstimator',
    X: 'Data',
    n_bits: 'Union[int, Dict[str, int]]' = 8
)

Build a FHE-compliant model using a fitted scikit-learn model.

Args:

  • sklearn_model (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert.

  • X (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.

Returns: The FHE-compliant fitted model.
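
A sketch of converting an already fitted scikit-learn linear model, assuming the Concrete ML LinearRegression model exposes this classmethod:

```python
import numpy
from sklearn.linear_model import LinearRegression as SklearnLinearRegression
from concrete.ml.sklearn import LinearRegression as ConcreteLinearRegression

# Synthetic data, used both for fitting and for computing quantization parameters
X = numpy.random.rand(200, 5)
y = X @ numpy.arange(1.0, 6.0) + 0.01 * numpy.random.randn(200)

sklearn_model = SklearnLinearRegression().fit(X, y)

# X is only used to compute the quantization parameters of the FHE-compliant model
fhe_model = ConcreteLinearRegression.from_sklearn_model(sklearn_model, X, n_bits=8)
fhe_model.compile(X)
```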


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Apply post-processing to the de-quantized predictions.

This post-processing step can include operations such as applying the sigmoid or softmax function for classifiers, or summing an ensemble's outputs. These steps are done in the clear because of current technical constraints. They most likely will be integrated in the FHE computations in the future.

For some simple models such as linear regression, there is no post-processing step but the method is kept to make the API consistent for the client-server API. Other models might need to use attributes stored in post_processing_params.

Args:

  • y_preds (numpy.ndarray): The de-quantized predictions to post-process.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict values for X, in FHE or in the clear.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • np.ndarray: The predicted values for X.


method quantize_input

quantize_input(X: 'ndarray') → ndarray

class SklearnLinearRegressorMixin

A Mixin class for sklearn linear regressors with FHE.

This class is used to create a linear regressor class that inherits from sklearn.base.RegressorMixin, which essentially gives access to scikit-learn's score method for regressors.

method __init__

__init__(n_bits: 'Union[int, Dict[str, int]]' = 8)

Initialize the FHE linear model.

Args:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

Compile the model.

Args:

  • X (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging. Default to None.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation. Default to False.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

Returns:

  • Circuit: The compiled Circuit.


method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


classmethod from_sklearn_model

from_sklearn_model(
    sklearn_model: 'BaseEstimator',
    X: 'Data',
    n_bits: 'Union[int, Dict[str, int]]' = 8
)

Build a FHE-compliant model using a fitted scikit-learn model.

Args:

  • sklearn_model (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert.

  • X (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.

Returns: The FHE-compliant fitted model.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Apply post-processing to the de-quantized predictions.

This post-processing step can include operations such as applying the sigmoid or softmax function for classifiers, or summing an ensemble's outputs. These steps are done in the clear because of current technical constraints. They most likely will be integrated in the FHE computations in the future.

For some simple models such as linear regression, there is no post-processing step but the method is kept to make the API consistent for the client-server API. Other models might need to use attributes stored in post_processing_params.

Args:

  • y_preds (numpy.ndarray): The de-quantized predictions to post-process.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict values for X, in FHE or in the clear.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • np.ndarray: The predicted values for X.


method quantize_input

quantize_input(X: 'ndarray') → ndarray

class SklearnLinearClassifierMixin

A Mixin class for sklearn linear classifiers with FHE.

This class is used to create a linear classifier class that inherits from sklearn.base.ClassifierMixin, which essentially gives access to scikit-learn's score method for classifiers.

Additionally, this class adjusts some of the linear base class's methods in order to make them compliant with classification workflows.

method __init__

__init__(n_bits: 'Union[int, Dict[str, int]]' = 8)

Initialize the FHE linear model.

Args:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(
    X: 'Data',
    configuration: 'Optional[Configuration]' = None,
    artifacts: 'Optional[DebugArtifacts]' = None,
    show_mlir: 'bool' = False,
    p_error: 'Optional[float]' = None,
    global_p_error: 'Optional[float]' = None,
    verbose: 'bool' = False
) → Circuit

Compile the model.

Args:

  • X (Data): A representative set of input values used for building cryptographic parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • configuration (Optional[Configuration]): Options to use for compilation. Default to None.

  • artifacts (Optional[DebugArtifacts]): Artifacts information about the compilation process to store for debugging. Default to None.

  • show_mlir (bool): Indicate if the MLIR graph should be printed during compilation. Default to False.

  • p_error (Optional[float]): Probability of error of a single PBS. A p_error value cannot be given if a global_p_error value is already set. Default to None, which sets this error to a default value.

  • global_p_error (Optional[float]): Probability of error of the full circuit. A global_p_error value cannot be given if a p_error value is already set. This feature is not supported during the FHE simulation mode, meaning the probability is currently set to 0. Default to None, which sets this error to a default value.

  • verbose (bool): Indicate if compilation information should be printed during compilation. Default to False.

Returns:

  • Circuit: The compiled Circuit.


method decision_function

decision_function(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

Predict confidence scores.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Returns:

  • numpy.ndarray: The predicted confidence scores.
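
A sketch relating the classifier prediction methods, assuming `clf` is a fitted and compiled linear classifier; predict_proba typically applies a sigmoid/softmax to the confidence scores returned by decision_function:

```python
scores = clf.decision_function(X_test, fhe="simulate")     # raw confidence scores
probabilities = clf.predict_proba(X_test, fhe="simulate")  # sigmoid/softmax of the scores
labels = clf.predict(X_test, fhe="simulate")               # thresholded / arg-max classes
```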


method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


classmethod from_sklearn_model

from_sklearn_model(
    sklearn_model: 'BaseEstimator',
    X: 'Data',
    n_bits: 'Union[int, Dict[str, int]]' = 8
)

Build a FHE-compliant model using a fitted scikit-learn model.

Args:

  • sklearn_model (sklearn.base.BaseEstimator): The fitted scikit-learn model to convert.

  • X (Data): A representative set of input values used for computing quantization parameters, as a Numpy array, Torch tensor, Pandas DataFrame or List. This is usually the training data-set or a sub-set of it.

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, where "op_inputs" sets the number of bits used to quantize the input values and "op_weights" sets the number of bits used to quantize the learned parameters. Default to 8.

Returns: The FHE-compliant fitted model.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method predict_proba

predict_proba(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method quantize_input

quantize_input(X: 'ndarray') → ndarray

class SklearnKNeighborsMixin

A Mixin class for sklearn KNeighbors models with FHE.

This class inherits from sklearn.base.BaseEstimator in order to have access to scikit-learn's get_params and set_params methods.

method __init__

__init__(n_bits: 'int' = 3)

Initialize the FHE knn model.

Args:

  • n_bits (int): Number of bits to quantize the model. The value will be used for quantizing inputs and X_fit. Default to 3.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(*args, **kwargs) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method majority_vote

majority_vote(nearest_classes: 'ndarray')

Determine the most common class among nearest neighbors for each query.

Args:

  • nearest_classes (numpy.ndarray): The class labels of the nearest neighbors for a query

Returns:

  • numpy.ndarray: The majority-voted class label for the corresponding query.
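As a plain-numpy illustration of what a majority vote computes (not the internal FHE implementation):

import numpy

# Class labels of the k=5 nearest neighbors for a single query
nearest_classes = numpy.array([2, 0, 2, 1, 2])

# The most frequent label wins the vote
majority_class = numpy.bincount(nearest_classes).argmax()
print(majority_class)  # 2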


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Perform the majority vote.

For KNN, the de-quantization step is not required, because _inference returns the labels of the k-nearest neighbors.

Args:

  • y_preds (numpy.ndarray): The topk nearest labels

Returns:

  • numpy.ndarray: The majority vote.


method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method quantize_input

quantize_input(X: 'ndarray') → ndarray

class SklearnKNeighborsClassifierMixin

A Mixin class for sklearn KNeighbors classifiers with FHE.

This class is used to create a KNeighbors classifier class that inherits from SklearnKNeighborsMixin and sklearn.base.ClassifierMixin. By inheriting from sklearn.base.ClassifierMixin, it allows this class to be recognized as a classifier.

method __init__

__init__(n_bits: 'int' = 3)

Initialize the FHE knn model.

Args:

  • n_bits (int): Number of bits to quantize the model. The value will be used for quantizing inputs and X_fit. Default to 3.


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method check_model_is_compiled

check_model_is_compiled() → None

Check if the model is compiled.

Raises:

  • AttributeError: If the model is not compiled.


method check_model_is_fitted

check_model_is_fitted() → None

Check if the model is fitted.

Raises:

  • AttributeError: If the model is not fitted.


method compile

compile(*args, **kwargs) → Circuit

method dequantize_output

dequantize_output(q_y_preds: 'ndarray') → ndarray

method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict[str, Any]

Dump the object as a dict.

Returns:

  • Dict[str, Any]: Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method fit

fit(X: 'Data', y: 'Target', **fit_parameters)

method fit_benchmark

fit_benchmark(
    X: 'Data',
    y: 'Target',
    random_state: 'Optional[int]' = None,
    **fit_parameters
)

Fit both the Concrete ML and its equivalent float estimators.

Args:

  • X (Data): The training data, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • y (Target): The target data, as a Numpy array, Torch tensor, Pandas DataFrame, Pandas Series or List.

  • random_state (Optional[int]): The random state to use when fitting. Defaults to None.

  • **fit_parameters: Keyword arguments to pass to the float estimator's fit method.

Returns: The Concrete ML and float equivalent fitted estimators.


method get_sklearn_params

get_sklearn_params(deep: 'bool' = True) → dict

Get parameters for this estimator.

This method is used to instantiate a scikit-learn model using the Concrete ML model's parameters. It does not override scikit-learn's existing get_params method in order to not break its implementation of set_params.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators. Default to True.

Returns:

  • params (dict): Parameter names mapped to their values.


classmethod load_dict

load_dict(metadata: 'Dict[str, Any]') → BaseEstimator

Load itself from a dict.

Args:

  • metadata (Dict[str, Any]): Dict of serialized objects.

Returns:

  • BaseEstimator: The loaded object.


method majority_vote

majority_vote(nearest_classes: 'ndarray')

Determine the most common class among nearest neighbors for each query.

Args:

  • nearest_classes (numpy.ndarray): The class labels of the nearest neighbors for a query

Returns:

  • numpy.ndarray: The majority-voted class label for the corresponding query.


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Perform the majority vote.

For KNN, the de-quantization step is not required, because _inference returns the labels of the k-nearest neighbors.

Args:

  • y_preds (numpy.ndarray): The topk nearest labels

Returns:

  • numpy.ndarray: The majority vote.


method predict

predict(
    X: 'Data',
    fhe: 'Union[FheMode, str]' = <FheMode.DISABLE: 'disable'>
) → ndarray

method quantize_input

quantize_input(X: 'ndarray') → ndarray

concrete.ml.quantization.quantizers.md

module concrete.ml.quantization.quantizers

Quantization utilities for a numpy array/tensor.

Global Variables

  • STABILITY_CONST


function fill_from_kwargs

fill_from_kwargs(obj, klass, **kwargs)

Fill a parameter set structure from kwargs parameters.

Args:

  • obj: an object of type klass, if None the object is created if any of the type's members appear in the kwargs

  • klass: the type of object to fill

  • kwargs: parameter names and values to fill into an instance of the klass type

Returns:

  • obj: an object of type klass

  • kwargs: remaining parameter names and values that were not filled into obj

Raises:

  • TypeError: if the types of the parameters in kwargs could not be converted to the corresponding types of members of klass


class QuantizationOptions

Options for quantization.

Determines the number of bits for quantization and the method of quantization of the values. Signed quantization allows negative quantized values. Symmetric quantization assumes the float values are distributed symmetrically around x=0 and assigns signed values around 0 to the float values. QAT (quantization aware training) quantization assumes the values are already quantized, taking a discrete set of values, and assigns these values to integers, computing only the scale.
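For example, signed symmetric 8-bit options can be created as follows (a minimal sketch using the module path documented above):

from concrete.ml.quantization.quantizers import QuantizationOptions

# 8-bit signed, symmetric quantization (not quantization-aware training)
options = QuantizationOptions(n_bits=8, is_signed=True, is_symmetric=True)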

method __init__

__init__(
    n_bits: 'int',
    is_signed: 'bool' = False,
    is_symmetric: 'bool' = False,
    is_qat: 'bool' = False
)

property quant_options

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


method copy_opts

copy_opts(opts)

Copy the options from a different structure.

Args:

  • opts (QuantizationOptions): structure to copy parameters from.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method is_equal

is_equal(opts, ignore_sign_qat: 'bool' = False) → bool

Compare two quantization options sets.

Args:

  • opts (QuantizationOptions): options to compare this instance to

  • ignore_sign_qat (bool): ignore sign comparison for QAT options

Returns:

  • bool: whether the two quantization options compared are equivalent


method load_dict

load_dict(metadata: 'Dict')

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizationOptions: The loaded object.


class MinMaxQuantizationStats

Calibration set statistics.

This class stores the statistics for the calibration set or for a calibration data batch. Currently we only store min/max to determine the quantization range. The min/max are computed from the calibration set.

method __init__

__init__(
    rmax: 'Optional[float]' = None,
    rmin: 'Optional[float]' = None,
    uvalues: 'Optional[ndarray]' = None
)

property quant_stats

Get a copy of the calibration set statistics.

Returns:

  • MinMaxQuantizationStats: a copy of the current quantization stats


method check_is_uniform_quantized

check_is_uniform_quantized(options: 'QuantizationOptions') → bool

Check if these statistics correspond to uniformly quantized values.

Determines whether the values represented by this QuantizedArray show a quantized structure that allows the scale of quantization to be inferred.

Args:

  • options (QuantizationOptions): used to quantize the values in the QuantizedArray

Returns:

  • bool: check result.


method compute_quantization_stats

compute_quantization_stats(values: 'ndarray') → None

Compute the calibration set quantization statistics.

Args:

  • values (numpy.ndarray): Calibration set on which to compute statistics.
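A minimal sketch of computing statistics from a calibration batch (the data values are illustrative):

import numpy
from concrete.ml.quantization.quantizers import MinMaxQuantizationStats

calibration_batch = numpy.array([-1.5, 0.2, 0.0, 3.7])

stats = MinMaxQuantizationStats()
stats.compute_quantization_stats(calibration_batch)
# stats now holds the min/max range (rmin, rmax) observed on the calibration batch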


method copy_stats

copy_stats(stats) → None

Copy the statistics from a different structure.

Args:

  • stats (MinMaxQuantizationStats): structure to copy statistics from.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

load_dict(metadata: 'Dict')

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • MinMaxQuantizationStats: The loaded object.


class UniformQuantizationParameters

Quantization parameters for uniform quantization.

This class stores the parameters used for quantizing real values to discrete integer values. The parameters are computed from quantization options and quantization statistics.

method __init__

__init__(
    scale: 'Optional[float64]' = None,
    zero_point: 'Optional[Union[int, float, ndarray]]' = None,
    offset: 'Optional[int]' = None
)

property quant_params

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


method compute_quantization_parameters

compute_quantization_parameters(
    options: 'QuantizationOptions',
    stats: 'MinMaxQuantizationStats'
) → None

Compute the quantization parameters.

Args:

  • options (QuantizationOptions): quantization options set

  • stats (MinMaxQuantizationStats): calibrated statistics for quantization


method copy_params

copy_params(params) → None

Copy the parameters from a different structure.

Args:

  • params (UniformQuantizationParameters): parameter structure to copy


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

load_dict(metadata: 'Dict') → UniformQuantizationParameters

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • UniformQuantizationParameters: The loaded object.


class UniformQuantizer

Uniform quantizer.

Contains all information necessary for uniform quantization and provides quantization/de-quantization functionality on numpy arrays.

Args:

  • options (QuantizationOptions): Quantization options set

  • stats (Optional[MinMaxQuantizationStats]): Quantization batch statistics set

  • params (Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)

method __init__

__init__(
    options: 'Optional[QuantizationOptions]' = None,
    stats: 'Optional[MinMaxQuantizationStats]' = None,
    params: 'Optional[UniformQuantizationParameters]' = None,
    no_clipping: 'bool' = False
)

property quant_options

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


property quant_params

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


property quant_stats

Get a copy of the calibration set statistics.

Returns:

  • MinMaxQuantizationStats: a copy of the current quantization stats


method check_is_uniform_quantized

check_is_uniform_quantized(options: 'QuantizationOptions') → bool

Check if these statistics correspond to uniformly quantized values.

Determines whether the values represented by this QuantizedArray show a quantized structure that allows the scale of quantization to be inferred.

Args:

  • options (QuantizationOptions): used to quantize the values in the QuantizedArray

Returns:

  • bool: check result.


method compute_quantization_parameters

compute_quantization_parameters(
    options: 'QuantizationOptions',
    stats: 'MinMaxQuantizationStats'
) → None

Compute the quantization parameters.

Args:

  • options (QuantizationOptions): quantization options set

  • stats (MinMaxQuantizationStats): calibrated statistics for quantization


method compute_quantization_stats

compute_quantization_stats(values: 'ndarray') → None

Compute the calibration set quantization statistics.

Args:

  • values (numpy.ndarray): Calibration set on which to compute statistics.


method copy_opts

copy_opts(opts)

Copy the options from a different structure.

Args:

  • opts (QuantizationOptions): structure to copy parameters from.


method copy_params

copy_params(params) → None

Copy the parameters from a different structure.

Args:

  • params (UniformQuantizationParameters): parameter structure to copy


method copy_stats

copy_stats(stats) → None

Copy the statistics from a different structure.

Args:

  • stats (MinMaxQuantizationStats): structure to copy statistics from.


method dequant

dequant(qvalues: 'ndarray') → Union[Any, ndarray]

De-quantize values.

Args:

  • qvalues (numpy.ndarray): integer values to de-quantize

Returns:

  • Union[Any, numpy.ndarray]: De-quantized float values.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method is_equal

is_equal(opts, ignore_sign_qat: 'bool' = False) → bool

Compare two quantization options sets.

Args:

  • opts (QuantizationOptions): options to compare this instance to

  • ignore_sign_qat (bool): ignore sign comparison for QAT options

Returns:

  • bool: whether the two quantization options compared are equivalent


method load_dict

load_dict(metadata: 'Dict') → UniformQuantizer

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • UniformQuantizer: The loaded object.


method quant

quant(values: 'ndarray') → ndarray

Quantize values.

Args:

  • values (numpy.ndarray): float values to quantize

Returns:

  • numpy.ndarray: Integer quantized values.
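Putting options, statistics and parameters together, a quantize/de-quantize round trip could look like the following sketch (assuming all three are derived from the same calibration values):

import numpy
from concrete.ml.quantization.quantizers import (
    MinMaxQuantizationStats,
    QuantizationOptions,
    UniformQuantizer,
)

values = numpy.array([-0.8, -0.1, 0.3, 1.2])

options = QuantizationOptions(n_bits=8, is_signed=True)
stats = MinMaxQuantizationStats()
stats.compute_quantization_stats(values)

quantizer = UniformQuantizer(options=options, stats=stats)
quantizer.compute_quantization_parameters(options, stats)

q_values = quantizer.quant(values)    # integer representation
approx = quantizer.dequant(q_values)  # float approximation of the original values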


class QuantizedArray

Abstraction of quantized array.

Contains float values and their quantized integer counter-parts. Quantization is performed by the quantizer member object. Float and int values are kept in sync. Having both types of values is useful since quantized operators in Concrete ML graphs might need one or the other depending on how the operator works (in float or in int). Moreover, when the encrypted function needs to return a value, it must return integer values.

See https://arxiv.org/abs/1712.05877.

Args:

  • values (numpy.ndarray): Values to be quantized.

  • n_bits (int): The number of bits to use for quantization.

  • value_is_float (bool, optional): Whether the passed values are real (float) values or not. If False, the values will be quantized according to the passed scale and zero_point. Defaults to True.

  • options (QuantizationOptions): Quantization options set

  • stats (Optional[MinMaxQuantizationStats]): Quantization batch statistics set

  • params (Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)

  • kwargs: Any member of the options, stats, params sets as a key-value pair. The parameter sets need to be completely parametrized if their members appear in kwargs.
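A minimal usage sketch, quantizing a float array to 8 bits (the quantization parameters are calibrated from the values themselves):

import numpy
from concrete.ml.quantization.quantizers import QuantizedArray

values = numpy.array([0.0, 0.5, 1.0, 1.5])

q_arr = QuantizedArray(8, values)

print(q_arr.qvalues)    # integer representation
print(q_arr.dequant())  # float approximation of the original values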

method __init__

__init__(
    n_bits,
    values: 'Optional[ndarray]',
    value_is_float: 'bool' = True,
    options: 'Optional[QuantizationOptions]' = None,
    stats: 'Optional[MinMaxQuantizationStats]' = None,
    params: 'Optional[UniformQuantizationParameters]' = None,
    **kwargs
)

method dequant

dequant() → ndarray

De-quantize self.qvalues.

Returns:

  • numpy.ndarray: De-quantized values.


method dump

dump(file: 'TextIO') → None

Dump itself to a file.

Args:

  • file (TextIO): The file to dump the serialized object into.


method dump_dict

dump_dict() → Dict

Dump itself to a dict.

Returns:

  • metadata (Dict): Dict of serialized objects.


method dumps

dumps() → str

Dump itself to a string.

Returns:

  • metadata (str): String of the serialized object.


method load_dict

load_dict(metadata: 'Dict') → QuantizedArray

Load itself from a dict.

Args:

  • metadata (Dict): Dict of serialized objects.

Returns:

  • QuantizedArray: The loaded object.


method quant

quant() → Optional[ndarray]

Quantize self.values.

Returns:

  • numpy.ndarray: Quantized values.


method update_quantized_values

update_quantized_values(qvalues: 'ndarray') → ndarray

Update qvalues to get their corresponding values using the related quantized parameters.

Args:

  • qvalues (numpy.ndarray): Values to replace self.qvalues

Returns:

  • values (numpy.ndarray): Corresponding values


method update_values

update_values(values: 'ndarray') → ndarray

Update values to get their corresponding qvalues using the related quantized parameters.

Args:

  • values (numpy.ndarray): Values to replace self.values

Returns:

  • qvalues (numpy.ndarray): Corresponding qvalues

concrete.ml.onnx.ops_impl.md

module concrete.ml.onnx.ops_impl

ONNX ops implementation in Python + NumPy.


function cast_to_float

cast_to_float(inputs)

Cast values to floating points.

Args:

  • inputs (Tuple[numpy.ndarray]): The values to consider.

Returns:

  • Tuple[numpy.ndarray]: The float values.


function onnx_func_raw_args

onnx_func_raw_args(*args, output_is_raw: bool = False)

Decorate a numpy onnx function to flag the raw/non quantized inputs.

Args:

  • *args (tuple[Any]): function argument names

  • output_is_raw (bool): marks the function as returning raw values that should not be quantized

Returns:

  • result (ONNXMixedFunction): wrapped numpy function with a list of mixed arguments
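As an illustrative sketch (my_transpose below is a hypothetical function, not part of the module), the decorator names the arguments that must stay raw, i.e., that should not be wrapped as QuantizedArray:

import numpy
from concrete.ml.onnx.ops_impl import onnx_func_raw_args

# `perm` carries axis indices rather than data, so it is flagged as a raw argument
@onnx_func_raw_args("perm")
def my_transpose(x: numpy.ndarray, perm=None) -> tuple:
    return (numpy.transpose(x, perm),)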


function numpy_where_body

numpy_where_body(c: ndarray, t: ndarray, f: Union[ndarray, int]) → ndarray

Compute the equivalent of numpy.where.

This function is not mapped to any ONNX operator (as opposed to numpy_where). It is usable by functions which are mapped to ONNX operators, e.g., numpy_div or numpy_where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (numpy.ndarray): False operand.

Returns:

  • numpy.ndarray: numpy.where(c, t, f)


function numpy_where

numpy_where(c: ndarray, t: ndarray, f: ndarray) → Tuple[ndarray]

Compute the equivalent of numpy.where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (numpy.ndarray): False operand.

Returns:

  • numpy.ndarray: numpy.where(c, t, f)


function numpy_add

numpy_add(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute add in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13

Args:

  • a (numpy.ndarray): First operand.

  • b (numpy.ndarray): Second operand.

Returns:

  • Tuple[numpy.ndarray]: Result, has same element type as two inputs


function numpy_constant

numpy_constant(**kwargs)

Return the constant passed as a kwarg.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13

Args:

  • **kwargs: keyword arguments

Returns:

  • Any: The stored constant.


function numpy_gemm

numpy_gemm(
    a: ndarray,
    b: ndarray,
    c: Optional[ndarray] = None,
    alpha: float = 1,
    beta: float = 1,
    transA: int = 0,
    transB: int = 0
) → Tuple[ndarray]

Compute Gemm in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Gemm-13

Args:

  • a (numpy.ndarray): Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.

  • b (numpy.ndarray): Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.

  • c (Optional[numpy.ndarray]): Optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectional broadcastable to (M, N). Defaults to None.

  • alpha (float): Scalar multiplier for the product of input tensors A * B. Defaults to 1.

  • beta (float): Scalar multiplier for input tensor C. Defaults to 1.

  • transA (int): Whether A should be transposed. The type is kept as int as it is the type used by ONNX and it can easily be interpreted by Python as a boolean. Defaults to 0.

  • transB (int): Whether B should be transposed. The type is kept as int as it is the type used by ONNX and it can easily be interpreted by Python as a boolean. Defaults to 0.

Returns:

  • Tuple[numpy.ndarray]: The tuple containing the result tensor
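In plain numpy, the Gemm semantics described above amount to the following reference sketch (not the library's implementation):

import numpy

def gemm_reference(a, b, c=None, alpha=1.0, beta=1.0, transA=0, transB=0):
    # Y = alpha * A' @ B' + beta * C, with A and B optionally transposed
    a_prime = a.T if transA else a
    b_prime = b.T if transB else b
    y = alpha * (a_prime @ b_prime)
    if c is not None:
        y = y + beta * c
    return y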


function numpy_matmul

numpy_matmul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute matmul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13

Args:

  • a (numpy.ndarray): N-dimensional matrix A

  • b (numpy.ndarray): N-dimensional matrix B

Returns:

  • Tuple[numpy.ndarray]: Matrix multiply results from A * B


function numpy_relu

numpy_relu(x: ndarray) → Tuple[ndarray]

Compute relu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sigmoid

numpy_sigmoid(x: ndarray) → Tuple[ndarray]

Compute sigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_softmax

numpy_softmax(x, axis=1, keepdims=True)

Compute softmax in numpy according to ONNX spec.

Softmax is currently not supported in FHE.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13

Args:

  • x (numpy.ndarray): Input tensor

  • axis (None, int, tuple of int): Axis or axes along which a softmax's sum is performed. If None, it will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. Default to 1.

  • keepdims (bool): If True, the axes which are reduced along the sum are left in the result as dimensions with size one. Default to True.

Returns:

  • Tuple[numpy.ndarray]: Output tensor
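For reference, the softmax computation in plain numpy (a sketch, not the library's implementation):

import numpy

def softmax_reference(x, axis=1):
    # Subtract the max for numerical stability, then normalize the exponentials
    x_max = numpy.max(x, axis=axis, keepdims=True)
    exp_x = numpy.exp(x - x_max)
    return exp_x / numpy.sum(exp_x, axis=axis, keepdims=True)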


function numpy_cos

numpy_cos(x: ndarray) → Tuple[ndarray]

Compute cos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_cosh

numpy_cosh(x: ndarray) → Tuple[ndarray]

Compute cosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sin

numpy_sin(x: ndarray) → Tuple[ndarray]

Compute sin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sinh

numpy_sinh(x: ndarray) → Tuple[ndarray]

Compute sinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tan

numpy_tan(x: ndarray) → Tuple[ndarray]

Compute tan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tanh

numpy_tanh(x: ndarray) → Tuple[ndarray]

Compute tanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acos

numpy_acos(x: ndarray) → Tuple[ndarray]

Compute acos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acosh

numpy_acosh(x: ndarray) → Tuple[ndarray]

Compute acosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asin

numpy_asin(x: ndarray) → Tuple[ndarray]

Compute asin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asinh

numpy_asinh(x: ndarray) → Tuple[ndarray]

Compute asinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atan

numpy_atan(x: ndarray) → Tuple[ndarray]

Compute atan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atanh

numpy_atanh(x: ndarray) → Tuple[ndarray]

Compute atanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_elu

numpy_elu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute elu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_selu

numpy_selu(
    x: ndarray,
    alpha: float = 1.6732632423543772,
    gamma: float = 1.0507009873554805
) → Tuple[ndarray]

Compute selu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • gamma (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_celu

numpy_celu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute celu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_leakyrelu

numpy_leakyrelu(x: ndarray, alpha: float = 0.01) → Tuple[ndarray]

Compute leakyrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_thresholdedrelu

numpy_thresholdedrelu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute thresholdedrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardsigmoid

numpy_hardsigmoid(
    x: ndarray,
    alpha: float = 0.2,
    beta: float = 0.5
) → Tuple[ndarray]

Compute hardsigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • beta (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor
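The HardSigmoid formula in plain numpy, as a reference sketch:

import numpy

def hardsigmoid_reference(x, alpha=0.2, beta=0.5):
    # y = max(0, min(1, alpha * x + beta))
    return numpy.clip(alpha * x + beta, 0.0, 1.0)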


function numpy_softplus

numpy_softplus(x: ndarray) → Tuple[ndarray]

Compute softplus in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_abs

numpy_abs(x: ndarray) → Tuple[ndarray]

Compute abs in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_div

numpy_div(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute div in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_mul

numpy_mul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute mul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sub

numpy_sub(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute sub in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_log

numpy_log(x: ndarray) → Tuple[ndarray]

Compute log in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_erf

numpy_erf(x: ndarray) → Tuple[ndarray]

Compute erf in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardswish

numpy_hardswish(x: ndarray) → Tuple[ndarray]

Compute hardswish in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_exp

numpy_exp(x: ndarray) → Tuple[ndarray]

Compute exponential in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: The exponential of the input tensor computed element-wise


function numpy_equal

numpy_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not

numpy_not(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not_float

numpy_not_float(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater

numpy_greater(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_float

numpy_greater_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal

numpy_greater_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal_float

numpy_greater_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less

numpy_less(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_float

numpy_less_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal

numpy_less_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal_float

numpy_less_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_identity

numpy_identity(x: ndarray) → Tuple[ndarray]

Compute identity in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_transpose

numpy_transpose(x: ndarray, perm=None) → Tuple[ndarray]

Transpose in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13

Args:

  • x (numpy.ndarray): Input tensor

  • perm (numpy.ndarray): Permutation of the axes

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_conv

numpy_conv(
    x: ndarray,
    w: ndarray,
    b: Optional[ndarray] = None,
    dilations: Tuple[int, ],
    group: int = 1,
    kernel_shape: Tuple[int, ],
    pads: Tuple[int, ],
    strides: Tuple[int, ]
) → Tuple[ndarray]

Compute N-D convolution using Torch.

Currently supports 2d convolution with torch semantics. This function is also ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv

Args:

  • x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d

  • w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

  • b (Optional[numpy.ndarray]): bias tensor, Shape is (O,). Default to None.

  • dilations (Tuple[int, ...]): dilation of the kernel, default 1 on all dimensions.

  • group (int): number of convolution groups, can be 1 or a multiple of both (C,) and (O,), so that I = C / group. Default to 1.

  • kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int, ...]): stride of the convolution on each axis

Returns:

  • res (numpy.ndarray): a tensor of size (N x OutChannels x OutHeight x OutWidth).

  • See https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html


function numpy_avgpool

numpy_avgpool(
    x: ndarray,
    ceil_mode: int,
    kernel_shape: Tuple[int, ],
    pads: Tuple[int, ] = None,
    strides: Tuple[int, ] = None
) → Tuple[ndarray]

Compute Average Pooling using Torch.

Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool

Args:

  • x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d

  • ceil_mode (int): ONNX rounding parameter, expected 0 (torch style dimension computation)

  • kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int, ...]): stride of the convolution on each axis

Returns:

  • res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth).

  • See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html

Raises:

  • AssertionError: if the pooling arguments are wrong


function numpy_maxpool

numpy_maxpool(
    x: ndarray,
    kernel_shape: Tuple[int, ],
    strides: Tuple[int, ] = None,
    auto_pad: str = 'NOTSET',
    pads: Tuple[int, ] = None,
    dilations: Optional[Tuple[int, ], List[int]] = None,
    ceil_mode: int = 0,
    storage_order: int = 0
) → Tuple[ndarray]

Compute Max Pooling using Torch.

Currently supports 2d max pooling with torch semantics. This function is ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool

Args:

  • x (numpy.ndarray): the input

  • kernel_shape (Union[Tuple[int, ...], List[int]]): shape of the kernel

  • strides (Optional[Union[Tuple[int, ...], List[int]]]): stride along each spatial axis set to 1 along each spatial axis if not set

  • auto_pad (str): padding strategy, default = "NOTSET"

  • pads (Optional[Union[Tuple[int, ...], List[int]]]): padding for the beginning and ending along each spatial axis (D1_begin, D2_begin, ..., D1_end, D2_end, ...) set to 0 along each spatial axis if not set

  • dilations (Optional[Union[Tuple[int, ...], List[int]]]): dilation along each spatial axis set to 1 along each spatial axis if not set

  • ceil_mode (int): ceiling mode, default = 0

  • storage_order (int): storage order, 0 for row major, 1 for column major, default = 0

Returns:

  • res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth).

  • See https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html


function numpy_cast

numpy_cast(data: ndarray, to: int) → Tuple[ndarray]

Execute ONNX cast in Numpy.

For traced values during compilation, it supports only booleans, which are converted to float. For raw values (used in constant folding or shape computations), any cast is allowed.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast

Args:

  • data (numpy.ndarray): Input encrypted tensor

  • to (int): integer value of the onnx.TensorProto DataType enum

Returns:

  • result (numpy.ndarray): a tensor with the required data type


function numpy_batchnorm

numpy_batchnorm(
    x: ndarray,
    scale: ndarray,
    bias: ndarray,
    input_mean: ndarray,
    input_var: ndarray,
    epsilon=1e-05,
    momentum=0.9,
    training_mode=0
) → Tuple[ndarray]

Compute the batch normalization of the input tensor.

This can be expressed as:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization-14

Args:

  • x (numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size, C is the number of channels.

  • scale (numpy.ndarray): scale tensor of shape (C,)

  • bias (numpy.ndarray): bias tensor of shape (C,)

  • input_mean (numpy.ndarray): mean values to use for each input channel, shape (C,)

  • input_var (numpy.ndarray): variance values to use for each input channel, shape (C,)

  • epsilon (float): avoids division by zero

  • momentum (float): momentum used during training of the mean/variance, not used in inference

  • training_mode (int): if the model was exported in training mode this is set to 1, else 0

Returns:

  • numpy.ndarray: Normalized tensor


function numpy_flatten

numpy_flatten(x: ndarray, axis: int = 1) → Tuple[ndarray]

Flatten a tensor into a 2d array.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13.

Args:

  • x (numpy.ndarray): tensor to flatten

  • axis (int): axis after which all dimensions will be flattened (axis=0 gives a 1D output)

Returns:

  • result: flattened tensor


function numpy_or

numpy_or(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_or_float

numpy_or_float(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_round

numpy_round(a: ndarray) → Tuple[ndarray]

Compute round in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11. Note that the ONNX Round operator is actually a rint, since the number of decimals is forced to be 0.

Args:

  • a (numpy.ndarray): Input tensor whose elements to be rounded.

Returns:

  • Tuple[numpy.ndarray]: Output tensor with rounded input elements.


function numpy_pow

numpy_pow(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute pow in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13

Args:

  • a (numpy.ndarray): Input tensor whose elements to be raised.

  • b (numpy.ndarray): The power to which we want to raise.

Returns:

  • Tuple[numpy.ndarray]: Output tensor.


function numpy_floor

numpy_floor(x: ndarray) → Tuple[ndarray]

Compute Floor in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_max

numpy_max(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute Max in numpy according to ONNX spec.

Computes the max between the first input and a float constant.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Constant tensor to compare to the first input

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_min

numpy_min(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute Min in numpy according to ONNX spec.

Computes the minimum between the first input and a float constant.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Min-1

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Constant tensor to compare to the first input

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sign

numpy_sign(x: ndarray) → Tuple[ndarray]

Compute Sign in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_neg

numpy_neg(x: ndarray) → Tuple[ndarray]

Compute Negative in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Neg-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_concatenate

numpy_concatenate(*x: ndarray, axis: int) → Tuple[ndarray]

Apply concatenate in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#concat-13

Args:

  • *x (numpy.ndarray): Input tensors to be concatenated.

  • axis (int): Which axis to concat on.

Returns:

  • Tuple[numpy.ndarray]: Output tensor.


class RawOpOutput

Type construct that marks an ndarray as a raw output of a quantized op.


class ONNXMixedFunction

A mixed quantized-raw valued onnx function.

ONNX functions will take inputs which can be either quantized or float. Some functions only take quantized inputs, but some functions take both types. For mixed functions we need to tag the parameters that do not need quantization. Thus quantized ops can know which inputs are not QuantizedArray and we avoid unnecessary wrapping of float values as QuantizedArrays.

method __init__

__init__(function, non_quant_params: Set[str], output_is_raw: bool = False)

Create the mixed function and raw parameter list.

Args:

  • function (Any): function to be decorated

  • non_quant_params: Set[str]: set of parameters that will not be quantized (stored as numpy.ndarray)

  • output_is_raw (bool): indicates whether the op outputs a value that should not be quantized

concrete.ml.search_parameters.p_error_search.md

module concrete.ml.search_parameters.p_error_search

p_error binary search for classification and regression tasks.

Only PyTorch neural networks and Concrete built-in models are supported.

  • Concrete built-in models include trees and QNN

  • Quantization-aware trained models are supported using the Brevitas framework

  • Torch models can be converted into post-training quantized models

The p_error is an essential hyper-parameter of FHE computation at Zama, as it impacts both the speed of the FHE computations and the model's performance.

In this script, we provide an approach to find an optimal p_error, which offers a good compromise between speed and model performance.

The p_error represents the probability of a single PBS being incorrect. Note that the FHE scheme allows two types of operations to be performed:

  • Linear operations: additions and multiplications

  • Non-linear operation: uni-variate activation functions

At Zama, non-linear operations are represented by table lookups (TLU), which are implemented through the Programmable Bootstrapping technology (PBS). A single PBS operation has a p_error probability of being incorrect.

It's highly recommended to adjust the p_error as it is linked to the data-set.

The inference is performed via the FHE simulation mode.

The goal is to look for the largest p_error_i, a float ∈ ]0, 0.9[, which gives a model_i that has accuracy_i, such that |accuracy_i - accuracy_0| <= Threshold, where Threshold ∈ R is given by the user and accuracy_0 refers to the original model_0 with p_error_0 ≈ 0.0.

p_error is bounded between 0 and 0.9. p_error ≈ 0.0 refers to the original model in the clear, which gives an accuracy noted accuracy_0.

We assume that the condition is satisfied when we have a match. A match is defined by a uni-variate function, given by the user through the strategy argument; it can be:

  • any = lambda all_matches: any(all_matches)

  • all = lambda all_matches: all(all_matches)

  • mean = lambda all_matches: numpy.mean(all_matches) >= 0.5

  • median = lambda all_matches: numpy.median(all_matches) == 1

To validate the results of the FHE simulation and get a stable estimation, we run several simulations. If there is a match, we update the lower bound to be the current p_error; else, we update the upper bound to be the current p_error. The current p_error is then updated with the mean of the bounds.

We stop the search when the maximum number of iterations is reached.

If we don't reach the convergence, a user warning is raised.
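The search loop described above is a standard bisection; a simplified sketch follows (evaluate_match is a stand-in for compiling the model with the given p_error, running the FHE simulation and checking the accuracy condition):

def search_p_error(evaluate_match, lower=0.0, upper=0.9, max_iter=20):
    # Bisection on p_error: keep the largest value that still matches the reference accuracy
    p_error = (lower + upper) / 2
    for _ in range(max_iter):
        if evaluate_match(p_error):
            # Accuracy is still close enough to the reference: try a larger p_error
            lower = p_error
        else:
            # Too much accuracy loss: back off to a smaller p_error
            upper = p_error
        p_error = (lower + upper) / 2
    return p_error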


function compile_and_simulated_fhe_inference

compile_and_simulated_fhe_inference(
    estimator: Module,
    calibration_data: ndarray,
    ground_truth: ndarray,
    p_error: float,
    n_bits: int,
    is_qat: bool,
    metric: Callable,
    predict: str,
    **kwargs: Dict
) → Tuple[ndarray, float]

Get the quantized module of a given model in FHE, simulated or not.

Supported models are:

  • Built-in models, including trees and QNN,

  • Quantization-aware trained models are supported using the Brevitas framework,

  • Torch models can be converted into post-training quantized models.

Args:

  • estimator (torch.nn.Module): Torch model or a built-in model

  • calibration_data (numpy.ndarray): Calibration data required for compilation

  • ground_truth (numpy.ndarray): The ground truth

  • p_error (float): Concrete ML uses table lookups (TLU) to represent any non-linear operation; p_error is the probability of a single TLU being incorrect.

  • n_bits (int): Quantization bits

  • is_qat (bool): True if the NN has been trained through QAT (quantization-aware training); if False, it is converted into a post-training quantized model.

  • metric (Callable): Classification or regression evaluation metric.

  • predict (str): The predict method to use.

  • kwargs (Dict): Hyper-parameters to use for the metric.

Returns:

  • Tuple[numpy.ndarray, float]: The de-quantized or quantized model output (depending on is_benchmark_test) and the score.

Raises:

  • ValueError: If the model is neither a built-in model nor a torch neural network.


class BinarySearch

Class for p_error hyper-parameter search for classification and regression tasks.

method __init__

__init__(
    estimator,
    predict: str,
    metric: Callable,
    n_bits: int = 4,
    is_qat: bool = True,
    lower: float = 0.0,
    upper: float = 0.9,
    max_iter: int = 20,
    n_simulation: int = 5,
    strategy: Any = <built-in function all>,
    max_metric_loss: float = 0.01,
    save: bool = False,
    log_file: str = None,
    directory: str = None,
    verbose: bool = False,
    **kwargs: dict
)

p_error binary search algorithm.

Args:

  • estimator: Custom model (Brevitas or PyTorch) or a built-in model (trees or QNNs).

  • predict (str): The prediction method to use for built-in tree models.

  • metric (Callable): Evaluation metric for classification or regression tasks.

  • n_bits (int): Quantization bits, for PTQ models. Default is 4.

  • is_qat (bool): Flag that indicates whether the estimator has been trained through QAT (quantization-aware training). Default is True.

  • lower (float): The lower bound of the search space for the p_error. Default is 0.0.

  • upper (float): The upper bound of the search space for the p_error. Default is 0.9. Increasing the upper bound beyond this range may result in longer execution times, especially when p_error ≈ 1.

  • max_iter (int): The maximum number of iterations to run the binary search algorithm. Default is 20.

  • n_simulation (int): The number of simulations to validate the results of the FHE simulation. Default is 5.

  • strategy (Any): A uni-variate function that defines a "match". It can be a built-in Python function, such as any() or all(), or a custom function, like:

  • mean = lambda all_matches: numpy.mean(all_matches) >= 0.5

  • median = lambda all_matches: numpy.median(all_matches) == 1 Default is 'all'.

  • max_metric_loss (float): The threshold to use to satisfy the condition: | accuracy_i - accuracy_0| <= max_metric_loss. Default is 0.01.

  • save (bool): Flag that indicates whether to save some meta data in log file. Default is False.

  • log_file (str): The log file name. Default is None.

  • directory (str): The directory to save the meta data. Default is None.

  • verbose (bool): Flag that indicates whether to print detailed information. Default is False.

  • kwargs: Parameter of the evaluation metric.


method eval_match

eval_match(strategy: Callable, all_matches: List[bool]) → Union[bool, bool_]

Eval the matches.

Args:

  • strategy (Callable): A uni-variate function that defines a "match". It can be a built-in Python function, such as any() or all(), or a custom function, like:

  • mean = lambda all_matches: numpy.mean(all_matches) >= 0.5

  • median = lambda all_matches: numpy.median(all_matches) == 1

  • all_matches (List[bool]): List of matches.

Returns:

  • bool: Evaluation of the matches according to the given strategy.

Raises:

  • TypeError: If the strategy function is not valid.


method reset_history

reset_history() → None

Clean history.


method run

run(
    x: ndarray,
    ground_truth: ndarray,
    strategy: Callable = <built-in function all>,
    **kwargs: Dict
) → float

Get an optimal p_error using binary search for classification and regression tasks.

PyTorch models and built-in models are supported.

To find an optimal p_error that offers a balance between speed and efficiency, we use a binary search approach. The goal is to find the largest p_error_i, a float ∈ ]0,1[, which gives a model_i with accuracy_i such that |accuracy_i - accuracy_0| <= max_metric_loss, where max_metric_loss ∈ R and accuracy_0 refers to the original model_0 with p_error ≈ 0.0.

We assume that the condition is satisfied when we have a match. A match is defined by a uni-variate function, specified through the strategy argument.

To validate the results of the FHE simulation and get a stable estimation, we perform multiple samplings. If there is a match, we update the lower bound to the current p_error; otherwise, we update the upper bound to the current p_error. The current p_error is then updated with the mean of the bounds.

We stop the search either when the maximum number of iterations is reached or when the update of the p_error falls below a given threshold.
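For illustration, here is a minimal sketch of the bound-update loop described above. It is not the actual Concrete ML implementation: is_match is a hypothetical stand-in for running the FHE simulations and checking the accuracy-loss condition.

```python
# Illustrative sketch of the binary search described above (not the actual
# Concrete ML implementation). `is_match` is a hypothetical stand-in for
# running the simulations and checking |accuracy_i - accuracy_0| <= max_metric_loss.
def search_p_error(is_match, lower=0.0, upper=0.9, max_iter=20, tol=1e-4):
    p_error = (lower + upper) / 2
    for _ in range(max_iter):
        if is_match(p_error):
            lower = p_error  # accuracy preserved: try a larger p_error
        else:
            upper = p_error  # accuracy degraded: try a smaller p_error
        new_p_error = (lower + upper) / 2
        if abs(new_p_error - p_error) < tol:
            break
        p_error = new_p_error
    return p_error

# Toy criterion: any p_error below 0.05 "matches"
print(search_p_error(lambda p: p < 0.05))
```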

Args:

  • x (numpy.ndarray): Data-set which is used for calibration and evaluation

  • ground_truth (numpy.ndarray): The ground truth

  • kwargs (Dict): Class parameters

  • strategy (Callable): A uni-variate function that defines a "match". It can be a built-in Python function, like any or all, or a custom function, like:

  • mean = lambda all_matches: numpy.mean(all_matches) >= 0.5

  • median = lambda all_matches: numpy.median(all_matches) == 1. Default is all.

Returns:

  • float: The optimal p_error that aims to speedup computations while maintaining good performance.
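A brief usage sketch with a built-in model follows. The import path and the choice of estimator are assumptions; any trained built-in model or QAT custom model supported by BinarySearch can be used in its place.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score

from concrete.ml.search_parameters import BinarySearch  # assumed import path
from concrete.ml.sklearn import DecisionTreeClassifier

x, y = make_classification(n_samples=100, n_features=10, random_state=42)

# Train a built-in model, then search for the largest acceptable p_error
model = DecisionTreeClassifier(n_bits=4)
model.fit(x, y)

search = BinarySearch(
    estimator=model,
    predict="predict",
    metric=accuracy_score,
    max_metric_loss=0.01,
)
largest_p_error = search.run(x=x, ground_truth=y, strategy=all)
print(f"Largest acceptable p_error: {largest_p_error}")
```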

concrete.ml.quantization.quantized_ops.md

module concrete.ml.quantization.quantized_ops

Quantized versions of the ONNX operators for post training quantization.


class QuantizedSigmoid

Quantized sigmoid op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedHardSigmoid

Quantized HardSigmoid op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedRelu

Quantized Relu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedPRelu

Quantized PRelu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedLeakyRelu

Quantized LeakyRelu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedHardSwish

Quantized Hardswish op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedElu

Quantized Elu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedSelu

Quantized Selu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedCelu

Quantized Celu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedClip

Quantized clip op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedRound

Quantized round op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedPow

Quantized pow op.

Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedGemm

Quantized Gemm op.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method q_impl


class QuantizedMatMul

Quantized MatMul op.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method q_impl


class QuantizedAdd

Quantized Addition operator.

Can add either two variables (both encrypted) or a variable and a constant


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU
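As an illustrative sketch of this fusion rule, the hypothetical module below exhibits an expression where all inputs derive from a single encrypted tensor and can therefore be fused into one TLU.

```python
import torch

# Sketch: both operands of the addition derive from the same encrypted tensor x,
# so the float expression x + x * 1.75 can be fused into a single TLU when the
# model is compiled with Concrete ML.
class FusableActivation(torch.nn.Module):
    def forward(self, x):
        return x + x * 1.75

# Runs in the clear; by contrast, adding two distinct encrypted tensors (x + y)
# cannot be fused and is computed as an encrypted integer addition.
print(FusableActivation()(torch.tensor([2.0])))
```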


method q_impl


class QuantizedTanh

Quantized Tanh op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedSoftplus

Quantized Softplus op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedExp

Quantized Exp op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedLog

Quantized Log op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedAbs

Quantized Abs op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedIdentity

Quantized Identity op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method q_impl


class QuantizedReshape

Quantized Reshape op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Reshape can not be fused since it must be performed over integer tensors, as it rearranges the elements of an encrypted tensor.

Returns:

  • bool: False, this operation can not be fused as it reshapes encrypted tensors


method q_impl

Reshape the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional reshape options

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedConv

Quantized Conv op.

method __init__

Construct the quantized convolution operator and retrieve parameters.

Args:

  • n_bits_output: number of bits for the quantization of the outputs of this operator

  • op_instance_name (str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.

  • int_input_names: names of integer tensors that are taken as input for this operation

  • constant_inputs: the weights and activations

  • input_quant_opts: options for the input quantizer

  • attrs: convolution options

  • dilations (Tuple[int]): dilation of the kernel. Default to 1 on all dimensions.

  • group (int): number of convolution groups. Default to 1.

  • kernel_shape (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int]): stride of the convolution on each axis


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method q_impl

Compute the quantized convolution between two quantized tensors.

Allows an optional quantized bias.

Args:

  • q_inputs: input tuple, contains

  • x (numpy.ndarray): input data. Shape is N x C x H x W for 2d

  • w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

  • b (numpy.ndarray, Optional): bias tensor, Shape is (O,)

  • calibrate_rounding (bool): Whether to calibrate rounding

  • attrs: convolution options handled in constructor

Returns:

  • res (QuantizedArray): result of the quantized integer convolution


class QuantizedAvgPool

Quantized Average Pooling op.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method q_impl


class QuantizedMaxPool

Quantized Max Pooling op.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Max Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

Returns:

  • bool: False, this operation can not be fused as it combines different encrypted integers


method q_impl


class QuantizedPad

Quantized Padding op.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Pad operation cannot be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation cannot be fused as it manipulates integer tensors


method q_impl


class QuantizedWhere

Where operator on quantized arrays.

Supports only constants for the results produced on the True/False branches.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedCast

Cast the input to the required data type.

In FHE we only support a limited number of output types. Booleans are cast to integers.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedGreater

Comparison operator >.

Only supports comparison with a constant.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedGreaterOrEqual

Comparison operator >=.

Only supports comparison with a constant.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedLess

Comparison operator <.

Only supports comparison with a constant.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedLessOrEqual

Comparison operator <=.

Only supports comparison with a constant.

method __init__


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedOr

Or operator ||.

This operation does not work as a stand-alone quantized operation. It only works when it is fused, as in e.g., Act(x) = x || (x + 42).


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedDiv

Div operator /.

This operation does not work as a stand-alone quantized operation. It only works when it is fused, as in e.g., Act(x) = 1000 / (x + 42).


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedMul

Multiplication operator.

Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedSub

Subtraction operator.

This works the same way as addition, both for encrypted - encrypted and for encrypted - constant operands.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Subtraction, like addition, can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU


method q_impl


class QuantizedBatchNormalization

Quantized Batch normalization with encrypted input and in-the-clear normalization params.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedFlatten

Quantized flatten for encrypted inputs.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Flatten operation cannot be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation cannot be fused as it manipulates integer tensors.


method q_impl

Flatten the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0

  • attrs: contains axis attribute

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedReduceSum

ReduceSum with encrypted input.

method __init__

Construct the quantized ReduceSum operator and retrieve parameters.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs.

  • op_instance_name (str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • axes (Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict): ReduceSum options.

  • keepdims (int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.

  • noop_with_empty_axes (int): Defines the behavior if 'axes' is empty or set to None. The default behavior (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1 (true), the input tensor will not be reduced, and the output tensor is equivalent to the input tensor. Default to 0.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method calibrate

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: The output values for the provided calibration samples.


method q_impl

Sum the encrypted tensor's values along the given axes.

Args:

  • q_inputs (QuantizedArray): An encrypted integer tensor at index 0.

  • attrs (Dict): Options are handled in constructor.

Returns:

  • (QuantizedArray): The sum of all values along the given axes.


class QuantizedErf

Quantized erf op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedNot

Quantized Not op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedBrevitasQuant

Brevitas uniform quantization with encrypted input.

method __init__

Construct the Brevitas quantization operator.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX

  • op_instance_name (str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • scale (float): Quantizer scale

  • zero_point (float): Quantizer zero-point

  • bit_width (int): Number of bits of the integer representation

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict):

  • rounding_mode (str): Rounding mode (default and only accepted option is "ROUND")

  • signed (int): Whether this op quantizes to signed integers (default 1),

  • narrow (int): Whether this op quantizes to a narrow range of integers, e.g., [-2^(n_bits-1)+1 .. 2^(n_bits-1)-1] (default 0).


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method calibrate

Create corresponding QuantizedArray for the output of Quantization function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method q_impl

Quantize values.

Args:

  • q_inputs: an encrypted integer tensor at index 0, scale, zero_point, n_bits at indices 1,2,3

  • attrs: additional optional attributes

Returns:

  • result (QuantizedArray): quantized encrypted integer tensor


class QuantizedTranspose

Transpose operator for quantized inputs.

This operator performs quantization and transposes the encrypted data. When the inputs are pre-computed QAT values, the input is only quantized if needed.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Transpose can not be fused since it must be performed over integer tensors as it moves around different elements of these input tensors.

Returns:

  • bool: False, this operation can not be fused as it copies encrypted integers


method q_impl

Transpose the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional transpose options

Returns:

  • result (QuantizedArray): transposed encrypted integer tensor


class QuantizedFloor

Quantized Floor op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedMax

Quantized Max op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedMin

Quantized Min op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedNeg

Quantized Neg op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedSign

Quantized Sign op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


class QuantizedUnsqueeze

Unsqueeze operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Unsqueeze can not be fused since it must be performed over integer tensors as it reshapes an encrypted tensor.

Returns:

  • bool: False, this operation can not be fused as it operates on encrypted tensors


method q_impl

Unsqueeze the input tensors on a given axis.

Args:

  • q_inputs: an encrypted integer tensor at index 0, axes at index 1

  • attrs: additional optional unsqueeze options

Returns:

  • result (QuantizedArray): unsqueezed encrypted integer tensor


class QuantizedConcat

Concatenate operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Concatenation can not be fused since it must be performed over integer tensors as it copies encrypted integers from one tensor to another.

Returns:

  • bool: False, this operation can not be fused as it copies encrypted integers


method q_impl

Concatenate the input tensors on a given axis.

Args:

  • q_inputs: an encrypted integer tensor

  • attrs: additional optional concatenate options

Returns:

  • result (QuantizedArray): concatenated encrypted integer tensor


class QuantizedSqueeze

Squeeze operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

Squeeze can not be fused since it must be performed over integer tensors as it reshapes encrypted tensors.

Returns:

  • bool: False, this operation can not be fused as it reshapes encrypted tensors


method q_impl

Squeeze the input tensors on a given axis.

Args:

  • q_inputs: an encrypted integer tensor at index 0, axes at index 1

  • attrs: additional optional squeeze options

Returns:

  • result (QuantizedArray): squeezed encrypted integer tensor


class ONNXShape

Shape operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

This operation returns the shape of the tensor and thus can not be fused into a univariate TLU.

Returns:

  • bool: False, this operation can not be fused


method q_impl


class ONNXConstantOfShape

ConstantOfShape operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

This operation returns a new encrypted tensor and thus can not be fused.

Returns:

  • bool: False, this operation can not be fused


class ONNXGather

Gather operator.

Returns values at requested indices from the input tensor.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

This operation returns values from a tensor and thus can not be fused into a univariate TLU.

Returns:

  • bool: False, this operation can not be fused


method q_impl


class ONNXSlice

Slice operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • Set[str]: the names of the tensors


method can_fuse

Determine if this op can be fused.

This operation returns values from a tensor and thus can not be fused into a univariate TLU.

Returns:

  • bool: False, this operation can not be fused


method q_impl

concrete.ml.sklearn.glm.md

module concrete.ml.sklearn.glm

Implement sklearn's Generalized Linear Models (GLM).


class PoissonRegressor

A Poisson regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on PoissonRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PoissonRegressor.html
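A brief usage sketch on synthetic count data follows (illustrative only; the dictionary form of n_bits shown is the one described above):

```python
import numpy
from concrete.ml.sklearn import PoissonRegressor

# Synthetic count data (Poisson-like targets)
rng = numpy.random.RandomState(42)
X = rng.uniform(size=(100, 4))
y = rng.poisson(numpy.exp(X @ numpy.array([0.5, -0.2, 0.1, 0.3])))

# Quantize inputs and weights on 6 bits each
model = PoissonRegressor(n_bits={"op_inputs": 6, "op_weights": 6})
model.fit(X, y)
model.compile(X)

# fhe="simulate" runs an FHE simulation; fhe="execute" runs actual FHE
y_pred = model.predict(X, fhe="simulate")
```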

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


method post_processing


method predict


class GammaRegressor

A Gamma regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on GammaRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.GammaRegressor.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


method post_processing


method predict


class TweedieRegressor

A Tweedie regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on TweedieRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.TweedieRegressor.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


method post_processing


method predict

concrete.ml.sklearn.qnn.md

module concrete.ml.sklearn.qnn

Scikit-learn interface for fully-connected quantized neural networks.

Global Variables

  • QNN_AUTO_KWARGS

  • OPTIONAL_MODULE_PARAMS

  • ATTRIBUTE_PREFIXES


class NeuralNetRegressor

A Fully-Connected Neural Network regressor with FHE.

This class wraps a quantized neural network implemented using Torch tools as a scikit-learn estimator. The skorch package is used to handle training and scikit-learn compatibility, and adds quantization as well as compilation functionalities. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).

Inputs and targets that are float64 will be cast to float32 before training, as Torch does not handle float64 types properly. This should not have a significant impact on the model's performance. An error is raised if these values are not floating point values.
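A brief usage sketch is shown below. The skorch-style module__ parameters are assumed to map to the SparseQuantNeuralNetwork arguments documented further down; the values are illustrative.

```python
import numpy
import torch
from concrete.ml.sklearn import NeuralNetRegressor

# Hypothetical parameter choices; "module__" prefixes are assumed to be
# forwarded to the underlying SparseQuantNeuralNetwork by skorch.
model = NeuralNetRegressor(
    module__n_layers=2,
    module__activation_function=torch.nn.ReLU,
    max_epochs=10,
    verbose=0,
)

X = numpy.random.uniform(size=(200, 10)).astype(numpy.float32)
y = X.sum(axis=1, keepdims=True).astype(numpy.float32)

model.fit(X, y)
model.compile(X)
y_pred = model.predict(X, fhe="simulate")
```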

method __init__


property base_module

Get the Torch module.

Returns:

  • SparseQuantNeuralNetwork: The fitted underlying module.


property fhe_circuit


property history


property input_quantizers

Get the input quantizers.

Returns:

  • List[UniformQuantizer]: The input quantizers.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property output_quantizers

Get the output quantizers.

Returns:

  • List[UniformQuantizer]: The output quantizers.


method dump_dict


method fit


method fit_benchmark


classmethod load_dict


method predict


method predict_proba


class NeuralNetClassifier

A Fully-Connected Neural Network classifier with FHE.

This class wraps a quantized neural network implemented using Torch tools as a scikit-learn estimator. The skorch package is used to handle training and scikit-learn compatibility, and adds quantization as well as compilation functionalities. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).

Inputs that are float64 will be cast to float32 before training, as Torch does not handle float64 types properly. This should not have a significant impact on the model's performance. If the targets are integers of lower bit-width, they will be safely cast to int64. Otherwise, an error is raised.

method __init__


property base_module

Get the Torch module.

Returns:

  • SparseQuantNeuralNetwork: The fitted underlying module.


property classes_


property fhe_circuit


property history


property input_quantizers

Get the input quantizers.

Returns:

  • List[UniformQuantizer]: The input quantizers.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property output_quantizers

Get the output quantizers.

Returns:

  • List[UniformQuantizer]: The output quantizers.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


method fit


method fit_benchmark


classmethod load_dict


method predict


method predict_proba

concrete.ml.sklearn.md

module concrete.ml.sklearn

Import sklearn models.

Global Variables

  • qnn_module

  • tree_to_numpy

  • base

  • glm

  • linear_model

  • neighbors

  • qnn

  • rf

  • svm

  • tree

  • xgb

concrete.ml.sklearn.qnn_module.md

module concrete.ml.sklearn.qnn_module

Sparse Quantized Neural Network torch module.


class SparseQuantNeuralNetwork

Sparse Quantized Neural Network.

This class implements an MLP that is compatible with FHE constraints. The weights and activations are quantized to low bit-width and pruning is used to ensure accumulators do not surpass a user-provided accumulator bit-width. The number of classes and number of layers are specified by the user, as well as the breadth of the network.

method __init__

Sparse Quantized Neural Network constructor.

Args:

  • input_dim (int): Number of dimensions of the input data.

  • n_layers (int): Number of linear layers for this network.

  • n_outputs (int): Number of output classes or regression targets.

  • n_w_bits (int): Number of weight bits.

  • n_a_bits (int): Number of activation and input bits.

  • n_accum_bits (int): Maximal allowed bit-width of intermediate accumulators.

  • n_hidden_neurons_multiplier (int): The number of neurons in the hidden layers will be the number of dimensions of the input multiplied by n_hidden_neurons_multiplier. Note that pruning is used to adjust the accumulator size to attempt to keep the maximum accumulator bit-width to n_accum_bits, meaning that not all hidden layer neurons will be active. The default value for n_hidden_neurons_multiplier is chosen for small dimensions of the input. Reducing this value decreases the FHE inference time considerably but also decreases the robustness and accuracy of model training.

  • n_prune_neurons_percentage (float): The percentage of neurons to prune in the hidden layers. This can be used when setting n_hidden_neurons_multiplier with a high number (3-4), once good accuracy is obtained, in order to speed up the model in FHE.

  • activation_function (Type): The activation function to use in the network (e.g., torch.ReLU, torch.SELU, torch.Sigmoid, ...).

  • quant_narrow (bool): Whether this network should quantize the values using narrow range (e.g a 2-bits signed quantization uses [-1, 0, 1] instead of [-2, -1, 0, 1]).

  • quant_signed (bool): Whether this network should quantize the values using signed integers.

  • power_of_two_scaling (bool): Force quantization scales to be a power of two to enable inference speed optimizations. Defaults to True

Raises:

  • ValueError: If the parameters have invalid values or the computed accumulator bit-width is zero.
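For illustration, a minimal instantiation sketch using only the constructor arguments documented above (the values are arbitrary; remaining arguments keep their defaults):

```python
import torch
from concrete.ml.sklearn.qnn_module import SparseQuantNeuralNetwork

net = SparseQuantNeuralNetwork(
    input_dim=10,
    n_layers=2,
    n_outputs=1,
    n_w_bits=3,
    n_a_bits=3,
    n_accum_bits=8,
    activation_function=torch.nn.ReLU,
)

# Float forward pass on random data; the QAT quantizers are simulated in float
out = net(torch.randn(4, 10))
print(out.shape)
```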


method enable_pruning

Enable pruning in the network. Pruning must be made permanent to recover pruned weights.

Raises:

  • ValueError: If the quantization parameters are invalid.


method forward

Forward pass.

Args:

  • x (torch.Tensor): network input

Returns:

  • x (torch.Tensor): network prediction


method make_pruning_permanent

Make the learned pruning permanent in the network.


method max_active_neurons

Compute the maximum number of active (non-zero weight) neurons.

The computation is done using the quantization parameters passed to the constructor. Warning: With the current quantization algorithm (asymmetric) the value returned by this function is not guaranteed to ensure FHE compatibility. For some weight distributions, weights that are 0 (which are pruned weights) will not be quantized to 0. Therefore the total number of active quantized neurons will not be equal to max_active_neurons.

Returns:

  • int: The maximum number of active neurons.

concrete.ml.sklearn.neighbors.md

module concrete.ml.sklearn.neighbors

Implement sklearn neighbors model.


class KNeighborsClassifier

A k-nearest neighbors classifier model with FHE.

Parameters:

  • n_bits (int): Number of bits to quantize the model. The value will be used for quantizing inputs and X_fit. Default to 3.

For more details on KNeighborsClassifier please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
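A brief usage sketch follows; n_neighbors is assumed to be forwarded to the underlying scikit-learn estimator, and predict is used since predict_proba is not implemented (see below).

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import KNeighborsClassifier

x, y = make_classification(n_samples=30, n_features=4, random_state=42)

# n_bits quantizes both the inputs and the stored training set (X_fit)
model = KNeighborsClassifier(n_bits=3, n_neighbors=3)
model.fit(x, y)
model.compile(x)

# Simulated FHE prediction; fhe="execute" would run actual FHE
y_pred = model.predict(x, fhe="simulate")
```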

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


method predict_proba

Predict class probabilities.

Args:

  • X (Data): The input values to predict, as a Numpy array, Torch tensor, Pandas DataFrame or List.

  • fhe (Union[FheMode, str]): The mode to use for prediction. Can be FheMode.DISABLE for Concrete ML Python inference, FheMode.SIMULATE for FHE simulation and FheMode.EXECUTE for actual FHE execution. Can also be the string representation of any of these values. Default to FheMode.DISABLE.

Raises:

  • NotImplementedError: The method is not implemented for now.

concrete.ml.sklearn.linear_model.md

module concrete.ml.sklearn.linear_model

Implement sklearn linear model.


class LinearRegression

A linear regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on LinearRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


class ElasticNet

An ElasticNet regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on ElasticNet please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


class Lasso

A Lasso regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on Lasso please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


class Ridge

A Ridge regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on Ridge please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


class LogisticRegression

A logistic regression model with FHE.

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on LogisticRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


classmethod load_dict

concrete.ml.sklearn.svm.md

module concrete.ml.sklearn.svm

Implement Support Vector Machine.


class LinearSVR

A Regression Support Vector Machine (SVM).

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on LinearSVR please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVR.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict


class LinearSVC

A Classification Support Vector Machine (SVM).

Parameters:

  • n_bits (int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys, with the corresponding numbers of quantization bits: op_inputs is the number of bits used to quantize the input values and op_weights is the number of bits used to quantize the learned parameters. Default to 8.

For more details on LinearSVC please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines computational graph, mlir, client and server into a single object. More information available in Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure) Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


classmethod load_dict

concrete.ml.torch.compile.md

module concrete.ml.torch.compile

torch compilation function.

Global Variables

  • MAX_BITWIDTH_BACKWARD_COMPATIBLE

  • OPSET_VERSION_FOR_ONNX_EXPORT


function has_any_qnn_layers

Check if a torch model has QNN layers.

This is useful to check if a model is a QAT model.

Args:

  • torch_model (torch.nn.Module): a torch model

Returns:

  • bool: whether this torch model contains any QNN layer.


function convert_torch_tensor_or_numpy_array_to_numpy_array

Convert a torch tensor or a numpy array to a numpy array.

Args:

  • torch_tensor_or_numpy_array (Tensor): the value that is either a torch tensor or a numpy array.

Returns:

  • numpy.ndarray: the value converted to a numpy array.


function build_quantized_module

Build a quantized module from a Torch or ONNX model.

Take a model in torch or ONNX, turn it to numpy, quantize its inputs / weights / outputs and retrieve the associated quantized module.

Args:

  • model (Union[torch.nn.Module, onnx.ModelProto]): The model to quantize, either in torch or in ONNX.

  • torch_inputset (Dataset): the calibration input-set, can contain either torch tensors or numpy.ndarray

  • import_qat (bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not re-quantize it

  • n_bits: the number of bits for the quantization

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

Returns:

  • QuantizedModule: The resulting QuantizedModule.


function compile_torch_model

Compile a torch module into an FHE equivalent.

Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete

Args:

  • torch_model (torch.nn.Module): the model to quantize

  • torch_inputset (Dataset): the calibration input-set, can contain either torch tensors or numpy.ndarray.

  • import_qat (bool): Set to True to import a network that contains quantizers and was trained using quantization aware training

  • configuration (Configuration): Configuration object to use during compilation

  • artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • n_bits: the number of bits for the quantization

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

  • p_error (Optional[float]): probability of error of a single PBS

  • global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation global_p_error is set to 0

  • verbose (bool): whether to show compilation information

  • inputs_encryption_status (Optional[Sequence[str]]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted.

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.
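A minimal sketch of this post-training quantization flow is shown below. The model and parameter values are illustrative, and the fhe argument of the returned module's forward is assumed to behave as for built-in models.

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# Illustrative float model
class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Representative calibration input-set
torch_inputset = numpy.random.uniform(-1, 1, size=(100, 10))

quantized_module = compile_torch_model(
    TinyMLP(),
    torch_inputset,
    n_bits=6,
    rounding_threshold_bits=6,
)

# Simulated FHE inference on one sample (assumed interface)
y = quantized_module.forward(torch_inputset[:1], fhe="simulate")
```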


function compile_onnx_model

Compile an ONNX model into an FHE equivalent.

Take a model in ONNX, turn it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete-Python

Args:

  • onnx_model (onnx.ModelProto): the model to quantize

  • torch_inputset (Dataset): the calibration input-set, can contain either torch tensors or numpy.ndarray.

  • import_qat (bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not re-quantize it.

  • configuration (Configuration): Configuration object to use during compilation

  • artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • n_bits: the number of bits for the quantization

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

  • p_error (Optional[float]): probability of error of a single PBS

  • global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation global_p_error is set to 0

  • verbose (bool): whether to show compilation information

  • inputs_encryption_status (Optional[Sequence[str]]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted.

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.


function compile_brevitas_qat_model

Compile a Brevitas Quantization Aware Training model.

The torch_model parameter is a subclass of torch.nn.Module that uses quantized operations from brevitas.qnn. The model is trained before calling this function. This function compiles the trained model to FHE.

Args:

  • torch_model (torch.nn.Module): the model to quantize

  • torch_inputset (Dataset): the calibration input-set, can contain either torch tensors or numpy.ndarray.

  • n_bits (Optional[Union[int, dict]]): the number of bits for the quantization. By default, for most models, a value of None should be given, which instructs Concrete ML to use the bit-widths configured using Brevitas quantization options. For some networks that perform a non-linear operation on an input or an output, if None is given, a default value of 8 bits is used for the input/output quantization. For such models, the user can also specify a dictionary with model_inputs/model_outputs keys to override the 8-bit default, or a single integer for both values.

  • configuration (Configuration): Configuration object to use during compilation

  • artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • rounding_threshold_bits (int): if not None, every accumulator in the model is rounded down to the given number of bits of precision

  • p_error (Optional[float]): probability of error of a single PBS

  • global_p_error (Optional[float]): probability of error of the full circuit. In FHE simulation global_p_error is set to 0

  • output_onnx_file (str): temporary file to store ONNX model. If None a temporary file is generated

  • verbose (bool): whether to show compilation information

  • inputs_encryption_status (Optional[Sequence[str]]): encryption status ('clear', 'encrypted') for each input. By default all arguments will be encrypted.

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.
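A minimal QAT sketch follows. The Brevitas layer choices and bit-widths are illustrative; since n_bits defaults to None, the bit-widths are taken from the Brevitas quantizers.

```python
import numpy
import torch
import brevitas.nn as qnn
from concrete.ml.torch.compile import compile_brevitas_qat_model

# Illustrative QAT model using Brevitas quantized layers
class TinyQATMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant_in = qnn.QuantIdentity(bit_width=3, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, 8, weight_bit_width=3, bias=True)
        self.act = qnn.QuantIdentity(bit_width=3, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(8, 2, weight_bit_width=3, bias=True)

    def forward(self, x):
        x = self.quant_in(x)
        x = torch.relu(self.fc1(x))
        x = self.act(x)
        return self.fc2(x)

# The (already trained) model is compiled on a representative input-set
torch_inputset = numpy.random.uniform(-1, 1, size=(100, 10))
quantized_module = compile_brevitas_qat_model(TinyQATMLP(), torch_inputset)
```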

concrete.ml.sklearn.tree_to_numpy.md

module concrete.ml.sklearn.tree_to_numpy

Implements the conversion of a tree model to a numpy function.

Global Variables

  • MAX_BITWIDTH_BACKWARD_COMPATIBLE

  • OPSET_VERSION_FOR_ONNX_EXPORT


function get_onnx_model

Create ONNX model with Hummingbird convert method.

Args:

  • model (Callable): The tree model to convert.

  • x (numpy.ndarray): Dataset used to trace the tree inference and convert the model to ONNX.

  • framework (str): The framework from which the ONNX model is generated (options: 'xgboost', 'sklearn').

Returns:

  • onnx.ModelProto: The ONNX model.


function workaround_squeeze_node_xgboost

Workaround to fix torch issue that does not export the proper axis in the ONNX squeeze node.

FIXME: https://github.com/zama-ai/concrete-ml-internal/issues/2778. The squeeze op does not have the proper dimensions; remove this workaround when the issue is fixed. This function adds the axis attribute to the Squeeze node.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model.


function assert_add_node_and_constant_in_xgboost_regressor_graph

Assert if an Add node with a specific constant exists in the ONNX graph.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model.


function add_transpose_after_last_node

Add transpose after last node.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model.


function preprocess_tree_predictions

Apply post-processing from the graph.

Args:

  • init_tensor (numpy.ndarray): Model parameters to be pre-processed.

  • output_n_bits (int): The number of bits of the output.

Returns:

  • QuantizedArray: Quantizer for the tree predictions.


function tree_onnx_graph_preprocessing

Apply pre-processing onto the ONNX graph.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model.

  • framework (str): The framework from which the ONNX model is generated (options: 'xgboost', 'sklearn').

  • expected_number_of_outputs (int): The expected number of outputs in the ONNX model.


function tree_values_preprocessing

Pre-process tree values.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model.

  • framework (str): The framework from which the ONNX model is generated (options: 'xgboost', 'sklearn').

  • output_n_bits (int): The number of bits of the output.

Returns:

  • QuantizedArray: Quantizer for the tree predictions.


function tree_to_numpy

Convert the tree inference to numpy functions using Hummingbird.

Args:

  • model (Callable): The tree model to convert.

  • x (numpy.ndarray): The input data.

  • framework (str): The framework from which the ONNX model is generated (options: 'xgboost', 'sklearn').

  • output_n_bits (int): The number of bits of the output. Default to 8.

Returns:

  • Tuple[Callable, List[QuantizedArray], onnx.ModelProto]: A tuple with a function that takes a numpy array and returns a numpy array, QuantizedArray object to quantize and de-quantize the output of the tree, and the ONNX model.

concrete.ml.sklearn.rf.md

module concrete.ml.sklearn.rf

Implement RandomForest models.


class RandomForestClassifier

Implements the RandomForest classifier.

method __init__

Initialize the RandomForestClassifier.

noqa: DAR101


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


classmethod load_dict


method post_processing


class RandomForestRegressor

Implements the RandomForest regressor.

method __init__

Initialize the RandomForestRegressor.

noqa: DAR101


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict

concrete.ml.sklearn.tree.md

module concrete.ml.sklearn.tree

Implement DecisionTree models.


class DecisionTreeClassifier

Implements the sklearn DecisionTreeClassifier.

method __init__

Initialize the DecisionTreeClassifier.

noqa: DAR101


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


classmethod load_dict


method post_processing


class DecisionTreeRegressor

Implements the sklearn DecisionTreeRegressor.

method __init__

Initialize the DecisionTreeRegressor.

noqa: DAR101


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


classmethod load_dict

concrete.ml.sklearn.xgb.md

module concrete.ml.sklearn.xgb

Implements XGBoost models.


class XGBClassifier

Implements the XGBoost classifier.

See https://xgboost.readthedocs.io/en/stable/python/python_api.html#module-xgboost.sklearn for more information about the parameters used.

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property n_classes_

Get the model's number of classes.

Using this attribute is deprecated.

Returns:

  • int: The model's number of classes.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


property target_classes_

Get the model's classes.

Using this attribute is deprecated.

Returns:

  • Optional[numpy.ndarray]: The model's classes.


method dump_dict


classmethod load_dict


class XGBRegressor

Implements the XGBoost regressor.

See https://xgboost.readthedocs.io/en/stable/python/python_api.html#module-xgboost.sklearn for more information about the parameters used.

method __init__


property fhe_circuit

Get the FHE circuit.

The FHE circuit combines the computational graph, the MLIR, the client and the server into a single object. More information is available in the Concrete documentation (https://docs.zama.ai/concrete/getting-started/terminology_and_structure). Is None if the model is not fitted.

Returns:

  • Circuit: The FHE circuit.


property is_compiled

Indicate if the model is compiled.

Returns:

  • bool: If the model is compiled.


property is_fitted

Indicate if the model is fitted.

Returns:

  • bool: If the model is fitted.


property onnx_model

Get the ONNX model.

Is None if the model is not fitted.

Returns:

  • onnx.ModelProto: The ONNX model.


method dump_dict


method fit


classmethod load_dict


method post_processing

Signatures of the functions and model constructors documented above:

compile_brevitas_qat_model(
    torch_model: Module,
    torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]],
    n_bits: Optional[int, Dict[str, int]] = None,
    configuration: Optional[Configuration] = None,
    artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    rounding_threshold_bits: Optional[int] = None,
    p_error: Optional[float] = None,
    global_p_error: Optional[float] = None,
    output_onnx_file: Union[NoneType, Path, str] = None,
    verbose: bool = False,
    inputs_encryption_status: Optional[Sequence[str]] = None
) → QuantizedModule

get_onnx_model(model: Callable, x: ndarray, framework: str) → ModelProto

workaround_squeeze_node_xgboost(onnx_model: ModelProto)

assert_add_node_and_constant_in_xgboost_regressor_graph(onnx_model: ModelProto)

add_transpose_after_last_node(onnx_model: ModelProto)

preprocess_tree_predictions(
    init_tensor: ndarray,
    output_n_bits: int
) → QuantizedArray

tree_onnx_graph_preprocessing(
    onnx_model: ModelProto,
    framework: str,
    expected_number_of_outputs: int
)

tree_values_preprocessing(
    onnx_model: ModelProto,
    framework: str,
    output_n_bits: int
) → QuantizedArray

tree_to_numpy(
    model: Callable,
    x: ndarray,
    framework: str,
    output_n_bits: int = 8
) → Tuple[Callable, List[UniformQuantizer], ModelProto]

RandomForestClassifier.__init__(
    n_bits: int = 6,
    n_estimators=20,
    criterion='gini',
    max_depth=4,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features='sqrt',
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    bootstrap=True,
    oob_score=False,
    n_jobs=None,
    random_state=None,
    verbose=0,
    warm_start=False,
    class_weight=None,
    ccp_alpha=0.0,
    max_samples=None
)

RandomForestRegressor.__init__(
    n_bits: int = 6,
    n_estimators=20,
    criterion='squared_error',
    max_depth=4,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features='sqrt',
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    bootstrap=True,
    oob_score=False,
    n_jobs=None,
    random_state=None,
    verbose=0,
    warm_start=False,
    ccp_alpha=0.0,
    max_samples=None
)

DecisionTreeClassifier.__init__(
    criterion='gini',
    splitter='best',
    max_depth=None,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features=None,
    random_state=None,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    class_weight=None,
    ccp_alpha: float = 0.0,
    n_bits: int = 6
)

DecisionTreeRegressor.__init__(
    criterion='squared_error',
    splitter='best',
    max_depth=None,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features=None,
    random_state=None,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    ccp_alpha=0.0,
    n_bits: int = 6
)

XGBClassifier.__init__(
    n_bits: int = 6,
    max_depth: Optional[int] = 3,
    learning_rate: Optional[float] = 0.1,
    n_estimators: Optional[int] = 20,
    objective: Optional[str] = 'binary:logistic',
    booster: Optional[str] = None,
    tree_method: Optional[str] = None,
    n_jobs: Optional[int] = None,
    gamma: Optional[float] = None,
    min_child_weight: Optional[float] = None,
    max_delta_step: Optional[float] = None,
    subsample: Optional[float] = None,
    colsample_bytree: Optional[float] = None,
    colsample_bylevel: Optional[float] = None,
    colsample_bynode: Optional[float] = None,
    reg_alpha: Optional[float] = None,
    reg_lambda: Optional[float] = None,
    scale_pos_weight: Optional[float] = None,
    base_score: Optional[float] = None,
    missing: float = nan,
    num_parallel_tree: Optional[int] = None,
    monotone_constraints: Optional[Dict[str, int], str] = None,
    interaction_constraints: Optional[str, List[Tuple[str]]] = None,
    importance_type: Optional[str] = None,
    gpu_id: Optional[int] = None,
    validate_parameters: Optional[bool] = None,
    predictor: Optional[str] = None,
    enable_categorical: bool = False,
    use_label_encoder: bool = False,
    random_state: Optional[int] = None,
    verbosity: Optional[int] = None
)

XGBRegressor.__init__(
    n_bits: int = 6,
    max_depth: Optional[int] = 3,
    learning_rate: Optional[float] = 0.1,
    n_estimators: Optional[int] = 20,
    objective: Optional[str] = 'reg:squarederror',
    booster: Optional[str] = None,
    tree_method: Optional[str] = None,
    n_jobs: Optional[int] = None,
    gamma: Optional[float] = None,
    min_child_weight: Optional[float] = None,
    max_delta_step: Optional[float] = None,
    subsample: Optional[float] = None,
    colsample_bytree: Optional[float] = None,
    colsample_bylevel: Optional[float] = None,
    colsample_bynode: Optional[float] = None,
    reg_alpha: Optional[float] = None,
    reg_lambda: Optional[float] = None,
    scale_pos_weight: Optional[float] = None,
    base_score: Optional[float] = None,
    missing: float = nan,
    num_parallel_tree: Optional[int] = None,
    monotone_constraints: Optional[Dict[str, int], str] = None,
    interaction_constraints: Optional[str, List[Tuple[str]]] = None,
    importance_type: Optional[str] = None,
    gpu_id: Optional[int] = None,
    validate_parameters: Optional[bool] = None,
    predictor: Optional[str] = None,
    enable_categorical: bool = False,
    use_label_encoder: bool = False,
    random_state: Optional[int] = None,
    verbosity: Optional[int] = None
)

concrete.ml.torch.md

module concrete.ml.torch

Modules for torch to numpy conversion.

Global Variables

  • numpy_module

  • compile

concrete.ml.torch.numpy_module.md

module concrete.ml.torch.numpy_module

A torch to numpy module.

Global Variables

  • OPSET_VERSION_FOR_ONNX_EXPORT


class NumpyModule

General interface to transform a torch.nn.Module to a numpy module.

Args:

  • torch_model (Union[nn.Module, onnx.ModelProto]): A fully trained, torch model along with its parameters or the onnx graph of the model.

  • dummy_input (Union[torch.Tensor, Tuple[torch.Tensor, ...]]): Sample tensors for all the module inputs, used in the ONNX export to get an easy-to-manipulate representation of the network.

  • debug_onnx_output_file_path (Optional[Union[Path, str]]): An optional path indicating where to save the ONNX file exported by torch, for debugging. Defaults to None.

method __init__

__init__(
    model: Union[Module, ModelProto],
    dummy_input: Optional[Tensor, Tuple[Tensor, ]] = None,
    debug_onnx_output_file_path: Optional[str, Path] = None
)

property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • _onnx_model (onnx.ModelProto): the ONNX model


method forward

forward(*args: ndarray) → Union[ndarray, Tuple[ndarray, ]]

Apply a forward pass on args with the equivalent numpy function only.

Args:

  • *args: the inputs of the forward function

Returns:

  • Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]: result of the forward on the given inputs
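As a small illustration (not part of the original text), a NumpyModule can be created from a torch model and a dummy input, after which the forward pass only uses numpy:

import torch
import numpy
from concrete.ml.torch.numpy_module import NumpyModule

# A small torch model and a dummy input used for the ONNX export
torch_model = torch.nn.Sequential(torch.nn.Linear(4, 3), torch.nn.Sigmoid())
dummy_input = torch.randn(1, 4)

numpy_module = NumpyModule(torch_model, dummy_input)

# The forward pass now only uses numpy operations
x = numpy.random.uniform(-1, 1, size=(5, 4)).astype(numpy.float32)
y = numpy_module.forward(x)
print(y.shape)  # (5, 3)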

concrete.ml.version.md

module concrete.ml.version

File to manage the version of the package.

concrete.ml.torch.hybrid_model.md

module concrete.ml.torch.hybrid_model

Implement the conversion of a torch model to a hybrid fhe/torch inference.

Global Variables

  • MAX_BITWIDTH_BACKWARD_COMPATIBLE


function tuple_to_underscore_str

tuple_to_underscore_str(tup: Tuple) → str

Convert a tuple to a string representation.

Args:

  • tup (Tuple): a tuple to change into string representation

Returns:

  • str: a string representing the tuple


function underscore_str_to_tuple

underscore_str_to_tuple(tup: str) → Tuple

Convert a string representation of a tuple to a tuple.

Args:

  • tup (str): a string representing the tuple

Returns:

  • Tuple: the tuple parsed from the string representation
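A tiny usage sketch, assuming the two helpers are inverses of each other, as their descriptions indicate:

from concrete.ml.torch.hybrid_model import tuple_to_underscore_str, underscore_str_to_tuple

shape = (1, 3, 224, 224)
encoded = tuple_to_underscore_str(shape)   # string encoding of the shape
decoded = underscore_str_to_tuple(encoded)
print(encoded, decoded == shape)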


function convert_conv1d_to_linear

convert_conv1d_to_linear(layer_or_module)

Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear.

Args:

  • layer_or_module (nn.Module or Conv1D): The module which will be recursively searched for Conv1D layers, or a Conv1D layer itself.

Returns:

  • nn.Module or nn.Linear: The updated module with Conv1D layers converted to Linear layers, or the Conv1D layer converted to a Linear layer.


class HybridFHEMode

Simple enum for different modes of execution of HybridModel.


class RemoteModule

A wrapper class for the modules to be evaluated remotely with FHE.

method __init__

__init__(
    module: Optional[Module] = None,
    server_remote_address: Optional[str] = None,
    module_name: Optional[str] = None,
    model_name: Optional[str] = None,
    verbose: int = 0
)

method forward

forward(x: Tensor) → Union[Tensor, QuantTensor]

Forward pass of the remote module.

To change the behavior of this forward function one must change the fhe_local_mode attribute. Choices are:

  • disable: forward using torch module

  • remote: forward with fhe client-server

  • simulate: forward with local fhe simulation

  • calibrate: forward for calibration

Args:

  • x (torch.Tensor): The input tensor.

Returns:

  • (torch.Tensor): The output tensor.

Raises:

  • ValueError: if local_fhe_mode is not supported


method init_fhe_client

init_fhe_client(
    path_to_client: Optional[Path] = None,
    path_to_keys: Optional[Path] = None
)

Set the client's keys.

Args:

  • path_to_client (str): Path where the client.zip is located.

  • path_to_keys (str): Path where keys are located.

Raises:

  • ValueError: if anything goes wrong with the server.


method remote_call

remote_call(x: Tensor) → Tensor

Call the remote server to get the private module inference.

Args:

  • x (torch.Tensor): The input tensor.

Returns:

  • torch.Tensor: The result of the FHE computation


class HybridFHEModel

Convert a model to a hybrid model.

This is done by replacing the targeted modules with RemoteModules. This will modify the model in-place.

Args:

  • model (nn.Module): The model to modify (in-place modification)

  • module_names (Union[str, List[str]]): The module name(s) to replace with FHE server.

  • server_remote_address (str): The remote address of the FHE server

  • model_name (str): Model name identifier

  • verbose (int): Whether logs should be printed when interacting with the FHE server

method __init__

__init__(
    model: Module,
    module_names: Union[str, List[str]],
    server_remote_address=None,
    model_name: str = 'model',
    verbose: int = 0
)

method compile_model

compile_model(
    x: Tensor,
    n_bits: Union[int, Dict[str, int]] = 8,
    rounding_threshold_bits: Optional[int] = None,
    p_error: Optional[float] = None,
    configuration: Optional[Configuration] = None
)

Compiles the specific layers to FHE.

Args:

  • x (torch.Tensor): The input tensor for the model. This is used to run the model once for calibration.

  • n_bits (int): The bit precision for quantization during FHE model compilation. Default is 8.

  • rounding_threshold_bits (int): The number of bits to use for the rounding threshold during FHE model compilation. Defaults to None.

  • p_error (float): Error allowed for each table look-up in the circuit.

  • configuration (Configuration): A concrete Configuration object specifying the FHE encryption parameters. If not specified, a default configuration is used.


method init_client

init_client(
    path_to_clients: Optional[Path] = None,
    path_to_keys: Optional[Path] = None
)

Initialize client for all remote modules.

Args:

  • path_to_clients (Optional[Path]): Path to the client.zip files.

  • path_to_keys (Optional[Path]): Path to the keys folder.


method publish_to_hub

publish_to_hub()

Allow the user to push the model and the required FHE files to HF Hub.


method save_and_clear_private_info

save_and_clear_private_info(path: Path, via_mlir=False)

Save the PyTorch model to the provided path, and also save the corresponding FHE circuit.

Args:

  • path (Path): The directory where the model and the FHE circuit will be saved.

  • via_mlir (bool): Whether the FHE circuits should be serialized using the via_mlir option. This is useful for cross-platform deployment (compile on one architecture and run on another).


method set_fhe_mode

set_fhe_mode(hybrid_fhe_mode: Union[str, HybridFHEMode])

Set Hybrid FHE mode for all remote modules.

Args:

  • hybrid_fhe_mode (Union[str, HybridFHEMode]): Hybrid FHE mode to set to all remote modules.
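Putting these methods together, here is a hedged client-side sketch (the module layout and the "fc1" module name are assumptions made for the example):

import torch
from pathlib import Path
from concrete.ml.torch.hybrid_model import HybridFHEModel

class TwoLayerMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 10)
        self.act = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(10, 2)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

model = TwoLayerMLP()

# Replace the "fc1" sub-module with a RemoteModule (modifies `model` in place)
hybrid_model = HybridFHEModel(model, module_names="fc1")

# Calibrate and compile the targeted layer(s) to FHE
x_calib = torch.randn(100, 10)
hybrid_model.compile_model(x_calib, n_bits=8)

# Check the behaviour locally with FHE simulation before deploying
hybrid_model.set_fhe_mode("simulate")
y = model(torch.randn(1, 10))

# Save deployment artifacts and strip the private parts of the model
hybrid_model.save_and_clear_private_info(Path("./hybrid_artifacts"))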


class LoggerStub

Placeholder type for a typical logger like the one from loguru.


method info

info(msg: str)

Placeholder function for logger.info.

Args:

  • msg (str): the message to output


class HybridFHEModelServer

Hybrid FHE Model Server.

This is a class object to serve FHE models serialized using HybridFHEModel.

method __init__

__init__(key_path: Path, model_dir: Path, logger: Optional[LoggerStub])

method add_key

add_key(key: bytes, model_name: str, module_name: str, input_shape: str)

Add public key.

Arguments:

  • key (bytes): public key

  • model_name (str): model name

  • module_name (str): name of the module in the model

  • input_shape (str): input shape of said module

Returns: Dict[str, str]: a dictionary with the key uid holding a personal uid


method check_inputs

check_inputs(
    model_name: str,
    module_name: Optional[str],
    input_shape: Optional[str]
)

Check that the given configuration exists in the compiled models folder.

Args:

  • model_name (str): name of the model

  • module_name (Optional[str]): name of the module in the model

  • input_shape (Optional[str]): input shape of the module

Raises:

  • ValueError: if the given configuration does not exist.


method compute

compute(
    model_input: bytes,
    uid: str,
    model_name: str,
    module_name: str,
    input_shape: str
)

Compute the circuit over encrypted input.

Arguments:

  • model_input (bytes): input of the circuit

  • uid (str): uid of the public key to use

  • model_name (str): model name

  • module_name (str): name of the module in the model

  • input_shape (str): input shape of said module

Returns:

  • bytes: the result of the circuit


method dump_key

dump_key(key_bytes: bytes, uid: Union[UUID, str]) → None

Dump a public key to a stream.

Args:

  • key_bytes (bytes): the serialized public key to dump

  • uid (Union[str, uuid.UUID]): uid of the public key to dump


method get_circuit

get_circuit(model_name, module_name, input_shape)

Get circuit based on model name, module name and input shape.

Args:

  • model_name (str): name of the model

  • module_name (str): name of the module in the model

  • input_shape (str): input shape of the module

Returns:

  • FHEModelServer: an FHE model server for the given module of the given model and the given input shape


method get_client

get_client(model_name: str, module_name: str, input_shape: str)

Get client.

Args:

  • model_name (str): name of the model

  • module_name (str): name of the module in the model

  • input_shape (str): input shape of the module

Returns:

  • Path: the path to the correct client

Raises:

  • ValueError: if client couldn't be found


method list_modules

list_modules(model_name: str)

List all modules in a model.

Args:

  • model_name (str): name of the model

Returns: Dict[str, Dict[str, Dict]]


method list_shapes

list_shapes(model_name: str, module_name: str)

List all input shapes available for a given module of a model.

Args:

  • model_name (str): name of the model

  • module_name (str): name of the module in the model

Returns: Dict[str, Dict]


method load_key

load_key(uid: Union[str, UUID]) → bytes

Load a public key from the key path in the file system.

Args:

  • uid (Union[str, uuid.UUID]): uid of the public key to load

Returns:

  • bytes: the bytes of the public key

Demos and tutorials

  • GPT-2 in FHE: privacy-preserving text generation based on a user's prompt

  • Titanic: train an XGB classifier that can perform encrypted prediction for the Kaggle Titanic competition

  • Federated Learning and Private Inference: use federated learning to train a Logistic Regression while preserving training data confidentiality, then import the model into Concrete ML and perform encrypted prediction

  • Neural Network Fine-tuning: fine-tune a VGG network to classify the CIFAR image data-sets and predict on encrypted data

  • Neural Network Splitting for SaaS deployment: train a VGG-like CNN that classifies CIFAR10 encrypted images, where an initial feature extractor is executed client-side

  • Encrypted Image filtering: a Hugging Face space that applies a variety of image filters to encrypted images

  • Encrypted sentiment analysis: a Hugging Face space that securely analyzes the sentiment expressed in a short text

  • Credit Scoring: predict the chance of a given loan applicant defaulting on loan repayment

  • Healthcare diagnosis: give a diagnosis using FHE to preserve the privacy of the patient