
Linear Models


Models are also compatible with some of scikit-learn's main workflows, such as Pipeline() or GridSearch().
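
For instance, a Concrete-ML estimator can be dropped into a standard scikit-learn pipeline and grid search. The snippet below is a minimal sketch (it assumes training data X and y are already available); a complete Pipeline and GridSearchCV example is given in the Tree-based Models section.

from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn import LogisticRegression

# A Concrete-ML model behaves like any scikit-learn estimator inside a Pipeline
pipeline = Pipeline(
    [("scaler", StandardScaler()), ("model", LogisticRegression(n_bits=2))]
)

# Hyper-parameters (including n_bits) can be tuned with GridSearchCV
grid = GridSearchCV(pipeline, {"model__n_bits": [2, 3]}, cv=3)
# grid.fit(X, y)  # X, y: your training data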

Example

import numpy
from tqdm import tqdm
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification
X, y = make_classification(
    n_features=2,
    n_redundant=0,
    n_informative=2,
    random_state=2,
    n_clusters_per_class=1,
    n_samples=100,
)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# Instantiate the model
model = LogisticRegression(n_bits=2)

# Fit the model
model.fit(X_train, y_train)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test)

# Compile the model
model.compile(X_train)

# Perform the inference in FHE
# Note that here the encryption and decryption are done behind the scenes.
# It is recommended to run this with a very small batch of
# examples first (e.g. N_TEST_FHE = 3)
N_TEST_FHE = 3
y_pred_fhe = numpy.array([
  model.predict([sample], execute_in_fhe=True)[0]
  for sample in tqdm(X_test[:N_TEST_FHE])
])

# Assert that FHE predictions are the same as the clear predictions
print(f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
      f"examples over {N_TEST_FHE} have a FHE inference equal to the clear inference.")

# Output:
#  3 examples over 3 have a FHE inference equal to the clear inference

Concrete-ML provides several of the most popular linear models for regression or classification that can be found in scikit-learn:

  • LinearRegression
  • LogisticRegression
  • LinearSVC
  • LinearSVR
  • PoissonRegressor
  • TweedieRegressor
  • GammaRegressor
  • Lasso
  • Ridge
  • ElasticNet

Using these models in FHE is extremely similar to what can be done with scikit-learn's API, making it easy for data scientists who are used to this framework to get started with Concrete-ML. The example above shows how to use a LogisticRegression model in FHE on a simple data-set; a more complete example can be found in the LogisticRegression notebook.

We can then plot the decision boundary of the classifier and compare those results with a scikit-learn model executed in clear. The complete code can be found in the LogisticRegression notebook.

[Figures: plaintext model decision boundaries; FHE model decision boundaries]

We can clearly observe the impact of quantization over the decision boundaries in the FHE model, separating the initial lines into broken lines with steps. However, this does not change the overall score, as both models output the same accuracy (90%).

In fact, the quantization process may sometimes create some artifacts that could lead to a decrease in performance. Still, the impact of those artifacts is often minor when considering linear models, as FHE models reach scores similar to their equivalent clear ones.

Key Concepts

Concrete-ML is built on top of Concrete-Numpy, which enables Numpy programs to be converted into FHE circuits.

Lifecycle of a Concrete-ML model

I. Model Development

  1. Training. A model is trained using plaintext, non-encrypted, training data.

  2. Inference. The compiled model can then be executed on encrypted data, once the proper keys have been generated. The model can also be deployed to a server and used to run private inference on encrypted inputs.

II. Model deployment

  1. Client/Server deployment. In a client/server setting, the model can be exported in a way that:

    • allows the client to generate keys, encrypt and decrypt.

    • provides a compiled model that can run on the server to perform inference on encrypted data

  2. Key generation. The data owner (client) needs to generate a pair of private keys (to encrypt/decrypt their data and results) and a public evaluation key (for the model's FHE evaluation on the server).

Cryptography concepts

Concrete-ML and Concrete-Numpy are tools that hide away the details of the underlying cryptography scheme, called TFHE. However, some cryptography concepts are still useful when using these two toolkits:

  1. Encryption/Decryption. These operations transform plaintext, i.e. human-readable information, into ciphertext, i.e. data that contains a form of the original plaintext that is unreadable by a human or computer without the proper key to decrypt it. Encryption takes plaintext and an encryption key and produces ciphertext, while decryption is the inverse operation.

  2. Encrypted inference. FHE allows a third party to execute (i.e. run inference or predict) a machine learning model on encrypted data (a ciphertext). The result of the inference is also encrypted and can only be read by the person who gets the decryption key.

  3. Keys. A key is a series of bits used within an encryption algorithm for encrypting data so that the corresponding ciphertext appears random.

  4. Key generation. Cryptographic keys need to be generated using random number generators. Their size may be large and key generation may take a long time. However, keys only need to be generated once for each model a client uses.

  5. Guaranteed correctness of encrypted computations. To achieve security, TFHE, the underlying encryption scheme, adds random noise to ciphertexts. This can induce errors during processing of encrypted data, depending on noise parameters. By default, Concrete-ML uses parameters that ensure the correctness of the encrypted computation, so you do not need to take into account the noise parametrization. Therefore, results on encrypted data will be the same as the results of simulation on clear data.

Model accuracy considerations under FHE constraints

To respect FHE constraints, all numerical programs over encrypted data must have all inputs, constants and intermediate values represented with integers of a maximum of 8 bits.

Quantization. The model is converted into an integer equivalent using quantization. Concrete-ML performs this step either during training (Quantization-Aware Training) or after training (Post-Training Quantization), depending on model type. Quantization converts inputs, model weights and all intermediate values of the inference computation to integers. More information is available in the quantization documentation.

Simulation using the Virtual Library. Testing FHE models on very large datasets can take a long time. Furthermore, not all models are compatible with FHE constraints out-of-the-box. Simulation using the Virtual Library allows you to execute a model that was quantized, to measure the accuracy it would have in FHE, but also to determine the modifications required to make it FHE compatible. Simulation is described in more detail in the Compilation section.

Compilation. Once the model is quantized, simulation can confirm it has good accuracy in FHE. The model then needs to be compiled using Concrete's FHE compiler to produce an equivalent FHE circuit. This circuit is represented as an MLIR program consisting of low level cryptographic operations. More details about FHE compilation, MLIR and the low-level Concrete library can be found in the Concrete documentation.

You can see some examples of the model development workflow in the Built-in Model Examples and Deep Learning Examples sections.

An example of the model deployment workflow is described in the Inference in the Cloud section.

While Concrete-ML users only need to understand the cryptography concepts above, for a deeper understanding of the cryptography behind the Concrete stack, please see the whitepaper on TFHE and Programmable Bootstrapping or this series of blogs.

Thus, Concrete-ML quantizes the input data and model outputs in the same way as weights and activations. The main levers to control accumulator bit-width are the numbers of bits used for the inputs, weights and activations of the model. These parameters are crucial to comply with the constraint on accumulator bit-widths. Please refer to the quantization documentation for more details about how to develop models with quantization in Concrete-ML.

However, these methods may cause a reduction in the accuracy of the model since its representative power is diminished. Most importantly, carefully choosing a quantization approach can alleviate accuracy loss, all the while allowing compilation to FHE. Concrete-ML offers built-in models that already include quantization algorithms, and users only need to configure some of their parameters, such as the number of bits, discussed above. See the quantization documentation for information about configuring these parameters for various models.

Additional specific methods can help to make models compatible with FHE constraints. For instance, dimensionality reduction can reduce the number of input features and, thus, the maximum accumulator bit-width reached within a circuit. Similarly, sparsity-inducing training methods, such as pruning, de-activate some features during inference, which also helps. For now, dimensionality reduction is considered as a pre-processing step, while pruning is used in the built-in neural networks.

The configuration of model quantization parameters is illustrated in the advanced examples for Linear and Logistic Regressions, and dimensionality reduction is shown in the Poisson regression example.


Installation

Please note that not all hardware/OS combinations are supported. Determine your platform, OS version and Python version before referencing the table below.

Depending on your OS, Concrete-ML may be installed with Docker or with pip:

| OS / HW | Available on Docker | Available on pip |
| --- | --- | --- |
| Linux | Yes | Yes |
| Windows | Yes | Not currently |
| Windows Subsystem for Linux | Yes | Yes |
| macOS (Intel) | Yes | Yes |
| macOS (Apple Silicon, i.e. M1, M2, etc.) | Yes | Not currently |

Most of these limits are shared with the rest of the Concrete stack (namely Concrete-Numpy and Concrete-Compiler). Support for more platforms will be added in the future.

Using PyPi

Requirements

Installing on Windows can be done using Docker or WSL. On WSL, Concrete-ML will work as long as the package is not installed in the /mnt/c/ directory, which corresponds to the host OS filesystem.

Installation

To install Concrete-ML from PyPi, run the following:

pip install -U pip wheel setuptools
pip install concrete-ml

This will automatically install all dependencies, notably Concrete-Numpy.

Using Docker

Concrete-ML can be installed using Docker by either pulling the latest image or a specific version:

docker pull zamafhe/concrete-ml:latest
# or
docker pull zamafhe/concrete-ml:v0.4.0

The image can then be used via the following command:

# Without local volume:
docker run --rm -it -p 8888:8888 zamafhe/concrete-ml

# With local volume to save notebooks on host:
docker run --rm -it -p 8888:8888 -v /host/path:/data zamafhe/concrete-ml

This will launch a Concrete-ML enabled Jupyter server in Docker that can be accessed directly from a browser.

Alternatively, a shell can be launched in Docker, with or without volumes:

docker run --rm -it zamafhe/concrete-ml /bin/bash

Pandas

Concrete-ML provides partial support for Pandas, with most available models (linear and tree-based models) usable on Pandas dataframes the same way they would be used with NumPy arrays.

The table below summarizes the current compatibility:

| Methods | Support Pandas dataframe |
| --- | --- |
| fit | ✓ |
| compile | ✗ |
| predict (execute_in_fhe=False) | ✓ |
| predict (execute_in_fhe=True) | ✓ |

Example

import numpy as np
import pandas as pd
from concrete.ml.sklearn import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Create the data set as a Pandas dataframe
X, y = make_classification(
    n_samples=100,
    n_features=2,
    n_redundant=0,
    random_state=2,
)
X, y = pd.DataFrame(X), pd.DataFrame(y)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)

# Instantiate the model
model = LogisticRegression(n_bits=2)

# Fit the model
model.fit(X_train, y_train)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test)

# Compile the model
model.compile(X_train.to_numpy())

# Perform the inference in FHE
# Warning: this will take a while. It is recommended to run this with a very small batch of
# examples first (e.g. N_TEST_FHE = 1)
# Note that here the encryption and decryption is done behind the scenes.
N_TEST_FHE = 1
y_pred_fhe = model.predict(X_test.head(N_TEST_FHE), execute_in_fhe=True)

# Assert that FHE predictions are the same as the clear predictions
print(f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
      f"examples over {N_TEST_FHE} have a FHE inference equal to the clear inference.")

# Output:
#  1 examples over 1 have a FHE inference equal to the clear inference

Using ONNX

ONNX models can be compiled by directly importing models that are already quantized with Quantization Aware Training (QAT), or by performing Post-Training Quantization (PTQ) with Concrete-ML.

Simple example

The following example shows how to compile an ONNX model using PTQ. The model was initially trained using Keras before being exported to ONNX. The training code is not shown here.

import numpy
import onnx
import tensorflow
import tf2onnx

from concrete.ml.torch.compile import compile_onnx_model
from concrete.numpy.compilation import Configuration


class FC(tensorflow.keras.Model):
    """A fully-connected model."""

    def __init__(self):
        super().__init__()
        hidden_layer_size = 10
        output_size = 5

        self.dense1 = tensorflow.keras.layers.Dense(
            hidden_layer_size,
            activation=tensorflow.nn.relu,
        )
        self.dense2 = tensorflow.keras.layers.Dense(output_size, activation=tensorflow.nn.relu6)
        self.flatten = tensorflow.keras.layers.Flatten()

    def call(self, inputs):
        """Forward function."""
        x = self.flatten(inputs)
        x = self.dense1(x)
        x = self.dense2(x)
        return self.flatten(x)


n_bits = 6
input_output_feature = 2
input_shape = (input_output_feature,)
num_inputs = 1
n_examples = 5000

# Define the Keras model
keras_model = FC()
keras_model.build((None,) + input_shape)
keras_model.compute_output_shape(input_shape=(None, input_output_feature))

# Create random input
input_set = numpy.random.uniform(-100, 100, size=(n_examples, *input_shape))

# Convert to ONNX
tf2onnx.convert.from_keras(keras_model, opset=14, output_path="tmp.model.onnx")

onnx_model = onnx.load("tmp.model.onnx")
onnx.checker.check_model(onnx_model)

# Compile
quantized_numpy_module = compile_onnx_model(
    onnx_model, input_set, n_bits=2
)

# Create test data from the same distribution and quantize using
# learned quantization parameters during compilation
x_test = tuple(numpy.random.uniform(-100, 100, size=(1, *input_shape)) for _ in range(num_inputs))
qtest = quantized_numpy_module.quantize_input(x_test)

y_clear = quantized_numpy_module(*qtest)
y_fhe = quantized_numpy_module.forward_fhe.encrypt_run_decrypt(*qtest)

print("Execution in clear: ", y_clear)
print("Execution in FHE:   ", y_fhe)
print("Equality:           ", numpy.sum(y_clear == y_fhe), "over", numpy.size(y_fhe), "values")

While Keras was used in this example, it is not officially supported as additional work is needed to test all of Keras' types of layer and models.

Quantization Aware Training

QAT models contain quantizers in the ONNX graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized. Since these QAT models have quantizers that are configured during training to a specific number of bits, the ONNX graph will need to be imported using the same settings:

n_bits_qat = 3  # number of bits for weights and activations during training

quantized_numpy_module = compile_onnx_model(
    onnx_model,
    input_set,
    import_qat=True,
    n_bits=n_bits_qat,
)

Supported operators

The following operators are supported for evaluation and conversion to an equivalent FHE circuit. Other operators were not implemented either due to FHE constraints, or because they are rarely used in PyTorch activations or scikit-learn models.

  • Abs

  • Acos

  • Acosh

  • Add

  • Asin

  • Asinh

  • Atan

  • Atanh

  • AveragePool

  • BatchNormalization

  • Cast

  • Celu

  • Clip

  • Constant

  • Conv

  • Cos

  • Cosh

  • Div

  • Elu

  • Equal

  • Erf

  • Exp

  • Flatten

  • Gemm

  • Greater

  • GreaterOrEqual

  • HardSigmoid

  • HardSwish

  • Identity

  • LeakyRelu

  • Less

  • LessOrEqual

  • Log

  • MatMul

  • Mul

  • Not

  • Or

  • PRelu

  • Pad

  • Pow

  • ReduceSum

  • Relu

  • Reshape

  • Round

  • Selu

  • Sigmoid

  • Sin

  • Sinh

  • Softplus

  • Sub

  • Tan

  • Tanh

  • ThresholdedRelu

  • Transpose

  • Where

  • onnx.brevitas.Quant

Built-in Model Examples

The following table summarizes the various examples in this section, along with their accuracies.

| Model | Data-set | Metric | Floating Point | Simulation | FHE |
| --- | --- | --- | --- | --- | --- |
| Linear Regression | Synthetic 1D | R2 | 0.876 | 0.863 | 0.863 |
| Logistic Regression | Synthetic 2D with 2 classes | accuracy | 0.90 | 0.875 | 0.875 |
| Poisson Regression | OpenML insurance (freq) | mean Poisson deviance | 0.61 | 0.60 | 0.60 |
| Gamma Regression | OpenML insurance (sev) | mean Gamma deviance | 0.45 | 0.45 | 0.45 |
| Tweedie Regression | OpenML insurance (sev) | mean Tweedie deviance (power=1.9) | 33.42 | 34.18 | 34.18 |
| Decision Tree | OpenML spams | precision score | 0.95 | 0.97 | 0.97* |
| XGBoost Classifier | Diabetes | MCC | 0.48 | 0.52 | 0.52* |
| XGBoost Regressor | House Prices | R2 | 0.92 | 0.90 | 0.90* |

A * means that FHE accuracy was calculated on a subset of the validation set.

Concrete-ML models

  • LinearRegression.ipynb
  • LogisticRegression.ipynb
  • PoissonRegression.ipynb
  • DecisionTreeClassifier.ipynb
  • XGBClassifier.ipynb
  • GLMComparison.ipynb
  • XGBRegressor.ipynb

Comparison of classifiers

  • ClassifierComparison.ipynb

Kaggle competition

  • KaggleTitanic.ipynb

Using Torch

The following example uses a simple QAT PyTorch model that implements a fully connected neural network with two hidden layers. Due to its small size, making this model respect FHE constraints is relatively easy.

import brevitas.nn as qnn
import torch.nn as nn
import torch

N_FEAT = 12
n_bits = 3

class QATSimpleNet(nn.Module):
    def __init__(self, n_hidden):
        super().__init__()

        self.quant_inp = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(N_FEAT, n_hidden, True, weight_bit_width=n_bits, bias_quant=None)
        self.quant2 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, True, weight_bit_width=n_bits, bias_quant=None)
        self.quant3 = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc3 = qnn.QuantLinear(n_hidden, 2, True, weight_bit_width=n_bits, bias_quant=None)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.quant2(torch.relu(self.fc1(x)))
        x = self.quant3(torch.relu(self.fc2(x)))
        x = self.fc3(x)
        return x

from concrete.ml.torch.compile import compile_brevitas_qat_model
import numpy

torch_input = torch.randn(100, N_FEAT)
torch_model = QATSimpleNet(30)
quantized_numpy_module = compile_brevitas_qat_model(
    torch_model, # our model
    torch_input, # a representative input-set to be used for both quantization and compilation
    n_bits = n_bits,
)

The model can now be used to perform encrypted inference. Next, the test data is quantized:

x_test = numpy.array([numpy.random.randn(N_FEAT)])
x_test_quantized = quantized_numpy_module.quantize_input(x_test)

and the encrypted inference can be run using either of the following (a short sketch is given after this list):

  • quantized_numpy_module.forward_and_dequant() to compute predictions in the clear, on quantized data and then de-quantize the result. The return value of this function contains the dequantized (float) output of running the model in the clear. Calling the forward function on the clear data is useful when debugging. The results in FHE will be the same as those on clear quantized data.

  • quantized_numpy_module.forward_fhe.encrypt_run_decrypt() to perform the FHE inference. In this case, dequantization is done in a second stage using quantized_numpy_module.dequantize_output().
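
A minimal sketch of both options, reusing the quantized_numpy_module and x_test_quantized objects defined above:

# Option 1: run the model in the clear on quantized data and de-quantize the result.
# Useful for debugging; results match those obtained in FHE.
y_clear = quantized_numpy_module.forward_and_dequant(x_test_quantized)

# Option 2: run the actual FHE inference, then de-quantize in a second stage.
y_fhe_quantized = quantized_numpy_module.forward_fhe.encrypt_run_decrypt(x_test_quantized)
y_fhe = quantized_numpy_module.dequantize_output(y_fhe_quantized)

print("Clear (quantized) inference:", y_clear)
print("FHE inference:              ", y_fhe)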

Generic Quantization Aware Training import

While the example above shows how to import a Brevitas/PyTorch model, Concrete-ML also provides an option to import generic QAT models implemented either in PyTorch or through ONNX. Interestingly, deep learning models made with TensorFlow or Keras should be usable, by first converting them to ONNX.

QAT models contain quantizers in the PyTorch graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized.

from concrete.ml.torch.compile import compile_torch_model
n_bits_qat = 3

quantized_numpy_module = compile_torch_model(
    torch_model,
    torch_input,
    import_qat=True,
    n_bits=n_bits_qat,
)

When importing QAT models using this generic pipeline, a representative calibration set should be given as quantization parameters in the model need to be inferred from the statistics of the values encountered during inference.

Supported operators and activations

Concrete-ML supports a variety of PyTorch operators that can be used to build fully connected or convolutional neural networks, with normalization and activation layers. Moreover, many element-wise operators are supported.

Operators

Univariate operators:

  • torch.abs
  • torch.clip
  • torch.exp
  • torch.log
  • torch.gt
  • torch.clamp
  • torch.mul, torch.Tensor operator *
  • torch.div, torch.Tensor operator /
  • torch.nn.identity

Shape modifying operators:

  • torch.reshape
  • torch.Tensor.view
  • torch.flatten
  • torch.transpose

Operators that take an encrypted input and unencrypted constants:

  • torch.conv2d, torch.nn.Conv2D
  • torch.matmul
  • torch.nn.Linear

Please note that Concrete-ML supports these operators but also the Quantization Aware Training equivalents from Brevitas:

  • brevitas.nn.QuantLinear
  • brevitas.nn.QuantConv2d

Operators that can take both encrypted+unencrypted and encrypted+encrypted inputs:

  • torch.add, torch.Tensor operator +
  • torch.sub, torch.Tensor operator -

Quantizers

  • brevitas.nn.QuantIdentity

Activations

  • torch.nn.Celu
  • torch.nn.Elu
  • torch.nn.GELU
  • torch.nn.Hardshrink
  • torch.nn.HardSigmoid
  • torch.nn.Hardswish
  • torch.nn.HardTanh
  • torch.nn.LeakyRelu
  • torch.nn.LogSigmoid
  • torch.nn.Mish
  • torch.nn.PReLU
  • torch.nn.ReLU6
  • torch.nn.ReLU
  • torch.nn.Selu
  • torch.nn.Sigmoid
  • torch.nn.SiLU
  • torch.nn.Softplus
  • torch.nn.Softshrink
  • torch.nn.Softsign
  • torch.nn.Tanh
  • torch.nn.Tanhshrink
  • torch.nn.Threshold -- partial support

Note that the equivalent versions from torch.functional are also supported.

Also, only some versions of Python are supported: in the current release, these are 3.8 and 3.9. Please note that, at the time of this Concrete-ML version release, Kaggle and Google Colab use Python 3.7, which is a deprecated version and is not supported by Concrete-ML.

Installing Concrete-ML using PyPi requires a Linux-based OS or macOS running on an x86 CPU. For Apple Silicon, Docker is the only currently supported option (see the Docker section above).

The Docker image can be used with Docker volumes; see the Docker documentation for details.

The Pandas example above uses a LogisticRegression model on a simple classification problem. A more advanced example can be found in the KaggleTitanic notebook.

In addition to Concrete-ML models and to custom models in torch, it is also possible to directly compile ONNX models. This can be particularly appealing, notably to import models trained with Keras.

The ONNX example above uses Post-Training Quantization, i.e. the quantization is not performed during training. Thus, this model would not have good performance in FHE. Quantization Aware Training should be added by the model developer, and importing QAT ONNX models can be done as shown in the Quantization Aware Training section above.

In addition to the built-in models, Concrete-ML supports generic machine learning models implemented with Torch, or exported as ONNX graphs.

As Quantization Aware Training (QAT) is the most appropriate method of training neural networks that are compatible with FHE constraints, Concrete-ML works with Brevitas, a library providing QAT support for PyTorch.

Once the model is trained, calling compile_brevitas_qat_model from Concrete-ML will automatically perform conversion and compilation of a QAT network. Here, 3-bit quantization is used for both the weights and activations.

Suppose that n_bits_qat is the bit-width of activations and weights during the QAT process. To import a PyTorch QAT network, you can use the compile_torch_model library function, passing import_qat=True (see the example above).

Alternatively, if you want to import an ONNX model directly, please see the Using ONNX section. The compile_onnx_model function also supports the import_qat parameter.


Pruning

Overview of pruning in Concrete-ML

Pruning is used in Concrete-ML for two types of neural networks: the built-in neural networks and custom neural networks that need to meet FHE constraints.

Basics of pruning

In neural networks, a neuron computes a linear combination of inputs and learned weights, then applies an activation function.

The neuron computes:

$$y_k = \phi\left(\sum_i w_i x_i\right)$$

When building a full neural network, each layer will contain multiple neurons, which are connected to the neuron outputs of a previous layer or to the inputs.

For every neuron in each layer, the linear combinations of inputs and learned weights are computed. Depending on the values of the inputs and weights, the sum $v_k = \sum_i w_i x_i$ - which for Concrete-ML neural networks is computed with integers - can take a range of different values.

Pruning a neural network entails fixing some of the weights $w_k$ to be zero during training. This is advantageous to meet FHE constraints, as irrespective of the distribution of $x_i$, multiplying these input values by 0 does not increase the accumulator value.

Fixing some of the weights to 0 makes the network graph sparser, with fewer active connections per neuron.

Pruning in practice

In the formula above, in the worst case, the maximum number of inputs and weights that can be summed without the result exceeding $n_{\mathsf{max}}$ bits is given by:

$$\Omega = \mathsf{floor}\left(\frac{2^{n_{\mathsf{max}}} - 1}{(2^{n_{\mathsf{weights}}} - 1)(2^{n_{\mathsf{inputs}}} - 1)}\right)$$

Here, $n_{\mathsf{max}} = 8$ is the maximum precision allowed.

For example, if $n_{\mathsf{weights}} = 2$ and $n_{\mathsf{inputs}} = 2$ with $n_{\mathsf{max}} = 8$, the worst case is where all inputs and weights are equal to their maximal value $2^2 - 1 = 3$. In this case, there can be at most $\Omega = 28$ elements in the multi-sums.
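
As a quick sanity check, this worst-case bound can be computed directly; the following minimal sketch is independent of Concrete-ML:

import math

def max_multi_sum_size(n_max: int, n_weights: int, n_inputs: int) -> int:
    """Worst-case number of terms that keeps an accumulator within n_max bits."""
    return math.floor((2**n_max - 1) / ((2**n_weights - 1) * (2**n_inputs - 1)))

# 2-bit weights and 2-bit inputs with an 8-bit accumulator: at most 28 terms
print(max_multi_sum_size(n_max=8, n_weights=2, n_inputs=2))  # 28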

In practice, the distribution of the weights of a neural network is Gaussian, with many weights either 0 or having a small value. This enables exceeding the worst-case number of active neurons without having to risk overflowing the bit-width. In built-in neural networks, the parameter n_hidden_neurons_multiplier is multiplied with $\Omega$ to determine the total number of non-zero weights that should be kept in a neuron.

Quantization

Quantization is the process of constraining an input from a continuous or otherwise large set of values (such as real numbers) to a discrete set (such as integers).

This means that some accuracy in the representation is lost (e.g. a simple approach is to eliminate least-significant bits). However, in many cases in machine learning, it is possible to adapt the models to give meaningful results while using these smaller data types. This significantly reduces the number of bits necessary for intermediary results during the execution of these machine learning models.

Since FHE is currently limited to 8-bit integers, it is necessary to quantize models to make them compatible. As a general rule, the lower the precision of the model, the better the FHE performance.

Overview of quantization in Concrete-ML

Quantization implemented in Concrete-ML is applied in two ways:

  1. Built-in models apply quantization internally and the user only needs to configure some quantization parameters. This approach requires little work by the user but may not be a one-size-fits-all solution for all types of models. The final quantized model is FHE friendly and ready to predict over encrypted data. In this setting, Post-Training Quantization (PTQ) is for linear models, data quantization is used for tree-based models and, finally, Quantization Aware Training (QAT) is included in the built-in neural network models.

While Concrete-ML quantizes machine learning models, the data the client has is often in floating point. The Concrete-ML models provide APIs to quantize inputs and de-quantize outputs.

Please note that the floating point input is quantized in the clear, i.e. it is converted to integers before being encrypted. Moreover, the model's output are also integers and are decrypted before de-quantization.

Basics of quantization

Let $[\alpha, \beta]$ be the range of a value to quantize, where $\alpha$ is the minimum and $\beta$ is the maximum. To quantize a range of floating point values (in $\mathbb{R}$) to integer values (in $\mathbb{Z}$), the first step is to choose the data type that is going to be used. Concrete, the framework used by Concrete-ML, is currently limited to 8-bit integers, so this will be the value used in this example. Knowing the number of bits that can be used for a value in the range $[\alpha, \beta]$, the scale $S$ can be computed:

$$S = \frac{\beta - \alpha}{2^n - 1}$$

where $n$ is the number of bits ($n \leq 8$). For the sake of example, let's take $n = 7$.

In practice, the quantization scale is then $S = \frac{\beta - \alpha}{127}$. This means the gap between consecutive representable values cannot be smaller than $S$, which, in turn, means there can be a substantial loss of precision. Every interval of length $S$ will be represented by a value within the range $[0..127]$.

The other important parameter from this quantization schema is the zero point $Z_p$ value. This essentially brings the 0 floating point value to a specific integer. If the quantization scheme is asymmetric (quantized values are not centered in 0), the resulting integer will be in $\mathbb{Z}$.

$$Z_p = \mathtt{round}\left(-\frac{\alpha}{S}\right)$$
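
To make these formulas concrete, here is a minimal sketch using plain NumPy (not the Concrete-ML quantizer classes) that computes the scale and zero point for the 7-bit example and quantizes a few values. The final clipping step is an extra implementation detail to keep results within the representable range:

import numpy as np

def quantization_params(alpha: float, beta: float, n_bits: int):
    """Scale S and zero point Z_p for quantizing the range [alpha, beta] to n_bits."""
    scale = (beta - alpha) / (2**n_bits - 1)
    zero_point = round(-alpha / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, n_bits: int):
    """Map float values to integers in [0, 2**n_bits - 1] using the formulas above."""
    quantized = np.round(values / scale).astype(np.int64) + zero_point
    return np.clip(quantized, 0, 2**n_bits - 1)

alpha, beta, n = -1.0, 1.0, 7
scale, zero_point = quantization_params(alpha, beta, n)
print(f"S = {scale:.5f}, Z_p = {zero_point}")
print(quantize(np.array([-1.0, -0.5, 0.0, 0.5, 1.0]), scale, zero_point, n))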

Configuring model quantization parameters

Built-in models provide a simple interface for configuring quantization parameters, most notably the number of bits used for inputs, model weights, intermediary and output values.

For linear models, n_bits is used to quantize both model inputs and weights. Depending on the number of features, you can use a single integer value for the n_bits parameter, e.g. a value between 2 and 7. When the number of features is high, the n_bits parameter should be decreased if you encounter compilation errors. It is also possible to quantize inputs and weights with different numbers of bits by passing a dictionary to n_bits, containing the op_inputs and op_weights keys.
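
For example, the following sketch (bit-width values are purely illustrative) quantizes inputs and weights with different bit-widths:

from concrete.ml.sklearn import LogisticRegression

# Inputs quantized to 3 bits, weights to 2 bits (illustrative values)
model = LogisticRegression(n_bits={"op_inputs": 3, "op_weights": 2})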

Tree-based models can directly control the accumulator bit-width used. However, if 6 or 7 bits are not sufficient to obtain good accuracy on your data-set, one option is to use an ensemble model (RandomForest or XGBoost) and increase the number of trees in the ensemble. This, however, will have a detrimental impact on FHE execution speed.

Note that for the built-in linear models and neural networks, the maximum accumulator bit-width cannot be precisely controlled. Using many input features and a high number of bits is beneficial for model accuracy, but it can conflict with the 8-bit accumulator constraint. Finding the best quantization parameters to maximize accuracy can only be done through experimentation.
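
A simple way to run such an experiment is to sweep over n_bits and compare the accuracy of the quantized models on clear data. The sketch below assumes X_train, y_train, X_test and y_test are already available:

from concrete.ml.sklearn import LogisticRegression

for n_bits in range(2, 8):
    model = LogisticRegression(n_bits=n_bits)
    model.fit(X_train, y_train)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"n_bits={n_bits}: clear test accuracy = {accuracy:.3f}")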

Quantizing model inputs and outputs

The models implemented in Concrete-ML provide features to let the user quantize the input data and de-quantize the output data.

Here is a simple example showing how to perform inference, starting from float values and ending up with float values. Note that the FHE engine that is compiled for the ML models does not support data batching.

import numpy as np

# Assume quantized_module : QuantizedModule
#        data: numpy.ndarray of float

# Quantization is done in the clear
x_test_q = quantized_module.quantize_input(data)

for i in range(x_test_q.shape[0]):
    # Inputs must have size (1 x N) or (1 x C x H x W), we add the batch dimension with N=1
    x_q = np.expand_dims(x_test_q[i, :], 0)

    # Execute the model in FHE
    out_fhe = quantized_module.forward_fhe.encrypt_run_decrypt(x_q)

    # Dequantization is done in the clear
    output = quantized_module.dequantize_output(out_fhe)

    # For classifiers with multi-class outputs, the arg max is done in the clear
    y_pred = np.argmax(output, 1)


What is Concrete ML?

Example usage

import numpy
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

# Let's create a synthetic data-set
x, y = make_classification(n_samples=100,
    class_sep=2, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=0.2, random_state=42
)

# Now we train in plaintext using quantization
model = LogisticRegression(n_bits=2)
model.fit(X_train, y_train)

y_pred_clear = model.predict(X_test)

# Finally we compile and run inference on encrypted inputs!
model.compile(x)
y_pred_fhe = model.predict(X_test, execute_in_fhe=True)

print("In clear  :", y_pred_clear)
print("In FHE    :", y_pred_fhe)
print("Comparison:", (y_pred_fhe == y_pred_clear))

# Output:
#   In clear  : [0 1 0 1 0 1 0 1 1 1 0 1 1 0 1 0 0 1 1 1]
#   In FHE    : [0 1 0 1 0 1 0 1 1 1 0 1 1 0 1 0 0 1 1 1]
#   Comparison: [ True  True  True  True  True  True  True  True  True  True  True  True
#   True  True  True  True  True  True  True  True]

This example shows the typical flow of a Concrete-ML model:

  • The model is trained on unencrypted (plaintext) data using scikit-learn. As FHE operates over integers, Concrete-ML quantizes the model to use only integers during inference.

  • The quantized model is compiled to a FHE equivalent. Under the hood, the model is first converted to a Concrete-Numpy program, then compiled.

Current limitations

To make a model work with FHE, the only constraint is to make it run within the supported precision limitations of Concrete-ML (currently 8-bit integers). Thus, machine learning models are required to be quantized, which sometimes leads to a loss of accuracy versus the original model operating on plaintext.

Additionally, Concrete-ML currently only supports FHE inference; training has to be done on unencrypted data, producing a model which is then converted to a FHE equivalent that can perform encrypted inference, i.e. prediction over encrypted data.

Finally, in Concrete-ML there is currently no support for pre-processing model inputs and for post-processing model outputs. These processing stages may involve text to numerical feature transformation, dimensionality reduction, KNN or clustering, featurization, normalization, and the mixing of results of ensemble models.

All of these issues are currently being addressed and significant improvements are expected to be released in the coming months.

Concrete Stack

Online demos and tutorials.

More generally, if you have built awesome projects using Concrete-ML, feel free to let us know and we'll link to it!

Additional resources

Looking for support? Ask our team!

Compilation

Compilation of a model produces machine code that executes the model on encrypted data. In some cases, notably in the client/server setting, the compilation can be done by the server when loading the model for serving.

As FHE execution is much slower than execution on non-encrypted data, Concrete-ML has a simulation mode, using an execution mode named the Virtual Library. Since, by default, the cryptographic parameters are chosen such that the results obtained in FHE are the same as those on clear data, the Virtual Library allows you to benchmark models quickly during development.

Compilation

From the perspective of the Concrete-ML user, the compilation process performed by Concrete-Numpy can be broken up into 3 steps:

  1. Numpy program tracing and creation of a Concrete-Numpy op-graph

  2. checking that the op-graph is FHE compatible

  3. producing machine code for the op-graph. This step automatically determines cryptographic parameters

Simulation with the Virtual Library

The result of the first step of this compilation pipeline, on its own, allows the:

  • execution of the op-graph, which includes TLUs, on clear non-encrypted data. This is, of course, not secure, but it is much faster than executing in FHE. This mode is useful for debugging, i.e. to find the appropriate hyper-parameters. This mode is called the Virtual Library.

  • verification of the maximum bit-width of the op-graph, to determine FHE compatibility, without actually compiling the circuit to machine code.

Enabling Virtual Library execution requires the definition of a compilation Configuration. As simulation does not execute in FHE, this can be considered unsafe:

    from concrete.numpy.compilation import Configuration

    COMPIL_CONFIG_VL = Configuration(
        dump_artifacts_on_unexpected_failures=False,
        enable_unsafe_features=True,  # This is for Virtual Library tests only
    )

Next, the following code uses the simulation mode for built-in models:

    clf.compile(
        X_train,
        use_virtual_lib=True,
        configuration=COMPIL_CONFIG_VL,
    )

And finally, for custom models, it is possible to enable simulation using the following syntax:

    quantized_numpy_module = compile_torch_model(
        torch_model,  # our model
        X_train,  # a representative input-set to be used for both quantization and compilation
        n_bits={"net_inputs": 5, "op_inputs": 3, "op_weights": 3, "net_outputs": 5},
        import_qat=is_qat,  # signal to the conversion function whether the network is QAT
        use_virtual_lib=True,
        configuration=COMPIL_CONFIG_VL,
    )

Obtaining the simulated predictions of the models using the Virtual Library has the same syntax as execution in FHE:

    Z = clf.predict_proba(X, execute_in_fhe=True)

Moreover, the maximum accumulator bit-width is determined as follows:

    bit_width = clf.quantized_module_.forward_fhe.graph.maximum_integer_bit_width()

A simple Concrete-Numpy example

import numpy
from concrete.numpy.compilation import compiler

# Let's assume Quantization has been applied and we are left with integers only.
# This is essentially the work of Concrete-ML

# Some parameters (weight and bias) for our model taking a single feature
w = [2]
b = 2

# The function that implements our model
@compiler({"x": "encrypted"})
def linear_model(x):
    return w @ x + b

# A representative input-set is needed to compile the function
# (used for tracing)
n_bits_input = 2
inputset = numpy.arange(0, 2**n_bits_input).reshape(-1, 1)
circuit = linear_model.compile(inputset)

# Use the API to get the maximum bit-width in the circuit
max_bit_width = circuit.graph.maximum_integer_bit_width()
print("Max bit_width = ", max_bit_width)
# Max bit_width =  4

# Test our FHE inference
circuit.encrypt_run_decrypt(numpy.array([3]))
# 8

# Print the graph of the circuit
print(circuit)
# %0 = 2                     # ClearScalar<uint2>
# %1 = [2]                   # ClearTensor<uint2, shape=(1,)>
# %2 = x                     # EncryptedTensor<uint2, shape=(1,)>
# %3 = matmul(%1, %2)        # EncryptedScalar<uint3>
# %4 = add(%3, %0)           # EncryptedScalar<uint4>
# return %4

Debugging Models

This section provides a set of tools and guidelines to help users build optimized FHE-compatible models.

Virtual library

The Virtual Lib in Concrete-ML is a prototype that provides drop-in replacements for Concrete-Numpy's compiler, allowing users to simulate what would happen when converting a model to FHE without the current bit-width constraint. Additionally, it quickly simulates the behavior with 8 bits or less without actually doing the FHE computations.

The Virtual Lib can be useful when developing and iterating on an ML model implementation. For example, you can check that your model is compatible in terms of operands (all integers) with the Virtual Lib compilation. Then, you can check how many bits your ML model would require, which can give you hints as to how it should be modified if you want to compile it to an actual FHE Circuit (not a simulated one) that only supports 8 bits of integer precision.

The following example shows how to use the Virtual Lib in Concrete-ML. Simply add use_virtual_lib = True and enable_unsafe_features = True in a Configuration. The result of the compilation will then be a simulated circuit that allows for more precision or simulated FHE execution.

from sklearn.datasets import fetch_openml, make_circles
from concrete.ml.sklearn import RandomForestClassifier
from concrete.numpy import Configuration
debug_config = Configuration(
    enable_unsafe_features=True,
    use_insecure_key_cache=True,
    insecure_key_cache_location="~/.cml_keycache",
)

n_bits = 2
X, y = make_circles(n_samples=1000, noise=0.1, factor=0.6, random_state=0)
concrete_clf = RandomForestClassifier(
    n_bits=n_bits, n_estimators=10, max_depth=5
)
concrete_clf.fit(X, y)

concrete_clf.compile(X, debug_config, use_virtual_lib=True)

y_preds_clear = concrete_clf.predict(X)

Compilation debugging

The following example produces a neural network that is not FHE-compatible:

import numpy
import torch

from torch import nn
from concrete.ml.torch.compile import compile_torch_model

N_FEAT = 2
class SimpleNet(nn.Module):
    """Simple MLP with PyTorch"""

    def __init__(self, n_hidden=30):
        super().__init__()
        self.fc1 = nn.Linear(in_features=N_FEAT, out_features=n_hidden)
        self.fc2 = nn.Linear(in_features=n_hidden, out_features=n_hidden)
        self.fc3 = nn.Linear(in_features=n_hidden, out_features=2)


    def forward(self, x):
        """Forward pass."""
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x


torch_input = torch.randn(100, N_FEAT)
torch_model = SimpleNet(120)
try:
    quantized_numpy_module = compile_torch_model(
        torch_model,
        torch_input,
        n_bits = 3,
    )
except RuntimeError as err:
    print(err)

Upon execution, the compiler will raise the following error:

%0 = [[-1 -3] [ ... ] [-2  2]]        # ClearTensor<int3, shape=(120, 2)>
 %1 = [[ 1  3 -2 ...  1  2  0]]        # ClearTensor<int3, shape=(120, 120)>
 %2 = [[ 2  0  3 ... -2 -2 -1]]        # ClearTensor<int3, shape=(2, 120)>
 %3 = _onnx__Gemm_0                    # EncryptedTensor<uint5, shape=(1, 2)>
 %4 = -15                              # ClearScalar<int5>
 %5 = add(%3, %4)                      # EncryptedTensor<int6, shape=(1, 2)>
 %6 = subgraph(%5)                     # EncryptedTensor<int3, shape=(1, 2)>
 %7 = matmul(%6, %2)                   # EncryptedTensor<int6, shape=(1, 120)>
 %8 = subgraph(%7)                     # EncryptedTensor<uint3, shape=(1, 120)>
 %9 = matmul(%8, %1)                   # EncryptedTensor<int9, shape=(1, 120)>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only up to 8-bit integers are supported
%10 = subgraph(%9)                     # EncryptedTensor<uint3, shape=(1, 120)>
%11 = matmul(%10, %0)                  # EncryptedTensor<int8, shape=(1, 2)>
%12 = subgraph(%11)                    # EncryptedTensor<uint5, shape=(1, 2)>
return %12

Knowing that a linear/dense layer is implemented as a matrix multiplication, one can determine which parts of the op-graph listing in the exception message above correspond to which layers.

Layer weights initialization:

%0 = [[-1 -3] [ ... ] [-2  2]]        # ClearTensor<int3, shape=(120, 2)>
 %1 = [[ 1  3 -2 ...  1  2  0]]        # ClearTensor<int3, shape=(120, 120)>
 %2 = [[ 2  0  3 ... -2 -2 -1]]        # ClearTensor<int3, shape=(2, 120)>

Input processing and quantization:

 %3 = _onnx__Gemm_0                    # EncryptedTensor<uint5, shape=(1, 2)>
 %4 = -15                              # ClearScalar<int5>
 %5 = add(%3, %4)                      # EncryptedTensor<int6, shape=(1, 2)>
 %6 = subgraph(%5)                     # EncryptedTensor<int3, shape=(1, 2)>

First dense layer and activation function:

%7 = matmul(%6, %2)                   # EncryptedTensor<int6, shape=(1, 120)>
%8 = subgraph(%7)                     # EncryptedTensor<uint3, shape=(1, 120)>

Second dense layer and activation function:

%9 = matmul(%8, %1)                   # EncryptedTensor<int9, shape=(1, 120)>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ only up to 8-bit integers are supported
%10 = subgraph(%9)                     # EncryptedTensor<uint3, shape=(1, 120)>

Third dense layer and output quantization:

%11 = matmul(%10, %0)                  # EncryptedTensor<int8, shape=(1, 2)>
%12 = subgraph(%11)                    # EncryptedTensor<uint5, shape=(1, 2)>
return %12

We can see here that the error is in the second layer. Reducing the number of neurons in this layer will resolve the error and make the network FHE-compatible:

torch_model = SimpleNet(50)
try:
    quantized_numpy_module = compile_torch_model(
        torch_model,
        torch_input,
        n_bits = 3,
    )
except RuntimeError as err:
    print(err)

Complexity analysis

In FHE, univariate functions are encoded as table lookups, which are then implemented using Programmable Bootstrapping (PBS). PBS is a powerful technique but will require significantly more computing resources, and thus time, than simpler encrypted operations such as matrix multiplications, convolutions or additions.

Furthermore, the cost of PBS will depend on the bit-width of the compiled circuit. Every additional bit in the maximum bit-width raises the complexity of the PBS by a significant factor. It may be of interest to the model developer, then, to determine the bit-width of the circuit and the amount of PBS it performs.

This can be done by inspecting the MLIR code produced by the compiler:

Concrete-ML Model

torch_model = SimpleNet(50)
try:
    quantized_numpy_module = compile_torch_model(
        torch_model,
        torch_input,
        n_bits = 3,
        show_mlir=True,
    )
except RuntimeError as err:
    print(err)

Compiled MLIR model

%cst = arith.constant dense<...> : tensor<50x2xi9>
%cst_0 = arith.constant dense<...>
%cst_1 = arith.constant dense<...> : tensor<2x50xi9>
%c-14_i9 = arith.constant -14 : i9
%c128_i9 = arith.constant 128 : i9
%c128_i9_2 = arith.constant 128 : i9
%c128_i9_3 = arith.constant 128 : i9
%c128_i9_4 = arith.constant 128 : i9
%hack_0_c-14_i9 = tensor.from_elements %c-14_i9 : tensor<1xi9>
%0 = "FHELinalg.add_eint_int"(%arg0, %hack_0_c-14_i9) : (tensor<1x2x!FHE.eint<8>>, tensor<1xi9>) -> tensor<1x2x!FHE.eint<8>>
%hack_1_c128_i9_4 = tensor.from_elements %c128_i9_4 : tensor<1xi9>
%1 = "FHELinalg.add_eint_int"(%0, %hack_1_c128_i9_4) : (tensor<1x2x!FHE.eint<8>>, tensor<1xi9>) -> tensor<1x2x!FHE.eint<8>>
%cst_5 = arith.constant dense<...> : tensor<256xi64>
%2 = "FHELinalg.apply_lookup_table"(%1, %cst_5) : (tensor<1x2x!FHE.eint<8>>, tensor<256xi64>) -> tensor<1x2x!FHE.eint<8>>

%3 = "FHELinalg.matmul_eint_int"(%2, %cst_1) : (tensor<1x2x!FHE.eint<8>>, tensor<2x50xi9>) -> tensor<1x50x!FHE.eint<8>>
%hack_4_c128_i9_3 = tensor.from_elements %c128_i9_3 : tensor<1xi9>
%4 = "FHELinalg.add_eint_int"(%3, %hack_4_c128_i9_3) : (tensor<1x50x!FHE.eint<8>>, tensor<1xi9>) -> tensor<1x50x!FHE.eint<8>>
%cst_6 = arith.constant dense<...> : tensor<34x256xi64>
%cst_7 = arith.constant dense<...]> : tensor<1x50xindex>
%5 = "FHELinalg.apply_mapped_lookup_table"(%4, %cst_6, %cst_7) : (tensor<1x50x!FHE.eint<8>>, tensor<34x256xi64>, tensor<1x50xindex>) -> tensor<1x50x!FHE.eint<8>>

%6 = "FHELinalg.matmul_eint_int"(%5, %cst_0) : (tensor<1x50x!FHE.eint<8>>, tensor<50x50xi9>) -> tensor<1x50x!FHE.eint<8>>
%hack_7_c128_i9_2 = tensor.from_elements %c128_i9_2 : tensor<1xi9>
%7 = "FHELinalg.add_eint_int"(%6, %hack_7_c128_i9_2) : (tensor<1x50x!FHE.eint<8>>, tensor<1xi9>) -> tensor<1x50x!FHE.eint<8>>
%cst_8 = arith.constant dense<...> : tensor<34x256xi64>
%cst_9 = arith.constant dense<...> : tensor<1x50xindex>
%8 = "FHELinalg.apply_mapped_lookup_table"(%7, %cst_8, %cst_9) : (tensor<1x50x!FHE.eint<8>>, tensor<34x256xi64>, tensor<1x50xindex>) -> tensor<1x50x!FHE.eint<8>>

%9 = "FHELinalg.matmul_eint_int"(%8, %cst) : (tensor<1x50x!FHE.eint<8>>, tensor<50x2xi9>) -> tensor<1x2x!FHE.eint<8>>
%hack_10_c128_i9 = tensor.from_elements %c128_i9 : tensor<1xi9>
%10 = "FHELinalg.add_eint_int"(%9, %hack_10_c128_i9) : (tensor<1x2x!FHE.eint<8>>, tensor<1xi9>) -> tensor<1x2x!FHE.eint<8>>
%cst_10 = arith.constant dense<...> : tensor<2x256xi64>
%cst_11 = arith.constant dense<[[0, 1]]> : tensor<1x2xindex>
%11 = "FHELinalg.apply_mapped_lookup_table"(%10, %cst_10, %cst_11) : (tensor<1x2x!FHE.eint<8>>, tensor<2x256xi64>, tensor<1x2xindex>) -> tensor<1x2x!FHE.eint<8>>
return %11 : tensor<1x2x!FHE.eint<8>>

There are several calls to FHELinalg.apply_mapped_lookup_table and FHELinalg.apply_lookup_table. These calls apply PBS to the cells of their input tensors. Their inputs in the listing above are: tensor<1x2x!FHE.eint<8>> for the first and last call and tensor<1x50x!FHE.eint<8>> for the two calls in the middle. Thus, PBS is applied 104 times (2 + 50 + 50 + 2).

Getting the bit-width of the circuit is then simply:

print(quantized_numpy_module.forward_fhe.graph.maximum_integer_bit_width())

Decreasing the number of bits and the number of PBS induces large reductions in the computation time of the compiled circuit.

Inference in the Cloud

Concrete-ML models can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.

Keys are generated by the user once for each service they use, based on the model the service provides and its cryptographic parameters.

The overall communications protocol that enables cloud deployment of machine learning services can be summarized in the following steps (a code sketch is given after the list):

  1. The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference.

  2. The client requests the cryptographic parameters (also called "client specs"). Once it gets them from the server, the secret and evaluation keys are generated.

  3. The client sends the evaluation key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data.

  4. The server uses the evaluation key to securely run inference on the user's data and sends back the encrypted result.

  5. The client now decrypts the result and can send back new requests.
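
The sketch below walks through these steps using the client/server serialization classes from concrete.ml.deployment. The exact class and method names, as well as the model and x_new variables, are assumptions made for illustration; check the deployment API documentation of your Concrete-ML version before relying on them:

# Hypothetical sketch: API names below are assumed and may differ between versions.
from concrete.ml.deployment.fhe_client_server import FHEModelClient, FHEModelDev, FHEModelServer

# 1. Model developer: save the compiled model (client specs + server circuit)
FHEModelDev(path_dir="deployment", model=model).save()

# 2. Client: get the client specs and generate secret and evaluation keys
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# 3. Client: encrypt the input and send it, along with the evaluation key
encrypted_input = client.quantize_encrypt_serialize(x_new)

# 4. Server: run the inference on encrypted data using the evaluation key
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# 5. Client: decrypt and de-quantize the result
y_new = client.deserialize_decrypt_dequantize(encrypted_result)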

Deep Learning Examples

Summary

The following table summarizes the examples in this section:

Examples

Neural Networks

Concrete-ML provides simple neural networks models with a Scikit-learn interface through the NeuralNetClassifier and NeuralNetRegressor classes.


These models use a stack of linear layers and the activation function and the number of neurons in each layer is configurable. This approach is similar to what is available in Scikit-learn using the MLPClassifier/MLPRegressor classes. The built-in, fully connected neural network (FCNN) models train easily with a single call to .fit(), which will automatically quantize the weights and activations. These models use Quantization Aware Training, allowing good performance for low precision (down to 2-3 bit) weights and activations.

Example usage

To create an instance of a Fully Connected Neural Network, you need to instantiate one of the NeuralNetClassifier or NeuralNetRegressor classes and configure a number of parameters that are passed to their constructor. Note that some parameters need to be prefixed by module__, while others don't: the parameters related to the model, i.e. the underlying nn.Module, must have the prefix, while the parameters related to training options do not.

from concrete.ml.sklearn import NeuralNetClassifier
import torch.nn as nn

n_inputs = 10
n_outputs = 2
params = {
    "module__n_layers": 2,
    "module__n_w_bits": 2,
    "module__n_a_bits": 2,
    "module__n_accum_bits": 8,
    "module__n_hidden_neurons_multiplier": 1,
    "module__n_outputs": n_outputs,
    "module__input_dim": n_inputs,
    "module__activation_function": nn.ReLU,
    "max_epochs": 10,
}

concrete_classifier = NeuralNetClassifier(**params)

The figure above shows, on the right, the Concrete-ML neural network, trained with Quantization Aware Training, in a FHE-compatible configuration. The figure compares this network to the floating point equivalent, trained with scikit-learn.

Architecture parameters

  • module__n_layers: number of layers in the FCNN, must be at least 1. Note that this is the total number of layers. For a single hidden layer NN model, set module__n_layers=2

  • module__n_outputs: number of outputs (classes or targets)

  • module__input_dim: dimensionality of the input

Quantization parameters

  • n_w_bits (default 3): number of bits for weights

  • n_a_bits (default 3): number of bits for activations and inputs

Training parameters (from Skorch)

  • max_epochs: The number of epochs to train the network (default 10)

  • verbose: Whether to log loss/metrics during training (default: False)

  • lr: Learning rate (default 0.001)

Advanced parameters

Network input/output

When you have training data in the form of a NumPy array, and targets in a NumPy 1d array, you can set:

    classes = np.unique(y_all)
    params["module__input_dim"] = x_train.shape[1]
    params["module__n_outputs"] = len(classes)

Class weights

You can give weights to each class to use in training. Note that this must be supported by the underlying PyTorch loss function.

    from sklearn.utils.class_weight import compute_class_weight
    params["criterion__weight"] = compute_class_weight("balanced", classes=classes, y=y_train)

Overflow errors

The n_hidden_neurons_multiplier parameter influences training accuracy as it controls the number of non-zero neurons that are allowed in each layer. Increasing n_hidden_neurons_multiplier improves accuracy, but should take into account precision limitations to avoid overflow in the accumulator. The default value is a good compromise that avoids overflow, in most cases, but you may want to change the value of this parameter to reduce the breadth of the network if you have overflow errors. A value of 1 should be completely safe with respect to overflow.
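
For instance, reusing the params dictionary from the example above, the multiplier can be adjusted as follows (the value is illustrative):

from concrete.ml.sklearn import NeuralNetClassifier

# Wider network for better accuracy; fall back towards 1 if compilation
# reports an accumulator overflow
params["module__n_hidden_neurons_multiplier"] = 4
concrete_classifier = NeuralNetClassifier(**params)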

Tree-based Models

Concrete-ML provides FHE-compatible tree-based models, such as decision trees, random forests and XGBoost classifiers and regressors, with an interface similar to their scikit-learn and XGBoost equivalents.

Example

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn.xgb import XGBClassifier


# Get data-set and split into train and test
X, y = load_breast_cancer(return_X_y=True)

# Split the train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Define our model
model = XGBClassifier(n_jobs=1, n_bits=3)

# Define the pipeline
# We will normalize the data and apply a PCA before fitting the model
pipeline = Pipeline(
    [("standard_scaler", StandardScaler()), ("pca", PCA(random_state=0)), ("model", model)]
)

# Define the parameters to tune
param_grid = {
    "pca__n_components": [2, 5, 10, 15],
    "model__max_depth": [2, 3, 5],
    "model__n_estimators": [5, 10, 20],
}

# Instantiate the grid search with 5-fold cross validation on all available cores
grid = GridSearchCV(pipeline, param_grid, cv=5, n_jobs=-1, scoring="accuracy")

# Launch the grid search
grid.fit(X_train, y_train)

# Print the best parameters found
print(f"Best parameters found: {grid.best_params_}")

# Output:
#  Best parameters found: {'model__max_depth': 5, 'model__n_estimators': 10, 'pca__n_components': 5}

# Currently we only focus on model inference in FHE
# The data transformation will be done in clear (client machine)
# while the model inference will be done in FHE on a server.
# The pipeline can be split into 2 parts:
#   1. data transformation
#   2. estimator
best_pipeline = grid.best_estimator_
data_transformation_pipeline = best_pipeline[:-1]
model = best_pipeline[-1]

# Transform test set
X_train_transformed = data_transformation_pipeline.transform(X_train)
X_test_transformed = data_transformation_pipeline.transform(X_test)

# Evaluate the model on the test set in clear
y_pred_clear = model.predict(X_test_transformed)
print(f"Test accuracy in clear: {(y_pred_clear == y_test).mean():0.2f}")

# Output:
#  Test accuracy: 0.98

# Compile the model to FHE
model.compile(X_train_transformed)

# Perform the inference in FHE
# Warning: this will take a while. It is recommended to run this with a very small batch of
# example first (e.g. N_TEST_FHE = 1)
# Note that here the encryption and decryption is done behind the scene.
N_TEST_FHE = 1
y_pred_fhe = model.predict(X_test_transformed[:N_TEST_FHE], execute_in_fhe=True)

# Assert that FHE predictions are the same as the clear predictions
print(f"{(y_pred_fhe == y_pred_clear[:N_TEST_FHE]).sum()} "
      f"examples over {N_TEST_FHE} have a FHE inference equal to the clear inference.")

# Output:
#  1 examples over 1 have a FHE inference equal to the clear inference

This graph shows the impact of quantization on the decision boundaries of the Concrete-ML FHE decision tree models. In the 3-bit model, only a rough, highly-discrete decision function is observed. This results in a small accuracy decrease of about 7% compared to the initial XGBoost classifier. Using 6 bits of quantization, the model reaches 93% accuracy, reducing this difference to only 1.7 percentage points.

In fact, the quantization process may sometimes create some artifacts that could lead to a decrease in performance. Still, as the quantization is done individually on each input feature, the artifacts are minor when considering small tree-based models with 5-6 bits quantization. Thus, FHE tree-based models reach similar scores as their equivalent floating point ones.

The following graph shows that using 5-6 bits of quantization is usually sufficient to reach the performance of a non-quantized XGBoost model on floating point data. The metrics plotted are accuracy and F1-score on the spambase data-set.

Step-by-Step Guide

Summary

Baseline model

This example shows how to train a fully-connected neural network on a synthetic 2D data-set with a checkerboard grid pattern of 100 x 100 points. The data is split into 9500 training and 500 test samples.

In PyTorch, using standard layers, this network would look as follows:

from torch import nn
import torch

N_FEAT = 2
class SimpleNet(nn.Module):
    """Simple MLP with PyTorch"""

    def __init__(self, n_hidden=30):
        super().__init__()
        self.fc1 = nn.Linear(in_features=N_FEAT, out_features=n_hidden)
        self.fc2 = nn.Linear(in_features=n_hidden, out_features=n_hidden)
        self.fc3 = nn.Linear(in_features=n_hidden, out_features=2)


    def forward(self, x):
        """Forward pass."""
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

| neurons | 10 | 30 | 100 |
| --- | --- | --- | --- |
| fp32 accuracy | 68.70% | 83.32% | 88.06% |
| 3-bit accuracy | 56.44% | 55.54% | 56.50% |
| mean accumulator size | 6.6 | 6.9 | 7.4 |

This shows that the fp32 accuracy and the accumulator size increase with the number of hidden neurons, while the 3-bit accuracy remains low irrespective of the number of neurons. While all the configurations tried here are FHE-compatible (accumulator < 8 bits), a lower accumulator size is sometimes preferable in order to make the inference time faster.

The accumulator size is determined by Concrete-Numpy as the maximum bit-width encountered anywhere in the encrypted circuit.

Pruning using Torch

Considering that FHE only works with limited integer precision, there is a risk of overflowing in the accumulator, resulting in unpredictable results.

To understand how to overcome this limitation, consider a scenario where 2 bits are used for weights and layer inputs/outputs. The Linear layer computes a dot product between weights and inputs $y = \sum_i w_i x_i$. With 2 bits, no overflow can occur during the computation of the Linear layer as long as the number of neurons does not exceed 14, i.e. the sum of 14 products of 2-bit numbers does not exceed 7 bits.
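
As a quick sanity check of this bound (a standalone arithmetic sketch, not Concrete-ML code):

n_bits = 2
max_abs_value = 2 ** (n_bits - 1)        # 2-bit signed values lie in [-2, 1], so |w|, |x| <= 2
max_abs_product = max_abs_value ** 2     # largest absolute product of a weight and an input: 4
n_terms = 14
max_abs_sum = n_terms * max_abs_product  # 14 * 4 = 56

# 56 fits in a signed 7-bit accumulator, whose range is [-64, 63]
assert max_abs_sum <= 2 ** (7 - 1) - 1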

By default, Concrete-ML uses symmetric quantization for model weights, with values in the interval $\left[-2^{n_{bits}-1}, 2^{n_{bits}-1}-1\right]$. For example, for $n_{bits}=2$ the possible values are $[-2, -1, 0, 1]$; for $n_{bits}=3$, the values can be $[-4, -3, -2, -1, 0, 1, 2, 3]$.

However, in a typical setting, the weights will not all have the maximum or minimum values (e.g. $-2^{n_{bits}-1}$). Instead, weights typically have a normal distribution around 0, which is one of the motivating factors for their symmetric quantization. A symmetric distribution and many zero-valued weights are desirable because opposite sign weights can cancel each other out and zero weights do not increase the accumulator size.

The following code shows how to use pruning in the previous example:

import torch.nn.utils.prune as prune

class PrunedSimpleNet(SimpleNet):
    """Simple MLP with PyTorch"""

    def prune(self, max_non_zero, enable):
        # Linear layer weight has dimensions NumOutputs x NumInputs
        for _, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue

                if enable:
                    prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                else:
                    prune.remove(layer, "weight")
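
A sketch of how this pruning method might be used around a training loop (the training loop itself is omitted):

torch_model = PrunedSimpleNet(n_hidden=100)

# Enable pruning so that only about 10 non-zero neurons are kept per layer
torch_model.prune(10, True)

# ... train the network as usual ...

# Make the pruning permanent by removing the pruning re-parametrization
torch_model.prune(10, False)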

Results with PrunedSimpleNet, a pruned version of the SimpleNet with 100 neurons on the hidden layers, are given below:

| non-zero neurons | 10 | 30 |
| --- | --- | --- |
| fp32 accuracy | 82.50% | 88.06% |
| 3-bit accuracy | 57.74% | 57.82% |
| mean accumulator size | 6.6 | 6.8 |

This shows that the fp32 accuracy has been improved while maintaining constant mean accumulator size.

When pruning a larger neural network during training, it is easier to obtain a low bit-width accumulator while maintaining better final accuracy. Thus, pruning is more robust than training a similar smaller network.

Quantization Aware Training

The QAT import tool in Concrete-ML is a work in progress. While it has been tested with some networks built with Brevitas, it is possible to use other tools to obtain QAT networks.

import brevitas.nn as qnn


from brevitas.core.bit_width import BitWidthImplType
from brevitas.core.quant import QuantType
from brevitas.core.restrict_val import FloatToIntImplType, RestrictValueType
from brevitas.core.scaling import ScalingImplType
from brevitas.core.zero_point import ZeroZeroPoint
from brevitas.inject import ExtendedInjector
from brevitas.quant.solver import ActQuantSolver, WeightQuantSolver
from dependencies import value

# Configure quantization options
class CommonQuant(ExtendedInjector):
    bit_width_impl_type = BitWidthImplType.CONST
    scaling_impl_type = ScalingImplType.CONST
    restrict_scaling_type = RestrictValueType.FP
    zero_point_impl = ZeroZeroPoint
    float_to_int_impl_type = FloatToIntImplType.ROUND
    scaling_per_output_channel = False
    narrow_range = True
    signed = True

    @value
    def quant_type(bit_width):
        if bit_width is None:
            return QuantType.FP
        elif bit_width == 1:
            return QuantType.BINARY
        else:
            return QuantType.INT

# Quantization options for weights/activations
class CommonWeightQuant(CommonQuant, WeightQuantSolver):
    scaling_const = 1.0
    signed = True


class CommonActQuant(CommonQuant, ActQuantSolver):
    min_val = -1.0
    max_val = 1.0

class QATPrunedSimpleNet(nn.Module):
    def __init__(self, n_hidden):
        super(QATPrunedSimpleNet, self).__init__()

        n_bits = 3
        self.quant_inp = qnn.QuantIdentity(
            act_quant=CommonActQuant,
            bit_width=n_bits,
            return_quant_tensor=True,
        )

        self.fc1 = qnn.QuantLinear(
            N_FEAT,
            n_hidden,
            True,
            weight_quant=CommonWeightQuant,
            weight_bit_width=n_bits,
            bias_quant=None,
        )

        self.q1 = qnn.QuantIdentity(
            act_quant=CommonActQuant, bit_width=n_bits, return_quant_tensor=True
        )

        self.fc2 = qnn.QuantLinear(
            n_hidden,
            n_hidden,
            True,
            weight_quant=CommonWeightQuant,
            weight_bit_width=3,
            bias_quant=None
        )

        self.q2 = qnn.QuantIdentity(
            act_quant=CommonActQuant, bit_width=n_bits, return_quant_tensor=True
        )

        self.fc3 = qnn.QuantLinear(
            n_hidden,
            2,
            True,
            weight_quant=CommonWeightQuant,
            weight_bit_width=n_bits,
            bias_quant=None,
        )

        for m in self.modules():
            if isinstance(m, qnn.QuantLinear):
                torch.nn.init.uniform_(m.weight.data, -1, 1)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.q1(torch.relu(self.fc1(x)))
        x = self.q2(torch.relu(self.fc2(x)))
        x = self.fc3(x)
        return x

    def prune(self, max_non_zero, enable):
        # Linear layer weight has dimensions NumOutputs x NumInputs
        for name, layer in self.named_modules():
            if isinstance(layer, nn.Linear):
                num_zero_weights = (layer.weight.shape[1] - max_non_zero) * layer.weight.shape[0]
                if num_zero_weights <= 0:
                    continue

                if enable:
                    print(f"Pruning layer {name} factor {num_zero_weights}")
                    prune.l1_unstructured(layer, "weight", amount=num_zero_weights)
                else:
                    prune.remove(layer, "weight")

Training this network with 30 out of 100 total non-zero neurons gives good accuracy while being FHE-compatible (accumulator size < 8 bits).

| non-zero neurons | 30 |
| --- | --- |
| 3-bit accuracy (Brevitas) | 95.4% |
| 3-bit accuracy (Concrete-ML) | 92.4% |
| accumulator size | 7 |

The PyTorch QAT training loop is the same as the standard floating point training loop, but hyper-parameters such as learning rate might need to be adjusted.

Quantization Aware Training is somewhat slower than normal training. QAT introduces quantization during both the forward and backward passes. The quantization process is inefficient on GPUs as its computational intensity is low with respect to data transfer time.
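
As an illustration, a minimal QAT training loop for the network above could look like the following sketch (the data here is random placeholder data with the same shape as the checkerboard data-set, and the hyper-parameters are only examples):

import torch

torch_model = QATPrunedSimpleNet(n_hidden=100)
torch_model.prune(30, True)

optimizer = torch.optim.Adam(torch_model.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()

# Placeholder data: N_FEAT = 2 inputs, 2 classes
x_train = torch.rand(9500, 2) * 2 - 1
y_train = torch.randint(0, 2, (9500,))

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(torch_model(x_train), y_train)
    loss.backward()
    optimizer.step()

# Make the pruning permanent before importing the network into Concrete-ML
torch_model.prune(30, False)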

Production Deployment

Concrete-ML provides functionality to deploy FHE machine learning models in a client/server setting. The deployment workflow and model serving pattern is as follows:

Deployment

The training of the model and its compilation to FHE are performed on a development machine. Three different files are created when saving the model:

  • client.zip contains the secure cryptographic parameters needed for the client to generate private and evaluation keys.

  • server.zip contains the compiled model. This file is sufficient to run the model on a server.

  • serialized_processing.json contains the metadata about pre- and post-processing, such as quantization parameters to quantize the input and de-quantize the output.

The compiled model (server.zip) is deployed to a server and the cryptographic parameters (client.zip) along with the model meta data (serialized_processing.json) are shared with the clients.

Serving

The client obtains the cryptographic parameters (using client.zip) and generates a private encryption/decryption key as well as a set of public evaluation keys. The public evaluation keys are then sent to the server, while the secret key remains on the client.

The private data is then encrypted using serialized_processing.json by the client and sent to the server. Server-side, the FHE model inference is run on the encrypted inputs using the public evaluation keys.

The encrypted result is then returned by the server to the client, which decrypts it using its private key. Finally, the client performs any necessary post-processing of the decrypted result using serialized_processing.json.
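
As a rough sketch of this exchange, assuming the deployment helpers FHEModelDev, FHEModelClient and FHEModelServer, and the method names shown below, are available in concrete.ml.deployment (the exact API is demonstrated in the client/server example notebook referenced below):

from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Development machine: model is a fitted and compiled Concrete-ML model;
# this saves client.zip, server.zip and serialized_processing.json
FHEModelDev(path_dir="deployment", model=model).save()

# Server: load the compiled model
server = FHEModelServer(path_dir="deployment")
server.load()

# Client: generate the private and public evaluation keys, then encrypt the input
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X_test[[0]])  # X_test: client-side data

# Server: run the FHE inference on the encrypted input
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# Client: decrypt, de-quantize and post-process the result
result = client.deserialize_decrypt_dequantize(encrypted_result)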

Example notebook

Advanced Features

Concrete-ML offers some features for advanced users that wish to adjust the cryptographic parameters that are generated by the Concrete stack for a certain machine learning model.

Approximate computations using the p_error parameter

Concrete-ML makes use of table lookup (TLU) to represent any non-linear operation (e.g. sigmoid). This TLU is implemented through the Programmable Bootstrapping (PBS) operation which will apply a non-linear operation in the cryptographic realm.

In Concrete-ML, the result of the TLU operation is obtained with a specific error probability:

A single PBS operation has a 1 - DEFAULT_P_ERROR_PBS = 99.9936657516% chance of being correct. This number plays a role in the choice of cryptographic parameters: the lower the p_error, the more constraining the parameters become. This has an impact on both key generation and, more importantly, on FHE execution time.

Here is a visualization of the effect of the p_error over a simple linear regression with a p_error = 0.1 vs the default p_error value:

The execution times for the two models are 336 ms per example for the default p_error and 253 ms per example for p_error = 0.1 (on an 8-core Intel CPU machine). This speedup is, of course, highly dependent on model complexity. To obtain a speedup while maintaining good accuracy, it is possible to search for a good value of p_error; currently, no heuristic has been proposed to find a good value a priori.

Users can change this p_error as they see fit by passing an argument to the compile function of any of the models. Here is an example:
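
(The snippet assumes X_train and y_train have been prepared beforehand; the same p_error keyword can be passed to the compile function of any built-in model.)

from concrete.ml.sklearn.xgb import XGBClassifier

clf = XGBClassifier()
clf.fit(X_train, y_train)

# Here comes the p_error parameter
clf.compile(X_train, p_error=0.1)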

Support and Issues

Concrete-ML is a constant work-in-progress, and thus may contain bugs or suboptimal APIs.

Furthermore, undefined behavior may occur if the input-set, which is internally used by the compilation core to set bit-widths of some intermediate data, is not sufficiently representative of the future user inputs. Based on the input-set, the compiler may determine that some intermediate data fits in an n-bit integer, while a particular user computation actually requires more bits for that same intermediate data. The FHE execution of such a computation will produce an incorrect output, just as an integer overflow does in a classical program.

Submitting an issue

  • the reproducibility rate you see on your side

  • any insight you might have on the bug

  • any workaround you have been able to find

Quantization tools

Quantizing data

Concrete-ML has support for quantized ML models and also provides quantization tools for Quantization Aware Training and Post-Training Quantization. The core of this functionality is the conversion of floating point values to integers and back. This is done using QuantizedArray in concrete.ml.quantization.

  • n_bits that defines the precision of the quantization

  • values are floating point values that will be converted to integers

  • is_signed determines if the quantized integer values should allow negative values

  • is_symmetric determines if the range of floating point values to be quantized should be taken as symmetric around zero

It is also possible to use symmetric quantization, where the integer values are centered around 0:

In the following example, showing the de-quantization of model outputs, the QuantizedArray class is used in a different way. Here it uses pre-quantized integer values and has the scale and zero-point set explicitly. Once the QuantizedArray is constructed, calling dequant() will compute the floating point values corresponding to the integer values qvalues, which are the output of the forward_fhe.encrypt_run_decrypt(..) call.

Quantized modules

Machine learning models are implemented with a diverse set of operations, such as convolution, linear transformations, activation functions and element-wise operations. When working with quantized values, these operations cannot be carried out in an equivalent way as for floating point values. With quantization, it is necessary to re-scale the input and output values of each operation to fit in the quantization domain.

In Concrete-ML, the quantized equivalent of a scikit-learn model or a PyTorch nn.Module is the QuantizedModule. Note that only inference is implemented in the QuantizedModule, and it is built through a conversion of the inference function of the corresponding scikit-learn or PyTorch module.

Built-in neural networks expose the quantized_module member, while a QuantizedModule is also the result of the compilation of custom models through compile_torch_model and compile_brevitas_qat_model.

Calibration is the process of determining the typical distributions of values encountered for the intermediate values of a model during inference.

Resources

Contributing

There are three ways to contribute to Concrete-ML:

  • You can open issues to report bugs and typos and to suggest ideas.

  • You can also provide new tutorials or use-cases, showing what can be done with the library. The more examples we have, the better and clearer it is for the other users.

1. Creating a new branch

Concrete-ML uses a consistent branch naming scheme, and you are expected to follow it as well. Here is the format, along with some examples:

e.g.

2. Before committing

2.1 Conformance

Each commit to Concrete-ML should conform to the standards of the project. You can let the development tools fix some issues automatically with the following command:

Conformance can be checked using the following command:

2.2 Testing

Your code must be well documented, containing tests and not breaking other tests:

You need to make sure you get 100% code coverage. The make pytest command checks that by default and will fail with a coverage report at the end should some lines of your code not be executed during testing.

If your coverage is below 100%, you should write more tests and then create the pull request. If you ignore this warning and create the PR, GitHub actions will fail and your PR will not be merged.

There may be cases where covering your code is not possible (an exception that cannot be triggered in normal execution circumstances). In those cases, you may be allowed to disable coverage for some specific lines. This should be the exception rather than the rule, and reviewers will ask why some lines are not covered. If it appears they can be covered, then the PR won't be accepted in that state.

3. Committing

Concrete-ML uses a consistent commit naming scheme, and you are expected to follow it as well (the CI will make sure you do). The accepted format can be printed to your terminal by running:

e.g.

4. Rebasing

You should rebase on top of the main branch before you create your pull request. Merge commits are not allowed, so rebasing on main before pushing gives you the best chance of avoiding having to rewrite parts of your PR later if conflicts arise with other PRs being merged. After you commit your changes to your new branch, you can use the following commands to rebase:

5. Releases

Set Up Docker

Building the image

Once you do that, you can get inside the Docker environment using the following command:

After you finish your work, you can leave Docker by using the exit command or by pressing CTRL + D.

Importing ONNX

As ONNX is becoming the standard exchange format for neural networks, this allows Concrete-ML to be flexible while also making model representation manipulation quite easy. In addition, it allows for straight-forward mapping to NumPy operators, supported by Concrete-Numpy to use Concrete stack's FHE conversion capabilities.

Torch to NumPy conversion using ONNX

The diagram below gives an overview of the steps involved in the conversion of an ONNX graph to a FHE compatible format, i.e. a format that can be compiled to FHE through Concrete-Numpy.

All Concrete-ML built-in models follow the same pattern for FHE conversion:

  1. The models are trained with sklearn or PyTorch

  2. The Concrete-ML ONNX parser checks that all the operations in the ONNX graph are supported and assigns reference NumPy operations to them. This step produces a NumpyModule.

  3. Once the QuantizedModule is built, Concrete-Numpy is used to trace the ._forward() function of the QuantizedModule.

Once an ONNX model is imported, it is converted to a NumpyModule, then to a QuantizedModule and, finally, to a FHE circuit. However, as the diagram shows, it is perfectly possible to stop at the NumpyModule level if you just want to run the PyTorch model as NumPy code without doing quantization.
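
From a user's perspective, this whole pipeline is triggered by the compile functions mentioned earlier. A minimal sketch, assuming compile_torch_model is importable from concrete.ml.torch.compile and accepts an n_bits argument (check the API reference for the exact signature):

import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

# A tiny FHE-friendly model; the input-set below is used for calibration and bit-width assignment
torch_model = torch.nn.Sequential(torch.nn.Linear(2, 4), torch.nn.ReLU(), torch.nn.Linear(4, 2))
x_calib = numpy.random.uniform(-1, 1, size=(100, 2))

quantized_module = compile_torch_model(torch_model, x_calib, n_bits=3)
# quantized_module is a QuantizedModule wrapping the compiled FHE circuit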

Inspecting the ONNX models

Documentation

Using GitBook

Documentation with GitBook is done mainly by pushing content on GitHub. GitBook then pulls the docs from the repository and publishes. In most cases, GitBook is just a mirror of what is available in GitHub.

There are, however, some use-cases where documentation can be modified directly in GitBook (and then, push the modifications to GitHub), for example when the documentation is modified by a person outside of Zama. In this case, a GitHub branch is created, and a GitHub space is associated to it: modifications are done in this space and automatically pushed to the branch. Once the modifications are done, one can simply create a pull-request, to finally merge modifications on the main branch.

Using Sphinx

Documentation can alternatively be built using Sphinx:

The documentation contains both files written by hand by developers (the .md files) and files automatically created by parsing the source files.

Then to open it, go to docs/_build/html/index.html or use the following command:

To build and open the docs at the same time, use:

Set Up the Project

Concrete-ML is a Python library, so Python should be installed to develop Concrete-ML. v3.8 and v3.9 are the only supported versions. Concrete-ML also uses Poetry and Make.

First of all, you need to git clone the project:

Automatic installation

For Windows users, the setup_os_deps.sh script does not install dependencies, due to the many different installation methods available and the lack of a single package manager.

Manual installation

Python

Poetry

As there is no concrete-compiler package for Windows, only the dev dependencies can be installed. This requires Poetry >= 1.2.

Make

The dev tools use make to launch the various commands.

On Linux, you can install make from your distribution's preferred package manager.

On macOS, you can install a more recent version of make via brew:

In the following sections, be sure to use the proper make tool for your system: make, gmake, or other.

Cloning the repository

To get the source code of Concrete-ML, clone the code repository using the link for your favourite communication protocol (ssh or https).

Setting up environment on your host OS

We are going to make use of virtual environments. This helps to keep the project isolated from other Python projects in the system. The following commands will create a new virtual environment under the project directory and install dependencies to it.

The following command will not work on Windows if you don't have Poetry >= 1.2.

Activating the environment

Finally, activate the newly created environment using the following command:

macOS or Linux

Windows

Setting up environment on Docker

Docker automatically creates and sources a venv in ~/dev_venv/

The venv persists thanks to volumes. It also creates a volume for ~/.cache to speed up later reinstallations. You can check which Docker volumes exist with:

You can still run all make commands inside Docker (to update the venv, for example). Be mindful of the current venv being used (the name in parentheses at the beginning of your command prompt).

Leaving the environment

After your work is done, you can simply run the following command to leave the environment:

Syncing environment with the latest changes

From time to time, new dependencies will be added to the project or the old ones will be removed. The command below will make sure the project has the proper environment, so run it regularly!

Troubleshooting your environment

in your OS

If you are having issues, consider using the dev Docker exclusively (unless you are working on OS-specific bug fixes or features).

Here are the steps you can take on your OS to try and fix issues:

in Docker

Here are the steps you can take in your Docker to try and fix issues:

If the problem persists at this point, you should ask for help. We're here and ready to assist!

Quantization Aware Training (QAT)
FHE constraints
the ONNX guide

Pruning is a method to reduce neural network complexity, usually applied in order to reduce the computation cost or memory size. Pruning is used in Concrete-ML to control the size of accumulators in neural networks, thus making them FHE-compatible. See for an explanation of accumulator bit-width constraints.

Built-in neural networks include a pruning mechanism that can be parameterized by the user. The pruning type is based on L1-norm. To comply with FHE constraints, Concrete-ML uses unstructured pruning, as the aim is not to eliminate neurons or convolutional filters completely, but to decrease their accumulator bit-width.

Custom neural networks, to work well under FHE constraints, should include pruning. When implemented with PyTorch, you can use the framework's pruning mechanism (e.g. L1-Unstructured) to good effect.

To respect the bit-width constraint of the FHE Table Lookup, the values of the accumulator $v_k$ must remain small to be representable with only 8 bits. In other words, the values must be between 0 and 255.

While pruning weights can reduce the prediction performance of the neural network, studies show that a high level of pruning (above 50%) can often be applied. See how Concrete-ML uses pruning in the built-in neural networks.

For custom neural networks with more complex topology, obtaining FHE-compatible models with good accuracy requires QAT. Concrete-ML offers the possibility for the user to perform quantization before compiling to FHE. This can be achieved through a third-party library that offers QAT tools, such as Brevitas for PyTorch. In this approach, the user is responsible for implementing a full-integer model that respects FHE constraints. Please refer to the advanced QAT tutorial for tips on designing FHE neural networks.

When using quantized values in a matrix multiplication or convolution, the equations for computing the result become more complex. The IntelLabs distiller quantization documentation provides a more detailed explanation of the maths used to quantize values and how to keep computations consistent.

For linear models, the quantization is done post-training. Thus, the model is trained in floating point, and then the best integer weight representations are found, depending on the distribution of inputs and weights. For these models, the user can select the value of the n_bits parameter.

For tree-based models, the training and test data is quantized. The maximum accumulator bit-width for a model trained with n_bits=n for this type of model is known beforehand: it will need n+1 bits. Thus, as Concrete-ML only supports up to 8-bit integers, n should be less than 8. Through experimentation, it was determined that in many cases a value of 5 or 6 bits gives the same accuracy as training in floating point.

For the built-in neural networks, several linear layers are used. Thus, the outputs of a layer are used as inputs to a new layer. Built-in neural networks use Quantization Aware Training. The parameters controlling the maximum accumulator bit-width are the number of weight and activation bits (module__n_w_bits, module__n_a_bits), but also the pruning factor. This factor is determined automatically by specifying a desired accumulator bit-width module__n_accum_bits and, optionally, a multiplier factor, module__n_hidden_neurons_multiplier.

In a client/server setting, the client is responsible for quantizing inputs before sending them, encrypted, to the server. Further, the client must de-quantize the encrypted integer results received from the server. See the Production Deployment section for more details.

IntelLabs distiller explanation of quantization:


Concrete-ML is an open-source privacy-preserving machine learning inference framework based on fully homomorphic encryption (FHE). It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalent, using familiar APIs from Scikit-learn and PyTorch (see how it looks for linear models, tree-based models and neural networks).

Fully Homomorphic Encryption (FHE) is an encryption technique that allows computing directly on encrypted data, without needing to decrypt it. With FHE, you can build private-by-design applications without compromising on features. You can learn more about FHE in this introduction, or by joining the FHE.org community.

Here is a simple example of classification on encrypted data using logistic regression. More examples can be found here.

Inference can then be done on encrypted data. The above example shows encrypted inference in the model development phase. Alternatively, in deployment, in a client/server setting, the data is encrypted by the client, processed securely by the server and then decrypted by the client.

Concrete-ML is built on top of Zama's Concrete framework. It uses Concrete-Numpy, which itself uses the Concrete-Compiler and the Concrete-Library. To use these libraries directly, refer to the Concrete-Numpy and Concrete-Framework documentations.

Various tutorials are proposed for the built-in models and for deep learning. In addition, we also list standalone use-cases:

MNIST: a Python script and notebook showing Quantization Aware Training (done with Brevitas and following the constraints of the package) and its corresponding use in Concrete-ML.

Titanic: a notebook which gives a solution to the Kaggle Titanic competition. Done with XGBoost from Concrete-ML. It comes as a companion of the Kaggle notebook, and was the subject of a blogpost on KDnuggets.

Support forum: https://community.zama.ai (we answer in less than 24 hours).

Live discussion on the FHE.org discord server: https://discord.fhe.org (inside the #concrete channel).

Do you have a question about Zama? You can write to us on Twitter or send us an email at: hello@zama.ai

Concrete-ML implements machine learning model inference using Concrete-Numpy as a backend. In order to execute in FHE, a numerical program written in Concrete-Numpy needs to be compiled. This functionality is described here, and Concrete-ML hides away most of the complexity of this step. The entire compilation process is done by Concrete-Numpy.

Additionally, the client/server API packages the result of the last step in a way that allows the deployment of the encrypted circuit to a server, as well as key generation, encryption and decryption on the client side.

The first step in the list above takes a Python function implemented using the Concrete-Numpy supported operation set and transforms it into an executable operation graph.

While Concrete-ML hides away all the Concrete-Numpy code that performs model inference, it can be useful to understand how Concrete-Numpy code works. Here is a toy example for a simple linear regression model on integers. Note that this is just an example to illustrate compilation concepts. Generally, it is recommended to use the built-in models, which provide linear regression out of the box.

The Virtual Lib, being pure Python and not requiring crypto key generation, can be much faster than the actual compilation and FHE execution, thus allowing for faster iterations, debugging and FHE simulation, regardless of the bit-width used. For example, this was used for the red/blue contours in the Classifier Comparison notebook, as computing in FHE for the whole grid and all the classifiers would take significant time.

As seen in the concepts section, a Concrete-ML model, once compiled to FHE, generates machine code that performs the inference on private data. Furthermore, secret encryption keys are needed so that the user can securely encrypt their data and decrypt the inference result. An evaluation key is also needed for the server to securely process the user's encrypted data.

For more information on how to implement this basic secure inference protocol, refer to the Production Deployment section and to the client/server example.

| Model | Data-set | Metric | Floating Point | Simulation | FHE |
| --- | --- | --- | --- | --- | --- |
| Fully Connected NN | Iris | accuracy | 0.95 | 0.94 | 0.94 |
| QAT Fully Connected NN | Synthetic (Checkerboard) | accuracy | 0.95 | 0.92 | 0.92 |
| Convolutional NN | Digits | accuracy | 0.97 | 0.91 | 0.91 |

The neural network models are built with Skorch, which provides a scikit-learn-like interface to Torch models (more details here).

While NeuralNetClassifier and NeuralNetRegressor provide scikit-learn-like models, their architecture is somewhat restricted in order to make training easy and robust. If you need more advanced models, you can convert custom neural networks as described in the FHE-friendly models documentation.

Good quantization parameter values are critical to make models respect FHE constraints. Weights and activations should be quantized to low precision (e.g. 2-4 bits). Furthermore, in cases of overflow, the sparsity of the network can be tuned as described below.

The Classifier Comparison notebook shows the behavior of built-in neural networks on several synthetic datasets.

module__activation_function: can be one of the Torch activations (e.g. nn.ReLU, see the full list here)

n_accum_bits (default 8): maximum accumulator bit-width that is desired. The implementation will attempt to keep accumulators under this bit-width through pruning, i.e. setting some weights to zero

Other parameters from skorch can be found in the Skorch documentation.

module__n_hidden_neurons_multiplier: The number of hidden neurons will be automatically set proportional to the dimensionality of the input (i.e. the value for module__input_dim). This parameter controls the proportionality factor and is set to 4 by default. This value gives good accuracy while avoiding accumulator overflow. See the pruning and quantization sections for more info.

Concrete-ML provides several of the most popular classification and regression tree models that can be found in scikit-learn:

In addition to support for scikit-learn, Concrete-ML also supports XGBoost's XGBClassifier:

Here's an example of how to use this model in FHE on a popular data-set using some of scikit-learn's pre-processing tools. A more complete example can be found in the XGBClassifier notebook.

Using the above example, we can then plot how the model classifies the inputs and then compare those results with the XGBoost model executed in clear. A 6-bit model is also given in order to better understand the impact of quantization on classification. Similar plots can be found in the Classifier Comparison notebook.

This section includes a complete example of converting a neural network to Quantization Aware Training (QAT). This tutorial uses PyTorch and Brevitas to train a simple network on a synthetic data-set. You can find the demo of the final network in the custom-model with quantization aware training demo. To see how to apply these network design principles to a real-world data-set, please see the MNIST use-case example.

For a more formal description of the usage of Brevitas to build FHE-compatible neural networks, please see the Brevitas usage reference.

Once trained, this network can be imported using the compile_torch_model function. This function uses simple Post-Training Quantization.

The network was trained using different numbers of neurons in the hidden layers, and quantized using 3-bit weights and activations. The mean accumulator size shown below was extracted using the Virtual Library.

This can be leveraged to train a network with more neurons, while not overflowing the accumulator, using a technique called pruning, where the developer can impose a number of zero-valued weights. Torch provides support for pruning out of the box.

While pruning helps maintain the post-quantization level of accuracy in low-precision settings, it does not help maintain accuracy when quantizing from floating point models. The best way to guarantee accuracy is to use QAT (read more in the quantization documentation).

In this example, QAT is done using Brevitas, changing Linear layers to QuantLinear and adding quantizers on the inputs of linear layers using QuantIdentity.

For a complete example, see this notebook.

This number is set by default to be relatively low, so that any user can build deep circuits without being impacted by this noise. However, there might be use cases and specific circuits where the Gaussian noise can increase without being too dramatic for the circuit accuracy. In that case, increasing the p_error can be relevant, as it will reduce the execution time in FHE.

Before opening an issue or asking for support, please read this documentation to understand common issues and limitations of Concrete-ML. You can also check the outstanding issues on GitHub.

If you didn't find an answer, you can ask a question on the Zama forum, or in the FHE.org discord.

When submitting an issue, ideally include as much information as possible. In addition to the Python script, the following information is useful:

If you would like to contribute to the project and send pull requests, take a look at the contributor guide.

The QuantizedArray class takes several arguments that determine how float values are quantized:

See also the reference for more information:

The quantized versions of floating point model operations are stored in the QuantizedModule. The ONNX_OPS_TO_QUANTIZED_IMPL dictionary maps ONNX floating point operators (e.g. Gemm) to their quantized equivalent (e.g. QuantizedGemm). For more information on implementing these operations, please see the FHE compatible op-graph section.

The computation graph is taken from the corresponding floating point ONNX graph exported from scikit-learn using Hummingbird, or from the ONNX graph exported by PyTorch. Calibration is used to obtain quantized parameters for the operations in the QuantizedModule. Parameters are also determined for the quantization of inputs during model deployment.

To perform calibration, an interpreter goes through the ONNX graph in topological order and stores the intermediate results as it goes. The statistics of these values determine quantization parameters.

That QuantizedModule generates the Concrete-Numpy function that is compiled to FHE. The compilation will succeed if the intermediate values conform to the 8-bit precision limit of the Concrete stack. See the compilation section for details.

Lei Mao's blog on quantization:

Google paper on neural network quantization and integer-only inference:

You can ask to become an official contributor by emailing hello@zama.ai. Only approved contributors can send pull requests (PR), so please make sure to get in touch before you do.

To learn more about conventional commits, check this page. Just a reminder that commit messages are checked in the conformance step and are rejected if they don't follow the rules.

You can learn more about rebasing .

Before any final release, Concrete-ML contributors go through a release candidate (RC) cycle. The idea is that once the codebase and documentations look ready for a release, you create an RC release by opening an issue with the release template , starting with version vX.Y.Zrc1 and then with versions vX.Y.Zrc2, vX.Y.Zrc3...

Once the last RC is deemed ready, open an issue with the release template using the last RC version from which you remove the rc? part (i.e. v12.67.19 if your last RC version was v12.67.19-rc4) on .

Before you start this section, you must install Docker by following the official guide.

Once you have access to this repository and the dev environment is installed on your host OS (via make setup_env, once you have followed the setup steps), you should be able to launch the commands to build the dev Docker image with make docker_build.

Internally, Concrete-ML uses ONNX operators as an intermediate representation (or IR) for manipulating machine learning models produced through export for PyTorch, Hummingbird and skorch.

All models have a PyTorch implementation for inference. This implementation is provided either by a third-party tool such as Hummingbird or implemented directly in Concrete-ML.

The PyTorch model is exported to ONNX. For more information on the use of ONNX in Concrete-ML, see the Importing ONNX section.

Quantization is performed on the NumpyModule, producing a QuantizedModule. Two steps are performed: calibration and assignment of equivalent QuantizedOp objects to each ONNX operation. The QuantizedModule class is the quantized counterpart of the NumpyModule.

Moreover, by passing a user provided nn.Module to step 2 of the above process, Concrete-ML supports custom user models. See the associated FHE-friendly model documentation for instructions about working with such models.

Note that the NumpyModule interpreter currently supports a limited set of ONNX operators.

In order to better understand how Concrete-ML works under the hood, it is possible to access each model in their ONNX format and then either print it or visualize it by importing the associated file in Netron. For example, with LogisticRegression:

Some tests require files tracked by git-lfs to be downloaded. To do so, please follow the instructions on the git-lfs website, then run git lfs pull.

A simple way to have everything installed is to use the development Docker (see the Docker setup guide). On Linux and macOS, you have to run the script in ./script/make_utils/setup_os_deps.sh. Specify the --linux-install-python flag if you want to install python3.8 as well on apt-enabled Linux distributions. The script should install everything you need for Docker and bare OS development (you can first review the content of the file to check what it will do).

The first step is to install Python (as some of the dev tools depend on it), then Poetry. In addition to installing Python, you are still going to need the following software available on path on Windows, as some of the basic dev tools depend on them:

git

jq

make

Development on Windows only works with the Docker environment. Follow this link to set up the Docker environment.

To manually install Python, you can follow this guide (alternatively, you can google how to install Python 3.8 (or 3.9)).

Poetry is used as the package manager. It drastically simplifies dependency and environment management. You can follow the official guide to install it.

It is possible to install gmake as make. Check this for more info.

On Windows, check this GitHub gist.

At this point, you should consider using Docker as nobody will have the exact same setup as you. If, however, you need to develop on your OS directly, you can ask Zama for help.

neural networks
framework's pruning mechanism
Table Lookup
Brevitas
advanced QAT tutorial
detailed explanation
linear models
tree-based models
neural networks
Production Deployment
Distiller documentation
⭐️ Star the repo on Github
🗣 Community support forum
📁 Contribute to the project
linear models
tree-based models
neural networks
this introduction
FHE.org
here
deployment
Concrete-Numpy
Concrete-Compiler
Concrete-Library
Concrete-Numpy
Concrete-Framework
built-in models
deep learning
MNIST
Brevitas
Titanic
Kaggle Titanic competition
Kaggle notebook
KDnuggets
Dedicated Concrete-ML community support
Zama's blog
FHE.org community
https://community.zama.ai
https://discord.fhe.org
Twitter
described here
client/server API
supported operation set
built-in models
Classifier Comparison notebook
concepts section
Production Deployment section
client/server example
FullyConnectedNeuralNetwork.ipynb
QuantizationAwareTraining.ipynb
ConvolutionalNeuralNetwork.ipynb
FHE-friendly models documentation
Classifier Comparison notebook
pruning
Skorch documentation
pruning
quantization
Scikit-learn
XGBoost
XGBClassifier notebook
Classifier Comparison notebook
custom-model with quantization aware training demo
MNIST use-case example
Virtual Library
pruning
provides support for pruning
quantization documentation
Brevitas
this notebook
here
respect FHE constraints
as described below
here
Building a standard baseline PyTorch model
Adding pruning to make learning more robust
Converting to Quantization Aware Training with Brevitas
DEFAULT_P_ERROR_PBS = 6.3342483999973e-05
from concrete.ml.sklearn.xgb import XGBClassifier
clf = XGBClassifier()
clf.fit(X_train, y_train)

# Here comes the p_error parameter
clf.compile(X_train, p_error = 0.1)
from concrete.ml.quantization import QuantizedArray
import numpy
numpy.random.seed(0)
A = numpy.random.uniform(-2, 2, 10)
print("A = ", A)
# array([ 0.19525402,  0.86075747,  0.4110535,  0.17953273, -0.3053808,
#         0.58357645, -0.24965115,  1.567092 ,  1.85465104, -0.46623392])
q_A = QuantizedArray(7, A)
print("q_A.qvalues = ", q_A.qvalues)
# array([ 37,          73,          48,         36,          9,
#         58,          12,          112,        127,         0])
# the quantized integers values from A.
print("q_A.quantizer.scale = ", q_A.quantizer.scale)
# 0.018274684777173276, the scale S.
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# 26, the zero point Z.
print("q_A.dequant() = ", q_A.dequant())
# array([ 0.20102153,  0.85891018,  0.40204307,  0.18274685, -0.31066964,
#         0.58478991, -0.25584559,  1.57162289,  1.84574316, -0.4751418 ])
# Dequantized values.
q_A = QuantizedArray(3, A)
print("Unsigned: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Unsigned: q_A.qvalues =  [2 4 2 2 0 3 0 6 7 0]
# q_A.quantizer.zero_point =  1

q_A = QuantizedArray(3, A, is_signed=True, is_symmetric=True)
print("Signed Symmetric: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Signed Symmetric: q_A.qvalues =  [ 0  1  1  0  0  1  0  3  3 -1]
# q_A.quantizer.zero_point =  0
import numpy

def dequantize_output(self, qvalues: numpy.ndarray) -> numpy.ndarray:
    # .....
    # Assume: qvalues is the decrypted integer output of the model
    # .....
    QuantizedArray(
            output_layer.n_bits,
            qvalues,
            value_is_float=False,
            scale=output_layer.output_scale,
            zero_point=output_layer.output_zero_point,
        ).dequant()
    # ....
git checkout -b {feat|fix|refactor|test|benchmark|doc|style|chore}/short-description_$issue_id
git checkout -b feat/explicit-tlu_11
git checkout -b fix/tracing_indexing_42
make conformance
make pcc
make pytest
make show_scope
git commit -m "feat: implement bounds checking"
git commit -m "feat(debugging): add an helper function to draw intermediate representation"
git commit -m "fix(tracing): fix a bug that crashed PyTorch tracer"
# fetch the list of active remote branches
git fetch --all --prune

# checkout to main
git checkout main

# pull the latest changes to main (--ff-only is there to prevent accidental commits to main)
git pull --ff-only

# checkout back to your branch
git checkout $YOUR_BRANCH

# rebase on top of main branch
git rebase main

# If there are conflicts during the rebase, resolve them
# and continue the rebase with the following command
git rebase --continue

# push the latest version of the local branch to remote
git push --force
make docker_start

# or build and start at the same time
make docker_build_and_start

# or equivalently but shorter
make docker_bas
import onnx
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create the data for classification
x, y = make_classification(n_samples=100, class_sep=2, n_features=4, random_state=42)

# Retrieve train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    x, y, test_size=10, random_state=42
)

# Fix the number of bits to used for quantization
model = LogisticRegression(n_bits=2)

# Fit the model
model.fit(X_train, y_train)

# Access to the model
onnx_model = model.onnx_model

# Print the model
print(onnx.helper.printable_graph(onnx_model.graph))

# Save the model
onnx.save(onnx_model, "tmp.onnx")

# And then visualize it with Netron
make docs
make open_docs
make docs_and_open
git clone https://github.com/zama-ai/concrete-ml
# check for gmake
which gmake

# If you don't have it, it will error out, install gmake
brew install make

# recheck, now you should have gmake
which gmake
cd concrete-ml
make setup_env
source .venv/bin/activate
source .venv/Scripts/activate
docker volume ls
# Here we have dev_venv sourced
(dev_venv) dev_user@8e299b32283c:/src$ make setup_env
deactivate
make sync_env
# Try to install the env normally
make setup_env

# If you are still having issues, sync the environment
make sync_env

# If you are still having issues on your OS, delete the venv:
rm -rf .venv

# And re-run the env setup
make setup_env
# Try to install the env normally
make setup_env

# If you are still having issues, sync the environment
make sync_env

# If you are still having issues in Docker, delete the venv:
rm -rf ~/dev_venv/*

# Disconnect from Docker
exit

# And relaunch, the venv will be reinstalled
make docker_start

# If you are still out of luck, force a rebuild which will also delete the volumes
make docker_rebuild

# And start Docker, which will reinstall the venv
make docker_start
Iris
Digits
DecisionTreeClassifier
DecisionTreeRegressor
RandomForestClassifier
RandomForestRegressor
XGBClassifier
XGBRegressor
outstanding issues on github
Zama forum
discord
here
contributor
FHE compatible op-graph section
topological order
the compilation section
Quantization for Neural Networks
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
hello@zama.ai
this
here
here
github
this
you followed the steps here
ONNX
PyTorch
Hummingbird
skorch
FHE-friendly model documentation
Netron
git-lfs website
Docker setup
https://gitforwindows.org/
https://github.com/stedolan/jq/releases
https://gist.github.com/evanwill/0207876c3243bbb6863e65ec5dc3f058#make
this link to setup the Docker environment
this
this
StackOverflow post
this GitHub gist
described in the concepts section
here
supports the following ONNX operators
install Python
Poetry
ask Zama for help
Brevitas
Skorch
here
Brevitas usage reference
using HummingBird
Hummingbird

concrete.ml.common

module concrete.ml.common

Module for shared data structures and code.

Global Variables

  • debugging

  • check_inputs

  • utils

concrete.ml.common.debugging

module concrete.ml.common.debugging

Module for debugging.

Global Variables

  • custom_assert

concrete.ml.common.utils

module concrete.ml.common.utils

Utils that can be re-used by other pieces of code in the module.

Global Variables

  • DEFAULT_P_ERROR_PBS


function replace_invalid_arg_name_chars

replace_invalid_arg_name_chars(arg_name: str) → str

Sanitize arg_name, replacing invalid chars by _.

This does not check that the starting character of arg_name is valid.

Args:

  • arg_name (str): the arg name to sanitize.

Returns:

  • str: the sanitized arg name, with only chars in _VALID_ARG_CHARS.
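
A small usage sketch based on the description above; the argument name below is made up and the exact output depends on which characters are in _VALID_ARG_CHARS:

from concrete.ml.common.utils import replace_invalid_arg_name_chars

sanitized = replace_invalid_arg_name_chars("onnx::input.0")
print(sanitized)  # invalid characters such as ':' or '.' are expected to be replaced by '_'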


function generate_proxy_function

generate_proxy_function(
    function_to_proxy: Callable,
    desired_functions_arg_names: Iterable[str]
) → Tuple[Callable, Dict[str, str]]

Generate a proxy function for a function accepting only *args type arguments.

This returns a runtime compiled function with the sanitized argument names passed in desired_functions_arg_names as the arguments to the function.

Args:

  • function_to_proxy (Callable): the function defined like def f(*args) for which to return a function like f_proxy(arg_1, arg_2) for any number of arguments.

  • desired_functions_arg_names (Iterable[str]): the argument names to use, these names are sanitized and the mapping between the original argument name to the sanitized one is returned in a dictionary. Only the sanitized names will work for a call to the proxy function.

Returns:

  • Tuple[Callable, Dict[str, str]]: the proxy function and the mapping of the original arg name to the new and sanitized arg names.


function get_onnx_opset_version

get_onnx_opset_version(onnx_model: ModelProto) → int

Return the ONNX opset_version.

Args:

  • onnx_model (onnx.ModelProto): the model.

Returns:

  • int: the version of the model

concrete.ml.common.check_inputs

module concrete.ml.common.check_inputs

Check and conversion tools.

Utils that are used to check (including convert) some data types which are compatible with scikit-learn to numpy types.


function check_array_and_assert

check_array_and_assert(X)

sklearn.utils.check_array with an assert.

Equivalent of sklearn.utils.check_array, with a final assert that the type is one which is supported by Concrete-ML.

Args:

  • X (object): Input object to check / convert

Returns: The converted and validated array


function check_X_y_and_assert

check_X_y_and_assert(X, y, *args, **kwargs)

sklearn.utils.check_X_y with an assert.

Equivalent of sklearn.utils.check_X_y, with a final assert that the type is one which is supported by Concrete-ML.

Args:

  • X (ndarray, list, sparse matrix): Input data

  • y (ndarray, list, sparse matrix): Labels

  • *args: The arguments to pass to check_X_y

  • **kwargs: The keyword arguments to pass to check_X_y

Returns: The converted and validated arrays
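
A small usage sketch based on the descriptions above (the input values are arbitrary):

from concrete.ml.common.check_inputs import check_array_and_assert, check_X_y_and_assert

X = [[0.1, 2.0], [1.5, 3.0]]  # scikit-learn compatible input (here, a plain list)
y = [0, 1]

# Convert and validate: the results are arrays of a type supported by Concrete-ML
X_checked, y_checked = check_X_y_and_assert(X, y)
X_only = check_array_and_assert(X)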

External Libraries

Hummingbird

Concrete-ML allows the conversion of an ONNX inference to NumPy inference (note that NumPy is always the entry point to run models in FHE with Concrete ML).

Hummingbird exposes a convert function that can be imported as follows from the hummingbird.ml package:

# Disable Hummingbird warnings for pytest.
import warnings
warnings.filterwarnings("ignore")
from hummingbird.ml import convert

This function can be used to convert a machine learning model to an ONNX as follows:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Instantiate the logistic regression from sklearn
lr = LogisticRegression()

# Create synthetic data
X, y = make_classification(
    n_samples=100, n_features=20, n_classes=2
)

# Fit the model
lr.fit(X, y)

# Convert the model to ONNX
onnx_model = convert(lr, backend="onnx", test_input=X).model

In theory, the resulting onnx_model could be used directly with Concrete-ML's get_equivalent_numpy_forward method (as long as all operators present in the ONNX model are implemented in NumPy) to get the NumPy inference.

In practice, there are some steps needed to clean the ONNX output and make the graph compatible with Concrete-ML, such as applying quantization where needed or deleting/replacing non-FHE friendly ONNX operators (such as Softmax and ArgMax).
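
As a rough sketch of that first, theoretical step, assuming get_equivalent_numpy_forward is importable from concrete.ml.onnx.convert and returns a NumPy-only forward function (the exact location and signature should be checked in the API reference):

from concrete.ml.onnx.convert import get_equivalent_numpy_forward

# onnx_model and X come from the Hummingbird conversion example above
numpy_forward = get_equivalent_numpy_forward(onnx_model)

# Run the clear NumPy inference; the outputs are the raw outputs of the ONNX graph
numpy_outputs = numpy_forward(X)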

Skorch

This wrapper implements Torch training boilerplate code, alleviating the work that needs to be done by the user. It is possible to add hooks during the training phase, for example once an epoch is finished.

class SparseQuantNeuralNetImpl(nn.Module):
    """Sparse Quantized Neural Network classifier."""

Brevitas

While Brevitas provides many types of quantization, for Concrete-ML, a custom "mixed integer" quantization applies. This "mixed integer" quantization is much simpler than the "integer only" mode of Brevitas. The "mixed integer" network design is defined as:

  • all weights and activations of convolutional, linear and pooling layers must be quantized (e.g. using Brevitas layers, QuantConv2D, QuantAvgPool2D, QuantLinear)

For "mixed integer" quantization to work, the first layer of a Brevitas nn.Module must be a QuantIdentity layer. However, you can then use functions such as torch.sigmoid on the result of such a quantizing operation.

import torch
import torch.nn as nn
import brevitas.nn as qnn

class QATnetwork(nn.Module):
    def __init__(self):
        super(QATnetwork, self).__init__()
        self.quant_inp = qnn.QuantIdentity(
            bit_width=4, return_quant_tensor=True)
        # ...

    def forward(self, x):
        out = self.quant_inp(x)
        return torch.sigmoid(out)
        # ...

For examples of such a "mixed integer" network design, please see the Quantization Aware Training examples:

FHE Op-graph design

Float vs. quantized operations

Concrete, the underlying implementation of TFHE that powers Concrete-ML, enables two types of operations on integers:

  • arithmetic operations: the addition of two encrypted values and multiplication of encrypted values with clear scalars. These are used, for example, in dot-products, matrix multiplication (linear layers) and convolution.

  • table lookup operations (TLU): using an encrypted value as an index, return the value of a lookup table at that index. This is implemented using Programmable Bootstrapping. This operation is used to perform any non-linear computation such as activation functions, quantization and normalization.

Alternatively, it is possible to use a table lookup to avoid the quantization of the entire graph, by converting floating-point ONNX subgraphs into lambdas and computing their corresponding lookup tables to be evaluated directly in FHE. This operator-fusion technique only requires the input and output of the lambdas to be integers.

For example, in the following graph there is a single input, which must be an encrypted integer tensor. The following series of univariate functions is then fed into a matrix multiplication (MatMul) and fused into a single table lookup with integer inputs and outputs.

ONNX operations

Concrete-ML implements ONNX operations using Concrete-Numpy, which can handle floating point operations as long as they can be fused into an integer lookup table. The ONNX operation implementations are based on the QuantizedOp class.

There are two modes of creation of a single table lookup for a chain of ONNX operations:

  • float mode: when the operation can be fused

  • mixed float/integer: when the ONNX operation needs to perform arithmetic operations

Thus, QuantizedOp instances may need to quantize their inputs or the result of their computation, depending on their position in the graph.

The QuantizedOp class provides a generic implementation of an ONNX operation, including the quantization of inputs and outputs, with the computation implemented in NumPy in ops_impl.py. It is possible to picture the architecture of the QuantizedOp as the following structure:

Operations that can fuse to a TLU

Depending on the position of the op in the graph and its inputs, the QuantizedOp can be fully fused to a TLU.

Many ONNX ops are trivially univariate, as they multiply variable inputs with constants or apply univariate functions such as ReLU, Sigmoid, etc. This includes operations between the input and the MatMul in the graph above (subtraction, comparison, multiplication, etc. between inputs and constants).

Operations that work on integers

Operations, such as matrix multiplication of encrypted inputs with a constant matrix or convolution with constant weights, require that the encrypted inputs be integers. In this case, the input quantizer of the QuantizedOp is applied. These types of operations are implemented with a class that derives from QuantizedOp and implements q_impl, such as QuantizedGemm and QuantizedConv.

Operations that produce graph outputs

Finally, some operations produce graph outputs, which must be integers. These operations need to quantize their outputs as follows:

The diagram above shows that both float ops and integer ops need to quantize their outputs to integers, when placed at the end of the graph.

Putting it all together

To chain the operation types described above following the ONNX graph, Concrete-ML constructs a function that calls the q_impl of the QuantizedOp instances in the graph in sequence, and uses Concrete-Numpy to trace the execution and compile to FHE. Thus, in this chain of function calls, all groups of instructions that operate in floating point will be fused to TLUs. In FHE, each such lookup table is computed with a PBS.

The red contours show the groups of elementary Concrete-Numpy instructions that will be converted to TLUs.

Note that the handling of the graph input is slightly different from that of the QuantizedOp layers: since the encrypted function takes integers as inputs, the input needs to be de-quantized first.

Implementing a QuantizedOp

QuantizedOp is the base class for all ONNX-quantized operators. It abstracts away many things to allow easy implementation of new quantized ops.

Determining if the operation can be fused

The QuantizedOp class exposes a function can_fuse that

  • helps to determine the type of implementation that will be traced

  • determines whether operations further in the graph, that depend on the results of this operation, can fuse

In most cases, ONNX ops have a single variable input and one or more constant inputs.

When the op implements elementwise operations between the inputs and constants (addition, subtraction, multiplication, etc.), the operation can be fused to a TLU. Thus, by default in QuantizedOp, the can_fuse function returns True.

When the op implements operations that mix the various scalars in the input encrypted tensor, the operation cannot fuse, as table lookups are univariate. Thus, operations such as QuantizedGemm and QuantizedConv return False in can_fuse.

Some operations may be found in both settings above. A mechanism is implemented in Concrete-ML to determine if the inputs of a QuantizedOp are produced by a unique integer tensor. Therefore, the can_fuse function of some QuantizedOp types (addition, subtraction) will allow fusion to take place if both operands are produced by a unique integer tensor:

def can_fuse(self) -> bool:
    return len(self._int_input_names) == 1

Case 1: A floating point version of the op is sufficient

You can check ops_impl.py to see how some operations are implemented in NumPy. The declaration convention for these operations is as follows:

  • The required inputs should be positional arguments only before the /, which marks the limit of the positional arguments

  • The optional inputs should be positional or keyword arguments between the / and *, which marks the limits of positional or keyword arguments

  • The operator attributes should be keyword arguments only after the *

The proper use of positional/keyword arguments is required to allow the QuantizedOp class to populate metadata automatically. It uses Python's inspect module and stores relevant information for each argument related to its positional/keyword status. This allows using the NumPy implementation as the specification for QuantizedOp, which removes some data duplication and provides a single source of truth for QuantizedOp and the ONNX-NumPy implementations.
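For illustration, here is a hypothetical operator declaration following this convention (numpy_scaled_add is not an actual Concrete-ML op):

import numpy

def numpy_scaled_add(x, b, /, c=0.0, *, alpha: float = 1.0):
    # x, b:  required inputs    -> positional-only, before the /
    # c:     optional input     -> positional-or-keyword, between / and *
    # alpha: operator attribute -> keyword-only, after the *
    return (alpha * (x + b) + c,)

(result,) = numpy_scaled_add(numpy.array([1.0, 2.0]), numpy.array([3.0, 4.0]), alpha=2.0)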

In that case (unless the quantized implementation requires special handling like QuantizedGemm), you can just set _impl_for_op_named to the name of the ONNX op for which the quantized class is implemented (this uses the mapping ONNX_OPS_TO_NUMPY_IMPL in onnx_utils.py to get the correct implementation).

Case 2: An integer implementation of the op is necessary

Providing an integer implementation requires sub-classing QuantizedOp to create a new operation. This sub-class must override q_impl in order to provide an integer implementation. QuantizedGemm is an example of such a case where quantized matrix multiplication requires proper handling of scales and zero points. The q_impl of that class reflects this.

In the body of q_impl, you can use the _prepare_inputs_with_constants function as follows in order to obtain the quantized integer values:

from concrete.ml.quantization import QuantizedArray

def q_impl(
    self,
    *q_inputs: QuantizedArray,
    **attrs,
) -> QuantizedArray:

    # Retrieve the quantized inputs
    prepared_inputs = self._prepare_inputs_with_constants(
        *q_inputs, calibrate=False, quantize_actual_values=True
    )

Here, prepared_inputs will contain one or more QuantizedArray of which the qvalues are the quantized integers.

Once the required integer processing code is implemented, the output of the q_impl function must be returned as a single QuantizedArray. Most commonly, this is built using the de-quantized results of the processing done in q_impl.

    result = (
        sum_result.astype(numpy.float32) - q_input.quantizer.zero_point
    ) * q_input.quantizer.scale

    return QuantizedArray(
        self.n_bits,
        result,
        value_is_float=True,
        options=self.input_quant_opts,
        stats=self.output_quant_stats,
        params=self.output_quant_params,
    )

Case 3: Both a floating point and an integer implementation are necessary

In this case, in q_impl you can check whether the current operation can be fused by calling self.can_fuse(). You can then have both a floating point and an integer implementation. The traced execution path will depend on can_fuse():

def q_impl(
    self,
    *q_inputs: QuantizedArray,
    **attrs,
) -> QuantizedArray:

    execute_in_float = len(self.constant_inputs) > 0 or self.can_fuse()

    # a floating point implementation that can fuse
    if execute_in_float:
        prepared_inputs = self._prepare_inputs_with_constants(
            *q_inputs, calibrate=False, quantize_actual_values=False
        )

        result = prepared_inputs[0] + self.b_sign * prepared_inputs[1]
        return QuantizedArray(
            self.n_bits,
            result,
            # ......
        )
    else:
        prepared_inputs = self._prepare_inputs_with_constants(
            *q_inputs, calibrate=False, quantize_actual_values=True
        )
        # an integer implementation follows, see Case 2
        # ....

API

API Overview

Modules

Classes

Functions

concrete.ml.onnx

module concrete.ml.onnx

ONNX module.

Global Variables

  • ops_impl

  • onnx_utils

  • convert

  • onnx_model_manipulations

concrete.ml.onnx.onnx_model_manipulations

module concrete.ml.onnx.onnx_model_manipulations

Some code to manipulate models.


function simplify_onnx_model

Simplify an ONNX model, removes unused Constant nodes and Identity nodes.

Args:

  • onnx_model (onnx.ModelProto): the model to simplify.


function remove_unused_constant_nodes

Remove unused Constant nodes in the provided onnx model.

Args:

  • onnx_model (onnx.ModelProto): the model for which we want to remove unused Constant nodes.


function remove_identity_nodes

Remove identity nodes from a model.

Args:

  • onnx_model (onnx.ModelProto): the model for which we want to remove Identity nodes.


function keep_following_outputs_discard_others

Keep the outputs given in outputs_to_keep and remove the others from the model.

Args:

  • onnx_model (onnx.ModelProto): the ONNX model to modify.

  • outputs_to_keep (Iterable[str]): the outputs to keep by name.


function remove_node_types

Remove unnecessary nodes from the ONNX graph.

Args:

  • onnx_model (onnx.ModelProto): The ONNX model to modify.

  • op_types_to_remove (List[str]): The node types to remove from the graph.

Raises:

  • ValueError: Wrong replacement by an Identity node.


function clean_graph_after_node

Clean the graph of the onnx model by removing nodes after the given node name.

Args:

  • onnx_model (onnx.ModelProto): The onnx model.

  • node_name (str): The node's name whose following nodes will be removed.

concrete.ml.deployment

module concrete.ml.deployment

Module for deployment of the FHE model.

Global Variables

  • fhe_client_server

concrete.ml.common.debugging.custom_assert

module concrete.ml.common.debugging.custom_assert

Provide some variants of assert.


function assert_true

Provide a custom assert to check that the condition is True.

Args:

  • condition (bool): the condition. If False, raise AssertionError

  • on_error_msg (str): optional message to clarify the error, in case of error

  • error_type (Type[Exception]): the type of error to raise if the condition is not fulfilled. Defaults to AssertionError


function assert_false

Provide a custom assert to check that the condition is False.

Args:

  • condition (bool): the condition. If True, raise AssertionError

  • on_error_msg (str): optional message to clarify the error, in case of error

  • error_type (Type[Exception]): the type of error to raise if the condition is not fulfilled. Defaults to AssertionError


function assert_not_reached

Provide a custom assert to check that a piece of code is never reached.

Args:

  • on_error_msg (str): message to clarify the error

  • error_type (Type[Exception]): the type of error to raise if the condition is not fulfilled. Defaults to AssertionError
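For illustration, a minimal usage sketch based on the signatures documented in this module:

from concrete.ml.common.debugging.custom_assert import assert_true

n_bits = 4  # example value
# Raise a ValueError instead of the default AssertionError if the condition fails
assert_true(isinstance(n_bits, int), "n_bits must be an integer", error_type=ValueError)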

concrete.ml.deployment.fhe_client_server

module concrete.ml.deployment.fhe_client_server

APIs for FHE deployment.

Global Variables

  • CML_VERSION

  • AVAILABLE_MODEL


class FHEModelServer

Server API to load and run the FHE circuit.

method __init__

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved


method load

Load the circuit.


method run

Run the model on the server over encrypted data.

Args:

  • serialized_encrypted_quantized_data (cnp.PublicArguments): the encrypted, quantized and serialized data

  • serialized_evaluation_keys (cnp.EvaluationKeys): the serialized evaluation keys

Returns:

  • cnp.PublicResult: the result of the model


class FHEModelDev

Dev API to save the model and then load and run the FHE circuit.

method __init__

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved

  • model (Any): the model to use for the FHE API


method save

Export all needed artifacts for the client and server.

Raises:

  • Exception: path_dir is not empty


class FHEModelClient

Client API to encrypt and decrypt FHE data.

method __init__

Initialize the FHE API.

Args:

  • path_dir (str): the path to the directory where the circuit is saved

  • key_dir (str): the path to the directory where the keys are stored


method deserialize_decrypt

Deserialize and decrypt the values.

Args:

  • serialized_encrypted_quantized_result (cnp.PublicArguments): the serialized, encrypted and quantized result

Returns:

  • numpy.ndarray: the decrypted and deserialized values


method deserialize_decrypt_dequantize

Deserialize, decrypt and dequantize the values.

Args:

  • serialized_encrypted_quantized_result (cnp.PublicArguments): the serialized, encrypted and quantized result

Returns:

  • numpy.ndarray: the decrypted (dequantized) values


method generate_private_and_evaluation_keys

Generate the private and evaluation keys.

Args:

  • force (bool): if True, regenerate the keys even if they already exist


method get_serialized_evaluation_keys

Get the serialized evaluation keys.

Returns:

  • cnp.EvaluationKeys: the evaluation keys


method load

Load the quantizers along with the FHE specs.


method quantize_encrypt_serialize

Quantize, encrypt and serialize the values.

Args:

  • x (numpy.ndarray): the values to quantize, encrypt and serialize

Returns:

  • cnp.PublicArguments: the quantized, encrypted and serialized values
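To show how these three classes fit together, here is a minimal sketch of the full deployment flow; fhe_model (a compiled Concrete-ML model) and x (a clear NumPy input) are placeholders:

from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Developer: export the artifacts of the compiled model `fhe_model`
FHEModelDev(path_dir="deployment", model=fhe_model).save()

# Client: generate keys, then quantize, encrypt and serialize a clear input `x`
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x)

# Server: run the FHE circuit on the encrypted data
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client: decrypt and de-quantize the result
result = client.deserialize_decrypt_dequantize(encrypted_result)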

Hummingbird is a third-party, open-source library that converts machine learning models into tensor computations, and it can export these models to ONNX. The list of supported models can be found in the Hummingbird documentation.

Concrete-ML uses Skorch to implement multi-layer, fully-connected PyTorch neural networks in a way that is compatible with the Scikit-learn API.

Skorch allows the user to easily create a classifier or regressor around a neural network (NN), implemented in Torch as a nn.Module, which is used by Concrete-ML to provide a fully-connected multi-layer NN with a configurable number of layers and optional pruning (see the pruning and neural network documentation for more information).

Under the hood, Concrete-ML uses a Skorch wrapper around a single PyTorch module, SparseQuantNeuralNetImpl. More information can be found in the API reference below.

Brevitas is a quantization aware learning toolkit built on top of PyTorch. It provides quantization layers that are one-to-one equivalents to PyTorch layers, but also contain operations that perform the quantization during training.

PyTorch floating-point versions of univariate functions can be used, e.g. torch.relu, nn.BatchNormalization2D, torch.max (encrypted vs. constant), torch.add and torch.exp. See the PyTorch supported layers page for a full list.

The "mixed integer" mode used in Concrete-ML neural networks is based on the "integer only" Brevitas quantization that makes both weights and activations representable as integers during training. However, through the use of lookup tables in Concrete-ML, floating-point univariate PyTorch functions are supported.

For more examples, see the QuantizationAwareTraining.ipynb and ConvolutionalNeuralNetwork.ipynb notebooks, or go to the MNIST use-case example.

You can also refer to the SparseQuantNeuralNetImpl class, which is the basis of the built-in NeuralNetworkClassifier.

The ONNX import section gave an overview of the conversion of a generic ONNX graph to an FHE-compatible Concrete-ML op-graph. This section describes the implementation of operations in the Concrete-ML op-graph and the way floating point can be used in some parts of the op-graphs through table lookup operations.

Since machine learning models use floating-point inputs and weights, they first need to be converted to integers using quantization.

This figure shows that the QuantizedOp has a body that implements the computation of the operation, following the ONNX spec. The operation's body can take either integer or float inputs and can output float or integer values. Two quantizers are attached to the operation: one that takes float inputs and produces integer inputs and one that does the same for the output.

: Module for shared data structures and code.

: Check and conversion tools.

: Module for debugging.

: Provide some variants of assert.

: Utils that can be re-used by other pieces of code in the module.

: Module for deployment of the FHE model.

: APIs for FHE deployment.

: ONNX module.

: ONNX conversion related code.

: Some code to manipulate models.

: Utils to interpret an ONNX model with numpy.

: ONNX ops implementation in python + numpy.

: Modules for quantization.

: Base Quantized Op class that implements quantization for a float numpy op.

: Post Training Quantization methods.

: QuantizedModule API.

: Quantized versions of the ONNX operators for post training quantization.

: Quantization utilities for a numpy array/tensor.

: Import sklearn models.

: Module that contains base classes for our libraries estimators.

: Implement sklearn's Generalized Linear Models (GLM).

: Implement sklearn linear model.

: Protocols.

: Scikit-learn interface for concrete quantized neural networks.

: Implements RandomForest models.

: Implement Support Vector Machine.

: Implement torch module.

: Implement the sklearn tree models.

: Implements the conversion of a tree model to a numpy function.

: Implements XGBoost models.

: Modules for torch to numpy conversion.

: torch compilation function.

: A torch to numpy module.

: File to manage the version of the package.

: Client API to encrypt and decrypt FHE data.

: Dev API to save the model and then load and run the FHE circuit.

: Server API to load and run the FHE circuit.

: A mixed quantized-raw valued onnx function.

: Base class for quantized ONNX ops implemented in numpy.

: Base ONNX to Concrete ML computation graph conversion class.

: Post-training Affine Quantization.

: Converter of Quantization Aware Training networks.

: Inference for a quantized model.

: Quantized Abs op.

: Quantized Addition operator.

: Quantized Average Pooling op.

: Quantized Batch normalization with encrypted input and in-the-clear normalization params.

: Brevitas uniform quantization with encrypted input.

: Cast the input to the required data type.

: Quantized Celu op.

: Quantized clip op.

: Quantized Conv op.

: Div operator /.

: Quantized Elu op.

: Quantized erf op.

: Quantized Exp op.

: Quantized flatten for encrypted inputs.

: Quantized Gemm op.

: Comparison operator >.

: Comparison operator >=.

: Quantized HardSigmoid op.

: Quantized Hardswish op.

: Quantized Identity op.

: Quantized LeakyRelu op.

: Comparison operator <.

: Comparison operator <=.

: Quantized Log op.

: Quantized MatMul op.

: Multiplication operator.

: Quantized Not op.

: Or operator ||.

: Quantized PRelu op.

: Quantized Padding op.

: Quantized pow op.

: ReduceSum with encrypted input.

: Quantized Relu op.

: Quantized Reshape op.

: Quantized round op.

: Quantized Selu op.

: Quantized sigmoid op.

: Quantized Softplus op.

: Subtraction operator.

: Quantized Tanh op.

: Transpose operator for quantized inputs.

: Where operator on quantized arrays.

: Calibration set statistics.

: Options for quantization.

: Abstraction of quantized array.

: Quantization parameters for uniform quantization.

: Uniform quantizer.

: Mixin class for tree-based classifiers.

: Mixin class for tree-based estimators.

: Mixin class for tree-based regressors.

: Mixin that provides quantization for a torch module and follows the Estimator API.

: A Mixin class for sklearn linear classifiers with FHE.

: A Mixin class for sklearn linear models with FHE.

: A Gamma regression model with FHE.

: A Poisson regression model with FHE.

: A Tweedie regression model with FHE.

: An ElasticNet regression model with FHE.

: A Lasso regression model with FHE.

: A linear regression model with FHE.

: A logistic regression model with FHE.

: A Ridge regression model with FHE.

: Concrete classifier protocol.

: A Concrete Estimator Protocol.

: Concrete regressor protocol.

: Quantizer Protocol.

: A mixin with a helpful modification to a skorch estimator that fixes the module type.

: Scikit-learn interface for quantized FHE compatible neural networks.

: Scikit-learn interface for quantized FHE compatible neural networks.

: Mixin class that adds quantization features to Skorch NN estimators.

: Sparse Quantized Neural Network classifier.

: Implements the RandomForest classifier.

: Implements the RandomForest regressor.

: A Classification Support Vector Machine (SVM).

: A Regression Support Vector Machine (SVM).

: Implements the sklearn DecisionTreeClassifier.

: Implements the sklearn DecisionTreeClassifier.

: Task enumerate.

: Implements the XGBoost classifier.

: Implements the XGBoost regressor.

: General interface to transform a torch.nn.Module to numpy module.

: sklearn.utils.check_X_y with an assert.

: sklearn.utils.check_array with an assert.

: Provide a custom assert to check that the condition is False.

: Provide a custom assert to check that a piece of code is never reached.

: Provide a custom assert to check that the condition is True.

: Generate a proxy function for a function accepting only *args type arguments.

: Return the ONNX opset_version.

: Sanitize arg_name, replacing invalid chars by _.

: Get the numpy equivalent forward of the provided ONNX model.

: Get the numpy equivalent forward of the provided torch Module.

: Clean the graph of the onnx model by removing nodes after the given node name.

: Keep the outputs given in outputs_to_keep and remove the others from the model.

: Remove identity nodes from a model.

: Remove unnecessary nodes from the ONNX graph.

: Remove unused Constant nodes in the provided onnx model.

: Simplify an ONNX model, removes unused Constant nodes and Identity nodes.

: Execute the provided ONNX graph on the given inputs.

: Get the attribute from an ONNX AttributeProto.

: Construct the qualified name of the ONNX operator.

: Cast values to floating points.

: Compute abs in numpy according to ONNX spec.

: Compute acos in numpy according to ONNX spec.

: Compute acosh in numpy according to ONNX spec.

: Compute add in numpy according to ONNX spec.

: Compute asin in numpy according to ONNX spec.

: Compute sinh in numpy according to ONNX spec.

: Compute atan in numpy according to ONNX spec.

: Compute atanh in numpy according to ONNX spec.

: Compute the batch normalization of the input tensor.

: Execute ONNX cast in Numpy.

: Compute celu in numpy according to ONNX spec.

: Return the constant passed as a kwarg.

: Compute cos in numpy according to ONNX spec.

: Compute cosh in numpy according to ONNX spec.

: Compute div in numpy according to ONNX spec.

: Compute elu in numpy according to ONNX spec.

: Compute equal in numpy according to ONNX spec.

: Compute erf in numpy according to ONNX spec.

: Compute exponential in numpy according to ONNX spec.

: Flatten a tensor into a 2d array.

: Compute greater in numpy according to ONNX spec.

: Compute greater in numpy according to ONNX spec and cast outputs to floats.

: Compute greater or equal in numpy according to ONNX spec.

: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.

: Compute hardsigmoid in numpy according to ONNX spec.

: Compute hardswish in numpy according to ONNX spec.

: Compute identity in numpy according to ONNX spec.

: Compute leakyrelu in numpy according to ONNX spec.

: Compute less in numpy according to ONNX spec.

: Compute less in numpy according to ONNX spec and cast outputs to floats.

: Compute less or equal in numpy according to ONNX spec.

: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.

: Compute log in numpy according to ONNX spec.

: Compute matmul in numpy according to ONNX spec.

: Compute mul in numpy according to ONNX spec.

: Compute not in numpy according to ONNX spec.

: Compute not in numpy according to ONNX spec and cast outputs to floats.

: Compute or in numpy according to ONNX spec.

: Compute or in numpy according to ONNX spec and cast outputs to floats.

: Compute pow in numpy according to ONNX spec.

: Compute relu in numpy according to ONNX spec.

: Compute round in numpy according to ONNX spec.

: Compute selu in numpy according to ONNX spec.

: Compute sigmoid in numpy according to ONNX spec.

: Compute sin in numpy according to ONNX spec.

: Compute sinh in numpy according to ONNX spec.

: Compute softmax in numpy according to ONNX spec.

: Compute softplus in numpy according to ONNX spec.

: Compute sub in numpy according to ONNX spec.

: Compute tan in numpy according to ONNX spec.

: Compute tanh in numpy according to ONNX spec.

: Compute thresholdedrelu in numpy according to ONNX spec.

: Transpose in numpy according to ONNX spec.

: Compute the equivalent of numpy.where.

: Compute the equivalent of numpy.where.

: Decorate a numpy onnx function to flag the raw/non quantized inputs.

: Compute Average Pooling using Torch.

: Fill a parameter set structure from kwargs parameters.

: Convert the tree inference to a numpy functions using Hummingbird.

: Compile a Brevitas Quantization Aware Training model.

: Compile a torch module into an FHE equivalent.

: Compile a torch module into an FHE equivalent.

: Convert a torch tensor or a numpy array to a numpy array.

concrete.ml.common
concrete.ml.common.check_inputs
check_inputs.check_X_y_and_assert
check_inputs.check_array_and_assert
concrete.ml.common.debugging
concrete.ml.common.utils
utils.generate_proxy_function
utils.get_onnx_opset_version
utils.replace_invalid_arg_name_chars
concrete.ml.onnx
simplify_onnx_model(onnx_model: ModelProto)
remove_unused_constant_nodes(onnx_model: ModelProto)
remove_identity_nodes(onnx_model: ModelProto)
keep_following_outputs_discard_others(
    onnx_model: ModelProto,
    outputs_to_keep: Iterable[str]
)
remove_node_types(onnx_model: ModelProto, op_types_to_remove: List[str])
clean_graph_after_node(onnx_model: ModelProto, node_name: str)
concrete.ml.onnx.onnx_model_manipulations
onnx_model_manipulations.clean_graph_after_node
onnx_model_manipulations.keep_following_outputs_discard_others
onnx_model_manipulations.remove_identity_nodes
onnx_model_manipulations.remove_node_types
onnx_model_manipulations.remove_unused_constant_nodes
onnx_model_manipulations.simplify_onnx_model
concrete.ml.deployment
assert_true(
    condition: bool,
    on_error_msg: str = '',
    error_type: Type[Exception] = <class 'AssertionError'>
)
assert_false(
    condition: bool,
    on_error_msg: str = '',
    error_type: Type[Exception] = <class 'AssertionError'>
)
assert_not_reached(
    on_error_msg: str,
    error_type: Type[Exception] = <class 'AssertionError'>
)
concrete.ml.common.debugging.custom_assert
custom_assert.assert_false
custom_assert.assert_not_reached
custom_assert.assert_true
__init__(path_dir: str)
load()
run(
    serialized_encrypted_quantized_data: PublicArguments,
    serialized_evaluation_keys: EvaluationKeys
) → PublicResult
__init__(path_dir: str, model: Any = None)
save()
__init__(path_dir: str, key_dir: str = None)
deserialize_decrypt(
    serialized_encrypted_quantized_result: PublicArguments
) → ndarray
deserialize_decrypt_dequantize(
    serialized_encrypted_quantized_result: PublicArguments
) → ndarray
generate_private_and_evaluation_keys(force=False)
get_serialized_evaluation_keys() → EvaluationKeys
load()
quantize_encrypt_serialize(x: ndarray) → PublicArguments
concrete.ml.deployment.fhe_client_server
fhe_client_server.FHEModelClient
fhe_client_server.FHEModelDev
fhe_client_server.FHEModelServer
concrete.ml.quantization.quantized_module
quantized_module.QuantizedModule
concrete.ml.quantization.post_training
post_training.ONNXConverter
post_training.PostTrainingAffineQuantization
post_training.PostTrainingQATImporter
concrete.ml.quantization.base_quantized_op
base_quantized_op.QuantizedOp
QuantizedArray
UniformQuantizer
concrete.ml.quantization.quantizers
quantizers.MinMaxQuantizationStats
quantizers.QuantizationOptions
quantizers.QuantizedArray
quantizers.UniformQuantizationParameters
quantizers.UniformQuantizer
quantizers.fill_from_kwargs
concrete.ml.onnx.ops_impl
ops_impl.ONNXMixedFunction
ops_impl.cast_to_float
ops_impl.numpy_abs
ops_impl.numpy_acos
ops_impl.numpy_acosh
ops_impl.numpy_add
ops_impl.numpy_asin
ops_impl.numpy_asinh
ops_impl.numpy_atan
ops_impl.numpy_atanh
ops_impl.numpy_batchnorm
ops_impl.numpy_cast
ops_impl.numpy_celu
ops_impl.numpy_constant
ops_impl.numpy_cos
ops_impl.numpy_cosh
ops_impl.numpy_div
ops_impl.numpy_elu
ops_impl.numpy_equal
ops_impl.numpy_erf
ops_impl.numpy_exp
ops_impl.numpy_flatten
ops_impl.numpy_greater
ops_impl.numpy_greater_float
ops_impl.numpy_greater_or_equal
ops_impl.numpy_greater_or_equal_float
ops_impl.numpy_hardsigmoid
ops_impl.numpy_hardswish
ops_impl.numpy_identity
ops_impl.numpy_leakyrelu
ops_impl.numpy_less
ops_impl.numpy_less_float
ops_impl.numpy_less_or_equal
ops_impl.numpy_less_or_equal_float
ops_impl.numpy_log
ops_impl.numpy_matmul
ops_impl.numpy_mul
ops_impl.numpy_not
ops_impl.numpy_not_float
ops_impl.numpy_or
ops_impl.numpy_or_float
ops_impl.numpy_pow
ops_impl.numpy_relu
ops_impl.numpy_round
ops_impl.numpy_selu
ops_impl.numpy_sigmoid
ops_impl.numpy_sin
ops_impl.numpy_sinh
ops_impl.numpy_softmax
ops_impl.numpy_softplus
ops_impl.numpy_sub
ops_impl.numpy_tan
ops_impl.numpy_tanh
ops_impl.numpy_thresholdedrelu
ops_impl.numpy_transpose
ops_impl.numpy_where
ops_impl.numpy_where_body
ops_impl.onnx_func_raw_args
ops_impl.torch_avgpool
LinearRegression
LogisticRegression
Lasso
Ridge
ElasticNet
concrete.ml.sklearn.linear_model
linear_model.ElasticNet
linear_model.Lasso
linear_model.LinearRegression
linear_model.LogisticRegression
linear_model.Ridge
concrete.ml.sklearn
PoissonRegressor
TweedieRegressor
GammaRegressor
concrete.ml.sklearn.glm
glm.GammaRegressor
glm.PoissonRegressor
glm.TweedieRegressor
concrete.ml.sklearn.protocols
protocols.ConcreteBaseClassifierProtocol
protocols.ConcreteBaseEstimatorProtocol
protocols.ConcreteBaseRegressorProtocol
protocols.Quantizer
concrete.ml.sklearn.base
base.BaseTreeClassifierMixin
base.BaseTreeEstimatorMixin
base.BaseTreeRegressorMixin
base.QuantizedTorchEstimatorMixin
base.SklearnLinearClassifierMixin
base.SklearnLinearModelMixin
concrete.ml.quantization
concrete.ml.onnx.convert
convert.get_equivalent_numpy_forward
convert.get_equivalent_numpy_forward_and_onnx_model
concrete.ml.onnx.onnx_utils
onnx_utils.execute_onnx_with_numpy
onnx_utils.get_attribute
onnx_utils.get_op_name

concrete.ml.quantization.quantized_module

module concrete.ml.quantization.quantized_module

QuantizedModule API.

Global Variables

  • DEFAULT_P_ERROR_PBS


class QuantizedModule

Inference for a quantized model.

method __init__

__init__(
    ordered_module_input_names: Iterable[str] = None,
    ordered_module_output_names: Iterable[str] = None,
    quant_layers_dict: Dict[str, Tuple[Tuple[str, ], QuantizedOp]] = None
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property is_compiled

Return the compiled status of the module.

Returns:

  • bool: the compiled status of the module.


property onnx_model

Get the ONNX model.


Returns:

  • _onnx_model (onnx.ModelProto): the ONNX model


property post_processing_params

Get the post-processing parameters.

Returns:

  • Dict[str, Any]: the post-processing parameters


method compile

compile(
    q_inputs: Union[Tuple[ndarray, ], ndarray],
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the forward function of the module.

Args:

  • q_inputs (Union[Tuple[numpy.ndarray, ...], numpy.ndarray]): Needed for tracing and building the boundaries.

  • configuration (Optional[Configuration]): Configuration object to use during compilation

  • compilation_artifacts (Optional[DebugArtifacts]): Artifacts object to fill during

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.

  • use_virtual_lib (bool): set to use the so called virtual lib simulating FHE computation. Defaults to False.

  • p_error (Optional[float]): probability of error of a PBS.

Returns:

  • Circuit: the compiled Circuit.


method dequantize_output

dequantize_output(qvalues: ndarray) → ndarray

Take the last layer q_out and use its dequant function.

Args:

  • qvalues (numpy.ndarray): Quantized values of the last layer.

Returns:

  • numpy.ndarray: Dequantized values of the last layer.


method forward

forward(*qvalues: ndarray) → ndarray

Forward pass with numpy function only.

Args:

  • *qvalues (numpy.ndarray): numpy.array containing the quantized values.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method forward_and_dequant

forward_and_dequant(*q_x: ndarray) → ndarray

Forward pass with numpy function only plus dequantization.

Args:

  • *q_x (numpy.ndarray): numpy.ndarray containing the quantized input values. Requires the input dtype to be uint8.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method post_processing

post_processing(qvalues: ndarray) → ndarray

Post-processing of the quantized output.

Args:

  • qvalues (numpy.ndarray): numpy.ndarray containing the quantized input values.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method quantize_input

quantize_input(*values: ndarray) → Union[ndarray, Tuple[ndarray, ]]

Take the inputs in fp32 and quantize them using the learned quantization parameters.

Args:

  • *values (numpy.ndarray): Floating point values.

Returns:

  • Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]: Quantized (numpy.uint32) values.


method set_inputs_quantization_parameters

set_inputs_quantization_parameters(*input_q_params: UniformQuantizer)

Set the quantization parameters for the module's inputs.

Args:

  • *input_q_params (UniformQuantizer): The quantizer(s) for the module.

concrete.ml.quantization.post_training

module concrete.ml.quantization.post_training

Post Training Quantization methods.

Global Variables

  • ONNX_OPS_TO_NUMPY_IMPL

  • DEFAULT_MODEL_BITS

  • ONNX_OPS_TO_QUANTIZED_IMPL


class ONNXConverter

Base ONNX to Concrete ML computation graph conversion class.

This class provides a method to parse an ONNX graph and apply several transformations. First, it creates QuantizedOps for each ONNX graph op. These quantized ops have calibrated quantizers that are useful when the operators work on integer data or when the output of the ops is the output of the encrypted program. For operators that compute in float and will be merged to TLUs, these quantizers are not used. Second, this converter creates quantized tensors for initializer and weights stored in the graph.

This class should be sub-classed to provide specific calibration and quantization options depending on the usage (Post-training quantization vs Quantization Aware training).

Arguments:

  • n_bits (int, Dict[str, int]): number of bits for quantization. It can be a single value or a dictionary with the following keys: "op_inputs" and "op_weights" (mandatory), and "model_inputs" and "model_outputs" (optional, defaulting to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights", and the maximum between this value and the default value (5) is assigned to "model_inputs" and "model_outputs". This default is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network outputs, "model_inputs" gives the precision of the network inputs, and "op_inputs" and "op_weights" control the quantization of the inputs and weights of all layers. A sketch of such a configuration is shown after this list.

  • numpy_model (NumpyModule): Model in numpy.

  • is_signed (bool): Whether the weights of the layers can be signed. Currently, only the weights can be signed.
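For illustration, a possible n_bits configuration matching the description above (the specific values are arbitrary):

# Single value: assigned to "op_inputs" and "op_weights"
n_bits = 3

# Dictionary form: each precision given explicitly
n_bits = {
    "op_inputs": 3,
    "op_weights": 3,
    "model_inputs": 5,
    "model_outputs": 5,
}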

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    is_signed: bool = False
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module


class PostTrainingAffineQuantization

Post-training Affine Quantization.

Create the quantized version of the passed numpy module.

Args:

  • n_bits (int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, then it should contain "model_inputs", "op_inputs", "op_weights" and "model_outputs" keys with corresponding number of quantization bits for: - model_inputs : number of bits for model input - op_inputs : number of bits to quantize layer input values - op_weights: learned parameters or constants in the network - model_outputs: final model output quantization bits

  • numpy_model (NumpyModule): Model in numpy.

  • is_signed: Whether the weights of the layers can be signed. Currently, only the weights can be signed.

Returns:

  • QuantizedModule: A quantized version of the numpy model.

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    is_signed: bool = False
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module


class PostTrainingQATImporter

Converter of Quantization Aware Training networks.

This class provides specific configuration for QAT networks during ONNX network conversion to Concrete ML computation graphs.

method __init__

__init__(
    n_bits: Union[int, Dict],
    numpy_model: NumpyModule,
    is_signed: bool = False
)

property n_bits_model_inputs

Get the number of bits to use for the quantization of the first layer's output.

Returns:

  • n_bits (int): number of bits for input quantization


property n_bits_model_outputs

Get the number of bits to use for the quantization of the last layer's output.

Returns:

  • n_bits (int): number of bits for output quantization


property n_bits_op_inputs

Get the number of bits to use for the quantization of any operators' inputs.

Returns:

  • n_bits (int): number of bits for the quantization of the operators' inputs


property n_bits_op_weights

Get the number of bits to use for the quantization of any constants (usually weights).

Returns:

  • n_bits (int): number of bits for quantizing constants used by operators


method quantize_module

quantize_module(*calibration_data: ndarray) → QuantizedModule

Quantize numpy module.

Following https://arxiv.org/abs/1712.05877 guidelines.

Args:

  • *calibration_data (numpy.ndarray): Data that will be used to compute the bounds, scales and zero point values for every quantized object.

Returns:

  • QuantizedModule: Quantized numpy module

concrete.ml.quantization.base_quantized_op

module concrete.ml.quantization.base_quantized_op

Base Quantized Op class that implements quantization for a float numpy op.

Global Variables

  • ONNX_OPS_TO_NUMPY_IMPL

  • ALL_QUANTIZED_OPS

  • ONNX_OPS_TO_QUANTIZED_IMPL

  • DEFAULT_MODEL_BITS


class QuantizedOp

Base class for quantized ONNX ops implemented in numpy.

Args:

  • n_bits_output (int): The number of bits to use for the quantization of the output

  • int_input_names (Set[str]): The set of names of integer tensors that are inputs to this op

  • constant_inputs (Optional[Union[Dict[str, Any], Dict[int, Any]]]): The constant tensors that are inputs to this op

  • input_quant_opts (QuantizationOptions): Input quantizer options, determine the quantization that is applied to input tensors (that are not constants)

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

method calibrate

calibrate(*inputs: ndarray) → ndarray

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method call_impl

call_impl(*inputs: ndarray, **attrs) → ndarray

Call self.impl to centralize mypy bug workaround.

Args:

  • *inputs (numpy.ndarray): real valued inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • numpy.ndarray: return value of self.impl


method can_fuse

can_fuse() → bool

Determine if the operator impedes graph fusion.

This function shall be overloaded by inheriting classes to test self._int_input_names, to determine whether the operation can be fused to a TLU or not. For example an operation that takes inputs produced by a unique integer tensor can be fused to a TLU. Example: f(x) = x * (x + 1) can be fused. A function that does f(x) = x * (x @ w + 1) can't be fused.

Returns:

  • bool: whether this instance of the QuantizedOp produces Concrete Numpy code that can be fused to TLUs


classmethod must_quantize_input

must_quantize_input(input_name_or_idx: int) → bool

Determine if an input must be quantized.

Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.

Args:

  • input_name_or_idx (int): Index of the input to check.

Returns:

  • result (bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.


method prepare_output

prepare_output(qoutput_activation: ndarray) → QuantizedArray

Quantize the output of the activation function.

The calibrate method needs to be called with sample data before using this function.

Args:

  • qoutput_activation (numpy.ndarray): Output of the activation function.

Returns:

  • QuantizedArray: Quantized output.


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Execute the quantized forward.

Args:

  • *q_inputs (QuantizedArray): Quantized inputs.

  • **attrs: the QuantizedOp attributes.

Returns:

  • QuantizedArray: The returned quantized value.

concrete.ml.quantization.quantizers

module concrete.ml.quantization.quantizers

Quantization utilities for a numpy array/tensor.

Global Variables

  • STABILITY_CONST


function fill_from_kwargs

fill_from_kwargs(obj, klass, **kwargs)

Fill a parameter set structure from kwargs parameters.

Args:

  • obj: an object of type klass, if None the object is created if any of the type's members appear in the kwargs

  • klass: the type of object to fill

  • kwargs: parameter names and values to fill into an instance of the klass type

Returns:

  • obj: an object of type klass

  • kwargs: remaining parameter names and values that were not filled into obj

Raises:

  • TypeError: if the types of the parameters in kwargs could not be converted to the corresponding types of members of klass


class QuantizationOptions

Options for quantization.

Determines the number of bits for quantization and the method of quantization of the values. Signed quantization allows negative quantized values. Symmetric quantization assumes the float values are distributed symmetrically around x=0 and assigns signed values around 0 to the float values. QAT (quantization aware training) quantization assumes the values are already quantized, taking a discrete set of values, and assigns these values to integers, computing only the scale.

method __init__

__init__(
    n_bits,
    is_signed: bool = False,
    is_symmetric: bool = False,
    is_qat: bool = False
) → None
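For example, a minimal instantiation following the signature above:

from concrete.ml.quantization.quantizers import QuantizationOptions

# 4-bit signed, symmetric quantization options (e.g. for weights)
opts = QuantizationOptions(4, is_signed=True, is_symmetric=True)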

property quant_options

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


method copy_opts

copy_opts(opts)

Copy the options from a different structure.

Args:

  • opts (QuantizationOptions): structure to copy parameters from.


class MinMaxQuantizationStats

Calibration set statistics.

This class stores the statistics for the calibration set or for a calibration data batch. Currently we only store min/max to determine the quantization range. The min/max are computed from the calibration set.


property quant_stats

Get a copy of the calibration set statistics.

Returns:

  • MinMaxQuantizationStats: a copy of the current quantization stats


method compute_quantization_stats

compute_quantization_stats(values: ndarray) → None

Compute the calibration set quantization statistics.

Args:

  • values (numpy.ndarray): Calibration set on which to compute statistics.


method copy_stats

copy_stats(stats) → None

Copy the statistics from a different structure.

Args:

  • stats (MinMaxQuantizationStats): structure to copy statistics from.


class UniformQuantizationParameters

Quantization parameters for uniform quantization.

This class stores the parameters used for quantizing real values to discrete integer values. The parameters are computed from quantization options and quantization statistics.


property quant_params

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


method compute_quantization_parameters

compute_quantization_parameters(
    options: QuantizationOptions,
    stats: MinMaxQuantizationStats
) → None

Compute the quantization parameters.

Args:

  • options (QuantizationOptions): quantization options set

  • stats (MinMaxQuantizationStats): calibrated statistics for quantization


method copy_params

copy_params(params) → None

Copy the parameters from a different structure.

Args:

  • params (UniformQuantizationParameters): parameter structure to copy


class UniformQuantizer

Uniform quantizer.

Contains all information necessary for uniform quantization and provides quantization/dequantization functionality on numpy arrays.

Args:

  • options (QuantizationOptions): Quantization options set

  • stats (Optional[MinMaxQuantizationStats]): Quantization batch statistics set

  • params (Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)

method __init__

__init__(
    options: QuantizationOptions = None,
    stats: Optional[MinMaxQuantizationStats] = None,
    params: Optional[UniformQuantizationParameters] = None,
    **kwargs
)

property quant_options

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


property quant_params

Get a copy of the quantization parameters.

Returns:

  • UniformQuantizationParameters: a copy of the current quantization parameters


property quant_stats

Get a copy of the calibration set statistics.

Returns:

  • MinMaxQuantizationStats: a copy of the current quantization stats


method compute_quantization_parameters

compute_quantization_parameters(
    options: QuantizationOptions,
    stats: MinMaxQuantizationStats
) → None

Compute the quantization parameters.

Args:

  • options (QuantizationOptions): quantization options set

  • stats (MinMaxQuantizationStats): calibrated statistics for quantization


method compute_quantization_stats

compute_quantization_stats(values: ndarray) → None

Compute the calibration set quantization statistics.

Args:

  • values (numpy.ndarray): Calibration set on which to compute statistics.


method copy_opts

copy_opts(opts)

Copy the options from a different structure.

Args:

  • opts (QuantizationOptions): structure to copy parameters from.


method copy_params

copy_params(params) → None

Copy the parameters from a different structure.

Args:

  • params (UniformQuantizationParameters): parameter structure to copy


method copy_stats

copy_stats(stats) → None

Copy the statistics from a different structure.

Args:

  • stats (MinMaxQuantizationStats): structure to copy statistics from.


method dequant

dequant(qvalues: ndarray) → ndarray

Dequantize values.

Args:

  • qvalues (numpy.ndarray): integer values to dequantize

Returns:

  • numpy.ndarray: Dequantized float values.


method quant

quant(values: ndarray) → ndarray

Quantize values.

Args:

  • values (numpy.ndarray): float values to quantize

Returns:

  • numpy.ndarray: Integer quantized values.


class QuantizedArray

Abstraction of quantized array.

Contains float values and their quantized integer counter-parts. Quantization is performed by the quantizer member object. Float and int values are kept in sync. Having both types of values is useful since quantized operators in Concrete ML graphs might need one or the other depending on how the operator works (in float or in int). Moreover, when the encrypted function needs to return a value, it must return integer values.

See https://arxiv.org/abs/1712.05877.

Args:

  • values (numpy.ndarray): Values to be quantized.

  • n_bits (int): The number of bits to use for quantization.

  • value_is_float (bool, optional): Whether the passed values are real (float) values or not. If False, the values will be quantized according to the passed scale and zero_point. Defaults to True.

  • options (QuantizationOptions): Quantization options set

  • stats (Optional[MinMaxQuantizationStats]): Quantization batch statistics set

  • params (Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)

  • kwargs: Any member of the options, stats, params sets as a key-value pair. The parameter sets need to be completely parametrized if their members appear in kwargs.

method __init__

__init__(
    n_bits,
    values: Optional[ndarray],
    value_is_float: bool = True,
    options: QuantizationOptions = None,
    stats: Optional[MinMaxQuantizationStats] = None,
    params: Optional[UniformQuantizationParameters] = None,
    **kwargs
)

method dequant

dequant() → ndarray

Dequantize self.qvalues.

Returns:

  • numpy.ndarray: Dequantized values.


method quant

quant() → Union[ndarray, NoneType]

Quantize self.values.

Returns:

  • numpy.ndarray: Quantized values.


method update_quantized_values

update_quantized_values(qvalues: ndarray) → ndarray

Update qvalues to get their corresponding values using the related quantized parameters.

Args:

  • qvalues (numpy.ndarray): Values to replace self.qvalues

Returns:

  • values (numpy.ndarray): Corresponding values


method update_values

update_values(values: ndarray) → ndarray

Update values to get their corresponding qvalues using the related quantized parameters.

Args:

  • values (numpy.ndarray): Values to replace self.values

Returns:

  • qvalues (numpy.ndarray): Corresponding qvalues
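Putting these classes together, here is a small usage sketch of QuantizedArray based on the constructor documented above:

import numpy

from concrete.ml.quantization import QuantizedArray

values = numpy.array([-1.0, -0.5, 0.0, 0.5, 1.0])

# Quantize the float values over 2 bits
q_arr = QuantizedArray(2, values)

print(q_arr.qvalues)    # integer representation
print(q_arr.dequant())  # approximate float reconstruction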

concrete.ml.onnx.ops_impl

module concrete.ml.onnx.ops_impl

ONNX ops implementation in python + numpy.


function cast_to_float

cast_to_float(inputs)

Cast values to floating points.

Args:

  • inputs (Tuple[numpy.ndarray]): The values to consider.

Returns:

  • Tuple[numpy.ndarray]: The float values.


function onnx_func_raw_args

onnx_func_raw_args(*args)

Decorate a numpy onnx function to flag the raw/non quantized inputs.

Args:

  • *args (tuple[Any]): function argument names

Returns:

  • result (ONNXMixedFunction): wrapped numpy function with a list of mixed arguments


function numpy_where_body

numpy_where_body(c: ndarray, t: ndarray, f: Union[ndarray, int]) → ndarray

Compute the equivalent of numpy.where.

This function is not mapped to any ONNX operator (as opposed to numpy_where). It is usable by functions which are mapped to ONNX operators, e.g. numpy_div or numpy_where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (numpy.ndarray): False operand.

Returns:

  • numpy.ndarray: numpy.where(c, t, f)


function numpy_where

numpy_where(c: ndarray, t: ndarray, f: ndarray) → Tuple[ndarray]

Compute the equivalent of numpy.where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (numpy.ndarray): False operand.

Returns:

  • numpy.ndarray: numpy.where(c, t, f)


function numpy_add

numpy_add(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute add in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Add-13

Args:

  • a (numpy.ndarray): First operand.

  • b (numpy.ndarray): Second operand.

Returns:

  • Tuple[numpy.ndarray]: Result, has same element type as two inputs


function numpy_constant

numpy_constant(**kwargs)

Return the constant passed as a kwarg.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Constant-13

Args:

  • **kwargs: keyword arguments

Returns:

  • Any: The stored constant.


function numpy_matmul

numpy_matmul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute matmul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#MatMul-13

Args:

  • a (numpy.ndarray): N-dimensional matrix A

  • b (numpy.ndarray): N-dimensional matrix B

Returns:

  • Tuple[numpy.ndarray]: Matrix multiply results from A * B


function numpy_relu

numpy_relu(x: ndarray) → Tuple[ndarray]

Compute relu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Relu-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sigmoid

numpy_sigmoid(x: ndarray) → Tuple[ndarray]

Compute sigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Sigmoid-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_softmax

numpy_softmax(x, axis=1, keepdims=True)

Compute softmax in numpy according to ONNX spec.

Softmax is currently not supported in FHE.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#softmax-13

Args:

  • x (numpy.ndarray): Input tensor

  • axis (None, int, tuple of ints): Axis or axes along which a softmax's sum is performed. If None, it will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. Default to 1.

  • keepdims (bool): If True, the axes which are reduced along the sum are left in the result as dimensions with size one. Default to True.

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_cos

numpy_cos(x: ndarray) → Tuple[ndarray]

Compute cos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Cos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_cosh

numpy_cosh(x: ndarray) → Tuple[ndarray]

Compute cosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Cosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sin

numpy_sin(x: ndarray) → Tuple[ndarray]

Compute sin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Sin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sinh

numpy_sinh(x: ndarray) → Tuple[ndarray]

Compute sinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Sinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tan

numpy_tan(x: ndarray) → Tuple[ndarray]

Compute tan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Tan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tanh

numpy_tanh(x: ndarray) → Tuple[ndarray]

Compute tanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Tanh-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acos

numpy_acos(x: ndarray) → Tuple[ndarray]

Compute acos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Acos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acosh

numpy_acosh(x: ndarray) → Tuple[ndarray]

Compute acosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Acosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asin

numpy_asin(x: ndarray) → Tuple[ndarray]

Compute asin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Asin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asinh

numpy_asinh(x: ndarray) → Tuple[ndarray]

Compute asinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Asinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atan

numpy_atan(x: ndarray) → Tuple[ndarray]

Compute atan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Atan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atanh

numpy_atanh(x: ndarray) → Tuple[ndarray]

Compute atanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Atanh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_elu

numpy_elu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute elu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Elu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor
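
As a reminder of the Elu definition used by the ONNX spec, a plain numpy sketch (alpha only scales the negative branch):

import numpy

def elu_reference(x, alpha=1.0):
    # ELU: x when x > 0, alpha * (exp(x) - 1) otherwise
    return numpy.where(x > 0, x, alpha * (numpy.exp(x) - 1))

print(elu_reference(numpy.array([-1.0, 0.0, 2.0])))  # approx. [-0.632, 0., 2.]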


function numpy_selu

numpy_selu(
    x: ndarray,
    alpha: float = 1.6732632423543772,
    gamma: float = 1.0507009873554805
) → Tuple[ndarray]

Compute selu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Selu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • gamma (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_celu

numpy_celu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute celu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Celu-12

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_leakyrelu

numpy_leakyrelu(x: ndarray, alpha: float = 0.01) → Tuple[ndarray]

Compute leakyrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#LeakyRelu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_thresholdedrelu

numpy_thresholdedrelu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute thresholdedrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#ThresholdedRelu-10

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardsigmoid

numpy_hardsigmoid(
    x: ndarray,
    alpha: float = 0.2,
    beta: float = 0.5
) → Tuple[ndarray]

Compute hardsigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#HardSigmoid-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • beta (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_softplus

numpy_softplus(x: ndarray) → Tuple[ndarray]

Compute softplus in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Softplus-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_abs

numpy_abs(x: ndarray) → Tuple[ndarray]

Compute abs in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Abs-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_div

numpy_div(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute div in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Div-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_mul

numpy_mul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute mul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Mul-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sub

numpy_sub(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute sub in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Sub-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_log

numpy_log(x: ndarray) → Tuple[ndarray]

Compute log in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Log-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_erf

numpy_erf(x: ndarray) → Tuple[ndarray]

Compute erf in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Erf-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardswish

numpy_hardswish(x: ndarray) → Tuple[ndarray]

Compute hardswish in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#hardswish-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_exp

numpy_exp(x: ndarray) → Tuple[ndarray]

Compute exponential in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Exp-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: The exponential of the input tensor computed element-wise


function numpy_equal

numpy_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Equal-11

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not

numpy_not(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not_float

numpy_not_float(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater

numpy_greater(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_float

numpy_greater_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal

numpy_greater_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal_float

numpy_greater_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less

numpy_less(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_float

numpy_less_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal

numpy_less_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal_float

numpy_less_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_identity

numpy_identity(x: ndarray) → Tuple[ndarray]

Compute identity in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Identity-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_transpose

numpy_transpose(x: ndarray, perm=None) → Tuple[ndarray]

Transpose in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Transpose-13

Args:

  • x (numpy.ndarray): Input tensor

  • perm (numpy.ndarray): Permutation of the axes

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function torch_avgpool

torch_avgpool(
    x: ndarray,
    ceil_mode: int,
    kernel_shape: Tuple[int, ],
    pads: Tuple[int, ],
    strides: Tuple[int, ]
) → Tuple[ndarray]

Compute Average Pooling using Torch.

Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.

See: https://github.com/onnx/onnx/blob/release/0.4.x/docs/Operators.md#AveragePool

Args:

  • x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d

  • ceil_mode (int): ONNX rounding parameter, expected 0 (torch style dimension computation)

  • kernel_shape (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int]): stride of the convolution on each axis

Returns:

  • res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html

Raises:

  • AssertionError: if the pooling arguments are wrong
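
To make the torch semantics referred to above concrete, here is a sketch of the equivalent torch call. It illustrates the underlying pooling operation only: torch_avgpool itself takes ONNX-style arguments, with pads given as (begin, end) pairs per spatial axis.

import numpy
import torch

x = numpy.random.rand(1, 3, 8, 8).astype(numpy.float32)  # N x C x H x W

# 2x2 average pooling with stride 2, no padding, torch-style dimension rounding
out = torch.nn.functional.avg_pool2d(
    torch.from_numpy(x), kernel_size=(2, 2), stride=(2, 2), padding=0, ceil_mode=False
)
print(out.shape)  # torch.Size([1, 3, 4, 4])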


function numpy_cast

numpy_cast(data: ndarray, to: int) → Tuple[ndarray]

Execute ONNX cast in Numpy.

Supports only booleans for now, which are converted to integers.

See: https://github.com/onnx/onnx/blob/release/0.4.x/docs/Operators.md#Cast

Args:

  • data (numpy.ndarray): Input encrypted tensor

  • to (int): integer value of the onnx.TensorProto DataType enum

Returns:

  • result (numpy.ndarray): a tensor with the required data type
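
Conceptually, the supported boolean-to-integer conversion amounts to the following numpy cast (a sketch of the semantics, not of the function's internals):

import numpy

mask = numpy.array([True, False, True])
# Boolean tensors are turned into integers so they remain compatible with integer arithmetic
print(mask.astype(numpy.int64))  # [1 0 1]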


function numpy_batchnorm

numpy_batchnorm(
    x: ndarray,
    scale: ndarray,
    bias: ndarray,
    input_mean: ndarray,
    input_var: ndarray,
    epsilon=1e-05,
    momentum=0.9,
    training_mode=0
) → Tuple[ndarray]

Compute the batch normalization of the input tensor.

This can be expressed as:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#BatchNormalization-14

Args:

  • x (numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size, C is the number of channels.

  • scale (numpy.ndarray): scale tensor of shape (C,)

  • bias (numpy.ndarray): bias tensor of shape (C,)

  • input_mean (numpy.ndarray): mean values to use for each input channel, shape (C,)

  • input_var (numpy.ndarray): variance values to use for each input channel, shape (C,)

  • epsilon (float): avoids division by zero

  • momentum (float): momentum used during training of the mean/variance, not used in inference

  • training_mode (int): if the model was exported in training mode this is set to 1, else 0

Returns:

  • numpy.ndarray: Normalized tensor
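
The formula above, written out as a small numpy reference that broadcasts the per-channel statistics over an (N, C, ...) tensor (a sketch, not the library code):

import numpy

def batchnorm_reference(x, scale, bias, input_mean, input_var, epsilon=1e-05):
    # Reshape the (C,) statistics so they broadcast over (N, C, D1, ..., Dn)
    shape = (1, -1) + (1,) * (x.ndim - 2)
    x_norm = (x - input_mean.reshape(shape)) / numpy.sqrt(input_var.reshape(shape) + epsilon)
    return x_norm * scale.reshape(shape) + bias.reshape(shape)

x = numpy.random.rand(2, 3, 4, 4)
out = batchnorm_reference(x, numpy.ones(3), numpy.zeros(3), x.mean((0, 2, 3)), x.var((0, 2, 3)))
print(out.shape)  # (2, 3, 4, 4)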


function numpy_flatten

numpy_flatten(x: ndarray, axis: int = 1) → Tuple[ndarray]

Flatten a tensor into a 2d array.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Flatten-13.

Args:

  • x (numpy.ndarray): tensor to flatten

  • axis (int): axis after which all dimensions will be flattened (axis=0 gives a 1D output)

Returns:

  • result: flattened tensor
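
For example, with the default axis=1 a tensor of shape (N, C, H, W) is flattened to (N, C * H * W). A plain numpy sketch of that reshape, shown only to illustrate the documented semantics:

import numpy

x = numpy.zeros((2, 3, 4, 5))
flattened = numpy.reshape(x, (*x.shape[:1], -1))  # keep the first axis, collapse the rest
print(flattened.shape)  # (2, 60)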


function numpy_or

numpy_or(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_or_float

numpy_or_float(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_round

numpy_round(a: ndarray) → Tuple[ndarray]

Compute round in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Round-11

Note that the ONNX Round operator is actually a rint, since the number of decimals is forced to be 0.

Args:

  • a (numpy.ndarray): Input tensor whose elements to be rounded.

Returns:

  • Tuple[numpy.ndarray]: Output tensor with rounded input elements.
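
In particular, like numpy.rint, the ONNX Round operator rounds halfway cases to the nearest even value:

import numpy

x = numpy.array([0.5, 1.5, 2.4, -2.5])
# Halves are rounded to the nearest even integer (banker's rounding)
print(numpy.rint(x))  # [ 0.  2.  2. -2.]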


function numpy_pow

numpy_pow(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute pow in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/release/0.4.x/docs/Changelog.md#Pow-13

Args:

  • a (numpy.ndarray): Input tensor whose elements to be raised.

  • b (numpy.ndarray): The power to which we want to raise.

Returns:

  • Tuple[numpy.ndarray]: Output tensor.


class ONNXMixedFunction

A mixed quantized-raw valued onnx function.

ONNX functions will take inputs which can be either quantized or float. Some functions only take quantized inputs, but some functions take both types. For mixed functions we need to tag the parameters that do not need quantization. Thus quantized ops can know which inputs are not QuantizedArray and we avoid unnecessary wrapping of float values as QuantizedArrays.

method __init__

__init__(function, non_quant_params: Set[str])

Create the mixed function and raw parameter list.

Args:

  • function (Any): function to be decorated

  • non_quant_params (Set[str]): set of parameters that will not be quantized (stored as numpy.ndarray)

concrete.ml.sklearn.linear_model

module concrete.ml.sklearn.linear_model

Implement sklearn linear model.


class LinearRegression

A linear regression model with FHE.

Arguments:

  • n_bits (int): default is 2.

  • use_sum_workaround (bool): indicate whether the sum workaround should be used. This feature is experimental and should be used carefully. Important note: for now, it only works for a LinearRegression model whose number of features N is a power of 2. More information is available in the QuantizedReduceSum operator. Default to False.

For more details on LinearRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html

method __init__

__init__(
    n_bits=2,
    use_sum_workaround=False,
    fit_intercept=True,
    normalize='deprecated',
    copy_X=True,
    n_jobs=None,
    positive=False
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit

fit(X, y: ndarray, *args, **kwargs) → Any

Fit the FHE linear model.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.

Returns: Any


class ElasticNet

An ElasticNet regression model with FHE.

Arguments:

  • n_bits (int): default is 2.

For more details on ElasticNet please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html

method __init__

__init__(
    n_bits=2,
    alpha=1.0,
    l1_ratio=0.5,
    fit_intercept=True,
    normalize='deprecated',
    copy_X=True,
    positive=False
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


class Lasso

A Lasso regression model with FHE.

Arguments:

  • n_bits (int): default is 2.

For more details on Lasso please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html

method __init__

__init__(
    n_bits=2,
    alpha: float = 1.0,
    fit_intercept=True,
    normalize='deprecated',
    copy_X=True,
    positive=False
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


class Ridge

A Ridge regression model with FHE.

Arguments:

  • n_bits (int): default is 2.

For more details on Ridge please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html

method __init__

__init__(
    n_bits=2,
    alpha: float = 1.0,
    fit_intercept=True,
    normalize='deprecated',
    copy_X=True,
    positive=False
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


class LogisticRegression

A logistic regression model with FHE.

Arguments:

  • n_bits (int): default is 2.

For more details on LogisticRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

method __init__

__init__(
    n_bits=2,
    penalty='l2',
    dual=False,
    tol=0.0001,
    C=1.0,
    fit_intercept=True,
    intercept_scaling=1,
    class_weight=None,
    random_state=None,
    solver='lbfgs',
    max_iter=100,
    multi_class='auto',
    verbose=0,
    warm_start=False,
    n_jobs=None,
    l1_ratio=None
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input

concrete.ml.sklearn

module concrete.ml.sklearn

Import sklearn models.

Global Variables

  • protocols

  • tree_to_numpy

  • base

  • torch_module

  • glm

  • linear_model

  • qnn

  • rf

  • svm

  • tree

  • xgb

concrete.ml.sklearn.glm

module concrete.ml.sklearn.glm

Implement sklearn's Generalized Linear Models (GLM).


class PoissonRegressor

A Poisson regression model with FHE.

method __init__

__init__(
    n_bits: 'Union[int, dict]' = 2,
    alpha: 'float' = 1.0,
    fit_intercept: 'bool' = True,
    max_iter: 'int' = 100,
    tol: 'float' = 0.0001,
    warm_start: 'bool' = False,
    verbose: 'int' = 0
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit

fit(X, y: 'ndarray', *args, **kwargs) → None

Fit the GLM regression quantized model.

Args:

  • X : The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.


method post_processing

post_processing(
    y_preds: 'ndarray',
    already_dequantized: 'bool' = False
) → ndarray

Post-processing the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions to post-process.

  • already_dequantized (bool): Whether the inputs were already dequantized or not. Default to False.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool' = False) → ndarray

Predict on user data.

Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute the inference in FHE. Default to False.

Returns:

  • numpy.ndarray: The model's predictions.


class GammaRegressor

A Gamma regression model with FHE.

method __init__

__init__(
    n_bits: 'Union[int, dict]' = 2,
    alpha: 'float' = 1.0,
    fit_intercept: 'bool' = True,
    max_iter: 'int' = 100,
    tol: 'float' = 0.0001,
    warm_start: 'bool' = False,
    verbose: 'int' = 0
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit

fit(X, y: 'ndarray', *args, **kwargs) → None

Fit the GLM regression quantized model.

Args:

  • X : The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.


method post_processing

post_processing(
    y_preds: 'ndarray',
    already_dequantized: 'bool' = False
) → ndarray

Post-processing the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions to post-process.

  • already_dequantized (bool): Whether the inputs were already dequantized or not. Default to False.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool' = False) → ndarray

Predict on user data.

Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute the inference in FHE. Default to False.

Returns:

  • numpy.ndarray: The model's predictions.


class TweedieRegressor

A Tweedie regression model with FHE.

method __init__

__init__(
    n_bits: 'Union[int, dict]' = 2,
    power: 'float' = 0.0,
    alpha: 'float' = 1.0,
    fit_intercept: 'bool' = True,
    link: 'str' = 'auto',
    max_iter: 'int' = 100,
    tol: 'float' = 0.0001,
    warm_start: 'bool' = False,
    verbose: 'int' = 0
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit

fit(X, y: 'ndarray', *args, **kwargs) → None

Fit the GLM regression quantized model.

Args:

  • X : The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.


method post_processing

post_processing(
    y_preds: 'ndarray',
    already_dequantized: 'bool' = False
) → ndarray

Post-processing the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions to post-process.

  • already_dequantized (bool): Whether the inputs were already dequantized or not. Default to False.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool' = False) → ndarray

Predict on user data.

Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute the inference in FHE. Default to False.

Returns:

  • numpy.ndarray: The model's predictions.

concrete.ml.sklearn.protocols

module concrete.ml.sklearn.protocols

Protocols.

Protocols are used to mix type hinting with duck-typing: we do not always want an abstract parent class shared by all objects, since what matters is their behavior. Implementing a Protocol is a way to specify the expected behavior of objects.

To read more about Protocol please read: https://peps.python.org/pep-0544
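
A generic sketch of the idea, using the standard typing.Protocol from PEP 544; the SupportsQuant name below is illustrative and not part of the library:

from typing import Protocol

import numpy


class SupportsQuant(Protocol):
    """Any object with a matching quant method satisfies this type hint."""

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        ...


def quantize_all(quantizer: SupportsQuant, batches: list) -> list:
    # No common base class is required: only the behavior (a quant method) matters
    return [quantizer.quant(batch) for batch in batches]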


class Quantizer

Quantizer Protocol.

To use to type hint a quantizer.


method dequant

dequant(X: 'ndarray') → ndarray

Dequantize some values.

Args:

  • X (numpy.ndarray): Values to dequantize

.. # noqa: DAR202

Returns:

  • numpy.ndarray: Dequantized values


method quant

quant(values: 'ndarray') → ndarray

Quantize some values.

Args:

  • values (numpy.ndarray): Values to quantize

.. # noqa: DAR202

Returns:

  • numpy.ndarray: The quantized values


class ConcreteBaseEstimatorProtocol

A Concrete Estimator Protocol.


property onnx_model

onnx_model.

.. # noqa: DAR202

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float'
) → Circuit

Compiles a model to a FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a PBS

.. # noqa: DAR202

Returns:

  • Circuit: the compiled Circuit.
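
A usage sketch, assuming a fitted Concrete-ML estimator named model and its training data X_train; the keyword arguments match the concrete compile methods documented further below:

# The virtual library simulates FHE execution, which is convenient for testing
# circuits that need larger bit-widths before running real FHE.
circuit = model.compile(
    X_train,
    show_mlir=False,
    use_virtual_lib=True,
    p_error=6.3342483999973e-05,  # default PBS error probability in this release
)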


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

.. # noqa: DAR202

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

.. # noqa: DAR202

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process models predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

.. # noqa: DAR202

Returns:

  • numpy.ndarray: the post-processed predictions


class ConcreteBaseClassifierProtocol

Concrete classifier protocol.


property onnx_model

onnx_model.

.. # noqa: DAR202

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float'
) → Circuit

Compiles a model to a FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a PBS

.. # noqa: DAR202

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

.. # noqa: DAR202

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

.. # noqa: DAR202

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process models predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

.. # noqa: DAR202

Returns:

  • numpy.ndarray: the post-processed predictions


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts for each sample the class with highest probability.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in fhe or not.

.. # noqa: DAR202

Returns: numpy.ndarray


method predict_proba

predict_proba(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts for each sample the probability of each class.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in fhe or not.

.. # noqa: DAR202

Returns: numpy.ndarray


class ConcreteBaseRegressorProtocol

Concrete regressor protocol.


property onnx_model

onnx_model.

.. # noqa: DAR202

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float'
) → Circuit

Compiles a model to a FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a PBS

.. # noqa: DAR202

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

.. # noqa: DAR202

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

.. # noqa: DAR202

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process models predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

.. # noqa: DAR202

Returns:

  • numpy.ndarray: the post-processed predictions


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts for each sample the expected value.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in fhe or not.

.. # noqa: DAR202

Returns: numpy.ndarray

concrete.ml.sklearn.base

module concrete.ml.sklearn.base

Module that contains base classes for our libraries estimators.

Global Variables

  • DEFAULT_P_ERROR_PBS

  • OPSET_VERSION_FOR_ONNX_EXPORT


class QuantizedTorchEstimatorMixin

Mixin that provides quantization for a torch module and follows the Estimator API.

This class should be mixed in with another that provides the full Estimator API. This class only provides modifiers for .fit() (with quantization) and .predict() (optionally in FHE)

method __init__

__init__()

property base_estimator_type

Get the sklearn estimator that should be trained by the child class.


property base_module_to_compile

Get the Torch module that should be compiled to FHE.


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[Quantizer]: the input quantizers


property n_bits_quant

Get the number of quantization bits.


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • _onnx_model_ (onnx.ModelProto): the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the model.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.

Raises:

  • ValueError: if called before the model is trained


method fit

fit(X, y, **fit_params)

Initialize and fit the module.

If the module was already initialized, by calling fit, the module will be re-initialized (unless warm_start is True). In addition to the torch training step, this method performs quantization of the trained torch model.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training, these are passed to the torch training interface

Returns:

  • self: the trained quantized estimator


method fit_benchmark

fit_benchmark(X: ndarray, y: ndarray, *args, **kwargs) → Tuple[Any, Any]

Fit the quantized estimator and return reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.

Returns:

  • self: the trained quantized estimator

  • fp32_model: trained raw (fp32) wrapped NN estimator


method get_params_for_benchmark

get_params_for_benchmark()

Get the parameters to instantiate the sklearn estimator trained by the child class.

Returns:

  • params (dict): dictionary with parameters that will initialize a new Estimator


method post_processing

post_processing(y_preds: ndarray) → ndarray

Post-processing the output.

Args:

  • y_preds (numpy.ndarray): the output to post-process

Raises:

  • ValueError: if unknown post-processing function

Returns:

  • numpy.ndarray: the post-processed output


method predict

predict(X, execute_in_fhe=False)

Predict on user provided data.

Predicts using the quantized clear or FHE classifier

Args:

  • X : input data, a numpy array of raw values (non quantized)

  • execute_in_fhe : whether to execute the inference in FHE or in the clear

Returns:

  • y_pred : numpy ndarray with predictions


method predict_proba

predict_proba(X, execute_in_fhe=False)

Predict on user provided data, returning probabilities.

Predicts using the quantized clear or FHE classifier

Args:

  • X : input data, a numpy array of raw values (non quantized)

  • execute_in_fhe : whether to execute the inference in FHE or in the clear

Returns:

  • y_pred : numpy ndarray with probabilities (if applicable)

Raises:

  • ValueError: if the estimator was not yet trained or compiled


class BaseTreeEstimatorMixin

Mixin class for tree-based estimators.

A place to share methods that are used on all tree-based estimators.

method __init__

__init__(n_bits: int)

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): number of bits used for quantization


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the model.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.


method dequantize_output

dequantize_output(y_preds: ndarray)

Dequantize the integer predictions.

Args:

  • y_preds (numpy.ndarray): the predictions

Returns: the dequantized predictions


method fit_benchmark

fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    random_state: Optional[int] = None,
    **kwargs
) → Tuple[Any, Any]

Fit the sklearn tree-based model and the FHE tree-based model.

Args:

  • X (numpy.ndarray): The input data.

  • y (numpy.ndarray): The target data.

  • random_state (Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.

  • *args: args for super().fit

  • **kwargs: kwargs for super().fit

Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.


method quantize_input

quantize_input(X: ndarray)

Quantize the input.

Args:

  • X (numpy.ndarray): the input

Returns: the quantized input


class BaseTreeRegressorMixin

Mixin class for tree-based regressors.

A place to share methods that are used on all tree-based regressors.

method __init__

__init__(n_bits: int)

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): number of bits used for quantization


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the model.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.


method dequantize_output

dequantize_output(y_preds: ndarray)

Dequantize the integer predictions.

Args:

  • y_preds (numpy.ndarray): the predictions

Returns: the dequantized predictions


method fit

fit(X, y: ndarray, **kwargs) → Any

Fit the tree-based estimator.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • **kwargs: args for super().fit

Returns:

  • Any: The fitted model.


method fit_benchmark

fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    random_state: Optional[int] = None,
    **kwargs
) → Tuple[Any, Any]

Fit the sklearn tree-based model and the FHE tree-based model.

Args:

  • X (numpy.ndarray): The input data.

  • y (numpy.ndarray): The target data.

  • random_state (Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.

  • *args: args for super().fit

  • **kwargs: kwargs for super().fit

Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.


method post_processing

post_processing(y_preds: ndarray) → ndarray

Apply post-processing to the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict the target values.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute in FHE. Defaults to False.

Returns:

  • numpy.ndarray: The predicted values.


method quantize_input

quantize_input(X: ndarray)

Quantize the input.

Args:

  • X (numpy.ndarray): the input

Returns: the quantized input


class BaseTreeClassifierMixin

Mixin class for tree-based classifiers.

A place to share methods that are used on all tree-based classifiers.

method __init__

__init__(n_bits: int)

Initialize the TreeBasedEstimatorMixin.

Args:

  • n_bits (int): number of bits used for quantization


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the model.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.


method dequantize_output

dequantize_output(y_preds: ndarray)

Dequantize the integer predictions.

Args:

  • y_preds (numpy.ndarray): the predictions

Returns: the dequantized predictions


method fit

fit(X, y: ndarray, **kwargs) → Any

Fit the tree-based estimator.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • **kwargs: args for super().fit

Returns:

  • Any: The fitted model.


method fit_benchmark

fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    random_state: Optional[int] = None,
    **kwargs
) → Tuple[Any, Any]

Fit the sklearn tree-based model and the FHE tree-based model.

Args:

  • X (numpy.ndarray): The input data.

  • y (numpy.ndarray): The target data.

  • random_state (Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.

  • *args: args for super().fit

  • **kwargs: kwargs for super().fit

Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.


method post_processing

post_processing(y_preds: ndarray) → ndarray

Apply post-processing to the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict the class with highest probability.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute in FHE. Defaults to False.

Returns:

  • numpy.ndarray: The predicted target values.


method predict_proba

predict_proba(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict the probability.

Args:

  • X (numpy.ndarray): The input data.

  • execute_in_fhe (bool): Whether to execute in FHE. Defaults to False.

Returns:

  • numpy.ndarray: The predicted probabilities.


method quantize_input

quantize_input(X: ndarray)

Quantize the input.

Args:

  • X (numpy.ndarray): the input

Returns: the quantized input


class SklearnLinearModelMixin

A Mixin class for sklearn linear models with FHE.

method __init__

__init__(*args, n_bits: Union[int, Dict] = 2, **kwargs)

Initialize the FHE linear model.

Args:

  • n_bits (int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, then it should contain "model_inputs", "op_inputs", "op_weights" and "model_outputs" keys with corresponding number of quantization bits for: - model_inputs : number of bits for model input - op_inputs : number of bits to quantize layer input values - op_weights: learned parameters or constants in the network - model_outputs: final model output quantization bits Default to 2.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.
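
For illustration, a dict-based n_bits configuration could look like the following (the bit-width values are arbitrary examples) and would then be passed as n_bits=... when building a linear model on top of this mixin:

# Per-category quantization bit-widths, as described above; values are illustrative only.
n_bits = {
    "model_inputs": 3,   # number of bits for the model input
    "op_inputs": 2,      # number of bits for each layer's input values
    "op_weights": 2,     # number of bits for learned parameters and constants
    "model_outputs": 3,  # number of bits for the final model output
}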


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method clean_graph

clean_graph()

Clean the graph of the onnx model.

This will remove the Cast nodes in the model's onnx.graph since they have no use in quantized or FHE models.


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the FHE linear model.

Args:

  • X (numpy.ndarray): The input data.

  • configuration (Optional[Configuration]): Configuration object to use during compilation

  • compilation_artifacts (Optional[DebugArtifacts]): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X, y: ndarray, *args, **kwargs) → Any

Fit the FHE linear model.

Args:

  • X : training data. By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.

Returns: Any


method fit_benchmark

fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    random_state: Optional[int] = None,
    **kwargs
) → Tuple[Any, Any]

Fit the sklearn linear model and the FHE linear model.

Args:

  • X (numpy.ndarray): The input data.

  • y (numpy.ndarray): The target data.

  • random_state (Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.

  • *args: args for super().fit

  • **kwargs: kwargs for super().fit

Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.
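
A minimal sketch of fit_benchmark, assuming X_train, X_test, y_train and y_test come from a prior train/test split and that both returned estimators expose scikit-learn's score method.

from concrete.ml.sklearn import LinearRegression

# Train both the quantized (FHE-ready) model and its float scikit-learn equivalent
concrete_model = LinearRegression(n_bits=2)
fhe_model, sklearn_model = concrete_model.fit_benchmark(X_train, y_train)

# Compare the two models on clear data
print("Quantized (FHE-ready) R2:", fhe_model.score(X_test, y_test))
print("Float scikit-learn R2:", sklearn_model.score(X_test, y_test))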


method post_processing

post_processing(y_preds: ndarray) → ndarray

Post-processing the output.

Args:

  • y_preds (numpy.ndarray): the output to post-process

Returns:

  • numpy.ndarray: the post-processed output


method predict

predict(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict on user data.

Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit

Args:

  • X (numpy.ndarray): the input data

  • execute_in_fhe (bool): whether to execute the inference in FHE

Returns:

  • numpy.ndarray: the prediction as ordinals


class SklearnLinearClassifierMixin

A Mixin class for sklearn linear classifiers with FHE.

method __init__

__init__(*args, n_bits: Union[int, Dict] = 2, **kwargs)

Initialize the FHE linear model.

Args:

  • n_bits (int, Dict): Number of bits used to quantize the model. If an int is passed, that value is used for activations, inputs and weights. If a dict is passed, it must contain the keys "model_inputs", "op_inputs", "op_weights" and "model_outputs", each giving the number of quantization bits for, respectively, the model input, the layer input values, the learned parameters or constants in the network, and the final model output. Defaults to 2.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method clean_graph

clean_graph()

Clean the graph of the onnx model.

Any operators following gemm, including the sigmoid, softmax and argmax operators, are removed from the graph. They are executed in the clear in the post-processing method.


method compile

compile(
    X: ndarray,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the FHE linear model.

Args:

  • X (numpy.ndarray): The input data.

  • configuration (Optional[Configuration]): Configuration object to use during compilation

  • compilation_artifacts (Optional[DebugArtifacts]): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • Circuit: the compiled Circuit.


method decision_function

decision_function(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict confidence scores for samples.

Args:

  • X (numpy.ndarray): Samples to predict.

  • execute_in_fhe (bool): If True, the inference will be executed in FHE. Default to False.

Returns:

  • numpy.ndarray: Confidence scores for samples.


method fit

fit(X, y: ndarray, *args, **kwargs) → Any

Fit the FHE linear model.

Args:

  • X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.

  • y (numpy.ndarray): The target data.

  • *args: The arguments to pass to the sklearn linear model.

  • **kwargs: The keyword arguments to pass to the sklearn linear model.

Returns: Any: The fitted model.


method fit_benchmark

fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    random_state: Optional[int] = None,
    **kwargs
) → Tuple[Any, Any]

Fit the sklearn linear model and the FHE linear model.

Args:

  • X (numpy.ndarray): The input data.

  • y (numpy.ndarray): The target data.

  • random_state (Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.

  • *args: The arguments to pass to the sklearn linear model (forwarded to super().fit).

  • **kwargs: The keyword arguments to pass to the sklearn linear model (forwarded to super().fit).

Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.


method post_processing

post_processing(y_preds: ndarray, already_dequantized: bool = False)

Post-processing the predictions.

This step may include a dequantization of the inputs if not done previously, in particular within the client-server workflow.

Args:

  • y_preds (numpy.ndarray): The predictions to post-process.

  • already_dequantized (bool): Whether the inputs were already dequantized or not. Default to False.

Returns:

  • numpy.ndarray: The post-processed predictions.


method predict

predict(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict on user data.

Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.

Args:

  • X (numpy.ndarray): Samples to predict.

  • execute_in_fhe (bool): If True, the inference will be executed in FHE. Default to False.

Returns:

  • numpy.ndarray: The prediction as ordinals.


method predict_proba

predict_proba(X: ndarray, execute_in_fhe: bool = False) → ndarray

Predict class probabilities for samples.

Args:

  • X (numpy.ndarray): Samples to predict.

  • execute_in_fhe (bool): If True, the inference will be executed in FHE. Default to False.

Returns:

  • numpy.ndarray: Class probabilities for samples.
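
A minimal sketch of the classifier prediction API, assuming `clf` is a fitted Concrete-ML linear classifier and `X_test` comes from a prior train/test split.

# Raw confidence scores, computed with the quantized clear model
scores = clf.decision_function(X_test)

# Class probabilities: the sigmoid/softmax is applied in the clear during post-processing
probas = clf.predict_proba(X_test)

# Predicted class ordinals
labels = clf.predict(X_test)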

concrete.ml.quantization

module concrete.ml.quantization

Modules for quantization.

Global Variables

  • quantizers

  • base_quantized_op

  • quantized_module

  • post_training

  • quantized_ops

concrete.ml.onnx.convert

module concrete.ml.onnx.convert

ONNX conversion related code.

Global Variables

  • IMPLEMENTED_ONNX_OPS

  • OPSET_VERSION_FOR_ONNX_EXPORT


function get_equivalent_numpy_forward_and_onnx_model

get_equivalent_numpy_forward_and_onnx_model(
    torch_module: Module,
    dummy_input: Union[Tensor, Tuple[Tensor, ]],
    output_onnx_file: Optional[Path, str] = None
) → Tuple[Callable[, Tuple[ndarray, ]], GraphProto]

Get the numpy equivalent forward of the provided torch Module.

Args:

  • torch_module (torch.nn.Module): the torch Module for which to get the equivalent numpy forward.

  • dummy_input (Union[torch.Tensor, Tuple[torch.Tensor, ...]]): dummy inputs for ONNX export.

  • output_onnx_file (Optional[Union[Path, str]], optional): Path to save the ONNX file to. Will use a temp file if not provided. Defaults to None.

Returns:

  • Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.GraphProto]: The function that will execute the equivalent numpy code to the passed torch_module and the generated ONNX model.
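
A minimal sketch of this function, assuming a small torch module; the returned callable operates on numpy arrays and returns a tuple of numpy arrays.

import numpy
import torch

from concrete.ml.onnx.convert import get_equivalent_numpy_forward_and_onnx_model

# Export a small torch module to ONNX and retrieve its numpy equivalent
torch_module = torch.nn.Sequential(torch.nn.Linear(3, 2), torch.nn.ReLU())
dummy_input = torch.randn(1, 3)

numpy_forward, onnx_model = get_equivalent_numpy_forward_and_onnx_model(torch_module, dummy_input)

# Run the numpy forward on a numpy input
(output,) = numpy_forward(numpy.random.rand(1, 3).astype(numpy.float32))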


function get_equivalent_numpy_forward

get_equivalent_numpy_forward(
    onnx_model: ModelProto,
    check_model: bool = True
) → Callable[, Tuple[ndarray, ]]

Get the numpy equivalent forward of the provided ONNX model.

Args:

  • onnx_model (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward.

  • check_model (bool): set to True to run the onnx checker on the model. Defaults to True.

Raises:

  • ValueError: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy.

Returns:

  • Callable[..., Tuple[numpy.ndarray, ...]]: The function that will execute the equivalent numpy function.

concrete.ml.onnx.onnx_utils

module concrete.ml.onnx.onnx_utils

Utils to interpret an ONNX model with numpy.

Global Variables

  • ATTR_TYPES

  • ATTR_GETTERS

  • ONNX_OPS_TO_NUMPY_IMPL

  • ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_FLOAT

  • ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_BOOL

  • ONNX_OPS_TO_NUMPY_IMPL_BOOL

  • IMPLEMENTED_ONNX_OPS


function get_attribute

get_attribute(attribute: AttributeProto) → Any

Get the attribute from an ONNX AttributeProto.

Args:

  • attribute (onnx.AttributeProto): The attribute to retrieve the value from.

Returns:

  • Any: The stored attribute value.


function get_op_name

get_op_name(node)

Construct the qualified name of the ONNX operator.

Args:

  • node (Any): ONNX graph node

Returns:

  • result (str): qualified name


function execute_onnx_with_numpy

execute_onnx_with_numpy(graph: GraphProto, *inputs: ndarray) → Tuple[ndarray, ]

Execute the provided ONNX graph on the given inputs.

Args:

  • graph (onnx.GraphProto): The ONNX graph to execute.

  • *inputs: The inputs of the graph.

Returns:

  • Tuple[numpy.ndarray]: The result of the graph's execution.

concrete.ml.sklearn.rf

module concrete.ml.sklearn.rf

Implements RandomForest models.


class RandomForestClassifier

Implements the RandomForest classifier.

method __init__

Initialize the RandomForestClassifier.

noqa: DAR101


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


class RandomForestRegressor

Implements the RandomForest regressor.

method __init__

Initialize the RandomForestRegressor.

noqa: DAR101


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model

concrete.ml.sklearn.svm

module concrete.ml.sklearn.svm

Implement Support Vector Machine.


class LinearSVR

A Regression Support Vector Machine (SVM).

method __init__


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


class LinearSVC

A Classification Support Vector Machine (SVM).

method __init__


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[QuantizedArray]: the input quantizers


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input
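
A minimal sketch of both SVM variants, assuming X_train/y_train (classification labels) and X_train/y_reg (regression targets) come from prior splits; the hyper-parameter values are placeholders.

from concrete.ml.sklearn import LinearSVC, LinearSVR

# Classification SVM, quantized to 2 bits
classifier = LinearSVC(n_bits=2, C=1.0)
classifier.fit(X_train, y_train)

# Regression SVM, quantized to 2 bits
regressor = LinearSVR(n_bits=2, C=1.0, epsilon=0.0)
regressor.fit(X_train, y_reg)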

concrete.ml.sklearn.tree_to_numpy

module concrete.ml.sklearn.tree_to_numpy

Implements the conversion of a tree model to a numpy function.

Global Variables

  • MAXIMUM_TLU_BIT_WIDTH

  • OPSET_VERSION_FOR_ONNX_EXPORT

  • EXPECTED_NUMBER_OF_OUTPUTS_PER_TASK


function tree_to_numpy

Convert the tree inference to a numpy function using Hummingbird.

Args:

  • model (onnx.ModelProto): The model to convert.

  • x (numpy.ndarray): The input data.

  • framework (str): The framework from which the onnx_model is generated (options: 'xgboost', 'sklearn').

  • task (Task): The task the model is solving

  • output_n_bits (int): The number of bits of the output.

Returns:

  • Tuple[Callable, List[QuantizedArray], onnx.ModelProto]: A tuple with a function that takes a numpy array and returns a numpy array, QuantizedArray object to quantize and dequantize the output of the tree, and the ONNX model.


class Task

Task enumeration.

concrete.ml.torch

module concrete.ml.torch

Modules for torch to numpy conversion.

Global Variables

  • numpy_module

concrete.ml.sklearn.qnn

module concrete.ml.sklearn.qnn

Scikit-learn interface for concrete quantized neural networks.

Global Variables

  • MAXIMUM_TLU_BIT_WIDTH


class SparseQuantNeuralNetImpl

Sparse Quantized Neural Network classifier.

This class implements an MLP that is compatible with FHE constraints. The weights and activations are quantized to low bit-width and pruning is used to ensure accumulators do not surpass a user-provided accumulator bit-width. The number of classes and number of layers are specified by the user, as well as the breadth of the network.

method __init__

Sparse Quantized Neural Network constructor.

Args:

  • input_dim: Number of dimensions of the input data

  • n_layers: Number of linear layers for this network

  • n_outputs: Number of output classes or regression targets

  • n_w_bits: Number of weight bits

  • n_a_bits: Number of activation and input bits

  • n_accum_bits: Maximal allowed bitwidth of intermediate accumulators

  • n_hidden_neurons_multiplier: A factor that is multiplied by the maximal number of active (non-zero weight) neurons for every layer. The maximal number of active neurons in the worst case scenario is max_active_neurons(n_max, n_w, n_a) = floor((2^n_max - 1) / ((2^n_w - 1) * (2^n_a - 1))). The worst case scenario for the bitwidth of the accumulator is when all weights and activations are maximal simultaneously. We set, for each layer, the total number of neurons to be n_hidden_neurons_multiplier * max_active_neurons(n_accum_bits, n_w_bits, n_a_bits). Through experiments, for typical distributions of weights and activations, the default value of 4 for n_hidden_neurons_multiplier is safe to avoid overflow. A worked example is given after the Raises list below.

  • activation_function: a torch class that is used to construct activation functions in the network (e.g. torch.ReLU, torch.SELU, torch.Sigmoid, etc)

Raises:

  • ValueError: if the parameters have invalid values or the computed accumulator bitwidth is zero
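
As a worked illustration of the formula above, using the constructor's default values (n_accum_bits=8, n_w_bits=3, n_a_bits=3, n_hidden_neurons_multiplier=4); the numbers below follow directly from the formula.

import math

# max_active_neurons = floor((2**8 - 1) / ((2**3 - 1) * (2**3 - 1))) = floor(255 / 49) = 5
max_active = math.floor((2 ** 8 - 1) / ((2 ** 3 - 1) * (2 ** 3 - 1)))

# With the default multiplier of 4, each layer gets 4 * 5 = 20 neurons in total
total_neurons_per_layer = 4 * max_active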


method enable_pruning

Enable pruning in the network. Pruning must be made permanent to recover pruned weights.

Raises:

  • ValueError: if the quantization parameters are invalid


method forward

Forward pass.

Args:

  • x (torch.Tensor): network input

Returns:

  • x (torch.Tensor): network prediction


method make_pruning_permanent

Make the learned pruning permanent in the network.


method max_active_neurons

Compute the maximum number of active (non-zero weight) neurons.

The computation is done using the quantization parameters passed to the constructor. Warning: With the current quantization algorithm (asymmetric) the value returned by this function is not guaranteed to ensure FHE compatibility. For some weight distributions, weights that are 0 (which are pruned weights) will not be quantized to 0. Therefore the total number of active quantized neurons will not be equal to max_active_neurons.

Returns:

  • n (int): maximum number of active neurons


method on_train_end

Callback called when training is finished; can be useful to remove training hooks.


class QuantizedSkorchEstimatorMixin

Mixin class that adds quantization features to Skorch NN estimators.


property base_estimator_type

Get the sklearn estimator that should be trained by the child class.


property base_module_to_compile

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

Returns:

  • module (nn.Module): the instantiated torch module


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property input_quantizers

Get the input quantizers.

Returns:

  • List[Quantizer]: the input quantizers


property n_bits_quant

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

Returns:

  • n_bits (int): the number of bits to quantize the network

Raises:

  • ValueError: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be .fit() before we get the quantization number of bits. If it is not trained we raise an exception


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • _onnx_model_ (onnx.ModelProto): the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method get_params_for_benchmark

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. But for this to work, we need to remove all module-related constructor params. If not, skorch will instantiate a new class instance of the same type as the passed module (see skorch's net.py, NeuralNet::initialize_instance).

Returns:

  • params (dict): parameters to create an equivalent fp32 sklearn estimator for benchmark


method infer

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

Args:

  • x (torch.Tensor): A batch of the input data, produced by a Dataset

  • **fit_params (dict) : Additional parameters passed to the forward method of the module and to the self.train_split call.

Returns: A torch tensor with the inference results for each item in the input


method on_train_end

Call back when training is finished by the skorch wrapper.

Check if the underlying neural net has a callback for this event and, if so, call it.

Args:

  • net: estimator for which training has ended (equal to self)

  • X: data

  • y: targets

  • kwargs: other arguments


class FixedTypeSkorchNeuralNet

A mixin with a helpful modification to a skorch estimator that fixes the module type.


method get_params

Get parameters for this estimator.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

  • **kwargs: any additional parameters to pass to the sklearn BaseEstimator class

Returns:

  • params : dict, Parameter names mapped to their values.


class NeuralNetClassifier

Scikit-learn interface for quantized FHE compatible neural networks.

This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi layer fully connected network trained with Quantization Aware Training (QAT).

The datatypes that are allowed for prediction by this wrapper are more restricted than those of standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
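
A minimal sketch of how such a classifier can be configured, assuming skorch's module__ prefix is used to forward the SparseQuantNeuralNetImpl constructor arguments documented above; X_train/y_train and the parameter values are placeholders, and the float32/int64 casts follow skorch's usual expectations.

import torch

from concrete.ml.sklearn.qnn import NeuralNetClassifier

params = {
    "module__n_layers": 2,
    "module__n_outputs": 2,              # number of classes
    "module__input_dim": 2,              # number of input features
    "module__n_w_bits": 2,
    "module__n_a_bits": 2,
    "module__n_accum_bits": 8,
    "module__activation_function": torch.nn.ReLU,
    "max_epochs": 10,
}
model = NeuralNetClassifier(**params)

# Train on clear data; compilation and FHE prediction then follow the usual workflow
model.fit(X_train.astype("float32"), y_train.astype("int64"))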

method __init__


property base_estimator_type


property base_module_to_compile

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

Returns:

  • module (nn.Module): the instantiated torch module


property classes_


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property history


property input_quantizers

Get the input quantizers.

Returns:

  • List[Quantizer]: the input quantizers


property n_bits_quant

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

Returns:

  • n_bits (int): the number of bits to quantize the network

Raises:

  • ValueError: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be .fit() before we get the quantization number of bits. If it is not trained we raise an exception


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • _onnx_model_ (onnx.ModelProto): the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit


method get_params

Get parameters for this estimator.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

  • **kwargs: any additional parameters to pass to the sklearn BaseEstimator class

Returns:

  • params : dict, Parameter names mapped to their values.


method get_params_for_benchmark

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. But for this to work, we need to remove all module-related constructor params. If not, skorch will instantiate a new class instance of the same type as the passed module (see skorch's net.py, NeuralNet::initialize_instance).

Returns:

  • params (dict): parameters to create an equivalent fp32 sklearn estimator for benchmark


method infer

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

Args:

  • x (torch.Tensor): A batch of the input data, produced by a Dataset

  • **fit_params (dict) : Additional parameters passed to the forward method of the module and to the self.train_split call.

Returns: A torch tensor with the inference results for each item in the input


method on_train_end

Call back when training is finished by the skorch wrapper.

Check if the underlying neural net has a callback for this event and, if so, call it.

Args:

  • net: estimator for which training has ended (equal to self)

  • X: data

  • y: targets

  • kwargs: other arguments


method predict

Predict on user provided data.

Predicts using the quantized clear or FHE classifier

Args:

  • X : input data, a numpy array of raw values (non quantized)

  • execute_in_fhe : whether to execute the inference in FHE or in the clear

Returns:

  • y_pred : numpy ndarray with predictions


class NeuralNetRegressor

Scikit-learn interface for quantized FHE compatible neural networks.

This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi layer fully connected network trained with Quantization Aware Training (QAT).

The datatypes that are allowed for prediction by this wrapper are more restricted than those of standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.

method __init__


property base_estimator_type


property base_module_to_compile

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

Returns:

  • module (nn.Module): the instantiated torch module


property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property history


property input_quantizers

Get the input quantizers.

Returns:

  • List[Quantizer]: the input quantizers


property n_bits_quant

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

Returns:

  • n_bits (int): the number of bits to quantize the network

Raises:

  • ValueError: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be .fit() before we get the quantization number of bits. If it is not trained we raise an exception


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • _onnx_model_ (onnx.ModelProto): the ONNX model


property output_quantizers

Get the output quantizers.

Returns:

  • List[QuantizedArray]: the output quantizers


property quantize_input

Get the input quantization function.

Returns:

  • Callable : function that quantizes the input


method fit


method get_params

Get parameters for this estimator.

Args:

  • deep (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

  • **kwargs: any additional parameters to pass to the sklearn BaseEstimator class

Returns:

  • params : dict, Parameter names mapped to their values.


method get_params_for_benchmark

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. But for this to work, we need to remove all module-related constructor params. If not, skorch will instantiate a new class instance of the same type as the passed module (see skorch's net.py, NeuralNet::initialize_instance).

Returns:

  • params (dict): parameters to create an equivalent fp32 sklearn estimator for benchmark


method infer

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

Args:

  • x (torch.Tensor): A batch of the input data, produced by a Dataset

  • **fit_params (dict) : Additional parameters passed to the forward method of the module and to the self.train_split call.

Returns: A torch tensor with the inference results for each item in the input


method on_train_end

Call back when training is finished by the skorch wrapper.

Check if the underlying neural net has a callback for this event and, if so, call it.

Args:

  • net: estimator for which training has ended (equal to self)

  • X: data

  • y: targets

  • kwargs: other arguments

concrete.ml.sklearn.xgb

module concrete.ml.sklearn.xgb

Implements XGBoost models.


class XGBClassifier

Implements the XGBoost classifier.

method __init__


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


method post_processing

Apply post-processing to the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions.

Returns:

  • numpy.ndarray: The post-processed predictions.
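
A minimal sketch of the XGBoost classifier workflow, assuming X_train, X_test and y_train come from a prior train/test split and that the same compile/predict API as the other built-in models applies.

from concrete.ml.sklearn import XGBClassifier

# Quantized XGBoost classifier with a small ensemble
model = XGBClassifier(n_bits=6, n_estimators=20, max_depth=3)
model.fit(X_train, y_train)

# Compile on the training data, then run a few encrypted inferences
model.compile(X_train)
y_pred_fhe = model.predict(X_test[:3], execute_in_fhe=True)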


class XGBRegressor

Implements the XGBoost regressor.

method __init__


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


method fit

Fit the tree-based estimator.

Args:

  • X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.

  • y (numpy.ndarray): The target data.

  • **kwargs: args for super().fit

Returns:

  • Any: The fitted model.


method post_processing

Apply post-processing to the predictions.

Args:

  • y_preds (numpy.ndarray): The predictions.

Returns:

  • numpy.ndarray: The post-processed predictions.

concrete.ml.torch.compile

module concrete.ml.torch.compile

torch compilation function.

Global Variables

  • MAXIMUM_TLU_BIT_WIDTH

  • DEFAULT_P_ERROR_PBS

  • OPSET_VERSION_FOR_ONNX_EXPORT


function convert_torch_tensor_or_numpy_array_to_numpy_array

Convert a torch tensor or a numpy array to a numpy array.

Args:

  • torch_tensor_or_numpy_array (Tensor): the value that is either a torch tensor or a numpy array.

Returns:

  • numpy.ndarray: the value converted to a numpy array.


function compile_torch_model

Compile a torch module into an FHE equivalent.

Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete-Numpy

Args:

  • torch_model (torch.nn.Module): the model to quantize

  • torch_inputset (Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.

  • import_qat (bool): Set to True to import a network that contains quantizers and was trained using quantization aware training

  • configuration (Configuration): Configuration object to use during compilation

  • compilation_artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • n_bits: the number of bits for the quantization

  • use_virtual_lib (bool): set to use the so called virtual lib simulating FHE computation. Defaults to False

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.
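
A minimal sketch of compile_torch_model, using a tiny torch model and the virtual library to keep the example fast; the model, input set and bit-width are placeholders.

import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

# A tiny torch model and a representative input set
torch_model = torch.nn.Sequential(torch.nn.Linear(2, 1), torch.nn.Sigmoid())
inputset = numpy.random.uniform(-1, 1, size=(100, 2)).astype(numpy.float32)

# Quantize and compile; use_virtual_lib=True simulates FHE execution
quantized_module = compile_torch_model(
    torch_model,
    inputset,
    n_bits=3,
    use_virtual_lib=True,
)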


function compile_onnx_model

Compile an ONNX model into an FHE equivalent.

Take a model in ONNX, quantize its inputs / weights / outputs and finally compile it with Concrete-Numpy.

Args:

  • onnx_model (onnx.ModelProto): the model to quantize

  • torch_inputset (Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.

  • import_qat (bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not requantize it.

  • configuration (Configuration): Configuration object to use during compilation

  • compilation_artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • n_bits: the number of bits for the quantization

  • use_virtual_lib (bool): set to use the so called virtual lib simulating FHE computation. Defaults to False.

  • p_error (Optional[float]): probability of error of a PBS

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.


function compile_brevitas_qat_model

Compile a Brevitas Quantization Aware Training model.

The torch_model parameter is a subclass of torch.nn.Module that uses quantized operations from brevitas.qnn. The model is trained before calling this function. This function compiles the trained model to FHE.

Args:

  • torch_model (torch.nn.Module): the model to quantize

  • torch_inputset (Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.

  • n_bits (Union[int,dict]): the number of bits for the quantization

  • configuration (Configuration): Configuration object to use during compilation

  • compilation_artifacts (DebugArtifacts): Artifacts object to fill during compilation

  • show_mlir (bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo

  • use_virtual_lib (bool): set to use the so called virtual lib simulating FHE computation, defaults to False.

  • p_error (Optional[float]): probability of error of a PBS

  • output_onnx_file (str): temporary file to store ONNX model. If None a temporary file is generated

Returns:

  • QuantizedModule: The resulting compiled QuantizedModule.

concrete.ml.sklearn.torch_module

module concrete.ml.sklearn.torch_module

Implement torch module.

concrete.ml.quantization.quantized_ops

module concrete.ml.quantization.quantized_ops

Quantized versions of the ONNX operators for post training quantization.


class QuantizedSigmoid

Quantized sigmoid op.


class QuantizedHardSigmoid

Quantized HardSigmoid op.


class QuantizedRelu

Quantized Relu op.


class QuantizedPRelu

Quantized PRelu op.


class QuantizedLeakyRelu

Quantized LeakyRelu op.


class QuantizedHardSwish

Quantized Hardswish op.


class QuantizedElu

Quantized Elu op.


class QuantizedSelu

Quantized Selu op.


class QuantizedCelu

Quantized Celu op.


class QuantizedClip

Quantized clip op.


class QuantizedRound

Quantized round op.


class QuantizedPow

Quantized pow op.

Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.

method __init__


method can_fuse

Determine if this op can be fused.

Power raising can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x ** (x + 1) where x is an integer tensor.

Returns:

  • bool: Can fuse


class QuantizedGemm

Quantized Gemm op.

method __init__


method can_fuse

Determine if this op can be fused.

Gemm operation can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.

Returns:

  • bool: False, this operation can not be fused as it adds different encrypted integers


method q_impl


class QuantizedMatMul

Quantized MatMul op.

method __init__


method can_fuse

Determine if this op can be fused.

Gemm operation can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.

Returns:

  • bool: False, this operation can not be fused as it adds different encrypted integers


method q_impl


class QuantizedAdd

Quantized Addition operator.

Can add either two variables (both encrypted) or a variable and a constant


method can_fuse

Determine if this op can be fused.

Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU


method q_impl


class QuantizedTanh

Quantized Tanh op.


class QuantizedSoftplus

Quantized Softplus op.


class QuantizedExp

Quantized Exp op.


class QuantizedLog

Quantized Log op.


class QuantizedAbs

Quantized Abs op.


class QuantizedIdentity

Quantized Identity op.


method q_impl


class QuantizedReshape

Quantized Reshape op.


method q_impl

Reshape the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional reshape options

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedConv

Quantized Conv op.

method __init__

Construct the quantized convolution operator and retrieve parameters.

Args:

  • n_bits_output: number of bits for the quantization of the outputs of this operator

  • int_input_names: names of integer tensors that are taken as input for this operation

  • constant_inputs: the weights and activations

  • input_quant_opts: options for the input quantizer

  • attrs: convolution options

  • dilations (Tuple[int]): dilation of the kernel, default 1 on all dimensions.

  • group (int): number of convolution groups, default 1

  • kernel_shape (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int]): stride of the convolution on each axis


method can_fuse

Determine if this op can be fused.

Conv operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

Returns:

  • bool: False, this operation can not be fused as it adds different encrypted integers


method q_impl

Compute the quantized convolution between two quantized tensors.

Allows an optional quantized bias.

Args:

  • q_inputs: input tuple, contains

  • x (numpy.ndarray): input data. Shape is N x C x H x W for 2d

  • w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

  • b (numpy.ndarray, Optional): bias tensor, Shape is (O,)

  • attrs: convolution options handled in constructor

Returns:

  • res (QuantizedArray): result of the quantized integer convolution


class QuantizedAvgPool

Quantized Average Pooling op.

method __init__


method can_fuse

Determine if this op can be fused.

Avg Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

Returns:

  • bool: False, this operation can not be fused as it adds different encrypted integers


method q_impl


class QuantizedPad

Quantized Padding op.

method __init__


method can_fuse

Determine if this op can be fused.

Pad operation can not be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation can not be fused as it manipulates integer tensors


class QuantizedWhere

Where operator on quantized arrays.

Supports only constants for the results produced on the True/False branches.

method __init__


class QuantizedCast

Cast the input to the required data type.

In FHE we only support a limited number of output types. Booleans are cast to integers.


class QuantizedGreater

Comparison operator >.

Only supports comparison with a constant.

method __init__


class QuantizedGreaterOrEqual

Comparison operator >=.

Only supports comparison with a constant.

method __init__


class QuantizedLess

Comparison operator <.

Only supports comparison with a constant.

method __init__


class QuantizedLessOrEqual

Comparison operator <=.

Only supports comparison with a constant.

method __init__


class QuantizedOr

Or operator ||.

This operation does not really work as a quantized operation. It only works when the operation gets fused, as in, e.g., Act(x) = x || (x + 42).

method __init__


method can_fuse

Determine if this op can be fused.

Or can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.

Returns:

  • bool: Can fuse


class QuantizedDiv

Div operator /.

This operation does not really work as a quantized operation. It only works when the operation gets fused, as in, e.g., Act(x) = 1000 / (x + 42).

method __init__


method can_fuse

Determine if this op can be fused.

Div can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x / (x + 1) where x is an integer tensor.

Returns:

  • bool: Can fuse


class QuantizedMul

Multiplication operator.

Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.

method __init__


method can_fuse

Determine if this op can be fused.

Multiplication can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x * (x + 1) where x is an integer tensor.

Returns:

  • bool: Can fuse


class QuantizedSub

Subtraction operator.

This works the same as addition on both encrypted - encrypted and on encrypted - constant.


method can_fuse

Determine if this op can be fused.

Subtraction, like addition, can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU


method q_impl


class QuantizedBatchNormalization

Quantized Batch normalization with encrypted input and in-the-clear normalization params.


class QuantizedFlatten

Quantized flatten for encrypted inputs.


method can_fuse

Determine if this op can be fused.

Flatten operation can not be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation can not be fused as it manipulates integer tensors.


method q_impl

Flatten the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0

  • attrs: contains axis attribute

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedReduceSum

ReduceSum with encrypted input.

This operator is currently an experimental feature.

method __init__

Construct the quantized ReduceSum operator and retrieve parameters.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs.

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • axes (Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict): ReduceSum options.

  • keepdims (int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.

  • noop_with_empty_axes (int): Defines the behavior if 'axes' is empty or set to None. The default behavior with 0 is to reduce all axes. When axes is empty and this attribute is set to 1, the input tensor will not be reduced and the output tensor is equivalent to the input tensor. Default to 0.


method calibrate

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method q_impl

Sum the encrypted tensor's values over axis 1.

Args:

  • q_inputs (QuantizedArray): An encrypted integer tensor at index 0.

  • attrs (Dict): Contains axis attribute.

Returns:

  • (QuantizedArray): The sum of all values along axis 1 as an encrypted integer tensor.


method tree_sum

Large sum without overflow (only MSB remains).

Args:

  • input_qarray: Encrypted integer tensor.

  • is_calibration: Whether we are calibrating the tree sum. If so, it will create all the quantizers for the downscaling.

Returns:

  • (numpy.ndarray): The MSB (based on the precision self.n_bits) of the integers sum.


class QuantizedErf

Quantized erf op.


class QuantizedNot

Quantized Not op.


class QuantizedBrevitasQuant

Brevitas uniform quantization with encrypted input.

method __init__

Construct the Brevitas quantization operator.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • scale (float): Quantizer scale

  • zero_point (float): Quantizer zero-point

  • bit_width (int): Number of bits of the integer representation

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict):

  • rounding_mode (str): Rounding mode (default and only accepted option is "ROUND")

  • signed (int): Whether this op quantizes to signed integers (default 1)

  • narrow (int): Whether this op quantizes to a narrow range of integers, e.g. [-2^(n_bits-1) .. 2^(n_bits-1)] (default 0)


method q_impl

Quantize values.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional reshape options

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedTranspose

Transpose operator for quantized inputs.

This operator performs quantization, transposes the encrypted data, then dequantizes again.


method q_impl

Transpose the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional reshape options

Returns:

  • result (QuantizedArray): transposed encrypted integer tensor

concrete.ml.sklearn.tree

module concrete.ml.sklearn.tree

Implement the sklearn tree models.


class DecisionTreeClassifier

Implements the sklearn DecisionTreeClassifier.

method __init__

Initialize the DecisionTreeClassifier.

noqa: DAR101


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


class DecisionTreeRegressor

Implements the sklearn DecisionTreeRegressor.

method __init__

Initialize the DecisionTreeRegressor.

noqa: DAR101


property onnx_model

Get the ONNX model.

.. # noqa: DAR201

Returns:

  • onnx.ModelProto: the ONNX model


__init__(
    n_bits: int = 6,
    n_estimators=20,
    criterion='gini',
    max_depth=4,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features='sqrt',
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    bootstrap=True,
    oob_score=False,
    n_jobs=None,
    random_state=None,
    verbose=0,
    warm_start=False,
    class_weight=None,
    ccp_alpha=0.0,
    max_samples=None
)
__init__(
    n_bits: int = 6,
    n_estimators=20,
    criterion='squared_error',
    max_depth=4,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features='sqrt',
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    bootstrap=True,
    oob_score=False,
    n_jobs=None,
    random_state=None,
    verbose=0,
    warm_start=False,
    ccp_alpha=0.0,
    max_samples=None
)
__init__(
    n_bits=2,
    epsilon=0.0,
    tol=0.0001,
    C=1.0,
    loss='epsilon_insensitive',
    fit_intercept=True,
    intercept_scaling=1.0,
    dual=True,
    verbose=0,
    random_state=None,
    max_iter=1000
)
__init__(
    n_bits=2,
    penalty='l2',
    loss='squared_hinge',
    dual=True,
    tol=0.0001,
    C=1.0,
    multi_class='ovr',
    fit_intercept=True,
    intercept_scaling=1,
    class_weight=None,
    verbose=0,
    random_state=None,
    max_iter=1000
)
tree_to_numpy(
    model: ModelProto,
    x: ndarray,
    framework: str,
    task: Task,
    output_n_bits: Optional[int] = 8
) → Tuple[Callable, List[UniformQuantizer], ModelProto]
__init__(
    input_dim,
    n_layers,
    n_outputs,
    n_hidden_neurons_multiplier=4,
    n_w_bits=3,
    n_a_bits=3,
    n_accum_bits=8,
    activation_function=<class 'torch.nn.modules.activation.ReLU'>
)
enable_pruning()
forward(x)
make_pruning_permanent()
max_active_neurons()
on_train_end()
get_params_for_benchmark()
infer(x, **fit_params)
on_train_end(net, X=None, y=None, **kwargs)
get_params(deep=True, **kwargs)
__init__(
    *args,
    criterion=<class 'torch.nn.modules.loss.CrossEntropyLoss'>,
    classes=None,
    optimizer=<class 'torch.optim.adam.Adam'>,
    **kwargs
)
fit(X, y, **fit_params)
get_params(deep=True, **kwargs)
get_params_for_benchmark()
infer(x, **fit_params)
on_train_end(net, X=None, y=None, **kwargs)
predict(X, execute_in_fhe=False)
__init__(*args, optimizer=<class 'torch.optim.adam.Adam'>, **kwargs)
fit(X, y, **fit_params)
get_params(deep=True, **kwargs)
get_params_for_benchmark()
infer(x, **fit_params)
on_train_end(net, X=None, y=None, **kwargs)
__init__(
    n_bits: int = 6,
    max_depth: Optional[int] = 3,
    learning_rate: Optional[float] = 0.1,
    n_estimators: Optional[int] = 20,
    objective: Optional[str] = 'binary:logistic',
    booster: Optional[str] = None,
    tree_method: Optional[str] = None,
    n_jobs: Optional[int] = None,
    gamma: Optional[float] = None,
    min_child_weight: Optional[float] = None,
    max_delta_step: Optional[float] = None,
    subsample: Optional[float] = None,
    colsample_bytree: Optional[float] = None,
    colsample_bylevel: Optional[float] = None,
    colsample_bynode: Optional[float] = None,
    reg_alpha: Optional[float] = None,
    reg_lambda: Optional[float] = None,
    scale_pos_weight: Optional[float] = None,
    base_score: Optional[float] = None,
    missing: float = nan,
    num_parallel_tree: Optional[int] = None,
    monotone_constraints: Optional[Dict[str, int], str] = None,
    interaction_constraints: Optional[str, List[Tuple[str]]] = None,
    importance_type: Optional[str] = None,
    gpu_id: Optional[int] = None,
    validate_parameters: Optional[bool] = None,
    predictor: Optional[str] = None,
    enable_categorical: bool = False,
    use_label_encoder: bool = False,
    random_state: Optional[RandomState, int] = None,
    verbosity: Optional[int] = None
)
post_processing(y_preds: ndarray) → ndarray
__init__(
    n_bits: int = 6,
    max_depth: Optional[int] = 3,
    learning_rate: Optional[float] = 0.1,
    n_estimators: Optional[int] = 20,
    objective: Optional[str] = 'reg:squarederror',
    booster: Optional[str] = None,
    tree_method: Optional[str] = None,
    n_jobs: Optional[int] = None,
    gamma: Optional[float] = None,
    min_child_weight: Optional[float] = None,
    max_delta_step: Optional[float] = None,
    subsample: Optional[float] = None,
    colsample_bytree: Optional[float] = None,
    colsample_bylevel: Optional[float] = None,
    colsample_bynode: Optional[float] = None,
    reg_alpha: Optional[float] = None,
    reg_lambda: Optional[float] = None,
    scale_pos_weight: Optional[float] = None,
    base_score: Optional[float] = None,
    missing: float = nan,
    num_parallel_tree: Optional[int] = None,
    monotone_constraints: Optional[Dict[str, int], str] = None,
    interaction_constraints: Optional[str, List[Tuple[str]]] = None,
    importance_type: Optional[str] = None,
    gpu_id: Optional[int] = None,
    validate_parameters: Optional[bool] = None,
    predictor: Optional[str] = None,
    enable_categorical: bool = False,
    use_label_encoder: bool = False,
    random_state: Optional[RandomState, int] = None,
    verbosity: Optional[int] = None
)
fit(X, y, **kwargs) → Any
post_processing(y_preds: ndarray) → ndarray
convert_torch_tensor_or_numpy_array_to_numpy_array(
    torch_tensor_or_numpy_array: Union[Tensor, ndarray]
) → ndarray
compile_torch_model(
    torch_model: Module,
    torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]],
    import_qat: bool = False,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    n_bits=8,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → QuantizedModule
compile_onnx_model(
    onnx_model: ModelProto,
    torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]],
    import_qat: bool = False,
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    n_bits=8,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → QuantizedModule
compile_brevitas_qat_model(
    torch_model: Module,
    torch_inputset: Union[Tensor, ndarray, Tuple[Union[Tensor, ndarray], ]],
    n_bits: Union[int, dict],
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05,
    output_onnx_file: Optional[str] = None
) → QuantizedModule
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse()
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse()
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
can_fuse() → bool
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None
can_fuse() → bool
can_fuse() → bool
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
can_fuse() → bool
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None
calibrate(*inputs: ndarray) → ndarray
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
tree_sum(input_qarray, is_calibration=False)
__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray
concrete.ml.quantization.quantized_ops
quantized_ops.QuantizedAbs
quantized_ops.QuantizedAdd
quantized_ops.QuantizedAvgPool
quantized_ops.QuantizedBatchNormalization
quantized_ops.QuantizedBrevitasQuant
quantized_ops.QuantizedCast
quantized_ops.QuantizedCelu
quantized_ops.QuantizedClip
quantized_ops.QuantizedConv
quantized_ops.QuantizedDiv
quantized_ops.QuantizedElu
quantized_ops.QuantizedErf
quantized_ops.QuantizedExp
quantized_ops.QuantizedFlatten
quantized_ops.QuantizedGemm
quantized_ops.QuantizedGreater
quantized_ops.QuantizedGreaterOrEqual
quantized_ops.QuantizedHardSigmoid
quantized_ops.QuantizedHardSwish
quantized_ops.QuantizedIdentity
quantized_ops.QuantizedLeakyRelu
quantized_ops.QuantizedLess
quantized_ops.QuantizedLessOrEqual
quantized_ops.QuantizedLog
quantized_ops.QuantizedMatMul
quantized_ops.QuantizedMul
quantized_ops.QuantizedNot
quantized_ops.QuantizedOr
quantized_ops.QuantizedPRelu
quantized_ops.QuantizedPad
quantized_ops.QuantizedPow
quantized_ops.QuantizedReduceSum
quantized_ops.QuantizedRelu
quantized_ops.QuantizedReshape
quantized_ops.QuantizedRound
quantized_ops.QuantizedSelu
quantized_ops.QuantizedSigmoid
quantized_ops.QuantizedSoftplus
quantized_ops.QuantizedSub
quantized_ops.QuantizedTanh
quantized_ops.QuantizedTranspose
quantized_ops.QuantizedWhere
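
For orientation, here is a minimal sketch of how one of these operators can be exercised on its own. The QuantizedSigmoid class and its calibrate/q_impl methods come from this module; the QuantizedArray import path, its (n_bits, values) constructor and its dequant() method are assumptions made for the example.

import numpy

# Assumed import path for QuantizedArray
from concrete.ml.quantization import QuantizedArray
from concrete.ml.quantization.quantized_ops import QuantizedSigmoid

# Build the operator with a 4-bit output quantizer; the input name is arbitrary
q_sigmoid = QuantizedSigmoid(n_bits_output=4, int_input_names={"x"})

# Calibrate the output quantization parameters on representative float data
calibration_data = numpy.linspace(-3, 3, 20)
q_sigmoid.calibrate(calibration_data)

# Quantize the input and run the quantized implementation
q_input = QuantizedArray(4, calibration_data)  # assumed constructor: (n_bits, float values)
q_output = q_sigmoid.q_impl(q_input)

# Dequantize the result to inspect it (dequant() is assumed here)
print(q_output.dequant())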
DecisionTreeClassifier

__init__(
    criterion='gini',
    splitter='best',
    max_depth=None,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features=None,
    random_state=None,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    class_weight=None,
    ccp_alpha: float = 0.0,
    n_bits: int = 6
)

DecisionTreeRegressor

__init__(
    criterion='squared_error',
    splitter='best',
    max_depth=None,
    min_samples_split=2,
    min_samples_leaf=1,
    min_weight_fraction_leaf=0.0,
    max_features=None,
    random_state=None,
    max_leaf_nodes=None,
    min_impurity_decrease=0.0,
    ccp_alpha=0.0,
    n_bits: int = 6
)
concrete.ml.sklearn.tree
tree.DecisionTreeClassifier
tree.DecisionTreeRegressor
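
For orientation, a brief sketch of where the n_bits parameter fits in the usual fit/predict workflow; the tiny synthetic dataset and the max_depth value are purely illustrative.

import numpy

from concrete.ml.sklearn.tree import DecisionTreeClassifier

# Illustrative data: 100 points in 3 dimensions, labeled by a simple rule
rng = numpy.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(numpy.int64)

# n_bits controls the quantization of the tree's inputs and outputs (default 6)
model = DecisionTreeClassifier(max_depth=4, n_bits=6)
model.fit(X, y)

# Prediction here runs the quantized model in the clear
print((model.predict(X) == y).mean())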
concrete.ml.version
NumpyModule
concrete.ml.torch.numpy_module
numpy_module.NumpyModule

concrete.ml.version

module concrete.ml.version

File to manage the version of the package.

concrete.ml.torch.numpy_module

module concrete.ml.torch.numpy_module

A torch to numpy module.

Global Variables

  • OPSET_VERSION_FOR_ONNX_EXPORT


class NumpyModule

General interface to transform a torch.nn.Module into a numpy module.

Args:

  • torch_model (Union[nn.Module, onnx.ModelProto]): A fully trained torch model along with its parameters, or the ONNX graph of the model.

  • dummy_input (Union[torch.Tensor, Tuple[torch.Tensor, ...]]): Sample tensors for all the module inputs, used in the ONNX export to get an easy-to-manipulate representation of the network.

  • debug_onnx_output_file_path (Optional[Union[Path, str]], optional): An optional path indicating where to save the ONNX file exported by torch, for debugging. Defaults to None.

method __init__

__init__(
    model: Union[Module, ModelProto],
    dummy_input: Optional[Union[Tensor, Tuple[Tensor, ...]]] = None,
    debug_onnx_output_file_path: Optional[Union[Path, str]] = None
)

property onnx_model

Get the ONNX model.

Returns:

  • _onnx_model (onnx.ModelProto): the ONNX model


method forward

forward(*args: ndarray) → Union[ndarray, Tuple[ndarray, ]]

Apply a forward pass on args using only the equivalent numpy functions.

Args:

  • *args: the inputs of the forward function

Returns:

  • Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]: result of the forward on the given inputs
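
A minimal usage sketch; the two-layer torch network and the input shapes below are illustrative assumptions, not part of this reference.

import numpy
import torch
from torch import nn

from concrete.ml.torch.numpy_module import NumpyModule

# Purely illustrative torch model
torch_model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
dummy_input = torch.randn(1, 2)

# Export to ONNX behind the scenes and build the numpy equivalent
numpy_module = NumpyModule(torch_model, dummy_input)

# The forward pass now relies only on numpy operations
x = numpy.random.uniform(-1, 1, size=(1, 2)).astype(numpy.float32)
print(numpy_module.forward(x))

# The exported ONNX graph remains accessible for inspection
print(numpy_module.onnx_model)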

Artificial Neuron (from: Wikipedia)
Fully Connected Neural Network
Pruned Fully Connected Neural Network
Comparison neural networks
Comparison of classification decision boundaries between FHE and plaintext models
XGBoost n_bits comparison
Impact of p_error in a Linear Regression
Torch compilation flow with ONNX