Concrete-ML is built on top of Concrete-Numpy, which enables Numpy programs to be converted into FHE circuits.
training: A model is trained using plaintext, non-encrypted, training data.
quantization: The model is converted into an integer equivalent using quantization. Concrete-ML performs this step either during training (Quantization Aware Training) or after training (Post-training Quantization), depending on model type. Quantization converts inputs, model weights, and all intermediate values of the inference computation to integers. More information is available here.
simulation using the Virtual Library: Testing FHE models on very large data-sets can take a long time. Furthermore, not all models are compatible with FHE constraints out-of-the-box. Simulation using the Virtual Library allows you to execute a model that was quantized, to measure the accuracy it would have in FHE, but also to determine the modifications required to make it FHE compatible. Simulation is described in more detail here.
compilation: Once the model is quantized, simulation can confirm it has good accuracy in FHE. The model then needs to be compiled using Concrete's FHE compiler to produce an equivalent FHE circuit. This circuit is represented as an MLIR program consisting of low level cryptographic operations. You can read more about FHE compilation here, MLIR here, and about the low-level Concrete library here.
inference: The compiled model can then be executed on encrypted data, once the proper keys have been generated. The model can also be deployed to a server and used to run private inference on encrypted inputs.
You can see some examples of the model development workflow here.
client/server deployment: In a client/server setting, the model can be exported in a way that:
allows the client to generate keys, encrypt, and decrypt.
provides a compiled model that can run on the server to perform inference on encrypted data.
key generation: The data owner (client) needs to generate a pair of private keys (to encrypt/decrypt their data and results) and a public evaluation key (for the model's FHE evaluation on the server).
You can see an example of the model deployment workflow here.
Concrete-ML and Concrete-Numpy are tools that hide away the details of the underlying cryptography scheme, called TFHE. However, some cryptography concepts are still useful when using these two toolkits:
encryption/decryption: These operations transform plaintext, i.e. human-readable information, into ciphertext, i.e. data that contains a form of the original plaintext that is unreadable by a human or computer without the proper key to decrypt it. Encryption takes plaintext and an encryption key and produces ciphertext, while decryption is the inverse operation.
encrypted inference: FHE allows a third party to execute (i.e. run inference or predict) a machine learning model on encrypted data (a ciphertext). The result of the inference is also encrypted and can only be read by the person who receives the decryption key.
keys: A key is a series of bits used within an encryption algorithm for encrypting data so that the corresponding ciphertext appears random.
key generation: Cryptographic keys need to be generated using random number generators. Their size may be large and key generation may take a long time. However, keys only need to be generated once for each model used by a client.
guaranteed correctness of encrypted computations: To achieve security, TFHE, the underlying encryption scheme, adds random noise to ciphertexts. This can induce errors during processing of encrypted data, depending on noise parameters. By default, Concrete-ML uses parameters that ensure the correctness of the encrypted computation, so there is no need to account for noise parametrization. Therefore, the results on encrypted data will be the same as the results of simulation on clear data.
While Concrete-ML users only need to understand the cryptography concepts above, for a deeper understanding of the cryptography behind the Concrete stack, please see the whitepaper on TFHE and Programmable Bootstrapping or this series of blogs.
To respect FHE constraints, all numerical programs that include non-linear operations over encrypted data must have all inputs, constants, and intermediate values represented with integers of a maximum of 16 bits.
Thus, Concrete-ML quantizes the input data and model outputs in the same way as weights and activations. The main levers to control accumulator bit-width are the number of bits used for the inputs, weights, and activations of the model. These parameters are crucial to comply with the constraint on accumulator bit-widths. Please refer to the quantization documentation for more details about how to develop models with quantization in Concrete-ML.
However, these methods may cause a reduction in the accuracy of the model since its representative power is diminished. Most importantly, carefully choosing a quantization approach can alleviate accuracy loss, all the while allowing compilation to FHE. Concrete-ML offers built-in models that already include quantization algorithms, and users only need to configure some of their parameters, such as the number of bits, discussed above. See the advanced quantization guide for information about configuring these parameters for various models.
Additional specific methods can help to make models compatible with FHE constraints. For instance, dimensionality reduction can reduce the number of input features and, thus, the maximum accumulator bit-width reached within a circuit. Similarly, sparsity-inducing training methods, such as pruning, deactivate some features during inference, which also helps. For now, dimensionality reduction is considered as a pre-processing step, while pruning is used in the built-in neural networks.
The configuration of model quantization parameters is illustrated in the advanced examples for Linear and Logistic Regressions and dimensionality reduction is shown in the Poisson regression example.
Concrete-ML provides several of the most popular linear models for regression and classification that can be found in Scikit-learn:
Using these models in FHE is extremely similar to what can be done with scikit-learn's API, making it easy for data scientists who are used to this framework to get started with Concrete-ML.
Models are also compatible with some of scikit-learn's main workflows, such as `Pipeline()` and `GridSearch()`.
The `n_bits` parameter controls the bit-width of the inputs and weights of the linear models. When a non-linear mapping is applied by the model, such as exp or sigmoid, Concrete-ML currently applies it on the client side, on clear-text values that are the decrypted output of the linear part of the model. Thus, linear models do not use table lookups and can, therefore, use high-precision integers for weights and inputs. The `n_bits` parameter can be set to `8` or more bits for models with up to 300 input dimensions. When the input has more dimensions, `n_bits` must be reduced to `6-7`. Accuracy and R2 scores are preserved down to `n_bits=6`, compared to the non-quantized float models from scikit-learn.
Below is an example of how to use a LogisticRegression model in FHE on a simple data-set for classification. A more complete example can be found in the LogisticRegression notebook.
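A minimal sketch of that workflow is shown below; the data-set, the value of `n_bits`, and the train/test split are illustrative assumptions, not the values used in the notebook:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

# Create a small synthetic classification data-set
X, y = make_classification(n_samples=200, n_features=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Train the quantized model (n_bits controls input/weight precision)
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Compile to an FHE circuit, using the training set for calibration
model.compile(X_train)

# Run encrypted inference on a few test samples
y_pred_fhe = model.predict(X_test[:3], execute_in_fhe=True)
print(y_pred_fhe)
```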
We can then plot the decision boundary of the classifier and compare those results with a scikit-learn model executed in clear. The complete code can be found in the LogisticRegression notebook.
The overall accuracy scores are identical (93%) between the scikit-learn model (executed in the clear) and the Concrete-ML one (executed in FHE). In fact, quantization has little impact on the decision boundaries, as linear models are able to consider large precision numbers when quantizing inputs and weights in Concrete-ML. Additionally, as the linear models do not use PBS, the FHE computations are always exact, meaning the FHE predictions are always identical to the quantized clear ones.
This section lists several demos that apply Concrete-ML to some popular machine learning problems. They show how to build ML models that perform well under FHE constraints, and then how to perform the conversion to FHE.
Simpler tutorials that discuss only model usage and compilation are also available for the built-in models and for deep learning.
Concrete-ML models can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.
As seen in the concepts section, a Concrete-ML model, once compiled to FHE, generates machine code that performs the inference on private data. Furthermore, secret encryption keys are needed so that the user can securely encrypt their data and decrypt the inference result. An evaluation key is also needed for the server to securely process the user's encrypted data.
Keys are generated by the user once for each service they use, based on the model the service provides and its cryptographic parameters.
The overall communications protocol to enable cloud deployment of machine learning services can be summarized in the following diagram:
The steps detailed above are as follows:
The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference.
The client requests the cryptographic parameters (also called "client specs"). Once it receives them from the server, the secret and evaluation keys are generated.
The client sends the evaluation key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data.
The server uses the evaluation key to securely run inference on the user's data and sends back the encrypted result.
The client now decrypts the result and can send back new requests.
For more information on how to implement this basic secure inference protocol, refer to the Production Deployment section and to the client/server example.
These examples illustrate the basic usage of built-in Concrete-ML models. For more examples showing how to train high-accuracy models on more complex data-sets, see the Demos and Tutorials section.
In Concrete-ML, built-in linear models are exact equivalents to their scikit-learn counterparts. Indeed, since they do not apply any non-linearity during inference, these models are very fast (~1ms FHE inference time) and can use high precision integers (between 20-25 bits).
Tree-based models apply non-linear functions that enable comparisons of inputs and trained thresholds. Thus, they are limited with respect to the number of bits used to represent the inputs. But as these examples show, in practice 5-6 bits are sufficient to exactly reproduce the behavior of their scikit-learn counterpart models.
As shown in the examples below, built-in neural networks can be configured to work with user-specified accumulator sizes, which allow the user to adjust the speed/accuracy tradeoff.
It is recommended to use simulation (the Virtual Library) to configure the speed/accuracy trade-off for tree-based models and neural networks, using grid-search or your own heuristics.
These examples show how to use the built-in linear models on synthetic data, which allows for easy visualization of the decision boundaries or trend lines. Executing these 1D and 2D models in FHE takes around 1 millisecond.
These two examples show generalized linear models (GLM) on a real-world data-set. As the non-linear, inverse-link functions are computed in the clear, these models do not use PBS and are, thus, very fast (~1ms execution time).
Based on three different synthetic data-sets, all the built-in classifiers are demonstrated in this notebook, showing accuracies, inference times, accumulator bit-widths, and decision boundaries.
Concrete-ML provides simple built-in neural network models with a scikit-learn interface through the `NeuralNetClassifier` and `NeuralNetRegressor` classes.
The neural network models are implemented with skorch, which provides a scikit-learn-like interface to Torch models.
The Concrete-ML models are multi-layer, fully-connected networks with customizable activation functions and a number of neurons in each layer. This approach is similar to what is available in scikit-learn using the `MLPClassifier`/`MLPRegressor` classes. The built-in models train easily with a single call to `.fit()`, which will automatically quantize the weights and activations. These models use Quantization Aware Training, allowing good performance for low precision (down to 2-3 bit) weights and activations.
While `NeuralNetClassifier` and `NeuralNetRegressor` provide scikit-learn-like models, their architecture is somewhat restricted in order to make training easy and robust. If you need more advanced models, you can convert custom neural networks as described in the custom models documentation.
Good quantization parameter values are critical to make models respect FHE constraints. Weights and activations should be quantized to low precision (e.g. 2-4 bits). Furthermore, the sparsity of the network can be tuned as described below, to avoid accumulator overflow.
To create an instance of a Fully Connected Neural Network (FCNN), you need to instantiate one of the `NeuralNetClassifier` and `NeuralNetRegressor` classes and configure a number of parameters that are passed to their constructor. Note that some parameters need to be prefixed by `module__`, while others don't. Basically, the parameters that are related to the model, i.e. the underlying `nn.Module`, must have the prefix. The parameters that are related to training options do not require the prefix.
The figure above shows, on the right, the Concrete-ML neural network, trained with Quantization Aware Training, in a FHE-compatible configuration. The figure compares this network to the floating-point equivalent, trained with scikit-learn.
- `module__n_layers`: number of layers in the FCNN; must be at least 1. Note that this is the total number of layers. For a single, hidden-layer NN model, set `module__n_layers=2`
- `module__n_outputs`: number of outputs (classes or targets)
- `module__input_dim`: dimensionality of the input
- `n_w_bits` (default 3): number of bits for weights
- `n_a_bits` (default 3): number of bits for activations and inputs
- `max_epochs` (default 10): number of epochs to train the network
- `verbose` (default False): whether to log loss/metrics during training
- `lr` (default 0.001): learning rate
When you have training data in the form of a NumPy array, and targets in a NumPy 1D array, you can set:
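For instance, a hedged sketch of such a configuration might look like this; the parameter values and the existence of `X_train`/`y_train` are assumptions for illustration:

```python
import numpy as np
from torch import nn

from concrete.ml.sklearn import NeuralNetClassifier

# X_train: 2D float NumPy array, y_train: 1D integer NumPy array (assumed to exist)
params = {
    "module__n_layers": 2,                  # one hidden layer plus the output layer
    "module__n_outputs": 2,                 # binary classification
    "module__input_dim": X_train.shape[1],  # dimensionality of the input
    "module__activation_function": nn.ReLU,
    "n_w_bits": 3,
    "n_a_bits": 3,
    "max_epochs": 10,
}

model = NeuralNetClassifier(**params)
model.fit(X_train.astype(np.float32), y_train)
```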
You can give weights to each class to use in training. Note that this must be supported by the underlying PyTorch loss function.
The `n_hidden_neurons_multiplier` parameter influences training accuracy, as it controls the number of non-zero neurons that are allowed in each layer. Increasing `n_hidden_neurons_multiplier` improves accuracy, but should take into account precision limitations to avoid overflow in the accumulator. The default value is a good compromise that avoids overflow in most cases, but you may want to change the value of this parameter to reduce the breadth of the network if you have overflow errors. A value of 1 should be completely safe with respect to overflow.
Concrete-ML is an open-source, privacy-preserving, machine learning inference framework based on fully homomorphic encryption (FHE). It enables data scientists without any prior knowledge of cryptography to automatically turn machine learning models into their FHE equivalent, using familiar APIs from scikit-learn and PyTorch (see how it looks for linear models, tree-based models, and neural networks).
Fully Homomorphic Encryption (FHE) is an encryption technique that allows computing directly on encrypted data, without needing to decrypt it. With FHE, you can build private-by-design applications without compromising on features. You can learn more about FHE in introductory resources or by joining the FHE.org community.
This example shows the typical flow of a Concrete-ML model:
The model is trained on unencrypted (plaintext) data using scikit-learn. As FHE operates over integers, Concrete-ML quantizes the model to use only integers during inference.
The quantized model is compiled to a FHE equivalent. Under the hood, the model is first converted to a Concrete-Numpy program, then compiled.
To make a model work with FHE, the only constraint is to make it run within the supported precision limitations of Concrete-ML (currently 16-bit integers). Thus, machine learning models are required to be quantized, which sometimes leads to a loss of accuracy versus the original model, which operates on plaintext.
Additionally, Concrete-ML currently only supports FHE inference. Training, on the other hand, has to be done on unencrypted data, producing a model which is then converted to an FHE equivalent that can perform encrypted inference, i.e. prediction over encrypted data.
Finally, in Concrete-ML there is currently no support for pre-processing model inputs and post-processing model outputs. These processing stages may involve text-to-numerical feature transformation, dimensionality reduction, KNN or clustering, featurization, normalization, and the mixing of results of ensemble models.
All of these issues are currently being addressed and significant improvements are expected to be released in the coming months.
If you have built awesome projects using Concrete-ML, feel free to let us know and we'll link to your work!
Concrete-ML provides several of the most popular classification and regression tree models that can be found in scikit-learn:
In addition to support for scikit-learn, Concrete-ML also supports XGBoost's `XGBClassifier`:
Here's an example of how to use this model in FHE on a popular data-set, using some of scikit-learn's pre-processing tools. A more complete example can be found in the XGBClassifier notebook.
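The sketch below illustrates this flow under a few assumptions: the data-set, the hyper-parameter grid, and the `n_bits` value are illustrative, not those of the original notebook:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

from concrete.ml.sklearn import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Scikit-learn pre-processing is done in the clear, before quantization
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Grid-search the Concrete-ML XGBClassifier like any scikit-learn estimator
grid = GridSearchCV(
    XGBClassifier(n_bits=6),
    {"n_estimators": [10, 20], "max_depth": [2, 3]},
    cv=3,
)
grid.fit(X_train_s, y_train)

# Compile the best model to FHE and predict on one encrypted sample
model = grid.best_estimator_
model.compile(X_train_s)
print(model.predict(X_test_s[:1], execute_in_fhe=True))
```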
The graph above shows that, when using a sufficiently high bit-width, quantization has little impact on the decision boundaries of the Concrete-ML FHE decision tree models. As the quantization is done individually on each input feature, the impact of quantization is strongly reduced, and, thus, FHE tree-based models reach similar accuracy as their floating point equivalents. Using 6 bits for quantization makes the Concrete-ML model reach or exceed the floating point accuracy. The number of bits for quantization can be adjusted through the `n_bits` parameter.
When `n_bits` is set low, the quantization process may sometimes create artifacts that lead to a decrease in accuracy, but the execution time in FHE also decreases. In this way, it is possible to adjust the accuracy/speed trade-off, and some accuracy can be recovered by increasing `n_estimators`.
The following graph shows that using 5-6 bits of quantization is usually sufficient to reach the performance of a non-quantized XGBoost model on floating point data. The metrics plotted are accuracy and F1-score on the `spambase` data-set.
Concrete-ML provides partial support for Pandas, with most available models (linear and tree-based models) usable on Pandas dataframes just as they would be used with NumPy arrays.
The table below summarizes current compatibility:
The following example considers a `LogisticRegression` model on a simple classification problem. A more advanced example, which considers an `XGBClassifier`, can be found in the documentation notebooks.
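A hedged sketch of the Pandas workflow is given below; the toy dataframe is an illustrative assumption. Note that, as summarized in the compatibility table, `compile` expects a NumPy array, so the dataframe is converted with `to_numpy()`:

```python
import pandas as pd

from concrete.ml.sklearn import LogisticRegression

# A small classification problem stored in a Pandas dataframe
df = pd.DataFrame(
    {
        "feat_1": [0.1, 0.3, -1.2, 0.8, -0.5, 1.1],
        "feat_2": [1.0, -0.4, 0.2, -1.3, 0.7, -0.9],
        "target": [0, 1, 0, 1, 0, 1],
    }
)

X = df[["feat_1", "feat_2"]]
y = df["target"]

# fit and predict accept dataframes directly; compile requires a NumPy array
model = LogisticRegression(n_bits=8)
model.fit(X, y)
model.compile(X.to_numpy())

print(model.predict(X, execute_in_fhe=False))
```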
Using the spambase data-set, this example shows how to train a classifier that detects spam, based on features extracted from email messages. A grid-search is performed over decision-tree hyper-parameters to find the best ones.
This example shows how to train tree-ensemble models (either XGBoost or Random Forest), first on a synthetic data-set and then on a real-world data-set. Grid-search is used to find the best number of trees in the ensemble.
Privacy-preserving prediction of house prices is shown in this example, using a house-prices data-set. Using 50 trees in the ensemble, with 5 bits of precision for the input features, the FHE regressor obtains an R2 score of 0.90 and an execution time of 7-8 seconds.
Two different configurations of the built-in, fully-connected neural networks are shown. First, a small bit-width accumulator network is trained on a simple data-set and compared to a PyTorch floating point network. Second, a larger accumulator (>8 bits) is demonstrated on a more complex data-set.
The classifier comparison notebook shows the behavior of built-in neural networks on several synthetic data-sets.
- `module__activation_function`: can be one of the Torch activations (e.g. `nn.ReLU`; see the full list in the PyTorch documentation)
- `n_accum_bits` (default 8): maximum desired accumulator bit-width. The implementation will attempt to keep accumulators under this bit-width through pruning, i.e. by setting some weights to zero
- `module__n_hidden_neurons_multiplier`: the number of hidden neurons will be automatically set proportional to the dimensionality of the input (i.e. the value for `module__input_dim`). This parameter controls the proportionality factor and is set to 4 by default. This value gives good accuracy while avoiding accumulator overflow. See the pruning and quantization sections for more information
- Other parameters from skorch are described in the skorch documentation
Here is a simple example of classification on encrypted data using logistic regression. More examples can be found in the built-in model examples.
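A minimal sketch of such an example, under illustrative assumptions about the data-set and parameters, could be:

```python
from sklearn.datasets import load_breast_cancer

from concrete.ml.sklearn import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)

model = LogisticRegression(n_bits=8)
model.fit(X, y)

# Compile to FHE and run an encrypted prediction on a single sample
model.compile(X)
print(model.predict(X[:1], execute_in_fhe=True))
```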
Inference can then be done on encrypted data. The above example shows encrypted inference in the model-development phase. Alternatively, during deployment in a client/server setting, the data is encrypted by the client, processed securely by the server, and then decrypted by the client.
Concrete-ML is built on top of Zama's Concrete framework. It uses Concrete-Numpy, which itself uses the Concrete-Compiler and the low-level Concrete library. To use these libraries directly, refer to the Concrete-Numpy and Concrete-Compiler documentations.
Various tutorials are available for the built-in models and for deep learning. In addition, several standalone demos for use-cases can be found in the Demos and Tutorials section.
Support forum: (we answer in less than 24 hours).
Live discussion on the FHE.org Discord server: (inside the #concrete channel).
Do you have a question about Zama? You can write us on or send us an email at: hello@zama.ai
In a similar example, the decision boundaries of the Concrete-ML model can be plotted and then compared to the results of the classical XGBoost model executed in the clear. A 6-bit model is shown in order to illustrate the impact of quantization on classification. Similar plots can be found in the corresponding notebook.
| Method | Support with Pandas |
| --- | --- |
| `fit` | ✓ |
| `compile` | ✗ |
| `predict (execute_in_fhe=False)` | ✓ |
| `predict (execute_in_fhe=True)` | ✓ |
Please note that not all hardware/OS combinations are supported. Determine your platform, OS version, and Python version before referencing the table below.
Depending on your OS, Concrete-ML may be installed with Docker or with pip:
| OS / HW | Available on Docker | Available on pip |
| --- | --- | --- |
| Linux | Yes | Yes |
| Windows | Yes | Not currently |
| Windows Subsystem for Linux | Yes | Yes |
| macOS (Intel) | Yes | Yes |
| macOS (Apple Silicon, i.e. M1, M2, etc.) | Yes | Not currently |
Also, only some versions of `python` are supported: in the current release, these are `3.7` (Linux only), `3.8`, and `3.9`. Moreover, the Concrete-ML Python package requires `glibc >= 2.28`. On Linux, you can check your `glibc` version by running `ldd --version`.
Concrete-ML can be installed on Kaggle, but not on Google Colab (see the corresponding questions on the community forum for more details).
Most of these limits are shared with the rest of the Concrete stack (namely Concrete-Numpy and Concrete-Compiler). Support for more platforms will be added in the future.
Installing Concrete-ML using PyPi requires a Linux-based OS or macOS running on an x86 CPU. For Apple Silicon, Docker is the only currently supported option (see below).
Installing on Windows can be done using Docker or WSL. On WSL, Concrete-ML will work as long as the package is not installed in the /mnt/c/ directory, which corresponds to the host OS filesystem.
To install Concrete-ML from PyPi, run the following:
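The package is published on PyPI under the name `concrete-ml`:

```sh
pip install concrete-ml
```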
This will automatically install all dependencies, notably Concrete-Numpy.
Concrete-ML can be installed using Docker by either pulling the latest image or a specific version:
The image can be used with Docker volumes, see the Docker documentation here.
The image can then be used via the following command:
This will launch a Concrete-ML enabled Jupyter server in Docker that can be accessed directly from a browser.
Alternatively, a shell can be launched in Docker, with or without volumes:
This guide provides a complete example of converting a PyTorch neural network into its FHE-friendly, quantized counterpart. It focuses on Quantization Aware Training a simple network on a synthetic data-set.
In general, quantization can be carried out in two different ways: either during training with Quantization Aware Training (QAT) or after the training phase with Post-Training Quantization (PTQ).
Regarding FHE-friendly neural networks, QAT is the best way to reach optimal accuracy under FHE constraints. This technique allows weights and activations to be reduced to very low bit-widths (e.g. 2-3 bits), which, combined with pruning, can keep accumulator bit-widths low.
Concrete-ML uses the third party library Brevitas to perform QAT for PyTorch NNs, but options exist for other frameworks such as Keras/Tensorflow.
Several demos and tutorials that use Brevitas are available in the Concrete-ML library, such as the CIFAR classification tutorial.
This guide is based on a notebook tutorial, from which some code blocks are documented here.
For a more formal description of the usage of Brevitas to build FHE-compatible neural networks, please see the Brevitas usage reference.
In PyTorch, using standard layers, a fully connected neural network would look as follows:
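For example, a small network for a 2D input could be written with standard PyTorch layers as follows (the layer sizes are illustrative):

```python
import torch.nn as nn


class SimpleNet(nn.Module):
    """Fully-connected network with two hidden layers, using standard PyTorch layers."""

    def __init__(self, n_hidden=30):
        super().__init__()
        self.fc1 = nn.Linear(2, n_hidden)
        self.fc2 = nn.Linear(n_hidden, n_hidden)
        self.fc3 = nn.Linear(n_hidden, 2)
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.act(self.fc1(x))
        x = self.act(self.fc2(x))
        return self.fc3(x)
```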
The notebook tutorial shows how to train a fully-connected neural network, similar to the one above, on a synthetic 2D data-set with a checkerboard grid pattern of 100 x 100 points. The data is split into 9500 training and 500 test samples.
Once trained, this PyTorch network can be imported using the `compile_torch_model` function. This function uses simple Post-Training Quantization.
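A hedged sketch of this import step is shown below; the exact keyword arguments may differ between Concrete-ML versions, and `torch_model`/`X_train` are assumed to come from the training step above:

```python
from concrete.ml.torch.compile import compile_torch_model

# torch_model: a trained instance of SimpleNet, X_train: NumPy calibration data
quantized_module = compile_torch_model(
    torch_model,
    X_train,   # representative input set used for Post-Training Quantization calibration
    n_bits=3,  # quantization bit-width (assumption, matching the experiment below)
)
```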
The network was trained using different numbers of neurons in the hidden layers, and quantized using 3-bits weights and activations. The mean accumulator size shown below was extracted using the Virtual Library and is measured as the mean over 10 runs of the experiment. An accumulator of 6.6 means that 4 times out of 10 the accumulator measured was 6 bits while 6 times it was 7 bits.
| Metric (hidden-layer width increases left to right) | | | |
| --- | --- | --- | --- |
| fp32 accuracy | 68.70% | 83.32% | 88.06% |
| 3-bit accuracy | 56.44% | 55.54% | 56.50% |
| mean accumulator size | 6.6 | 6.9 | 7.4 |
This shows that the fp32 accuracy and accumulator size increase with the number of hidden neurons, while the 3-bit accuracy remains low irrespective of the number of neurons. While all the configurations tried here were FHE-compatible (accumulator < 16 bits), it is often preferable to have a lower accumulator size in order to speed up inference time.
The accumulator size is determined by Concrete-Numpy as being the maximum bit-width encountered anywhere in the encrypted circuit.
Quantization Aware Training using Brevitas is the best way to guarantee a good accuracy for Concrete-ML compatible neural networks.
Brevitas provides a quantized version of almost all PyTorch layers (a `Linear` layer becomes `QuantLinear`, a `ReLU` layer becomes `QuantReLU`, and so on), plus some extra quantization parameters, such as:

- `bit_width`: precision quantization bits for activations
- `act_quant`: quantization protocol for the activations
- `weight_bit_width`: precision quantization bits for weights
- `weight_quant`: quantization protocol for the weights
In order to use FHE, the network must be quantized from end to end. Thanks to Brevitas's `QuantIdentity` layer, it is possible to quantize the input by placing it at the entry point of the network. Moreover, it is also possible to combine PyTorch and Brevitas layers, provided that a `QuantIdentity` layer is placed after the PyTorch layer. The following table gives the replacements to be made to convert a PyTorch NN for Concrete-ML compatibility.
| PyTorch layer | Brevitas replacement |
| --- | --- |
| `torch.nn.Linear` | `brevitas.quant.QuantLinear` |
| `torch.nn.Conv2d` | `brevitas.quant.Conv2d` |
| `torch.nn.AvgPool2d` | `torch.nn.AvgPool2d` + `brevitas.quant.QuantIdentity` |
| `torch.nn.ReLU` | `brevitas.quant.QuantReLU` |
Furthermore, some PyTorch operators (from the PyTorch functional API) require a `brevitas.quant.QuantIdentity` to be applied on their inputs:

- `torch.transpose`
- `torch.add` (between two activation tensors)
- `torch.reshape`
- `torch.flatten`
The QAT import tool in Concrete-ML is a work in progress. While it has been tested with some networks built with Brevitas, it is possible to use other tools to obtain QAT networks.
For instance, with Brevitas, the network above becomes:
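The sketch below is not the exact notebook code, but it follows the replacement table above and the `QuantSimpleNet` naming used later in this guide; the bit-width and layer sizes are illustrative:

```python
import torch.nn as nn
import brevitas.nn as qnn

N_BITS = 3


class QuantSimpleNet(nn.Module):
    """Brevitas QAT version of the fully-connected network above."""

    def __init__(self, n_hidden=100):
        super().__init__()
        # Quantize the input at the entry point of the network
        self.quant_inp = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(2, n_hidden, bias=True, bias_quant=None, weight_bit_width=N_BITS)
        self.relu1 = qnn.QuantReLU(bit_width=N_BITS, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, bias=True, bias_quant=None, weight_bit_width=N_BITS)
        self.relu2 = qnn.QuantReLU(bit_width=N_BITS, return_quant_tensor=True)
        self.fc3 = qnn.QuantLinear(n_hidden, 2, bias=True, bias_quant=None, weight_bit_width=N_BITS)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.relu1(self.fc1(x))
        x = self.relu2(self.fc2(x))
        return self.fc3(x)
```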
Note that in the network above, biases are used for linear layers but are not quantized (`"bias": True, "bias_quant": None`). The addition of the bias is a univariate operation and is fused into the activation function.
Training this network with pruning (see below) with 30 out of 100 total non-zero neurons gives good accuracy while keeping the accumulator size low.
| Metric | Value |
| --- | --- |
| 3-bit accuracy, Brevitas | 95.4% |
| 3-bit accuracy, Concrete-ML | 95.4% |
| Accumulator size | 7 |
The PyTorch QAT training loop is the same as the standard floating point training loop, but hyper-parameters such as learning rate might need to be adjusted.
Quantization Aware Training is somewhat slower than normal training. QAT introduces quantization during both the forward and backward passes. The quantization process is inefficient on GPUs as its computational intensity is low with respect to data transfer time.
Considering that FHE only works with limited integer precision, there is a risk of overflowing in the accumulator, which will make Concrete-ML raise an error.
To understand how to overcome this limitation, consider a scenario where 2 bits are used for weights and layer inputs/outputs. The `Linear` layer computes a dot product between weights and inputs, $y = \sum_i w_i x_i$. With 2 bits, no overflow can occur during the computation of the `Linear` layer as long as the number of neurons does not exceed 14, i.e. the sum of 14 products of 2-bit numbers does not exceed 7 bits.

By default, Concrete-ML uses symmetric quantization for model weights, with values in the interval $\left[-2^{n_{bits}-1}, 2^{n_{bits}-1}-1\right]$. For example, for $n_{bits}=2$ the possible values are $\{-2, -1, 0, 1\}$; for $n_{bits}=3$, the values can be $\{-4, -3, -2, -1, 0, 1, 2, 3\}$.

However, in a typical setting, the weights will not all have the maximum or minimum values (e.g. $-2^{n_{bits}-1}$). Instead, weights typically have a normal distribution around 0, which is one of the motivating factors for their symmetric quantization. A symmetric distribution and many zero-valued weights are desirable because opposite-sign weights can cancel each other out and zero weights do not increase the accumulator size.
This fact can be leveraged to train a network with more neurons, while not overflowing the accumulator, using a technique called pruning, where the developer can impose a number of zero-valued weights. Torch provides support for pruning out of the box.
The following code shows how to use pruning in the previous example:
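The notebook may implement pruning differently; the sketch below uses PyTorch's built-in unstructured L1 pruning to zero out 70% of the hidden-layer weights (in the spirit of keeping roughly 30 out of 100 neurons' worth of non-zero weights, as described above):

```python
import torch.nn.utils.prune as prune


def toggle_pruning(model, enable: bool, amount: float = 0.7):
    """Enable unstructured L1 pruning on the hidden layers, or make it permanent."""
    for layer in (model.fc1, model.fc2):
        if enable:
            prune.l1_unstructured(layer, name="weight", amount=amount)
        else:
            # Remove the re-parametrization and keep the zeroed weights
            prune.remove(layer, name="weight")


# Prune during training, then freeze the zeros before export/compilation
toggle_pruning(torch_model, enable=True)
# ... training loop ...
toggle_pruning(torch_model, enable=False)
```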
Results with `PrunedQuantNet`, a pruned version of the `QuantSimpleNet` with 100 neurons on the hidden layers, are given below, showing a mean accumulator size measured over 10 runs of the experiment:
| Metric (pruned configurations) | | |
| --- | --- | --- |
| 3-bit accuracy | 82.50% | 88.06% |
| Mean accumulator size | 6.6 | 6.8 |
This shows that the 3-bit accuracy has been improved while maintaining a near-constant mean accumulator size.
When pruning a larger neural network during training, it is easier to obtain a low bit-width accumulator while maintaining better final accuracy. Thus, pruning is more robust than training a similar, smaller network.
This section provides a set of tools and guidelines to help users build optimized FHE-compatible models.
The Virtual Lib in Concrete-ML is a prototype that provides drop-in replacements for Concrete-Numpy's compiler, allowing users to simulate FHE execution, including any probabilistic behavior FHE may induce. The Virtual Library comes from Concrete-Numpy, where it is called Virtual Circuits.
The Virtual Lib can be useful when developing and iterating on an ML model implementation. For example, you can check that your model is compatible in terms of operands (all integers) with the Virtual Lib compilation. Then, you can check how many bits your ML model would require, which can give you hints about ways it could be modified to compile it to an actual FHE Circuit. As FHE non-linear models work with integers up to 16 bits, with a tradeoff between number of bits and FHE execution speed, the Virtual Lib can help to find the optimal model design.
The Virtual Lib, being pure Python and not requiring crypto key generation, can be much faster than the actual compilation and FHE execution. This allows for faster iterations, debugging, and FHE simulation, regardless of the bit-width used. For example, this was used for the red/blue contours in the Classifier Comparison notebook, as computing in FHE for the whole grid and all the classifiers would take significant time.
The following example shows how to use the Virtual Lib in Concrete-ML. Simply add `use_virtual_lib = True` and `enable_unsafe_features = True` in a `Configuration`. The result of the compilation will then be a simulated circuit that allows for more precision or simulated FHE execution.
The following example produces a neural network that is not FHE-compatible:
Upon execution, the compiler will raise the following error within the graph representation:
Knowing that a linear/dense layer is implemented as a matrix multiplication, one can determine which parts of the op-graph listing in the exception message above correspond to which layers.
Layer weights initialization:
Input data:
First dense layer and activation function:
Second dense layer and activation function:
Third dense layer:
We can see here that the error is in the second layer because the product has exceeded the 16-bit precision limit. This error is only detected when the PBS operations are actually applied.
However, reducing the number of neurons in this layer resolves the error and makes the network FHE-compatible:
In FHE, univariate functions are encoded as table lookups, which are then implemented using Programmable Bootstrapping (PBS). PBS is a powerful technique but will require significantly more computing resources, and thus time, than simpler encrypted operations such as matrix multiplications, convolution, or additions.
Furthermore, the cost of PBS will depend on the bit-width of the compiled circuit. Every additional bit in the maximum bit-width raises the complexity of the PBS by a significant factor. It may be of interest to the model developer, then, to determine the bit-width of the circuit and the amount of PBS it performs.
This can be done by inspecting the MLIR code produced by the compiler:
There are several calls to `FHELinalg.apply_mapped_lookup_table` and `FHELinalg.apply_lookup_table`. These calls apply PBS to the cells of their input tensors. Their inputs in the listing above are: `tensor<1x2x!FHE.eint<8>>` for the first and last call and `tensor<1x50x!FHE.eint<8>>` for the two calls in the middle. Thus, PBS is applied 104 times.
Retrieving the bit-width of the circuit is then simply:
Decreasing the number of bits and the number of PBS applications induces large reductions in the computation time of the compiled circuit.
In addition to Concrete-ML models and custom models in torch, it is also possible to directly compile ONNX models. This can be particularly appealing, notably to import models trained with Keras.
ONNX models can be compiled by directly importing models that are already quantized with Quantization Aware Training (QAT) or by performing Post-Training Quantization (PTQ) with Concrete-ML.
The following example shows how to compile an ONNX model using PTQ. The model was initially trained using Keras before being exported to ONNX. The training code is not shown here.
This example uses Post-Training Quantization, i.e. the quantization is not performed during training. Thus, this model would not have good performance in FHE. Quantization Aware Training should be added by the model developer. Additionally, importing QAT ONNX models can be done as shown below.
While Keras was used in this example, it is not officially supported, as additional work is needed to test all of Keras's layer types and models.
QAT models contain quantizers in the ONNX graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized. Since these QAT models have quantizers that are configured during training to a specific number of bits, the ONNX graph will need to be imported using the same settings:
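A hedged sketch of this import is shown below; the import path and keyword arguments are assumptions based on the `compile_onnx_model` function and `import_qat` parameter mentioned elsewhere in this documentation:

```python
from concrete.ml.torch.compile import compile_onnx_model  # import path is an assumption

quantized_module = compile_onnx_model(
    onnx_model,       # QAT ONNX model that already contains quantizers
    X_calib,          # representative calibration inputs
    import_qat=True,  # re-use the quantizers present in the graph
    n_bits=3,         # must match the bit-width used during training (assumption)
)
```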
The following operators are supported for evaluation and conversion to an equivalent FHE circuit. Other operators were not implemented, either due to FHE constraints or because they are rarely used in PyTorch activations or scikit-learn models.
Abs
Acos
Acosh
Add
Asin
Asinh
Atan
Atanh
AveragePool
BatchNormalization
Cast
Celu
Clip
Concat
Constant
Conv
Cos
Cosh
Div
Elu
Equal
Erf
Exp
Flatten
Floor
Gemm
Greater
GreaterOrEqual
HardSigmoid
HardSwish
Identity
LeakyRelu
Less
LessOrEqual
Log
MatMul
Max
MaxPool
Min
Mul
Neg
Not
Or
PRelu
Pad
Pow
ReduceSum
Relu
Reshape
Round
Selu
Sigmoid
Sign
Sin
Sinh
Softplus
Sub
Tan
Tanh
ThresholdedRelu
Transpose
Unsqueeze
Where
onnx.brevitas.Quant
In addition to the built-in models, Concrete-ML supports generic machine learning models implemented with Torch, or exported as ONNX graphs.
As Quantization Aware Training (QAT) is the most appropriate method of training neural networks that are compatible with FHE constraints, Concrete-ML works with Brevitas, a library providing QAT support for PyTorch.
The following example uses a simple QAT PyTorch model that implements a fully connected neural network with two hidden layers. Due to its small size, making this model respect FHE constraints is relatively easy.
Once the model is trained, calling `compile_brevitas_qat_model` from Concrete-ML will automatically perform conversion and compilation of a QAT network. Here, 3-bit quantization is used for both the weights and activations.
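A minimal sketch of this call (argument names are assumptions; `torch_model` and `X_train` come from the training step):

```python
from concrete.ml.torch.compile import compile_brevitas_qat_model

quantized_numpy_module = compile_brevitas_qat_model(
    torch_model,  # trained Brevitas QAT network
    X_train,      # representative input set used for compilation
    n_bits=3,     # matches the bit-width used in the Brevitas layers
)
```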
The model can now be used to perform encrypted inference. Next, the test data is quantized:
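For example, assuming the quantized module exposes the `quantize_input` helper, symmetric to `dequantize_output` used below:

```python
x_test_q = quantized_numpy_module.quantize_input(X_test)
```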
and the encrypted inference can be run using either:

- `quantized_numpy_module.forward_and_dequant()` to compute predictions in the clear on quantized data, and then de-quantize the result. The return value of this function contains the de-quantized (float) output of running the model in the clear. Calling the forward function on the clear data is useful when debugging. The results in FHE will be the same as those on clear quantized data.
- `quantized_numpy_module.forward_fhe.encrypt_run_decrypt()` to perform the FHE inference. In this case, de-quantization is done in a second stage using `quantized_numpy_module.dequantize_output()`.
While the example above shows how to import a Brevitas/PyTorch model, Concrete-ML also provides an option to import generic QAT models implemented either in PyTorch or through ONNX. Deep learning models made with TensorFlow or Keras should also be usable, by first converting them to ONNX.
QAT models contain quantizers in the PyTorch graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized.
Suppose that `n_bits_qat` is the bit-width of activations and weights during the QAT process. To import a PyTorch QAT network, you can use the `compile_torch_model` library function, passing `import_qat=True`:
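A sketch of this call, with `X_calib` standing in for a representative calibration set:

```python
from concrete.ml.torch.compile import compile_torch_model

quantized_numpy_module = compile_torch_model(
    torch_model,        # generic QAT network with quantizers in its graph
    X_calib,            # representative calibration inputs
    import_qat=True,    # tell Concrete-ML the network is already quantized
    n_bits=n_bits_qat,  # same bit-width as used during QAT
)
```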
Alternatively, if you want to import an ONNX model directly, please see the ONNX guide. The `compile_onnx_model` function also supports the `import_qat` parameter.
When importing QAT models using this generic pipeline, a representative calibration set should be given as quantization parameters in the model need to be inferred from the statistics of the values encountered during inference.
Concrete-ML supports a variety of PyTorch operators that can be used to build fully connected or convolutional neural networks, with normalization and activation layers. Moreover, many element-wise operators are supported.
Please note that Concrete-ML supports these operators but also the QAT equivalents from Brevitas.
brevitas.nn.QuantLinear
brevitas.nn.QuantConv2d
brevitas.nn.QuantIdentity
`torch.nn.Threshold` -- partial support
Note that the equivalent versions from `torch.functional` are also supported.
Quantization is the process of constraining an input from a continuous or otherwise large set of values (such as real numbers) to a discrete set (such as integers).
This means that some accuracy in the representation is lost (e.g. a simple approach is to eliminate least-significant bits). However, in many cases in machine learning, it is possible to adapt the models to give meaningful results while using these smaller data types. This significantly reduces the number of bits necessary for intermediary results during the execution of these machine learning models.
Since FHE is currently limited to 16-bit integers, it is necessary to quantize models to make them compatible. As a general rule, the smaller the bit-width of integer values used in models, the better the FHE performance. This trade-off should be taken into account when designing models, especially neural networks.
Quantization implemented in Concrete-ML is applied in two ways:
Built-in models apply quantization internally, and the user only needs to configure some quantization parameters. This approach requires little work by the user but may not be a one-size-fits-all solution for all types of models. The final quantized model is FHE-friendly and ready to predict over encrypted data. In this setting, Post-Training Quantization (PTQ) is used for linear models, data quantization is used for tree-based models, and, finally, Quantization Aware Training (QAT) is included in the built-in neural network models.
For custom neural networks with more complex topology, obtaining FHE-compatible models with good accuracy requires QAT. Concrete-ML offers the possibility for the user to perform quantization before compiling to FHE. This can be achieved through a third-party library that offers QAT tools, such as Brevitas for PyTorch. In this approach, the user is responsible for implementing a full-integer model, respecting FHE constraints. Please refer to the advanced QAT tutorial for tips on designing FHE neural networks.
While Concrete-ML quantizes machine learning models, the data the client has is often in floating point. The Concrete-ML models provide APIs to quantize inputs and de-quantize outputs.
Please note that the floating point input is quantized in the clear, i.e. it is converted to integers before being encrypted. Moreover, the model's outputs are also integers and are decrypted before de-quantization.
Let $[\alpha, \beta]$ be the range of a value to quantize, where $\alpha$ is the minimum and $\beta$ is the maximum. To quantize a range of floating point values (in $\mathbb{R}$) to integer values (in $\mathbb{Z}$), the first step is to choose the data type that is going to be used. Many ML models work with weights and activations represented as 8-bit integers, so this will be the value used in this example. Knowing the number of bits that can be used for a value in the range $[\alpha, \beta]$, the `scale` $S$ can be computed:

$$S = \frac{\beta - \alpha}{2^n - 1}$$

where $n$ is the number of bits ($n \leq 16$). For the sake of example, let's take $n = 8$.

In practice, the quantization scale is then $S = \frac{\beta - \alpha}{255}$. This means the gap between consecutive representable values cannot be smaller than $S$, which, in turn, means there can be a substantial loss of precision. Every interval of length $S$ will be represented by a single value within the range $[0, 255]$.

The other important parameter from this quantization schema is the `zero point` $Z_p$ value. This essentially brings the 0 floating point value to a specific integer. If the quantization scheme is asymmetric (quantized values are not centered on 0), the resulting $Z_p$ will be in $\mathbb{Z}$.
When using quantized values in a matrix multiplication or convolution, the equations for computing the result become more complex. The IntelLabs Distiller documentation provides a more detailed explanation of the maths used to quantize values and how to keep computations consistent.
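As an illustration, the following NumPy sketch implements this uniform (asymmetric) quantization scheme; it is a generic example, not Concrete-ML's internal implementation:

```python
import numpy as np


def quantization_parameters(values: np.ndarray, n_bits: int = 8):
    """Compute scale and zero point for uniform asymmetric quantization."""
    alpha, beta = float(values.min()), float(values.max())
    scale = (beta - alpha) / (2**n_bits - 1)
    zero_point = int(round(-alpha / scale))
    return scale, zero_point


def quantize(values, scale, zero_point, n_bits: int = 8):
    q = np.round(values / scale) + zero_point
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)


def dequantize(q_values, scale, zero_point):
    return (q_values.astype(np.float64) - zero_point) * scale


x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
s, zp = quantization_parameters(x)  # scale ~= 2/255, zero point = 128
x_q = quantize(x, s, zp)            # integers in [0, 255]
x_hat = dequantize(x_q, s, zp)      # approximate reconstruction of x
```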
Built-in models provide a simple interface for configuring quantization parameters, most notably the number of bits used for inputs, model weights, intermediary values, and output values.
For linear models, the quantization is done post-training. Thus, the model is trained in floating point, and then the best integer weight representations are found, depending on the distribution of inputs and weights. For these models, the user can select the value of the `n_bits` parameter.

For linear models, `n_bits` is used to quantize both model inputs and weights. Depending on the number of features, you can use a single integer value for the `n_bits` parameter (e.g. a value between 2 and 7). When the number of features is high, the `n_bits` parameter should be decreased if you encounter compilation errors. It is also possible to quantize inputs and weights with different numbers of bits by passing a dictionary to `n_bits` containing the `op_inputs` and `op_weights` keys.
For tree-based models, the training and test data is quantized. The maximum accumulator bit-width for a model trained with `n_bits=n` for this type of model is known beforehand: it will need `n+1` bits. Through experimentation, it was determined that in many cases a value of 5 or 6 bits gives the same accuracy as training in floating point, and values above `n=7` do not increase model performance (but they induce a strong slowdown).
Tree-based models can directly control the accumulator bit-width used. However, if 6 or 7 bits are not sufficient to obtain good accuracy on your data-set, one option is to use an ensemble model (RandomForest or XGBoost) and increase the number of trees in the ensemble. This, however, will have a detrimental impact on FHE execution speed.
For built-in neural networks, several linear layers are used. Thus, the outputs of a layer are used as inputs to a new layer. Built-in neural networks use Quantization Aware Training. The parameters controlling the maximum accumulator bit-width are the number of weight and activation bits (`module__n_w_bits`, `module__n_a_bits`), but also the pruning factor. This factor is determined automatically by specifying a desired accumulator bit-width `module__n_accum_bits` and, optionally, a multiplier factor, `module__n_hidden_neurons_multiplier`.
Note that for built-in neural networks, the maximum accumulator bit-width cannot be precisely controlled. Using many input features and a high number of bits is beneficial for model accuracy, but it can conflict with the 16-bit accumulator constraint. Finding the best quantization parameters to maximize accuracy while keeping the accumulator size down can only be accomplished through experimentation.
The models implemented in Concrete-ML provide features to let the user quantize the input data and de-quantize the output data.
In a client/server setting, the client is responsible for quantizing inputs before sending them, encrypted, to the server. Further, the client must de-quantize the encrypted integer results received from the server. See the Production Deployment section for more details.
Here is a simple example showing how to perform inference, starting from float values and ending up with float values. Note that the FHE engine that is compiled for the ML models does not support data batching.
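A sketch of that flow is shown below, reusing the `QuantizedModule` API introduced in the Torch import sections (`quantize_input` is assumed to be the counterpart of `dequantize_output`):

```python
import numpy as np

# quantized_module: result of one of the compile_* functions shown earlier
x = np.array([[0.25, -1.3, 0.7]], dtype=np.float32)  # a single example, no batching

q_x = quantized_module.quantize_input(x)                     # float -> integers
q_y = quantized_module.forward_fhe.encrypt_run_decrypt(q_x)  # FHE inference on integers
y = quantized_module.dequantize_output(q_y)                  # integers -> float
```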
IntelLabs distiller explanation of quantization: Distiller documentation
Compilation of a model produces machine code that executes the model on encrypted data. In some cases, notably in the client/server setting, the compilation can be done by the server when loading the model for serving.
As FHE execution is much slower than execution on non-encrypted data, Concrete-ML has a simulation mode, using an execution mode named the Virtual Library. Since, by default, the cryptographic parameters are chosen such that the results obtained in FHE are the same as those on clear data, the Virtual Library allows you to benchmark models quickly during development.
Concrete-ML implements machine learning model inference using Concrete-Numpy as a backend. In order to execute in FHE, a numerical program written in Concrete-Numpy needs to be compiled. This functionality is described here, and Concrete-ML hides away most of the complexity of this step, completing the entire compilation process itself.
From the perspective of the Concrete-ML user, the compilation process performed by Concrete-Numpy can be broken up into 3 steps:
- tracing the Numpy program and creating a Concrete-Numpy op-graph
- checking the op-graph for FHE compatibility
- producing machine code for the op-graph (this step automatically determines cryptographic parameters)
Additionally, the client/server API packages the result of the last step in a way that allows the deployment of the encrypted circuit to a server, as well as key generation, encryption, and decryption on the client side.
The first step in the list above takes a Python function implemented using the Concrete-Numpy supported operation set and transforms it into an executable operation graph.
The result of this single step of the compilation pipeline allows the:
execution of the op-graph, which includes TLUs, on clear non-encrypted data. This is, of course, not secure, but it is much faster than executing in FHE. This mode is useful for debugging, i.e. to find the appropriate hyper-parameters. This mode is called the Virtual Library (which is referred as Virtual Circuits in Concrete-Numpy).
verification of the maximum bit-width of the op-graph, to determine FHE compatibility, without actually compiling the circuit to machine code.
Enabling Virtual Library execution requires the definition of a compilation `Configuration`. As simulation does not execute in FHE, this can be considered unsafe:
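For example (the import path and parameter names are assumptions based on the Concrete-Numpy `Configuration` API):

```python
from concrete.numpy import Configuration  # import path is an assumption

simulation_config = Configuration(
    enable_unsafe_features=True,  # the Virtual Library is gated behind unsafe features
    use_insecure_key_cache=False,
)
```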
Next, the following code uses the simulation mode for built-in models:
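A hedged sketch with a built-in model (the `configuration` and `use_virtual_lib` keyword names are assumptions consistent with the text above):

```python
from concrete.ml.sklearn import LogisticRegression

model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Compile to a simulated circuit instead of an actual FHE circuit
model.compile(X_train, configuration=simulation_config, use_virtual_lib=True)

# Predictions now run through the simulated circuit
y_sim = model.predict(X_test, execute_in_fhe=True)
```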
And finally, for custom models, it is possible to enable simulation using the following syntax:
Obtaining the simulated predictions of the models using the Virtual Library has the same syntax as execution in FHE:
Moreover, the maximum accumulator bit-width is determined as follows:
While Concrete-ML hides away all the Concrete-Numpy code that performs model inference, it can be useful to understand how Concrete-Numpy code works. Here is a toy example for a simple linear regression model on integers. Note that this is just an example to illustrate compilation concepts. Generally, it is recommended to use the built-in models, which provide linear regression out of the box.
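A hedged sketch of such a Concrete-Numpy program is shown below (the decorator-based API is an assumption about the Concrete-Numpy version in use):

```python
import concrete.numpy as cnp  # import path is an assumption


@cnp.compiler({"x": "encrypted"})
def linear_model(x):
    # A toy integer linear regression: y = 2 * x + 1
    return 2 * x + 1


inputset = range(0, 20)  # representative inputs used to determine bit-widths
circuit = linear_model.compile(inputset)

assert circuit.encrypt_run_decrypt(3) == 7
```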
These examples illustrate the basic usage of Concrete-ML to build various types of neural networks. They use simple data-sets, focusing on the syntax and usage of Concrete-ML. For examples showing how to train high-accuracy models on more complex data-sets, see the Demos and Tutorials section.
The examples listed here make use of simulation (using the Virtual Library) to perform evaluation over large test sets. Since FHE execution can be slow, only a few FHE executions can be performed. The correctness guarantees of Concrete-ML ensure that the accuracy measured with simulation is the same as the accuracy obtained during FHE execution.
Some examples constrain accumulators to 7-8 bits, which can be sufficient for simple data-sets. Up to 16-bit accumulators can be used, but this introduces a slowdown of 4-5x compared to 8-bit accumulators.
Quantization aware training example
Shows how to use Quantization Aware Training and pruning when starting out from a classical PyTorch network. This example uses a simple data-set and a small NN, which achieves good accuracy with low accumulator size.
Following the Step-by-step guide, this notebook implements a Quantization Aware Training convolutional neural network on the MNIST data-set. It uses 3-bit weights and activations, giving a 7-bit accumulator.
Pruning is a method to reduce neural network complexity, usually applied in order to reduce the computation cost or memory size. Pruning is used in Concrete-ML to control the size of accumulators in neural networks, thus making them FHE-compatible. See here for an explanation of accumulator bit-width constraints.
Pruning is used in Concrete-ML for two types of neural networks:
Built-in neural networks include a pruning mechanism that can be parameterized by the user. The pruning type is based on L1-norm. To comply with FHE constraints, Concrete-ML uses unstructured pruning, as the aim is not to eliminate neurons or convolutional filters completely, but to decrease their accumulator bit-width.
Custom neural networks, to work well under FHE constraints, should include pruning. When implemented with PyTorch, you can use the framework's pruning mechanism (e.g. L1-unstructured pruning) to good effect.
In neural networks, a neuron computes a linear combination of inputs and learned weights, then applies an activation function.

The neuron computes:

$$y_k = \phi\left(\sum_i w_i x_i\right)$$

where $\phi$ is the activation function, $w_i$ are the weights, and $x_i$ are the inputs.
When building a full neural network, each layer will contain multiple neurons, which are connected to the inputs or to the neuron outputs of a previous layer.
For every neuron shown in each layer of the figure above, the linear combinations of inputs and learned weights are computed. Depending on the values of the inputs and weights, the sum - which for Concrete-ML neural networks is computed with integers - can take a range of different values.
To respect the bit-width constraint of the FHE table lookup, the values of the accumulator must remain small to be representable using a maximum of 16 bits. In other words, the values must be between 0 and $2^{16} - 1$.
Pruning a neural network entails fixing some of the weights to be zero during training. This is advantageous to meet FHE constraints, as, irrespective of the distribution of the inputs $x_i$, multiplying these input values by 0 does not increase the accumulator value.
Fixing some of the weights to 0 makes the network graph look more similar to the following:
While pruning weights can reduce the prediction performance of the neural network, studies show that a high level of pruning (above 50%) can often be applied. See here how Concrete-ML uses pruning in Fully Connected Neural Networks.
In the formula above, in the worst case, the maximum number of the inputs and weights that can make the result exceed $n$ bits is given by:

$$\Omega = \mathsf{floor}\left(\frac{2^{n} - 1}{(2^{n_{\mathsf{weights}}} - 1)(2^{n_{\mathsf{inputs}}} - 1)}\right)$$

Here, $n$ is the maximum precision allowed.

For example, if $n_{\mathsf{weights}} = 2$ and $n_{\mathsf{inputs}} = 2$ with $n = 16$, the worst case is where all inputs and weights are equal to their maximal value $2^2 - 1 = 3$. In this case, there can be at most $\Omega = \mathsf{floor}\left(\frac{2^{16} - 1}{3 \times 3}\right) = 7281$ elements in the multi-sums.
In practice, the distribution of the weights of a neural network is Gaussian, with many weights either 0 or having a small value. This enables exceeding the worst-case number of active neurons without risking an overflow of the bit-width. In built-in neural networks, the parameter `n_hidden_neurons_multiplier` is multiplied with $\Omega$ to determine the total number of non-zero weights that should be kept in a neuron.
Concrete-ML provides functionality to deploy FHE machine learning models in a client/server setting. The deployment workflow and model serving pattern is as follows:
The diagram above shows the steps that a developer goes through to prepare a model for encrypted inference in a client/server setting. The training of the model and its compilation to FHE are performed on a development machine. Three different files are created when saving the model:
- `client.zip` contains `client.specs.json`, which lists the secure cryptographic parameters needed for the client to generate private and evaluation keys.
- `serialized_processing.json` describes the pre-processing and post-processing required by the machine learning model, such as quantization parameters to quantize the input and de-quantize the output. It should be deployed in the same way as `client.zip`.
- `server.zip` contains the compiled model. This file is sufficient to run the model on a server. The compiled model is machine-architecture specific (i.e. a model compiled on x86 cannot run on ARM).
The compiled model (`server.zip`) is deployed to a server, and the cryptographic parameters (`client.zip`), along with the model metadata (`serialized_processing.json`), are shared with the clients. In some settings, such as a phone application, `client.zip` can be directly deployed on the client device and the server does not need to host it.
The client-side deployment of a secured inference machine learning model follows the schema above. First, the client obtains the cryptographic parameters (stored in `client.zip`) and generates a private encryption/decryption key as well as a set of public evaluation keys. The public evaluation keys are then sent to the server, while the secret key remains on the client.
The private data is then encrypted by the client as described in serialized_processing.json
, and it is then sent to the server. Server-side, the FHE model inference is run on encrypted inputs using the public evaluation keys.
The encrypted result is then returned by the server to the client, which decrypts it using its private key. Finally, the client performs any necessary post-processing of the decrypted result as specified in serialized_processing.json
.
The server-side implementation of a Concrete-ML model follows the diagram above. The public evaluation keys sent by clients are stored. They are then retrieved for the client that is querying the service and used to evaluate the machine learning model stored in server.zip. Finally, the server sends the encrypted result of the computation back to the client.
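As a rough end-to-end illustration, here is a hedged sketch that strings together the FHEModelDev, FHEModelClient and FHEModelServer classes documented in the API reference below; the directory paths, the compiled_model variable and the input array are placeholders.

```python
import numpy as np
from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Development machine: save client.zip, serialized_processing.json and server.zip
# (compiled_model is assumed to be a built-in model that was already fitted and compiled)
FHEModelDev(path_dir="deployment_dir", model=compiled_model).save()

# Client: generate keys, then quantize, encrypt and serialize an input
client = FHEModelClient(path_dir="deployment_dir", key_dir="keys_dir")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(np.array([[0.0, 1.0]]))

# Server: load the compiled circuit and run it on the encrypted data
server = FHEModelServer(path_dir="deployment_dir")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client: deserialize, decrypt and de-quantize the result
result = client.deserialize_decrypt_dequantize(encrypted_result)
```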
Concrete-ML is a Python library, so Python should be installed to develop Concrete-ML. v3.8 and v3.9 are the only supported versions. Concrete-ML also uses Poetry and Make.
First of all, you need to git clone the project:
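For example, assuming the public GitHub repository:

```
git clone https://github.com/zama-ai/concrete-ml.git
```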
Some tests require files tracked by git-lfs to be downloaded. To do so, please follow the git-lfs installation instructions, then run git lfs pull.
A simple way to have everything installed is to use the development Docker (see the guide). On Linux and macOS, you have to run the script in ./script/make_utils/setup_os_deps.sh. Specify the --linux-install-python flag if you want to install python3.8 as well on apt-enabled Linux distributions. The script should install everything you need for Docker and bare OS development (you can first review the content of the file to check what it will do).
For Windows users, the setup_os_deps.sh script does not install dependencies, because of the many different installation methods available in the absence of a single package manager.
The first step is to (as some of the dev tools depend on it), then . In addition to installing Python, you are still going to need the following software available on path on Windows, as some of the basic dev tools depend on them:
git
jq
make
Development on Windows only works with the Docker environment. Follow .
To manually install Python, you can follow this guide (alternatively, you can google how to install Python 3.8 or 3.9).
As there is no concrete-compiler package for Windows, only the dev dependencies can be installed. This requires Poetry >= 1.2.
The dev tools use make to launch various commands.
On Linux, you can install make from your distribution's preferred package manager.
On macOS, you can install a more recent version of make via brew:
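For instance (Homebrew installs GNU make as gmake):

```
brew install make
```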
In the following sections, be sure to use the proper make tool for your system: make, gmake, or other.
To get the source code of Concrete-ML, clone the code repository using the link for your favorite communication protocol (ssh or https).
We are going to make use of virtual environments. This helps to keep the project isolated from other Python projects in the system. The following commands will create a new virtual environment under the project directory and install dependencies to it.
The following command will not work on Windows if you don't have Poetry >= 1.2.
Finally, activate the newly created environment using the following command:
Docker automatically creates and sources a venv in ~/dev_venv/. The venv persists thanks to volumes. Docker also creates a volume for ~/.cache to speed up later re-installations. You can check which Docker volumes exist with:
You can still run all make commands inside Docker (to update the venv, for example). Be mindful of the current venv being used (the name in parentheses at the beginning of your command prompt).
After your work is done, you can simply run the following command to leave the environment:
From time to time, new dependencies will be added to the project or the old ones will be removed. The command below will make sure the project has the proper environment, so run it regularly!
If you are having issues, consider using the dev Docker exclusively (unless you are working on OS-specific bug fixes or features).
Here are the steps you can take on your OS to try and fix issues:
Here are the steps you can take in your Docker to try and fix issues:
If the problem persists at this point, you should ask for help. We're here and ready to assist!
Internally, Concrete-ML uses ONNX operators as an intermediate representation (IR) for manipulating machine learning models produced through export from PyTorch, Hummingbird, and skorch.
As ONNX is becoming the standard exchange format for neural networks, this allows Concrete-ML to be flexible while also making model representation manipulation easy. In addition, it allows for straightforward mapping to NumPy operators, supported by Concrete-Numpy, to use the Concrete stack's FHE-conversion capabilities.
The diagram below gives an overview of the steps involved in the conversion of an ONNX graph to a FHE-compatible format (i.e. a format that can be compiled to FHE through Concrete-Numpy).
All Concrete-ML built-in models follow the same pattern for FHE conversion:
The models are trained with sklearn or PyTorch.
All models have a PyTorch implementation for inference. This implementation is provided either by a third-party tool such as Hummingbird or implemented directly in Concrete-ML.
The PyTorch model is exported to ONNX. For more information on the use of ONNX in Concrete-ML, see .
The Concrete-ML ONNX parser checks that all the operations in the ONNX graph are supported and assigns reference NumPy operations to them. This step produces a NumpyModule.
Quantization is performed on the NumpyModule, producing a QuantizedModule. Two steps are performed: calibration and assignment of equivalent QuantizedOp objects to each ONNX operation. The QuantizedModule class is the quantized counterpart of the NumpyModule.
Once the QuantizedModule is built, Concrete-Numpy is used to trace the ._forward() function of the QuantizedModule.
Moreover, by passing a user-provided nn.Module to step 2 of the above process, Concrete-ML supports custom user models. See the associated documentation for instructions about working with such models.
Once an ONNX model is imported, it is converted to a NumpyModule, then to a QuantizedModule and, finally, to a FHE circuit. However, as the diagram shows, it is perfectly possible to stop at the NumpyModule level if you just want to run the PyTorch model as NumPy code without doing quantization.
Concrete-ML has support for quantized ML models and also provides quantization tools for Quantization Aware Training and Post-Training Quantization. The core of this functionality is the conversion of floating point values to integers and back. This is done using QuantizedArray in concrete.ml.quantization.
The class takes several arguments that determine how float values are quantized:
n_bits defines the precision of the quantization
values are the floating point values that will be converted to integers
is_signed determines if the quantized integer values should allow negative values
is_symmetric determines if the range of floating point values to be quantized should be taken as symmetric around zero
See also the reference for more information:
It is also possible to use symmetric quantization, where the integer values are centered around 0:
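Here is a minimal sketch of both modes, assuming the argument names listed above (exact defaults may differ between versions):

```python
import numpy as np
from concrete.ml.quantization import QuantizedArray

values = np.array([-1.5, 0.0, 0.7, 2.3])

# 3-bit unsigned quantization: integer values in [0, 7]
q_arr = QuantizedArray(3, values)
print(q_arr.qvalues)    # the quantized integer values
print(q_arr.dequant())  # an approximate reconstruction of `values`

# 3-bit signed, symmetric quantization: integer values centered around 0
q_sym = QuantizedArray(3, values, is_signed=True, is_symmetric=True)
print(q_sym.qvalues)
```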
In the following example, showing the de-quantization of model outputs, the QuantizedArray class is used in a different way. Here, it uses pre-quantized integer values and has the scale and zero-point set explicitly. Once the QuantizedArray is constructed, calling dequant() will compute the floating point values corresponding to the integer values qvalues, which are the output of the forward_fhe.encrypt_run_decrypt(..) call.
Machine learning models are implemented with a diverse set of operations, such as convolution, linear transformations, activation functions, and element-wise operations. When working with quantized values, these operations cannot be carried out in an equivalent way to floating point values. With quantization, it is necessary to re-scale the input and output values of each operation to fit in the quantization domain.
In Concrete-ML, the quantized equivalent of a scikit-learn model or a PyTorch nn.Module is the QuantizedModule. Note that only inference is implemented in the QuantizedModule; it is built through a conversion of the inference function of the corresponding scikit-learn or PyTorch module.
Built-in neural networks expose the quantized_module member, while a QuantizedModule is also the result of compiling custom models through compile_torch_model and compile_brevitas_qat_model.
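For instance, a hedged sketch of obtaining a QuantizedModule from a custom Torch model (the model, input-set and n_bits value are placeholders; see the Torch compilation documentation for the full signature):

```python
import torch
from torch import nn
from concrete.ml.torch.compile import compile_torch_model

# A tiny placeholder model and a representative calibration input-set
torch_model = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))
calibration_data = torch.randn(100, 2)

# compile_torch_model returns a QuantizedModule that wraps the FHE circuit
quantized_module = compile_torch_model(torch_model, calibration_data, n_bits=3)
```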
Calibration is the process of determining the typical distributions of values encountered for the intermediate values of a model during inference.
Concrete-ML offers some features for advanced users that wish to adjust the cryptographic parameters that are generated by the Concrete stack for a certain machine learning model.
Concrete-ML makes use of table lookups (TLUs) to represent any non-linear operation (e.g. sigmoid). TLUs are implemented through the Programmable Bootstrapping (PBS) operation which will apply a non-linear operation in the cryptographic realm.
The result of TLU operations is obtained with a specific error probability. Concrete-ML offers the possibility to set this error probability, which influences the cryptographic parameters. The higher the success rate, the more restrictive the parameters become. This can affect both key generation and, more significantly, FHE execution time.
In Concrete-ML, there are three different ways to define the error probability:
setting p_error, the error probability of an individual TLU (see below)
setting global_p_error, the error probability of the full circuit (see below)
setting neither p_error nor global_p_error, and using the default parameters (see below)
p_error and global_p_error are, in a sense, concurrent parameters, as they both have an impact on the choice of cryptographic parameters. To avoid mistakes, Concrete-ML forbids setting both p_error and global_p_error simultaneously.
The first way to set error probabilities in Concrete-ML is at the local level, by directly setting the probability of error of each individual TLU. This probability is referred to as p_error. A given PBS operation has a 1 - p_error chance of being successful. A successful evaluation here means that the value decrypted after FHE evaluation is exactly the same as the one that would be computed in the clear.
For simplicity, it is best to use the default values, irrespective of the type of model. However, especially for deep neural networks, default values may be too pessimistic, reducing computation speed without any improvement in accuracy. For deep neural networks, some TLU errors may not have any impact on accuracy, and the p_error can be safely increased (see, for example, the CIFAR classification examples).
Here is a visualization of the effect of the p_error on a neural network model, with p_error = 0.1 compared to execution in the clear (i.e. no error):
Varying the p_error in the one hidden-layer neural network above produces the following inference times. Increasing p_error to 0.1 halves the inference time with respect to a p_error of 0.001. Note, in the graph above, that the decision boundary becomes noisier with a higher p_error.
The speedup is dependent on model complexity, but, in an iterative approach, it is possible to search for a good value of p_error
to obtain a speedup while maintaining good accuracy. Currently, no heuristic has been proposed to find a good value a priori.
Users can change this p_error as they see fit by passing an argument to the compile function of any of the models. Here is an example:
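A hedged sketch of this, using a built-in classifier (the model choice, data and bit-width are placeholders):

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import XGBClassifier

x, y = make_classification(n_samples=100, n_features=4, random_state=42)

model = XGBClassifier(n_bits=3)
model.fit(x, y)

# Request a 10% error probability per table lookup
model.compile(x, p_error=0.1)
```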
A global_p_error is also available and defines the probability of success for the entire model. Here, the p_error for every PBS is computed internally in Concrete-Numpy such that the global_p_error is reached.
There might be cases where the user encounters a No cryptography parameter found error message. In such a case, increasing the p_error or the global_p_error might help.
Usage is similar to the p_error parameter:
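For example, reusing the placeholder classifier from the previous sketch:

```python
# Request a 10% error probability for the whole circuit instead of per TLU
model.compile(x, global_p_error=0.1)
```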
In the above example, the XGBoost classifier in FHE has a 1/10 probability of having a shifted output value compared to the expected value. Note that the shift is relative to the expected value, so even if the result is different, it should remain close to the expected value.
The global_p_error parameter is only used for FHE evaluation and has no effect on Virtual Library simulation (unlike the p_error). Fixing this is on our roadmap.
If neither p_error nor global_p_error is set, Concrete-ML uses a default global_p_error = 0.01.
By using verbose_compilation = True and show_mlir = True during compilation, the user receives a lot of information from the compiler and its inner optimizer. These options are, however, mainly meant for power users, so they may be hard to understand.
Here, one will see:
the computation graph, typically
the MLIR, produced by Concrete-Numpy and given to the compiler
information from the optimizer (including cryptographic parameters):
In this latter optimization, the following information will be provided:
The bit-width ("6 bits integers") used in the program: for the moment, the compiler only supports a single precision (i.e. all PBS are promoted to the same bit-width, the largest one). Therefore, this bit-width predominantly drives the speed of the program, and it is essential to attempt to reduce it as much as possible for fast execution.
The maximal norm2 ("7 manp"), which has an impact on the crypto parameters: The larger this norm2, the slower PBS will be. The norm2 is related to the norm of some constants appearing in your program, in a way which will be clarified in the compiler documentation.
The probability of error of an individual PBS, which was requested by the user ("3.300000e-02 error per pbs call" in User Config)
The probability of error of the full circuit, which was requested by the user ("1.000000e+00 error per circuit call" in User Config): Here, the probability 1 stands for "not used", since we had set the individual probability.
The probability of error of an individual PBS, which is found by the optimizer ("1/30 errors (3.234529e-02)")
The probability of error of the full circuit which is found by the optimizer ("1/10 errors (9.390887e-02)")
An estimation of the cost of the circuit ("4.214000e+02 Millions Operations"): Large values indicate a circuit that will execute more slowly.
and, for cryptographers only, some information about cryptographic parameters:
1x glwe_dimension
2**11 polynomial (2048)
762 lwe dimension
keyswitch l,b=5,3
blindrota l,b=2,15
wopPbs : false
Once again, this optimizer feedback is a work in progress and will be modified and improved in future releases.
Documentation with GitBook is done mainly by pushing content on GitHub. GitBook then pulls the docs from the repository and publishes. In most cases, GitBook is just a mirror of what is available in GitHub.
There are, however, some use-cases where documentation can be modified directly in GitBook (and the modifications then pushed to GitHub), for example when the documentation is modified by a person outside of Zama. In this case, a GitHub branch is created, and a GitBook space is associated to it: modifications are done in this space and automatically pushed to the branch. Once the modifications are complete, one can simply create a pull request to merge them into the main branch.
Documentation can alternatively be built using Sphinx:
The documentation contains both files written by hand by developers (the .md files) and files automatically created by parsing the source files.
Then, to open it, go to docs/_build/html/index.html or use the following command:
To build and open the docs at the same time, use:
Before you start this section, you must install Docker by following the official guide.
Once you have access to this repository and the dev environment is installed on your host OS (via make setup_env), you should be able to launch the commands to build the dev Docker image with make docker_build.
Once you do that, you can get inside the Docker environment using the following command:
After you finish your work, you can leave Docker by using the exit command or by pressing CTRL + D.
Concrete-ML is a constant work-in-progress, and thus may contain bugs or suboptimal APIs.
Before opening an issue or asking for support, please read this documentation to understand common issues and limitations of Concrete-ML. You can also check the .
Furthermore, undefined behavior may occur if the input-set, which is internally used by the compilation core to set the bit-widths of some intermediate data, is not sufficiently representative of future user inputs. With all the inputs in the input-set, an intermediate value may appear to be representable as an n-bit integer, while, for a particular user input, the same intermediate value needs additional bits to be represented. The FHE execution of this computation will then produce an incorrect output, as typically occurs with integer overflows in classical programs.
If you didn't find an answer, you can ask a question on the or in the FHE.org .
When submitting an issue, ideally include as much information as possible. In addition to the Python script, the following information is useful:
the reproducibility rate you see on your side
any insight you might have on the bug
any workaround you have been able to find
If you would like to contribute to a project and send pull requests, take a look at the guide.
There are three ways to contribute to Concrete-ML:
You can open issues to report bugs and typos and to suggest ideas.
You can ask to become an official contributor by emailing . Only approved contributors can send pull requests (PR), so please make sure to get in touch before you do.
You can also provide new tutorials or use-cases, showing what can be done with the library. The more examples we have, the better and clearer it is for the other users.
To create your branch, you have to use the issue ID somewhere in the branch name:
e.g.
Each commit to Concrete-ML should conform to the standards of the project. You can let the development tools fix some issues automatically with the following command:
Conformance can be checked using the following command:
Your code must be well documented, containing tests and not breaking other tests:
You need to make sure you get 100% code coverage. The make pytest command checks that by default and will fail with a coverage report at the end should some lines of your code not be executed during testing.
If your coverage is below 100%, you should write more tests and then create the pull request. If you ignore this warning and create the PR, GitHub actions will fail and your PR will not be merged.
There may be cases where covering your code is not possible (an exception that cannot be triggered in normal execution circumstances). In those cases, you may be allowed to disable coverage for some specific lines. This should be the exception rather than the rule, and reviewers will ask why some lines are not covered. If it appears they can be covered, then the PR won't be accepted in that state.
Concrete-ML uses a consistent commit naming scheme, and you are expected to follow it as well (the CI will make sure you do). The accepted format can be printed to your terminal by running:
e.g.
You should rebase on top of the main branch before you create your pull request. Merge commits are not allowed, so rebasing on main before pushing gives you the best chance of avoiding having to rewrite parts of your PR later if conflicts arise with other PRs being merged. After you commit changes to your new branch, you can use the following commands to rebase:
For a complete example, see .
Poetry is used as the package manager. It drastically simplifies dependency and environment management. You can follow the official guide to install it.
It is possible to install gmake as make. Check this for more info.
On Windows, check .
At this point, you should consider using Docker as nobody will have the exact same setup as you. If, however, you need to develop on your OS directly, you can .
Note that the NumpyModule interpreter currently supports a limited set of ONNX operators.
In order to better understand how Concrete-ML works under the hood, it is possible to access each model in its ONNX format and then either print it or visualize it by importing the associated file in Netron. For example, with LogisticRegression:
The quantized versions of floating point model operations are stored in the QuantizedModule. The ONNX_OPS_TO_QUANTIZED_IMPL dictionary maps ONNX floating point operators (e.g. Gemm) to their quantized equivalents (e.g. QuantizedGemm). For more information on implementing these operations, please see the relevant documentation.
The computation graph is taken from the corresponding floating point ONNX graph exported from scikit-learn using Hummingbird, or from the ONNX graph exported by PyTorch. Calibration is used to obtain quantized parameters for the operations in the QuantizedModule. Parameters are also determined for the quantization of inputs during model deployment.
To perform calibration, an interpreter goes through the ONNX graph in topological order and stores the intermediate results as it goes. The statistics of these values determine quantization parameters.
That QuantizedModule generates the Concrete-Numpy function that is compiled to FHE. The compilation will succeed if the intermediate values conform to the 16-bit precision limit of the Concrete stack. See the compilation documentation for details.
Lei Mao's blog on quantization:
Google paper on neural network quantization and integer-only inference:
If the p_error value is specified and Virtual Library simulation is enabled, the run will take into account the randomness induced by the p_error, resulting in statistical similarity to the FHE evaluation.
Just a reminder that commit messages are checked in the conformance step and are rejected if they don't follow the rules. To learn more about conventional commits, check the conventional commits page.
You can learn more about rebasing .
| p_error | Inference time |
|---------|----------------|
| 0.001   | 0.80           |
| 0.01    | 0.41           |
| 0.1     | 0.37           |
Hummingbird is a third-party, open-source library that converts machine learning models into tensor computations, and it can export these models to ONNX. The list of supported models can be found in the Hummingbird documentation.
Concrete-ML allows the conversion of an ONNX inference to NumPy inference (note that NumPy is always the entry point to run models in FHE with Concrete-ML).
Hummingbird exposes a convert function that can be imported as follows from the hummingbird.ml package:
This function can be used to convert a machine learning model to an ONNX as follows:
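A hedged sketch of this conversion (the scikit-learn model is a placeholder, and the attribute used to retrieve the ONNX graph from the returned container is an assumption; check the Hummingbird documentation for details):

```python
from hummingbird.ml import convert
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

x, y = make_classification(n_samples=100, n_features=4, random_state=42)
sklearn_model = LogisticRegression().fit(x, y)

# Convert the scikit-learn model to ONNX through Hummingbird
onnx_container = convert(sklearn_model, backend="onnx", test_input=x)
onnx_model = onnx_container.model
```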
In theory, the resulting onnx_model could be used directly within Concrete-ML's get_equivalent_numpy_forward method (as long as all operators present in the ONNX model are implemented in NumPy) to get the NumPy inference.
In practice, there are some steps needed to clean the ONNX output and make the graph compatible with Concrete-ML, such as applying quantization where needed or deleting/replacing non-FHE friendly ONNX operators (such as Softmax and ArgMax).
Concrete-ML uses Skorch to implement multi-layer, fully-connected PyTorch neural networks in a way that is compatible with the scikit-learn API.
This wrapper implements Torch training boilerplate code, lessening the work required of the user. It is possible to add hooks during the training phase, for example once an epoch is finished.
Skorch allows the user to easily create a classifier or regressor around a neural network (NN), implemented in Torch as an nn.Module, which is used by Concrete-ML to provide a fully-connected, multi-layer NN with a configurable number of layers and optional pruning (see pruning and the neural network documentation for more information).
Under the hood, Concrete-ML uses a Skorch wrapper around a single PyTorch module, SparseQuantNeuralNetImpl. More information can be found in the API guide.
Brevitas is a quantization aware learning toolkit built on top of PyTorch. It provides quantization layers that are one-to-one equivalents to PyTorch layers, but also contain operations that perform the quantization during training.
While Brevitas provides many types of quantization, for Concrete-ML, a custom "mixed integer" quantization applies. This "mixed integer" quantization is much simpler than the "integer only" mode of Brevitas. The "mixed integer" network design is defined as:
all weights and activations of convolutional, linear and pooling layers must be quantized (e.g. using the Brevitas layers QuantConv2D, QuantAvgPool2D, QuantLinear)
PyTorch floating-point versions of univariate functions can be used, e.g. torch.relu, nn.BatchNormalization2D, torch.max (encrypted vs. constant), torch.add, torch.exp. See the PyTorch supported layers page for a full list.
The "mixed integer" mode used in Concrete-ML neural networks is based on the "integer only" Brevitas quantization that makes both weights and activations representable as integers during training. However, through the use of lookup tables in Concrete-ML, floating point univariate PyTorch functions are supported.
For "mixed integer" quantization to work, the first layer of a Brevitas nn.Module
must be a QuantIdentity
layer. However, you can then use functions such as torch.sigmoid
on the result of such a quantizing operation.
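Here is a hedged sketch of such a "mixed integer" module (the layer sizes and bit-widths are arbitrary; the Brevitas layer names follow the list above):

```python
import torch
from torch import nn
import brevitas.nn as qnn

N_BITS = 3

class TinyQATModel(nn.Module):
    """Quantized input and linear layers, with a float univariate activation in between."""

    def __init__(self):
        super().__init__()
        # The first layer must be a QuantIdentity that quantizes the input
        self.quant_input = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, 32, bias=True, weight_bit_width=N_BITS)
        self.quant_hidden = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(32, 2, bias=True, weight_bit_width=N_BITS)

    def forward(self, x):
        x = self.quant_input(x)
        # Floating-point univariate functions are allowed: they are fused into table lookups
        x = torch.sigmoid(self.fc1(x))
        x = self.quant_hidden(x)
        return self.fc2(x)
```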
For examples of such a "mixed integer" network design, please see the Quantization Aware Training examples:
or go to the MNIST use-case example.
You can also refer to the SparseQuantNeuralNetImpl class, which is the basis of the built-in NeuralNetClassifier.
concrete.ml.common
: Module for shared data structures and code.
concrete.ml.common.check_inputs
: Check and conversion tools.
concrete.ml.common.debugging
: Module for debugging.
concrete.ml.common.debugging.custom_assert
: Provide some variants of assert.
concrete.ml.common.utils
: Utils that can be re-used by other pieces of code in the module.
concrete.ml.deployment
: Module for deployment of the FHE model.
concrete.ml.deployment.fhe_client_server
: APIs for FHE deployment.
concrete.ml.onnx
: ONNX module.
concrete.ml.onnx.convert
: ONNX conversion related code.
concrete.ml.onnx.onnx_impl_utils
: Utility functions for onnx operator implementations.
concrete.ml.onnx.onnx_model_manipulations
: Some code to manipulate models.
concrete.ml.onnx.onnx_utils
: Utils to interpret an ONNX model with numpy.
concrete.ml.onnx.ops_impl
: ONNX ops implementation in python + numpy.
concrete.ml.pytest
: Module which is used to contain common functions for pytest.
concrete.ml.pytest.torch_models
: Torch modules for our pytests.
concrete.ml.pytest.utils
: Common functions or lists for test files, which can't be put in fixtures.
concrete.ml.quantization
: Modules for quantization.
concrete.ml.quantization.base_quantized_op
: Base Quantized Op class that implements quantization for a float numpy op.
concrete.ml.quantization.post_training
: Post Training Quantization methods.
concrete.ml.quantization.quantized_module
: QuantizedModule API.
concrete.ml.quantization.quantized_ops
: Quantized versions of the ONNX operators for post training quantization.
concrete.ml.quantization.quantizers
: Quantization utilities for a numpy array/tensor.
concrete.ml.sklearn
: Import sklearn models.
concrete.ml.sklearn.base
: Module that contains base classes for our libraries estimators.
concrete.ml.sklearn.glm
: Implement sklearn's Generalized Linear Models (GLM).
concrete.ml.sklearn.linear_model
: Implement sklearn linear model.
concrete.ml.sklearn.protocols
: Protocols.
concrete.ml.sklearn.qnn
: Scikit-learn interface for concrete quantized neural networks.
concrete.ml.sklearn.rf
: Implements RandomForest models.
concrete.ml.sklearn.svm
: Implement Support Vector Machine.
concrete.ml.sklearn.torch_modules
: Implement torch module.
concrete.ml.sklearn.tree
: Implement the sklearn tree models.
concrete.ml.sklearn.tree_to_numpy
: Implements the conversion of a tree model to a numpy function.
concrete.ml.sklearn.xgb
: Implements XGBoost models.
concrete.ml.torch
: Modules for torch to numpy conversion.
concrete.ml.torch.compile
: torch compilation function.
concrete.ml.torch.numpy_module
: A torch to numpy module.
concrete.ml.version
: File to manage the version of the package.
fhe_client_server.FHEModelClient
: Client API to encrypt and decrypt FHE data.
fhe_client_server.FHEModelDev
: Dev API to save the model and then load and run the FHE circuit.
fhe_client_server.FHEModelServer
: Server API to load and run the FHE circuit.
ops_impl.ONNXMixedFunction
: A mixed quantized-raw valued onnx function.
torch_models.BranchingGemmModule
: Torch model with some branching and skip connections.
torch_models.BranchingModule
: Torch model with some branching and skip connections.
torch_models.CNN
: Torch CNN model for the tests.
torch_models.CNNGrouped
: Torch CNN model with grouped convolution for compile torch tests.
torch_models.CNNInvalid
: Torch CNN model for the tests.
torch_models.CNNMaxPool
: Torch CNN model for the tests with a max pool.
torch_models.CNNOther
: Torch CNN model for the tests.
torch_models.FC
: Torch model for the tests.
torch_models.FCSeq
: Torch model that should generate MatMul->Add ONNX patterns.
torch_models.FCSeqAddBiasVec
: Torch model that should generate MatMul->Add ONNX patterns.
torch_models.FCSmall
: Torch model for the tests.
torch_models.MultiInputNN
: Torch model to test multiple inputs forward.
torch_models.MultiOpOnSingleInputConvNN
: Network that applies two quantized operations on a single input.
torch_models.NetWithConcatUnsqueeze
: Torch model to test the concat and unsqueeze operators.
torch_models.NetWithLoops
: Torch model, where we reuse some elements in a loop.
torch_models.QATTestModule
: Torch model that implements a simple non-uniform quantizer.
torch_models.SimpleQAT
: Torch model implements a step function that needs Greater, Cast and Where.
torch_models.SingleMixNet
: Torch model with a single conv layer that produces the output, e.g. a blur filter.
torch_models.StepActivationModule
: Torch model implements a step function that needs Greater, Cast and Where.
torch_models.TinyCNN
: A very small CNN.
torch_models.TinyQATCNN
: A very small QAT CNN to classify the sklearn digits dataset.
torch_models.TorchSum
: Torch model to test the ReduceSum ONNX operator in a leveled circuit.
torch_models.TorchSumMod
: Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS.
torch_models.UnivariateModule
: Torch model that calls univariate and shape functions of torch.
base_quantized_op.QuantizedMixingOp
: An operator that mixes (adds or multiplies) together encrypted inputs.
base_quantized_op.QuantizedOp
: Base class for quantized ONNX ops implemented in numpy.
base_quantized_op.QuantizedOpUnivariateOfEncrypted
: An univariate operator of an encrypted value.
post_training.ONNXConverter
: Base ONNX to Concrete ML computation graph conversion class.
post_training.PostTrainingAffineQuantization
: Post-training Affine Quantization.
post_training.PostTrainingQATImporter
: Converter of Quantization Aware Training networks.
quantized_module.QuantizedModule
: Inference for a quantized model.
quantized_ops.QuantizedAbs
: Quantized Abs op.
quantized_ops.QuantizedAdd
: Quantized Addition operator.
quantized_ops.QuantizedAvgPool
: Quantized Average Pooling op.
quantized_ops.QuantizedBatchNormalization
: Quantized Batch normalization with encrypted input and in-the-clear normalization params.
quantized_ops.QuantizedBrevitasQuant
: Brevitas uniform quantization with encrypted input.
quantized_ops.QuantizedCast
: Cast the input to the required data type.
quantized_ops.QuantizedCelu
: Quantized Celu op.
quantized_ops.QuantizedClip
: Quantized clip op.
quantized_ops.QuantizedConcat
: Concatenate operator.
quantized_ops.QuantizedConv
: Quantized Conv op.
quantized_ops.QuantizedDiv
: Div operator /.
quantized_ops.QuantizedElu
: Quantized Elu op.
quantized_ops.QuantizedErf
: Quantized erf op.
quantized_ops.QuantizedExp
: Quantized Exp op.
quantized_ops.QuantizedFlatten
: Quantized flatten for encrypted inputs.
quantized_ops.QuantizedFloor
: Quantized Floor op.
quantized_ops.QuantizedGemm
: Quantized Gemm op.
quantized_ops.QuantizedGreater
: Comparison operator >.
quantized_ops.QuantizedGreaterOrEqual
: Comparison operator >=.
quantized_ops.QuantizedHardSigmoid
: Quantized HardSigmoid op.
quantized_ops.QuantizedHardSwish
: Quantized Hardswish op.
quantized_ops.QuantizedIdentity
: Quantized Identity op.
quantized_ops.QuantizedLeakyRelu
: Quantized LeakyRelu op.
quantized_ops.QuantizedLess
: Comparison operator <.
quantized_ops.QuantizedLessOrEqual
: Comparison operator <=.
quantized_ops.QuantizedLog
: Quantized Log op.
quantized_ops.QuantizedMatMul
: Quantized MatMul op.
quantized_ops.QuantizedMax
: Quantized Max op.
quantized_ops.QuantizedMaxPool
: Quantized Max Pooling op.
quantized_ops.QuantizedMin
: Quantized Min op.
quantized_ops.QuantizedMul
: Multiplication operator.
quantized_ops.QuantizedNeg
: Quantized Neg op.
quantized_ops.QuantizedNot
: Quantized Not op.
quantized_ops.QuantizedOr
: Or operator ||.
quantized_ops.QuantizedPRelu
: Quantized PRelu op.
quantized_ops.QuantizedPad
: Quantized Padding op.
quantized_ops.QuantizedPow
: Quantized pow op.
quantized_ops.QuantizedReduceSum
: ReduceSum with encrypted input.
quantized_ops.QuantizedRelu
: Quantized Relu op.
quantized_ops.QuantizedReshape
: Quantized Reshape op.
quantized_ops.QuantizedRound
: Quantized round op.
quantized_ops.QuantizedSelu
: Quantized Selu op.
quantized_ops.QuantizedSigmoid
: Quantized sigmoid op.
quantized_ops.QuantizedSign
: Quantized Sign op.
quantized_ops.QuantizedSoftplus
: Quantized Softplus op.
quantized_ops.QuantizedSub
: Subtraction operator.
quantized_ops.QuantizedTanh
: Quantized Tanh op.
quantized_ops.QuantizedTranspose
: Transpose operator for quantized inputs.
quantized_ops.QuantizedUnsqueeze
: Unsqueeze operator.
quantized_ops.QuantizedWhere
: Where operator on quantized arrays.
quantizers.MinMaxQuantizationStats
: Calibration set statistics.
quantizers.QuantizationOptions
: Options for quantization.
quantizers.QuantizedArray
: Abstraction of quantized array.
quantizers.UniformQuantizationParameters
: Quantization parameters for uniform quantization.
quantizers.UniformQuantizer
: Uniform quantizer.
base.BaseTreeClassifierMixin
: Mixin class for tree-based classifiers.
base.BaseTreeEstimatorMixin
: Mixin class for tree-based estimators.
base.BaseTreeRegressorMixin
: Mixin class for tree-based regressors.
base.QuantizedTorchEstimatorMixin
: Mixin that provides quantization for a torch module and follows the Estimator API.
base.SklearnLinearClassifierMixin
: A Mixin class for sklearn linear classifiers with FHE.
base.SklearnLinearModelMixin
: A Mixin class for sklearn linear models with FHE.
glm.GammaRegressor
: A Gamma regression model with FHE.
glm.PoissonRegressor
: A Poisson regression model with FHE.
glm.TweedieRegressor
: A Tweedie regression model with FHE.
linear_model.ElasticNet
: An ElasticNet regression model with FHE.
linear_model.Lasso
: A Lasso regression model with FHE.
linear_model.LinearRegression
: A linear regression model with FHE.
linear_model.LogisticRegression
: A logistic regression model with FHE.
linear_model.Ridge
: A Ridge regression model with FHE.
protocols.ConcreteBaseClassifierProtocol
: Concrete classifier protocol.
protocols.ConcreteBaseEstimatorProtocol
: A Concrete Estimator Protocol.
protocols.ConcreteBaseRegressorProtocol
: Concrete regressor protocol.
protocols.Quantizer
: Quantizer Protocol.
qnn.FixedTypeSkorchNeuralNet
: A mixin with a helpful modification to a skorch estimator that fixes the module type.
qnn.NeuralNetClassifier
: Scikit-learn interface for quantized FHE compatible neural networks.
qnn.NeuralNetRegressor
: Scikit-learn interface for quantized FHE compatible neural networks.
qnn.QuantizedSkorchEstimatorMixin
: Mixin class that adds quantization features to Skorch NN estimators.
qnn.SparseQuantNeuralNetImpl
: Sparse Quantized Neural Network classifier.
rf.RandomForestClassifier
: Implements the RandomForest classifier.
rf.RandomForestRegressor
: Implements the RandomForest regressor.
svm.LinearSVC
: A Classification Support Vector Machine (SVM).
svm.LinearSVR
: A Regression Support Vector Machine (SVM).
tree.DecisionTreeClassifier
: Implements the sklearn DecisionTreeClassifier.
tree.DecisionTreeRegressor
: Implements the sklearn DecisionTreeRegressor.
tree_to_numpy.Task
: Task enumerate.
xgb.XGBClassifier
: Implements the XGBoost classifier.
xgb.XGBRegressor
: Implements the XGBoost regressor.
numpy_module.NumpyModule
: General interface to transform a torch.nn.Module to numpy module.
check_inputs.check_X_y_and_assert
: sklearn.utils.check_X_y with an assert.
check_inputs.check_array_and_assert
: sklearn.utils.check_array with an assert.
custom_assert.assert_false
: Provide a custom assert to check that the condition is False.
custom_assert.assert_not_reached
: Provide a custom assert to check that a piece of code is never reached.
custom_assert.assert_true
: Provide a custom assert to check that the condition is True.
utils.check_there_is_no_p_error_options_in_configuration
: Check the user did not set p_error or global_p_error in configuration.
utils.generate_proxy_function
: Generate a proxy function for a function accepting only *args type arguments.
utils.get_onnx_opset_version
: Return the ONNX opset_version.
utils.manage_parameters_for_pbs_errors
: Return (p_error, global_p_error) that we want to give to Concrete-Numpy and the compiler.
utils.replace_invalid_arg_name_chars
: Sanitize arg_name, replacing invalid chars by _.
convert.get_equivalent_numpy_forward
: Get the numpy equivalent forward of the provided ONNX model.
convert.get_equivalent_numpy_forward_and_onnx_model
: Get the numpy equivalent forward of the provided torch Module.
onnx_impl_utils.compute_conv_output_dims
: Compute the output shape of a pool or conv operation.
onnx_impl_utils.compute_onnx_pool_padding
: Compute any additional padding needed to compute pooling layers.
onnx_impl_utils.numpy_onnx_pad
: Pad a tensor according to ONNX spec, using an optional custom pad value.
onnx_impl_utils.onnx_avgpool_compute_norm_const
: Compute the average pooling normalization constant.
onnx_model_manipulations.clean_graph_after_node_name
: Clean the graph of the onnx model by removing nodes after the given node name.
onnx_model_manipulations.clean_graph_after_node_op_type
: Clean the graph of the onnx model by removing nodes after the given node type.
onnx_model_manipulations.keep_following_outputs_discard_others
: Keep the outputs given in outputs_to_keep and remove the others from the model.
onnx_model_manipulations.remove_identity_nodes
: Remove identity nodes from a model.
onnx_model_manipulations.remove_node_types
: Remove unnecessary nodes from the ONNX graph.
onnx_model_manipulations.remove_unused_constant_nodes
: Remove unused Constant nodes in the provided onnx model.
onnx_model_manipulations.simplify_onnx_model
: Simplify an ONNX model, removes unused Constant nodes and Identity nodes.
onnx_utils.execute_onnx_with_numpy
: Execute the provided ONNX graph on the given inputs.
onnx_utils.get_attribute
: Get the attribute from an ONNX AttributeProto.
onnx_utils.get_op_type
: Construct the qualified type name of the ONNX operator.
onnx_utils.remove_initializer_from_input
: Remove initializers from model inputs.
ops_impl.cast_to_float
: Cast values to floating points.
ops_impl.numpy_abs
: Compute abs in numpy according to ONNX spec.
ops_impl.numpy_acos
: Compute acos in numpy according to ONNX spec.
ops_impl.numpy_acosh
: Compute acosh in numpy according to ONNX spec.
ops_impl.numpy_add
: Compute add in numpy according to ONNX spec.
ops_impl.numpy_asin
: Compute asin in numpy according to ONNX spec.
ops_impl.numpy_asinh
: Compute sinh in numpy according to ONNX spec.
ops_impl.numpy_atan
: Compute atan in numpy according to ONNX spec.
ops_impl.numpy_atanh
: Compute atanh in numpy according to ONNX spec.
ops_impl.numpy_avgpool
: Compute Average Pooling using Torch.
ops_impl.numpy_batchnorm
: Compute the batch normalization of the input tensor.
ops_impl.numpy_cast
: Execute ONNX cast in Numpy.
ops_impl.numpy_celu
: Compute celu in numpy according to ONNX spec.
ops_impl.numpy_concatenate
: Apply concatenate in numpy according to ONNX spec.
ops_impl.numpy_constant
: Return the constant passed as a kwarg.
ops_impl.numpy_cos
: Compute cos in numpy according to ONNX spec.
ops_impl.numpy_cosh
: Compute cosh in numpy according to ONNX spec.
ops_impl.numpy_div
: Compute div in numpy according to ONNX spec.
ops_impl.numpy_elu
: Compute elu in numpy according to ONNX spec.
ops_impl.numpy_equal
: Compute equal in numpy according to ONNX spec.
ops_impl.numpy_erf
: Compute erf in numpy according to ONNX spec.
ops_impl.numpy_exp
: Compute exponential in numpy according to ONNX spec.
ops_impl.numpy_flatten
: Flatten a tensor into a 2d array.
ops_impl.numpy_floor
: Compute Floor in numpy according to ONNX spec.
ops_impl.numpy_greater
: Compute greater in numpy according to ONNX spec.
ops_impl.numpy_greater_float
: Compute greater in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_greater_or_equal
: Compute greater or equal in numpy according to ONNX spec.
ops_impl.numpy_greater_or_equal_float
: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
ops_impl.numpy_hardsigmoid
: Compute hardsigmoid in numpy according to ONNX spec.
ops_impl.numpy_hardswish
: Compute hardswish in numpy according to ONNX spec.
ops_impl.numpy_identity
: Compute identity in numpy according to ONNX spec.
ops_impl.numpy_leakyrelu
: Compute leakyrelu in numpy according to ONNX spec.
ops_impl.numpy_less
: Compute less in numpy according to ONNX spec.
ops_impl.numpy_less_float
: Compute less in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_less_or_equal
: Compute less or equal in numpy according to ONNX spec.
ops_impl.numpy_less_or_equal_float
: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_log
: Compute log in numpy according to ONNX spec.
ops_impl.numpy_matmul
: Compute matmul in numpy according to ONNX spec.
ops_impl.numpy_max
: Compute Max in numpy according to ONNX spec.
ops_impl.numpy_maxpool
: Compute Max Pooling using Torch.
ops_impl.numpy_min
: Compute Min in numpy according to ONNX spec.
ops_impl.numpy_mul
: Compute mul in numpy according to ONNX spec.
ops_impl.numpy_neg
: Compute Negative in numpy according to ONNX spec.
ops_impl.numpy_not
: Compute not in numpy according to ONNX spec.
ops_impl.numpy_not_float
: Compute not in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_or
: Compute or in numpy according to ONNX spec.
ops_impl.numpy_or_float
: Compute or in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_pow
: Compute pow in numpy according to ONNX spec.
ops_impl.numpy_relu
: Compute relu in numpy according to ONNX spec.
ops_impl.numpy_round
: Compute round in numpy according to ONNX spec.
ops_impl.numpy_selu
: Compute selu in numpy according to ONNX spec.
ops_impl.numpy_sigmoid
: Compute sigmoid in numpy according to ONNX spec.
ops_impl.numpy_sign
: Compute Sign in numpy according to ONNX spec.
ops_impl.numpy_sin
: Compute sin in numpy according to ONNX spec.
ops_impl.numpy_sinh
: Compute sinh in numpy according to ONNX spec.
ops_impl.numpy_softmax
: Compute softmax in numpy according to ONNX spec.
ops_impl.numpy_softplus
: Compute softplus in numpy according to ONNX spec.
ops_impl.numpy_sub
: Compute sub in numpy according to ONNX spec.
ops_impl.numpy_tan
: Compute tan in numpy according to ONNX spec.
ops_impl.numpy_tanh
: Compute tanh in numpy according to ONNX spec.
ops_impl.numpy_thresholdedrelu
: Compute thresholdedrelu in numpy according to ONNX spec.
ops_impl.numpy_transpose
: Transpose in numpy according to ONNX spec.
ops_impl.numpy_where
: Compute the equivalent of numpy.where.
ops_impl.numpy_where_body
: Compute the equivalent of numpy.where.
ops_impl.onnx_func_raw_args
: Decorate a numpy onnx function to flag the raw/non quantized inputs.
utils.sanitize_test_and_train_datasets
: Sanitize datasets depending on the model type.
post_training.get_n_bits_dict
: Convert the n_bits parameter into a proper dictionary.
quantizers.fill_from_kwargs
: Fill a parameter set structure from kwargs parameters.
base.get_sklearn_linear_models
: Return the list of available linear models in Concrete-ML.
base.get_sklearn_models
: Return the list of available models in Concrete-ML.
base.get_sklearn_neural_net_models
: Return the list of available neural net models in Concrete-ML.
base.get_sklearn_tree_models
: Return the list of available tree models in Concrete-ML.
tree_to_numpy.tree_to_numpy
: Convert the tree inference to a numpy functions using Hummingbird.
compile.compile_brevitas_qat_model
: Compile a Brevitas Quantization Aware Training model.
compile.compile_onnx_model
: Compile a torch module into a FHE equivalent.
compile.compile_torch_model
: Compile a torch module into a FHE equivalent.
compile.convert_torch_tensor_or_numpy_array_to_numpy_array
: Convert a torch tensor or a numpy array to a numpy array.
concrete.ml.common.debugging.custom_assert
Provide some variants of assert.
assert_true
Provide a custom assert to check that the condition is True.
Args:
condition
(bool): the condition. If False, raise AssertionError
on_error_msg
(str): optional message for precising the error, in case of error
error_type
(Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError
assert_false
Provide a custom assert to check that the condition is False.
Args:
condition
(bool): the condition. If True, raise AssertionError
on_error_msg
(str): optional message for precising the error, in case of error
error_type
(Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError
assert_not_reached
Provide a custom assert to check that a piece of code is never reached.
Args:
on_error_msg
(str): message for precising the error
error_type
(Type[Exception]): the type of error to raise, if condition is not fulfilled. Default to AssertionError
The ONNX import section gave an overview of the conversion of a generic ONNX graph to a FHE-compatible Concrete-ML op-graph. This section describes the implementation of operations in the Concrete-ML op-graph and the way floating point can be used in some parts of the op-graphs through table lookup operations.
Concrete, the underlying implementation of TFHE that powers Concrete-ML, enables two types of operations on integers:
arithmetic operations: the addition of two encrypted values and multiplication of encrypted values with clear scalars. These are used, for example, in dot-products, matrix multiplication (linear layers), and convolution.
table lookup operations (TLU): using an encrypted value as an index, return the value of a lookup table at that index. This is implemented using Programmable Bootstrapping. This operation is used to perform any non-linear computation such as activation functions, quantization, and normalization.
Since machine learning models use floating point inputs and weights, they first need to be converted to integers using quantization.
Alternatively, it is possible to use a table lookup to avoid the quantization of the entire graph, by converting floating-point ONNX subgraphs into lambdas and computing their corresponding lookup tables to be evaluated directly in FHE. This operator-fusion technique only requires the input and output of the lambdas to be integers.
For example, in the following graph there is a single input, which must be an encrypted integer tensor. The following series of univariate functions is then fed into a matrix multiplication (MatMul) and fused into a single table lookup with integer inputs and outputs.
Concrete-ML implements ONNX operations using Concrete-Numpy, which can handle floating point operations, as long as they can be fused to an integer lookup table. The ONNX operation implementations are based on the QuantizedOp class.
There are two modes of creation of a single table lookup for a chain of ONNX operations:
float mode: when the operation can be fused
mixed float/integer: when the ONNX operation needs to perform arithmetic operations
Thus, QuantizedOp instances may need to quantize their inputs or the result of their computation, depending on their position in the graph.
The QuantizedOp class provides a generic implementation of an ONNX operation, including the quantization of inputs and outputs, with the computation implemented in NumPy in ops_impl.py. It is possible to picture the architecture of the QuantizedOp as the following structure:
This figure shows that the QuantizedOp has a body that implements the computation of the operation, following the ONNX spec. The operation's body can take either integer or float inputs and can output float or integer values. Two quantizers are attached to the operation: one that takes float inputs and produces integer inputs, and one that does the same for the output.
Depending on the position of the op in the graph and its inputs, the QuantizedOp can be fully fused to a TLU.
Many ONNX ops are trivially univariate, as they multiply variable inputs with constants or apply univariate functions such as ReLU, Sigmoid, etc. This includes operations between the input and the MatMul in the graph above (subtraction, comparison, multiplication, etc. between inputs and constants).
Operations such as matrix multiplication of encrypted inputs with a constant matrix, or convolution with constant weights, require that the encrypted inputs be integers. In this case, the input quantizer of the QuantizedOp is applied. These types of operations are implemented with a class that derives from QuantizedOp and implements q_impl, such as QuantizedGemm and QuantizedConv.
Finally, some operations produce graph outputs, which must be integers. These operations need to quantize their outputs as follows:
The diagram above shows that both float ops and integer ops need to quantize their outputs to integers when placed at the end of the graph.
To chain the operation types described above following the ONNX graph, Concrete-ML constructs a function that calls the q_impl of the QuantizedOp instances in the graph in sequence, and uses Concrete-Numpy to trace the execution and compile to FHE. Thus, in this chain of function calls, all groups of instructions that operate in floating point will be fused to TLUs. In FHE, this lookup table is computed with a PBS.
The red contours show the groups of elementary Concrete-Numpy instructions that will be converted to TLUs.
Note that the input is slightly different from the QuantizedOp. Since the encrypted function takes integers as inputs, the input needs to be de-quantized first.
QuantizedOp
QuantizedOp is the base class for all ONNX-quantized operators. It abstracts away many details to allow easy implementation of new quantized ops.
The QuantizedOp class exposes a function can_fuse that:
helps to determine the type of implementation that will be traced.
determines whether operations further in the graph, that depend on the results of this operation, can fuse.
In most cases, ONNX ops have a single variable input and one or more constant inputs.
When the op implements element-wise operations between the inputs and constants (addition, subtraction, multiplication, etc.), the operation can be fused to a TLU. Thus, by default in QuantizedOp, the can_fuse function returns True.
When the op implements operations that mix the various scalars in the input encrypted tensor, the operation cannot fuse, as table lookups are univariate. Thus, operations such as QuantizedGemm and QuantizedConv return False in can_fuse.
Some operations may be found in both settings above. A mechanism is implemented in Concrete-ML to determine if the inputs of a QuantizedOp are produced by a unique integer tensor. Therefore, the can_fuse function of some QuantizedOp types (addition, subtraction) will allow fusion to take place if both operands are produced by a unique integer tensor:
You can check ops_impl.py to see how some operations are implemented in NumPy. The declaration convention for these operations is as follows (a sketch is given after the list below):
The required inputs should be positional arguments only, placed before the /, which marks the limit of the positional arguments.
The optional inputs should be positional or keyword arguments, placed between the / and the *, which marks the limit of positional or keyword arguments.
The operator attributes should be keyword arguments only, placed after the *.
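A hedged sketch of this convention (the operator below is purely hypothetical and is not part of ops_impl.py):

```python
def numpy_toy_op(
    x,           # required input: positional-only, declared before the /
    /,
    bias=None,   # optional input: positional-or-keyword, between the / and the *
    *,
    alpha=1.0,   # operator attribute: keyword-only, after the *
):
    """Toy operator: scale the input by alpha and add an optional bias."""
    result = alpha * x
    if bias is not None:
        result = result + bias
    return (result,)
```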
The proper use of positional/keyword arguments is required to allow the QuantizedOp class to properly populate metadata automatically. It uses the Python inspect module and stores relevant information for each argument related to its positional/keyword status. This allows using the Concrete-Numpy implementation as a specification for QuantizedOp, which removes some data duplication and provides a single source of truth for QuantizedOp and the ONNX-NumPy implementations.
In that case (unless the quantized implementation requires special handling, like QuantizedGemm), you can just set _impl_for_op_named to the name of the ONNX op for which the quantized class is implemented (this uses the mapping ONNX_OPS_TO_NUMPY_IMPL in onnx_utils.py to get the correct implementation).
Providing an integer implementation requires sub-classing QuantizedOp to create a new operation. This sub-class must override q_impl in order to provide an integer implementation. QuantizedGemm is an example of such a case, where quantized matrix multiplication requires proper handling of scales and zero points. The q_impl of that class reflects this.
In the body of q_impl, you can use the _prepare_inputs_with_constants helper in order to obtain quantized integer values:
Here, prepared_inputs will contain one or more QuantizedArray, of which the qvalues are the quantized integers.
Once the required integer processing code is implemented, the output of the q_impl function must be returned as a single QuantizedArray. Most commonly, this is built using the de-quantized results of the processing done in q_impl.
In this case, in q_impl you can check whether the current operation can be fused by calling self.can_fuse(). You can then provide both a floating-point and an integer implementation. The traced execution path will depend on can_fuse():
concrete.ml.common.check_inputs
Check and conversion tools.
Utils that are used to check (including convert) some data types which are compatible with scikit-learn to numpy types.
check_array_and_assert
sklearn.utils.check_array with an assert.
Equivalent of sklearn.utils.check_array, with a final assert that the type is one which is supported by Concrete-ML.
Args:
X
(object): Input object to check / convert
Returns: The converted and validated array
check_X_y_and_assert
sklearn.utils.check_X_y with an assert.
Equivalent of sklearn.utils.check_X_y, with a final assert that the type is one which is supported by Concrete-ML.
Args:
X
(ndarray, list, sparse matrix): Input data
y
(ndarray, list, sparse matrix): Labels
*args
: The arguments to pass to check_X_y
**kwargs
: The keyword arguments to pass to check_X_y
Returns: The converted and validated arrays
concrete.ml.onnx.convert
ONNX conversion related code.
IMPLEMENTED_ONNX_OPS
OPSET_VERSION_FOR_ONNX_EXPORT
get_equivalent_numpy_forward_and_onnx_model
Get the numpy equivalent forward of the provided torch Module.
Args:
torch_module
(torch.nn.Module): the torch Module for which to get the equivalent numpy forward.
dummy_input
(Union[torch.Tensor, Tuple[torch.Tensor, ...]]): dummy inputs for ONNX export.
output_onnx_file
(Optional[Union[Path, str]]): Path to save the ONNX file to. Will use a temp file if not provided. Defaults to None.
Returns:
Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.GraphProto]
: The function that will execute the equivalent numpy code to the passed torch_module and the generated ONNX model.
get_equivalent_numpy_forward
Get the numpy equivalent forward of the provided ONNX model.
Args:
onnx_model
(onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward.
check_model
(bool): set to True to run the onnx checker on the model. Defaults to True.
Raises:
ValueError
: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy.
Returns:
Callable[..., Tuple[numpy.ndarray, ...]]
: The function that will execute the equivalent numpy function.
concrete.ml.deployment.fhe_client_server
APIs for FHE deployment.
CML_VERSION
AVAILABLE_MODEL
FHEModelServer
Server API to load and run the FHE circuit.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
load
Load the circuit.
run
Run the model on the server over encrypted data.
Args:
serialized_encrypted_quantized_data
(cnp.PublicArguments): the encrypted, quantized and serialized data
serialized_evaluation_keys
(cnp.EvaluationKeys): the serialized evaluation keys
Returns:
cnp.PublicResult
: the result of the model
FHEModelDev
Dev API to save the model and then load and run the FHE circuit.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
model
(Any): the model to use for the FHE API
save
Export all needed artifacts for the client and server.
Raises:
Exception
: path_dir is not empty
FHEModelClient
Client API to encrypt and decrypt FHE data.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
key_dir
(str): the path to the directory where the keys are stored
deserialize_decrypt
Deserialize and decrypt the values.
Args:
serialized_encrypted_quantized_result
(cnp.PublicArguments): the serialized, encrypted and quantized result
Returns:
numpy.ndarray
: the decrypted and deserialized values
deserialize_decrypt_dequantize
Deserialize, decrypt and dequantize the values.
Args:
serialized_encrypted_quantized_result
(cnp.PublicArguments): the serialized, encrypted and quantized result
Returns:
numpy.ndarray
: the decrypted (dequantized) values
generate_private_and_evaluation_keys
Generate the private and evaluation keys.
Args:
force
(bool): if True, regenerate the keys even if they already exist
get_serialized_evaluation_keys
Get the serialized evaluation keys.
Returns:
cnp.EvaluationKeys
: the evaluation keys
load
Load the quantizers along with the FHE specs.
quantize_encrypt_serialize
Quantize, encrypt and serialize the values.
Args:
x
(numpy.ndarray): the values to quantize, encrypt and serialize
Returns:
cnp.PublicArguments
: the quantized, encrypted and serialized values
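Taken together, the three classes support a deployment flow along these lines (a sketch: the paths, the fitted and compiled model, and the input x are placeholders):

```python
from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Development side: export the artifacts of a fitted, compiled model
dev = FHEModelDev(path_dir="./deployment", model=model)
dev.save()

# Client side: generate keys and encrypt an input
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_x = client.quantize_encrypt_serialize(x)

# Server side: run the FHE circuit on the encrypted input
server = FHEModelServer(path_dir="./deployment")
server.load()
encrypted_result = server.run(encrypted_x, evaluation_keys)

# Client side: decrypt and de-quantize the result
result = client.deserialize_decrypt_dequantize(encrypted_result)
```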
concrete.ml.onnx.onnx_impl_utils
Utility functions for onnx operator implementations.
numpy_onnx_pad
Pad a tensor according to ONNX spec, using an optional custom pad value.
Args:
x
(numpy.ndarray): input tensor to pad
pads
(List[int]): padding values according to ONNX spec
pad_value
(Optional[Union[float, int]]): value used to fill in padding, default 0
int_only
(bool): set to True to generate integer only code with Concrete-Numpy
Returns:
res
(numpy.ndarray): the input tensor with padding applied
compute_conv_output_dims
Compute the output shape of a pool or conv operation.
See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html for details on the computation of the output shape.
Args:
input_shape
(Tuple[int, ...]): shape of the input to be padded as N x C x H x W
kernel_shape
(Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)
pads
(Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W
strides
(Tuple[int, ...]): strides for each dimension
ceil_mode
(int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc
Returns:
res
(Tuple[int, ...]): shape of the output of a conv or pool operator with given parameters
compute_onnx_pool_padding
Compute any additional padding needed to compute pooling layers.
The ONNX standard uses ceil_mode=1 to match tensorflow style pooling output computation. In this setting, the kernel can be placed at a valid position even though it contains values outside of the input shape including padding. The ceil_mode parameter controls whether this mode is enabled. If the mode is not enabled, the output shape follows PyTorch rules.
Args:
input_shape
(Tuple[int, ...]): shape of the input to be padded as N x C x H x W
kernel_shape
(Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)
pads
(Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W
strides
(Tuple[int, ...]): strides for each dimension
ceil_mode
(int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc
Returns:
res
(Tuple[int, ...]): shape of the output of a conv or pool operator with given parameters
onnx_avgpool_compute_norm_const
Compute the average pooling normalization constant.
This constant can be a tensor of the same shape as the input or a scalar.
Args:
input_shape
(Tuple[int, ...]): shape of the input to be padded as N x C x H x W
kernel_shape
(Tuple[int, ...]): shape of the conv or pool kernel, as Kh x Kw (or n-d)
pads
(Tuple[int, ...]): padding values following ONNX spec: dim1_start, dim2_start, .. dimN_start, dim1_end, dim2_end, ... dimN_end where in the 2-d case dim1 is H, dim2 is W
strides
(Tuple[int, ...]): strides for each dimension
ceil_mode
(int): set to 1 to use the ceil function to compute the output shape, as described in the PyTorch doc
Returns:
res
(float): tensor or scalar, corresponding to normalization factors to apply for the average pool computation for each valid kernel position
concrete.ml.onnx.onnx_model_manipulations
Some code to manipulate models.
simplify_onnx_model
Simplify an ONNX model by removing unused Constant nodes and Identity nodes.
Args:
onnx_model
(onnx.ModelProto): the model to simplify.
remove_unused_constant_nodes
Remove unused Constant nodes in the provided onnx model.
Args:
onnx_model
(onnx.ModelProto): the model for which we want to remove unused Constant nodes.
remove_identity_nodes
Remove identity nodes from a model.
Args:
onnx_model
(onnx.ModelProto): the model for which we want to remove Identity nodes.
keep_following_outputs_discard_others
Keep the outputs given in outputs_to_keep and remove the others from the model.
Args:
onnx_model
(onnx.ModelProto): the ONNX model to modify.
outputs_to_keep
(Iterable[str]): the outputs to keep by name.
remove_node_types
Remove unnecessary nodes from the ONNX graph.
Args:
onnx_model
(onnx.ModelProto): The ONNX model to modify.
op_types_to_remove
(List[str]): The node types to remove from the graph.
Raises:
ValueError
: Wrong replacement by an Identity node.
clean_graph_after_node_name
Clean the graph of the onnx model by removing nodes after the given node name.
Args:
onnx_model
(onnx.ModelProto): The onnx model.
node_name
(str): The node's name whose following nodes will be removed.
fail_if_not_found
(bool): If true, abort if the node name is not found
Raises:
ValueError
: if the node name is not found and if fail_if_not_found is set
clean_graph_after_node_op_type
Clean the graph of the onnx model by removing nodes after the given node type.
Args:
onnx_model
(onnx.ModelProto): The onnx model.
node_op_type
(str): The node's op_type whose following nodes will be removed.
fail_if_not_found
(bool): If true, abort if the node op_type is not found
Raises:
ValueError
: if the node op_type is not found and if fail_if_not_found is set
concrete.ml.pytest.torch_models
Torch modules for our pytests.
FCSmall
Torch model for the tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
FC
Torch model for the tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
CNN
Torch CNN model for the tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
CNNMaxPool
Torch CNN model for the tests with a max pool.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
CNNOther
Torch CNN model for the tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
CNNInvalid
Torch CNN model for the tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
CNNGrouped
Torch CNN model with grouped convolution for compile torch tests.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
NetWithLoops
Torch model, where we reuse some elements in a loop.
Torch model, where we reuse some elements in a loop in the forward and don't expect the user to define these elements in a particular order.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
MultiInputNN
Torch model to test multiple inputs forward.
__init__
forward
Forward pass.
Args:
x
: the first input of the NN
y
: the second input of the NN
Returns: the output of the NN
BranchingModule
Torch model with some branching and skip connections.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
BranchingGemmModule
Torch model with some branching and skip connections.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
UnivariateModule
Torch model that calls univariate and shape functions of torch.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
StepActivationModule
Torch model that implements a step function that needs Greater, Cast and Where.
__init__
forward
Forward pass with a quantizer built into the computation graph.
Args:
x
: the input of the NN
Returns: the output of the NN
NetWithConcatUnsqueeze
Torch model to test the concat and unsqueeze operators.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
MultiOpOnSingleInputConvNN
Network that applies two quantized operations on a single input.
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
FCSeq
Torch model that should generate MatMul->Add ONNX patterns.
This network generates additions with a constant scalar
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
FCSeqAddBiasVec
Torch model that should generate MatMul->Add ONNX patterns.
This network tests the addition with a constant vector
__init__
forward
Forward pass.
Args:
x
: the input of the NN
Returns: the output of the NN
TinyCNN
A very small CNN.
__init__
Create the tiny CNN with two conv layers.
Args:
n_classes
: number of classes
act
: the activation
forward
Forward the two layers with the chosen activation function.
Args:
x
: the input of the NN
Returns: the output of the NN
TinyQATCNN
A very small QAT CNN to classify the sklearn digits dataset.
This class also allows pruning to a maximum of 10 active neurons, which should help keep the accumulator bit-width low.
__init__
Construct the CNN with a configurable number of classes.
Args:
n_classes
(int): number of outputs of the neural net
n_bits
(int): number of weight and activation bits for quantization
n_active
(int): number of active (non-zero weight) neurons to keep
signed
(bool): whether quantized integer values are signed
narrow
(bool): whether the range of quantized integer values is narrow/symmetric
forward
Run inference on the tiny CNN, apply the decision layer on the reshaped conv output.
Args:
x
: the input to the NN
Returns: the output of the NN
test_torch
Test the network: measure accuracy on the test set.
Args:
test_loader
: the test loader
Returns:
res
: the number of correctly classified test examples
toggle_pruning
Enable or remove pruning.
Args:
enable
: whether to enable or remove pruning
SimpleQAT
Torch model that implements a step function that needs Greater, Cast and Where.
__init__
forward
Forward pass with a quantizer built into the computation graph.
Args:
x
: the input of the NN
Returns: the output of the NN
QATTestModule
Torch model that implements a simple non-uniform quantizer.
__init__
forward
Forward pass with a quantizer built into the computation graph.
Args:
x
: the input of the NN
Returns: the output of the NN
SingleMixNet
Torch model with a single conv layer that produces the output, e.g. a blur filter.
__init__
forward
Execute the single convolution.
Args:
x
: the input of the NN
Returns: the output of the NN
TorchSum
Torch model to test the ReduceSum ONNX operator in a leveled circuit.
__init__
Initialize the module.
Args:
dim
(Tuple[int]): The axis along which the sum should be executed
keepdim
(bool): If the output should keep the same dimension as the input or not
forward
Forward pass.
Args:
x
(torch.tensor): The input of the model
Returns:
torch_sum
(torch.tensor): The sum of the input's tensor elements along the given axis
TorchSumMod
Torch model to test the ReduceSum ONNX operator in a circuit containing a PBS.
__init__
Initialize the module.
Args:
dim
(Tuple[int]): The axis along which the sum should be executed
keepdim
(bool): If the output should keep the same dimension as the input or not
forward
Forward pass.
Args:
x
(torch.tensor): The input of the model
Returns:
torch_sum
(torch.tensor): The sum of the input's tensor elements along the given axis
concrete.ml.pytest.utils
Common functions or lists for test files, which can't be put in fixtures.
regressor_models
classifier_models
classifiers
regressors
sanitize_test_and_train_datasets
Sanitize datasets depending on the model type.
Args:
model
: the model
x
: the first output of load_data, i.e., the inputs
y
: the second output of load_data, i.e., the labels
Returns: Tuple containing sanitized (model_params, x, y, x_train, y_train, x_test)
concrete.ml.onnx.onnx_utils
Utils to interpret an ONNX model with numpy.
ATTR_TYPES
ATTR_GETTERS
ONNX_OPS_TO_NUMPY_IMPL
ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_FLOAT
ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_BOOL
ONNX_OPS_TO_NUMPY_IMPL_BOOL
IMPLEMENTED_ONNX_OPS
get_attribute
Get the attribute from an ONNX AttributeProto.
Args:
attribute
(onnx.AttributeProto): The attribute to retrieve the value from.
Returns:
Any
: The stored attribute value.
get_op_type
Construct the qualified type name of the ONNX operator.
Args:
node
(Any): ONNX graph node
Returns:
result
(str): qualified name
execute_onnx_with_numpy
Execute the provided ONNX graph on the given inputs.
Args:
graph
(onnx.GraphProto): The ONNX graph to execute.
*inputs
: The inputs of the graph.
Returns:
Tuple[numpy.ndarray]
: The result of the graph's execution.
remove_initializer_from_input
Remove initializers from model inputs.
In some cases, ONNX initializers may appear, erroneously, as graph inputs. This function searches all model inputs and removes those that are initializers.
Args:
model
(onnx.ModelProto): the model to clean
Returns:
onnx.ModelProto
: the cleaned model
concrete.ml.common.utils
Utils that can be re-used by other pieces of code in the module.
MAX_BITWIDTH_BACKWARD_COMPATIBLE
replace_invalid_arg_name_chars
Sanitize arg_name, replacing invalid chars by _.
This does not check that the starting character of arg_name is valid.
Args:
arg_name
(str): the arg name to sanitize.
Returns:
str
: the sanitized arg name, with only chars in _VALID_ARG_CHARS.
generate_proxy_function
Generate a proxy function for a function accepting only *args type arguments.
This returns a runtime compiled function with the sanitized argument names passed in desired_functions_arg_names as the arguments to the function.
Args:
function_to_proxy
(Callable): the function defined like def f(*args) for which to return a function like f_proxy(arg_1, arg_2) for any number of arguments.
desired_functions_arg_names
(Iterable[str]): the argument names to use, these names are sanitized and the mapping between the original argument name to the sanitized one is returned in a dictionary. Only the sanitized names will work for a call to the proxy function.
Returns:
Tuple[Callable, Dict[str, str]]
: the proxy function and the mapping of the original arg name to the new and sanitized arg names.
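A small usage sketch (the function f and the argument names are hypothetical):

```python
from concrete.ml.common.utils import generate_proxy_function

def f(*args):
    # Hypothetical function accepting only *args
    return args[0] + args[1]

proxy, orig_to_sanitized = generate_proxy_function(f, ["input 0", "input 1"])
# orig_to_sanitized maps the original names to the sanitized ones,
# e.g. "input 0" -> "input_0"
result = proxy(1, 2)
```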
get_onnx_opset_version
Return the ONNX opset_version.
Args:
onnx_model
(onnx.ModelProto): the model.
Returns:
int
: the version of the model
manage_parameters_for_pbs_errors
Return (p_error, global_p_error) that we want to give to Concrete-Numpy and the compiler.
The returned (p_error, global_p_error) depends on the user's parameters and on the way we want to manage defaults in Concrete-ML, which may differ from the way defaults are managed in Concrete-Numpy.
Principle: if neither is set, global_p_error is set to a default value of our choice; if both are set, an error is raised; if only one is set, it is used and forwarded to Concrete-Numpy and the compiler.
Note that global_p_error is currently not simulated by the VL, i.e., taken as 0.
Args:
p_error
(Optional[float]): probability of error of a single PBS.
global_p_error
(Optional[float]): probability of error of the full circuit.
Returns:
(p_error, global_p_error)
: parameters to give to the compiler
Raises:
ValueError
: if both parameters are set (this differs from Concrete-Numpy)
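The principle above can be summarized by a short sketch (not the library code; the default value below is illustrative):

```python
def resolve_pbs_errors(p_error=None, global_p_error=None):
    # Both set: ambiguous, refuse
    if p_error is not None and global_p_error is not None:
        raise ValueError("Please only set one of (p_error, global_p_error)")
    # Neither set: pick a default global_p_error (value is illustrative)
    if p_error is None and global_p_error is None:
        global_p_error = 0.01
    # Otherwise, forward whichever was set
    return p_error, global_p_error
```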
check_there_is_no_p_error_options_in_configuration
Check the user did not set p_error or global_p_error in configuration.
It would be dangerous, since we set them as direct arguments in our calls to Concrete-Numpy.
Args:
configuration
: Configuration object to use during compilation
concrete.ml.onnx.ops_impl
ONNX ops implementation in python + numpy.
cast_to_float
Cast values to floating points.
Args:
inputs
(Tuple[numpy.ndarray]): The values to consider.
Returns:
Tuple[numpy.ndarray]
: The float values.
onnx_func_raw_args
Decorate a numpy onnx function to flag the raw/non quantized inputs.
Args:
*args (tuple[Any])
: function argument names
Returns:
result
(ONNXMixedFunction): wrapped numpy function with a list of mixed arguments
numpy_where_body
Compute the equivalent of numpy.where.
This function is not mapped to any ONNX operator (as opposed to numpy_where). It is usable by functions which are mapped to ONNX operators, e.g. numpy_div or numpy_where.
Args:
c
(numpy.ndarray): Condition operand.
t
(numpy.ndarray): True operand.
f
(numpy.ndarray): False operand.
Returns:
numpy.ndarray
: numpy.where(c, t, f)
numpy_where
Compute the equivalent of numpy.where.
Args:
c
(numpy.ndarray): Condition operand.
t
(numpy.ndarray): True operand.
f
(numpy.ndarray): False operand.
Returns:
numpy.ndarray
: numpy.where(c, t, f)
numpy_add
Compute add in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13
Args:
a
(numpy.ndarray): First operand.
b
(numpy.ndarray): Second operand.
Returns:
Tuple[numpy.ndarray]
: Result, has same element type as two inputs
numpy_constant
Return the constant passed as a kwarg.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13
Args:
**kwargs
: keyword arguments
Returns:
Any
: The stored constant.
numpy_matmul
Compute matmul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13
Args:
a
(numpy.ndarray): N-dimensional matrix A
b
(numpy.ndarray): N-dimensional matrix B
Returns:
Tuple[numpy.ndarray]
: Matrix multiply results from A * B
numpy_relu
Compute relu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sigmoid
Compute sigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_softmax
Compute softmax in numpy according to ONNX spec.
Softmax is currently not supported in FHE.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13
Args:
x
(numpy.ndarray): Input tensor
axis
(None, int, tuple of int): Axis or axes along which a softmax's sum is performed. If None, it will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis. Default to 1.
keepdims
(bool): If True, the axes which are reduced along the sum are left in the result as dimensions with size one. Default to True.
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_cos
Compute cos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_cosh
Compute cosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sin
Compute sin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sinh
Compute sinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_tan
Compute tan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_tanh
Compute tanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_acos
Compute acos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_acosh
Compute acosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_asin
Compute asin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_asinh
Compute asinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_atan
Compute atan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_atanh
Compute atanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_elu
Compute elu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_selu
Compute selu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
gamma
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_celu
Compute celu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_leakyrelu
Compute leakyrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_thresholdedrelu
Compute thresholdedrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_hardsigmoid
Compute hardsigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
beta
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_softplus
Compute softplus in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_abs
Compute abs in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_div
Compute div in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_mul
Compute mul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sub
Compute sub in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_log
Compute log in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_erf
Compute erf in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_hardswish
Compute hardswish in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_exp
Compute exponential in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: The exponential of the input tensor computed element-wise
numpy_equal
Compute equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_not
Compute not in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_not_float
Compute not in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater
Compute greater in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_float
Compute greater in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_or_equal
Compute greater or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_or_equal_float
Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less
Compute less in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_float
Compute less in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_or_equal
Compute less or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_or_equal_float
Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_identity
Compute identity in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_transpose
Transpose in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13
Args:
x
(numpy.ndarray): Input tensor
perm
(numpy.ndarray): Permutation of the axes
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_avgpool
Compute Average Pooling using Torch.
Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool
Args:
x
(numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d
ceil_mode
(int): ONNX rounding parameter, expected 0 (torch style dimension computation)
kernel_shape
(Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv
pads
(Tuple[int, ...]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int, ...]): stride of the convolution on each axis
Returns:
res
(numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth).
See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html
Raises:
AssertionError
: if the pooling arguments are wrong
numpy_maxpool
Compute Max Pooling using Torch.
Currently supports 2d max pooling with torch semantics. This function is ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool
Args:
x
(numpy.ndarray): the input
kernel_shape
(Union[Tuple[int, ...], List[int]]): shape of the kernel
strides
(Optional[Union[Tuple[int, ...], List[int]]]): stride along each spatial axis set to 1 along each spatial axis if not set
auto_pad
(str): padding strategy, default = "NOTSET"
pads
(Optional[Union[Tuple[int, ...], List[int]]]): padding for the beginning and ending along each spatial axis (D1_begin, D2_begin, ..., D1_end, D2_end, ...) set to 0 along each spatial axis if not set
dilations
(Optional[Union[Tuple[int, ...], List[int]]]): dilation along each spatial axis set to 1 along each spatial axis if not set
ceil_mode
(int): ceiling mode, default = 1
storage_order
(int): storage order, 0 for row major, 1 for column major, default = 0
Returns:
res
(numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth).
See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html
numpy_cast
Execute ONNX cast in Numpy.
Supports only booleans for now, which are converted to integers.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast
Args:
data
(numpy.ndarray): Input encrypted tensor
to
(int): integer value of the onnx.TensorProto DataType enum
Returns:
result
(numpy.ndarray): a tensor with the required data type
numpy_batchnorm
Compute the batch normalization of the input tensor.
This can be expressed as:
Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization-14
Args:
x
(numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size, C is the number of channels.
scale
(numpy.ndarray): scale tensor of shape (C,)
bias
(numpy.ndarray): bias tensor of shape (C,)
input_mean
(numpy.ndarray): mean values to use for each input channel, shape (C,)
input_var
(numpy.ndarray): variance values to use for each input channel, shape (C,)
epsilon
(float): avoids division by zero
momentum
(float): momentum used during training of the mean/variance, not used in inference
training_mode
(int): if the model was exported in training mode this is set to 1, else 0
Returns:
numpy.ndarray
: Normalized tensor
numpy_flatten
Flatten a tensor into a 2d array.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13.
Args:
x
(numpy.ndarray): tensor to flatten
axis
(int): axis after which all dimensions will be flattened (axis=0 gives a 1D output)
Returns:
result
: flattened tensor
numpy_or
Compute or in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_or_float
Compute or in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_round
Compute round in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11. Note that the ONNX Round operator is actually a rint, since the number of decimals is forced to be 0.
Args:
a
(numpy.ndarray): Input tensor whose elements to be rounded.
Returns:
Tuple[numpy.ndarray]
: Output tensor with rounded input elements.
numpy_pow
Compute pow in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13
Args:
a
(numpy.ndarray): Input tensor whose elements to be raised.
b
(numpy.ndarray): The power to which we want to raise.
Returns:
Tuple[numpy.ndarray]
: Output tensor.
numpy_floor
Compute Floor in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_max
Compute Max in numpy according to ONNX spec.
Computes the max between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_min
Compute Min in numpy according to ONNX spec.
Computes the minimum between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Min-1
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sign
Compute Sign in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_neg
Compute Negative in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Neg-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_concatenate
Apply concatenate in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#concat-13
Args:
*x (numpy.ndarray)
: Input tensors to be concatenated.
axis
(int): Which axis to concat on.
Returns:
Tuple[numpy.ndarray]
: Output tensor.
ONNXMixedFunction
A mixed quantized-raw valued onnx function.
ONNX functions will take inputs which can be either quantized or float. Some functions only take quantized inputs, but some functions take both types. For mixed functions we need to tag the parameters that do not need quantization. Thus quantized ops can know which inputs are not QuantizedArray and we avoid unnecessary wrapping of float values as QuantizedArrays.
__init__
Create the mixed function and raw parameter list.
Args:
function
(Any): function to be decorated
non_quant_params
(Set[str]): set of parameters that will not be quantized (stored as numpy.ndarray)
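For example, a numpy ONNX implementation whose slope parameter should stay as a raw numpy.ndarray could be declared with the onnx_func_raw_args decorator described above (the op itself is hypothetical):

```python
from concrete.ml.onnx.ops_impl import onnx_func_raw_args

@onnx_func_raw_args("slope")
def numpy_example_op(x, slope):
    # x may be quantized by the calling QuantizedOp, while slope is passed
    # through as a raw numpy.ndarray without being wrapped in a QuantizedArray
    return (x * slope,)
```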
concrete.ml.quantization.base_quantized_op
Base Quantized Op class that implements quantization for a float numpy op.
ONNX_OPS_TO_NUMPY_IMPL
ALL_QUANTIZED_OPS
ONNX_OPS_TO_QUANTIZED_IMPL
DEFAULT_MODEL_BITS
QuantizedOp
Base class for quantized ONNX ops implemented in numpy.
Args:
n_bits_output
(int): The number of bits to use for the quantization of the output
int_input_names
(Set[str]): The set of names of integer tensors that are inputs to this op
constant_inputs
(Optional[Union[Dict[str, Any], Dict[int, Any]]]): The constant tensors that are inputs to this op
input_quant_opts
(QuantizationOptions): Input quantizer options, determine the quantization that is applied to input tensors (that are not constants)
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
call_impl
Call self.impl to centralize mypy bug workaround.
Args:
*inputs (numpy.ndarray)
: real valued inputs.
**attrs
: the QuantizedOp attributes.
Returns:
numpy.ndarray
: return value of self.impl
can_fuse
Determine if the operator impedes graph fusion.
This function shall be overloaded by inheriting classes to test self._int_input_names, to determine whether the operation can be fused to a TLU or not. For example an operation that takes inputs produced by a unique integer tensor can be fused to a TLU. Example: f(x) = x * (x + 1) can be fused. A function that does f(x) = x * (x @ w + 1) can't be fused.
Returns:
bool
: whether this instance of the QuantizedOp produces Concrete Numpy code that can be fused to TLUs
must_quantize_input
Determine if an input must be quantized.
Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.
Args:
input_name_or_idx
(int): Index of the input to check.
Returns:
result
(bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.
op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
prepare_output
Quantize the output of the activation function.
The calibrate method needs to be called with sample data before using this function.
Args:
qoutput_activation
(numpy.ndarray): Output of the activation function.
Returns:
QuantizedArray
: Quantized output.
q_impl
Execute the quantized forward.
Args:
*q_inputs (QuantizedArray)
: Quantized inputs.
**attrs
: the QuantizedOp attributes.
Returns:
QuantizedArray
: The returned quantized value.
QuantizedOpUnivariateOfEncrypted
An univariate operator of an encrypted value.
This operation is not really operating as a quantized operation. It is useful when the computations get fused into a TLU, e.g. Act(x) = x || (x + 42).
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
call_impl
Call self.impl to centralize mypy bug workaround.
Args:
*inputs (numpy.ndarray)
: real valued inputs.
**attrs
: the QuantizedOp attributes.
Returns:
numpy.ndarray
: return value of self.impl
can_fuse
Determine if this op can be fused.
This operation can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.
Returns:
bool
: Can fuse
must_quantize_input
Determine if an input must be quantized.
Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.
Args:
input_name_or_idx
(int): Index of the input to check.
Returns:
result
(bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.
op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
prepare_output
Quantize the output of the activation function.
The calibrate method needs to be called with sample data before using this function.
Args:
qoutput_activation
(numpy.ndarray): Output of the activation function.
Returns:
QuantizedArray
: Quantized output.
q_impl
Execute the quantized forward.
Args:
*q_inputs (QuantizedArray)
: Quantized inputs.
**attrs
: the QuantizedOp attributes.
Returns:
QuantizedArray
: The returned quantized value.
QuantizedMixingOp
An operator that mixes (adds or multiplies) together encrypted inputs.
Mixing operators cannot be fused to TLUs.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
call_impl
Call self.impl to centralize mypy bug workaround.
Args:
*inputs (numpy.ndarray)
: real valued inputs.
**attrs
: the QuantizedOp attributes.
Returns:
numpy.ndarray
: return value of self.impl
can_fuse
Determine if this op can be fused.
Mixing operations cannot be fused since they must be performed over integer tensors and they combine different encrypted elements of the input tensors. Mixing operations are Conv, MatMul, etc.
Returns:
bool
: False, this operation cannot be fused as it adds different encrypted integers
make_output_quant_parameters
Build a quantized array from quantized integer results of the op and quantization params.
Args:
q_values
(Union[numpy.ndarray, Any]): the quantized integer values to wrap in the QuantizedArray
scale
(float): the pre-computed scale of the quantized values
zero_point
(Union[int, float, numpy.ndarray]): the pre-computed zero_point of the q_values
Returns:
QuantizedArray
: the quantized array that will be passed to the QuantizedModule output.
must_quantize_input
Determine if an input must be quantized.
Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.
Args:
input_name_or_idx
(int): Index of the input to check.
Returns:
result
(bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.
op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
prepare_output
Quantize the output of the activation function.
The calibrate method needs to be called with sample data before using this function.
Args:
qoutput_activation
(numpy.ndarray): Output of the activation function.
Returns:
QuantizedArray
: Quantized output.
q_impl
Execute the quantized forward.
Args:
*q_inputs (QuantizedArray)
: Quantized inputs.
**attrs
: the QuantizedOp attributes.
Returns:
QuantizedArray
: The returned quantized value.
concrete.ml.sklearn.linear_model
Implement sklearn linear model.
LinearRegression
A linear regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, the value is used for quantizing both inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, giving respectively the number of bits used to quantize the input values and the learned parameters. Default to 8.
For more details on LinearRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
__init__
ElasticNet
An ElasticNet regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, the value is used for quantizing both inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, giving respectively the number of bits used to quantize the input values and the learned parameters. Default to 8.
For more details on ElasticNet please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
__init__
Lasso
A Lasso regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, the value is used for quantizing both inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, giving respectively the number of bits used to quantize the input values and the learned parameters. Default to 8.
For more details on Lasso please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
__init__
Ridge
A Ridge regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, the value is used for quantizing both inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, giving respectively the number of bits used to quantize the input values and the learned parameters. Default to 8.
For more details on Ridge please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
__init__
LogisticRegression
A logistic regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed, the value is used for quantizing both inputs and weights. If a dict is passed, it should contain "op_inputs" and "op_weights" as keys, giving respectively the number of bits used to quantize the input values and the learned parameters. Default to 8.
For more details on LogisticRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
__init__
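These models follow the scikit-learn API. A typical (sketched) flow, assuming x_train, y_train and x_test are available, is:

```python
from concrete.ml.sklearn import LogisticRegression

model = LogisticRegression(n_bits=8)
model.fit(x_train, y_train)

# Inference on the quantized model, in the clear
y_pred_clear = model.predict(x_test)

# Compile to an FHE circuit and run encrypted inference
# (the exact flag name may differ between Concrete-ML versions)
model.compile(x_train)
y_pred_fhe = model.predict(x_test, execute_in_fhe=True)
```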
concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post training quantization.
QuantizedSigmoid
Quantized sigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedHardSigmoid
Quantized HardSigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedRelu
Quantized Relu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedPRelu
Quantized PRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedLeakyRelu
Quantized LeakyRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedHardSwish
Quantized Hardswish op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedElu
Quantized Elu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedSelu
Quantized Selu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedCelu
Quantized Celu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedClip
Quantized clip op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedRound
Quantized round op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedPow
Quantized pow op.
Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedGemm
Quantized Gemm op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
QuantizedMatMul
Quantized MatMul op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
q_impl
QuantizedTanh
Quantized Tanh op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedSoftplus
Quantized Softplus op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedExp
Quantized Exp op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedLog
Quantized Log op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedAbs
Quantized Abs op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedIdentity
Quantized Identity op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
QuantizedReshape
Quantized Reshape op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
Reshape the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional reshape options
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedConv
Quantized Conv op.
__init__
Construct the quantized convolution operator and retrieve parameters.
Args:
n_bits_output
: number of bits for the quantization of the outputs of this operator
int_input_names
: names of integer tensors that are taken as input for this operation
constant_inputs
: the weights and activations
input_quant_opts
: options for the input quantizer
attrs
: convolution options
dilations
(Tuple[int]): dilation of the kernel. Default to 1 on all dimensions.
group
(int): number of convolution groups. Default to 1.
kernel_shape
(Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv
pads
(Tuple[int]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int]): stride of the convolution on each axis
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias.
Args:
q_inputs
: input tuple, contains
x
(numpy.ndarray): input data. Shape is N x C x H x W for 2d
w
(numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b
(numpy.ndarray, Optional): bias tensor, Shape is (O,)
attrs
: convolution options handled in constructor
Returns:
res
(QuantizedArray): result of the quantized integer convolution
QuantizedAvgPool
Quantized Average Pooling op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
QuantizedMaxPool
Quantized Max Pooling op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
The Max Pooling operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation cannot be fused as it combines different encrypted integers
q_impl
QuantizedPad
Quantized Padding op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Pad operation cannot be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation cannot be fused as it manipulates integer tensors
QuantizedWhere
Where operator on quantized arrays.
Supports only constants for the results produced on the True/False branches.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedCast
Cast the input to the required data type.
In FHE we only support a limited number of output types. Booleans are cast to integers.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedGreater
Comparison operator >.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedGreaterOrEqual
Comparison operator >=.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedLess
Comparison operator <.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedLessOrEqual
Comparison operator <=.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedOr
Or operator ||.
This operation is not really working as a quantized operation. It only works when the computations get fused into a TLU, e.g. Act(x) = x || (x + 42).
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedDiv
Div operator /.
This operation is not really working as a quantized operation. It only works when the computations get fused into a TLU, e.g. Act(x) = 1000 / (x + 42).
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedMul
Multiplication operator.
Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedSub
Subtraction operator.
This works the same way as addition, both for encrypted - encrypted and encrypted - constant operands.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
This operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
q_impl
QuantizedBatchNormalization
Quantized Batch normalization with encrypted input and in-the-clear normalization params.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedFlatten
Quantized flatten for encrypted inputs.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Flatten operation cannot be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation cannot be fused as it manipulates integer tensors.
q_impl
Flatten the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0
attrs
: contains axis attribute
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedReduceSum
ReduceSum with encrypted input.
__init__
Construct the quantized ReduceSum operator and retrieve parameters.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs.
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
axes
(Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict): ReduceSum options.
keepdims
(int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.
noop_with_empty_axes
(int): Defines the behavior if 'axes' is empty or set to None. The default behavior (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1, the input tensor is not reduced and the output tensor is equivalent to the input tensor. Default to 0.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: The output values for the provided calibration samples.
q_impl
Sum the encrypted tensor's values along the given axes.
Args:
q_inputs
(QuantizedArray): An encrypted integer tensor at index 0.
attrs
(Dict): Options are handled in constructor.
Returns:
(QuantizedArray)
: The sum of all values along the given axes.
QuantizedErf
Quantized erf op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedNot
Quantized Not op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedBrevitasQuant
Brevitas uniform quantization with encrypted input.
__init__
Construct the Brevitas quantization operator.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
scale
(float): Quantizer scale
zero_point
(float): Quantizer zero-point
bit_width
(int): Number of bits of the integer representation
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict):
rounding_mode
(str): Rounding mode (default and only accepted option is "ROUND")
signed
(int): Whether this op quantizes to signed integers (default 1),
narrow
(int): Whether this op quantizes to a narrow range of integers, e.g. [-(2^(n_bits-1) - 1) .. 2^(n_bits-1) - 1] (default 0),
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of Quantization function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
q_impl
Quantize values.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional quantization options
Returns:
result
(QuantizedArray): quantized encrypted integer tensor
QuantizedTranspose
Transpose operator for quantized inputs.
This operator performs quantization, transposes the encrypted data, then dequantizes again.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
Transpose the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional transpose options
Returns:
result
(QuantizedArray): transposed encrypted integer tensor
QuantizedFloor
Quantized Floor op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedMax
Quantized Max op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedMin
Quantized Min op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedNeg
Quantized Neg op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedSign
Quantized Sign op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
QuantizedUnsqueeze
Unsqueeze operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
Unsqueeze the input tensors on a given axis.
Args:
q_inputs
: an encrypted integer tensor
attrs
: additional optional unsqueeze options
Returns:
result
(QuantizedArray): unsqueezed encrypted integer tensor
QuantizedConcat
Concatenate operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
List[str]
: the names of the tensors
q_impl
Concatenate the input tensors on a given axis.
Args:
q_inputs
: an encrypted integer tensor
attrs
: additional optional concatenate options
Returns:
result
(QuantizedArray): concatenated encrypted integer tensor
concrete.ml.sklearn.base
Module that contains base classes for our library's estimators.
OPSET_VERSION_FOR_ONNX_EXPORT
get_sklearn_models
Return the list of available models in Concrete-ML.
Returns: the lists of models in Concrete-ML
get_sklearn_linear_models
Return the list of available linear models in Concrete-ML.
Args:
classifier
(bool): whether you want classifiers or not
regressor
(bool): whether you want regressors or not
str_in_class_name
(str): if not None, only return models with this as a substring in the class name
Returns: the lists of linear models in Concrete-ML
get_sklearn_tree_models
Return the list of available tree models in Concrete-ML.
Args:
classifier
(bool): whether you want classifiers or not
regressor
(bool): whether you want regressors or not
str_in_class_name
(str): if not None, only return models with this as a substring in the class name
Returns: the lists of tree models in Concrete-ML
get_sklearn_neural_net_models
Return the list of available neural net models in Concrete-ML.
Args:
classifier
(bool): whether you want classifiers or not
regressor
(bool): whether you want regressors or not
str_in_class_name
(str): if not None, only return models with this as a substring in the class name
Returns: the lists of neural net models in Concrete-ML
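A minimal sketch of how these helpers can be used to enumerate the available model classes, assuming they are imported from concrete.ml.sklearn.base as documented above; the exact return structure (list vs. dict of lists) should be checked against the installed version.

```python
# Hedged sketch: enumerate the built-in model classes with the helpers above.
from concrete.ml.sklearn.base import (
    get_sklearn_linear_models,
    get_sklearn_models,
    get_sklearn_neural_net_models,
    get_sklearn_tree_models,
)

print(get_sklearn_models())
print(get_sklearn_linear_models(classifier=True, regressor=False))
print(get_sklearn_tree_models(str_in_class_name="Random"))
print(get_sklearn_neural_net_models())
```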
QuantizedTorchEstimatorMixin
Mixin that provides quantization for a torch module and follows the Estimator API.
This class should be mixed in with another that provides the full Estimator API. This class only provides modifiers for .fit() (with quantization) and .predict() (optionally in FHE).
__init__
property base_estimator_type
Get the sklearn estimator that should be trained by the child class.
property base_module_to_compile
Get the Torch module that should be compiled to FHE.
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Get the number of quantization bits.
property onnx_model
Get the ONNX model.
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: the compiled Circuit.
Raises:
ValueError
: if called before the model is trained
fit
Initialize and fit the module.
If the module was already initialized, calling fit will re-initialize it (unless warm_start is True). In addition to the torch training step, this method performs quantization of the trained torch model.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training, these are passed to the torch training interface
Returns:
self
: the trained quantized estimator
fit_benchmark
Fit the quantized estimator as well as its equivalent float estimator.
This function returns both the quantized estimator (itself) and its non-quantized (float) equivalent, which are both trained separately. This is useful for comparing the performance of the quantized and fp32 versions.
Args:
X
: The training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): The labels associated with the training data
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns:
self
: The trained quantized estimator
fp32_model
: The trained float equivalent estimator
get_params_for_benchmark
Get the parameters to instantiate the sklearn estimator trained by the child class.
Returns:
params
(dict): dictionary with parameters that will initialize a new Estimator
post_processing
Post-processing the output.
Args:
y_preds
(numpy.ndarray): the output to post-process
Raises:
ValueError
: if unknown post-processing function
Returns:
numpy.ndarray
: the post-processed output
predict
Predict on user provided data.
Predicts using the quantized clear or FHE classifier
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with predictions
predict_proba
Predict on user provided data, returning probabilities.
Predicts using the quantized clear or FHE classifier
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with probabilities (if applicable)
Raises:
ValueError
: if the estimator was not yet trained or compiled
BaseTreeEstimatorMixin
Mixin class for tree-based estimators.
A place to share methods that are used on all tree-based estimators.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so-called virtual lib, which simulates FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
BaseTreeRegressorMixin
Mixin class for tree-based regressors.
A place to share methods that are used on all tree-based regressors.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so-called virtual lib, which simulates FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): The target data.
**kwargs
: args for super().fit
Returns:
Any
: The fitted model.
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict the probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted probabilities.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
BaseTreeClassifierMixin
Mixin class for tree-based classifiers.
A place to share methods that are used on all tree-based classifiers.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so-called virtual lib, which simulates FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): The target data.
**kwargs
: args for super().fit
Returns:
Any
: The fitted model.
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict the class with highest probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted target values.
predict_proba
Predict the probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted probabilities.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
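As a usage sketch for the tree-based mixins above, the following assumes concrete.ml.sklearn exposes a DecisionTreeClassifier built on BaseTreeClassifierMixin; the data and n_bits value are illustrative.

```python
# Hedged sketch of the tree-based workflow: fit -> compile -> predict.
import numpy
from concrete.ml.sklearn import DecisionTreeClassifier

X = numpy.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(numpy.int64)

model = DecisionTreeClassifier(n_bits=6)
model.fit(X, y)

# use_virtual_lib=True simulates FHE execution, as documented for compile().
model.compile(X, use_virtual_lib=True)

y_clear = model.predict(X[:5])
y_fhe = model.predict(X[:5], execute_in_fhe=True)
```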
SklearnLinearModelMixin
A Mixin class for sklearn linear models with FHE.
__init__
Initialize the FHE linear model.
Args:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
clean_graph
Clean the graph of the onnx model.
This removes the Cast nodes from the model's ONNX graph, since they have no use in quantized or FHE models.
compile
Compile the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): If set, the MLIR produced by the converter, which is sent to the compiler backend, is shown on the screen, e.g., for debugging or demo purposes. Defaults to False.
use_virtual_lib
(bool): Whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False
p_error
(Optional[float]): Probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: The compiled Circuit.
dequantize_output
Dequantize the output.
Args:
q_y_preds
(numpy.ndarray): The quantized output to dequantize
Returns:
numpy.ndarray
: The dequantized output
fit
Fit the FHE linear model.
Args:
X
: Training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns: Any
fit_benchmark
Fit the sklearn linear model and the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: The arguments to pass to the sklearn linear model and to super().fit
**kwargs
: Keyword arguments for super().fit
Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.
post_processing
Post-processing the quantized output.
For linear models, post-processing consists only of a dequantization step.
Args:
y_preds
(numpy.ndarray): The quantized outputs to post-process
Returns:
numpy.ndarray
: The post-processed output
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data
execute_in_fhe
(bool): Whether to execute the inference in FHE
Returns:
numpy.ndarray
: The prediction as ordinals
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): The input to quantize
Returns:
numpy.ndarray
: The quantized input
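A sketch of the linear-model workflow described by SklearnLinearModelMixin, assuming concrete.ml.sklearn provides a LinearRegression built on this mixin; the dict form of n_bits follows the documentation above.

```python
# Hedged sketch: quantize, fit, compile and run a linear model in FHE.
import numpy
from concrete.ml.sklearn import LinearRegression

X = numpy.random.rand(200, 3)
y = X @ numpy.array([1.0, -2.0, 0.5]) + 0.1

# n_bits can be an int or a dict with "op_inputs"/"op_weights".
model = LinearRegression(n_bits={"op_inputs": 8, "op_weights": 8})
model.fit(X, y)
model.compile(X)

y_pred_fhe = model.predict(X[:3], execute_in_fhe=True)
```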
SklearnLinearClassifierMixin
A Mixin class for sklearn linear classifiers with FHE.
__init__
Initialize the FHE linear model.
Args:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
clean_graph
Clean the graph of the onnx model.
Any operators following gemm, including the sigmoid, softmax and argmax operators, are removed from the graph. They will be executed in clear in the post-processing method.
compile
Compile the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): If set, the MLIR produced by the converter, which is sent to the compiler backend, is shown on the screen, e.g., for debugging or demo purposes. Defaults to False.
use_virtual_lib
(bool): Whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False
p_error
(Optional[float]): Probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: The compiled Circuit.
decision_function
Predict confidence scores for samples.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: Confidence scores for samples.
dequantize_output
Dequantize the output.
Args:
q_y_preds
(numpy.ndarray): The quantized output to dequantize
Returns:
numpy.ndarray
: The dequantized output
fit
Fit the FHE linear model.
Args:
X
: Training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns: Any
fit_benchmark
Fit the sklearn linear model and the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: The arguments to pass to the sklearn linear model and to super().fit
**kwargs
: Keyword arguments for super().fit
Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.
post_processing
Post-processing the predictions.
This step may include a dequantization of the inputs if not done previously, in particular within the client-server workflow.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: The prediction as ordinals.
predict_proba
Predict class probabilities for samples.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: Class probabilities for samples.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): The input to quantize
Returns:
numpy.ndarray
: The quantized input
concrete.ml.quantization.post_training
Post Training Quantization methods.
ONNX_OPS_TO_NUMPY_IMPL
DEFAULT_MODEL_BITS
ONNX_OPS_TO_QUANTIZED_IMPL
get_n_bits_dict
Convert the n_bits parameter into a proper dictionary.
Args:
n_bits
(int, Dict[str, int]): number of bits for quantization, can be a single value or a dictionary with the following keys : - "op_inputs" and "op_weights" (mandatory) - "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.
Returns:
n_bits_dict
(Dict[str, int]): A dictionary properly representing the number of bits to use for quantization.
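A small sketch of how the n_bits parameter is normalized, based on the description above; the expected output is inferred from that description and should be confirmed against the installed version.

```python
# Hedged sketch of n_bits normalization.
from concrete.ml.quantization.post_training import get_n_bits_dict

# A single integer is assigned to "op_inputs"/"op_weights"; "model_inputs"
# and "model_outputs" get max(n_bits, 5), per the description above.
print(get_n_bits_dict(3))
# Expected (to be confirmed):
# {"op_inputs": 3, "op_weights": 3, "model_inputs": 5, "model_outputs": 5}

# A dict must contain at least the mandatory "op_inputs" and "op_weights".
print(get_n_bits_dict({"op_inputs": 2, "op_weights": 2}))
```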
ONNXConverter
Base ONNX to Concrete ML computation graph conversion class.
This class provides a method to parse an ONNX graph and apply several transformations. First, it creates QuantizedOps for each ONNX graph op. These quantized ops have calibrated quantizers that are useful when the operators work on integer data or when the output of the ops is the output of the encrypted program. For operators that compute in float and will be merged to TLUs, these quantizers are not used. Second, this converter creates quantized tensors for initializer and weights stored in the graph.
This class should be sub-classed to provide specific calibration and quantization options depending on the usage (Post-training quantization vs Quantization Aware training).
Arguments:
n_bits
(int, Dict[str, int]): number of bits for quantization, can be a single value or a dictionary with the following keys : - "op_inputs" and "op_weights" (mandatory) - "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.
numpy_model
(NumpyModule): Model in numpy.
is_signed
(bool): Whether the weights of the layers can be signed. Currently, only the weights can be signed.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
PostTrainingAffineQuantization
Post-training Affine Quantization.
Create the quantized version of the passed numpy module.
Args:
n_bits
(int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, then it should contain "model_inputs", "op_inputs", "op_weights" and "model_outputs" keys with corresponding number of quantization bits for: - model_inputs : number of bits for model input - op_inputs : number of bits to quantize layer input values - op_weights: learned parameters or constants in the network - model_outputs: final model output quantization bits
numpy_model
(NumpyModule): Model in numpy.
is_signed
: Whether the weights of the layers can be signed. Currently, only the weights can be signed.
Returns:
QuantizedModule
: A quantized version of the numpy model.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
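A sketch of post-training quantization with this class. Here numpy_model (a NumpyModule wrapping the trained network) and calibration_data (a representative numpy array) are placeholders that must be provided by the caller; n_bits=7 is illustrative.

```python
# Hedged sketch of post-training affine quantization.
from concrete.ml.quantization.post_training import PostTrainingAffineQuantization

# `numpy_model` and `calibration_data` are placeholders (see lead-in).
post_training = PostTrainingAffineQuantization(7, numpy_model)
quantized_module = post_training.quantize_module(calibration_data)

# The resulting QuantizedModule can then be compiled and executed
# (see concrete.ml.quantization.quantized_module below).
```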
PostTrainingQATImporter
Converter of Quantization Aware Training networks.
This class provides specific configuration for QAT networks during ONNX network conversion to Concrete ML computation graphs.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
concrete.ml.quantization.quantized_module
QuantizedModule API.
QuantizedModule
Inference for a quantized model.
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property is_compiled
Return the compiled status of the module.
Returns:
bool
: the compiled status of the module.
property onnx_model
Get the ONNX model.
Returns:
_onnx_model
(onnx.ModelProto): the ONNX model
property post_processing_params
Get the post-processing parameters.
Returns:
Dict[str, Any]
: the post-processing parameters
compile
Compile the forward function of the module.
Args:
q_inputs
(Union[Tuple[numpy.ndarray, ...], numpy.ndarray]): Needed for tracing and building the boundaries.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter, which is sent to the compiler backend, is shown on the screen, e.g., for debugging or demo purposes. Defaults to False.
use_virtual_lib
(bool): set to True to use the so-called virtual lib, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a single PBS.
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Take the last layer q_out and use its dequant function.
Args:
qvalues
(numpy.ndarray): Quantized values of the last layer.
Returns:
numpy.ndarray
: Dequantized values of the last layer.
forward
Forward pass with numpy function only.
Args:
*qvalues (numpy.ndarray)
: numpy.array containing the quantized values.
debug
(bool): In debug mode, returns quantized intermediary values of the computation. This is useful when a model's intermediary values in Concrete-ML need to be compared with the intermediary values obtained in pytorch/onnx. When set, the second return value is a dictionary containing ONNX operation names as keys and, as values, their input QuantizedArray or ndarray. The user can thus extract the quantized or float values of quantized inputs.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
forward_and_dequant
Forward pass with numpy function only plus dequantization.
Args:
*q_x (numpy.ndarray)
: numpy.ndarray containing the quantized input values. Requires the input dtype to be int64.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
post_processing
Post-processing of the quantized output.
Args:
qvalues
(numpy.ndarray): numpy.ndarray containing the quantized input values.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
quantize_input
Take the fp32 inputs and quantize them using the learned quantization parameters.
Args:
*values (numpy.ndarray)
: Floating point values.
Returns:
Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]
: Quantized (numpy.int64) values.
set_inputs_quantization_parameters
Set the quantization parameters for the module's inputs.
Args:
*input_q_params (UniformQuantizer)
: The quantizer(s) for the module.
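A sketch of running an already-built QuantizedModule with the methods documented above; quantized_module and x_clear (a float numpy array) are placeholders.

```python
# Hedged sketch of QuantizedModule inference. `quantized_module` and
# `x_clear` are placeholders (see lead-in).
q_x = quantized_module.quantize_input(x_clear)

# Forward pass on quantized values, then dequantize the result...
q_y = quantized_module.forward(q_x)
y = quantized_module.dequantize_output(q_y)

# ...or do both steps at once.
y = quantized_module.forward_and_dequant(q_x)

# Compiling enables FHE execution (or simulation with use_virtual_lib=True).
circuit = quantized_module.compile(q_x, use_virtual_lib=True)
```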
concrete.ml.quantization.quantizers
Quantization utilities for a numpy array/tensor.
STABILITY_CONST
fill_from_kwargs
Fill a parameter set structure from kwargs parameters.
Args:
obj
: an object of type klass; if None, the object is created, provided any of the type's members appear in the kwargs
klass
: the type of object to fill
kwargs
: parameter names and values to fill into an instance of the klass type
Returns:
obj
: an object of type klass
kwargs
: remaining parameter names and values that were not filled into obj
Raises:
TypeError
: if the types of the parameters in kwargs could not be converted to the corresponding types of members of klass
QuantizationOptions
Options for quantization.
Determines the number of bits for quantization and the method of quantization of the values. Signed quantization allows negative quantized values. Symmetric quantization assumes the float values are distributed symmetrically around x=0 and assigns signed values around 0 to the float values. QAT (quantization aware training) quantization assumes the values are already quantized, taking a discrete set of values, and assigns these values to integers, computing only the scale.
__init__
property quant_options
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
copy_opts
Copy the options from a different structure.
Args:
opts
(QuantizationOptions): structure to copy parameters from.
is_equal
Compare two quantization options sets.
Args:
opts
(QuantizationOptions): options to compare this instance to
ignore_sign_qat
(bool): ignore sign comparison for QAT options
Returns:
bool
: whether the two quantization options compared are equivalent
MinMaxQuantizationStats
Calibration set statistics.
This class stores the statistics for the calibration set or for a calibration data batch. Currently we only store min/max to determine the quantization range. The min/max are computed from the calibration set.
property quant_stats
Get a copy of the calibration set statistics.
Returns:
MinMaxQuantizationStats
: a copy of the current quantization stats
check_is_uniform_quantized
Check if these statistics correspond to uniformly quantized values.
Determines whether the values represented by this QuantizedArray show a quantized structure that makes it possible to infer the quantization scale.
Args:
options
(QuantizationOptions): used to quantize the values in the QuantizedArray
Returns:
bool
: check result.
compute_quantization_stats
Compute the calibration set quantization statistics.
Args:
values
(numpy.ndarray): Calibration set on which to compute statistics.
copy_stats
Copy the statistics from a different structure.
Args:
stats
(MinMaxQuantizationStats): structure to copy statistics from.
UniformQuantizationParameters
Quantization parameters for uniform quantization.
This class stores the parameters used for quantizing real values to discrete integer values. The parameters are computed from quantization options and quantization statistics.
property quant_params
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
compute_quantization_parameters
Compute the quantization parameters.
Args:
options
(QuantizationOptions): quantization options set
stats
(MinMaxQuantizationStats): calibrated statistics for quantization
copy_params
Copy the parameters from a different structure.
Args:
params
(UniformQuantizationParameters): parameter structure to copy
UniformQuantizer
Uniform quantizer.
Contains all information necessary for uniform quantization and provides quantization/dequantization functionality on numpy arrays.
Args:
options
(QuantizationOptions): Quantization options set
stats
(Optional[MinMaxQuantizationStats]): Quantization batch statistics set
params
(Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)
__init__
property quant_options
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
property quant_params
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
property quant_stats
Get a copy of the calibration set statistics.
Returns:
MinMaxQuantizationStats
: a copy of the current quantization stats
check_is_uniform_quantized
Check if these statistics correspond to uniformly quantized values.
Determines whether the values represented by this QuantizedArray show a quantized structure that makes it possible to infer the quantization scale.
Args:
options
(QuantizationOptions): used to quantize the values in the QuantizedArray
Returns:
bool
: check result.
compute_quantization_parameters
Compute the quantization parameters.
Args:
options
(QuantizationOptions): quantization options set
stats
(MinMaxQuantizationStats): calibrated statistics for quantization
compute_quantization_stats
Compute the calibration set quantization statistics.
Args:
values
(numpy.ndarray): Calibration set on which to compute statistics.
copy_opts
Copy the options from a different structure.
Args:
opts
(QuantizationOptions): structure to copy parameters from.
copy_params
Copy the parameters from a different structure.
Args:
params
(UniformQuantizationParameters): parameter structure to copy
copy_stats
Copy the statistics from a different structure.
Args:
stats
(MinMaxQuantizationStats): structure to copy statistics from.
dequant
Dequantize values.
Args:
qvalues
(numpy.ndarray): integer values to dequantize
Returns:
numpy.ndarray
: Dequantized float values.
is_equal
Compare two quantization options sets.
Args:
opts
(QuantizationOptions): options to compare this instance to
ignore_sign_qat
(bool): ignore sign comparison for QAT options
Returns:
bool
: whether the two quantization options compared are equivalent
quant
Quantize values.
Args:
values
(numpy.ndarray): float values to quantize
Returns:
numpy.ndarray
: Integer quantized values.
QuantizedArray
Abstraction of quantized array.
Contains float values and their quantized integer counterparts. Quantization is performed by the quantizer member object. Float and int values are kept in sync. Having both types of values is useful since quantized operators in Concrete ML graphs might need one or the other depending on how the operator works (in float or in int). Moreover, when the encrypted function needs to return a value, it must return integer values.
See https://arxiv.org/abs/1712.05877.
Args:
values
(numpy.ndarray): Values to be quantized.
n_bits
(int): The number of bits to use for quantization.
value_is_float
(bool, optional): Whether the passed values are real (float) values or not. If False, the values will be quantized according to the passed scale and zero_point. Defaults to True.
options
(QuantizationOptions): Quantization options set
stats
(Optional[MinMaxQuantizationStats]): Quantization batch statistics set
params
(Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)
kwargs
: Any member of the options, stats, params sets as a key-value pair. The parameter sets need to be completely parametrized if their members appear in kwargs.
__init__
dequant
Dequantize self.qvalues.
Returns:
numpy.ndarray
: Dequantized values.
quant
Quantize self.values.
Returns:
numpy.ndarray
: Quantized values.
update_quantized_values
Update qvalues to get their corresponding values using the related quantized parameters.
Args:
qvalues
(numpy.ndarray): Values to replace self.qvalues
Returns:
values
(numpy.ndarray): Corresponding values
update_values
Update values to get their corresponding qvalues using the related quantized parameters.
Args:
values
(numpy.ndarray): Values to replace self.values
Returns:
qvalues
(numpy.ndarray): Corresponding qvalues
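A sketch of QuantizedArray round-tripping, assuming the constructor takes n_bits then values, as suggested by the argument list above.

```python
# Hedged sketch of quantizing and dequantizing a numpy array.
import numpy
from concrete.ml.quantization.quantizers import QuantizedArray

values = numpy.array([-1.0, 0.0, 0.5, 1.0])
q_arr = QuantizedArray(2, values)  # 2-bit uniform quantization

print(q_arr.qvalues)    # integer representation
print(q_arr.dequant())  # approximate float reconstruction
```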
concrete.ml.sklearn.glm
Implement sklearn's Generalized Linear Models (GLM).
PoissonRegressor
A Poisson regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
For more details on PoissonRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.PoissonRegressor.html
__init__
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
GammaRegressor
A Gamma regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
For more details on GammaRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.GammaRegressor.html
__init__
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
TweedieRegressor
A Tweedie regression model with FHE.
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
For more details on TweedieRegressor please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.TweedieRegressor.html
__init__
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
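A sketch of the GLM workflow, assuming PoissonRegressor is importable from concrete.ml.sklearn; the data and n_bits value are illustrative.

```python
# Hedged sketch: a Poisson regression fitted in the clear, then run in FHE.
import numpy
from concrete.ml.sklearn import PoissonRegressor

X = numpy.random.rand(100, 2)
y = numpy.random.poisson(lam=numpy.exp(X[:, 0]))

model = PoissonRegressor(n_bits=8)
model.fit(X, y)
model.compile(X)

y_pred = model.predict(X[:5], execute_in_fhe=True)
```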
concrete.ml.sklearn.svm
Implement Support Vector Machine.
LinearSVR
A Regression Support Vector Machine (SVM).
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
For more details on LinearSVR please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVR.html
__init__
LinearSVC
A Classification Support Vector Machine (SVM).
Parameters:
n_bits
(int, Dict[str, int]): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for quantizing inputs and weights. If a dict is passed, then it should contain "op_inputs" and "op_weights" as keys with corresponding number of quantization bits so that: - op_inputs : number of bits to quantize the input values - op_weights: number of bits to quantize the learned parameters Default to 8.
For more details on LinearSVC please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html
__init__
concrete.ml.sklearn.qnn
Scikit-learn interface for concrete quantized neural networks.
MAX_BITWIDTH_BACKWARD_COMPATIBLE
SparseQuantNeuralNetImpl
Sparse Quantized Neural Network classifier.
This class implements an MLP that is compatible with FHE constraints. The weights and activations are quantized to low bit-width, and pruning is used to ensure accumulators do not surpass a user-provided accumulator bit-width. The number of classes, the number of layers, and the breadth of the network are specified by the user.
__init__
Sparse Quantized Neural Network constructor.
Args:
input_dim
: Number of dimensions of the input data
n_layers
: Number of linear layers for this network
n_outputs
: Number of output classes or regression targets
n_w_bits
: Number of weight bits
n_a_bits
: Number of activation and input bits
n_accum_bits
: Maximal allowed bitwidth of intermediate accumulators
n_hidden_neurons_multiplier
: A factor that is multiplied by the maximal number of active (non-zero weight) neurons for every layer. The maximal number of active neurons in the worst-case scenario is max_active_neurons(n_max, n_w, n_a) = floor((2^n_max - 1) / ((2^n_w - 1) * (2^n_a - 1))). The worst-case scenario for the bitwidth of the accumulator occurs when all weights and activations are at their maximum simultaneously. For each layer, the total number of neurons is set to n_hidden_neurons_multiplier * max_active_neurons(n_accum_bits, n_w_bits, n_a_bits). Through experiments, for typical distributions of weights and activations, the default value of 4 for n_hidden_neurons_multiplier is safe to avoid overflow.
activation_function
: a torch class that is used to construct activation functions in the network (e.g. torch.ReLU, torch.SELU, torch.Sigmoid, etc)
quant_narrow
: whether this network should use narrow range quantized integer values
quant_signed
: whether to use signed quantized integer values
Raises:
ValueError
: if the parameters have invalid values or the computed accumulator bitwidth is zero
enable_pruning
Enable pruning in the network. Pruning must be made permanent to recover pruned weights.
Raises:
ValueError
: if the quantization parameters are invalid
forward
Forward pass.
Args:
x
(torch.Tensor): network input
Returns:
x
(torch.Tensor): network prediction
make_pruning_permanent
Make the learned pruning permanent in the network.
max_active_neurons
Compute the maximum number of active (non-zero weight) neurons.
The computation is done using the quantization parameters passed to the constructor. Warning: With the current quantization algorithm (asymmetric) the value returned by this function is not guaranteed to ensure FHE compatibility. For some weight distributions, weights that are 0 (which are pruned weights) will not be quantized to 0. Therefore the total number of active quantized neurons will not be equal to max_active_neurons.
Returns:
n
(int): maximum number of active neurons
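A worked re-computation of the accumulator bound described above, written as plain Python rather than a call into the library.

```python
# Worked example of the max_active_neurons bound:
# floor((2^n_accum - 1) / ((2^n_w - 1) * (2^n_a - 1))).
import math

def max_active_neurons(n_accum_bits: int, n_w_bits: int, n_a_bits: int) -> int:
    return math.floor(
        (2 ** n_accum_bits - 1) / ((2 ** n_w_bits - 1) * (2 ** n_a_bits - 1))
    )

# With 8-bit accumulators and 3-bit weights/activations: floor(255 / 49) = 5.
print(max_active_neurons(8, 3, 3))
```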
on_train_end
Call back when training is finished, can be useful to remove training hooks.
QuantizedSkorchEstimatorMixin
Mixin class that adds quantization features to Skorch NN estimators.
property base_estimator_type
Get the sklearn estimator that should be trained by the child class.
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus, this estimator needs to be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised
property onnx_model
Get the ONNX model.
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. For this to work, we need to remove all module-related constructor params; otherwise, skorch will instantiate a new class instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward method of the module and to the self.train_split call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
FixedTypeSkorchNeuralNet
A mixin with a helpful modification to a skorch estimator that fixes the module type.
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
NeuralNetClassifier
Scikit-learn interface for quantized FHE compatible neural networks.
This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
The datatypes that are allowed for prediction by this wrapper are more restricted than for standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
__init__
property base_estimator_type
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property classes_
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.Module instance and thus cannot be retrieved until that instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. The estimator must therefore be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance, but for this to work we need to remove all module-related constructor parameters. Otherwise, skorch will instantiate a new instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward
method of the module and to the self.train_split
call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
predict
Predict on user provided data.
Predicts using the quantized classifier, either in the clear or in FHE.
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with predictions
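For illustration, a minimal usage sketch of this estimator is given below. The constructor parameters shown (module__n_layers, module__input_dim, module__n_outputs, max_epochs) are assumptions following the skorch module__ convention and may differ in your version; fit, compile, and predict with execute_in_fhe are documented above.

```python
# Hedged sketch, not a definitive example: constructor parameter names are assumptions.
import numpy
from concrete.ml.sklearn import NeuralNetClassifier  # assumed import path

X = numpy.random.uniform(size=(100, 10)).astype(numpy.float32)
y = numpy.random.randint(0, 2, size=(100,)).astype(numpy.int64)

model = NeuralNetClassifier(
    module__n_layers=2,            # assumed: number of fully-connected layers
    module__input_dim=X.shape[1],  # assumed: input dimension of the network
    module__n_outputs=2,           # assumed: number of output classes
    max_epochs=10,                 # standard skorch training parameter
)
model.fit(X, y)

# Compile to an FHE circuit; the Virtual Library gives a fast simulation of FHE
model.compile(X, use_virtual_lib=True)

# Predict in the clear (execute_in_fhe=True would run encrypted inference)
y_pred = model.predict(X, execute_in_fhe=False)
```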
NeuralNetRegressor
Scikit-learn interface for quantized FHE compatible neural networks.
This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
The datatypes allowed for prediction by this wrapper are more restricted than for standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
__init__
property base_estimator_type
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.Module instance and thus cannot be retrieved until that instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. The estimator must therefore be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance, but for this to work we need to remove all module-related constructor parameters. Otherwise, skorch will instantiate a new instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward
method of the module and to the self.train_split
call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
concrete.ml.sklearn.rf
Implements RandomForest models.
RandomForestClassifier
Implements the RandomForest classifier.
__init__
Initialize the RandomForestClassifier.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
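For illustration, here is a sketch of the typical workflow with this classifier; the sklearn-style constructor parameters (n_estimators, max_depth) and the import path are assumptions, while fit, compile, and predict follow the estimator protocols documented in this reference.

```python
# Hedged sketch of the tree-model workflow: fit on clear data, compile, then predict.
import numpy
from concrete.ml.sklearn import RandomForestClassifier  # assumed import path

X = numpy.random.uniform(size=(100, 4))
y = numpy.random.randint(0, 2, size=(100,))

clf = RandomForestClassifier(n_estimators=10, max_depth=4)  # assumed sklearn-style params
clf.fit(X, y)

# Compile to an FHE circuit; use the Virtual Library for fast simulated evaluation
clf.compile(X, use_virtual_lib=True)

y_pred = clf.predict(X, execute_in_fhe=False)
```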
RandomForestRegressor
Implements the RandomForest regressor.
__init__
Initialize the RandomForestRegressor.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
concrete.ml.sklearn.tree
Implements the sklearn tree models.
DecisionTreeClassifier
Implements the sklearn DecisionTreeClassifier.
__init__
Initialize the DecisionTreeClassifier.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
DecisionTreeRegressor
Implements the sklearn DecisionTreeRegressor.
__init__
Initialize the DecisionTreeRegressor.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
concrete.ml.sklearn.protocols
Protocols.
Protocols are used to mix type hinting with duck-typing. Indeed, we don't always want to have an abstract parent class shared by all objects; we are more interested in their behavior. Implementing a Protocol is a way to specify the expected behavior of objects.
To read more about Protocol please read: https://peps.python.org/pep-0544
Quantizer
Quantizer Protocol.
To use to type hint a quantizer.
dequant
Dequantize some values.
Args:
X
(numpy.ndarray): Values to dequantize
.. # noqa: DAR202
Returns:
numpy.ndarray
: Dequantized values
quant
Quantize some values.
Args:
values
(numpy.ndarray): Values to quantize
.. # noqa: DAR202
Returns:
numpy.ndarray
: The quantized values
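To illustrate how a Protocol is used for type hinting with duck-typing, here is a sketch with a hypothetical ToyQuantizer that is not part of the library; the import path for Quantizer is assumed from the module name above.

```python
import numpy
from concrete.ml.sklearn.protocols import Quantizer  # assumed import path

class ToyQuantizer:
    """Hypothetical quantizer that satisfies the Quantizer Protocol by duck-typing."""

    def __init__(self, scale: float):
        self.scale = scale

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        # Map float values to integers
        return numpy.round(values / self.scale).astype(numpy.int64)

    def dequant(self, X: numpy.ndarray) -> numpy.ndarray:
        # Map integers back to approximate float values
        return X * self.scale

def roundtrip(quantizer: Quantizer, values: numpy.ndarray) -> numpy.ndarray:
    # Any object exposing quant/dequant type-checks against the Protocol
    return quantizer.dequant(quantizer.quant(values))

print(roundtrip(ToyQuantizer(0.1), numpy.array([0.05, 0.42, 1.0])))
```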
ConcreteBaseEstimatorProtocol
A Concrete Estimator Protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a single PBS
global_p_error
(float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
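As an illustration of the fit_benchmark contract above, the sketch below fits a concrete estimator and compares it with the returned fp32 reference; DecisionTreeClassifier is used only as an example implementer, and its import path and max_depth parameter are assumptions.

```python
# Hedged sketch: compare the quantized estimator with its fp32 counterpart.
import numpy
from sklearn.metrics import accuracy_score
from concrete.ml.sklearn import DecisionTreeClassifier  # assumed import path

X = numpy.random.uniform(size=(200, 6))
y = numpy.random.randint(0, 2, size=(200,))

# fit_benchmark returns (self, fitted) and the underlying fp32 estimator
quantized_model, fp32_model = DecisionTreeClassifier(max_depth=4).fit_benchmark(X, y)

print("quantized accuracy:", accuracy_score(y, quantized_model.predict(X)))
print("fp32 accuracy:", accuracy_score(y, fp32_model.predict(X)))
```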
ConcreteBaseClassifierProtocol
Concrete classifier protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a single PBS
global_p_error
(float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
predict
Predicts for each sample the class with highest probability.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in FHE or not.
.. # noqa: DAR202
Returns: numpy.ndarray
predict_proba
Predicts for each sample the probability of each class.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in FHE or not.
.. # noqa: DAR202
Returns: numpy.ndarray
ConcreteBaseRegressorProtocol
Concrete regressor protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a single PBS
global_p_error
(float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
predict
Predicts for each sample the expected value.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in FHE or not.
.. # noqa: DAR202
Returns: numpy.ndarray
concrete.ml.sklearn.xgb
Implements XGBoost models.
XGBClassifier
Implements the XGBoost classifier.
__init__
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
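A usage sketch for this classifier follows; the n_bits, n_estimators, and max_depth constructor parameters and the import path are assumptions, while compile and predict_proba follow the protocols documented in this reference.

```python
# Hedged sketch: train on clear data, compile, then get class probabilities.
import numpy
from concrete.ml.sklearn import XGBClassifier  # assumed import path

X = numpy.random.uniform(size=(100, 8))
y = numpy.random.randint(0, 2, size=(100,))

clf = XGBClassifier(n_bits=6, n_estimators=20, max_depth=3)  # assumed parameters
clf.fit(X, y)
clf.compile(X, use_virtual_lib=True)

# Probabilities are computed on quantized values and post-processed in the clear
probas = clf.predict_proba(X, execute_in_fhe=False)
```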
XGBRegressor
Implements the XGBoost regressor.
__init__
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame/Series.
y
(numpy.ndarray): The target data.
**kwargs
: args for super().fit
Returns:
Any
: The fitted model.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
concrete.ml.torch.numpy_module
A torch to numpy module.
OPSET_VERSION_FOR_ONNX_EXPORT
NumpyModule
General interface to transform a torch.nn.Module into a numpy module.
Args:
torch_model
(Union[nn.Module, onnx.ModelProto]): A fully trained, torch model along with its parameters or the onnx graph of the model.
dummy_input
(Union[torch.Tensor, Tuple[torch.Tensor, ...]]): Sample tensors for all the module inputs, used in the ONNX export to get a simple-to-manipulate representation of the network.
debug_onnx_output_file_path
(Optional[Union[Path, str]]): An optional path indicating where to save the ONNX file exported by torch, for debugging. Defaults to None.
__init__
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
_onnx_model
(onnx.ModelProto): the ONNX model
forward
Apply a forward pass on args with the equivalent numpy function only.
Args:
*args
: the inputs of the forward function
Returns:
Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]
: result of the forward on the given inputs
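For illustration, here is a sketch converting a small torch model into a NumpyModule and running its numpy-only forward pass; the model architecture is arbitrary.

```python
# Hedged sketch: NumpyModule(torch_model, dummy_input) as documented above.
import numpy
import torch
from concrete.ml.torch.numpy_module import NumpyModule

torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)

# Dummy input used only for the ONNX export backing the numpy representation
dummy_input = torch.randn(1, 10)
numpy_module = NumpyModule(torch_model, dummy_input)

# forward() applies the equivalent computation using numpy functions only
out = numpy_module.forward(numpy.random.uniform(size=(5, 10)).astype(numpy.float32))
print(out.shape)
```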
concrete.ml.torch.compile
torch compilation function.
MAX_BITWIDTH_BACKWARD_COMPATIBLE
OPSET_VERSION_FOR_ONNX_EXPORT
convert_torch_tensor_or_numpy_array_to_numpy_array
Convert a torch tensor or a numpy array to a numpy array.
Args:
torch_tensor_or_numpy_array
(Tensor): the value that is either a torch tensor or a numpy array.
Returns:
numpy.ndarray
: the value converted to a numpy array.
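A minimal illustration of this helper, assuming it behaves as documented (torch tensors are converted, numpy arrays are returned as numpy arrays):

```python
import numpy
import torch
from concrete.ml.torch.compile import convert_torch_tensor_or_numpy_array_to_numpy_array

# A torch tensor is converted to a numpy array
print(type(convert_torch_tensor_or_numpy_array_to_numpy_array(torch.randn(2, 3))))

# A numpy array is returned unchanged as a numpy array
print(type(convert_torch_tensor_or_numpy_array_to_numpy_array(numpy.ones((2, 3)))))
```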
compile_torch_model
Compile a torch module into a FHE equivalent.
Take a torch model, convert it to numpy, quantize its inputs/weights/outputs, and finally compile it with Concrete-Numpy.
Args:
torch_model
(torch.nn.Module): the model to quantize
torch_inputset
(Dataset): the calibration inputset, can contain either torch tensors or numpy.ndarray.
import_qat
(bool): Set to True to import a network that contains quantizers and was trained using quantization aware training
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter, which is sent to the compiler backend, is printed on the screen, e.g., for debugging or demo purposes
n_bits
: the number of bits for the quantization
use_virtual_lib
(bool): set to True to use the so-called Virtual Library, which simulates FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
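Below is a sketch of post-training quantization and compilation with compile_torch_model; the model architecture and bit-width are illustrative, and the Virtual Library is enabled for fast, simulated execution.

```python
# Hedged sketch: compile a small torch model with the arguments documented above.
import torch
from concrete.ml.torch.compile import compile_torch_model

torch_model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# Calibration inputset: representative, non-encrypted samples
torch_inputset = torch.randn(100, 4)

quantized_module = compile_torch_model(
    torch_model,
    torch_inputset,
    n_bits=3,              # small bit-width to stay within FHE constraints
    use_virtual_lib=True,  # simulate FHE execution with the Virtual Library
)
```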
compile_onnx_model
Compile an ONNX model into a FHE equivalent.
Take an ONNX model, convert it to numpy, quantize its inputs/weights/outputs, and finally compile it with Concrete-Numpy.
Args:
onnx_model
(onnx.ModelProto): the model to quantize
torch_inputset
(Dataset): the calibration inputset, can contain either torch tensors or numpy.ndarray.
import_qat
(bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete-ML should not requantize it.
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter, which is sent to the compiler backend, is printed on the screen, e.g., for debugging or demo purposes
n_bits
: the number of bits for the quantization
use_virtual_lib
(bool): set to True to use the so-called Virtual Library, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
verbose_compilation
(bool): whether to show compilation information
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
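The sketch below shows one way to obtain an onnx.ModelProto from torch and compile it; OPSET_VERSION_FOR_ONNX_EXPORT is referenced from the constants listed above, and if it is not importable in your version, an explicit opset number can be used instead.

```python
# Hedged sketch: export a torch model to ONNX, then compile the ONNX graph.
import onnx
import torch
from concrete.ml.torch.compile import OPSET_VERSION_FOR_ONNX_EXPORT, compile_onnx_model

torch_model = torch.nn.Sequential(torch.nn.Linear(4, 2))
torch.onnx.export(
    torch_model,
    torch.randn(1, 4),
    "model.onnx",
    opset_version=OPSET_VERSION_FOR_ONNX_EXPORT,
)

quantized_module = compile_onnx_model(
    onnx.load("model.onnx"),
    torch.randn(100, 4),   # calibration inputset
    n_bits=3,
    use_virtual_lib=True,
)
```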
compile_brevitas_qat_model
Compile a Brevitas Quantization Aware Training model.
The torch_model parameter is a subclass of torch.nn.Module that uses quantized operations from brevitas.qnn. The model is trained before calling this function. This function compiles the trained model to FHE.
Args:
torch_model
(torch.nn.Module): the model to quantize
torch_inputset
(Dataset): the calibration inputset, can contain either torch tensors or numpy.ndarray.
n_bits
(Union[int,dict]): the number of bits for the quantization
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter, which is sent to the compiler backend, is printed on the screen, e.g., for debugging or demo purposes
use_virtual_lib
(bool): set to True to use the so-called Virtual Library, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a single PBS
global_p_error
(Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
output_onnx_file
(str): temporary file to store the ONNX model. If None, a temporary file is generated.
verbose_compilation
(bool): whether to show compilation information
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
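Below is a sketch of compiling a small brevitas QAT network; the brevitas layer names and arguments follow common brevitas usage and are assumptions, and training is omitted (a real model would be trained with QAT before compilation).

```python
# Hedged sketch: a tiny brevitas QAT model compiled with the documented arguments.
import brevitas.nn as qnn
import torch
from concrete.ml.torch.compile import compile_brevitas_qat_model

class TinyQATModel(torch.nn.Module):
    def __init__(self, n_bits: int = 3):
        super().__init__()
        # Assumed brevitas layers: quantized input identity followed by a quantized linear
        self.quant_in = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc = qnn.QuantLinear(10, 2, bias=True, weight_bit_width=n_bits)

    def forward(self, x):
        return self.fc(self.quant_in(x))

model = TinyQATModel()
# ... train the model with QAT here ...

quantized_module = compile_brevitas_qat_model(
    model,
    torch.randn(100, 10),  # calibration inputset
    n_bits=3,
    use_virtual_lib=True,
)
```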
concrete.ml.sklearn.tree_to_numpy
Implements the conversion of a tree model to a numpy function.
MAX_BITWIDTH_BACKWARD_COMPATIBLE
OPSET_VERSION_FOR_ONNX_EXPORT
EXPECTED_NUMBER_OF_OUTPUTS_PER_TASK
tree_to_numpy
Convert the tree inference to a numpy function using Hummingbird.
Args:
model
(onnx.ModelProto): The model to convert.
x
(numpy.ndarray): The input data.
framework
(str): The framework from which the onnx_model is generated (options: 'xgboost', 'sklearn').
task
(Task): The task the model is solving
output_n_bits
(int): The number of bits of the output.
Returns:
Tuple[Callable, List[QuantizedArray], onnx.ModelProto]
: A tuple with a function that takes a numpy array and returns a numpy array, QuantizedArray object to quantize and dequantize the output of the tree, and the ONNX model.
Task
Task enumerate.