Using Torch
This document explains how to implement machine learning models with Torch in Concrete ML, leveraging Fully Homomorphic Encryption (FHE).
Introduction
There are two approaches to building FHE-compatible deep networks:
Quantization Aware Training (QAT): This method requires using custom layers to quantize weights and activations to low bit-widths. Concrete ML works with Brevitas, a library that provides QAT support for PyTorch. Use compile_brevitas_qat_model to compile models in this mode.
Post Training Quantization (PTQ): This method allows you to compile a vanilla PyTorch model directly. However, accuracy may decrease significantly when quantizing weights and activations to fewer than 7 bits. On the other hand, depending on the model size, quantizing with 6-8 bits can be incompatible with FHE constraints. You therefore need to determine the trade-off between model accuracy and FHE compatibility. Use compile_torch_model to compile models in this mode.
Both approaches require setting the rounding_threshold_bits parameter accordingly. You should experiment to find the best value, starting with an initial value of 6, as shown in the compilation examples below. See here for more details.
See the common compilation errors page for explanations and solutions to some common errors raised by the compilation function.
Quantization Aware Training (QAT)
The following example uses a simple QAT PyTorch model that implements a fully connected neural network with two hidden layers. Due to its small size, making this model respect FHE constraints is relatively easy. To use QAT, Brevitas QuantIdentity nodes must be inserted in the PyTorch model, including one that quantizes the input of the forward function.
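A minimal sketch of such a model is shown below. The layer sizes and the 3-bit setting are illustrative placeholders, not values prescribed by Concrete ML:

```python
import torch
import torch.nn as nn
import brevitas.nn as qnn

N_BITS = 3  # illustrative bit-width for weights and activations


class QATSimpleNet(nn.Module):
    """Fully connected network with two hidden layers, quantized with Brevitas."""

    def __init__(self, n_inputs, n_hidden, n_outputs):
        super().__init__()
        # Quantize the input of the forward function
        self.quant_inp = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(n_inputs, n_hidden, bias=True, weight_bit_width=N_BITS)
        self.quant1 = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, bias=True, weight_bit_width=N_BITS)
        self.quant2 = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc3 = qnn.QuantLinear(n_hidden, n_outputs, bias=True, weight_bit_width=N_BITS)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.quant1(torch.relu(self.fc1(x)))
        x = self.quant2(torch.relu(self.fc2(x)))
        return self.fc3(x)
```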
Once the model is trained, use compile_brevitas_qat_model from Concrete ML to perform conversion and compilation of the QAT network. Here, 3-bit quantization is used for both the weights and activations. This function automatically identifies the number of quantization bits used in the Brevitas model.
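For instance, the compilation call might look like the following sketch, where torch_model is a trained instance of the network above and torch_input is a representative set of calibration inputs (n_inputs is the model's input size):

```python
from concrete.ml.torch.compile import compile_brevitas_qat_model

# Representative inputs used to calibrate quantization and compile the circuit
torch_input = torch.randn(100, n_inputs)

quantized_module = compile_brevitas_qat_model(
    torch_model,                # the trained Brevitas QAT network
    torch_input,                # representative calibration data
    rounding_threshold_bits=6,  # see the note on rounding above
)
```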
If QuantIdentity layers are missing for any input or intermediate value, the compile function will raise an error. See the common compilation errors page for an explanation.
Post Training Quantization (PTQ)
The following example demonstrates a simple PyTorch model that implements a fully connected neural network with two hidden layers. The model is compiled with compile_torch_model to use FHE.
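A compilation call for this PTQ path might look like the following sketch, assuming torch_model is the trained floating-point network and torch_input is a representative calibration set:

```python
from concrete.ml.torch.compile import compile_torch_model

quantized_module = compile_torch_model(
    torch_model,                # vanilla (floating point) PyTorch model
    torch_input,                # representative calibration data
    n_bits=6,                   # PTQ bit-width for weights and activations
    rounding_threshold_bits=6,  # see the note on rounding above
)
```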
Configuring quantization parameters
The quantization parameters, along with the number of neurons in each layer, determine the accumulator bit-width of the network. Larger accumulator bit-widths result in higher accuracy but slower FHE inference time.
QAT: Configure parameters such as bit_width and weight_bit_width. Set n_bits=None in compile_brevitas_qat_model.
PTQ: Set the n_bits value in the compile_torch_model function. Manually determine the trade-off between accuracy, FHE compatibility, and latency.
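The fragment below contrasts the two configurations, reusing the torch_model and torch_input placeholders from the examples above; the bit-width values are illustrative only:

```python
# QAT: bit-widths come from the Brevitas layers (bit_width, weight_bit_width)
# and are detected automatically, so n_bits is left as None at compile time
qnn.QuantIdentity(bit_width=4, return_quant_tensor=True)
qnn.QuantLinear(10, 10, bias=True, weight_bit_width=4)
quantized_module = compile_brevitas_qat_model(torch_model, torch_input, n_bits=None)

# PTQ: bit-widths for weights and activations are set through n_bits
quantized_module = compile_torch_model(torch_model, torch_input, n_bits=6)
```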
Running encrypted inference
The model can now perform encrypted inference.
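Assuming quantized_module is the result of one of the compile calls above and x_test is a NumPy array of float inputs, a sketch of encrypted inference is:

```python
# Quantization, encryption, FHE execution, decryption and de-quantization are
# all handled inside forward() when fhe="execute" is requested
y_pred = quantized_module.forward(x_test, fhe="execute")
```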
In this example, the input values x_test and the predicted values y_pred are floating points. The quantization (respectively de-quantization) step is done in the clear within the forward method, before (respectively after) any FHE computations.
Simulated FHE inference in the clear
You can perform inference on clear data in order to evaluate the impact of quantization and of FHE computation on the accuracy of your model. See this section for more details.
There are two approaches:
quantized_module.forward(quantized_x, fhe="simulate")
: This method simulates FHE execution taking into account Table Lookup errors. De-quantization must be done in a second step as for actual FHE execution. Simulation takes into account thep_error
/global_p_error
parametersquantized_module.forward(quantized_x, fhe="disable")
: This method computes predictions in the clear on quantized data, and then de-quantize the result. The return value of this function contains the de-quantized (float) output of running the model in the clear. Calling this function on clear data is useful when debugging, but this does not perform actual FHE simulation.
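A sketch of both calls, following the descriptions above and assuming the quantize_input and dequantize_output helpers of the quantized module are used for the explicit quantization steps:

```python
# Quantize the clear test data before calling the quantized module
quantized_x = quantized_module.quantize_input(x_test)

# Simulated FHE execution: Table Lookup errors (p_error / global_p_error) are
# taken into account; de-quantize the result in a second step
q_y_sim = quantized_module.forward(quantized_x, fhe="simulate")
y_sim = quantized_module.dequantize_output(q_y_sim)

# Clear execution on quantized data: returns de-quantized (float) predictions,
# useful for debugging, but performs no FHE simulation
y_clear = quantized_module.forward(quantized_x, fhe="disable")
```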
FHE simulation allows you to measure the impact of the Table Lookup error on the model accuracy. You can adjust the Table Lookup error using p_error/global_p_error, as described in the approximate computation section.
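For example, the error probability can be set at compile time; the value below is purely illustrative:

```python
# Allow a larger Table Lookup error probability to trade accuracy for speed
quantized_module = compile_torch_model(
    torch_model, torch_input, n_bits=6, p_error=0.01
)
```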
Supported operators and activations
Concrete ML supports a variety of PyTorch operators that can be used to build fully connected or convolutional neural networks, with normalization and activation layers. Moreover, many element-wise operators are supported.
Operators
Univariate operators
Shape modifying operators
Tensor operators
torch.Tensor.to -- for casting to dtype
Multi-variate operators: encrypted input and unencrypted constants
Concrete ML also supports some of their QAT equivalents from Brevitas.
brevitas.nn.QuantLinear
brevitas.nn.QuantConv1d
brevitas.nn.QuantConv2d
Multi-variate operators: encrypted+unencrypted or encrypted+encrypted inputs
Quantizers
brevitas.nn.QuantIdentity
Activation functions
torch.nn.Threshold -- partial support
The equivalent versions from torch.functional are also supported.