# Using Torch

In addition to the built-in models, Concrete ML supports generic machine learning models implemented with Torch, or exported as ONNX graphs.

As Quantization Aware Training (QAT) is the most appropriate method of training neural networks that are compatible with FHE constraints, Concrete ML works with Brevitas, a library providing QAT support for PyTorch.

The following example uses a simple QAT PyTorch model that implements a fully connected neural network with two hidden layers. Due to its small size, making this model respect FHE constraints is relatively easy.
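A minimal sketch of such a model is shown below. The layer sizes, input/output dimensions, and class name are illustrative assumptions; the 3-bit quantization matches the setting used later in this example.

```python
import brevitas.nn as qnn
import torch.nn as nn

N_BITS = 3  # quantization bit-width for activations and weights


class QATSimpleNet(nn.Module):
    """A small fully connected QAT network with two hidden layers."""

    def __init__(self, n_hidden: int = 30):
        super().__init__()
        # Quantize the network input before the first linear layer
        self.quant_inp = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, n_hidden, True, weight_bit_width=N_BITS)
        self.relu1 = qnn.QuantReLU(bit_width=N_BITS, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(n_hidden, n_hidden, True, weight_bit_width=N_BITS)
        self.relu2 = qnn.QuantReLU(bit_width=N_BITS, return_quant_tensor=True)
        self.fc3 = qnn.QuantLinear(n_hidden, 2, True, weight_bit_width=N_BITS)

    def forward(self, x):
        x = self.quant_inp(x)
        x = self.relu1(self.fc1(x))
        x = self.relu2(self.fc2(x))
        return self.fc3(x)
```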

Converting neural networks to use FHE can be done with `compile_brevitas_qat_model`, or with `compile_torch_model` for post-training quantization. If the model cannot be converted to FHE, two types of errors can be raised: (1) crypto-parameters cannot be found and (2) the table look-up bit-width limit is exceeded. See the debugging section if you encounter these errors.

Once the model is trained, calling the `compile_brevitas_qat_model` function from Concrete ML will automatically perform conversion and compilation of a QAT network. Here, 3-bit quantization is used for both the weights and activations. The `compile_brevitas_qat_model` function automatically identifies the number of quantization bits used in the Brevitas model.
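A minimal compilation sketch, assuming the `QATSimpleNet` model defined above and a random calibration set in place of real training data:

```python
import numpy as np
from concrete.ml.torch.compile import compile_brevitas_qat_model

torch_model = QATSimpleNet(n_hidden=30)
# ... train torch_model with a standard PyTorch training loop ...

# Representative input set used for compilation (random placeholder data here)
x_train = np.random.uniform(-1, 1, size=(100, 10)).astype(np.float32)

quantized_module = compile_brevitas_qat_model(
    torch_model,  # Brevitas QAT model to convert to FHE
    x_train,      # representative input set
)
```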

## Configuring quantization parameters

The PyTorch/Brevitas models, created following the example above, require the user to configure quantization parameters such as `bit_width` (activation bit-width) and `weight_bit_width`. The quantization parameters, along with the number of neurons in each layer, determine the accumulator bit-width of the network. Larger accumulator bit-widths result in higher accuracy but slower FHE inference time.

The following configurations were determined through experimentation for convolutional and dense layers.

| target accumulator bit-width | activation bit-width | weight bit-width | number of active neurons |
|---|---|---|---|
| 8 | 3 | 3 | 80 |
| 10 | 4 | 3 | 90 |
| 12 | 5 | 5 | 110 |
| 14 | 6 | 6 | 110 |
| 16 | 7 | 6 | 120 |
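For instance, the first row of the table (an 8-bit accumulator target) could translate into the following Brevitas layer settings; the layer dimensions are illustrative assumptions:

```python
import brevitas.nn as qnn

# First row of the table: 3-bit activations, 3-bit weights, 80 active neurons
act_bit_width = 3
weight_bit_width = 3
n_active_neurons = 80

quant_act = qnn.QuantIdentity(bit_width=act_bit_width, return_quant_tensor=True)
fc = qnn.QuantLinear(
    n_active_neurons, n_active_neurons, True, weight_bit_width=weight_bit_width
)
```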

Using the templates above, the probability of obtaining the target accumulator bit-width, for a single layer, was determined experimentally by training 10 models for each of the following data-sets.

| probability of obtaining the accumulator bit-width | 8 | 10 | 12 | 14 | 16 |
|---|---|---|---|---|---|
| mnist, fashion | 72% | 100% | 72% | 85% | 100% |
| cifar10 | 88% | 88% | 75% | 75% | 88% |
| cifar100 | 73% | 88% | 61% | 66% | 100% |

Note that when the accumulator size is low, the accuracy on larger data-sets is also strongly reduced.

| accuracy for target accumulator bit-width | 8 | 10 | 12 | 14 | 16 |
|---|---|---|---|---|---|
| cifar10 | 20% | 37% | 89% | 90% | 90% |
| cifar100 | 6% | 30% | 67% | 69% | 69% |

## Running encrypted inference

The model can now perform encrypted inference.
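A minimal sketch, reusing `quantized_module` and the input shape from the compilation example above:

```python
import numpy as np

# Input values in floating point; quantization, encryption, FHE execution,
# decryption, and de-quantization all happen around the FHE computation
x_test = np.random.uniform(-1, 1, size=(2, 10)).astype(np.float32)
y_pred = quantized_module.forward(x_test, fhe="execute")
```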

In this example, the input values `x_test` and the predicted values `y_pred` are floating points. The quantization (resp. de-quantization) step is done in the clear within the `forward` method, before (resp. after) any FHE computations.

## Simulated FHE Inference in the clear

The user can also perform the inference on clear data. Two approaches exist, as shown in the sketch after this list:

- `quantized_module.forward(quantized_x, fhe="simulate")`: simulates FHE execution taking into account Table Lookup errors. De-quantization must be done in a second step, as for actual FHE execution. Simulation takes into account the `p_error`/`global_p_error` parameters.
- `quantized_module.forward(quantized_x, fhe="disable")`: computes predictions in the clear on quantized data, and then de-quantizes the result. The return value of this function contains the de-quantized (float) output of running the model in the clear. Calling this function on clear data is useful when debugging, but this does not perform actual FHE simulation.
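A minimal sketch of both approaches, assuming the `quantize_input` and `dequantize_output` helpers of the quantized module and reusing `x_test` from above:

```python
# Quantize the clear inputs explicitly
quantized_x = quantized_module.quantize_input(x_test)

# Simulated FHE execution: accounts for Table Lookup errors;
# de-quantization is done as a separate step
q_out_sim = quantized_module.forward(quantized_x, fhe="simulate")
y_pred_sim = quantized_module.dequantize_output(q_out_sim)

# Clear execution on quantized data: returns de-quantized (float) predictions
y_pred_clear = quantized_module.forward(quantized_x, fhe="disable")
```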

FHE simulation makes it possible to measure the impact of the Table Lookup error on the model accuracy. The Table Lookup error can be adjusted using `p_error`/`global_p_error`, as described in the approximate computation section.

## Generic Quantization Aware Training import

While the example above shows how to import a Brevitas/PyTorch model, Concrete ML also provides an option to import generic QAT models implemented in PyTorch or through ONNX. Deep learning models made with TensorFlow or Keras should be usable by first converting them to ONNX.

QAT models contain quantizers in the PyTorch graph. These quantizers ensure that the inputs to the Linear/Dense and Conv layers are quantized.

Suppose that `n_bits_qat` is the bit-width of activations and weights during the QAT process. To import a PyTorch QAT network, you can use the `compile_torch_model` library function, passing `import_qat=True`:
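A minimal sketch, reusing the `torch_model` and `x_train` names from the earlier examples; the 3-bit value for `n_bits_qat` is an assumption:

```python
from concrete.ml.torch.compile import compile_torch_model

n_bits_qat = 3  # bit-width of activations and weights used during QAT

quantized_module = compile_torch_model(
    torch_model,      # generic PyTorch model trained with QAT
    x_train,          # representative calibration set
    import_qat=True,  # the network already contains quantizers
    n_bits=n_bits_qat,
)
```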

Alternatively, if you want to import an ONNX model directly, please see the ONNX guide. The `compile_onnx_model` function also supports the `import_qat` parameter.

When importing QAT models using this generic pipeline, a representative calibration set should be provided, since the quantization parameters in the model need to be inferred from the statistics of the values encountered during inference.

## Supported operators and activations

Concrete ML supports a variety of PyTorch operators that can be used to build fully connected or convolutional neural networks, with normalization and activation layers. Moreover, many element-wise operators are supported.

### Operators

#### Univariate operators

#### Shape modifying operators

#### Tensor operators

`torch.Tensor.to` -- for casting to dtype

#### Multi-variate operators: encrypted input and unencrypted constants

Concrete ML also supports some of their QAT equivalents from Brevitas.

- `brevitas.nn.QuantLinear`
- `brevitas.nn.QuantConv1d`
- `brevitas.nn.QuantConv2d`

#### Multi-variate operators: encrypted+unencrypted or encrypted+encrypted inputs

### Quantizers

`brevitas.nn.QuantIdentity`

### Activation functions

`torch.nn.Threshold` -- partial support

The equivalent versions from `torch.functional` are also supported.
