Quantization

Quantization is the process of constraining an input from a continuous or otherwise large set of values (such as real numbers) to a discrete set (such as integers).

This means that some accuracy in the representation is lost (e.g., a simple approach is to eliminate least-significant bits). In many cases in machine learning, it is possible to adapt the models to give meaningful results while using these smaller data types. This significantly reduces the number of bits necessary for intermediary results during the execution of these machine learning models.

Since FHE is currently limited to 16-bit integers, it is necessary to quantize models to make them compatible. As a general rule, the smaller the bit-width of integer values used in models, the better the FHE performance. This trade-off should be taken into account when designing models, especially neural networks.

Overview of quantization in Concrete ML

Quantization implemented in Concrete ML is applied in two ways:

  1. Built-in models apply quantization internally and the user only needs to configure some quantization parameters. This approach requires little work by the user but may not be a one-size-fits-all solution for all types of models. The final quantized model is FHE-friendly and ready to predict over encrypted data. In this setting, Post-Training Quantization (PTQ) is used for linear models, data quantization is used for tree-based models and, finally, Quantization Aware Training (QAT) is included in the built-in neural network models.

  2. For custom neural networks with more complex topology, obtaining FHE-compatible models with good accuracy requires QAT. Concrete ML offers the possibility for the user to perform quantization before compiling to FHE. This can be achieved through a third-party library that offers QAT tools, such as Brevitas for PyTorch. In this approach, the user is responsible for implementing a full-integer model that respects FHE constraints. Please refer to the advanced QAT tutorial for tips on designing FHE neural networks.

While Concrete ML quantizes machine learning models, the data that the client has is often in floating point. Concrete ML models provide APIs to quantize inputs and de-quantize outputs.

Note that the floating point input is quantized in the clear, meaning it is converted to integers before being encrypted. The model's outputs are also integers; they are decrypted before being de-quantized.

Basics of quantization

Let $[\alpha, \beta]$ be the range of a value to quantize, where $\alpha$ is the minimum and $\beta$ is the maximum. To quantize a range of floating point values (in $\mathbb{R}$) to integer values (in $\mathbb{Z}$), the first step is to choose the data type that is going to be used. Many ML models work with weights and activations represented as 8-bit integers, so this will be the value used in this example. Knowing the number of bits that can be used for a value in the range $[\alpha, \beta]$, the scale $S$ can be computed:

$$S = \frac{\beta - \alpha}{2^n - 1}$$

where $n$ is the number of bits ($n \leq 8$). In the following, $n = 8$ is assumed.

In practice, the quantization scale is then $S = \frac{\beta - \alpha}{255}$. This means the gap between consecutive representable values cannot be smaller than $S$, which, in turn, means there can be a substantial loss of precision. Every interval of length $S$ will be represented by a value within the range $[0..255]$.

The other important parameter in this quantization scheme is the zero point $Z_p$. This value maps the floating point value 0 to a specific integer. If the quantization scheme is asymmetric (quantized values are not centered at 0), the resulting $Z_p$ will be in $\mathbb{Z}$:

$$Z_p = \mathtt{round}\left(-\frac{\alpha}{S}\right)$$
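As a worked example, the two equations above can be evaluated with plain numpy. The range, input values, and variable names below are made up for illustration; this is not the Concrete ML implementation, which performs these steps internally.

import numpy as np

# Toy application of the scale and zero-point equations above, with n = 8 bits
n = 8
alpha, beta = -1.2, 0.8                     # example range [alpha, beta]

S = (beta - alpha) / (2**n - 1)             # scale
Z_p = round(-alpha / S)                     # zero point

x = np.array([-1.2, -0.3, 0.0, 0.8])
x_q = np.clip(np.round(x / S) + Z_p, 0, 2**n - 1).astype(np.int64)  # quantize
x_dq = (x_q - Z_p) * S                                              # de-quantize

print(x_q)   # integers in the range [0..255]
print(x_dq)  # close to the original floats, up to a precision of about S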

Quantization special cases

Machine learning acceleration solutions are often based on integer computation of activations. To make quantization computations hardware-friendly, a popular approach is to ensure that scales are powers-of-two, which allows the replacement of the division in the equations above with a shift-right operation. TFHE also has a fast primitive for right bit-shift that enables acceleration in the special case of power-of-two scales.
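The snippet below is a toy numpy illustration of this point (not Concrete ML internals): when the re-scaling factor is a power of two, say 2^k, integer division by it produces the same result as a right bit-shift by k.

import numpy as np

k = 4                                               # assume a power-of-two scale factor 2^k
acc = np.array([4096, 1030, 640], dtype=np.int64)   # example integer accumulator values

rescaled_div = acc // 2**k                          # division by the power-of-two factor
rescaled_shift = acc >> k                           # equivalent right bit-shift

assert np.array_equal(rescaled_div, rescaled_shift)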

Configuring model quantization parameters

Built-in models provide a simple interface for configuring quantization parameters, most notably the number of bits used for inputs, model weights, intermediary values, and output values.

For linear models, n_bits is used to quantize both model inputs and weights. Depending on the number of features, you can use a single integer value for the n_bits parameter (e.g., a value between 2 and 7). When the number of features is high, the n_bits parameter should be decreased if you encounter compilation errors. It is also possible to quantize inputs and weights with different numbers of bits by passing a dictionary to n_bits containing the op_inputs and op_weights keys.
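For instance, both forms of n_bits can be passed to a built-in linear model. The sketch below assumes the LogisticRegression class from concrete.ml.sklearn; the bit-width values are arbitrary examples.

from concrete.ml.sklearn import LogisticRegression

# Single integer: the same bit-width is used for inputs and weights
model = LogisticRegression(n_bits=3)

# Dictionary: different bit-widths for inputs and weights
model = LogisticRegression(n_bits={"op_inputs": 4, "op_weights": 2})

# Training and compilation then proceed as usual:
# model.fit(X_train, y_train)
# model.compile(X_train)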

Tree-based models allow direct control of the accumulator bit-width. If 6 or 7 bits are not sufficient to obtain good accuracy on your dataset, one option is to use an ensemble model (RandomForest or XGBoost) and increase the number of trees in the ensemble. This, however, will have a detrimental impact on FHE execution speed.
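As an illustration, the sketch below assumes the XGBClassifier class from concrete.ml.sklearn; the parameter values are arbitrary examples.

from concrete.ml.sklearn import XGBClassifier

# n_bits sets the quantization bit-width; adding trees can recover accuracy
# lost to a low bit-width, at the cost of slower FHE execution
model = XGBClassifier(n_bits=6, n_estimators=100, max_depth=4)
# model.fit(X_train, y_train)
# model.compile(X_train)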

For built-in neural networks, the maximum accumulator bit-width cannot be precisely controlled. Using many input features and a high number of bits is beneficial for model accuracy, but it can conflict with the 16-bit accumulator constraint. Finding the best quantization parameters to maximize accuracy while keeping the accumulator size down can only be accomplished through experimentation.

Quantizing model inputs and outputs

The models implemented in Concrete ML provide features to let the user quantize the input data and de-quantize the output data.

Here is a simple example showing how to perform inference, starting from float values and ending up with float values. The FHE engine that is compiled for ML models does not support data batching.

import numpy as np

# Assume:
#   quantized_module: QuantizedModule
#   x: numpy.ndarray (of float)

# Quantization is done in the clear
x_q = quantized_module.quantize_input(x)

# Forward in FHE (here with simulation)
q_y_proba = quantized_module.quantized_forward(x_q, fhe="simulate")

# De-quantization is done in the clear
y_proba = quantized_module.dequantize_output(q_y_proba)

# For classifiers with multi-class outputs, the arg max is done in the clear
y_pred = np.argmax(y_proba, 1)

Alternatively, the forward method groups the quantization, FHE execution and de-quantization steps all together.

import numpy as np

# Assume:
#   quantized_module: QuantizedModule
#   x: numpy.ndarray (of float)

# Forward in FHE (here with simulation). Quantization and de-quantization steps are still done in 
# the clear 
y_proba = quantized_module.forward(x, fhe="simulate")

# For classifiers with multi-class outputs, the arg max is done in the clear
y_pred = np.argmax(y_proba, 1)

Resources

When using quantized values in a matrix multiplication or convolution, the equations for computing the result become more complex. The IntelLabs Distiller documentation provides a more detailed explanation of the maths used to quantize values and of how to keep computations consistent.

For linear models, the quantization is done post-training. Thus, the model is trained in floating point, and then the best integer weight representations are found, depending on the distribution of inputs and weights. For these models, the user selects the value of the n_bits parameter.

For tree-based models, the training and test data are quantized. The maximum accumulator bit-width for a model trained with n_bits=n is known beforehand: it will need n+1 bits. Through experimentation, it was determined that, in many cases, a value of 5 or 6 bits gives the same accuracy as training in floating point, and that values above n=7 do not increase model performance (but rather induce a strong slowdown).

For built-in neural networks, several linear layers are used; thus, the outputs of a layer are used as inputs to the next layer. Built-in neural networks use Quantization Aware Training. The parameters controlling the maximum accumulator bit-width are the number of weight and activation bits (module__n_w_bits, module__n_a_bits), but also the pruning factor. This factor is determined automatically by specifying a desired accumulator bit-width module__n_accum_bits and, optionally, a multiplier factor, module__n_hidden_neurons_multiplier.
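A hedged sketch of such a configuration is shown below, assuming the NeuralNetClassifier class from concrete.ml.sklearn with skorch-style parameter names; module__n_layers and all values are illustrative assumptions.

from concrete.ml.sklearn import NeuralNetClassifier

model = NeuralNetClassifier(
    module__n_layers=3,                      # assumed parameter: number of layers
    module__n_w_bits=3,                      # weight bit-width
    module__n_a_bits=3,                      # activation bit-width
    module__n_accum_bits=8,                  # desired accumulator bit-width (drives pruning)
    module__n_hidden_neurons_multiplier=4,   # optional multiplier factor
    max_epochs=10,
)
# model.fit(X_train, y_train)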

In a client/server setting, the client is responsible for quantizing inputs before sending them, encrypted, to the server. The client must then decrypt and de-quantize the integer results received from the server. See the Production Deployment section for more details.
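The sketch below outlines the client-side steps, assuming the FHEModelClient API from concrete.ml.deployment and its quantize_encrypt_serialize / deserialize_decrypt_dequantize helpers; directory paths and variable names are placeholders.

from concrete.ml.deployment import FHEModelClient

# Load the client artifacts produced at deployment time (placeholder paths)
client = FHEModelClient(path_dir="deployment_files", key_dir="keys")
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# Quantize the clear float inputs, encrypt them, and serialize for transport
encrypted_input = client.quantize_encrypt_serialize(x)

# ... send encrypted_input and serialized_evaluation_keys to the server,
# which returns encrypted_result ...

# Decrypt the integer result and de-quantize it back to floats
# y_pred = client.deserialize_decrypt_dequantize(encrypted_result)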

IntelLabs distiller explanation of quantization: Distiller documentation