Pruning

Pruning is a method to reduce neural network complexity, usually applied to reduce computation cost or memory size. Pruning is used in Concrete ML to control the size of accumulators in neural networks, thus making them FHE-compatible. See the key concepts documentation for an explanation of accumulator bit-width constraints.

Overview of pruning in Concrete ML

Pruning is used in Concrete ML for two types of neural networks:

1. Built-in neural networks include a pruning mechanism that can be parameterized by the user. The pruning type is based on the L1-norm. To comply with FHE constraints, Concrete ML uses unstructured pruning, as the aim is not to eliminate neurons or convolutional filters completely, but to decrease their accumulator bit-width.

2. Custom neural networks, to work well under FHE constraints, should include pruning. When implemented with PyTorch, you can use the framework's pruning mechanism (e.g., L1-Unstructured) to good effect; a minimal sketch follows this list.
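
For custom PyTorch models, pruning can be applied with `torch.nn.utils.prune`. The snippet below is a minimal sketch only: the layer sizes and the 50% pruning amount are arbitrary illustration choices, not values prescribed by Concrete ML.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small fully connected network (sizes are arbitrary, for illustration only)
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Unstructured L1 pruning: zero out 50% of the smallest-magnitude weights
# in each Linear layer
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Make the pruning permanent (removes the re-parameterization hooks so the
# zeroed weights are stored directly in module.weight)
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# Count the non-zero weights remaining in each layer
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        nonzero = int(module.weight.count_nonzero())
        print(f"{name}: {nonzero} non-zero weights out of {module.weight.numel()}")
```

In a real training workflow, pruning is typically applied before or during fine-tuning so that the remaining weights can compensate for the removed connections.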

Basics of pruning

In neural networks, a neuron computes a linear combination of inputs and learned weights, then applies an activation function.

The neuron computes:

$$y_k = \phi\left(\sum_i w_i x_i\right)$$

When building a full neural network, each layer will contain multiple neurons, which are connected to the inputs or to the neuron outputs of a previous layer.

For every neuron in each layer of such a network, the linear combination of inputs and learned weights is computed. Depending on the values of the inputs and weights, the sum $v_k = \sum_i w_i x_i$ - which, for Concrete ML neural networks, is computed with integers - can take a range of different values.

Pruning a neural network entails fixing some of the weights $w_k$ to be zero during training. This is advantageous to meet FHE constraints, as, irrespective of the distribution of $x_i$, multiplying these input values by 0 does not increase the accumulator value.
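
A toy calculation (plain NumPy, not Concrete ML code) makes this concrete: with 2-bit inputs and weights all at their worst-case value of 3, the accumulator grows only with the number of non-zero weights.

```python
import numpy as np

n_inputs = 100

# 2-bit unsigned inputs and weights, all at their worst-case maximal value 3
x = np.full(n_inputs, 3, dtype=np.int64)
w = np.full(n_inputs, 3, dtype=np.int64)

# Prune: keep only 10 non-zero weights, zero out the rest
w_pruned = w.copy()
w_pruned[10:] = 0

v_full = int(np.dot(w, x))       # 900 -> needs 10 bits
v_pruned = int(np.dot(w_pruned, x))  # 90 -> needs 7 bits

print(v_full, v_full.bit_length())
print(v_pruned, v_pruned.bit_length())
```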

Fixing some of the weights to 0 removes the corresponding connections from the network graph, producing a sparser, pruned network.

Pruning in practice

Considering the accumulator formula above, in the worst case, the maximum number of input-weight products that can be summed without exceeding $n_{\mathsf{max}}$ bits is given by:

$$\Omega = \mathsf{floor} \left( \frac{2^{n_{\mathsf{max}}} - 1}{(2^{n_{\mathsf{weights}}} - 1)(2^{n_{\mathsf{inputs}}} - 1)} \right)$$

Here, $n_{\mathsf{max}} = 16$ is the maximum precision allowed.

For example, if $n_{\mathsf{weights}} = 2$ and $n_{\mathsf{inputs}} = 2$ with $n_{\mathsf{max}} = 16$, the worst-case scenario occurs when all inputs and weights are equal to their maximal value $2^2-1=3$. In that case, there can be at most $\Omega = 7281$ elements in the multi-sums.
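
This value can be checked directly; the snippet below is just a worked computation of the formula above.

```python
def omega(n_max: int, n_weights: int, n_inputs: int) -> int:
    """Maximum number of input-weight products that fit in n_max bits,
    assuming all inputs and weights take their maximal unsigned value."""
    # Integer division implements the floor in the formula exactly
    return (2**n_max - 1) // ((2**n_weights - 1) * (2**n_inputs - 1))

print(omega(n_max=16, n_weights=2, n_inputs=2))  # 7281
```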

The distribution of the weights of a neural network is typically Gaussian, with many weights either zero or having a small value. This makes it possible to exceed the worst-case number of active neurons without risking overflow of the accumulator bit-width. In built-in neural networks, the parameter `n_hidden_neurons_multiplier` is multiplied with $\Omega$ to determine the total number of non-zero weights that should be kept in a neuron.
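
As a sketch of how this looks for a built-in model, the example below assumes the skorch-style `module__` parameter prefix used by Concrete ML's `NeuralNetClassifier`; the exact parameter names should be verified against the API reference for your version.

```python
import torch.nn as nn
from concrete.ml.sklearn import NeuralNetClassifier

# Assumed parameter names (skorch "module__" convention); check the API
# reference, as they may differ between Concrete ML versions.
params = {
    "module__n_layers": 3,
    "module__activation_function": nn.ReLU,
    # Multiplied with Omega to set the number of non-zero weights kept per neuron
    "module__n_hidden_neurons_multiplier": 4,
    "max_epochs": 10,
}

model = NeuralNetClassifier(**params)
# model.fit(X_train, y_train)   # X_train, y_train: your training data
# model.compile(X_train)        # compile to an FHE circuit before encrypted inference
```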

To respect the bit-width constraint of the FHE table lookup, the values of the accumulator $v_k$ must remain small to be representable using a maximum of 16 bits. In other words, the values must be between 0 and $2^{16}-1$.

While pruning weights can reduce the prediction performance of the neural network, studies show that a high level of pruning (above 50%) can often be applied. See the neural networks documentation for how Concrete ML uses pruning in Fully Connected Neural Networks.
