Pruning
In neural networks, a neuron computes a linear combination of inputs and learned weights, then applies an activation function.

Figure: Artificial Neuron (from: Wikipedia)

The neuron computes:

$$y = \phi\left(\sum_i w_i x_i\right)$$

where the $x_i$ are the inputs, the $w_i$ are the learned weights, and $\phi$ is the activation function.
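As a minimal sketch of this computation (using NumPy, with tanh as a stand-in activation $\phi$; the specific values and activation are illustrative assumptions, not choices mandated by Concrete-ML):

```python
import numpy as np

def neuron(x, w, phi=np.tanh):
    # Linear combination of inputs and learned weights, then the activation
    return phi(np.dot(w, x))

x = np.array([1.0, -2.0, 0.5, 3.0])   # example inputs
w = np.array([0.2, -0.1, 0.4, 0.3])   # example learned weights
print(neuron(x, w))                   # a single scalar output
```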
When building a full neural network, each layer contains multiple neurons, each connected to the outputs of the previous layer's neurons or, for the first layer, to the network inputs.

Figure: Fully Connected Neural Network
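To make this wiring concrete, here is a sketch of a two-layer fully connected network in NumPy (the layer sizes and activation are illustrative assumptions, not values from this page):

```python
import numpy as np

def layer(x, W, phi=np.tanh):
    # Each row of W holds one neuron's weights; every neuron is
    # connected to all outputs of the previous layer (or the inputs)
    return phi(W @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=4)          # network inputs
W1 = rng.normal(size=(3, 4))    # first layer: 3 neurons over 4 inputs
W2 = rng.normal(size=(2, 3))    # second layer: 2 neurons over 3 outputs
y = layer(layer(x, W1), W2)     # forward pass through both layers
print(y)
```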
For every neuron shown in each layer of the figure above, the linear combination of inputs and learned weights is computed. Depending on the values of the inputs and weights, the sum (which, for Concrete-ML neural networks, is computed with integers) can take a wide range of values.

To respect the bit-width constraint of the Table Lookup mechanism, implemented with programmable bootstrapping, the accumulator values must remain small enough to be representable with only 7 bits. In other words, the values must stay between 0 and 127.
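The following sketch, with made-up integer inputs and weights, shows how one can check the number of bits such an integer accumulator needs; under the 7-bit limit, the value must not exceed 127:

```python
import numpy as np

# Hypothetical quantized (integer) inputs and weights, chosen only
# to illustrate how the integer accumulator grows
x = np.array([3, 7, 2, 5], dtype=np.int64)
w = np.array([2, 1, 4, 3], dtype=np.int64)

acc = int(np.dot(w, x))   # the integer sum computed inside a neuron
bits = acc.bit_length()   # bits needed for this non-negative value

print(f"accumulator = {acc}, fits in {bits} bits (7-bit limit: 0..127)")
```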
Pruning a neural network entails fixing some of the weights to zero during training. This is advantageous for meeting FHE constraints: irrespective of the distribution of the input values $x_i$, multiplying these inputs by weights fixed to 0 does not increase the accumulator value.
Fixing some of the weights to 0 makes the network graph look more similar to the following:

Figure: Pruned Fully Connected Neural Network
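Continuing the toy integer example from above, here is a sketch of magnitude-based pruning (zeroing the smallest-magnitude half of the weights; the 50% ratio is an illustrative assumption) and its effect on the accumulator:

```python
import numpy as np

x = np.array([3, 7, 2, 5], dtype=np.int64)
w = np.array([2, 1, 4, 3], dtype=np.int64)

# Zero out the half of the weights with the smallest magnitude
threshold = np.sort(np.abs(w))[len(w) // 2]
w_pruned = np.where(np.abs(w) >= threshold, w, 0)

print(int(np.dot(w, x)), int(np.dot(w_pruned, x)))  # 36 -> 23
```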
Pruning weights can reduce the prediction performance of the neural network, but studies show that a high level of pruning (above 50%) can be applied; see Han, S., Pool, J., Tran, J., & Dally, W. (2015), "Learning both Weights and Connections for Efficient Neural Networks". In Concrete-ML, we implement Fully Connected Neural Networks with pruning, as described in the developer guide.
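As a hedged sketch of how such pruning can be applied to a fully connected layer using standard PyTorch utilities (this illustrates the general technique, not necessarily Concrete-ML's exact internal mechanism; the layer sizes are assumptions):

```python
import torch
import torch.nn.utils.prune as prune

# A small fully connected layer; the sizes are illustrative
layer = torch.nn.Linear(in_features=8, out_features=4)

# Unstructured pruning: zero the 50% of weights with the smallest
# L1 magnitude (a mask keeps them at zero during training)
prune.l1_unstructured(layer, name="weight", amount=0.5)

print((layer.weight == 0).float().mean().item())  # fraction pruned: 0.5
```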