# concrete.ml.sklearn.qnn

## module `concrete.ml.sklearn.qnn`

`concrete.ml.sklearn.qnn`

Scikit-learn interface for concrete quantized neural networks.

**Global Variables**

**MAXIMUM_TLU_BIT_WIDTH**

### class `SparseQuantNeuralNetImpl`

`SparseQuantNeuralNetImpl`

Sparse Quantized Neural Network classifier.

This class implements an MLP that is compatible with FHE constraints. The weights and activations are quantized to a low bit-width, and pruning is used to ensure accumulators do not surpass a user-provided accumulator bit-width. The number of classes, the number of layers, and the breadth of the network are specified by the user.

#### method `__init__`

`__init__`

Sparse Quantized Neural Network constructor.

**Args:**

- **input_dim**: Number of dimensions of the input data.

- **n_layers**: Number of linear layers for this network.

- **n_outputs**: Number of output classes or regression targets.

- **n_w_bits**: Number of weight bits.

- **n_a_bits**: Number of activation and input bits.

- **n_accum_bits**: Maximal allowed bit-width of intermediate accumulators.

- **n_hidden_neurons_multiplier**: A factor that is multiplied by the maximal number of active (non-zero weight) neurons for every layer. The maximal number of active neurons in the worst-case scenario is `max_active_neurons(n_max, n_w, n_a) = floor((2^n_max - 1) / ((2^n_w - 1) * (2^n_a - 1)))`. The worst case for the accumulator bit-width occurs when all weights and activations reach their maximum simultaneously. For each layer, the total number of neurons is set to `n_hidden_neurons_multiplier * max_active_neurons(n_accum_bits, n_w_bits, n_a_bits)`. Through experiments, for typical distributions of weights and activations, the default value of 4 for `n_hidden_neurons_multiplier` is safe to avoid overflow.

- **activation_function**: A torch class used to construct activation functions in the network (e.g. `torch.nn.ReLU`, `torch.nn.SELU`, `torch.nn.Sigmoid`).

**Raises:**

- **ValueError**: if the parameters have invalid values or the computed accumulator bit-width is zero.
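As a sketch of the sizing rule above, the worst-case active-neuron bound can be computed directly. This is a minimal standalone illustration of the formula from the constructor docstring, not the library's implementation:

```python
import math

def max_active_neurons(n_max: int, n_w: int, n_a: int) -> int:
    """Worst-case number of active neurons whose accumulated sum
    still fits in n_max bits, per the formula in the docstring."""
    return math.floor((2**n_max - 1) / ((2**n_w - 1) * (2**n_a - 1)))

# With 8-bit accumulators and 2-bit weights/activations:
active = max_active_neurons(8, 2, 2)  # floor(255 / 9) = 28
# Total neurons per layer with the default multiplier of 4:
total = 4 * active                    # 112
```

The multiplier of 4 over-provisions each layer so that, after pruning down to the worst-case bound, enough neurons remain for typical (non-worst-case) weight distributions.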

#### method `enable_pruning`

`enable_pruning`

Enable pruning in the network. Pruning must be made permanent to recover pruned weights.

**Raises:**

- **ValueError**: if the quantization parameters are invalid.

#### method `forward`

`forward`

Forward pass.

**Args:**

- **x** (torch.Tensor): network input

**Returns:**

- **x** (torch.Tensor): network prediction

#### method `make_pruning_permanent`

`make_pruning_permanent`

Make the learned pruning permanent in the network.

#### method `max_active_neurons`

`max_active_neurons`

Compute the maximum number of active (non-zero weight) neurons.

The computation is done using the quantization parameters passed to the constructor. Warning: With the current quantization algorithm (asymmetric) the value returned by this function is not guaranteed to ensure FHE compatibility. For some weight distributions, weights that are 0 (which are pruned weights) will not be quantized to 0. Therefore the total number of active quantized neurons will not be equal to max_active_neurons.

**Returns:**

- **n** (int): maximum number of active neurons

#### method `on_train_end`

`on_train_end`

Callback for when training is finished; can be useful to remove training hooks.

### class `QuantizedSkorchEstimatorMixin`

`QuantizedSkorchEstimatorMixin`

Mixin class that adds quantization features to Skorch NN estimators.

**property base_estimator_type**

Get the sklearn estimator that should be trained by the child class.

**property base_module_to_compile**

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

**Returns:**

- **module** (nn.Module): the instantiated torch module

**property fhe_circuit**

Get the FHE circuit.

**Returns:**

- **Circuit**: the FHE circuit

**property input_quantizers**

Get the input quantizers.

**Returns:**

- **List[Quantizer]**: the input quantizers

**property n_bits_quant**

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

**Returns:**

- **n_bits** (int): the number of bits used to quantize the network

**Raises:**

- **ValueError**: with skorch estimators, `module_` is not instantiated until `.fit()` is called, so this estimator must be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.

**property onnx_model**

Get the ONNX model.

**Returns:**

- **_onnx_model_** (onnx.ModelProto): the ONNX model

**property output_quantizers**

Get the output quantizers.

**Returns:**

- **List[QuantizedArray]**: the output quantizers

**property quantize_input**

Get the input quantization function.

**Returns:**

- **Callable**: function that quantizes the input

#### method `get_params_for_benchmark`

`get_params_for_benchmark`

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch accepts either a class or a class instance for the `module` parameter. We want to pass our trained model, a class instance, but for this to work we need to remove all module-related constructor parameters. Otherwise, skorch will instantiate a new instance of the same type as the passed module (see `NeuralNet::initialize_instance` in skorch's `net.py`).

**Returns:**

- **params** (dict): parameters to create an equivalent fp32 sklearn estimator for benchmarking
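The module-parameter filtering described above can be sketched with plain dictionaries. This is an illustration of the idea only; the real method operates on the skorch estimator's own parameters:

```python
def strip_module_params(params: dict) -> dict:
    """Drop the 'module' entry and all 'module__*' constructor
    parameters so skorch reuses the passed module instance
    instead of instantiating a fresh one."""
    return {
        name: value
        for name, value in params.items()
        if name != "module" and not name.startswith("module__")
    }

params = {"module": object(), "module__n_layers": 3, "max_epochs": 10, "lr": 0.01}
strip_module_params(params)  # keeps only {'max_epochs': 10, 'lr': 0.01}
```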

#### method `infer`

`infer`

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

**Args:**

- **x** (torch.Tensor): A batch of the input data, produced by a Dataset.

- **\*\*fit_params** (dict): Additional parameters passed to the `forward` method of the module and to the `self.train_split` call.

**Returns:** A torch tensor with the inference results for each item in the input

#### method `on_train_end`

`on_train_end`

Callback invoked by the skorch wrapper when training is finished.

Check if the underlying neural net has a callback for this event and, if so, call it.

**Args:**

- **net**: estimator for which training has ended (equal to `self`)

- **X**: data

- **y**: targets

- **kwargs**: other arguments

### class `FixedTypeSkorchNeuralNet`

`FixedTypeSkorchNeuralNet`

A mixin with a helpful modification to a skorch estimator that fixes the module type.

#### method `get_params`

`get_params`

Get parameters for this estimator.

**Args:**

- **deep** (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

- **\*\*kwargs**: any additional parameters to pass to the sklearn `BaseEstimator` class.

**Returns:**

- **params** (dict): Parameter names mapped to their values.

### class `NeuralNetClassifier`

`NeuralNetClassifier`

Scikit-learn interface for quantized FHE compatible neural networks.

This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi layer fully connected network trained with Quantization Aware Training (QAT).

The datatypes that are allowed for prediction by this wrapper are more restricted than those of standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
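A hypothetical hyper-parameter set for such an estimator, using skorch's `module__` prefix to route each value to the underlying module's constructor. The parameter names below are assumptions derived from the `SparseQuantNeuralNetImpl` constructor arguments documented above; check the signature in your installed version:

```python
# Hypothetical settings; the 'module__' prefix routes each value to the
# underlying SparseQuantNeuralNetImpl constructor via skorch.
params = {
    "module__input_dim": 10,    # number of input features
    "module__n_outputs": 2,     # number of classes
    "module__n_layers": 3,      # number of linear layers
    "module__n_w_bits": 2,      # weight bit-width
    "module__n_a_bits": 2,      # activation/input bit-width
    "module__n_accum_bits": 8,  # accumulator bit-width bound
    "max_epochs": 10,           # plain skorch training setting
}
# Then, roughly: model = NeuralNetClassifier(**params); model.fit(X, y)
```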

#### method `__init__`

`__init__`

**property base_estimator_type**

**property base_module_to_compile**

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

**Returns:**

- **module** (nn.Module): the instantiated torch module

**property classes_**

**property fhe_circuit**

Get the FHE circuit.

**Returns:**

- **Circuit**: the FHE circuit

**property history**

**property input_quantizers**

Get the input quantizers.

**Returns:**

- **List[Quantizer]**: the input quantizers

**property n_bits_quant**

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

**Returns:**

- **n_bits** (int): the number of bits used to quantize the network

**Raises:**

- **ValueError**: with skorch estimators, `module_` is not instantiated until `.fit()` is called, so this estimator must be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.

**property onnx_model**

Get the ONNX model.

**Returns:**

- **_onnx_model_** (onnx.ModelProto): the ONNX model

**property output_quantizers**

Get the output quantizers.

**Returns:**

- **List[QuantizedArray]**: the output quantizers

**property quantize_input**

Get the input quantization function.

**Returns:**

- **Callable**: function that quantizes the input

#### method `fit`

`fit`

#### method `get_params`

`get_params`

Get parameters for this estimator.

**Args:**

- **deep** (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

- **\*\*kwargs**: any additional parameters to pass to the sklearn `BaseEstimator` class.

**Returns:**

- **params** (dict): Parameter names mapped to their values.

#### method `get_params_for_benchmark`

`get_params_for_benchmark`

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch accepts either a class or a class instance for the `module` parameter. We want to pass our trained model, a class instance, but for this to work we need to remove all module-related constructor parameters. Otherwise, skorch will instantiate a new instance of the same type as the passed module (see `NeuralNet::initialize_instance` in skorch's `net.py`).

**Returns:**

- **params** (dict): parameters to create an equivalent fp32 sklearn estimator for benchmarking

#### method `infer`

`infer`

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

**Args:**

- **x** (torch.Tensor): A batch of the input data, produced by a Dataset.

- **\*\*fit_params** (dict): Additional parameters passed to the `forward` method of the module and to the `self.train_split` call.

**Returns:** A torch tensor with the inference results for each item in the input

#### method `on_train_end`

`on_train_end`

Callback invoked by the skorch wrapper when training is finished.

Check if the underlying neural net has a callback for this event and, if so, call it.

**Args:**

- **net**: estimator for which training has ended (equal to `self`)

- **X**: data

- **y**: targets

- **kwargs**: other arguments

#### method `predict`

`predict`

Predict on user provided data.

Predicts using the quantized classifier, either in the clear or in FHE.

**Args:**

- **X**: input data, a numpy array of raw values (non-quantized)

- **execute_in_fhe**: whether to execute the inference in FHE or in the clear

**Returns:**

- **y_pred**: numpy ndarray with predictions

### class `NeuralNetRegressor`

`NeuralNetRegressor`

Scikit-learn interface for quantized FHE compatible neural networks.

This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi layer fully connected network trained with Quantization Aware Training (QAT).

The datatypes that are allowed for prediction by this wrapper are more restricted than those of standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.

#### method `__init__`

`__init__`

**property base_estimator_type**

**property base_module_to_compile**

Get the module that should be compiled to FHE. In our case this is a torch nn.Module.

**Returns:**

- **module** (nn.Module): the instantiated torch module

**property fhe_circuit**

Get the FHE circuit.

**Returns:**

- **Circuit**: the FHE circuit

**property history**

**property input_quantizers**

Get the input quantizers.

**Returns:**

- **List[Quantizer]**: the input quantizers

**property n_bits_quant**

Return the number of quantization bits.

This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.

**Returns:**

- **n_bits** (int): the number of bits used to quantize the network

**Raises:**

- **ValueError**: with skorch estimators, `module_` is not instantiated until `.fit()` is called, so this estimator must be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.

**property onnx_model**

Get the ONNX model.

**Returns:**

- **_onnx_model_** (onnx.ModelProto): the ONNX model

**property output_quantizers**

Get the output quantizers.

**Returns:**

- **List[QuantizedArray]**: the output quantizers

**property quantize_input**

Get the input quantization function.

**Returns:**

- **Callable**: function that quantizes the input

#### method `fit`

`fit`

#### method `get_params`

`get_params`

Get parameters for this estimator.

**Args:**

- **deep** (bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.

- **\*\*kwargs**: any additional parameters to pass to the sklearn `BaseEstimator` class.

**Returns:**

- **params** (dict): Parameter names mapped to their values.

#### method `get_params_for_benchmark`

`get_params_for_benchmark`

Get parameters for benchmark when cloning a skorch wrapped NN.

We must remove all parameters related to the module. Skorch accepts either a class or a class instance for the `module` parameter. We want to pass our trained model, a class instance, but for this to work we need to remove all module-related constructor parameters. Otherwise, skorch will instantiate a new instance of the same type as the passed module (see `NeuralNet::initialize_instance` in skorch's `net.py`).

**Returns:**

- **params** (dict): parameters to create an equivalent fp32 sklearn estimator for benchmarking

#### method `infer`

`infer`

Perform a single inference step on a batch of data.

This method is specific to Skorch estimators.

**Args:**

- **x** (torch.Tensor): A batch of the input data, produced by a Dataset.

- **\*\*fit_params** (dict): Additional parameters passed to the `forward` method of the module and to the `self.train_split` call.

**Returns:** A torch tensor with the inference results for each item in the input

#### method `on_train_end`

`on_train_end`

Callback invoked by the skorch wrapper when training is finished.

Check if the underlying neural net has a callback for this event and, if so, call it.

**Args:**

- **net**: estimator for which training has ended (equal to `self`)

- **X**: data

- **y**: targets

- **kwargs**: other arguments
