# concrete.ml.quantization.quantized_ops


## module `concrete.ml.quantization.quantized_ops`

Quantized versions of the ONNX operators for post training quantization.

## class `QuantizedSigmoid`

Quantized sigmoid op.

## class `QuantizedHardSigmoid`

Quantized HardSigmoid op.

## class `QuantizedRelu`

Quantized Relu op.

## class `QuantizedPRelu`

Quantized PRelu op.

## class `QuantizedLeakyRelu`

Quantized LeakyRelu op.

## class `QuantizedHardSwish`

Quantized Hardswish op.

## class `QuantizedElu`

Quantized Elu op.

## class `QuantizedSelu`

Quantized Selu op.

## class `QuantizedCelu`

Quantized Celu op.

## class `QuantizedClip`

Quantized clip op.

## class `QuantizedRound`

Quantized round op.

## class `QuantizedPow`

Quantized pow op.

Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

Power raising can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x ** (x + 1) where x is an integer tensor.
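As an illustration of this fusion criterion, a function of a single integer tensor can be replaced by a table lookup (TLU) over all values that tensor can take. The sketch below uses plain NumPy; `make_tlu` is an illustrative helper, not part of the Concrete ML API.

```python
import numpy as np

def make_tlu(fn, n_bits):
    # Tabulate fn over every value a signed n-bit integer can take.
    inputs = np.arange(-2 ** (n_bits - 1), 2 ** (n_bits - 1))
    return inputs, fn(inputs.astype(np.float64))

# f(x) = x ** (x + 1) depends on a single integer tensor x,
# so it can be evaluated with one table lookup (TLU).
inputs, table = make_tlu(lambda x: x ** (x + 1), n_bits=3)

x = np.array([1, 2, 3])
# Look up each value of x in the table instead of computing in float.
result = table[np.searchsorted(inputs, x)]
```

The float computation happens only while building the table; at execution time the encrypted integer is used purely as an index.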

**Returns:**

**bool**: Can fuse

## class `QuantizedGemm`

Quantized Gemm op.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

The Gemm operation cannot be fused since it must be performed over integer tensors and combines different values of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers
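The reasoning can be made concrete with the standard uniform-quantization formulation of a matrix product: each output element accumulates products of different encrypted values, which is what prevents fusion into a single table lookup. A NumPy sketch, with example scales and zero-points (assumed values, not taken from the library):

```python
import numpy as np

# Example quantization parameters (assumed for illustration).
s_x, z_x = 0.05, 2   # input scale / zero-point
s_w, z_w = 0.01, 0   # weight scale / zero-point

x = np.array([[0.1, 0.2], [0.3, 0.4]])
w = np.array([[0.01, 0.02], [0.03, 0.04]])

# Quantize to integers.
q_x = np.round(x / s_x).astype(np.int64) + z_x
q_w = np.round(w / s_w).astype(np.int64) + z_w

# Integer matmul: every output element sums products of *different*
# encrypted values, which is why this op cannot be fused into a TLU.
acc = (q_x - z_x) @ (q_w - z_w)

# Dequantize: rescale the integer accumulator back to float.
y = s_x * s_w * acc
```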

### method `q_impl`

## class `QuantizedMatMul`

Quantized MatMul op.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

The MatMul operation cannot be fused since it must be performed over integer tensors and combines different values of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers

### method `q_impl`

## class `QuantizedAdd`

Quantized Addition operator.

Can add either two encrypted variables or an encrypted variable and a constant.

### method `can_fuse`

Determine if this op can be fused.

Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

**Returns:**

**bool**: Whether the number of integer input tensors allows computing this op as a TLU

### method `q_impl`

## class `QuantizedTanh`

Quantized Tanh op.

## class `QuantizedSoftplus`

Quantized Softplus op.

## class `QuantizedExp`

Quantized Exp op.

## class `QuantizedLog`

Quantized Log op.

## class `QuantizedAbs`

Quantized Abs op.

## class `QuantizedIdentity`

Quantized Identity op.

### method `q_impl`

## class `QuantizedReshape`

Quantized Reshape op.

### method `q_impl`

Reshape the input integer encrypted tensor.

**Args:**

**q_inputs**: an encrypted integer tensor at index 0 and one constant shape at index 1

**attrs**: additional optional reshape options

**Returns:**

**result** (QuantizedArray): reshaped encrypted integer tensor

## class `QuantizedConv`

Quantized Conv op.

### method `__init__`

Construct the quantized convolution operator and retrieve parameters.

**Args:**

**n_bits_output**: number of bits for the quantization of the outputs of this operator

**int_input_names**: names of integer tensors that are taken as input for this operation

**constant_inputs**: the weights and activations

**input_quant_opts**: options for the input quantizer

**attrs**: convolution options

**dilations** (Tuple[int]): dilation of the kernel, default 1 on all dimensions.

**group** (int): number of convolution groups, default 1

**kernel_shape** (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv

**pads** (Tuple[int]): padding in ONNX format (begin, end) on each axis

**strides** (Tuple[int]): stride of the convolution on each axis

### method `can_fuse`

Determine if this op can be fused.

The Conv operation cannot be fused since it must be performed over integer tensors and combines different elements of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers

### method `q_impl`

Compute the quantized convolution between two quantized tensors.

Allows an optional quantized bias.

**Args:**

**q_inputs**: input tuple, contains

**x** (numpy.ndarray): input data. Shape is N x C x H x W for 2d

**w** (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

**b** (numpy.ndarray, Optional): bias tensor, Shape is (O,)

**attrs**: convolution options handled in constructor

**Returns:**

**res** (QuantizedArray): result of the quantized integer convolution
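As a rough sketch of the integer arithmetic behind this op (ignoring bias, padding, strides and the output requantization step), a naive integer 2D convolution with the shapes documented above can be written as follows. The helper `int_conv2d` and its zero-point handling are illustrative, not the library implementation.

```python
import numpy as np

def int_conv2d(q_x, q_w, z_x=0, z_w=0):
    # Naive integer 2D convolution, stride 1, no padding.
    # q_x: (N, C, H, W) integer input, q_w: (O, C, Kh, Kw) integer weights.
    n, c, h, w = q_x.shape
    o, _, kh, kw = q_w.shape
    out = np.zeros((n, o, h - kh + 1, w - kw + 1), dtype=np.int64)
    for i in range(out.shape[2]):
        for j in range(out.shape[3]):
            # Each output sums products of different encrypted values,
            # so the computation stays in the integer domain.
            patch = q_x[:, :, i:i + kh, j:j + kw] - z_x
            out[:, :, i, j] = np.tensordot(
                patch, q_w - z_w, axes=([1, 2, 3], [1, 2, 3])
            )
    return out

# 1x1x3x3 input, one 2x2 kernel of ones: each output is a 2x2 patch sum.
q_x = np.arange(9).reshape(1, 1, 3, 3)
q_w = np.ones((1, 1, 2, 2), dtype=np.int64)
res = int_conv2d(q_x, q_w)
```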

## class `QuantizedAvgPool`

Quantized Average Pooling op.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

The Average Pooling operation cannot be fused since it must be performed over integer tensors and combines different elements of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers

### method `q_impl`

## class `QuantizedPad`

Quantized Padding op.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

The Pad operation cannot be fused since it must be performed over integer tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it manipulates integer tensors

## class `QuantizedWhere`

Where operator on quantized arrays.

Supports only constants for the results produced on the True/False branches.

### method `__init__`

## class `QuantizedCast`

Cast the input to the required data type.

In FHE we only support a limited number of output types. Booleans are cast to integers.

## class `QuantizedGreater`

Comparison operator >.

Only supports comparison with a constant.

### method `__init__`

## class `QuantizedGreaterOrEqual`

Comparison operator >=.

Only supports comparison with a constant.

### method `__init__`

## class `QuantizedLess`

Comparison operator <.

Only supports comparison with a constant.

### method `__init__`

## class `QuantizedLessOrEqual`

Comparison operator <=.

Only supports comparison with a constant.

### method `__init__`

## class `QuantizedOr`

Or operator ||.

This operation does not really work as a quantized operation; it only works when fused, as in e.g. Act(x) = x || (x + 42).

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

Or can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.

**Returns:**

**bool**: Can fuse

## class `QuantizedDiv`

Div operator /.

This operation does not really work as a quantized operation; it only works when fused, as in e.g. Act(x) = 1000 / (x + 42).

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

Div can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x / (x + 1) where x is an integer tensor.

**Returns:**

**bool**: Can fuse

## class `QuantizedMul`

Multiplication operator.

Multiplication operator.

Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.

### method `__init__`

### method `can_fuse`

Determine if this op can be fused.

Multiplication can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x * (x + 1) where x is an integer tensor.

**Returns:**

**bool**: Can fuse

## class `QuantizedSub`

Subtraction operator.

Works the same way as addition, for both encrypted - encrypted and encrypted - constant operands.

### method `can_fuse`

Determine if this op can be fused.

The Sub operation, like Add, can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

**Returns:**

**bool**: Whether the number of integer input tensors allows computing this op as a TLU

### method `q_impl`

## class `QuantizedBatchNormalization`

Quantized Batch normalization with encrypted input and in-the-clear normalization params.

## class `QuantizedFlatten`

Quantized flatten for encrypted inputs.

### method `can_fuse`

Determine if this op can be fused.

The Flatten operation cannot be fused since it must be performed over integer tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it manipulates integer tensors.

### method `q_impl`

Flatten the input integer encrypted tensor.

**Args:**

**q_inputs**: an encrypted integer tensor at index 0

**attrs**: contains axis attribute

**Returns:**

**result** (QuantizedArray): reshaped encrypted integer tensor

## class `QuantizedReduceSum`

ReduceSum with encrypted input.

This operator is currently an experimental feature.

### method `__init__`

Construct the quantized ReduceSum operator and retrieve parameters.

**Args:**

**n_bits_output** (int): Number of bits for the operator's quantization of outputs.

**int_input_names** (Optional[Set[str]]): Names of input integer tensors. Default to None.

**constant_inputs** (Optional[Dict]): Input constant tensor.

**axes** (Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.

**input_quant_opts** (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

**attrs** (dict): ReduceSum options.

**keepdims** (int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.

**noop_with_empty_axes** (int): Defines behavior if 'axes' is empty or set to None. Default behavior with 0 is to reduce all axes. When axes is empty and this attribute is set to 1, the input tensor will not be reduced, and the output tensor is equivalent to the input tensor. Default to 0.

### method `calibrate`

Create corresponding QuantizedArray for the output of the activation function.

**Args:**

***inputs** (numpy.ndarray): Calibration sample inputs.

**Returns:**

**numpy.ndarray**: The output values for the provided calibration samples.

### method `q_impl`

Sum the encrypted tensor's values over axis 1.

**Args:**

**q_inputs** (QuantizedArray): An encrypted integer tensor at index 0.

**attrs** (Dict): Contains axis attribute.

**Returns:**

**(QuantizedArray)**: The sum of all values along axis 1 as an encrypted integer tensor.

### method `tree_sum`

Large sum without overflow (only MSB remains).

**Args:**

**input_qarray**: Encrypted integer tensor.

**is_calibration**: Whether we are calibrating the tree sum. If so, it will create all the quantizers for the downscaling.

**Returns:**

**(numpy.ndarray)**: The MSB (based on the precision self.n_bits) of the integer sum.
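The principle can be sketched in NumPy: summing values pairwise and halving after each level keeps intermediate results within the input bit-width, so only the most-significant bits of the true sum survive. The helper below is illustrative, not the library implementation.

```python
import numpy as np

def tree_sum_msb(values):
    # Pairwise-sum a power-of-two length array, dividing by 2 at each
    # level so the running value never exceeds the input bit-width.
    # The result is approximately (sum of values) // len(values):
    # only the most-significant bits of the sum remain.
    values = np.asarray(values, dtype=np.int64)
    while values.size > 1:
        values = (values[0::2] + values[1::2]) // 2
    return values[0]

# 8 values in [0, 15]: exact sum is 60; the tree keeps 60 // 8 = 7.
msb = tree_sum_msb([15, 15, 10, 10, 2, 2, 3, 3])
```

Each level adds two same-width integers and immediately rescales, which is how the sum avoids accumulator overflow at the cost of dropping low-order bits.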

## class `QuantizedErf`

Quantized erf op.

## class `QuantizedNot`

Quantized Not op.

## class `QuantizedBrevitasQuant`

Brevitas uniform quantization with encrypted input.

### method `__init__`

Construct the Brevitas quantization operator.

**Args:**

**n_bits_output** (int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX.

**int_input_names** (Optional[Set[str]]): Names of input integer tensors. Default to None.

**constant_inputs** (Optional[Dict]): Input constant tensor.

**scale** (float): Quantizer scale

**zero_point** (float): Quantizer zero-point

**bit_width** (int): Number of bits of the integer representation

**input_quant_opts** (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

**attrs** (dict):

**rounding_mode** (str): Rounding mode (default and only accepted option is "ROUND")

**signed** (int): Whether this op quantizes to signed integers (default 1)

**narrow** (int): Whether this op quantizes to a narrow range of integers e.g. [-2**n_bits-1 .. 2**n_bits-1] (default 0)
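The quantization applied by this operator follows the standard uniform scheme driven by the scale, zero_point and bit_width parameters above. A minimal sketch for the signed case (illustrative, not the library implementation):

```python
import numpy as np

def uniform_quant(x, scale, zero_point, bit_width, narrow=0):
    # Signed uniform quantization: map to the integer grid, round, clip.
    # Narrow range drops the most negative value (e.g. [-127, 127] for 8 bits).
    q_min = -(2 ** (bit_width - 1)) + (1 if narrow else 0)
    q_max = 2 ** (bit_width - 1) - 1
    q = np.clip(np.round(x / scale + zero_point), q_min, q_max)
    # Dequantize back to float for reference.
    return q, (q - zero_point) * scale

q, x_dq = uniform_quant(
    np.array([-1.0, 0.04, 1.0]), scale=0.01, zero_point=0, bit_width=8
)
```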

### method `q_impl`

Quantize values.

**Args:**

**q_inputs**: an encrypted integer tensor at index 0 and one constant shape at index 1

**attrs**: additional optional quantization options

**Returns:**

**result** (QuantizedArray): quantized encrypted integer tensor

## class `QuantizedTranspose`

Transpose operator for quantized inputs.

This operator performs quantization, transposes the encrypted data, then dequantizes again.

### method `q_impl`

Transpose the input encrypted integer tensor.

**Args:**

**q_inputs**: an encrypted integer tensor at index 0 and one constant shape at index 1

**attrs**: additional optional transpose options

**Returns:**

**result** (QuantizedArray): transposed encrypted integer tensor