# concrete.ml.quantization.quantized_ops


module

`concrete.ml.quantization.quantized_ops`

Quantized versions of the ONNX operators for post training quantization.

class

`QuantizedSigmoid`

Quantized sigmoid op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedHardSigmoid`

Quantized HardSigmoid op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedRelu`

Quantized Relu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedPRelu`

Quantized PRelu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedLeakyRelu`

Quantized LeakyRelu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedHardSwish`

Quantized Hardswish op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedElu`

Quantized Elu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedSelu`

Quantized Selu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedCelu`

Quantized Celu op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedClip`

Quantized clip op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedRound`

Quantized round op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedPow`

Quantized pow op.

Works only for a constant float power. This operation will be fused into a (potentially larger) TLU.
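Since the exponent is a constant, `x ** p` is a univariate function of `x` and can be tabulated over every possible quantized input value. A minimal sketch of this idea with numpy (illustrative only, not the Concrete ML API; the scale and zero point are hypothetical):

```python
import numpy as np

# With a constant exponent p, x ** p is univariate in x, so it can be
# tabulated once per possible quantized input value: this table is the TLU.
bits = 4
p = 2.5                       # constant float power
scale, zero_point = 0.1, 0    # hypothetical input quantization parameters

q_values = np.arange(2**bits)                    # all possible integer inputs
table = (scale * (q_values - zero_point)) ** p   # one entry per input value

# Applying the op to a quantized tensor is then a single table lookup.
q_x = np.array([0, 4, 8, 15])
result = table[q_x]
```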

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedGemm`

Quantized Gemm op.

method

`__init__`

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The Gemm operation cannot be fused since it must be performed over integer tensors and it combines different values of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers.

method

`q_impl`
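The integer arithmetic behind a quantized matrix product can be sketched as follows (illustrative only, not the Concrete ML implementation; the `quantize` helper and its parameters are hypothetical). With per-tensor quantization `x ≈ s_x * (q_x - z_x)`, the float product factors into an integer matmul rescaled once at the end:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (2, 4))
w = rng.uniform(-1, 1, (4, 3))

def quantize(v, bits=8):
    # Hypothetical uniform affine quantizer: v ~= s * (q - z)
    s = (v.max() - v.min()) / (2**bits - 1)
    z = int(np.round(-v.min() / s))
    q = np.clip(np.round(v / s) + z, 0, 2**bits - 1).astype(np.int64)
    return q, s, z

q_x, s_x, z_x = quantize(x)
q_w, s_w, z_w = quantize(w)

# All accumulation happens on integers; scales are applied once at the end.
int_acc = (q_x - z_x) @ (q_w - z_w)
y = s_x * s_w * int_acc
```

The result `y` approximates `x @ w` up to quantization error, which is why the operation must run on integer tensors rather than be absorbed into a table lookup.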

class

`QuantizedMatMul`

Quantized MatMul op.

method

`__init__`

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The MatMul operation cannot be fused since it must be performed over integer tensors and it combines different values of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers.

method

`q_impl`

class

`QuantizedAdd`

Quantized Addition operator.

Can add either two variables (both encrypted) or a variable and a constant.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression `x + x * 1.75`, where `x` is an encrypted tensor, can be computed with a single TLU.

**Returns:**

**bool**: Whether the number of integer input tensors allows computing this op as a TLU.
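The fusion case from the example above can be sketched with numpy (illustrative only, not the Concrete ML API): because `x + x * 1.75` depends on a single integer tensor, the whole expression can be tabulated once per possible input value.

```python
import numpy as np

# Sketch: x + x * 1.75 is univariate in x, so one table covers it entirely.
bits = 3
q_values = np.arange(2**bits)          # all possible quantized inputs
table = q_values + q_values * 1.75     # tabulate the fused float expression

q_x = np.array([0, 3, 5, 7])
fused = table[q_x]                     # one TLU evaluation
direct = q_x + q_x * 1.75              # same result, computed directly
```

With two independent encrypted inputs no such single-input table exists, which is why the fusion test counts the integer input tensors.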

method

`q_impl`

class

`QuantizedTanh`

Quantized Tanh op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedSoftplus`

Quantized Softplus op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedExp`

Quantized Exp op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedLog`

Quantized Log op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedAbs`

Quantized Abs op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

class

`QuantizedIdentity`

Quantized Identity op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`q_impl`

class

`QuantizedReshape`

Quantized Reshape op.

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`q_impl`

Reshape the input encrypted integer tensor.

**Args:**

**q_inputs**: an encrypted integer tensor at index 0 and one constant shape at index 1

**attrs**: additional optional reshape options

**Returns:**

**result** (QuantizedArray): reshaped encrypted integer tensor
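Reshape only reorders the integer values; the quantization parameters are unchanged, so dequantizing before or after the reshape gives the same result. A minimal sketch with numpy (illustrative only; the scale and zero point are hypothetical):

```python
import numpy as np

q_x = np.arange(12, dtype=np.int64)    # quantized (integer) tensor
scale, zero_point = 0.05, 3            # hypothetical quantizer parameters

reshaped = q_x.reshape(3, 4)           # constant target shape

# Dequantizing before or after the reshape gives identical float values.
before = (scale * (q_x - zero_point)).reshape(3, 4)
after = scale * (reshaped - zero_point)
```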

class

`QuantizedConv`

Quantized Conv op.

method

`__init__`

Construct the quantized convolution operator and retrieve parameters.

**Args:**

**n_bits_output**: number of bits for the quantization of the outputs of this operator

**int_input_names**: names of integer tensors that are taken as input for this operation

**constant_inputs**: the weights and activations

**input_quant_opts**: options for the input quantizer

**attrs**: convolution options

**dilations** (Tuple[int]): dilation of the kernel, default 1 on all dimensions

**group** (int): number of convolution groups, default 1

**kernel_shape** (Tuple[int]): shape of the kernel, should have 2 elements for a 2D conv

**pads** (Tuple[int]): padding in ONNX format (begin, end) on each axis

**strides** (Tuple[int]): stride of the convolution on each axis
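The parameters above jointly determine the output spatial size of the convolution. A small sketch of the standard ONNX-convention formula (illustrative; the helper name is our own):

```python
# Output size along one spatial axis, with ONNX-style (begin, end) padding.
def conv_output_size(in_size, kernel, pad_begin, pad_end, stride, dilation):
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_size + pad_begin + pad_end - effective_kernel) // stride + 1

# 32-wide input, 3-wide kernel, padding 1 on each side, stride 1: size kept.
out_h = conv_output_size(32, 3, 1, 1, 1, 1)   # -> 32
# Same input, no padding, stride 2: downsampled.
out_w = conv_output_size(32, 3, 0, 0, 2, 1)   # -> 15
```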

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The Conv operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers.

method

`q_impl`

Compute the quantized convolution between two quantized tensors.

Allows an optional quantized bias.

**Args:**

**q_inputs**: input tuple, contains:

**x** (numpy.ndarray): input data, shape is N x C x H x W for 2D

**w** (numpy.ndarray): weights tensor, shape is O x I x Kh x Kw for 2D

**b** (numpy.ndarray, Optional): bias tensor, shape is (O,)

**attrs**: convolution options handled in the constructor

**Returns:**

**res** (QuantizedArray): result of the quantized integer convolution
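The core arithmetic can be sketched on a single channel with numpy (illustrative only, not the Concrete ML implementation; all quantization parameters are hypothetical): accumulation happens on integers, and the product of the input and weight scales is applied once per output.

```python
import numpy as np

s_x, z_x = 0.1, 5     # hypothetical input quantizer (scale, zero point)
s_w, z_w = 0.2, 2     # hypothetical weight quantizer

q_x = np.array([[5, 6, 7], [8, 9, 10], [11, 12, 13]])   # 3x3 quantized input
q_w = np.array([[2, 3], [4, 1]])                        # 2x2 quantized kernel

kh, kw = q_w.shape
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        # Integer-only accumulation over the receptive field.
        acc = np.sum((q_x[i:i+kh, j:j+kw] - z_x) * (q_w - z_w))
        out[i, j] = s_x * s_w * acc   # dequantize once per output value
```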

class

`QuantizedAvgPool`

Quantized Average Pooling op.

method

`__init__`

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The Avg Pooling operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it adds different encrypted integers.

method

`q_impl`

class

`QuantizedPad`

Quantized Padding op.

method

`__init__`

**property op_type**

Get the type of this operation.

**Returns:**

**op_type** (str): The type of this operation, in the ONNX referential.

method

`can_fuse`

Determine if this op can be fused.

The Pad operation cannot be fused since it must be performed over integer tensors.

**Returns:**

**bool**: False, this operation cannot be fused as it manipulates integer tensors.

class

`QuantizedWhere`

Where operator on quantized arrays.

Supports only constants for the results produced on the True/False branches.
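With constant branch outputs, Where reduces to a univariate function of its condition input, so it can be tabulated like any other fusable op. A minimal sketch with numpy (illustrative only, not the Concrete ML API; the threshold and branch values are hypothetical):

```python
import numpy as np

true_val, false_val = 1.5, -0.5        # constant True/False branch outputs
bits = 3
threshold = 4

# Tabulate where(x > threshold, true_val, false_val) over all possible
# quantized input values: one table entry per input, i.e. a TLU.
q_values = np.arange(2**bits)
table = np.where(q_values > threshold, true_val, false_val)

q_x = np.array([2, 4, 5, 7])
result = table[q_x]
```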

method

`__init__`