concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post training quantization.
QuantizedSigmoid
Quantized sigmoid op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedHardSigmoid
Quantized HardSigmoid op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedRelu
Quantized Relu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedPRelu
Quantized PRelu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLeakyRelu
Quantized LeakyRelu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedHardSwish
Quantized HardSwish op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedElu
Quantized Elu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSelu
Quantized Selu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedCelu
Quantized Celu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedClip
Quantized clip op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedRound
Quantized round op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedPow
Quantized pow op.
Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.
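Since the exponent is a constant, the whole operation is a univariate function of its single encrypted input and can therefore be tabulated over every possible quantized input value. The sketch below (plain numpy, hypothetical quantization parameters, not the Concrete ML implementation) illustrates this idea.
```python
import numpy as np

n_bits = 4
exponent = 2.5                              # constant float power
scale, zero_point = 0.1, 8                  # hypothetical input quantization parameters

# Tabulate the op over all possible quantized input values (this is the TLU).
q_values = np.arange(2**n_bits)
dequantized = scale * (q_values - zero_point)
table = np.power(np.abs(dequantized), exponent)  # abs() keeps the example real-valued

# Applying the op is then a single table lookup per element; in practice the
# table outputs would themselves be re-quantized.
q_x = np.array([3, 8, 12])
result = table[q_x]
```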
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedGemm
Quantized Gemm op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
The Gemm operation cannot be fused since it must be performed over integer tensors and it combines different values of the input tensors (see the sketch after the return description below).
Returns:
bool
: False, this operation cannot be fused as it adds different encrypted integers
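The sketch below (plain numpy, purely illustrative) contrasts an element-wise op, which can be tabulated per element, with Gemm, whose output elements each mix several encrypted values and therefore cannot be produced by a per-element table lookup.
```python
import numpy as np

x = np.arange(6).reshape(2, 3)          # stand-in for an encrypted integer tensor
w = np.ones((3, 4), dtype=np.int64)     # constant weights

# An element-wise op depends only on x[i, j] and can be fused into a TLU:
elementwise = np.maximum(x, 0)

# Each Gemm output element sums several encrypted values (a whole row of x),
# so it cannot be replaced by a per-element table lookup and must be computed
# as an integer matrix product:
gemm = x @ w
```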
q_impl
QuantizedMatMul
Quantized MatMul op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
MatMul, like Gemm, cannot be fused since it must be performed over integer tensors and it combines different values of the input tensors.
Returns:
bool
: False, this operation cannot be fused as it adds different encrypted integers
q_impl
QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
The Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU (see the sketch after the return description below).
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
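The sketch below (plain numpy, purely illustrative) shows the two cases: when both operands derive from the same encrypted tensor, the expression is a univariate function of that tensor and can be fused into one TLU; with two distinct encrypted inputs, the addition must stay an integer operation.
```python
import numpy as np

def fusable(x):
    # Both operands derive from the same encrypted tensor x, so the whole
    # expression is a univariate function of x -> a single TLU suffices.
    return x + x * 1.75

def not_fusable(x, y):
    # Two distinct encrypted inputs: the sum must be computed on integers.
    return x + y

x = np.array([0.5, -1.0, 2.0])
y = np.array([1.0, 1.0, 1.0])
print(fusable(x), not_fusable(x, y))
```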
q_impl
QuantizedTanh
Quantized Tanh op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSoftplus
Quantized Softplus op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedExp
Quantized Exp op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLog
Quantized Log op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedAbs
Quantized Abs op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedIdentity
Quantized Identity op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
QuantizedReshape
Quantized Reshape op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
Reshape the encrypted integer input tensor; a conceptual sketch follows the return description below.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional reshape options
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
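Conceptually, Reshape only rearranges the already-quantized integer values: the quantization parameters of the input carry over to the output unchanged. The sketch below uses plain numpy and hypothetical names, not the Concrete ML API.
```python
import numpy as np

q_values = np.arange(12, dtype=np.int64)    # stand-in for the encrypted integer tensor
scale, zero_point = 0.05, 3                 # quantization parameters of the input

new_shape = (3, 4)                          # the constant shape input of the ONNX node
reshaped = q_values.reshape(new_shape)

# The resulting quantized tensor keeps the same scale and zero_point; only its
# shape differs from the input.
```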
QuantizedConv
Quantized Conv op.
__init__
Construct the quantized convolution operator and retrieve its parameters; an example of the convolution attributes follows the argument list below.
Args:
n_bits_output
: number of bits for the quantization of the outputs of this operator
int_input_names
: names of integer tensors that are taken as input for this operation
constant_inputs
: the weights and activations
input_quant_opts
: options for the input quantizer
attrs
: convolution options
dilations
(Tuple[int]): dilation of the kernel, default 1 on all dimensions.
group
(int): number of convolution groups, default 1
kernel_shape
(Tuple[int]): shape of the kernel. Should have 2 elements for a 2D convolution
pads
(Tuple[int]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int]): stride of the convolution on each axis
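As referenced above, the snippet below shows what the convolution attributes might look like for a 3x3, stride-1 convolution with one pixel of padding; the values are illustrative, not library defaults.
```python
attrs = {
    "dilations": (1, 1),      # no dilation
    "group": 1,               # standard (non-grouped) convolution
    "kernel_shape": (3, 3),   # 2 elements for a 2D convolution
    "pads": (1, 1, 1, 1),     # ONNX (begin, end) padding on each spatial axis
    "strides": (1, 1),        # stride 1 on both axes
}
```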
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
The Conv operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation cannot be fused as it adds different encrypted integers
q_impl
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias; a float reference of the computation is sketched after the argument list below.
Args:
q_inputs
: input tuple, contains
x
(numpy.ndarray): input data. Shape is N x C x H x W for 2d
w
(numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b
(numpy.ndarray, optional): bias tensor. Shape is (O,)
attrs
: convolution options handled in constructor
Returns:
res
(QuantizedArray): result of the quantized integer convolution
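The float reference below makes the expected tensor shapes concrete (N x C x H x W input, O x I x Kh x Kw weights, (O,) bias). It is a sketch only: the actual q_impl operates on quantized integer values and re-quantizes its output.
```python
import numpy as np

def conv2d_reference(x, w, b=None, strides=(1, 1), pads=(0, 0, 0, 0)):
    n, c, h, width = x.shape                    # N x C x H x W
    o, i, kh, kw = w.shape                      # O x I x Kh x Kw
    assert c == i, "input channels must match weight channels"
    # pads follows the ONNX (h_begin, w_begin, h_end, w_end) convention
    x = np.pad(x, ((0, 0), (0, 0), (pads[0], pads[2]), (pads[1], pads[3])))
    out_h = (x.shape[2] - kh) // strides[0] + 1
    out_w = (x.shape[3] - kw) // strides[1] + 1
    out = np.zeros((n, o, out_h, out_w))
    for oy in range(out_h):
        for ox in range(out_w):
            patch = x[:, :, oy * strides[0]:oy * strides[0] + kh,
                      ox * strides[1]:ox * strides[1] + kw]
            out[:, :, oy, ox] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3]))
    if b is not None:
        out += b.reshape(1, o, 1, 1)            # bias has shape (O,)
    return out

x = np.random.randn(1, 3, 8, 8)
w = np.random.randn(4, 3, 3, 3)
b = np.random.randn(4)
print(conv2d_reference(x, w, b, pads=(1, 1, 1, 1)).shape)  # (1, 4, 8, 8)
```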
QuantizedAvgPool
Quantized Average Pooling op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
The Average Pooling operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation cannot be fused as it adds different encrypted integers
q_impl
QuantizedPad
Quantized Padding op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
The Pad operation cannot be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation cannot be fused as it manipulates integer tensors