concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post-training quantization.
QuantizedSigmoid
Quantized sigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedHardSigmoid
Quantized HardSigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedRelu
Quantized Relu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedPRelu
Quantized PRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedLeakyRelu
Quantized LeakyRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedHardSwish
Quantized HardSwish op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedElu
Quantized Elu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedSelu
Quantized Selu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedCelu
Quantized Celu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedClip
Quantized clip op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedRound
Quantized round op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedPow
Quantized pow op.
Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.
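For illustration, a hypothetical PyTorch module of the supported shape is sketched below; the module name and exponent value are assumptions, not part of this API.

```python
import torch
from torch import nn


class PowActivation(nn.Module):
    """Hypothetical model: the exponent is a float constant, so the Pow node
    and the surrounding float arithmetic can be absorbed into a single TLU."""

    def forward(self, x):
        return torch.pow(x + 0.5, 2.5)  # constant exponent: the supported pattern
        # torch.pow(x, y) with a variable/encrypted exponent is not supported
```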
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedGemm
Quantized Gemm op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
q_impl
QuantizedMatMul
Quantized MatMul op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
q_impl
QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU (see the sketch below).
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
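As a rough sketch of what fusion means here (assuming 4-bit signed inputs and ignoring Concrete ML's actual quantization bookkeeping), the whole expression x + x * 1.75 can be pre-computed as one lookup table indexed by the single integer input:

```python
import numpy

# Possible values of the single encrypted integer input (assume 4-bit, signed).
input_values = numpy.arange(-8, 8)

# One table entry per possible input value: the float expression is evaluated
# ahead of time, so at run time only a single table lookup (TLU) is needed.
table = numpy.rint(input_values + input_values * 1.75).astype(numpy.int64)


def fused_expression(q_x):
    # q_x holds integers in [-8, 7]; shift them to index into the table.
    return table[q_x + 8]


print(fused_expression(numpy.array([-3, 0, 5])))  # [-8  0 14] after rounding
```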
q_impl
QuantizedTanh
Quantized Tanh op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedSoftplus
Quantized Softplus op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedExp
Quantized Exp op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedLog
Quantized Log op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedAbs
Quantized Abs op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedIdentity
Quantized Identity op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
q_impl
QuantizedReshape
Quantized Reshape op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Reshape operation can not be fused since it must be performed over integer tensors: it moves elements of the encrypted input tensor to new positions.
Returns:
bool
: False, this operation can not be fused as it rearranges encrypted integers
q_impl
Reshape the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional reshape options
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedConv
Quantized Conv op.
__init__
Construct the quantized convolution operator and retrieve parameters.
Args:
n_bits_output
: number of bits for the quantization of the outputs of this operator
op_instance_name
(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names
: names of integer tensors that are taken as input for this operation
constant_inputs
: the weights and activations
input_quant_opts
: options for the input quantizer
attrs
: convolution options
dilations
(Tuple[int]): dilation of the kernel. Default to 1 on all dimensions.
group
(int): number of convolution groups. Default to 1.
kernel_shape
(Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv
pads
(Tuple[int]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int]): stride of the convolution on each axis
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
q_impl
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias.
Args:
q_inputs
: input tuple, contains
x
(numpy.ndarray): input data. Shape is N x C x H x W for 2d
w
(numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b
(numpy.ndarray, Optional): bias tensor. Shape is (O,)
calibrate_rounding
(bool): Whether to calibrate rounding
attrs
: convolution options handled in constructor
Returns:
res
(QuantizedArray): result of the quantized integer convolution
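As a sketch of the expected tensor layouts, a toy convolution model is shown below; the model, shapes and the compile_torch_model call are assumptions about typical usage, not a specification of this class's interface.

```python
import numpy
import torch
from torch import nn


class TinyConv(nn.Module):
    """Hypothetical single-convolution model; the Conv node becomes a
    QuantizedConv after ONNX export and post-training quantization."""

    def __init__(self):
        super().__init__()
        # Weights w: shape (O x I x Kh x Kw) = (4, 3, 3, 3); bias b: shape (O,) = (4,)
        self.conv = nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


# Calibration inputs follow the N x C x H x W layout described above.
calibration_data = numpy.random.rand(10, 3, 8, 8).astype(numpy.float32)

# Assuming the usual Concrete ML entry point for post-training quantization,
# where n_bits controls the quantization of inputs, weights and outputs.
from concrete.ml.torch.compile import compile_torch_model

quantized_module = compile_torch_model(TinyConv(), calibration_data, n_bits=4)
```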
QuantizedAvgPool
Quantized Average Pooling op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
q_impl
QuantizedMaxPool
Quantized Max Pooling op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Max Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation can not be fused as it adds different encrypted integers
q_impl
QuantizedPad
Quantized Padding op.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Pad operation cannot be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation cannot be fused as it manipulates integer tensors
q_impl
QuantizedWhere
Where operator on quantized arrays.
Supports only constants for the results produced on the True/False branches.
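To illustrate the constraint with plain numpy (not this class's interface), the values produced on the True/False branches must be constants:

```python
import numpy

x = numpy.array([-2.0, 0.5, 3.0])

numpy.where(x > 0, 1.0, -1.0)   # constant branch values: the supported pattern
# numpy.where(x > 0, x, x * 2)  # tensor-valued branches: not supported here
```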
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedCast
Cast the input to the required data type.
In FHE we only support a limited number of output types. Booleans are cast to integers.
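For instance, the boolean output of a comparison becomes a 0/1 integer tensor (plain numpy illustration):

```python
import numpy

mask = numpy.array([1.5, -0.5, 3.0]) > 0.0  # boolean comparison result
mask.astype(numpy.int64)                    # cast to integers -> array([1, 0, 1])
```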
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedGreater
Comparison operator >.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedGreaterOrEqual
Comparison operator >=.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedLess
Comparison operator <.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedLessOrEqual
Comparison operator <=.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedOr
Or operator ||.
This operation does not work as a stand-alone quantized operation. It is only supported when it is fused into a TLU, e.g., Act(x) = x || (x + 42).
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedDiv
Div operator /.
This operation does not work as a stand-alone quantized operation. It is only supported when it is fused into a TLU, e.g., Act(x) = 1000 / (x + 42).
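For example, a hypothetical activation of this shape is only supported because 1000 / (x + 42) is a univariate function of a single encrypted input, so the division ends up inside the fused TLU:

```python
from torch import nn


class DivActivation(nn.Module):
    """Hypothetical activation containing a division that only works when fused."""

    def forward(self, x):
        return 1000.0 / (x + 42.0)
```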
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedMul
Multiplication operator.
Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedSub
Subtraction operator.
This works the same way as addition, for both encrypted - encrypted and encrypted - constant operands.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Like addition, subtraction can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
q_impl
QuantizedBatchNormalization
Quantized Batch normalization with encrypted input and in-the-clear normalization params.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedFlatten
Quantized flatten for encrypted inputs.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Flatten operation cannot be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation cannot be fused as it manipulates integer tensors.
q_impl
Flatten the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0
attrs
: contains axis attribute
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedReduceSum
ReduceSum with encrypted input.
__init__
Construct the quantized ReduceSum operator and retrieve parameters.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs.
op_instance_name
(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
axes
(Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict): ReduceSum options.
keepdims
(int): Whether to keep the reduced dimensions: 1 keeps them with size 1, 0 removes them along the given axes. Default to 1.
noop_with_empty_axes
(int): Defines the behavior when 'axes' is empty or set to None. With the default of 0, all axes are reduced. When 'axes' is empty and this attribute is set to 1, the input tensor is not reduced and the output tensor is equivalent to the input tensor. Default to 0.
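These attributes follow ONNX ReduceSum semantics, which mirror numpy.sum; a quick plain-numpy reference (not this class's API):

```python
import numpy

x = numpy.arange(12).reshape(3, 4)

numpy.sum(x, axis=1, keepdims=True)   # axes=[1], keepdims=1 -> shape (3, 1)
numpy.sum(x, axis=1, keepdims=False)  # axes=[1], keepdims=0 -> shape (3,)
numpy.sum(x)                          # axes=None, noop_with_empty_axes=0 -> scalar (all axes reduced)
x                                     # axes=None, noop_with_empty_axes=1 -> identity (unchanged tensor)
```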
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: The output values for the provided calibration samples.
q_impl
Sum the encrypted tensor's values along the given axes.
Args:
q_inputs
(QuantizedArray): An encrypted integer tensor at index 0.
calibrate_rounding
(bool): Whether to calibrate rounding or not.
attrs
(Dict): Options are handled in constructor.
Returns:
(QuantizedArray)
: The sum of all values along the given axes.
QuantizedErf
Quantized erf op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedNot
Quantized Not op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedBrevitasQuant
Brevitas uniform quantization with encrypted input.
__init__
Construct the Brevitas quantization operator.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs. Not used; it is overridden by the bit_width provided in the ONNX model.
op_instance_name
(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
scale
(float): Quantizer scale
zero_point
(float): Quantizer zero-point
bit_width
(int): Number of bits of the integer representation
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict): Brevitas quantizer attributes:
rounding_mode
(str): Rounding mode (default and only accepted option is "ROUND")
signed
(int): Whether this op quantizes to signed integers (default 1).
narrow
(int): Whether this op quantizes to a narrow range of integers, e.g., [-2^(n_bits-1)+1 .. 2^(n_bits-1)-1] (default 0).
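As a minimal numpy sketch of what these parameters mean for uniform quantization (illustrative only, assuming signed quantization; it ignores the actual Brevitas/Concrete ML implementation):

```python
import numpy


def uniform_quantize(x, scale, zero_point, bit_width, narrow=0):
    """Sketch of signed uniform quantization parameterized as above."""
    # rounding_mode "ROUND": round to the nearest integer.
    q = numpy.rint(x / scale + zero_point)
    # signed=1: clip to the signed range; narrow=1 drops the most negative value.
    q_min = -(2 ** (bit_width - 1)) + (1 if narrow else 0)
    q_max = 2 ** (bit_width - 1) - 1
    q = numpy.clip(q, q_min, q_max)
    # De-quantized value consumed by the next quantized operator.
    return (q - zero_point) * scale


uniform_quantize(numpy.array([-1.0, 0.07, 0.9]), scale=0.1, zero_point=0.0, bit_width=4)
```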
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
calibrate
Create corresponding QuantizedArray for the output of Quantization function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
q_impl
Quantize values.
Args:
q_inputs
: an encrypted integer tensor at index 0, scale, zero_point, n_bits at indices 1,2,3
attrs
: additional optional attributes
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedTranspose
Transpose operator for quantized inputs.
This operator performs quantization and transposes the encrypted data. When the inputs come from pre-computed QAT (quantization-aware training), the input is only quantized if needed.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Transpose can not be fused since it must be performed over integer tensors as it moves around different elements of these input tensors.
Returns:
bool
: False, this operation can not be fused as it copies encrypted integers
q_impl
Transpose the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional reshape options
Returns:
result
(QuantizedArray): transposed encrypted integer tensor
QuantizedFloor
Quantized Floor op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedMax
Quantized Max op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedMin
Quantized Min op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedNeg
Quantized Neg op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedSign
Quantized Sign op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
QuantizedUnsqueeze
Unsqueeze operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Unsqueeze can not be fused since it must be performed over integer tensors as it reshapes an encrypted tensor.
Returns:
bool
: False, this operation can not be fused as it operates on encrypted tensors
q_impl
Unsqueeze the input tensors on a given axis.
Args:
q_inputs
: an encrypted integer tensor at index 0, axes at index 1
attrs
: additional optional unsqueeze options
Returns:
result
(QuantizedArray): unsqueezed encrypted integer tensor
QuantizedConcat
Concatenate operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Concatenation can not be fused since it must be performed over integer tensors as it copies encrypted integers from one tensor to another.
Returns:
bool
: False, this operation can not be fused as it copies encrypted integers
q_impl
Concatenate the input tensors on a given axis.
Args:
q_inputs
: an encrypted integer tensor
attrs
: additional optional concatenate options
Returns:
result
(QuantizedArray): concatenated encrypted integer tensor
QuantizedSqueeze
Squeeze operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Squeeze can not be fused since it must be performed over integer tensors as it reshapes encrypted tensors.
Returns:
bool
: False, this operation can not be fused as it reshapes encrypted tensors
q_impl
Squeeze the input tensors on a given axis.
Args:
q_inputs
: an encrypted integer tensor at index 0, axes at index 1
attrs
: additional optional squeeze options
Returns:
result
(QuantizedArray): squeezed encrypted integer tensor
ONNXShape
Shape operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
This operation returns the shape of the tensor and thus can not be fused into a univariate TLU.
Returns:
bool
: False, this operation can not be fused
q_impl
ONNXConstantOfShape
ConstantOfShape operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
This operation returns a new encrypted tensor and thus can not be fused.
Returns:
bool
: False, this operation can not be fused
ONNXGather
Gather operator.
Returns values at requested indices from the input tensor.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
This operation returns values from a tensor and thus can not be fused into a univariate TLU.
Returns:
bool
: False, this operation can not be fused
q_impl
ONNXSlice
Slice operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
This operation returns values from a tensor and thus can not be fused into a univariate TLU.
Returns:
bool
: False, this operation can not be fused
q_impl
QuantizedExpand
Expand operator for quantized tensors.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors
can_fuse
Determine if this op can be fused.
Expand can not be fused since it must be performed over integer tensors as it broadcasts an encrypted tensor to a new shape.
Returns:
bool
: False, this operation can not be fused as it operates on encrypted tensors
q_impl
Expand the input tensor to a specified shape.
Args:
q_inputs
: an encrypted integer tensor at index 0, shape at index 1
attrs
: additional optional expand options
Returns:
result
(QuantizedArray): expanded encrypted integer tensor
QuantizedEqual
Comparison operator ==.
Only supports comparison with a constant.
__init__
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]
: the names of the tensors