concrete.ml.quantization.quantized_ops.md

module concrete.ml.quantization.quantized_ops

Quantized versions of the ONNX operators for post-training quantization.

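The operators below share a common lifecycle: construct the op with its output quantization settings, calibrate it on representative float data, then call q_impl on quantized inputs. The following is a minimal sketch of that lifecycle; the import paths and the QuantizedArray constructor arguments are assumptions, not guaranteed API.

```python
import numpy

# Hedged sketch of the typical QuantizedOp lifecycle; import paths and the
# QuantizedArray constructor arguments are assumptions, not guaranteed API.
from concrete.ml.quantization import QuantizedArray
from concrete.ml.quantization.quantized_ops import QuantizedRelu

# 1. Construct the op, choosing the bit-width used to quantize its outputs
q_relu = QuantizedRelu(n_bits_output=8, int_input_names={"x"})

# 2. Calibrate on representative float data to set the output quantization parameters
calibration_data = numpy.random.randn(10, 4)
q_relu.calibrate(calibration_data)

# 3. Run the quantized implementation on a quantized input tensor
q_x = QuantizedArray(8, numpy.random.randn(10, 4))
q_out = q_relu.q_impl(q_x)  # returns a QuantizedArray
```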

class QuantizedSigmoid

Quantized sigmoid op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedHardSigmoid

Quantized HardSigmoid op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedRelu

Quantized Relu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedPRelu

Quantized PRelu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedLeakyRelu

Quantized LeakyRelu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedHardSwish

Quantized Hardswish op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedElu

Quantized Elu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedSelu

Quantized Selu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedCelu

Quantized Celu op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedClip

Quantized clip op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedRound

Quantized round op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedPow

Quantized pow op.

Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedGemm

Quantized Gemm op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedMatMul

Quantized MatMul op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedAdd

Quantized Addition operator.

Can add either two variables (both encrypted) or a variable and a constant

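When both operands are encrypted, the result must be re-expressed in the output quantization. The sketch below is the standard uniform re-quantization identity for inputs x_i = s_i * (q_i - z_i) and an output quantizer with scale s_o and zero-point z_o; it illustrates the arithmetic, not the exact implementation.

```latex
q_o \;=\; z_o + \operatorname{round}\!\left(\frac{s_1}{s_o}\,(q_1 - z_1) + \frac{s_2}{s_o}\,(q_2 - z_2)\right)
```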

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method can_fuse

can_fuse() → bool

Determine if this op can be fused.

Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU

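To illustrate the fusion mentioned above: since x + x * 1.75 depends on a single encrypted integer tensor, the whole float expression can be tabulated over every representable quantized input. The sketch below shows the idea with made-up quantization parameters; it is not Concrete ML's internal fusion code.

```python
import numpy

# Made-up input quantization parameters for the illustration
n_bits = 4
scale, zero_point = 0.05, 8

def dequantize(q):
    return scale * (q - zero_point)

def fused_expression(x):
    # The float expression from the docstring above
    return x + x * 1.75

# Tabulate the expression for every representable quantized input value:
# at runtime, a single table lookup (TLU) replaces the whole float computation.
all_q_values = numpy.arange(2**n_bits)
tlu_table = fused_expression(dequantize(all_q_values))

q_x = 11  # some quantized input value
assert numpy.isclose(tlu_table[q_x], fused_expression(dequantize(q_x)))
```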

method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedTanh

Quantized Tanh op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedSoftplus

Quantized Softplus op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedExp

Quantized Exp op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedLog

Quantized Log op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedAbs

Quantized Abs op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedIdentity

Quantized Identity op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedReshape

Quantized Reshape op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Reshape the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1

  • attrs: additional optional reshape options

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedConv

Quantized Conv op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

Construct the quantized convolution operator and retrieve parameters.

Args:

  • n_bits_output: number of bits for the quantization of the outputs of this operator

  • int_input_names: names of integer tensors that are taken as input for this operation

  • constant_inputs: the weights and activations

  • input_quant_opts: options for the input quantizer

  • attrs: convolution options

  • dilations (Tuple[int]): dilation of the kernel. Default to 1 on all dimensions.

  • group (int): number of convolution groups. Default to 1.

  • kernel_shape (Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int]): stride of the convolution on each axis

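A hedged construction sketch follows. It assumes ONNX Conv input ordering for constant_inputs (weights at index 1, bias at index 2) and that constants are supplied as QuantizedArray objects; these conventions and the import paths are assumptions, not guaranteed API.

```python
import numpy
from concrete.ml.quantization import QuantizedArray
from concrete.ml.quantization.quantized_ops import QuantizedConv

# Hypothetical 2d convolution: 16 output channels, 3 input channels, 3x3 kernel
q_weights = QuantizedArray(8, numpy.random.randn(16, 3, 3, 3))
q_bias = QuantizedArray(8, numpy.random.randn(16))

q_conv = QuantizedConv(
    n_bits_output=8,
    int_input_names={"x"},
    # Assumption: constant inputs keyed by ONNX input index (1: weights, 2: bias)
    constant_inputs={1: q_weights, 2: q_bias},
    kernel_shape=(3, 3),
    pads=(1, 1, 1, 1),
    strides=(1, 1),
    dilations=(1, 1),
    group=1,
)
```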

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Compute the quantized convolution between two quantized tensors.

Allows an optional quantized bias.

Args:

  • q_inputs: input tuple, contains

  • x (numpy.ndarray): input data. Shape is N x C x H x W for 2d

  • w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

  • b (numpy.ndarray, Optional): bias tensor. Shape is (O,)

  • attrs: convolution options handled in constructor

Returns:

  • res (QuantizedArray): result of the quantized integer convolution


class QuantizedAvgPool

Quantized Average Pooling op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedMaxPool

Quantized Max Pooling op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method can_fuse

can_fuse() → bool

Determine if this op can be fused.

The Max Pooling operation cannot be fused since it must be performed over integer tensors and it combines different elements of the input tensors.

Returns:

  • bool: False, this operation cannot be fused as it adds different encrypted integers


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedPad

Quantized Padding op.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method can_fuse

can_fuse() → bool

Determine if this op can be fused.

Pad operation cannot be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation cannot be fused as it manipulates integer tensors


class QuantizedWhere

Where operator on quantized arrays.

Supports only constants for the results produced on the True/False branches.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedCast

Cast the input to the required data type.

In FHE we only support a limited number of output types. Booleans are cast to integers.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedGreater

Comparison operator >.

Only supports comparison with a constant.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedGreaterOrEqual

Comparison operator >=.

Only supports comparison with a constant.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedLess

Comparison operator <.

Only supports comparison with a constant.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedLessOrEqual

Comparison operator <=.

Only supports comparison with a constant.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: QuantizationOptions = None,
    **attrs
) → None

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedOr

Or operator ||.

This operation does not truly work as a quantized operation. It only works when it is fused, as in e.g. Act(x) = x || (x + 42).


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedDiv

Div operator /.

This operation does not truly work as a quantized operation. It only works when it is fused, as in e.g. Act(x) = 1000 / (x + 42).


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedMul

Multiplication operator.

Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedSub

Subtraction operator.

This works the same way as addition, for both encrypted - encrypted and encrypted - constant inputs.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method can_fuse

can_fuse() → bool

Determine if this op can be fused.

Subtraction, like addition, can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.

Returns:

  • bool: Whether the number of integer input tensors allows computing this op as a TLU


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

class QuantizedBatchNormalization

Quantized Batch normalization with encrypted input and in-the-clear normalization params.

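The normalization applied is the standard ONNX BatchNormalization formula, evaluated on the encrypted input x with clear-text scale gamma, bias beta, running mean mu and running variance sigma^2:

```latex
y \;=\; \gamma \cdot \frac{x - \mu}{\sqrt{\sigma^{2} + \epsilon}} + \beta
```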

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedFlatten

Quantized flatten for encrypted inputs.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method can_fuse

can_fuse() → bool

Determine if this op can be fused.

Flatten operation cannot be fused since it must be performed over integer tensors.

Returns:

  • bool: False, this operation cannot be fused as it manipulates integer tensors.


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Flatten the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0

  • attrs: contains axis attribute

Returns:

  • result (QuantizedArray): reshaped encrypted integer tensor


class QuantizedReduceSum

ReduceSum with encrypted input.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None

Construct the quantized ReduceSum operator and retrieve parameters.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs.

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • axes (Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict): ReduceSum options.

  • keepdims (int): Whether to keep the reduced dimension: 1 keeps the input dimension, 0 reduces it along the given axis. Default to 1.

  • noop_with_empty_axes (int): Defines the behavior when 'axes' is empty or set to None. The default (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1, the input tensor is not reduced, and the output tensor is equivalent to the input tensor. Default to 0.

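A hedged construction sketch follows; whether the axes constant is supplied through constant_inputs, and under which key, is an assumption based on the argument list above, not confirmed API.

```python
import numpy
from concrete.ml.quantization.quantized_ops import QuantizedReduceSum

# Sum over axis 1 and drop the reduced dimension
q_sum = QuantizedReduceSum(
    n_bits_output=8,
    int_input_names={"x"},
    # Assumption: the 'axes' constant is passed via constant_inputs (ONNX input index 1)
    constant_inputs={1: numpy.array([1])},
    keepdims=0,
    noop_with_empty_axes=0,
)
```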

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method calibrate

calibrate(*inputs: ndarray) → ndarray

Create corresponding QuantizedArray for the output of the activation function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: The output values for the provided calibration samples.


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Sum the encrypted tensor's values along the given axes.

Args:

  • q_inputs (QuantizedArray): An encrypted integer tensor at index 0.

  • attrs (Dict): Options are handled in constructor.

Returns:

  • (QuantizedArray): The sum of all values along the given axes.


class QuantizedErf

Quantized erf op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedNot

Quantized Not op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedBrevitasQuant

Brevitas uniform quantization with encrypted input.

method __init__

__init__(
    n_bits_output: int,
    int_input_names: Set[str] = None,
    constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
    input_quant_opts: Optional[QuantizationOptions] = None,
    **attrs
) → None

Construct the Brevitas quantization operator.

Args:

  • n_bits_output (int): Number of bits for the operator's quantization of outputs. Not used; it is overridden by the bit_width constant from the ONNX model.

  • int_input_names (Optional[Set[str]]): Names of input integer tensors. Default to None.

  • constant_inputs (Optional[Dict]): Input constant tensor.

  • scale (float): Quantizer scale

  • zero_point (float): Quantizer zero-point

  • bit_width (int): Number of bits of the integer representation

  • input_quant_opts (Optional[QuantizationOptions]): Options for the input quantizer. Default to None.

  • attrs (dict):

  • rounding_mode (str): Rounding mode (default and only accepted option is "ROUND")

  • signed (int): Whether this op quantizes to signed integers (default 1)

  • narrow (int): Whether this op quantizes to a narrow range of integers, e.g. [-2^(n_bits-1) .. 2^(n_bits-1)] (default 0)

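Using the scale, zero_point and bit_width constants listed above, the op applies standard uniform (affine) quantization. The sketch below shows the generic formula; the exact clipping bounds depend on the signed and narrow attributes.

```latex
q \;=\; \operatorname{clip}\!\left(\operatorname{round}\!\left(\frac{x}{\text{scale}}\right) + \text{zero\_point},\; q_{\min},\; q_{\max}\right),
\qquad
q_{\min} = -2^{\,\text{bit\_width}-1},\;\;
q_{\max} = 2^{\,\text{bit\_width}-1} - 1 \quad \text{(signed, non-narrow)}
```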

property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method calibrate

calibrate(*inputs: ndarray) → ndarray

Create corresponding QuantizedArray for the output of Quantization function.

Args:

  • *inputs (numpy.ndarray): Calibration sample inputs.

Returns:

  • numpy.ndarray: the output values for the provided calibration samples.


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Quantize values.

Args:

  • q_inputs: an encrypted integer tensor at index 0, followed by the quantizer constants (scale, zero_point, bit_width)

  • attrs: additional optional quantization options

Returns:

  • result (QuantizedArray): the quantized output tensor


class QuantizedTranspose

Transpose operator for quantized inputs.

This operator performs quantization, transposes the encrypted data, then dequantizes again.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Transpose the input integer encrypted tensor.

Args:

  • q_inputs: an encrypted integer tensor at index 0

  • attrs: additional optional transpose options

Returns:

  • result (QuantizedArray): transposed encrypted integer tensor


class QuantizedFloor

Quantized Floor op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedMax

Quantized Max op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedMin

Quantized Min op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedNeg

Quantized Neg op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedSign

Quantized Sign op.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


class QuantizedUnsqueeze

Unsqueeze operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Unsqueeze the input tensors on a given axis.

Args:

  • q_inputs: an encrypted integer tensor

  • attrs: additional optional unsqueeze options

Returns:

  • result (QuantizedArray): unsqueezed encrypted integer tensor


class QuantizedConcat

Concatenate operator.


property int_input_names

Get the names of encrypted integer tensors that are used by this op.

Returns:

  • List[str]: the names of the tensors


method q_impl

q_impl(*q_inputs: QuantizedArray, **attrs) → QuantizedArray

Concatenate the input tensors on a given axis.

Args:

  • q_inputs: an encrypted integer tensor

  • attrs: additional optional concatenate options

Returns:

  • result (QuantizedArray): concatenated encrypted integer tensor
