concrete.ml.quantization.quantized_ops.md
module concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post-training quantization.
class QuantizedSigmoid
Quantized sigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedHardSigmoid
Quantized HardSigmoid op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedRelu
Quantized Relu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedPRelu
Quantized PRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedLeakyRelu
Quantized LeakyRelu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedHardSwish
Quantized Hardswish op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedElu
Quantized Elu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedSelu
Quantized Selu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedCelu
Quantized Celu op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedClip
Quantized Clip op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedRound
Quantized Round op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedPow
Quantized Pow op.
Works only for a float constant power. This operation will be fused into a (potentially larger) TLU.
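Since the exponent is a float constant, the whole op is a univariate function of its single encrypted input and can therefore be tabulated. The sketch below illustrates what fusion to a TLU amounts to; the helper name and quantization parameters are illustrative assumptions, not the Concrete ML API:

```python
import numpy as np

def make_tlu(f, s_in, z_in, n_bits, s_out, z_out):
    """Tabulate f over every representable input integer: the resulting table
    maps each quantized input value directly to a quantized output value."""
    q = np.arange(2 ** n_bits)
    # Dequantize, apply the float function, requantize to the output grid
    return np.rint(f(s_in * (q - z_in)) / s_out + z_out).astype(np.int64)

# x ** 2 with a 4-bit input quantized as x = 0.5 * (q - 8)
table = make_tlu(lambda x: x ** 2.0, s_in=0.5, z_in=8, n_bits=4, s_out=1.0, z_out=0)
```

Evaluating the op on an encrypted value then reduces to a single table lookup on its quantized representation.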
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedGemm
Quantized Gemm op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
calibrate_rounding: bool = False,
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedMatMul
Quantized MatMul op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
calibrate_rounding: bool = False,
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
The Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool: Whether the number of integer input tensors allows computing this op as a TLU
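The single-integer-tensor case can be sketched in the clear (scale and zero-point below are illustrative assumptions): because x + x * 1.75 depends on only one encrypted tensor, one table entry per possible input value suffices.

```python
import numpy as np

# All 16 values a 4-bit encrypted input can take
q = np.arange(16)
x = 0.25 * (q - 8)        # dequantized values (assumed quantization parameters)
fused = x + x * 1.75      # the whole float expression, one output per input value
# 'fused' is exactly the table a single TLU would hold (before requantization)
```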
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedTanh
Quantized Tanh op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedSoftplus
Quantized Softplus op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedExp
Quantized Exp op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedLog
Quantized Log op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedAbs
Quantized Abs op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedIdentity
Quantized Identity op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedReshape
Quantized Reshape op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
The Reshape operation can not be fused since it must be performed over integer tensors.
Returns:
bool: False, this operation can not be fused as it reshapes encrypted integer tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Reshape the input integer encrypted tensor.
Args:
q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs: additional optional reshape options
Returns:
result(QuantizedArray): reshaped encrypted integer tensor
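Reshape only changes the layout of the encrypted values; the quantization parameters carry over unchanged. A clear-text sketch (the integer tensor stands in for an encrypted one, and the scale/zero-point values are illustrative):

```python
import numpy as np

q = np.arange(6, dtype=np.int64)     # stand-in for the encrypted integer tensor
scale, zero_point = 0.1, 3           # quantization parameters, unchanged by reshape
r = q.reshape(2, 3)                  # reshape touches layout only
dequant = scale * (r - zero_point)   # same dequantized values, new shape
```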
class QuantizedConv
Quantized Conv op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
Construct the quantized convolution operator and retrieve parameters.
Args:
n_bits_output: number of bits for the quantization of the outputs of this operator
op_instance_name(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names: names of integer tensors that are taken as input for this operation
constant_inputs: the weights and activations
input_quant_opts: options for the input quantizer
attrs: convolution options
dilations(Tuple[int]): dilation of the kernel. Default to 1 on all dimensions.
group(int): number of convolution groups. Default to 1.
kernel_shape(Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv
pads(Tuple[int]): padding in ONNX format (begin, end) on each axis
strides(Tuple[int]): stride of the convolution on each axis
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
calibrate_rounding: bool = False,
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias.
Args:
q_inputs: input tuple, contains
x(numpy.ndarray): input data. Shape is N x C x H x W for 2d
w(numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b(numpy.ndarray, Optional): bias tensor, Shape is (O,)
calibrate_rounding(bool): Whether to calibrate rounding
attrs: convolution options handled in constructor
Returns:
res(QuantizedArray): result of the quantized integer convolution
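The integer arithmetic behind a quantized convolution can be sketched with the standard uniform-quantization identity float = scale * (q - zero_point). The helper below (stride 1, no padding, no bias; the function name and parameters are illustrative, not the Concrete ML implementation) accumulates in integers and requantizes into the output grid:

```python
import numpy as np

def q_conv2d(q_x, s_x, z_x, q_w, s_w, z_w, s_out, z_out):
    """Valid-padding, stride-1 integer convolution, requantized to the output scale."""
    n, c, h, w = q_x.shape
    o, _, kh, kw = q_w.shape
    oh, ow = h - kh + 1, w - kw + 1
    acc = np.zeros((n, o, oh, ow), dtype=np.int64)
    for i in range(kh):
        for j in range(kw):
            # Zero-point-corrected input patch and kernel slice
            patch = q_x[:, :, i:i + oh, j:j + ow].astype(np.int64) - z_x
            kern = q_w[:, :, i, j].astype(np.int64) - z_w
            acc += np.einsum("nchw,oc->nohw", patch, kern)
    # Requantize the integer accumulator into the output quantization grid
    return np.rint((s_x * s_w / s_out) * acc + z_out).astype(np.int64)
```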
class QuantizedAvgPool
Quantized Average Pooling op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
calibrate_rounding: bool = False,
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedMaxPool
Quantized Max Pooling op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Max Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool: False, this operation can not be fused as it adds different encrypted integers
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedPad
Quantized Padding op.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Pad operation cannot be fused since it must be performed over integer tensors.
Returns:
bool: False, this operation cannot be fused as it manipulates integer tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
calibrate_rounding: bool = False,
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedWhere
Where operator on quantized arrays.
Supports only constants for the results produced on the True/False branches.
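Because both branch results are constants, the op reduces to selecting between two fixed values based on the condition. A clear-text sketch of the semantics (values are illustrative):

```python
import numpy as np

x = np.array([-1.0, 0.5, 2.0])
# The True/False branches are constants; only the condition depends on x
out = np.where(x > 0, 10.0, -10.0)
```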
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedCast
Cast the input to the required data type.
In FHE we only support a limited number of output types. Booleans are cast to integers.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedGreater
Comparison operator >.
Only supports comparison with a constant.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedGreaterOrEqual
Comparison operator >=.
Only supports comparison with a constant.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedLess
Comparison operator <.
Only supports comparison with a constant.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedLessOrEqual
Comparison operator <=.
Only supports comparison with a constant.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: QuantizationOptions = None,
**attrs
) → None
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedOr
Or operator ||.
This operation does not work as a standalone quantized operation; it is only supported when fused into a TLU, e.g., Act(x) = x || (x + 42).
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedDiv
Div operator /.
This operation does not work as a standalone quantized operation; it is only supported when fused into a TLU, e.g., Act(x) = 1000 / (x + 42).
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedMul
Multiplication operator.
Only multiplies an encrypted tensor with a float constant for now. This operation will be fused into a (potentially larger) TLU.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedSub
Subtraction operator.
This works the same way as addition, for both the encrypted - encrypted and the encrypted - constant cases.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Like addition, this operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example, the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool: Whether the number of integer input tensors allows computing this op as a TLU
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class QuantizedBatchNormalization
Quantized Batch normalization with encrypted input and in-the-clear normalization params.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedFlatten
Quantized flatten for encrypted inputs.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Flatten operation cannot be fused since it must be performed over integer tensors.
Returns:
bool: False, this operation cannot be fused as it manipulates integer tensors.
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Flatten the input integer encrypted tensor.
Args:
q_inputs: an encrypted integer tensor at index 0
attrs: contains the axis attribute
Returns:
result(QuantizedArray): reshaped encrypted integer tensor
class QuantizedReduceSum
ReduceSum with encrypted input.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: Optional[QuantizationOptions] = None,
**attrs
) → None
Construct the quantized ReduceSum operator and retrieve parameters.
Args:
n_bits_output(int): Number of bits for the operator's quantization of outputs.
op_instance_name(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs(Optional[Dict]): Input constant tensor.
axes(Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.
input_quant_opts(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs(dict): ReduceSum options.
keepdims(int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.
noop_with_empty_axes(int): Defines behavior when 'axes' is empty or None. The default behavior (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1, the input tensor is not reduced and the output tensor is equivalent to the input tensor. Default to 0.
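The interplay between axes, keepdims and noop_with_empty_axes can be sketched in the clear; the helper below is an illustration of the ONNX ReduceSum semantics described above, not the quantized implementation:

```python
import numpy as np

def reduce_sum(x, axes=None, keepdims=1, noop_with_empty_axes=0):
    """Clear-text sketch of ONNX ReduceSum attribute handling."""
    if axes is None or len(axes) == 0:
        if noop_with_empty_axes:
            return x                    # act as an Identity op
        axes = tuple(range(x.ndim))     # default: reduce every dimension
    return np.sum(x, axis=tuple(axes), keepdims=bool(keepdims))
```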
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method calibrate
calibrate(*inputs: ndarray) → ndarray
Create the corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray): Calibration sample inputs.
Returns:
numpy.ndarray: The output values for the provided calibration samples.
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Sum the encrypted tensor's values along the given axes.
Args:
q_inputs(QuantizedArray): An encrypted integer tensor at index 0.
attrs(Dict): Options are handled in constructor.
Returns:
(QuantizedArray): The sum of all values along the given axes.
class QuantizedErf
Quantized erf op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedNot
Quantized Not op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedBrevitasQuant
Brevitas uniform quantization with encrypted input.
method __init__
__init__(
n_bits_output: int,
op_instance_name: str,
int_input_names: Set[str] = None,
constant_inputs: Optional[Dict[str, Any], Dict[int, Any]] = None,
input_quant_opts: Optional[QuantizationOptions] = None,
**attrs
) → None
Construct the Brevitas quantization operator.
Args:
n_bits_output(int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX.
op_instance_name(str): The name that should be assigned to this operation, used to retrieve it later or get debugging information about this op (bit-width, value range, integer intermediary values, op-specific error messages). Usually this name is the same as the ONNX operation name for which this operation is constructed.
int_input_names(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs(Optional[Dict]): Input constant tensor.
scale(float): Quantizer scale
zero_point(float): Quantizer zero-point
bit_width(int): Number of bits of the integer representation
input_quant_opts(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs(dict):
rounding_mode(str): Rounding mode (default and only accepted option is "ROUND")
signed(int): Whether this op quantizes to signed integers (default 1)
narrow(int): Whether this op quantizes to a narrow range of integers, e.g., [-2^(n_bits-1)+1 .. 2^(n_bits-1)-1] (default 0)
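The arithmetic these parameters imply can be sketched as follows. The helper name is hypothetical and the exact signed/narrow clamping bounds are an assumption based on common Brevitas conventions, not the actual implementation:

```python
import numpy as np

def brevitas_quant(x, scale, zero_point, bit_width, signed=1, narrow=0):
    """Uniform quantization sketch with signed/narrow-range clamping."""
    if signed:
        # Narrow range drops the most negative value, making the range symmetric
        q_min = -(2 ** (bit_width - 1)) + (1 if narrow else 0)
        q_max = 2 ** (bit_width - 1) - 1
    else:
        q_min = 0
        q_max = 2 ** bit_width - (2 if narrow else 1)
    q = np.rint(x / scale + zero_point)   # "ROUND" rounding mode
    return np.clip(q, q_min, q_max)
```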
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method calibrate
calibrate(*inputs: ndarray) → ndarray
Create the corresponding QuantizedArray for the output of the quantization function.
Args:
*inputs (numpy.ndarray): Calibration sample inputs.
Returns:
numpy.ndarray: the output values for the provided calibration samples.
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Quantize values.
Args:
q_inputs: an encrypted integer tensor at index 0; scale, zero_point, n_bits at indices 1, 2, 3
attrs: additional optional attributes
Returns:
result(QuantizedArray): quantized encrypted integer tensor
class QuantizedTranspose
Transpose operator for quantized inputs.
This operator performs quantization and transposes the encrypted data. When the inputs are pre-computed QAT, the input is only quantized if needed.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Transpose can not be fused since it must be performed over integer tensors as it moves around different elements of these input tensors.
Returns:
bool: False, this operation can not be fused as it copies encrypted integers
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Transpose the input integer encrypted tensor.
Args:
q_inputs: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs: additional optional transpose options
Returns:
result(QuantizedArray): transposed encrypted integer tensor
class QuantizedFloor
Quantized Floor op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedMax
Quantized Max op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedMin
Quantized Min op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedNeg
Quantized Neg op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedSign
Quantized Sign op.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
class QuantizedUnsqueeze
Unsqueeze operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Unsqueeze can not be fused since it must be performed over integer tensors as it reshapes an encrypted tensor.
Returns:
bool: False, this operation can not be fused as it operates on encrypted tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Unsqueeze the input tensors on a given axis.
Args:
q_inputs: an encrypted integer tensor at index 0, axes at index 1
attrs: additional optional unsqueeze options
Returns:
result(QuantizedArray): unsqueezed encrypted integer tensor
class QuantizedConcat
Concatenate operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Concatenation can not be fused since it must be performed over integer tensors as it copies encrypted integers from one tensor to another.
Returns:
bool: False, this operation can not be fused as it copies encrypted integers
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Concatenate the input tensors on a given axis.
Args:
q_inputs: encrypted integer tensors to concatenate
attrs: additional optional concatenate options
Returns:
result(QuantizedArray): concatenated encrypted integer tensor
class QuantizedSqueeze
Squeeze operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
Squeeze can not be fused since it must be performed over integer tensors as it reshapes encrypted tensors.
Returns:
bool: False, this operation can not be fused as it reshapes encrypted tensors
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
Squeeze the input tensors on a given axis.
Args:
q_inputs: an encrypted integer tensor at index 0, axes at index 1
attrs: additional optional squeeze options
Returns:
result(QuantizedArray): squeezed encrypted integer tensor
class ONNXShape
Shape operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
This operation returns the shape of the tensor and thus can not be fused into a univariate TLU.
Returns:
bool: False, this operation can not be fused
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class ONNXConstantOfShape
ConstantOfShape operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
This operation returns a new encrypted tensor and thus can not be fused.
Returns:
bool: False, this operation can not be fused
class ONNXGather
Gather operator.
Returns values at requested indices from the input tensor.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
This operation returns values from a tensor and thus can not be fused into a univariate TLU.
Returns:
bool: False, this operation can not be fused
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]
class ONNXSlice
Slice operator.
property int_input_names
Get the names of encrypted integer tensors that are used by this op.
Returns:
Set[str]: the names of the tensors
method can_fuse
can_fuse() → bool
Determine if this op can be fused.
This operation returns values from a tensor and thus can not be fused into a univariate TLU.
Returns:
bool: False, this operation can not be fused
method q_impl
q_impl(
*q_inputs: Optional[ndarray, QuantizedArray],
**attrs
) → Union[ndarray, QuantizedArray, NoneType]