concrete.ml.onnx.ops_impl.md

module concrete.ml.onnx.ops_impl

ONNX ops implementation in Python + NumPy.


function cast_to_float

cast_to_float(inputs)

Cast values to floating point.

Args:

  • inputs (Tuple[numpy.ndarray]): The values to consider.

Returns:

  • Tuple[numpy.ndarray]: The float values.
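
For illustration, a minimal usage sketch (assuming each entry of the input tuple is cast to a floating point dtype):

import numpy

int_inputs = (numpy.array([1, 2, 3]), numpy.array([0, 1]))
float_outputs = cast_to_float(int_inputs)
# Expected: each output array has a float dtype, e.g. array([1., 2., 3.])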


function onnx_func_raw_args

onnx_func_raw_args(*args, output_is_raw: bool = False)

Decorate a numpy ONNX function to flag the raw (non-quantized) inputs.

Args:

  • *args (tuple[Any]): function argument names

  • output_is_raw (bool): marks the function as returning raw values that should not be quantized

Returns:

  • result (ONNXMixedFunction): wrapped numpy function with a list of mixed arguments
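
A hypothetical usage sketch (the operator numpy_my_scale and its raw argument name "slope" are made up for illustration):

@onnx_func_raw_args("slope")
def numpy_my_scale(x, slope):
    # "slope" is flagged as a raw (non-quantized) input
    return (x * slope,)

# numpy_my_scale is expected to be an ONNXMixedFunction with non_quant_params == {"slope"}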


function numpy_where_body

numpy_where_body(c: ndarray, t: ndarray, f: Union[ndarray, int]) → ndarray

Compute the equivalent of numpy.where.

This function is not mapped to any ONNX operator (as opposed to numpy_where). It is usable by functions which are mapped to ONNX operators, e.g., numpy_div or numpy_where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (Union[numpy.ndarray, int]): False operand.

Returns:

  • numpy.ndarray: numpy.where(c, t, f)


function numpy_where

numpy_where(c: ndarray, t: ndarray, f: ndarray) → Tuple[ndarray]

Compute the equivalent of numpy.where.

Args:

  • c (numpy.ndarray): Condition operand.

  • t (numpy.ndarray): True operand.

  • f (numpy.ndarray): False operand.

Returns:

  • Tuple[numpy.ndarray]: numpy.where(c, t, f)
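
A small example (assuming, as the signature suggests, that the result is returned wrapped in a one-element tuple):

import numpy

c = numpy.array([1, 0, 1])
t = numpy.array([10.0, 20.0, 30.0])
f = numpy.array([0.0, 0.0, 0.0])
(result,) = numpy_where(c, t, f)
# Expected: array([10., 0., 30.])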


function numpy_add

numpy_add(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute add in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13

Args:

  • a (numpy.ndarray): First operand.

  • b (numpy.ndarray): Second operand.

Returns:

  • Tuple[numpy.ndarray]: Result, with the same element type as the two inputs


function numpy_constant

numpy_constant(**kwargs)

Return the constant passed as a kwarg.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13

Args:

  • **kwargs: keyword arguments

Returns:

  • Any: The stored constant.


function numpy_gemm

numpy_gemm(
    a: ndarray,
    b: ndarray,
    c: Optional[ndarray] = None,
    alpha: float = 1,
    beta: float = 1,
    transA: int = 0,
    transB: int = 0
) → Tuple[ndarray]

Compute Gemm in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Gemm-13

Args:

  • a (numpy.ndarray): Input tensor A. The shape of A should be (M, K) if transA is 0, or (K, M) if transA is non-zero.

  • b (numpy.ndarray): Input tensor B. The shape of B should be (K, N) if transB is 0, or (N, K) if transB is non-zero.

  • c (Optional[numpy.ndarray]): Optional input tensor C. If not specified, the computation is done as if C is a scalar 0. The shape of C should be unidirectional broadcastable to (M, N). Defaults to None.

  • alpha (float): Scalar multiplier for the product of input tensors A * B. Defaults to 1.

  • beta (float): Scalar multiplier for input tensor C. Defaults to 1.

  • transA (int): Whether A should be transposed. The type is kept as int because that is the type used by ONNX, and Python can readily interpret it as a boolean. Defaults to 0.

  • transB (int): Whether B should be transposed. The type is kept as int because that is the type used by ONNX, and Python can readily interpret it as a boolean. Defaults to 0.

Returns:

  • Tuple[numpy.ndarray]: The tuple containing the result tensor
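
The Gemm operator computes Y = alpha * A' * B' + beta * C, where A' and B' are the (optionally transposed) inputs. A minimal sketch, assuming the result is returned as a one-element tuple:

import numpy

a = numpy.random.rand(3, 4)   # (M, K), transA=0
b = numpy.random.rand(4, 2)   # (K, N), transB=0
c = numpy.ones((3, 2))        # broadcastable to (M, N)
(y,) = numpy_gemm(a, b, c, alpha=1.0, beta=0.5)
# Expected: y is close to 1.0 * (a @ b) + 0.5 * c, with shape (3, 2)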


function numpy_matmul

numpy_matmul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute matmul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13

Args:

  • a (numpy.ndarray): N-dimensional matrix A

  • b (numpy.ndarray): N-dimensional matrix B

Returns:

  • Tuple[numpy.ndarray]: Matrix multiply results from A * B


function numpy_relu

numpy_relu(x: ndarray) → Tuple[ndarray]

Compute relu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sigmoid

numpy_sigmoid(x: ndarray) → Tuple[ndarray]

Compute sigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_softmax

numpy_softmax(x, axis=1, keepdims=True)

Compute softmax in numpy according to ONNX spec.

Softmax is currently not supported in FHE.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13

Args:

  • x (numpy.ndarray): Input tensor

  • axis (None, int, tuple of int): Axis or axes along which the softmax normalization sum is computed. If None, the sum is taken over all elements of the input array. If axis is negative, it counts from the last to the first axis. Defaults to 1.

  • keepdims (bool): If True, the axes reduced in the sum are kept in the result as dimensions of size one. Defaults to True.

Returns:

  • Tuple[numpy.ndarray]: Output tensor
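
Mathematically, softmax along an axis is exp(x) normalized by the sum of exp(x) over that axis. A reference computation in plain NumPy (a sketch only; numpy_softmax is expected to produce values close to it, possibly wrapped in a one-element tuple like the other ops here):

import numpy

x = numpy.array([[1.0, 2.0, 3.0]])
e = numpy.exp(x - numpy.max(x, axis=1, keepdims=True))  # max subtraction for numerical stability
reference = e / numpy.sum(e, axis=1, keepdims=True)
# Expected: numpy_softmax(x, axis=1) is close to reference, roughly [[0.090, 0.245, 0.665]]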


function numpy_cos

numpy_cos(x: ndarray) → Tuple[ndarray]

Compute cos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_cosh

numpy_cosh(x: ndarray) → Tuple[ndarray]

Compute cosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sin

numpy_sin(x: ndarray) → Tuple[ndarray]

Compute sin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sinh

numpy_sinh(x: ndarray) → Tuple[ndarray]

Compute sinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tan

numpy_tan(x: ndarray) → Tuple[ndarray]

Compute tan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_tanh

numpy_tanh(x: ndarray) → Tuple[ndarray]

Compute tanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acos

numpy_acos(x: ndarray) → Tuple[ndarray]

Compute acos in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_acosh

numpy_acosh(x: ndarray) → Tuple[ndarray]

Compute acosh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asin

numpy_asin(x: ndarray) → Tuple[ndarray]

Compute asin in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_asinh

numpy_asinh(x: ndarray) → Tuple[ndarray]

Compute asinh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atan

numpy_atan(x: ndarray) → Tuple[ndarray]

Compute atan in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_atanh

numpy_atanh(x: ndarray) → Tuple[ndarray]

Compute atanh in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_elu

numpy_elu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute elu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor
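
Per the ONNX spec, elu(x) = x for x > 0 and alpha * (exp(x) - 1) otherwise. A reference check in plain NumPy (a sketch only; the actual implementation may differ in details):

import numpy

x = numpy.array([-1.0, 0.0, 2.0])
alpha = 1.0
reference = numpy.where(x > 0, x, alpha * (numpy.exp(x) - 1))
# Expected: numpy_elu(x, alpha=1.0) is close to reference, roughly [-0.632, 0., 2.]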


function numpy_selu

numpy_selu(
    x: ndarray,
    alpha: float = 1.6732632423543772,
    gamma: float = 1.0507009873554805
) → Tuple[ndarray]

Compute selu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • gamma (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_celu

numpy_celu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute celu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_leakyrelu

numpy_leakyrelu(x: ndarray, alpha: float = 0.01) → Tuple[ndarray]

Compute leakyrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_thresholdedrelu

numpy_thresholdedrelu(x: ndarray, alpha: float = 1) → Tuple[ndarray]

Compute thresholdedrelu in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardsigmoid

numpy_hardsigmoid(
    x: ndarray,
    alpha: float = 0.2,
    beta: float = 0.5
) → Tuple[ndarray]

Compute hardsigmoid in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6

Args:

  • x (numpy.ndarray): Input tensor

  • alpha (float): Coefficient

  • beta (float): Coefficient

Returns:

  • Tuple[numpy.ndarray]: Output tensor
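
Per the ONNX spec, hardsigmoid(x) = clip(alpha * x + beta, 0, 1). A reference check in plain NumPy (a sketch only):

import numpy

x = numpy.array([-5.0, 0.0, 1.0, 5.0])
reference = numpy.clip(0.2 * x + 0.5, 0, 1)
# Expected: numpy_hardsigmoid(x) is close to reference, i.e. [0., 0.5, 0.7, 1.]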


function numpy_softplus

numpy_softplus(x: ndarray) → Tuple[ndarray]

Compute softplus in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_abs

numpy_abs(x: ndarray) → Tuple[ndarray]

Compute abs in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_div

numpy_div(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute div in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_mul

numpy_mul(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute mul in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sub

numpy_sub(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute sub in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_log

numpy_log(x: ndarray) → Tuple[ndarray]

Compute log in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_erf

numpy_erf(x: ndarray) → Tuple[ndarray]

Compute erf in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_hardswish

numpy_hardswish(x: ndarray) → Tuple[ndarray]

Compute hardswish in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_exp

numpy_exp(x: ndarray) → Tuple[ndarray]

Compute exponential in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: The exponential of the input tensor computed element-wise


function numpy_equal

numpy_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not

numpy_not(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_not_float

numpy_not_float(x: ndarray) → Tuple[ndarray]

Compute not in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater

numpy_greater(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_float

numpy_greater_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal

numpy_greater_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_greater_or_equal_float

numpy_greater_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less

numpy_less(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_float

numpy_less_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal

numpy_less_or_equal(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_less_or_equal_float

numpy_less_or_equal_float(x: ndarray, y: ndarray) → Tuple[ndarray]

Compute less or equal in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12

Args:

  • x (numpy.ndarray): Input tensor

  • y (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_identity

numpy_identity(x: ndarray) → Tuple[ndarray]

Compute identity in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_transpose

numpy_transpose(x: ndarray, perm=None) → Tuple[ndarray]

Transpose in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13

Args:

  • x (numpy.ndarray): Input tensor

  • perm (numpy.ndarray): Permutation of the axes

Returns:

  • Tuple[numpy.ndarray]: Output tensor
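
A small example (assuming the result is returned wrapped in a one-element tuple):

import numpy

x = numpy.zeros((2, 3, 4))
(y,) = numpy_transpose(x, perm=(2, 0, 1))
# Expected: y.shape == (4, 2, 3)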


function numpy_conv

numpy_conv(
    x: ndarray,
    w: ndarray,
    b: Optional[ndarray] = None,
    dilations: Tuple[int, ...],
    group: int = 1,
    kernel_shape: Tuple[int, ...],
    pads: Tuple[int, ...],
    strides: Tuple[int, ...]
) → Tuple[ndarray]

Compute N-D convolution using Torch.

Currently supports 2d convolution with torch semantics. This function is also ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Conv

Args:

  • x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d

  • w (numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d

  • b (Optional[numpy.ndarray]): bias tensor of shape (O,). Defaults to None.

  • dilations (Tuple[int, ...]): dilation of the kernel. Defaults to 1 on all dimensions.

  • group (int): number of convolution groups; must be 1 or a common divisor of C and O, so that I = C / group. Defaults to 1.

  • kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int, ...]): stride of the convolution on each axis

Returns:

  • res (numpy.ndarray): a tensor of size (N x OutChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
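
A hedged usage sketch for a 2d convolution (the shape parameters are passed by keyword; exact call conventions may differ):

import numpy

x = numpy.random.rand(1, 3, 8, 8)  # N x C x H x W
w = numpy.random.rand(4, 3, 3, 3)  # O x I x Kh x Kw
(y,) = numpy_conv(
    x, w, b=None,
    dilations=(1, 1), group=1, kernel_shape=(3, 3),
    pads=(0, 0, 0, 0), strides=(1, 1),
)
# Expected: y.shape == (1, 4, 6, 6)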


function numpy_avgpool

numpy_avgpool(
    x: ndarray,
    ceil_mode: int,
    kernel_shape: Tuple[int, ...],
    pads: Tuple[int, ...] = None,
    strides: Tuple[int, ...] = None
) → Tuple[ndarray]

Compute Average Pooling using Torch.

Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool

Args:

  • x (numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d

  • ceil_mode (int): ONNX rounding parameter, expected 0 (torch style dimension computation)

  • kernel_shape (Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv

  • pads (Tuple[int, ...]): padding in ONNX format (begin, end) on each axis

  • strides (Tuple[int, ...]): stride of the convolution on each axis

Returns:

  • res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html

Raises:

  • AssertionError: if the pooling arguments are wrong
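
A hedged usage sketch for 2d average pooling (parameter names follow the signature above; exact call conventions may differ):

import numpy

x = numpy.random.rand(1, 3, 8, 8)  # N x C x H x W
(y,) = numpy_avgpool(x, ceil_mode=0, kernel_shape=(2, 2), pads=(0, 0, 0, 0), strides=(2, 2))
# Expected: y.shape == (1, 3, 4, 4)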


function numpy_maxpool

numpy_maxpool(
    x: ndarray,
    kernel_shape: Tuple[int, ...],
    strides: Tuple[int, ...] = None,
    auto_pad: str = 'NOTSET',
    pads: Tuple[int, ...] = None,
    dilations: Optional[Union[Tuple[int, ...], List[int]]] = None,
    ceil_mode: int = 0,
    storage_order: int = 0
) → Tuple[ndarray]

Compute Max Pooling using Torch.

Currently supports 2d max pooling with torch semantics. This function is ONNX compatible.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#MaxPool

Args:

  • x (numpy.ndarray): the input

  • kernel_shape (Union[Tuple[int, ...], List[int]]): shape of the kernel

  • strides (Optional[Union[Tuple[int, ...], List[int]]]): stride along each spatial axis. If not set, the stride defaults to 1 along each spatial axis.

  • auto_pad (str): padding strategy, default = "NOTSET"

  • pads (Optional[Union[Tuple[int, ...], List[int]]]): padding at the beginning and end of each spatial axis, in the form (D1_begin, D2_begin, ..., D1_end, D2_end, ...). If not set, the padding defaults to 0 along each spatial axis.

  • dilations (Optional[Union[Tuple[int, ...], List[int]]]): dilation along each spatial axis. If not set, the dilation defaults to 1 along each spatial axis.

  • ceil_mode (int): ceiling mode. Defaults to 0.

  • storage_order (int): storage order, 0 for row major, 1 for column major, default = 0

Returns:

  • res (numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth). See https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html
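
A hedged usage sketch for 2d max pooling (defaults are relied on where possible; exact call conventions may differ):

import numpy

x = numpy.random.rand(1, 3, 8, 8)  # N x C x H x W
(y,) = numpy_maxpool(x, kernel_shape=(2, 2), strides=(2, 2))
# Expected: y.shape == (1, 3, 4, 4)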


function numpy_cast

numpy_cast(data: ndarray, to: int) → Tuple[ndarray]

Execute ONNX cast in Numpy.

For traced values during compilation, it supports only booleans, which are converted to float. For raw values (used in constant folding or shape computations), any cast is allowed.

See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast

Args:

  • data (numpy.ndarray): Input encrypted tensor

  • to (int): integer value of the onnx.TensorProto DataType enum

Returns:

  • result (numpy.ndarray): a tensor with the required data type
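
A minimal sketch of casting a traced boolean tensor to float, using the onnx.TensorProto enum for the `to` parameter (assuming the result is returned as a one-element tuple):

import numpy
from onnx import TensorProto

data = numpy.array([True, False, True])
(result,) = numpy_cast(data, to=TensorProto.FLOAT)
# Expected: a float tensor, array([1., 0., 1.])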


function numpy_batchnorm

numpy_batchnorm(
    x: ndarray,
    scale: ndarray,
    bias: ndarray,
    input_mean: ndarray,
    input_var: ndarray,
    epsilon=1e-05,
    momentum=0.9,
    training_mode=0
) → Tuple[ndarray]

Compute the batch normalization of the input tensor.

This can be expressed as:

Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + bias

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization-14

Args:

  • x (numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size, C is the number of channels.

  • scale (numpy.ndarray): scale tensor of shape (C,)

  • bias (numpy.ndarray): bias tensor of shape (C,)

  • input_mean (numpy.ndarray): mean values to use for each input channel, shape (C,)

  • input_var (numpy.ndarray): variance values to use for each input channel, shape (C,)

  • epsilon (float): avoids division by zero

  • momentum (float): momentum used during training of the mean/variance, not used in inference

  • training_mode (int): if the model was exported in training mode this is set to 1, else 0

Returns:

  • numpy.ndarray: Normalized tensor
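
The inference formula can be checked against plain NumPy, with the per-channel statistics broadcast over (N, C, H, W) (a sketch only; the actual implementation may differ in details):

import numpy

x = numpy.random.rand(2, 3, 4, 4)
scale = numpy.array([1.0, 2.0, 0.5])
bias = numpy.array([0.0, 0.1, -0.1])
mean = numpy.array([0.5, 0.5, 0.5])
var = numpy.array([1.0, 1.0, 1.0])

def per_channel(v):
    # Reshape a (C,) vector so it broadcasts over (N, C, H, W)
    return v.reshape(1, -1, 1, 1)

reference = (x - per_channel(mean)) / numpy.sqrt(per_channel(var) + 1e-05) * per_channel(scale) + per_channel(bias)
(y,) = numpy_batchnorm(x, scale, bias, mean, var)
# Expected: y is close to reference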


function numpy_flatten

numpy_flatten(x: ndarray, axis: int = 1) → Tuple[ndarray]

Flatten a tensor into a 2d array.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13.

Args:

  • x (numpy.ndarray): tensor to flatten

  • axis (int): axis after which all dimensions will be flattened (axis=0 gives an output of shape (1, N), where N is the total number of elements)

Returns:

  • result: flattened tensor
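
A small example (assuming the result is returned wrapped in a one-element tuple):

import numpy

x = numpy.zeros((2, 3, 4))
(y,) = numpy_flatten(x, axis=1)
# Expected: y.shape == (2, 12)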


function numpy_or

numpy_or(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_or_float

numpy_or_float(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute or in numpy according to ONNX spec and cast outputs to floats.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_round

numpy_round(a: ndarray) → Tuple[ndarray]

Compute round in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11. Note that the ONNX Round operator is effectively a rint, since the number of decimals is forced to be 0.

Args:

  • a (numpy.ndarray): Input tensor whose elements are to be rounded.

Returns:

  • Tuple[numpy.ndarray]: Output tensor with rounded input elements.
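
Because the operator behaves like rint, halfway cases round to the nearest even integer. A reference check in plain NumPy (a sketch only):

import numpy

a = numpy.array([0.5, 1.5, 2.3, -2.5])
reference = numpy.rint(a)
# Expected: numpy_round(a) matches reference, i.e. array([ 0., 2., 2., -2.])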


function numpy_pow

numpy_pow(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute pow in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13

Args:

  • a (numpy.ndarray): Input tensor whose elements are to be raised.

  • b (numpy.ndarray): The power to which we want to raise.

Returns:

  • Tuple[numpy.ndarray]: Output tensor.


function numpy_floor

numpy_floor(x: ndarray) → Tuple[ndarray]

Compute Floor in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_max

numpy_max(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute Max in numpy according to ONNX spec.

Computes the max between the first input and a float constant.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Constant tensor to compare to the first input

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_min

numpy_min(a: ndarray, b: ndarray) → Tuple[ndarray]

Compute Min in numpy according to ONNX spec.

Computes the minimum between the first input and a float constant.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Min-1

Args:

  • a (numpy.ndarray): Input tensor

  • b (numpy.ndarray): Constant tensor to compare to the first input

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_sign

numpy_sign(x: ndarray) → Tuple[ndarray]

Compute Sign in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_neg

numpy_neg(x: ndarray) → Tuple[ndarray]

Compute Negative in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Neg-13

Args:

  • x (numpy.ndarray): Input tensor

Returns:

  • Tuple[numpy.ndarray]: Output tensor


function numpy_concatenate

numpy_concatenate(*x: ndarray, axis: int) → Tuple[ndarray]

Apply concatenate in numpy according to ONNX spec.

See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#concat-13

Args:

  • *x (numpy.ndarray): Input tensors to be concatenated.

  • axis (int): Which axis to concat on.

Returns:

  • Tuple[numpy.ndarray]: Output tensor.


class RawOpOutput

Type construct that marks an ndarray as a raw output of a quantized op.


class ONNXMixedFunction

A mixed quantized-raw valued onnx function.

ONNX functions take inputs that can be either quantized or float. Some functions only take quantized inputs, while others take both types. For mixed functions, the parameters that do not need quantization must be tagged, so that quantized ops know which inputs are not QuantizedArray instances and unnecessary wrapping of float values as QuantizedArrays is avoided.

method __init__

__init__(function, non_quant_params: Set[str], output_is_raw: bool = False)

Create the mixed function and raw parameter list.

Args:

  • function (Any): function to be decorated

  • non_quant_params (Set[str]): set of parameters that will not be quantized (stored as numpy.ndarray)

  • output_is_raw (bool): indicates whether the op outputs a value that should not be quantized
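
An illustrative construction (the choice of wrapped function and raw parameter is hypothetical):

mixed = ONNXMixedFunction(numpy_where_body, non_quant_params={"c"}, output_is_raw=False)
# Hypothetical: wraps numpy_where_body and marks its condition argument "c" as a raw (non-quantized) input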
