# module `concrete.ml.onnx.convert`

ONNX conversion related code.

## Global Variables

- `IMPLEMENTED_ONNX_OPS`
- `OPSET_VERSION_FOR_ONNX_EXPORT`
## function `fuse_matmul_bias_to_gemm`

```python
fuse_matmul_bias_to_gemm(onnx_model: ModelProto)
```

Fuse a MatMul -> Add node sequence into a single Gemm node.

**Args:**

- `onnx_model` (onnx.ModelProto): an ONNX model to optimize by fusing MatMul + Add into Gemm

**Returns:**

- `onnx.ModelProto`: the optimized ONNX model
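A minimal usage sketch, assuming an exported model is already available on disk; the `model.onnx` and `model_gemm.onnx` paths are hypothetical, and `onnx.load` / `onnx.save` come from the standard `onnx` package.

```python
# Hypothetical example: fuse MatMul + Add pairs into Gemm nodes
# in an already-exported ONNX model.
import onnx

from concrete.ml.onnx.convert import fuse_matmul_bias_to_gemm

onnx_model = onnx.load("model.onnx")  # hypothetical path
optimized_model = fuse_matmul_bias_to_gemm(onnx_model=onnx_model)
onnx.save(optimized_model, "model_gemm.onnx")
```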
## function `get_equivalent_numpy_forward_from_torch`

```python
get_equivalent_numpy_forward_from_torch(
    torch_module: Module,
    dummy_input: Union[Tensor, Tuple[Tensor, ...]],
    output_onnx_file: Union[NoneType, Path, str] = None
) → Tuple[Callable[..., Tuple[ndarray, ...]], ModelProto]
```

Get the numpy equivalent forward of the provided torch Module.

**Args:**

- `torch_module` (torch.nn.Module): the torch Module for which to get the equivalent numpy forward
- `dummy_input` (Union[torch.Tensor, Tuple[torch.Tensor, ...]]): dummy inputs for ONNX export
- `output_onnx_file` (Optional[Union[Path, str]]): path to save the ONNX file to. A temporary file is used if not provided. Defaults to None.

**Returns:**

- `Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.ModelProto]`: the function that executes the equivalent numpy code of the passed torch_module, and the generated ONNX model
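A minimal sketch of a call, assuming a small torch module built purely for illustration; the module, the input shape, and the use of numpy inputs for the returned callable are assumptions, not part of the documented API.

```python
# Hypothetical example: build a numpy-executable forward for a toy torch module.
import torch

from concrete.ml.onnx.convert import get_equivalent_numpy_forward_from_torch

torch_module = torch.nn.Sequential(torch.nn.Linear(10, 4), torch.nn.ReLU())
dummy_input = torch.randn(1, 10)  # illustrative shape

numpy_forward, onnx_model = get_equivalent_numpy_forward_from_torch(
    torch_module=torch_module,
    dummy_input=dummy_input,
)

# The returned callable runs the equivalent numpy code and is expected
# to return a tuple of numpy arrays.
outputs = numpy_forward(dummy_input.numpy())
```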
## function `preprocess_onnx_model`

```python
preprocess_onnx_model(onnx_model: ModelProto, check_model: bool) → ModelProto
```

Preprocess the provided ONNX model before converting it to its numpy equivalent.

**Args:**

- `onnx_model` (onnx.ModelProto): the ONNX model to preprocess
- `check_model` (bool): set to True to run the ONNX checker on the model. Defaults to True.

**Raises:**

- `ValueError`: raised if an unsupported ONNX operator is required to convert the torch model to numpy

**Returns:**

- `onnx.ModelProto`: the preprocessed ONNX model
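A short sketch of the expected call, assuming a model loaded from a hypothetical `model.onnx` file.

```python
# Hypothetical example: run the ONNX checker and preprocess the model.
import onnx

from concrete.ml.onnx.convert import preprocess_onnx_model

onnx_model = onnx.load("model.onnx")  # hypothetical path
preprocessed_model = preprocess_onnx_model(onnx_model, check_model=True)
```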
## function `get_equivalent_numpy_forward_from_onnx`

```python
get_equivalent_numpy_forward_from_onnx(
    onnx_model: ModelProto,
    check_model: bool = True
) → Tuple[Callable[..., Tuple[ndarray, ...]], ModelProto]
```

Get the numpy equivalent forward of the provided ONNX model.

**Args:**

- `onnx_model` (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward
- `check_model` (bool): set to True to run the ONNX checker on the model. Defaults to True.

**Returns:**

- `Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.ModelProto]`: the function that executes the equivalent numpy code, and the preprocessed ONNX model
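A sketch under the same assumptions as above (hypothetical file path, illustrative input shape), showing the tuple returned by the conversion.

```python
# Hypothetical example: convert an ONNX model to a numpy forward and run it.
import numpy
import onnx

from concrete.ml.onnx.convert import get_equivalent_numpy_forward_from_onnx

onnx_model = onnx.load("model.onnx")  # hypothetical path
numpy_forward, processed_model = get_equivalent_numpy_forward_from_onnx(
    onnx_model, check_model=True
)

# Illustrative input: a single sample with 10 features.
outputs = numpy_forward(numpy.random.rand(1, 10).astype(numpy.float32))
```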
## function `get_equivalent_numpy_forward_from_onnx_tree`

```python
get_equivalent_numpy_forward_from_onnx_tree(
    onnx_model: ModelProto,
    check_model: bool = True,
    lsbs_to_remove_for_trees: Optional[Tuple[int, int]] = None
) → Tuple[Callable[..., Tuple[ndarray, ...]], ModelProto]
```

Get the numpy equivalent forward of the provided ONNX model, for tree-based models only.

**Args:**

- `onnx_model` (onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward
- `check_model` (bool): set to True to run the ONNX checker on the model. Defaults to True.
- `lsbs_to_remove_for_trees` (Optional[Tuple[int, int]]): used exclusively to optimize tree-based models. It contains the number of least significant bits to remove during tree traversal: the first value applies to the first comparison (either "less" or "less_or_equal"), while the second value applies to the "Equal" comparison. Defaults to None, as it is not applicable to other types of models.

**Returns:**

- `Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.ModelProto]`: the function that executes the equivalent numpy code, and the preprocessed ONNX model
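A sketch for the tree-specific variant, assuming a tree-based model was already exported to a hypothetical `tree_model.onnx` file; the `(4, 2)` rounding values are purely illustrative.

```python
# Hypothetical example: tree-specific conversion with least-significant-bit removal.
import onnx

from concrete.ml.onnx.convert import get_equivalent_numpy_forward_from_onnx_tree

tree_onnx_model = onnx.load("tree_model.onnx")  # hypothetical path
numpy_forward, processed_model = get_equivalent_numpy_forward_from_onnx_tree(
    tree_onnx_model,
    check_model=True,
    lsbs_to_remove_for_trees=(4, 2),  # illustrative rounding values
)
```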