# module `concrete.ml.torch.compile`

torch compilation function.

**Global Variables**

- `MAX_BITWIDTH_BACKWARD_COMPATIBLE`
- `OPSET_VERSION_FOR_ONNX_EXPORT`
## function `convert_torch_tensor_or_numpy_array_to_numpy_array`
Convert a torch tensor or a numpy array to a numpy array.
**Args:**

- `torch_tensor_or_numpy_array` (Tensor): the value that is either a torch tensor or a numpy array.

**Returns:**

- `numpy.ndarray`: the value converted to a numpy array.
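A minimal usage sketch; the input values are arbitrary placeholders:

```python
import numpy
import torch

from concrete.ml.torch.compile import (
    convert_torch_tensor_or_numpy_array_to_numpy_array,
)

# A torch tensor is converted to a numpy.ndarray
as_numpy = convert_torch_tensor_or_numpy_array_to_numpy_array(torch.tensor([1.0, 2.0]))
assert isinstance(as_numpy, numpy.ndarray)

# A numpy array is also accepted and returned as a numpy.ndarray
already_numpy = convert_torch_tensor_or_numpy_array_to_numpy_array(numpy.array([1.0, 2.0]))
assert isinstance(already_numpy, numpy.ndarray)
```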
## function `compile_torch_model`
Compile a torch module into an FHE equivalent.

Take a model in torch, convert it to numpy, quantize its inputs/weights/outputs, and finally compile it with Concrete-Numpy.
**Args:**

- `torch_model` (torch.nn.Module): the model to quantize
- `torch_inputset` (Dataset): the calibration inputset; it can contain either torch tensors or numpy.ndarray
- `import_qat` (bool): set to True to import a network that contains quantizers and was trained using quantization aware training
- `configuration` (Configuration): Configuration object to use during compilation
- `compilation_artifacts` (DebugArtifacts): Artifacts object to fill during compilation
- `show_mlir` (bool): if set, the MLIR produced by the converter, which is going to be sent to the compiler backend, is shown on the screen, e.g., for debugging or demo
- `n_bits`: the number of bits for the quantization
- `use_virtual_lib` (bool): set to use the so-called virtual lib, simulating FHE computation. Defaults to False
- `p_error` (Optional[float]): probability of error of a single PBS
- `global_p_error` (Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information
**Returns:**

- `QuantizedModule`: The resulting compiled QuantizedModule.
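A minimal sketch of compiling a small model, assuming a hypothetical `TinyMLP` network and random calibration data; the quantize/run/dequantize calls on the returned `QuantizedModule` follow the usage pattern of this API version and are illustrative, not normative:

```python
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

# Hypothetical two-layer network, used only for illustration
class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(10, 16)
        self.fc2 = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Calibration inputset, used to compute the quantization parameters
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

quantized_module = compile_torch_model(
    TinyMLP(),
    inputset,
    n_bits=3,  # low bit-widths keep the compiled circuit small
)

# Quantize a test input, run the FHE circuit, then dequantize the result
x = numpy.random.uniform(-1, 1, size=(1, 10)).astype(numpy.float32)
q_x = quantized_module.quantize_input(x)
q_y = quantized_module.forward_fhe.encrypt_run_decrypt(q_x)
y = quantized_module.dequantize_output(q_y)
```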
## function `compile_onnx_model`
Compile an ONNX model into an FHE equivalent.

Take a model in ONNX, convert it to numpy, quantize its inputs/weights/outputs, and finally compile it with Concrete-Numpy.
**Args:**

- `onnx_model` (onnx.ModelProto): the model to quantize
- `torch_inputset` (Dataset): the calibration inputset; it can contain either torch tensors or numpy.ndarray
- `import_qat` (bool): flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not requantize it
- `configuration` (Configuration): Configuration object to use during compilation
- `compilation_artifacts` (DebugArtifacts): Artifacts object to fill during compilation
- `show_mlir` (bool): if set, the MLIR produced by the converter, which is going to be sent to the compiler backend, is shown on the screen, e.g., for debugging or demo
- `n_bits`: the number of bits for the quantization
- `use_virtual_lib` (bool): set to use the so-called virtual lib, simulating FHE computation. Defaults to False
- `p_error` (Optional[float]): probability of error of a single PBS
- `global_p_error` (Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information
**Returns:**

- `QuantizedModule`: The resulting compiled QuantizedModule.
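A hedged sketch: a small hypothetical torch network is exported to ONNX at `OPSET_VERSION_FOR_ONNX_EXPORT`, then the ONNX graph is compiled directly; the model and shapes are assumptions made for illustration:

```python
import io

import numpy
import onnx
import torch

from concrete.ml.torch.compile import (
    OPSET_VERSION_FOR_ONNX_EXPORT,
    compile_onnx_model,
)

# Hypothetical single-layer network, used only for illustration
model = torch.nn.Sequential(torch.nn.Linear(10, 2))
inputset = numpy.random.uniform(-1, 1, size=(50, 10)).astype(numpy.float32)

# Export to ONNX with the opset version used by Concrete ML
buffer = io.BytesIO()
torch.onnx.export(
    model,
    torch.from_numpy(inputset[:1]),
    buffer,
    opset_version=OPSET_VERSION_FOR_ONNX_EXPORT,
)
onnx_model = onnx.load_model_from_string(buffer.getvalue())

# Compile the ONNX graph; the inputset calibrates the quantization
quantized_module = compile_onnx_model(onnx_model, inputset, n_bits=3)
```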
## function `compile_brevitas_qat_model`
Compile a Brevitas Quantization Aware Training model.
The `torch_model` parameter is a subclass of `torch.nn.Module` that uses quantized operations from `brevitas.qnn`. The model must be trained before calling this function, which compiles the trained model to its FHE equivalent.
**Args:**

- `torch_model` (torch.nn.Module): the model to quantize
- `torch_inputset` (Dataset): the calibration inputset; it can contain either torch tensors or numpy.ndarray
- `n_bits` (Union[int, dict]): the number of bits for the quantization
- `configuration` (Configuration): Configuration object to use during compilation
- `compilation_artifacts` (DebugArtifacts): Artifacts object to fill during compilation
- `show_mlir` (bool): if set, the MLIR produced by the converter, which is going to be sent to the compiler backend, is shown on the screen, e.g., for debugging or demo
- `use_virtual_lib` (bool): set to use the so-called virtual lib, simulating FHE computation. Defaults to False
- `p_error` (Optional[float]): probability of error of a single PBS
- `global_p_error` (Optional[float]): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0
- `output_onnx_file` (str): temporary file to store the ONNX model. If None, a temporary file is generated
- `verbose_compilation` (bool): whether to show compilation information
**Returns:**

- `QuantizedModule`: The resulting compiled QuantizedModule.
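A sketch of the expected workflow, assuming brevitas is installed; the `QATMLP` network is a hypothetical example, and in practice it would be trained before compilation:

```python
import brevitas.nn as qnn
import numpy
import torch

from concrete.ml.torch.compile import compile_brevitas_qat_model

N_BITS = 3

# Hypothetical QAT network built from brevitas quantized operations
class QATMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant_input = qnn.QuantIdentity(bit_width=N_BITS, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, 16, bias=True, weight_bit_width=N_BITS)
        self.relu = qnn.QuantReLU(bit_width=N_BITS, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(16, 2, bias=True, weight_bit_width=N_BITS)

    def forward(self, x):
        return self.fc2(self.relu(self.fc1(self.quant_input(x))))

model = QATMLP()
# ... the model would be trained here ...

# Calibration inputset, used to compute the quantization parameters
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)
quantized_module = compile_brevitas_qat_model(model, inputset, n_bits=N_BITS)
```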