concrete.ml.common
: Module for shared data structures and code.
concrete.ml.common.check_inputs
: Check and conversion tools.
concrete.ml.common.debugging
: Module for debugging.
concrete.ml.common.debugging.custom_assert
: Provide some variants of assert.
concrete.ml.common.utils
: Utils that can be re-used by other pieces of code in the module.
concrete.ml.deployment
: Module for deployment of the FHE model.
concrete.ml.deployment.fhe_client_server
: APIs for FHE deployment.
concrete.ml.onnx
: ONNX module.
concrete.ml.onnx.convert
: ONNX conversion related code.
concrete.ml.onnx.onnx_impl_utils
: Utility functions for onnx operator implementations.
concrete.ml.onnx.onnx_model_manipulations
: Some code to manipulate models.
concrete.ml.onnx.onnx_utils
: Utils to interpret an ONNX model with numpy.
concrete.ml.onnx.ops_impl
: ONNX ops implementation in python + numpy.
concrete.ml.quantization
: Modules for quantization.
concrete.ml.quantization.base_quantized_op
: Base Quantized Op class that implements quantization for a float numpy op.
concrete.ml.quantization.post_training
: Post Training Quantization methods.
concrete.ml.quantization.quantized_module
: QuantizedModule API.
concrete.ml.quantization.quantized_ops
: Quantized versions of the ONNX operators for post training quantization.
concrete.ml.quantization.quantizers
: Quantization utilities for a numpy array/tensor.
concrete.ml.sklearn
: Import sklearn models.
concrete.ml.sklearn.base
: Module that contains base classes for our libraries estimators.
concrete.ml.sklearn.glm
: Implement sklearn's Generalized Linear Models (GLM).
concrete.ml.sklearn.linear_model
: Implement sklearn linear model.
concrete.ml.sklearn.protocols
: Protocols.
concrete.ml.sklearn.qnn
: Scikit-learn interface for concrete quantized neural networks.
concrete.ml.sklearn.rf
: Implements RandomForest models.
concrete.ml.sklearn.svm
: Implement Support Vector Machine.
concrete.ml.sklearn.torch_module
: Implement torch module.
concrete.ml.sklearn.tree
: Implement the sklearn tree models.
concrete.ml.sklearn.tree_to_numpy
: Implements the conversion of a tree model to a numpy function.
concrete.ml.sklearn.xgb
: Implements XGBoost models.
concrete.ml.torch
: Modules for torch to numpy conversion.
concrete.ml.torch.compile
: torch compilation function.
concrete.ml.torch.numpy_module
: A torch to numpy module.
concrete.ml.version
: File to manage the version of the package.
fhe_client_server.FHEModelClient
: Client API to encrypt and decrypt FHE data.
fhe_client_server.FHEModelDev
: Dev API to save the model and then load and run the FHE circuit.
fhe_client_server.FHEModelServer
: Server API to load and run the FHE circuit.
ops_impl.ONNXMixedFunction
: A mixed quantized-raw valued onnx function.
base_quantized_op.QuantizedOp
: Base class for quantized ONNX ops implemented in numpy.
base_quantized_op.QuantizedOpUnivariateOfEncrypted
: A univariate operator of an encrypted value.
post_training.ONNXConverter
: Base ONNX to Concrete ML computation graph conversion class.
post_training.PostTrainingAffineQuantization
: Post-training Affine Quantization.
post_training.PostTrainingQATImporter
: Converter of Quantization Aware Training networks.
quantized_module.QuantizedModule
: Inference for a quantized model.
quantized_ops.QuantizedAbs
: Quantized Abs op.
quantized_ops.QuantizedAdd
: Quantized Addition operator.
quantized_ops.QuantizedAvgPool
: Quantized Average Pooling op.
quantized_ops.QuantizedBatchNormalization
: Quantized Batch normalization with encrypted input and in-the-clear normalization params.
quantized_ops.QuantizedBrevitasQuant
: Brevitas uniform quantization with encrypted input.
quantized_ops.QuantizedCast
: Cast the input to the required data type.
quantized_ops.QuantizedCelu
: Quantized Celu op.
quantized_ops.QuantizedClip
: Quantized clip op.
quantized_ops.QuantizedConv
: Quantized Conv op.
quantized_ops.QuantizedDiv
: Div operator /.
quantized_ops.QuantizedElu
: Quantized Elu op.
quantized_ops.QuantizedErf
: Quantized erf op.
quantized_ops.QuantizedExp
: Quantized Exp op.
quantized_ops.QuantizedFlatten
: Quantized flatten for encrypted inputs.
quantized_ops.QuantizedFloor
: Quantized Floor op.
quantized_ops.QuantizedGemm
: Quantized Gemm op.
quantized_ops.QuantizedGreater
: Comparison operator >.
quantized_ops.QuantizedGreaterOrEqual
: Comparison operator >=.
quantized_ops.QuantizedHardSigmoid
: Quantized HardSigmoid op.
quantized_ops.QuantizedHardSwish
: Quantized Hardswish op.
quantized_ops.QuantizedIdentity
: Quantized Identity op.
quantized_ops.QuantizedLeakyRelu
: Quantized LeakyRelu op.
quantized_ops.QuantizedLess
: Comparison operator <.
quantized_ops.QuantizedLessOrEqual
: Comparison operator <=.
quantized_ops.QuantizedLog
: Quantized Log op.
quantized_ops.QuantizedMatMul
: Quantized MatMul op.
quantized_ops.QuantizedMax
: Quantized Max op.
quantized_ops.QuantizedMin
: Quantized Min op.
quantized_ops.QuantizedMul
: Multiplication operator.
quantized_ops.QuantizedNeg
: Quantized Neg op.
quantized_ops.QuantizedNot
: Quantized Not op.
quantized_ops.QuantizedOr
: Or operator ||.
quantized_ops.QuantizedPRelu
: Quantized PRelu op.
quantized_ops.QuantizedPad
: Quantized Padding op.
quantized_ops.QuantizedPow
: Quantized pow op.
quantized_ops.QuantizedReduceSum
: ReduceSum with encrypted input.
quantized_ops.QuantizedRelu
: Quantized Relu op.
quantized_ops.QuantizedReshape
: Quantized Reshape op.
quantized_ops.QuantizedRound
: Quantized round op.
quantized_ops.QuantizedSelu
: Quantized Selu op.
quantized_ops.QuantizedSigmoid
: Quantized sigmoid op.
quantized_ops.QuantizedSign
: Quantized Sign op.
quantized_ops.QuantizedSoftplus
: Quantized Softplus op.
quantized_ops.QuantizedSub
: Subtraction operator.
quantized_ops.QuantizedTanh
: Quantized Tanh op.
quantized_ops.QuantizedTranspose
: Transpose operator for quantized inputs.
quantized_ops.QuantizedWhere
: Where operator on quantized arrays.
quantizers.MinMaxQuantizationStats
: Calibration set statistics.
quantizers.QuantizationOptions
: Options for quantization.
quantizers.QuantizedArray
: Abstraction of quantized array.
quantizers.UniformQuantizationParameters
: Quantization parameters for uniform quantization.
quantizers.UniformQuantizer
: Uniform quantizer.
base.BaseTreeClassifierMixin
: Mixin class for tree-based classifiers.
base.BaseTreeEstimatorMixin
: Mixin class for tree-based estimators.
base.BaseTreeRegressorMixin
: Mixin class for tree-based regressors.
base.QuantizedTorchEstimatorMixin
: Mixin that provides quantization for a torch module and follows the Estimator API.
base.SklearnLinearClassifierMixin
: A Mixin class for sklearn linear classifiers with FHE.
base.SklearnLinearModelMixin
: A Mixin class for sklearn linear models with FHE.
glm.GammaRegressor
: A Gamma regression model with FHE.
glm.PoissonRegressor
: A Poisson regression model with FHE.
glm.TweedieRegressor
: A Tweedie regression model with FHE.
linear_model.ElasticNet
: An ElasticNet regression model with FHE.
linear_model.Lasso
: A Lasso regression model with FHE.
linear_model.LinearRegression
: A linear regression model with FHE.
linear_model.LogisticRegression
: A logistic regression model with FHE.
linear_model.Ridge
: A Ridge regression model with FHE.
protocols.ConcreteBaseClassifierProtocol
: Concrete classifier protocol.
protocols.ConcreteBaseEstimatorProtocol
: A Concrete Estimator Protocol.
protocols.ConcreteBaseRegressorProtocol
: Concrete regressor protocol.
protocols.Quantizer
: Quantizer Protocol.
qnn.FixedTypeSkorchNeuralNet
: A mixin with a helpful modification to a skorch estimator that fixes the module type.
qnn.NeuralNetClassifier
: Scikit-learn interface for quantized FHE compatible neural networks.
qnn.NeuralNetRegressor
: Scikit-learn interface for quantized FHE compatible neural networks.
qnn.QuantizedSkorchEstimatorMixin
: Mixin class that adds quantization features to Skorch NN estimators.
qnn.SparseQuantNeuralNetImpl
: Sparse Quantized Neural Network classifier.
rf.RandomForestClassifier
: Implements the RandomForest classifier.
rf.RandomForestRegressor
: Implements the RandomForest regressor.
svm.LinearSVC
: A Classification Support Vector Machine (SVM).
svm.LinearSVR
: A Regression Support Vector Machine (SVM).
tree.DecisionTreeClassifier
: Implements the sklearn DecisionTreeClassifier.
tree.DecisionTreeRegressor
: Implements the sklearn DecisionTreeRegressor.
tree_to_numpy.Task
: Task enumerate.
xgb.XGBClassifier
: Implements the XGBoost classifier.
xgb.XGBRegressor
: Implements the XGBoost regressor.
numpy_module.NumpyModule
: General interface to transform a torch.nn.Module to numpy module.
check_inputs.check_X_y_and_assert
: sklearn.utils.check_X_y with an assert.
check_inputs.check_array_and_assert
: sklearn.utils.check_array with an assert.
custom_assert.assert_false
: Provide a custom assert to check that the condition is False.
custom_assert.assert_not_reached
: Provide a custom assert to check that a piece of code is never reached.
custom_assert.assert_true
: Provide a custom assert to check that the condition is True.
utils.generate_proxy_function
: Generate a proxy function for a function accepting only *args type arguments.
utils.get_onnx_opset_version
: Return the ONNX opset_version.
utils.replace_invalid_arg_name_chars
: Sanitize arg_name, replacing invalid chars by _.
convert.get_equivalent_numpy_forward
: Get the numpy equivalent forward of the provided ONNX model.
convert.get_equivalent_numpy_forward_and_onnx_model
: Get the numpy equivalent forward of the provided torch Module.
onnx_impl_utils.compute_conv_output_dims
: Compute the output shape of a pool or conv operation.
onnx_impl_utils.compute_onnx_pool_padding
: Compute any additional padding needed to compute pooling layers.
onnx_impl_utils.numpy_onnx_pad
: Pad a tensor according to ONNX spec, using an optional custom pad value.
onnx_impl_utils.onnx_avgpool_compute_norm_const
: Compute the average pooling normalization constant.
onnx_model_manipulations.clean_graph_after_node_name
: Clean the graph of the onnx model by removing nodes after the given node name.
onnx_model_manipulations.clean_graph_after_node_op_type
: Clean the graph of the onnx model by removing nodes after the given node type.
onnx_model_manipulations.keep_following_outputs_discard_others
: Keep the outputs given in outputs_to_keep and remove the others from the model.
onnx_model_manipulations.remove_identity_nodes
: Remove identity nodes from a model.
onnx_model_manipulations.remove_node_types
: Remove unnecessary nodes from the ONNX graph.
onnx_model_manipulations.remove_unused_constant_nodes
: Remove unused Constant nodes in the provided onnx model.
onnx_model_manipulations.simplify_onnx_model
: Simplify an ONNX model, removes unused Constant nodes and Identity nodes.
onnx_utils.execute_onnx_with_numpy
: Execute the provided ONNX graph on the given inputs.
onnx_utils.get_attribute
: Get the attribute from an ONNX AttributeProto.
onnx_utils.get_op_name
: Construct the qualified name of the ONNX operator.
ops_impl.cast_to_float
: Cast values to floating points.
ops_impl.numpy_abs
: Compute abs in numpy according to ONNX spec.
ops_impl.numpy_acos
: Compute acos in numpy according to ONNX spec.
ops_impl.numpy_acosh
: Compute acosh in numpy according to ONNX spec.
ops_impl.numpy_add
: Compute add in numpy according to ONNX spec.
ops_impl.numpy_asin
: Compute asin in numpy according to ONNX spec.
ops_impl.numpy_asinh
: Compute asinh in numpy according to ONNX spec.
ops_impl.numpy_atan
: Compute atan in numpy according to ONNX spec.
ops_impl.numpy_atanh
: Compute atanh in numpy according to ONNX spec.
ops_impl.numpy_avgpool
: Compute Average Pooling using Torch.
ops_impl.numpy_batchnorm
: Compute the batch normalization of the input tensor.
ops_impl.numpy_cast
: Execute ONNX cast in Numpy.
ops_impl.numpy_celu
: Compute celu in numpy according to ONNX spec.
ops_impl.numpy_constant
: Return the constant passed as a kwarg.
ops_impl.numpy_cos
: Compute cos in numpy according to ONNX spec.
ops_impl.numpy_cosh
: Compute cosh in numpy according to ONNX spec.
ops_impl.numpy_div
: Compute div in numpy according to ONNX spec.
ops_impl.numpy_elu
: Compute elu in numpy according to ONNX spec.
ops_impl.numpy_equal
: Compute equal in numpy according to ONNX spec.
ops_impl.numpy_erf
: Compute erf in numpy according to ONNX spec.
ops_impl.numpy_exp
: Compute exponential in numpy according to ONNX spec.
ops_impl.numpy_flatten
: Flatten a tensor into a 2d array.
ops_impl.numpy_floor
: Compute Floor in numpy according to ONNX spec.
ops_impl.numpy_greater
: Compute greater in numpy according to ONNX spec.
ops_impl.numpy_greater_float
: Compute greater in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_greater_or_equal
: Compute greater or equal in numpy according to ONNX spec.
ops_impl.numpy_greater_or_equal_float
: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
ops_impl.numpy_hardsigmoid
: Compute hardsigmoid in numpy according to ONNX spec.
ops_impl.numpy_hardswish
: Compute hardswish in numpy according to ONNX spec.
ops_impl.numpy_identity
: Compute identity in numpy according to ONNX spec.
ops_impl.numpy_leakyrelu
: Compute leakyrelu in numpy according to ONNX spec.
ops_impl.numpy_less
: Compute less in numpy according to ONNX spec.
ops_impl.numpy_less_float
: Compute less in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_less_or_equal
: Compute less or equal in numpy according to ONNX spec.
ops_impl.numpy_less_or_equal_float
: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_log
: Compute log in numpy according to ONNX spec.
ops_impl.numpy_matmul
: Compute matmul in numpy according to ONNX spec.
ops_impl.numpy_max
: Compute Max in numpy according to ONNX spec.
ops_impl.numpy_min
: Compute Min in numpy according to ONNX spec.
ops_impl.numpy_mul
: Compute mul in numpy according to ONNX spec.
ops_impl.numpy_neg
: Compute Negative in numpy according to ONNX spec.
ops_impl.numpy_not
: Compute not in numpy according to ONNX spec.
ops_impl.numpy_not_float
: Compute not in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_or
: Compute or in numpy according to ONNX spec.
ops_impl.numpy_or_float
: Compute or in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_pow
: Compute pow in numpy according to ONNX spec.
ops_impl.numpy_relu
: Compute relu in numpy according to ONNX spec.
ops_impl.numpy_round
: Compute round in numpy according to ONNX spec.
ops_impl.numpy_selu
: Compute selu in numpy according to ONNX spec.
ops_impl.numpy_sigmoid
: Compute sigmoid in numpy according to ONNX spec.
ops_impl.numpy_sign
: Compute Sign in numpy according to ONNX spec.
ops_impl.numpy_sin
: Compute sin in numpy according to ONNX spec.
ops_impl.numpy_sinh
: Compute sinh in numpy according to ONNX spec.
ops_impl.numpy_softmax
: Compute softmax in numpy according to ONNX spec.
ops_impl.numpy_softplus
: Compute softplus in numpy according to ONNX spec.
ops_impl.numpy_sub
: Compute sub in numpy according to ONNX spec.
ops_impl.numpy_tan
: Compute tan in numpy according to ONNX spec.
ops_impl.numpy_tanh
: Compute tanh in numpy according to ONNX spec.
ops_impl.numpy_thresholdedrelu
: Compute thresholdedrelu in numpy according to ONNX spec.
ops_impl.numpy_transpose
: Transpose in numpy according to ONNX spec.
ops_impl.numpy_where
: Compute the equivalent of numpy.where.
ops_impl.numpy_where_body
: Compute the equivalent of numpy.where.
ops_impl.onnx_func_raw_args
: Decorate a numpy onnx function to flag the raw/non quantized inputs.
quantizers.fill_from_kwargs
: Fill a parameter set structure from kwargs parameters.
tree_to_numpy.tree_to_numpy
: Convert the tree inference to a numpy functions using Hummingbird.
compile.compile_brevitas_qat_model
: Compile a Brevitas Quantization Aware Training model.
compile.compile_onnx_model
: Compile a torch module into an FHE equivalent.
compile.compile_torch_model
: Compile a torch module into an FHE equivalent.
compile.convert_torch_tensor_or_numpy_array_to_numpy_array
: Convert a torch tensor or a numpy array to a numpy array.
concrete.ml.common.debugging.custom_assert
Provide some variants of assert.
assert_true
Provide a custom assert to check that the condition is True.
Args:
condition
(bool): the condition. If False, raise AssertionError
on_error_msg
(str): optional message clarifying the error, in case of error
error_type
(Type[Exception]): the type of error to raise if the condition is not fulfilled. Defaults to AssertionError
assert_false
Provide a custom assert to check that the condition is False.
Args:
condition
(bool): the condition. If True, raise AssertionError
on_error_msg
(str): optional message clarifying the error, in case of error
error_type
(Type[Exception]): the type of error to raise if the condition is not fulfilled. Defaults to AssertionError
assert_not_reached
Provide a custom assert to check that a piece of code is never reached.
Args:
on_error_msg
(str): message clarifying the error
error_type
(Type[Exception]): the type of error to raise. Defaults to AssertionError
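Below is a short, hedged usage sketch of these assert helpers; the import path, function names and argument names follow the entries above, while the surrounding variables are purely illustrative.

```python
# Illustrative sketch of the custom assert helpers documented above.
from concrete.ml.common.debugging.custom_assert import (
    assert_false,
    assert_not_reached,
    assert_true,
)

n_bits = 8

# Raise a ValueError instead of the default AssertionError when the check fails
assert_true(
    0 < n_bits <= 16,
    on_error_msg=f"n_bits must be in ]0, 16], got {n_bits}",
    error_type=ValueError,
)

# Check that a condition is False
assert_false(n_bits > 16, on_error_msg="n_bits is too large")

def quantize_sign(mode: str) -> int:
    if mode == "signed":
        return -1
    if mode == "unsigned":
        return 1
    # This branch should never be taken
    assert_not_reached(f"unknown mode: {mode}")
```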
concrete.ml.common.utils
Utils that can be re-used by other pieces of code in the module.
DEFAULT_P_ERROR_PBS
replace_invalid_arg_name_chars
Sanitize arg_name, replacing invalid chars by _.
This does not check that the starting character of arg_name is valid.
Args:
arg_name
(str): the arg name to sanitize.
Returns:
str
: the sanitized arg name, with only chars in _VALID_ARG_CHARS.
generate_proxy_function
Generate a proxy function for a function accepting only *args type arguments.
This returns a runtime-compiled function whose arguments are the sanitized names passed in desired_functions_arg_names.
Args:
function_to_proxy
(Callable): the function defined like def f(*args) for which to return a function like f_proxy(arg_1, arg_2) for any number of arguments.
desired_functions_arg_names
(Iterable[str]): the argument names to use, these names are sanitized and the mapping between the original argument name to the sanitized one is returned in a dictionary. Only the sanitized names will work for a call to the proxy function.
Returns:
Tuple[Callable, Dict[str, str]]
: the proxy function and the mapping of the original arg name to the new and sanitized arg names.
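A hedged sketch of how the proxy generation described above can be used; the wrapped function and argument names are illustrative, and the exact sanitization of the requested names is assumed.

```python
from concrete.ml.common.utils import generate_proxy_function

def forward(*args):
    # A function that only accepts *args, as produced e.g. by the ONNX-to-numpy conversion
    x, w = args
    return x * w

# Build a proxy exposing named arguments instead of *args; invalid characters in the
# requested names are sanitized (assumed here to become "input_0" and "weight_0")
proxy, arg_name_map = generate_proxy_function(forward, ["input.0", "weight.0"])
print(arg_name_map)  # mapping from the original names to the sanitized names

# The proxy forwards its named arguments, in order, to the original *args function
print(proxy(3, 4))  # 12
```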
get_onnx_opset_version
Return the ONNX opset_version.
Args:
onnx_model
(onnx.ModelProto): the model.
Returns:
int
: the version of the model
concrete.ml.common.check_inputs
Check and conversion tools.
Utils that are used to check (including convert) some data types which are compatible with scikit-learn to numpy types.
check_array_and_assert
sklearn.utils.check_array with an assert.
Equivalent of sklearn.utils.check_array, with a final assert that the type is one which is supported by Concrete-ML.
Args:
X
(object): Input object to check / convert
Returns: The converted and validated array
check_X_y_and_assert
sklearn.utils.check_X_y with an assert.
Equivalent of sklearn.utils.check_X_y, with a final assert that the type is one which is supported by Concrete-ML.
Args:
X
(ndarray, list, sparse matrix): Input data
y
(ndarray, list, sparse matrix): Labels
*args
: The arguments to pass to check_X_y
**kwargs
: The keyword arguments to pass to check_X_y
Returns: The converted and validated arrays
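A minimal usage sketch of these two helpers, assuming numpy inputs:

```python
import numpy
from concrete.ml.common.check_inputs import check_array_and_assert, check_X_y_and_assert

X = numpy.array([[0.0, 1.0], [2.0, 3.0]])
y = numpy.array([0, 1])

# sklearn.utils.check_array, followed by an assert that the resulting type is supported
X_checked = check_array_and_assert(X)

# sklearn.utils.check_X_y; extra positional/keyword arguments are forwarded to check_X_y
X_checked, y_checked = check_X_y_and_assert(X, y)
```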
concrete.ml.deployment.fhe_client_server
APIs for FHE deployment.
CML_VERSION
AVAILABLE_MODEL
FHEModelServer
Server API to load and run the FHE circuit.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
load
Load the circuit.
run
Run the model on the server over encrypted data.
Args:
serialized_encrypted_quantized_data
(cnp.PublicArguments): the encrypted, quantized and serialized data
serialized_evaluation_keys
(cnp.EvaluationKeys): the serialized evaluation keys
Returns:
cnp.PublicResult
: the result of the model
FHEModelDev
Dev API to save the model and then load and run the FHE circuit.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
model
(Any): the model to use for the FHE API
save
Export all needed artifacts for the client and server.
Raises:
Exception
: path_dir is not empty
FHEModelClient
Client API to encrypt and decrypt FHE data.
__init__
Initialize the FHE API.
Args:
path_dir
(str): the path to the directory where the circuit is saved
key_dir
(str): the path to the directory where the keys are stored
deserialize_decrypt
Deserialize and decrypt the values.
Args:
serialized_encrypted_quantized_result
(cnp.PublicArguments): the serialized, encrypted and quantized result
Returns:
numpy.ndarray
: the decrypted and deserialized values
deserialize_decrypt_dequantize
Deserialize, decrypt and dequantize the values.
Args:
serialized_encrypted_quantized_result
(cnp.PublicArguments): the serialized, encrypted and quantized result
Returns:
numpy.ndarray
: the decrypted (dequantized) values
generate_private_and_evaluation_keys
Generate the private and evaluation keys.
Args:
force
(bool): if True, regenerate the keys even if they already exist
get_serialized_evaluation_keys
Get the serialized evaluation keys.
Returns:
cnp.EvaluationKeys
: the evaluation keys
load
Load the quantizers along with the FHE specs.
quantize_encrypt_serialize
Quantize, encrypt and serialize the values.
Args:
x
(numpy.ndarray): the values to quantize, encrypt and serialize
Returns:
cnp.PublicArguments
: the quantized, encrypted and serialized values
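The three classes above are meant to be used together. The following hedged sketch shows the typical round trip; fitted_and_compiled_model and x_test are placeholders for a trained, compiled Concrete-ML model and a numpy test input.

```python
from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Development side: export the client/server artifacts of a fitted, compiled model
dev = FHEModelDev(path_dir="./deployment", model=fitted_and_compiled_model)
dev.save()  # raises if ./deployment is not empty

# Client side: generate keys, then quantize, encrypt and serialize the input
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x_test)

# Server side: load the FHE circuit and run it on the encrypted data
server = FHEModelServer(path_dir="./deployment")
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# Client side: decrypt and dequantize the result
y_pred = client.deserialize_decrypt_dequantize(encrypted_result)
```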
concrete.ml.onnx.ops_impl
ONNX ops implementation in python + numpy.
cast_to_float
Cast values to floating points.
Args:
inputs
(Tuple[numpy.ndarray]): The values to consider.
Returns:
Tuple[numpy.ndarray]
: The float values.
onnx_func_raw_args
Decorate a numpy onnx function to flag the raw/non quantized inputs.
Args:
*args (tuple[Any])
: function argument names
Returns:
result
(ONNXMixedFunction): wrapped numpy function with a list of mixed arguments
numpy_where_body
Compute the equivalent of numpy.where.
This function is not mapped to any ONNX operator (as opposed to numpy_where). It is usable by functions which are mapped to ONNX operators, e.g. numpy_div or numpy_where.
Args:
c
(numpy.ndarray): Condition operand.
t
(numpy.ndarray): True operand.
f
(numpy.ndarray): False operand.
Returns:
numpy.ndarray
: numpy.where(c, t, f)
numpy_where
Compute the equivalent of numpy.where.
Args:
c
(numpy.ndarray): Condition operand.
t
(numpy.ndarray): True operand.
f
(numpy.ndarray): False operand.
Returns:
numpy.ndarray
: numpy.where(c, t, f)
numpy_add
Compute add in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Add-13
Args:
a
(numpy.ndarray): First operand.
b
(numpy.ndarray): Second operand.
Returns:
Tuple[numpy.ndarray]
: Result, has same element type as two inputs
numpy_constant
Return the constant passed as a kwarg.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Constant-13
Args:
**kwargs
: keyword arguments
Returns:
Any
: The stored constant.
numpy_matmul
Compute matmul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#MatMul-13
Args:
a
(numpy.ndarray): N-dimensional matrix A
b
(numpy.ndarray): N-dimensional matrix B
Returns:
Tuple[numpy.ndarray]
: Matrix multiply results from A * B
numpy_relu
Compute relu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Relu-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sigmoid
Compute sigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sigmoid-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_softmax
Compute softmax in numpy according to ONNX spec.
Softmax is currently not supported in FHE.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#softmax-13
Args:
x
(numpy.ndarray): Input tensor
axis
(None, int, tuple of ints): Axis or axes along which a softmax's sum is performed. If None, it will sum all of the elements of the input array. If axis is negative, it counts from the last to the first axis. Defaults to 1.
keepdims
(bool): If True, the axes which are reduced along the sum are left in the result as dimensions with size one. Defaults to True.
Returns:
Tuple[numpy.ndarray]
: Output tensor
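As a reference, here is a minimal numpy sketch of the softmax computation described above (not the library's implementation):

```python
import numpy

def softmax_sketch(x, axis=1, keepdims=True):
    # Subtract the per-axis max for numerical stability, exponentiate, then normalize
    x_max = numpy.max(x, axis=axis, keepdims=True)
    exp = numpy.exp(x - x_max)
    return exp / numpy.sum(exp, axis=axis, keepdims=keepdims)

logits = numpy.array([[1.0, 2.0, 3.0]])
print(softmax_sketch(logits))  # each row sums to 1
```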
numpy_cos
Compute cos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cos-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_cosh
Compute cosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Cosh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sin
Compute sin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sin-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sinh
Compute sinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sinh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_tan
Compute tan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tan-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_tanh
Compute tanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Tanh-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_acos
Compute acos in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acos-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_acosh
Compute acosh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Acosh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_asin
Compute asin in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asin-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_asinh
Compute asinh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Asinh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_atan
Compute atan in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atan-7
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_atanh
Compute atanh in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Atanh-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_elu
Compute elu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Elu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_selu
Compute selu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Selu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
gamma
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_celu
Compute celu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Celu-12
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_leakyrelu
Compute leakyrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LeakyRelu-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_thresholdedrelu
Compute thresholdedrelu in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ThresholdedRelu-10
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_hardsigmoid
Compute hardsigmoid in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#HardSigmoid-6
Args:
x
(numpy.ndarray): Input tensor
alpha
(float): Coefficient
beta
(float): Coefficient
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_softplus
Compute softplus in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softplus-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_abs
Compute abs in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Abs-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_div
Compute div in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Div-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_mul
Compute mul in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Mul-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sub
Compute sub in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sub-14
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_log
Compute log in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Log-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_erf
Compute erf in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Erf-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_hardswish
Compute hardswish in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#hardswish-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_exp
Compute exponential in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Exp-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: The exponential of the input tensor computed element-wise
numpy_equal
Compute equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Equal-11
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_not
Compute not in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_not_float
Compute not in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Not-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater
Compute greater in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_float
Compute greater in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Greater-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_or_equal
Compute greater or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_greater_or_equal_float
Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#GreaterOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less
Compute less in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_float
Compute less in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Less-13
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_or_equal
Compute less or equal in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_less_or_equal_float
Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#LessOrEqual-12
Args:
x
(numpy.ndarray): Input tensor
y
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_identity
Compute identity in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Identity-14
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_transpose
Transpose in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Transpose-13
Args:
x
(numpy.ndarray): Input tensor
perm
(numpy.ndarray): Permutation of the axes
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_avgpool
Compute Average Pooling using Torch.
Currently supports 2d average pooling with torch semantics. This function is ONNX compatible.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#AveragePool
Args:
x
(numpy.ndarray): input data (many dtypes are supported). Shape is N x C x H x W for 2d
ceil_mode
(int): ONNX rounding parameter, expected 0 (torch style dimension computation)
kernel_shape
(Tuple[int, ...]): shape of the kernel. Should have 2 elements for 2d conv
pads
(Tuple[int, ...]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int, ...]): stride of the convolution on each axis
Returns:
res
(numpy.ndarray): a tensor of size (N x InChannels x OutHeight x OutWidth).
See https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html
Raises:
AssertionError
: if the pooling arguments are wrong
numpy_cast
Execute ONNX cast in Numpy.
Supports only booleans for now, which are converted to integers.
See: https://github.com/onnx/onnx/blob/main/docs/Operators.md#Cast
Args:
data
(numpy.ndarray): Input encrypted tensor
to
(int): integer value of the onnx.TensorProto DataType enum
Returns:
result
(numpy.ndarray): a tensor with the required data type
numpy_batchnorm
Compute the batch normalization of the input tensor.
This can be expressed as:
Y = (X - input_mean) / sqrt(input_var + epsilon) * scale + B
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#BatchNormalization-14
Args:
x
(numpy.ndarray): tensor to normalize, dimensions are in the form of (N,C,D1,D2,...,Dn), where N is the batch size, C is the number of channels.
scale
(numpy.ndarray): scale tensor of shape (C,)
bias
(numpy.ndarray): bias tensor of shape (C,)
input_mean
(numpy.ndarray): mean values to use for each input channel, shape (C,)
input_var
(numpy.ndarray): variance values to use for each input channel, shape (C,)
epsilon
(float): avoids division by zero
momentum
(float): momentum used during training of the mean/variance, not used in inference
training_mode
(int): if the model was exported in training mode this is set to 1, else 0
Returns:
numpy.ndarray
: Normalized tensor
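A minimal numpy sketch of the formula above, broadcasting the per-channel (C,) parameters over an (N, C, ...) tensor (illustrative, not the library code):

```python
import numpy

def batchnorm_sketch(x, scale, bias, input_mean, input_var, epsilon=1e-5):
    # Reshape the per-channel parameters to (1, C, 1, ..., 1) so they broadcast
    shape = (1, -1) + (1,) * (x.ndim - 2)
    y = (x - input_mean.reshape(shape)) / numpy.sqrt(input_var.reshape(shape) + epsilon)
    return y * scale.reshape(shape) + bias.reshape(shape)

x = numpy.random.randn(2, 3, 4, 4)  # (N, C, H, W)
out = batchnorm_sketch(
    x, numpy.ones(3), numpy.zeros(3), x.mean(axis=(0, 2, 3)), x.var(axis=(0, 2, 3))
)
print(out.shape)  # (2, 3, 4, 4)
```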
numpy_flatten
Flatten a tensor into a 2d array.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Flatten-13.
Args:
x
(numpy.ndarray): tensor to flatten
axis
(int): axis after which all dimensions will be flattened (axis=0 gives a 1D output)
Returns:
result
: flattened tensor
numpy_or
Compute or in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_or_float
Compute or in numpy according to ONNX spec and cast outputs to floats.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Or-7
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_round
Compute round in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Round-11. Note that the ONNX Round operator is actually a rint, since the number of decimals is forced to be 0.
Args:
a
(numpy.ndarray): Input tensor whose elements to be rounded.
Returns:
Tuple[numpy.ndarray]
: Output tensor with rounded input elements.
numpy_pow
Compute pow in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Pow-13
Args:
a
(numpy.ndarray): Input tensor whose elements to be raised.
b
(numpy.ndarray): The power to which we want to raise.
Returns:
Tuple[numpy.ndarray]
: Output tensor.
numpy_floor
Compute Floor in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Floor-1
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_max
Compute Max in numpy according to ONNX spec.
Computes the max between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Max-1
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_min
Compute Min in numpy according to ONNX spec.
Computes the minimum between the first input and a float constant.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Min-1
Args:
a
(numpy.ndarray): Input tensor
b
(numpy.ndarray): Constant tensor to compare to the first input
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_sign
Compute Sign in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Sign-9
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
numpy_neg
Compute Negative in numpy according to ONNX spec.
See https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Neg-13
Args:
x
(numpy.ndarray): Input tensor
Returns:
Tuple[numpy.ndarray]
: Output tensor
ONNXMixedFunction
A mixed quantized-raw valued onnx function.
ONNX functions will take inputs which can be either quantized or float. Some functions only take quantized inputs, but some functions take both types. For mixed functions we need to tag the parameters that do not need quantization. Thus quantized ops can know which inputs are not QuantizedArray and we avoid unnecessary wrapping of float values as QuantizedArrays.
__init__
Create the mixed function and raw parameter list.
Args:
function
(Any): function to be decorated
non_quant_params
: Set[str]: set of parameters that will not be quantized (stored as numpy.ndarray)
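A hedged sketch of this mechanism using the onnx_func_raw_args decorator listed earlier in this module; the decorated operator below is purely illustrative and is not one of the library's ops.

```python
import numpy
from concrete.ml.onnx.ops_impl import onnx_func_raw_args

@onnx_func_raw_args("slope")
def numpy_scaled_identity(x: numpy.ndarray, slope: numpy.ndarray):
    # `x` may be a quantized/encrypted tensor, while `slope` is flagged as raw and
    # will not be wrapped as a QuantizedArray by the quantized ops
    return (x * slope,)

# The decorated object is an ONNXMixedFunction that keeps the set of raw parameter
# names ({"slope"} here) alongside the wrapped numpy implementation.
```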
concrete.ml.onnx.convert
ONNX conversion related code.
IMPLEMENTED_ONNX_OPS
OPSET_VERSION_FOR_ONNX_EXPORT
get_equivalent_numpy_forward_and_onnx_model
Get the numpy equivalent forward of the provided torch Module.
Args:
torch_module
(torch.nn.Module): the torch Module for which to get the equivalent numpy forward.
dummy_input
(Union[torch.Tensor, Tuple[torch.Tensor, ...]]): dummy inputs for ONNX export.
output_onnx_file
(Optional[Union[Path, str]]): Path to save the ONNX file to. Will use a temp file if not provided. Defaults to None.
Returns:
Tuple[Callable[..., Tuple[numpy.ndarray, ...]], onnx.GraphProto]
: The function that will execute the equivalent numpy code to the passed torch_module and the generated ONNX model.
get_equivalent_numpy_forward
Get the numpy equivalent forward of the provided ONNX model.
Args:
onnx_model
(onnx.ModelProto): the ONNX model for which to get the equivalent numpy forward.
check_model
(bool): set to True to run the onnx checker on the model. Defaults to True.
Raises:
ValueError
: Raised if there is an unsupported ONNX operator required to convert the torch model to numpy.
Returns:
Callable[..., Tuple[numpy.ndarray, ...]]
: The function that will execute the equivalent numpy function.
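A hedged usage sketch of the conversion helper above, with a tiny torch module standing in for a real model:

```python
import numpy
import torch
from concrete.ml.onnx.convert import get_equivalent_numpy_forward_and_onnx_model

torch_module = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU())
dummy_input = torch.randn(1, 4)

# Export to ONNX and build the numpy-equivalent forward function
numpy_forward, onnx_model = get_equivalent_numpy_forward_and_onnx_model(
    torch_module, dummy_input
)

x = numpy.random.randn(1, 4).astype(numpy.float32)
(output,) = numpy_forward(x)  # the callable returns a tuple of numpy arrays
print(output.shape)  # (1, 2)
```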
concrete.ml.onnx.onnx_model_manipulations
Some code to manipulate models.
simplify_onnx_model
Simplify an ONNX model, removes unused Constant nodes and Identity nodes.
Args:
onnx_model
(onnx.ModelProto): the model to simplify.
remove_unused_constant_nodes
Remove unused Constant nodes in the provided onnx model.
Args:
onnx_model
(onnx.ModelProto): the model for which we want to remove unused Constant nodes.
remove_identity_nodes
Remove identity nodes from a model.
Args:
onnx_model
(onnx.ModelProto): the model for which we want to remove Identity nodes.
keep_following_outputs_discard_others
Keep the outputs given in outputs_to_keep and remove the others from the model.
Args:
onnx_model
(onnx.ModelProto): the ONNX model to modify.
outputs_to_keep
(Iterable[str]): the outputs to keep by name.
remove_node_types
Remove unnecessary nodes from the ONNX graph.
Args:
onnx_model
(onnx.ModelProto): The ONNX model to modify.
op_types_to_remove
(List[str]): The node types to remove from the graph.
Raises:
ValueError
: Wrong replacement by an Identity node.
clean_graph_after_node_name
Clean the graph of the onnx model by removing nodes after the given node name.
Args:
onnx_model
(onnx.ModelProto): The onnx model.
node_name
(str): The node's name whose following nodes will be removed.
fail_if_not_found
(bool): If true, abort if the node name is not found
Raises:
ValueError
: if the node name is not found and if fail_if_not_found is set
clean_graph_after_node_op_type
Clean the graph of the onnx model by removing nodes after the given node type.
Args:
onnx_model
(onnx.ModelProto): The onnx model.
node_op_type
(str): The node's op_type whose following nodes will be removed.
fail_if_not_found
(bool): If true, abort if the node op_type is not found
Raises:
ValueError
: if the node op_type is not found and if fail_if_not_found is set
concrete.ml.quantization.post_training
Post Training Quantization methods.
ONNX_OPS_TO_NUMPY_IMPL
DEFAULT_MODEL_BITS
ONNX_OPS_TO_QUANTIZED_IMPL
ONNXConverter
Base ONNX to Concrete ML computation graph conversion class.
This class provides a method to parse an ONNX graph and apply several transformations. First, it creates QuantizedOps for each ONNX graph op. These quantized ops have calibrated quantizers that are useful when the operators work on integer data or when the output of the ops is the output of the encrypted program. For operators that compute in float and will be merged to TLUs, these quantizers are not used. Second, this converter creates quantized tensors for initializer and weights stored in the graph.
This class should be sub-classed to provide specific calibration and quantization options depending on the usage (Post-training quantization vs Quantization Aware training).
Arguments:
n_bits
(int, Dict[str, int]): number of bits for quantization, can be a single value or a dictionary with the following keys : - "op_inputs" and "op_weights" (mandatory) - "model_inputs" and "model_outputs" (optional, default to 5 bits). When using a single integer for n_bits, its value is assigned to "op_inputs" and "op_weights" bits. The maximum between this value and a default value (5) is then assigned to the number of "model_inputs" "model_outputs". This default value is a compromise between model accuracy and runtime performance in FHE. "model_outputs" gives the precision of the final network's outputs, while "model_inputs" gives the precision of the network's inputs. "op_inputs" and "op_weights" both control the quantization for inputs and weights of all layers.
numpy_model
(NumpyModule): Model in numpy.
is_signed
(bool): Whether the weights of the layers can be signed. Currently, only the weights can be signed.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
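To make the n_bits argument described above concrete, here is a hedged sketch of its two accepted forms (the values are arbitrary examples):

```python
# A single integer sets "op_inputs" and "op_weights"; "model_inputs" and
# "model_outputs" then default to the maximum of this value and 5
n_bits_single = 3

# The dictionary form controls each quantization target separately
n_bits_detailed = {
    "model_inputs": 5,   # precision of the network's inputs
    "op_inputs": 3,      # precision of each layer's input values
    "op_weights": 3,     # precision of learned parameters and constants
    "model_outputs": 5,  # precision of the final network outputs
}
```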
PostTrainingAffineQuantization
Post-training Affine Quantization.
Create the quantized version of the passed numpy module.
Args:
n_bits
(int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, then it should contain "model_inputs", "op_inputs", "op_weights" and "model_outputs" keys with corresponding number of quantization bits for: - model_inputs : number of bits for model input - op_inputs : number of bits to quantize layer input values - op_weights: learned parameters or constants in the network - model_outputs: final model output quantization bits
numpy_model
(NumpyModule): Model in numpy.
is_signed
: Whether the weights of the layers can be signed. Currently, only the weights can be signed.
Returns:
QuantizedModule
: A quantized version of the numpy model.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
PostTrainingQATImporter
Converter of Quantization Aware Training networks.
This class provides specific configuration for QAT networks during ONNX network conversion to Concrete ML computation graphs.
__init__
property n_bits_model_inputs
Get the number of bits to use for the quantization of the first layer's output.
Returns:
n_bits
(int): number of bits for input quantization
property n_bits_model_outputs
Get the number of bits to use for the quantization of the last layer's output.
Returns:
n_bits
(int): number of bits for output quantization
property n_bits_op_inputs
Get the number of bits to use for the quantization of any operators' inputs.
Returns:
n_bits
(int): number of bits for the quantization of the operators' inputs
property n_bits_op_weights
Get the number of bits to use for the quantization of any constants (usually weights).
Returns:
n_bits
(int): number of bits for quantizing constants used by operators
quantize_module
Quantize numpy module.
Following https://arxiv.org/abs/1712.05877 guidelines.
Args:
*calibration_data (numpy.ndarray)
: Data that will be used to compute the bounds, scales and zero point values for every quantized object.
Returns:
QuantizedModule
: Quantized numpy module
concrete.ml.quantization.base_quantized_op
Base Quantized Op class that implements quantization for a float numpy op.
ONNX_OPS_TO_NUMPY_IMPL
ALL_QUANTIZED_OPS
ONNX_OPS_TO_QUANTIZED_IMPL
DEFAULT_MODEL_BITS
QuantizedOp
Base class for quantized ONNX ops implemented in numpy.
Args:
n_bits_output
(int): The number of bits to use for the quantization of the output
int_input_names
(Set[str]): The set of names of integer tensors that are inputs to this op
constant_inputs
(Optional[Union[Dict[str, Any], Dict[int, Any]]]): The constant tensors that are inputs to this op
input_quant_opts
(QuantizationOptions): Input quantizer options, determine the quantization that is applied to input tensors (that are not constants)
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
call_impl
Call self.impl to centralize mypy bug workaround.
Args:
*inputs (numpy.ndarray)
: real valued inputs.
**attrs
: the QuantizedOp attributes.
Returns:
numpy.ndarray
: return value of self.impl
can_fuse
Determine if the operator impedes graph fusion.
This function shall be overloaded by inheriting classes to test self._int_input_names, to determine whether the operation can be fused to a TLU or not. For example an operation that takes inputs produced by a unique integer tensor can be fused to a TLU. Example: f(x) = x * (x + 1) can be fused. A function that does f(x) = x * (x @ w + 1) can't be fused.
Returns:
bool
: whether this instance of the QuantizedOp produces Concrete Numpy code that can be fused to TLUs
must_quantize_input
Determine if an input must be quantized.
Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.
Args:
input_name_or_idx
(int): Index of the input to check.
Returns:
result
(bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.
prepare_output
Quantize the output of the activation function.
The calibrate method needs to be called with sample data before using this function.
Args:
qoutput_activation
(numpy.ndarray): Output of the activation function.
Returns:
QuantizedArray
: Quantized output.
q_impl
Execute the quantized forward.
Args:
*q_inputs (QuantizedArray)
: Quantized inputs.
**attrs
: the QuantizedOp attributes.
Returns:
QuantizedArray
: The returned quantized value.
QuantizedOpUnivariateOfEncrypted
A univariate operator of an encrypted value.
This operation is not really operating as a quantized operation. It is useful when the computations get fused into a TLU, as in e.g. Act(x) = x || (x + 42).
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
call_impl
Call self.impl to centralize mypy bug workaround.
Args:
*inputs (numpy.ndarray)
: real valued inputs.
**attrs
: the QuantizedOp attributes.
Returns:
numpy.ndarray
: return value of self.impl
can_fuse
Determine if this op can be fused.
This operation can be fused and computed in float when a single integer tensor generates both the operands. For example in the formula: f(x) = x || (x + 1) where x is an integer tensor.
Returns:
bool
: Can fuse
must_quantize_input
Determine if an input must be quantized.
Quantized ops and numpy onnx ops take inputs and attributes. Inputs can be either constant or variable (encrypted). Note that this does not handle attributes, which are handled by QuantizedOp classes separately in their constructor.
Args:
input_name_or_idx
(int): Index of the input to check.
Returns:
result
(bool): Whether the input must be quantized (must be a QuantizedArray) or if it stays as a raw numpy.array read from ONNX.
prepare_output
Quantize the output of the activation function.
The calibrate method needs to be called with sample data before using this function.
Args:
qoutput_activation
(numpy.ndarray): Output of the activation function.
Returns:
QuantizedArray
: Quantized output.
q_impl
Execute the quantized forward.
Args:
*q_inputs (QuantizedArray)
: Quantized inputs.
**attrs
: the QuantizedOp attributes.
Returns:
QuantizedArray
: The returned quantized value.
concrete.ml.onnx.onnx_utils
Utils to interpret an ONNX model with numpy.
ATTR_TYPES
ATTR_GETTERS
ONNX_OPS_TO_NUMPY_IMPL
ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_FLOAT
ONNX_COMPARISON_OPS_TO_NUMPY_IMPL_BOOL
ONNX_OPS_TO_NUMPY_IMPL_BOOL
IMPLEMENTED_ONNX_OPS
get_attribute
Get the attribute from an ONNX AttributeProto.
Args:
attribute
(onnx.AttributeProto): The attribute to retrieve the value from.
Returns:
Any
: The stored attribute value.
get_op_name
Construct the qualified name of the ONNX operator.
Args:
node
(Any): ONNX graph node
Returns:
result
(str): qualified name
execute_onnx_with_numpy
Execute the provided ONNX graph on the given inputs.
Args:
graph
(onnx.GraphProto): The ONNX graph to execute.
*inputs
: The inputs of the graph.
Returns:
Tuple[numpy.ndarray]
: The result of the graph's execution.
concrete.ml.sklearn.base
Module that contains base classes for our libraries estimators.
DEFAULT_P_ERROR_PBS
OPSET_VERSION_FOR_ONNX_EXPORT
QuantizedTorchEstimatorMixin
Mixin that provides quantization for a torch module and follows the Estimator API.
This class should be mixed in with another that provides the full Estimator API. This class only provides modifiers for .fit() (with quantization) and .predict() (optionally in FHE).
__init__
property base_estimator_type
Get the sklearn estimator that should be trained by the child class.
property base_module_to_compile
Get the Torch module that should be compiled to FHE.
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Get the number of quantization bits.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
Raises:
ValueError
: if called before the model is trained
fit
Initialize and fit the module.
If the module was already initialized, calling fit will re-initialize it (unless warm_start is True). In addition to the torch training step, this method performs quantization of the trained torch model.
Args:
X
: training data By default, you should be able to pass: * numpy arrays * torch tensors * pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training, these are passed to the torch training interface
Returns:
self
: the trained quantized estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns the quantized estimator (itself) as well as a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns:
self
: the trained quantized estimator
fp32_model
: trained raw (fp32) wrapped NN estimator
get_params_for_benchmark
Get the parameters to instantiate the sklearn estimator trained by the child class.
Returns:
params
(dict): dictionary with parameters that will initialize a new Estimator
post_processing
Post-processing the output.
Args:
y_preds
(numpy.ndarray): the output to post-process
Raises:
ValueError
: if unknown post-processing function
Returns:
numpy.ndarray
: the post-processed output
predict
Predict on user provided data.
Predicts using the quantized clear or FHE classifier
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with predictions
predict_proba
Predict on user provided data, returning probabilities.
Predicts using the quantized clear or FHE classifier
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with probabilities (if applicable)
Raises:
ValueError
: if the estimator was not yet trained or compiled
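A minimal sketch of the fit / compile / predict workflow this mixin provides, using NeuralNetClassifier from concrete.ml.sklearn.qnn. The constructor parameter names below follow older Concrete ML releases and are assumptions; adapt them to the installed version.

```python
import numpy
import torch.nn as nn
from concrete.ml.sklearn.qnn import NeuralNetClassifier

# Synthetic binary classification data; skorch-based estimators expect float32 / int64.
X = numpy.random.uniform(-1, 1, size=(200, 10)).astype(numpy.float32)
y = (X[:, 0] > 0).astype(numpy.int64)

params = {  # illustrative, version-dependent parameter names
    "module__n_layers": 2,
    "module__n_w_bits": 2,
    "module__n_a_bits": 2,
    "module__n_accum_bits": 8,
    "module__n_hidden_neurons_multiplier": 1,
    "module__input_dim": 10,
    "module__n_outputs": 2,
    "module__activation_function": nn.ReLU,
    "max_epochs": 5,
}

model = NeuralNetClassifier(**params)
model.fit(X, y)                          # torch training + quantization
model.compile(X, use_virtual_lib=True)   # simulated FHE, useful for quick checks
y_pred = model.predict(X[:5], execute_in_fhe=True)
```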
BaseTreeEstimatorMixin
Mixin class for tree-based estimators.
A place to share methods that are used on all tree-based estimators.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
BaseTreeRegressorMixin
Mixin class for tree-based regressors.
A place to share methods that are used on all tree-based regressors.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): The target data.
**kwargs
: kwargs for super().fit
Returns:
Any
: The fitted model.
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict the probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted probabilities.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
BaseTreeClassifierMixin
Mixin class for tree-based classifiers.
A place to share methods that are used on all tree-based classifiers.
__init__
Initialize the TreeBasedEstimatorMixin.
Args:
n_bits
(int): number of bits used for quantization
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
compile
Compile the model.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Dequantize the integer predictions.
Args:
y_preds
(numpy.ndarray): the predictions
Returns: the dequantized predictions
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): The target data.
**kwargs
: kwargs for super().fit
Returns:
Any
: The fitted model.
fit_benchmark
Fit the sklearn tree-based model and the FHE tree-based model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: args for super().fit
**kwargs
: kwargs for super().fit
Returns: Tuple[ConcreteEstimators, SklearnEstimators]: The FHE and sklearn tree-based models.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict the class with highest probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted target values.
predict_proba
Predict the probability.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute in FHE. Defaults to False.
Returns:
numpy.ndarray
: The predicted probabilities.
quantize_input
Quantize the input.
Args:
X
(numpy.ndarray): the input
Returns: the quantized input
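A hedged sketch of the tree-based workflow described above (fit, compile, then predict with execute_in_fhe), using the DecisionTreeClassifier exposed by concrete.ml.sklearn on synthetic data; the n_bits and max_depth values are illustrative.

```python
import numpy
from concrete.ml.sklearn import DecisionTreeClassifier

rng = numpy.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(numpy.int64)

model = DecisionTreeClassifier(n_bits=6, max_depth=4)
model.fit(X, y)

# Compile on a representative dataset; the virtual lib simulates FHE execution.
model.compile(X, use_virtual_lib=True)
y_pred = model.predict(X[:3], execute_in_fhe=True)
```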
SklearnLinearModelMixin
A Mixin class for sklearn linear models with FHE.
__init__
Initialize the FHE linear model.
Args:
n_bits
(int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, it should contain the keys "model_inputs", "op_inputs", "op_weights" and "model_outputs" with the corresponding number of quantization bits:
- model_inputs: number of bits for the model input
- op_inputs: number of bits to quantize layer input values
- op_weights: learned parameters or constants in the network
- model_outputs: final model output quantization bits
Default to 2.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
clean_graph
Clean the graph of the onnx model.
This removes the Cast nodes from the model's ONNX graph, since they have no use in quantized or FHE models.
compile
Compile the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
fit
Fit the FHE linear model.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns: Any
fit_benchmark
Fit the sklearn linear model and the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: The arguments to pass to the sklearn linear model (forwarded to super().fit).
**kwargs
: kwargs for super().fit
Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.
post_processing
Post-processing the output.
Args:
y_preds
(numpy.ndarray): the output to post-process
Returns:
numpy.ndarray
: the post-processed output
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit
Args:
X
(numpy.ndarray): the input data
execute_in_fhe
(bool): whether to execute the inference in FHE
Returns:
numpy.ndarray
: the prediction as ordinals
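A minimal sketch of the linear-model workflow above, assuming the LinearRegression exposed by concrete.ml.sklearn; the dataset and the n_bits value are illustrative.

```python
import numpy
from concrete.ml.sklearn import LinearRegression

rng = numpy.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 4))
y = X @ rng.uniform(size=4)

model = LinearRegression(n_bits=8)
model.fit(X, y)
model.compile(X, use_virtual_lib=True)   # simulated FHE

y_clear = model.predict(X[:5])                      # quantized clear inference
y_fhe = model.predict(X[:5], execute_in_fhe=True)   # inference on the compiled circuit
```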
SklearnLinearClassifierMixin
A Mixin class for sklearn linear classifiers with FHE.
__init__
Initialize the FHE linear model.
Args:
n_bits
(int, Dict): Number of bits to quantize the model. If an int is passed for n_bits, the value will be used for activation, inputs and weights. If a dict is passed, it should contain the keys "model_inputs", "op_inputs", "op_weights" and "model_outputs" with the corresponding number of quantization bits:
- model_inputs: number of bits for the model input
- op_inputs: number of bits to quantize layer input values
- op_weights: learned parameters or constants in the network
- model_outputs: final model output quantization bits
Default to 2.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
clean_graph
Clean the graph of the onnx model.
Any operators following gemm, including the sigmoid, softmax and argmax operators, are removed from the graph. They will be executed in clear in the post-processing method.
compile
Compile the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths with simulated FHE computation. Defaults to False
p_error
(Optional[float]): probability of error of a PBS
Returns:
Circuit
: the compiled Circuit.
decision_function
Predict confidence scores for samples.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: Confidence scores for samples.
fit
Fit the FHE linear model.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns: Any
fit_benchmark
Fit the sklearn linear model and the FHE linear model.
Args:
X
(numpy.ndarray): The input data.
y
(numpy.ndarray): The target data.
random_state
(Optional[Union[int, numpy.random.RandomState, None]]): The random state. Defaults to None.
*args
: The arguments to pass to the sklearn linear model (forwarded to super().fit).
**kwargs
: kwargs for super().fit
Returns: Tuple[SklearnLinearModelMixin, sklearn.linear_model.LinearRegression]: The FHE and sklearn LinearRegression.
post_processing
Post-processing the predictions.
This step may include a dequantization of the inputs if not done previously, in particular within the client-server workflow.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: The prediction as ordinals.
predict_proba
Predict class probabilities for samples.
Args:
X
(numpy.ndarray): Samples to predict.
execute_in_fhe
(bool): If True, the inference will be executed in FHE. Default to False.
Returns:
numpy.ndarray
: Class probabilities for samples.
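The classifier mixin adds decision_function and predict_proba on top of the regression workflow; a hedged sketch with LogisticRegression follows (synthetic data and settings are illustrative).

```python
import numpy
from concrete.ml.sklearn import LogisticRegression

rng = numpy.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 4))
y = (X[:, 0] > 0).astype(numpy.int64)

clf = LogisticRegression(n_bits=2)
clf.fit(X, y)
clf.compile(X, use_virtual_lib=True)

scores = clf.decision_function(X[:3])                   # confidence scores, computed in the clear
probas = clf.predict_proba(X[:3], execute_in_fhe=True)  # class probabilities via the compiled circuit
```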
concrete.ml.quantization.quantizers
Quantization utilities for a numpy array/tensor.
STABILITY_CONST
fill_from_kwargs
Fill a parameter set structure from kwargs parameters.
Args:
obj
: an object of type klass, if None the object is created if any of the type's members appear in the kwargs
klass
: the type of object to fill
kwargs
: parameter names and values to fill into an instance of the klass type
Returns:
obj
: an object of type klass
kwargs
: remaining parameter names and values that were not filled into obj
Raises:
TypeError
: if the types of the parameters in kwargs could not be converted to the corresponding types of members of klass
QuantizationOptions
Options for quantization.
Determines the number of bits for quantization and the method of quantization of the values. Signed quantization allows negative quantized values. Symmetric quantization assumes the float values are distributed symmetrically around x=0 and assigns signed values around 0 to the float values. QAT (quantization aware training) quantization assumes the values are already quantized, taking a discrete set of values, and assigns these values to integers, computing only the scale.
__init__
property quant_options
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
copy_opts
Copy the options from a different structure.
Args:
opts
(QuantizationOptions): structure to copy parameters from.
MinMaxQuantizationStats
Calibration set statistics.
This class stores the statistics for the calibration set or for a calibration data batch. Currently we only store min/max to determine the quantization range. The min/max are computed from the calibration set.
property quant_stats
Get a copy of the calibration set statistics.
Returns:
MinMaxQuantizationStats
: a copy of the current quantization stats
check_is_uniform_quantized
Check if these statistics correspond to uniformly quantized values.
Determines whether the values represented by this QuantizedArray show a quantized structure that makes it possible to infer the scale of quantization.
Args:
options
(QuantizationOptions): used to quantize the values in the QuantizedArray
Returns:
bool
: check result.
compute_quantization_stats
Compute the calibration set quantization statistics.
Args:
values
(numpy.ndarray): Calibration set on which to compute statistics.
copy_stats
Copy the statistics from a different structure.
Args:
stats
(MinMaxQuantizationStats): structure to copy statistics from.
UniformQuantizationParameters
Quantization parameters for uniform quantization.
This class stores the parameters used for quantizing real values to discrete integer values. The parameters are computed from quantization options and quantization statistics.
property quant_params
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
compute_quantization_parameters
Compute the quantization parameters.
Args:
options
(QuantizationOptions): quantization options set
stats
(MinMaxQuantizationStats): calibrated statistics for quantization
copy_params
Copy the parameters from a different structure.
Args:
params
(UniformQuantizationParameters): parameter structure to copy
UniformQuantizer
Uniform quantizer.
Contains all information necessary for uniform quantization and provides quantization/dequantization functionality on numpy arrays.
Args:
options
(QuantizationOptions): Quantization options set
stats
(Optional[MinMaxQuantizationStats]): Quantization batch statistics set
params
(Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)
__init__
property quant_options
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
property quant_params
Get a copy of the quantization parameters.
Returns:
UniformQuantizationParameters
: a copy of the current quantization parameters
property quant_stats
Get a copy of the calibration set statistics.
Returns:
MinMaxQuantizationStats
: a copy of the current quantization stats
check_is_uniform_quantized
Check if these statistics correspond to uniformly quantized values.
Determines whether the values represented by this QuantizedArray show a quantized structure that makes it possible to infer the scale of quantization.
Args:
options
(QuantizationOptions): used to quantize the values in the QuantizedArray
Returns:
bool
: check result.
compute_quantization_parameters
Compute the quantization parameters.
Args:
options
(QuantizationOptions): quantization options set
stats
(MinMaxQuantizationStats): calibrated statistics for quantization
compute_quantization_stats
Compute the calibration set quantization statistics.
Args:
values
(numpy.ndarray): Calibration set on which to compute statistics.
copy_opts
Copy the options from a different structure.
Args:
opts
(QuantizationOptions): structure to copy parameters from.
copy_params
Copy the parameters from a different structure.
Args:
params
(UniformQuantizationParameters): parameter structure to copy
copy_stats
Copy the statistics from a different structure.
Args:
stats
(MinMaxQuantizationStats): structure to copy statistics from.
dequant
Dequantize values.
Args:
qvalues
(numpy.ndarray): integer values to dequantize
Returns:
numpy.ndarray
: Dequantized float values.
quant
Quantize values.
Args:
values
(numpy.ndarray): float values to quantize
Returns:
numpy.ndarray
: Integer quantized values.
QuantizedArray
Abstraction of quantized array.
Contains float values and their quantized integer counterparts. Quantization is performed by the quantizer member object. Float and int values are kept in sync. Having both types of values is useful since quantized operators in Concrete ML graphs might need one or the other depending on how the operator works (in float or in int). Moreover, when the encrypted function needs to return a value, it must return integer values.
See https://arxiv.org/abs/1712.05877.
Args:
values
(numpy.ndarray): Values to be quantized.
n_bits
(int): The number of bits to use for quantization.
value_is_float
(bool, optional): Whether the passed values are real (float) values or not. If False, the values will be quantized according to the passed scale and zero_point. Defaults to True.
options
(QuantizationOptions): Quantization options set
stats
(Optional[MinMaxQuantizationStats]): Quantization batch statistics set
params
(Optional[UniformQuantizationParameters]): Quantization parameters set (scale, zero-point)
kwargs
: Any member of the options, stats, params sets as a key-value pair. The parameter sets need to be completely parametrized if their members appear in kwargs.
__init__
dequant
Dequantize self.qvalues.
Returns:
numpy.ndarray
: Dequantized values.
quant
Quantize self.values.
Returns:
numpy.ndarray
: Quantized values.
update_quantized_values
Update qvalues to get their corresponding values using the related quantized parameters.
Args:
qvalues
(numpy.ndarray): Values to replace self.qvalues
Returns:
values
(numpy.ndarray): Corresponding values
update_values
Update values to get their corresponding qvalues using the related quantized parameters.
Args:
values
(numpy.ndarray): Values to replace self.values
Returns:
qvalues
(numpy.ndarray): Corresponding qvalues
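A short sketch of QuantizedArray round-tripping float values through quantization; the 3-bit setting is arbitrary, and keyword arguments are used to avoid relying on positional order.

```python
import numpy
from concrete.ml.quantization.quantizers import QuantizedArray

values = numpy.linspace(-1.0, 1.0, 10)

# Quantization parameters are computed from the values at construction time.
q_arr = QuantizedArray(n_bits=3, values=values)

print(q_arr.qvalues)    # integer representation
print(q_arr.dequant())  # approximate float reconstruction (quantization error expected)
```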
concrete.ml.sklearn.glm
Implement sklearn's Generalized Linear Models (GLM).
PoissonRegressor
A Poisson regression model with FHE.
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
Fit the GLM regression quantized model.
Args:
X
: The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
GammaRegressor
A Gamma regression model with FHE.
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
Fit the GLM regression quantized model.
Args:
X
: The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
TweedieRegressor
A Tweedie regression model with FHE.
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
Fit the GLM regression quantized model.
Args:
X
: The training data, which can be: * numpy arrays * torch tensors * pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
post_processing
Post-processing the predictions.
Args:
y_preds
(numpy.ndarray): The predictions to post-process.
already_dequantized
(bool): Whether the inputs were already dequantized or not. Default to False.
Returns:
numpy.ndarray
: The post-processed predictions.
predict
Predict on user data.
Predict on user data using either the quantized clear model, implemented with tensors, or, if execute_in_fhe is set, using the compiled FHE circuit.
Args:
X
(numpy.ndarray): The input data.
execute_in_fhe
(bool): Whether to execute the inference in FHE. Default to False.
Returns:
numpy.ndarray
: The model's predictions.
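The three GLM regressors share the same workflow; a hedged sketch with PoissonRegressor follows (synthetic count targets, illustrative settings).

```python
import numpy
from concrete.ml.sklearn import PoissonRegressor

rng = numpy.random.RandomState(0)
X = rng.uniform(0, 1, size=(100, 3))
y = rng.poisson(lam=numpy.exp(X[:, 0]))   # Poisson regression expects non-negative targets

model = PoissonRegressor(n_bits=2)
model.fit(X, y)
model.compile(X, use_virtual_lib=True)
y_pred = model.predict(X[:5], execute_in_fhe=True)
```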
concrete.ml.quantization.quantized_ops
Quantized versions of the ONNX operators for post training quantization.
QuantizedSigmoid
Quantized sigmoid op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedHardSigmoid
Quantized HardSigmoid op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedRelu
Quantized Relu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedPRelu
Quantized PRelu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLeakyRelu
Quantized LeakyRelu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedHardSwish
Quantized Hardswish op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedElu
Quantized Elu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSelu
Quantized Selu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedCelu
Quantized Celu op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedClip
Quantized clip op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedRound
Quantized round op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedPow
Quantized pow op.
Only works for a float constant power. This operation will be fused to a (potentially larger) TLU.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedGemm
Quantized Gemm op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Gemm operation can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.
Returns:
bool
: False, this operation can not be fused as it adds different encrypted integers
q_impl
QuantizedMatMul
Quantized MatMul op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
MatMul, like Gemm, can not be fused since it must be performed over integer tensors and it combines different values of the input tensors.
Returns:
bool
: False, this operation can not be fused as it adds different encrypted integers
q_impl
QuantizedAdd
Quantized Addition operator.
Can add either two variables (both encrypted) or a variable and a constant
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Add operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x + x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
q_impl
QuantizedTanh
Quantized Tanh op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSoftplus
Quantized Softplus op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedExp
Quantized Exp op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLog
Quantized Log op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedAbs
Quantized Abs op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedIdentity
Quantized Identity op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
QuantizedReshape
Quantized Reshape op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
Reshape the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional reshape options
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedConv
Quantized Conv op.
__init__
Construct the quantized convolution operator and retrieve parameters.
Args:
n_bits_output
: number of bits for the quantization of the outputs of this operator
int_input_names
: names of integer tensors that are taken as input for this operation
constant_inputs
: the weights and activations
input_quant_opts
: options for the input quantizer
attrs
: convolution options
dilations
(Tuple[int]): dilation of the kernel, default 1 on all dimensions.
group
(int): number of convolution groups, default 1
kernel_shape
(Tuple[int]): shape of the kernel. Should have 2 elements for 2d conv
pads
(Tuple[int]): padding in ONNX format (begin, end) on each axis
strides
(Tuple[int]): stride of the convolution on each axis
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Conv operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation can not be fused as it adds different encrypted integers
q_impl
Compute the quantized convolution between two quantized tensors.
Allows an optional quantized bias.
Args:
q_inputs
: input tuple, contains
x
(numpy.ndarray): input data. Shape is N x C x H x W for 2d
w
(numpy.ndarray): weights tensor. Shape is (O x I x Kh x Kw) for 2d
b
(numpy.ndarray, Optional): bias tensor, Shape is (O,)
attrs
: convolution options handled in constructor
Returns:
res
(QuantizedArray): result of the quantized integer convolution
QuantizedAvgPool
Quantized Average Pooling op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Avg Pooling operation can not be fused since it must be performed over integer tensors and it combines different elements of the input tensors.
Returns:
bool
: False, this operation can not be fused as it adds different encrypted integers
q_impl
QuantizedPad
Quantized Padding op.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Pad operation can not be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation can not be fused as it manipulates integer tensors
QuantizedWhere
Where operator on quantized arrays.
Supports only constants for the results produced on the True/False branches.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedCast
Cast the input to the required data type.
In FHE we only support a limited number of output types. Booleans are cast to integers.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedGreater
Comparison operator >.
Only supports comparison with a constant.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedGreaterOrEqual
Comparison operator >=.
Only supports comparison with a constant.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLess
Comparison operator <.
Only supports comparison with a constant.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedLessOrEqual
Comparison operator <=.
Only supports comparison with a constant.
__init__
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedOr
Or operator ||.
This operation does not really work as a standalone quantized operation. It only works when things get fused, e.g. Act(x) = x || (x + 42).
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedDiv
Div operator /.
This operation does not really work as a standalone quantized operation. It only works when things get fused, e.g. Act(x) = 1000 / (x + 42).
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedMul
Multiplication operator.
Only multiplies an encrypted tensor with a float constant for now. This operation will be fused to a (potentially larger) TLU.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSub
Subtraction operator.
This works the same as addition on both encrypted - encrypted and on encrypted - constant.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Like addition, the Sub operation can be computed in float and fused if it operates over inputs produced by a single integer tensor. For example the expression x - x * 1.75, where x is an encrypted tensor, can be computed with a single TLU.
Returns:
bool
: Whether the number of integer input tensors allows computing this op as a TLU
q_impl
QuantizedBatchNormalization
Quantized Batch normalization with encrypted input and in-the-clear normalization params.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedFlatten
Quantized flatten for encrypted inputs.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
can_fuse
Determine if this op can be fused.
Flatten operation can not be fused since it must be performed over integer tensors.
Returns:
bool
: False, this operation can not be fused as it manipulates integer tensors.
q_impl
Flatten the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0
attrs
: contains axis attribute
Returns:
result
(QuantizedArray): reshaped encrypted integer tensor
QuantizedReduceSum
ReduceSum with encrypted input.
This operator is currently an experimental feature.
__init__
Construct the quantized ReduceSum operator and retrieve parameters.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs.
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
axes
(Optional[numpy.ndarray]): Array of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). Default to None.
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict): ReduceSum options.
keepdims
(int): Keep the reduced dimension or not, 1 means keeping the input dimension, 0 will reduce it along the given axis. Default to 1.
noop_with_empty_axes
(int): Defines the behavior if 'axes' is empty or set to None. The default behavior (0) is to reduce all axes. When 'axes' is empty and this attribute is set to 1 (true), the input tensor is not reduced and the output tensor is equivalent to the input tensor. Default to 0.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
calibrate
Create corresponding QuantizedArray for the output of the activation function.
Args:
*inputs (numpy.ndarray)
: Calibration sample inputs.
Returns:
numpy.ndarray
: the output values for the provided calibration samples.
q_impl
Sum the encrypted tensor's values over axis 1.
Args:
q_inputs
(QuantizedArray): An encrypted integer tensor at index 0.
attrs
(Dict): Contains axis attribute.
Returns:
(QuantizedArray)
: The sum of all values along axis 1 as an encrypted integer tensor.
tree_sum
Large sum without overflow (only MSB remains).
Args:
input_qarray
: Encrypted integer tensor.
is_calibration
: Whether we are calibrating the tree sum. If so, it will create all the quantizers for the downscaling.
Returns:
(numpy.ndarray)
: The MSB (based on the precision self.n_bits) of the integers sum.
QuantizedErf
Quantized erf op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedNot
Quantized Not op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedBrevitasQuant
Brevitas uniform quantization with encrypted input.
__init__
Construct the Brevitas quantization operator.
Args:
n_bits_output
(int): Number of bits for the operator's quantization of outputs. Not used, will be overridden by the bit_width in ONNX
int_input_names
(Optional[Set[str]]): Names of input integer tensors. Default to None.
constant_inputs
(Optional[Dict]): Input constant tensor.
scale
(float): Quantizer scale
zero_point
(float): Quantizer zero-point
bit_width
(int): Number of bits of the integer representation
input_quant_opts
(Optional[QuantizationOptions]): Options for the input quantizer. Default to None.
attrs
(dict): operator attributes, including:
rounding_mode
(str): Rounding mode (default and only accepted option is "ROUND")
signed
(int): Whether this op quantizes to signed integers (default 1),
narrow
(int): Whether this op quantizes to a narrow range of integers, e.g. [-2^(n_bits-1) .. 2^(n_bits-1)] (default 0)
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
Quantize values.
Args:
q_inputs
: an encrypted integer tensor at index 0 and one constant shape at index 1
attrs
: additional optional quantization options
Returns:
result
(QuantizedArray): quantized output tensor
QuantizedTranspose
Transpose operator for quantized inputs.
This operator performs quantization, transposes the encrypted data, then dequantizes again.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
q_impl
Transpose the input integer encrypted tensor.
Args:
q_inputs
: an encrypted integer tensor at index 0
attrs
: additional optional transpose options
Returns:
result
(QuantizedArray): transposed encrypted integer tensor
QuantizedFloor
Quantized Floor op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedMax
Quantized Max op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedMin
Quantized Min op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedNeg
Quantized Neg op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
QuantizedSign
Quantized Sign op.
property op_type
Get the type of this operation.
Returns:
op_type
(str): The type of this operation, in the ONNX referential
concrete.ml.quantization.quantized_module
QuantizedModule API.
DEFAULT_P_ERROR_PBS
QuantizedModule
Inference for a quantized model.
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property is_compiled
Return the compiled status of the module.
Returns:
bool
: the compiled status of the module.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
_onnx_model
(onnx.ModelProto): the ONNX model
property post_processing_params
Get the post-processing parameters.
Returns:
Dict[str, Any]
: the post-processing parameters
compile
Compile the forward function of the module.
Args:
q_inputs
(Union[Tuple[numpy.ndarray, ...], numpy.ndarray]): Needed for tracing and building the boundaries.
configuration
(Optional[Configuration]): Configuration object to use during compilation
compilation_artifacts
(Optional[DebugArtifacts]): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo. Defaults to False.
use_virtual_lib
(bool): set to True to use the so called virtual lib simulating FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a PBS.
Returns:
Circuit
: the compiled Circuit.
dequantize_output
Take the last layer q_out and use its dequant function.
Args:
qvalues
(numpy.ndarray): Quantized values of the last layer.
Returns:
numpy.ndarray
: Dequantized values of the last layer.
forward
Forward pass with numpy function only.
Args:
*qvalues (numpy.ndarray)
: numpy.array containing the quantized values.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
forward_and_dequant
Forward pass with numpy function only plus dequantization.
Args:
*q_x (numpy.ndarray)
: numpy.ndarray containing the quantized input values. Requires the input dtype to be uint8.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
post_processing
Post-processing of the quantized output.
Args:
qvalues
(numpy.ndarray): numpy.ndarray containing the quantized input values.
Returns:
(numpy.ndarray)
: Predictions of the quantized model
quantize_input
Take the fp32 inputs and quantize them using the learned quantization parameters.
Args:
*values (numpy.ndarray)
: Floating point values.
Returns:
Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]
: Quantized (numpy.uint32) values.
set_inputs_quantization_parameters
Set the quantization parameters for the module's inputs.
Args:
*input_q_params (UniformQuantizer)
: The quantizer(s) for the module.
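A QuantizedModule is typically obtained by compiling a torch model; the sketch below, assuming compile_torch_model from concrete.ml.torch.compile and an illustrative two-layer network, exercises quantize_input, forward and dequantize_output.

```python
import numpy
import torch
from concrete.ml.torch.compile import compile_torch_model

class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

x_calib = numpy.random.uniform(-1, 1, size=(100, 4)).astype(numpy.float32)
quantized_module = compile_torch_model(TinyMLP(), x_calib, n_bits=3, use_virtual_lib=True)

# Quantize a batch, run the integer-only forward pass, then dequantize the result.
q_x = quantized_module.quantize_input(x_calib[:5])
q_y = quantized_module.forward(q_x)
y = quantized_module.dequantize_output(q_y)
```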
concrete.ml.sklearn.protocols
Protocols.
Protocols are used to mix type hinting with duck-typing. Indeed, we don't always want an abstract parent class shared by all objects; we are more interested in the behavior of such objects. Implementing a Protocol is a way to specify that expected behavior.
To read more about Protocol please read: https://peps.python.org/pep-0544
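As a small illustration of the idea (the names below are hypothetical and not part of Concrete ML), any object exposing the right method satisfies a Protocol without inheriting from it:

```python
from typing import Protocol

import numpy


class SupportsQuant(Protocol):
    """Hypothetical protocol: anything with a matching quant() method conforms."""

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        ...


class HalfScaler:
    """Toy quantizer-like object; it never inherits from SupportsQuant."""

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        return numpy.round(values / 0.5).astype(numpy.int64)


def quantize_batch(quantizer: SupportsQuant, batch: numpy.ndarray) -> numpy.ndarray:
    # Accepted purely on behavior (duck typing), checked statically via the Protocol.
    return quantizer.quant(batch)


print(quantize_batch(HalfScaler(), numpy.array([0.1, 0.9, -0.4])))
```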
Quantizer
Quantizer Protocol.
To use to type hint a quantizer.
dequant
Dequantize some values.
Args:
X
(numpy.ndarray): Values to dequantize
.. # noqa: DAR202
Returns:
numpy.ndarray
: Dequantized values
quant
Quantize some values.
Args:
values
(numpy.ndarray): Values to quantize
.. # noqa: DAR202
Returns:
numpy.ndarray
: The quantized values
ConcreteBaseEstimatorProtocol
A Concrete Estimator Protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a PBS
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns the quantized estimator (itself) as well as a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process models predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
ConcreteBaseClassifierProtocol
Concrete classifier protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a PBS
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns the quantized estimator (itself) as well as a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process models predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
predict
Predicts for each sample the class with highest probability.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in fhe or not.
.. # noqa: DAR202
Returns: numpy.ndarray
predict_proba
Predicts for each sample the probability of each class.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in fhe or not.
.. # noqa: DAR202
Returns: numpy.ndarray
ConcreteBaseRegressorProtocol
Concrete regressor protocol.
property onnx_model
onnx_model.
.. # noqa: DAR202
Returns: onnx.ModelProto
property quantize_input
Quantize input function.
compile
Compiles a model to a FHE Circuit.
Args:
X
(numpy.ndarray): the dequantized dataset
configuration
(Optional[Configuration]): the options for compilation
compilation_artifacts
(Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir
(bool): whether or not to show MLIR during the compilation
use_virtual_lib
(bool): whether to compile using the virtual library that allows higher bitwidths
p_error
(float): probability of error of a PBS
.. # noqa: DAR202
Returns:
Circuit
: the compiled Circuit.
fit
Initialize and fit the module.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
**fit_params
: additional parameters that can be used during training
.. # noqa: DAR202
Returns:
ConcreteBaseEstimatorProtocol
: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns the quantized estimator (itself) as well as a wrapper around the non-quantized trained NN. This is useful to compare the performance of the quantized and fp32 versions of the classifier.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): labels associated with training data
*args
: The arguments to pass to the underlying model.
**kwargs
: The keyword arguments to pass to the underlying model.
.. # noqa: DAR202
Returns:
self
: self fitted
model
: underlying estimator
post_processing
Post-process models predictions.
Args:
y_preds
(numpy.ndarray): predicted values by model (clear-quantized)
.. # noqa: DAR202
Returns:
numpy.ndarray
: the post-processed predictions
predict
Predicts for each sample the expected value.
Args:
X
(numpy.ndarray): Features
execute_in_fhe
(bool): Whether the inference should be done in fhe or not.
.. # noqa: DAR202
Returns: numpy.ndarray
concrete.ml.sklearn.svm
Implement Support Vector Machine.
LinearSVR
A Regression Support Vector Machine (SVM).
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
LinearSVC
A Classification Support Vector Machine (SVM).
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
concrete.ml.sklearn.rf
Implements RandomForest models.
RandomForestClassifier
Implements the RandomForest classifier.
__init__
Initialize the RandomForestClassifier.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
RandomForestRegressor
Implements the RandomForest regressor.
__init__
Initialize the RandomForestRegressor.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
concrete.ml.sklearn.tree
Implement the sklearn tree models.
DecisionTreeClassifier
Implements the sklearn DecisionTreeClassifier.
__init__
Initialize the DecisionTreeClassifier.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
DecisionTreeRegressor
Implements the sklearn DecisionTreeRegressor.
__init__
Initialize the DecisionTreeRegressor.
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
concrete.ml.sklearn.linear_model
Implement sklearn linear model.
LinearRegression
A linear regression model with FHE.
Arguments:
n_bits
(int): default is 2.
use_sum_workaround
(bool): Indicate if the sum workaround should be used or not. This feature is experimental and should be used carefully. Important note: it only works for a LinearRegression model with N features, N a power of 2, for now. More information is available in the QuantizedReduceSum operator. Default to False. A usage sketch is given after this class's method listing below.
For more details on LinearRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
.. # noqa: DAR201
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
Fit the FHE linear model.
Args:
X
: training data. By default, you should be able to pass: numpy arrays, torch tensors, pandas DataFrame or Series
y
(numpy.ndarray): The target data.
*args
: The arguments to pass to the sklearn linear model.
**kwargs
: The keyword arguments to pass to the sklearn linear model.
Returns: Any
ElasticNet
An ElasticNet regression model with FHE.
Arguments:
n_bits
(int): default is 2.
For more details on ElasticNet please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
Lasso
A Lasso regression model with FHE.
Arguments:
n_bits
(int): default is 2.
For more details on Lasso please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
Ridge
A Ridge regression model with FHE.
Arguments:
n_bits
(int): default is 2.
For more details on Ridge please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
LogisticRegression
A logistic regression model with FHE.
Arguments:
n_bits
(int): default is 2.
For more details on LogisticRegression please refer to the scikit-learn documentation: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
__init__
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[QuantizedArray]
: the input quantizers
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
concrete.ml.sklearn.tree_to_numpy
Implements the conversion of a tree model to a numpy function.
MAXIMUM_TLU_BIT_WIDTH
OPSET_VERSION_FOR_ONNX_EXPORT
EXPECTED_NUMBER_OF_OUTPUTS_PER_TASK
tree_to_numpy
Convert the tree inference to numpy functions using Hummingbird.
Args:
model
(onnx.ModelProto): The model to convert.
x
(numpy.ndarray): The input data.
framework
(str): The framework from which the onnx_model is generated (options: 'xgboost', 'sklearn').
task
(Task): The task the model is solving
output_n_bits
(int): The number of bits of the output.
Returns:
Tuple[Callable, List[QuantizedArray], onnx.ModelProto]
: A tuple with a function that takes a numpy array and returns a numpy array, QuantizedArray object to quantize and dequantize the output of the tree, and the ONNX model.
Task
Task enumerate.
concrete.ml.sklearn.qnn
Scikit-learn interface for concrete quantized neural networks.
MAXIMUM_TLU_BIT_WIDTH
SparseQuantNeuralNetImpl
Sparse Quantized Neural Network classifier.
This class implements an MLP that is compatible with FHE constraints. The weights and activations are quantized to low bit-width and pruning is used to ensure accumulators do not surpass a user-provided accumulator bit-width. The number of classes and number of layers are specified by the user, as well as the breadth of the network.
__init__
Sparse Quantized Neural Network constructor.
Args:
input_dim
: Number of dimensions of the input data
n_layers
: Number of linear layers for this network
n_outputs
: Number of output classes or regression targets
n_w_bits
: Number of weight bits
n_a_bits
: Number of activation and input bits
n_accum_bits
: Maximal allowed bitwidth of intermediate accumulators
n_hidden_neurons_multiplier
: A factor that is multiplied by the maximal number of active (non-zero weight) neurons for every layer. The maximal number of active neurons in the worst-case scenario is max_active_neurons(n_max, n_w, n_a) = floor((2^n_max - 1) / ((2^n_w - 1) * (2^n_a - 1))). The worst-case scenario for the bit-width of the accumulator is when all weights and activations are at their maximum simultaneously. For each layer, the total number of neurons is set to n_hidden_neurons_multiplier * max_active_neurons(n_accum_bits, n_w_bits, n_a_bits). For example, with n_accum_bits=7, n_w_bits=2 and n_a_bits=2, max_active_neurons = floor(127 / 9) = 14, so the default multiplier of 4 yields 56 neurons per layer. Through experiments, for typical distributions of weights and activations, the default value of 4 for n_hidden_neurons_multiplier is safe to avoid overflow.
activation_function
: a torch class that is used to construct activation functions in the network (e.g. torch.ReLU, torch.SELU, torch.Sigmoid, etc)
Raises:
ValueError
: if the parameters have invalid values or the computed accumulator bitwidth is zero
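The sketch below only instantiates the module with the parameters documented above; all numeric values are arbitrary choices for illustration, and torch.nn.ReLU is just one valid activation_function.

```python
# Sketch of constructing the sparse quantized MLP described above (illustrative values).
import torch
from concrete.ml.sklearn.qnn import SparseQuantNeuralNetImpl

model = SparseQuantNeuralNetImpl(
    input_dim=10,                    # number of input features
    n_layers=2,                      # number of linear layers
    n_outputs=2,                     # number of classes or regression targets
    n_w_bits=2,                      # weight bit-width
    n_a_bits=2,                      # activation/input bit-width
    n_accum_bits=7,                  # maximum accumulator bit-width
    n_hidden_neurons_multiplier=4,   # documented default
    activation_function=torch.nn.ReLU,
)

# With these values: floor((2**7 - 1) / ((2**2 - 1) * (2**2 - 1))) == 14 active neurons.
print(model.max_active_neurons())
```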
enable_pruning
Enable pruning in the network. Pruning must be made permanent to recover pruned weights.
Raises:
ValueError
: if the quantization parameters are invalid
forward
Forward pass.
Args:
x
(torch.Tensor): network input
Returns:
x
(torch.Tensor): network prediction
make_pruning_permanent
Make the learned pruning permanent in the network.
max_active_neurons
Compute the maximum number of active (non-zero weight) neurons.
The computation is done using the quantization parameters passed to the constructor. Warning: With the current quantization algorithm (asymmetric) the value returned by this function is not guaranteed to ensure FHE compatibility. For some weight distributions, weights that are 0 (which are pruned weights) will not be quantized to 0. Therefore the total number of active quantized neurons will not be equal to max_active_neurons.
Returns:
n
(int): maximum number of active neurons
on_train_end
Callback for when training is finished; can be useful to remove training hooks.
QuantizedSkorchEstimatorMixin
Mixin class that adds quantization features to Skorch NN estimators.
property base_estimator_type
Get the sklearn estimator that should be trained by the child class.
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.
property onnx_model
Get the ONNX model.
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. For this to work, we need to remove all module-related constructor params; otherwise, skorch will instantiate a new class instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward method of the module and to the self.train_split call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
FixedTypeSkorchNeuralNet
A mixin with a helpful modification to a skorch estimator that fixes the module type.
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
NeuralNetClassifier
Scikit-learn interface for quantized FHE compatible neural networks.
This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
The datatypes that are allowed for prediction by this wrapper are more restricted than for standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
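Because this wrapper follows skorch conventions around SparseQuantNeuralNetImpl, a plausible usage sketch is shown below. The module__-prefixed parameter names and the compile call are assumptions based on skorch's naming scheme and the rest of this reference; only predict's execute_in_fhe flag is documented here.

```python
# Hedged sketch: parameter names assume the skorch "module__" convention applied to
# the SparseQuantNeuralNetImpl constructor documented in this module.
import numpy
import torch
from concrete.ml.sklearn.qnn import NeuralNetClassifier

X = numpy.random.uniform(size=(200, 10)).astype(numpy.float32)
y = (X.sum(axis=1) > 5).astype(numpy.int64)

clf = NeuralNetClassifier(
    module__input_dim=10,
    module__n_layers=2,
    module__n_outputs=2,
    module__n_w_bits=2,
    module__n_a_bits=2,
    module__n_accum_bits=7,
    module__activation_function=torch.nn.ReLU,
    max_epochs=10,                 # standard skorch training parameter
)
clf.fit(X, y)                      # quantization-aware training in the clear
clf.compile(X)                     # assumed: compile the trained network to FHE
print(clf.predict(X[:2], execute_in_fhe=True))  # documented predict flag
```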
__init__
property base_estimator_type
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property classes_
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.
property onnx_model
Get the ONNX model.
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. For this to work, we need to remove all module-related constructor params; otherwise, skorch will instantiate a new class instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward method of the module and to the self.train_split call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
predict
Predict on user provided data.
Predicts using the quantized classifier, either in the clear or in FHE.
Args:
X
: input data, a numpy array of raw values (non quantized)
execute_in_fhe
: whether to execute the inference in FHE or in the clear
Returns:
y_pred
: numpy ndarray with predictions
NeuralNetRegressor
Scikit-learn interface for quantized FHE compatible neural networks.
This class wraps a quantized NN implemented using our Torch tools as a scikit-learn Estimator. It uses the skorch package to handle training and scikit-learn compatibility, and adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
The datatypes that are allowed for prediction by this wrapper are more restricted than for standard scikit-learn estimators, as this class needs to predict in FHE and the network inference executor is the NumpyModule.
__init__
property base_estimator_type
property base_module_to_compile
Get the module that should be compiled to FHE. In our case this is a torch nn.Module.
Returns:
module
(nn.Module): the instantiated torch module
property fhe_circuit
Get the FHE circuit.
Returns:
Circuit
: the FHE circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[Quantizer]
: the input quantizers
property n_bits_quant
Return the number of quantization bits.
This is stored by the torch.nn.module instance and thus cannot be retrieved until this instance is created.
Returns:
n_bits
(int): the number of bits to quantize the network
Raises:
ValueError
: with skorch estimators, the module_ is not instantiated until .fit() is called. Thus this estimator needs to be fitted before the number of quantization bits can be retrieved; if it is not trained, an exception is raised.
property onnx_model
Get the ONNX model.
Returns:
_onnx_model_
(onnx.ModelProto): the ONNX model
property output_quantizers
Get the output quantizers.
Returns:
List[QuantizedArray]
: the output quantizers
property quantize_input
Get the input quantization function.
Returns:
Callable
: function that quantizes the input
fit
get_params
Get parameters for this estimator.
Args:
deep
(bool): If True, will return the parameters for this estimator and contained subobjects that are estimators.
**kwargs
: any additional parameters to pass to the sklearn BaseEstimator class
Returns:
params
: dict, Parameter names mapped to their values.
get_params_for_benchmark
Get parameters for benchmark when cloning a skorch wrapped NN.
We must remove all parameters related to the module. Skorch takes either a class or a class instance for the module parameter. We want to pass our trained model, a class instance. For this to work, we need to remove all module-related constructor params; otherwise, skorch will instantiate a new class instance of the same type as the passed module (see skorch net.py, NeuralNet::initialize_instance).
Returns:
params
(dict): parameters to create an equivalent fp32 sklearn estimator for benchmark
infer
Perform a single inference step on a batch of data.
This method is specific to Skorch estimators.
Args:
x
(torch.Tensor): A batch of the input data, produced by a Dataset
**fit_params (dict)
: Additional parameters passed to the forward method of the module and to the self.train_split call.
Returns: A torch tensor with the inference results for each item in the input
on_train_end
Call back when training is finished by the skorch wrapper.
Check if the underlying neural net has a callback for this event and, if so, call it.
Args:
net
: estimator for which training has ended (equal to self)
X
: data
y
: targets
kwargs
: other arguments
concrete.ml.torch.numpy_module
A torch to numpy module.
OPSET_VERSION_FOR_ONNX_EXPORT
NumpyModule
General interface to transform a torch.nn.Module to numpy module.
Args:
torch_model
(Union[nn.Module, onnx.ModelProto]): A fully trained, torch model along with its parameters or the onnx graph of the model.
dummy_input
(Union[torch.Tensor, Tuple[torch.Tensor, ...]]): Sample tensors for all the module inputs, used in the ONNX export to get a simple to manipulate nn representation.
debug_onnx_output_file_path
(Optional[Union[Path, str]]): An optional path indicating where to save the ONNX file exported by torch, for debugging. Defaults to None.
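A short sketch, assuming the constructor takes the two documented arguments in the order shown and that inference goes through the forward method documented below; the placeholder torch model stands in for any fully trained nn.Module.

```python
# Minimal sketch: convert a trained torch model to a numpy-only module.
import numpy
import torch
from concrete.ml.torch.numpy_module import NumpyModule

torch_model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
dummy_input = torch.randn(1, 4)     # sample tensor used for the ONNX export

numpy_module = NumpyModule(torch_model, dummy_input)

x = numpy.random.uniform(size=(3, 4)).astype(numpy.float32)
print(numpy_module.forward(x))      # forward pass computed with numpy only
```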
__init__
property onnx_model
Get the ONNX model.
Returns:
_onnx_model
(onnx.ModelProto): the ONNX model
forward
Apply a forward pass on args with the equivalent numpy function only.
Args:
*args
: the inputs of the forward function
Returns:
Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]
: result of the forward on the given inputs
concrete.ml.torch.compile
torch compilation function.
MAXIMUM_TLU_BIT_WIDTH
DEFAULT_P_ERROR_PBS
OPSET_VERSION_FOR_ONNX_EXPORT
convert_torch_tensor_or_numpy_array_to_numpy_array
Convert a torch tensor or a numpy array to a numpy array.
Args:
torch_tensor_or_numpy_array
(Tensor): the value that is either a torch tensor or a numpy array.
Returns:
numpy.ndarray
: the value converted to a numpy array.
compile_torch_model
Compile a torch module into an FHE equivalent.
Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete-Numpy
Args:
torch_model
(torch.nn.Module): the model to quantize
torch_inputset
(Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.
import_qat
(bool): Set to True to import a network that contains quantizers and was trained using quantization aware training
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo
n_bits
: the number of bits for the quantization
use_virtual_lib
(bool): set to use the so-called Virtual Library, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a PBS
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
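Putting the arguments above together, a minimal compilation sketch could look like the following; the model architecture and n_bits value are illustrative, and use_virtual_lib=True is only used to keep the example light (simulated FHE rather than actual encrypted execution).

```python
# Sketch: compile a small torch model to its FHE equivalent (illustrative values).
import torch
from concrete.ml.torch.compile import compile_torch_model

torch_model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
torch_inputset = torch.randn(100, 4)    # representative inputs used for quantization

quantized_module = compile_torch_model(
    torch_model,
    torch_inputset,
    n_bits=3,                           # documented: number of bits for the quantization
    use_virtual_lib=True,               # documented: simulate FHE computation
)
print(type(quantized_module))           # QuantizedModule
```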
compile_onnx_model
Compile a torch module into an FHE equivalent.
Take a model in torch, turn it to numpy, quantize its inputs / weights / outputs and finally compile it with Concrete-Numpy
Args:
onnx_model
(onnx.ModelProto): the model to quantize
torch_inputset
(Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.
import_qat
(bool): Flag to signal that the network being imported contains quantizers in its computation graph and that Concrete ML should not requantize it.
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo
n_bits
: the number of bits for the quantization
use_virtual_lib
(bool): set to use the so-called Virtual Library, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a PBS
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
compile_brevitas_qat_model
Compile a Brevitas Quantization Aware Training model.
The torch_model parameter is a subclass of torch.nn.Module that uses quantized operations from brevitas.qnn. The model is trained before calling this function. This function compiles the trained model to FHE.
Args:
torch_model
(torch.nn.Module): the model to quantize
torch_inputset
(Dataset): the inputset, can contain either torch tensors or numpy.ndarray, only datasets with a single input are supported for now.
n_bits
(Union[int,dict]): the number of bits for the quantization
configuration
(Configuration): Configuration object to use during compilation
compilation_artifacts
(DebugArtifacts): Artifacts object to fill during compilation
show_mlir
(bool): if set, the MLIR produced by the converter and which is going to be sent to the compiler backend is shown on the screen, e.g., for debugging or demo
use_virtual_lib
(bool): set to use the so-called Virtual Library, which simulates FHE computation. Defaults to False.
p_error
(Optional[float]): probability of error of a PBS
output_onnx_file
(str): temporary file to store the ONNX model. If None, a temporary file is generated.
Returns:
QuantizedModule
: The resulting compiled QuantizedModule.
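Assuming a Brevitas QAT model has already been trained (the tiny untrained network below only stands in for one), the compilation call itself could look like this sketch; the QuantIdentity and QuantLinear layers come from the brevitas.nn package and are not part of this API reference.

```python
# Hedged sketch: compile a (nominally trained) Brevitas QAT model to FHE.
import torch
import brevitas.nn as qnn
from concrete.ml.torch.compile import compile_brevitas_qat_model

torch_model = torch.nn.Sequential(
    qnn.QuantIdentity(bit_width=3, return_quant_tensor=True),  # quantize the input
    qnn.QuantLinear(4, 2, bias=True, weight_bit_width=3),       # quantized linear layer
)
torch_inputset = torch.randn(100, 4)

quantized_module = compile_brevitas_qat_model(
    torch_model,
    torch_inputset,
    n_bits=3,                      # documented: number of bits for the quantization
    use_virtual_lib=True,          # documented: simulate FHE computation
)
```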
concrete.ml.sklearn.xgb
Implements XGBoost models.
XGBClassifier
Implements the XGBoost classifier.
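For orientation, a plausible usage sketch follows; n_bits, n_estimators and max_depth are assumed constructor parameters (the latter two forwarded to XGBoost), and the compile call and execute_in_fhe flag are assumptions consistent with the rest of this reference.

```python
# Hedged sketch: train, compile and run an FHE XGBoost classifier (illustrative values).
import numpy
from concrete.ml.sklearn.xgb import XGBClassifier

X = numpy.random.uniform(size=(100, 6)).astype(numpy.float32)
y = (X[:, 0] > 0.5).astype(numpy.int64)

model = XGBClassifier(n_bits=6, n_estimators=10, max_depth=3)  # assumed parameters
model.fit(X, y)
model.compile(X)                                  # assumed: build the FHE tree circuit
print(model.predict(X[:2], execute_in_fhe=True))  # assumed flag for encrypted inference
```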
__init__
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.
XGBRegressor
Implements the XGBoost regressor.
__init__
property onnx_model
Get the ONNX model.
Returns:
onnx.ModelProto
: the ONNX model
fit
Fit the tree-based estimator.
Args:
X
: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames or Series.
y
(numpy.ndarray): The target data.
**kwargs
: args for super().fit
Returns:
Any
: The fitted model.
post_processing
Apply post-processing to the predictions.
Args:
y_preds
(numpy.ndarray): The predictions.
Returns:
numpy.ndarray
: The post-processed predictions.