concrete.ml.sklearn.protocols
Protocols.
Protocols are used to combine type hinting with duck typing. We don't always want an abstract parent class shared by all objects; we are more interested in the behavior of those objects. Implementing a Protocol is a way to specify the expected behavior of objects.
To read more about Protocols, see PEP 544: https://peps.python.org/pep-0544
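As a minimal illustration of the idea (the class names below are hypothetical, not part of this module): an object matches a Protocol purely by having the right methods, without inheriting from it.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsQuantize(Protocol):
    """Any object with a compatible `quantize` method matches this Protocol."""

    def quantize(self, values: list) -> list: ...


class ToyQuantizer:
    """No inheritance from SupportsQuantize is needed: duck typing."""

    def quantize(self, values: list) -> list:
        return [round(v) for v in values]


# Structural check: ToyQuantizer matches the Protocol without subclassing it.
assert isinstance(ToyQuantizer(), SupportsQuantize)
```

Note that `runtime_checkable` only lets `isinstance` verify that the methods exist; signatures are checked statically by type checkers, not at runtime.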
Quantizer
Quantizer Protocol.
Used to type hint a quantizer.
dequant
Dequantize some values.
Args:
X (numpy.ndarray): Values to dequantize
Returns:
numpy.ndarray: The dequantized values
quant
Quantize some values.
Args:
values (numpy.ndarray): Values to quantize
Returns:
numpy.ndarray: The quantized values
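A sketch of a class that would satisfy this Quantizer protocol, using a simple uniform scale (the scale logic is illustrative only, not this module's actual quantization scheme):

```python
from typing import Protocol

import numpy


class Quantizer(Protocol):
    """Structural type: any object with compatible quant/dequant matches."""

    def quant(self, values: numpy.ndarray) -> numpy.ndarray: ...

    def dequant(self, X: numpy.ndarray) -> numpy.ndarray: ...


class UniformQuantizer:
    """Toy uniform quantizer; satisfies Quantizer by structure alone."""

    def __init__(self, scale: float) -> None:
        self.scale = scale

    def quant(self, values: numpy.ndarray) -> numpy.ndarray:
        # Map floats to integers on a uniform grid of step `scale`.
        return numpy.round(values / self.scale).astype(numpy.int64)

    def dequant(self, X: numpy.ndarray) -> numpy.ndarray:
        # Map integers back to (approximate) float values.
        return X.astype(numpy.float64) * self.scale


q = UniformQuantizer(scale=0.5)
x = numpy.array([0.9, 1.6])
assert numpy.allclose(q.dequant(q.quant(x)), [1.0, 1.5])
```

Any function annotated to take a `Quantizer` would accept a `UniformQuantizer` instance, even though the two classes share no inheritance relation.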
ConcreteBaseEstimatorProtocol
A Concrete Estimator Protocol.
property onnx_model
The ONNX model.
Returns: onnx.ModelProto
property quantize_input
Quantize the input.
compile
Compile the model to an FHE circuit.
Args:
X (numpy.ndarray): the dequantized dataset
configuration (Optional[Configuration]): the options for compilation
compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir (bool): whether or not to show MLIR during the compilation
use_virtual_lib (bool): whether to compile using the Virtual Library, which allows higher bit-widths
p_error (float): probability of error of a single PBS
global_p_error (float): probability of error of the full circuit. Not simulated by the Virtual Library, i.e., taken as 0
verbose_compilation (bool): whether to show compilation information
Returns:
Circuit: the compiled circuit
fit
Initialize and fit the module.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
**fit_params: additional parameters that can be used during training
Returns:
ConcreteBaseEstimatorProtocol: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing the performance of the quantized and fp32 versions of the classifier.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
*args: the arguments to pass to the underlying model
**kwargs: the keyword arguments to pass to the underlying model
Returns:
self: the fitted estimator
model: the underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds (numpy.ndarray): values predicted by the model (clear-quantized)
Returns:
numpy.ndarray: the post-processed predictions
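The estimator protocol above can be sketched with `typing.Protocol`; the signatures below are simplified placeholders, not the module's exact ones, and `DummyEstimator` is a hypothetical stand-in used only to demonstrate the structural match:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class EstimatorProtocol(Protocol):
    """Simplified stand-in for ConcreteBaseEstimatorProtocol."""

    def fit(self, X: Any, y: Any, **fit_params: Any) -> "EstimatorProtocol": ...

    def compile(self, X: Any) -> Any: ...

    def post_processing(self, y_preds: Any) -> Any: ...


class DummyEstimator:
    """Matches EstimatorProtocol by structure, without subclassing it."""

    def fit(self, X, y, **fit_params):
        self.fitted_ = True
        return self

    def compile(self, X):
        return "circuit"  # placeholder for a compiled FHE circuit

    def post_processing(self, y_preds):
        return y_preds


# With runtime_checkable, isinstance only verifies method *presence*,
# not signatures; type checkers verify signatures statically.
assert isinstance(DummyEstimator(), EstimatorProtocol)
```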
ConcreteBaseClassifierProtocol
Concrete classifier protocol.
property onnx_model
The ONNX model.
Returns: onnx.ModelProto
property quantize_input
Quantize the input.
compile
Compile the model to an FHE circuit.
Args:
X (numpy.ndarray): the dequantized dataset
configuration (Optional[Configuration]): the options for compilation
compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir (bool): whether or not to show MLIR during the compilation
use_virtual_lib (bool): whether to compile using the Virtual Library, which allows higher bit-widths
p_error (float): probability of error of a single PBS
global_p_error (float): probability of error of the full circuit. Not simulated by the Virtual Library, i.e., taken as 0
verbose_compilation (bool): whether to show compilation information
Returns:
Circuit: the compiled circuit
fit
Initialize and fit the module.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
**fit_params: additional parameters that can be used during training
Returns:
ConcreteBaseEstimatorProtocol: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing the performance of the quantized and fp32 versions of the classifier.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
*args: the arguments to pass to the underlying model
**kwargs: the keyword arguments to pass to the underlying model
Returns:
self: the fitted estimator
model: the underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds (numpy.ndarray): values predicted by the model (clear-quantized)
Returns:
numpy.ndarray: the post-processed predictions
predict
Predict, for each sample, the class with the highest probability.
Args:
X (numpy.ndarray): Features
execute_in_fhe (bool): Whether the inference should be done in FHE or not
Returns: numpy.ndarray
predict_proba
Predict, for each sample, the probability of each class.
Args:
X (numpy.ndarray): Features
execute_in_fhe (bool): Whether the inference should be done in FHE or not
Returns: numpy.ndarray
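To show the relation between predict_proba and predict, here is a toy object matching the classifier protocol's shape. It is purely illustrative: `execute_in_fhe` is accepted but ignored, and the probabilities are hard-coded rather than produced by a real model.

```python
import numpy


class ToyClassifier:
    """Duck-typed stand-in: predict is argmax over predict_proba."""

    def predict_proba(self, X: numpy.ndarray, execute_in_fhe: bool = False) -> numpy.ndarray:
        # Hard-coded class probabilities, one row per sample.
        return numpy.tile([0.2, 0.7, 0.1], (X.shape[0], 1))

    def predict(self, X: numpy.ndarray, execute_in_fhe: bool = False) -> numpy.ndarray:
        # The class with the highest probability, per sample.
        return self.predict_proba(X, execute_in_fhe).argmax(axis=1)


clf = ToyClassifier()
X = numpy.zeros((3, 4))
assert (clf.predict(X) == 1).all()  # class 1 has probability 0.7
```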
ConcreteBaseRegressorProtocol
Concrete regressor protocol.
property onnx_model
The ONNX model.
Returns: onnx.ModelProto
property quantize_input
Quantize the input.
compile
Compile the model to an FHE circuit.
Args:
X (numpy.ndarray): the dequantized dataset
configuration (Optional[Configuration]): the options for compilation
compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation
show_mlir (bool): whether or not to show MLIR during the compilation
use_virtual_lib (bool): whether to compile using the Virtual Library, which allows higher bit-widths
p_error (float): probability of error of a single PBS
global_p_error (float): probability of error of the full circuit. Not simulated by the Virtual Library, i.e., taken as 0
verbose_compilation (bool): whether to show compilation information
Returns:
Circuit: the compiled circuit
fit
Initialize and fit the module.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
**fit_params: additional parameters that can be used during training
Returns:
ConcreteBaseEstimatorProtocol: the trained estimator
fit_benchmark
Fit the quantized estimator and return reference estimator.
This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing the performance of the quantized and fp32 versions of the estimator.
Args:
X: training data. By default, you should be able to pass numpy arrays, torch tensors, or a pandas DataFrame or Series
y (numpy.ndarray): labels associated with the training data
*args: the arguments to pass to the underlying model
**kwargs: the keyword arguments to pass to the underlying model
Returns:
self: the fitted estimator
model: the underlying estimator
post_processing
Post-process the model's predictions.
Args:
y_preds (numpy.ndarray): values predicted by the model (clear-quantized)
Returns:
numpy.ndarray: the post-processed predictions
predict
Predict, for each sample, the expected value.
Args:
X (numpy.ndarray): Features
execute_in_fhe (bool): Whether the inference should be done in FHE or not
Returns: numpy.ndarray