# module `concrete.ml.sklearn.protocols`
Protocols.
Protocols mix type hinting with duck typing. We do not always want an abstract parent class shared by all objects; we are more interested in the behavior of those objects. Implementing a Protocol is a way to specify the behavior objects must provide.

To read more about Protocols, see https://peps.python.org/pep-0544
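As a refresher, a Protocol declares the methods an object must expose without requiring inheritance. The sketch below (`SupportsQuant` and `ScaleByTwo` are illustrative names, not part of this module) shows how a class satisfies a Protocol purely through duck typing:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsQuant(Protocol):
    """Any object with a matching quant() method satisfies this protocol."""

    def quant(self, values: list) -> list: ...


class ScaleByTwo:
    # Note: no inheritance from SupportsQuant — duck typing is enough.
    def quant(self, values: list) -> list:
        return [2 * v for v in values]


# Static type checkers accept ScaleByTwo wherever SupportsQuant is expected,
# and @runtime_checkable additionally enables isinstance() checks.
assert isinstance(ScaleByTwo(), SupportsQuant)
```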
## class `Quantizer`

Quantizer Protocol.

Use it to type-hint a quantizer.
### method `dequant`

```python
dequant(X: ndarray) → ndarray
```

Dequantize some values.

**Args:**

- `X` (numpy.ndarray): values to dequantize

**Returns:**

- `numpy.ndarray`: the dequantized values
### method `quant`

```python
quant(values: ndarray) → ndarray
```

Quantize some values.

**Args:**

- `values` (numpy.ndarray): values to quantize

**Returns:**

- `numpy.ndarray`: the quantized values
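To illustrate what an implementation of this protocol looks like, here is a toy quantizer using uniform affine quantization over plain Python lists. `ToyQuantizer` and its parameters are hypothetical; the real Concrete-ML quantizers operate on numpy arrays.

```python
class ToyQuantizer:
    """Uniform affine quantization: q = round(x / scale) + zero_point."""

    def __init__(self, scale: float, zero_point: int):
        self.scale = scale
        self.zero_point = zero_point

    def quant(self, values):
        # Map floats to integers on the quantized grid.
        return [round(v / self.scale) + self.zero_point for v in values]

    def dequant(self, X):
        # Map quantized integers back to (approximate) floats.
        return [(q - self.zero_point) * self.scale for q in X]


quantizer = ToyQuantizer(scale=0.5, zero_point=10)
quantized = quantizer.quant([1.0, -2.0])   # -> [12, 6]
restored = quantizer.dequant(quantized)    # round-trips up to scale / 2 error
```

Because `ToyQuantizer` provides both `quant` and `dequant` with compatible signatures, it satisfies the `Quantizer` protocol structurally, with no inheritance required.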
## class `ConcreteBaseEstimatorProtocol`

A Concrete estimator protocol.

### property `onnx_model`

onnx_model.

**Returns:** `onnx.ModelProto`

### property `quantize_input`

Quantize input function.
### method `compile`

```python
compile(
    X: ndarray,
    configuration: Optional[Configuration],
    compilation_artifacts: Optional[DebugArtifacts],
    show_mlir: bool,
    use_virtual_lib: bool,
    p_error: float,
    global_p_error: float,
    verbose_compilation: bool
) → Circuit
```

Compiles a model to an FHE Circuit.

**Args:**

- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library, which allows higher bit widths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

**Returns:**

- `Circuit`: the compiled Circuit
### method `fit`

```python
fit(X: ndarray, y: ndarray, **fit_params) → ConcreteBaseEstimatorProtocol
```

Initialize and fit the module.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `**fit_params`: additional parameters that can be used during training

**Returns:**

- `ConcreteBaseEstimatorProtocol`: the trained estimator
### method `fit_benchmark`

```python
fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]
```

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing performance between the quantized and fp32 versions of the classifier.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

**Returns:**

- `self`: the fitted quantized estimator
- `model`: the underlying reference estimator
### method `post_processing`

```python
post_processing(y_preds: ndarray) → ndarray
```

Post-process the model's predictions.

**Args:**

- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

**Returns:**

- `numpy.ndarray`: the post-processed predictions
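The point of this protocol is that helper code can be type-hinted against it without importing any concrete estimator class. A minimal sketch, where `FitsAndPostProcesses` and `StubEstimator` are illustrative stand-ins for a small subset of the protocol:

```python
from typing import Protocol


class FitsAndPostProcesses(Protocol):
    """Illustrative subset of ConcreteBaseEstimatorProtocol."""

    def fit(self, X, y, **fit_params): ...
    def post_processing(self, y_preds): ...


def clean_predictions(model: FitsAndPostProcesses, y_preds):
    # Accepts any estimator satisfying the protocol — no inheritance needed.
    return model.post_processing(y_preds)


class StubEstimator:
    def fit(self, X, y, **fit_params):
        return self

    def post_processing(self, y_preds):
        # Toy post-processing: clip raw predictions to [0, 1].
        return [min(max(p, 0.0), 1.0) for p in y_preds]


cleaned = clean_predictions(StubEstimator(), [1.5, 0.3, -0.2])
```

A static type checker accepts `StubEstimator` as a `FitsAndPostProcesses` argument because its method signatures match, which is exactly the duck-typing guarantee these protocols encode.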
## class `ConcreteBaseClassifierProtocol`

Concrete classifier protocol.

### property `onnx_model`

onnx_model.

**Returns:** `onnx.ModelProto`

### property `quantize_input`

Quantize input function.
### method `compile`

```python
compile(
    X: ndarray,
    configuration: Optional[Configuration],
    compilation_artifacts: Optional[DebugArtifacts],
    show_mlir: bool,
    use_virtual_lib: bool,
    p_error: float,
    global_p_error: float,
    verbose_compilation: bool
) → Circuit
```

Compiles a model to an FHE Circuit.

**Args:**

- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library, which allows higher bit widths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

**Returns:**

- `Circuit`: the compiled Circuit
### method `fit`

```python
fit(X: ndarray, y: ndarray, **fit_params) → ConcreteBaseEstimatorProtocol
```

Initialize and fit the module.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `**fit_params`: additional parameters that can be used during training

**Returns:**

- `ConcreteBaseEstimatorProtocol`: the trained estimator
### method `fit_benchmark`

```python
fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]
```

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing performance between the quantized and fp32 versions of the classifier.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

**Returns:**

- `self`: the fitted quantized estimator
- `model`: the underlying reference estimator
### method `post_processing`

```python
post_processing(y_preds: ndarray) → ndarray
```

Post-process the model's predictions.

**Args:**

- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

**Returns:**

- `numpy.ndarray`: the post-processed predictions
### method `predict`

```python
predict(X: ndarray, execute_in_fhe: bool) → ndarray
```

Predict, for each sample, the class with the highest probability.

**Args:**

- `X` (numpy.ndarray): features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

**Returns:**

- `numpy.ndarray`
### method `predict_proba`

```python
predict_proba(X: ndarray, execute_in_fhe: bool) → ndarray
```

Predict, for each sample, the probability of each class.

**Args:**

- `X` (numpy.ndarray): features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

**Returns:**

- `numpy.ndarray`
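The relationship between the two classifier methods can be sketched with a stub: `predict` returns, per sample, the index of the largest probability from `predict_proba`. `StubClassifier` and its fixed probabilities are purely illustrative, not part of the library.

```python
class StubClassifier:
    """Toy stand-in for a classifier satisfying this protocol."""

    def predict_proba(self, X, execute_in_fhe: bool = False):
        # Hypothetical fixed probabilities, one row per sample.
        return [[0.2, 0.8] for _ in X]

    def predict(self, X, execute_in_fhe: bool = False):
        # The predicted class is the argmax of the per-sample probabilities.
        probas = self.predict_proba(X, execute_in_fhe)
        return [max(range(len(row)), key=row.__getitem__) for row in probas]


clf = StubClassifier()
labels = clf.predict([[0.0], [1.0]])  # class index per sample
```

Both methods take the same inputs; `execute_in_fhe` only selects where the inference runs, not what it computes.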
## class `ConcreteBaseRegressorProtocol`

Concrete regressor protocol.

### property `onnx_model`

onnx_model.

**Returns:** `onnx.ModelProto`

### property `quantize_input`

Quantize input function.
### method `compile`

```python
compile(
    X: ndarray,
    configuration: Optional[Configuration],
    compilation_artifacts: Optional[DebugArtifacts],
    show_mlir: bool,
    use_virtual_lib: bool,
    p_error: float,
    global_p_error: float,
    verbose_compilation: bool
) → Circuit
```

Compiles a model to an FHE Circuit.

**Args:**

- `X` (numpy.ndarray): the dequantized dataset
- `configuration` (Optional[Configuration]): the options for compilation
- `compilation_artifacts` (Optional[DebugArtifacts]): artifacts object to fill during compilation
- `show_mlir` (bool): whether or not to show MLIR during the compilation
- `use_virtual_lib` (bool): whether to compile using the virtual library, which allows higher bit widths
- `p_error` (float): probability of error of a single PBS
- `global_p_error` (float): probability of error of the full circuit. Not simulated by the virtual library, i.e., taken as 0
- `verbose_compilation` (bool): whether to show compilation information

**Returns:**

- `Circuit`: the compiled Circuit
### method `fit`

```python
fit(X: ndarray, y: ndarray, **fit_params) → ConcreteBaseEstimatorProtocol
```

Initialize and fit the module.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `**fit_params`: additional parameters that can be used during training

**Returns:**

- `ConcreteBaseEstimatorProtocol`: the trained estimator
### method `fit_benchmark`

```python
fit_benchmark(
    X: ndarray,
    y: ndarray,
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]
```

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained NN. This is useful for comparing performance between the quantized and fp32 versions of the classifier.

**Args:**

- `X`: training data. By default, you should be able to pass numpy arrays, torch tensors, or pandas DataFrames and Series
- `y` (numpy.ndarray): labels associated with the training data
- `*args`: the arguments to pass to the underlying model
- `**kwargs`: the keyword arguments to pass to the underlying model

**Returns:**

- `self`: the fitted quantized estimator
- `model`: the underlying reference estimator
### method `post_processing`

```python
post_processing(y_preds: ndarray) → ndarray
```

Post-process the model's predictions.

**Args:**

- `y_preds` (numpy.ndarray): values predicted by the model (clear-quantized)

**Returns:**

- `numpy.ndarray`: the post-processed predictions
### method `predict`

```python
predict(X: ndarray, execute_in_fhe: bool) → ndarray
```

Predict, for each sample, the expected value.

**Args:**

- `X` (numpy.ndarray): features
- `execute_in_fhe` (bool): whether the inference should be done in FHE or not

**Returns:**

- `numpy.ndarray`