Concrete ML
0.6

concrete.ml.sklearn.protocols.md


Last updated 2 years ago

module concrete.ml.sklearn.protocols

Protocols.

Protocols combine type hinting with duck-typing. We don't always want an abstract parent class shared by all objects; we are more interested in their behavior. Implementing a Protocol is a way to specify that behavior.

To read more about Protocols, see: https://peps.python.org/pep-0544
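As a minimal illustration (the classes below are hypothetical, not part of Concrete ML), a class satisfies a Protocol simply by implementing the expected methods; no inheritance is required:

```python
from typing import Protocol


class SupportsQuant(Protocol):
    """Anything with a quant() method taking and returning a list of floats."""

    def quant(self, values: list) -> list:
        ...


class HalfStepQuantizer:
    # Note: no inheritance from SupportsQuant. Matching the method
    # signature is enough (structural subtyping, per PEP 544).
    def quant(self, values: list) -> list:
        return [round(v / 0.5) for v in values]


def apply_quant(q: SupportsQuant, values: list) -> list:
    # A type checker accepts any object that structurally matches the Protocol
    return q.quant(values)


print(apply_quant(HalfStepQuantizer(), [0.4, 1.1]))  # [1, 2]
```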


class Quantizer

Quantizer Protocol.

Used to type-hint a quantizer.


method dequant

dequant(X: 'ndarray') → ndarray

Dequantize some values.

Args:

  • X (numpy.ndarray): Values to dequantize

Returns:

  • numpy.ndarray: Dequantized values


method quant

quant(values: 'ndarray') → ndarray

Quantize some values.

Args:

  • values (numpy.ndarray): Values to quantize

Returns:

  • numpy.ndarray: The quantized values
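Any object implementing both methods conforms to the protocol. The sketch below is a toy uniform affine quantizer (`UniformQuantizer`, `scale`, and `zero_point` are illustrative names, not Concrete ML's actual quantizer classes):

```python
import numpy as np


class UniformQuantizer:
    """Toy affine quantizer: q = round(x / scale) + zero_point."""

    def __init__(self, scale: float, zero_point: int = 0):
        self.scale = scale
        self.zero_point = zero_point

    def quant(self, values: np.ndarray) -> np.ndarray:
        # Map floats onto the integer grid
        return np.round(values / self.scale).astype(np.int64) + self.zero_point

    def dequant(self, X: np.ndarray) -> np.ndarray:
        # Map integers back to (approximate) floats
        return (X - self.zero_point) * self.scale


q = UniformQuantizer(scale=0.5)
x = np.array([0.4, 1.1, -0.6])
# Round-trip recovers each value rounded to the nearest multiple of the scale
print(q.dequant(q.quant(x)))
```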


class ConcreteBaseEstimatorProtocol

A Concrete Estimator Protocol.


property onnx_model

The ONNX model.

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float',
    global_p_error: 'float',
    verbose_compilation: 'bool'
) → Circuit

Compiles the model to an FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a single PBS

  • global_p_error (float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0

  • verbose_compilation (bool): whether to show compilation information

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained model, which is useful for comparing the performance of the quantized and fp32 versions of the classifier.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process model predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

Returns:

  • numpy.ndarray: the post-processed predictions
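The exact post-processing depends on the model. As a hypothetical sketch (not Concrete ML's actual implementation), a classifier conforming to this protocol might turn clear, dequantized logits into class probabilities with a softmax:

```python
import numpy as np


def post_processing(y_preds: np.ndarray) -> np.ndarray:
    """Hypothetical post-processing: softmax over clear (dequantized) logits."""
    # Subtract the row-wise max for numerical stability
    shifted = y_preds - y_preds.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)


probs = post_processing(np.array([[2.0, 1.0, 0.5]]))
print(probs)  # each row sums to 1.0
```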


class ConcreteBaseClassifierProtocol

Concrete classifier protocol.


property onnx_model

The ONNX model.

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float',
    global_p_error: 'float',
    verbose_compilation: 'bool'
) → Circuit

Compiles the model to an FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a single PBS

  • global_p_error (float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0

  • verbose_compilation (bool): whether to show compilation information

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained model, which is useful for comparing the performance of the quantized and fp32 versions of the classifier.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process model predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

Returns:

  • numpy.ndarray: the post-processed predictions


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts, for each sample, the class with the highest probability.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in FHE or not.

Returns: numpy.ndarray


method predict_proba

predict_proba(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts, for each sample, the probability of each class.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in FHE or not.

Returns: numpy.ndarray
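Independently of FHE execution, the relationship between the two classifier methods can be sketched: predict conceptually returns the argmax of the per-class probabilities that predict_proba produces. A toy illustration (the probability array is made up):

```python
import numpy as np

# Toy predict_proba output: per-class probabilities for three samples
proba = np.array([
    [0.2, 0.8],
    [0.9, 0.1],
    [0.4, 0.6],
])

# predict() then corresponds to the class with the highest probability
labels = proba.argmax(axis=1)
print(labels.tolist())  # [1, 0, 1]
```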


class ConcreteBaseRegressorProtocol

Concrete regressor protocol.


property onnx_model

The ONNX model.

Returns: onnx.ModelProto


property quantize_input

Quantize input function.


method compile

compile(
    X: 'ndarray',
    configuration: 'Optional[Configuration]',
    compilation_artifacts: 'Optional[DebugArtifacts]',
    show_mlir: 'bool',
    use_virtual_lib: 'bool',
    p_error: 'float',
    global_p_error: 'float',
    verbose_compilation: 'bool'
) → Circuit

Compiles the model to an FHE Circuit.

Args:

  • X (numpy.ndarray): the dequantized dataset

  • configuration (Optional[Configuration]): the options for compilation

  • compilation_artifacts (Optional[DebugArtifacts]): artifacts object to fill during compilation

  • show_mlir (bool): whether or not to show MLIR during the compilation

  • use_virtual_lib (bool): whether to compile using the virtual library that allows higher bitwidths

  • p_error (float): probability of error of a single PBS

  • global_p_error (float): probability of error of the full circuit. Not simulated by the VL, i.e., taken as 0

  • verbose_compilation (bool): whether to show compilation information

Returns:

  • Circuit: the compiled Circuit.


method fit

fit(X: 'ndarray', y: 'ndarray', **fit_params) → ConcreteBaseEstimatorProtocol

Initialize and fit the module.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • **fit_params: additional parameters that can be used during training

Returns:

  • ConcreteBaseEstimatorProtocol: the trained estimator


method fit_benchmark

fit_benchmark(
    X: 'ndarray',
    y: 'ndarray',
    *args,
    **kwargs
) → Tuple[ConcreteBaseEstimatorProtocol, BaseEstimator]

Fit the quantized estimator and return the reference estimator.

This function returns both the quantized estimator (itself) and a wrapper around the non-quantized trained model, which is useful for comparing the performance of the quantized and fp32 versions of the classifier.

Args:

  • X: training data. By default, you should be able to pass:
    • numpy arrays
    • torch tensors
    • pandas DataFrame or Series

  • y (numpy.ndarray): labels associated with training data

  • *args: The arguments to pass to the underlying model.

  • **kwargs: The keyword arguments to pass to the underlying model.

Returns:

  • self: self fitted

  • model: underlying estimator


method post_processing

post_processing(y_preds: 'ndarray') → ndarray

Post-process model predictions.

Args:

  • y_preds (numpy.ndarray): predicted values by model (clear-quantized)

Returns:

  • numpy.ndarray: the post-processed predictions


method predict

predict(X: 'ndarray', execute_in_fhe: 'bool') → ndarray

Predicts, for each sample, the expected value.

Args:

  • X (numpy.ndarray): Features

  • execute_in_fhe (bool): Whether the inference should be done in FHE or not.

Returns: numpy.ndarray