Advanced Features

Concrete-ML offers features for advanced users who wish to adjust the cryptographic parameters that are generated by the Concrete stack for a given machine learning model.

Approximate computations using the p_error parameter

Concrete-ML makes use of table lookups (TLUs) to represent any non-linear operation (e.g., sigmoid). A TLU is implemented through the Programmable Bootstrapping (PBS) operation, which applies a non-linear operation in the cryptographic realm.
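
As an illustration, here is a minimal sketch using the compile_torch_model entry point from concrete.ml.torch.compile (the toy model and the n_bits value are illustrative assumptions, not a recommendation): the linear layer runs as levelled homomorphic operations, while the non-linear sigmoid is turned into a TLU executed with PBS.

import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 1)

    def forward(self, x):
        # The linear layer maps to levelled homomorphic operations;
        # the sigmoid, being non-linear, is compiled into a TLU (PBS)
        return torch.sigmoid(self.fc(x))

inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)
quantized_module = compile_torch_model(TinyModel(), inputset, n_bits=3)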

In Concrete-ML, the result of the TLU operation is obtained with a specific error probability:

DEFAULT_P_ERROR_PBS = 6.3342483999973e-05

A single PBS operation thus has a 1 - DEFAULT_P_ERROR_PBS = 99.9936657516% chance of being correct. This probability plays a role in the choice of cryptographic parameters: the lower the p_error, the more constraining the parameters become. This has an impact on both key generation and, more importantly, on FHE execution time.

This number is set by default to be relatively low, so that any user can build deep circuits without being impacted by this noise, as described in the concepts section. However, there are use cases and specific circuits where the Gaussian noise can increase without dramatically affecting circuit accuracy. In those cases, increasing the p_error can be relevant, as it reduces the execution time in FHE.
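
To see why the default is kept low for deep circuits, consider a back-of-the-envelope computation: assuming PBS errors are independent, a circuit containing n TLUs produces no erroneous lookup with probability (1 - p_error)^n. A small illustrative sketch:

p_error = 6.3342483999973e-05

# Probability that every TLU in a circuit of n_tlu lookups is correct,
# assuming independent PBS errors
for n_tlu in (1, 100, 10_000):
    print(n_tlu, (1 - p_error) ** n_tlu)

# Prints approximately 0.99994, 0.99369 and 0.53: at the default p_error,
# a circuit with 10,000 TLUs sees at least one incorrect lookup in roughly
# half of the runs, which is why the per-PBS error probability is kept low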

Here is a visualization of the effect of p_error on a simple linear regression, comparing p_error = 0.1 with the default p_error value:

[Figure: Impact of p_error in a Linear Regression]

The execution times for the two models are 336 ms per example for the default p_error and 253 ms per example for p_error = 0.1 (on an 8-core Intel CPU machine). This speedup is, of course, highly dependent on model complexity. To obtain a speedup while maintaining good accuracy, it is possible to search for a good value of p_error; currently, no heuristic has been proposed to find a good value a priori. A sketch of such a search is given after the usage example below.

Users can change this p_error as they see fit by passing an argument to the compile function of any of the models. Here is an example:

from sklearn.datasets import make_classification

from concrete.ml.sklearn import XGBClassifier

X_train, y_train = make_classification(n_samples=100, n_features=10)

clf = XGBClassifier()
clf.fit(X_train, y_train)

# Here comes the p_error parameter
clf.compile(X_train, p_error=0.1)
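
As a sketch of the search mentioned above, one could sweep a few candidate p_error values and measure accuracy and latency on a handful of encrypted samples. The held-out test split and the predict(..., execute_in_fhe=True) call for running inference in FHE are assumptions about the surrounding setup; treat this as an illustration rather than a tuning recipe.

import time

import numpy
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import XGBClassifier

X, y = make_classification(n_samples=120, n_features=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=10)

clf = XGBClassifier()
clf.fit(X_train, y_train)

# Sweep a few p_error candidates: larger values relax the cryptographic
# parameters, trading a higher TLU error probability for faster FHE execution
for p_error in (6.3342483999973e-05, 0.01, 0.1):
    clf.compile(X_train, p_error=p_error)
    start = time.time()
    y_pred = clf.predict(X_test, execute_in_fhe=True)
    per_example = (time.time() - start) / len(X_test)
    accuracy = numpy.mean(y_pred == y_test)
    print(f"p_error={p_error}: accuracy={accuracy:.2f}, {per_example:.2f}s/example")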