Security and correctness

Security model

The default parameters for Concrete ML are chosen considering the IND-CPA security model, and are selected with a bootstrapping off-by-one error probability of 2^-40. In particular, it is assumed that the results of decrypted computations are not shared by the secret key owner with any third parties, as such an action can lead to leakage of the secret encryption key. If you are designing an application where decryptions must be shared, you will need to craft custom encryption parameters which are chosen in consideration of the IND-CPA^D security model [1].

Correctness of computations

The cryptography concepts section explains how Concrete ML can ensure guaranteed correctness of encrypted computations. In this approach, a quantized machine learning model is converted to an FHE circuit that produces the same result on encrypted data as the original model does on clear data.
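
As a minimal sketch of this guarantee, assuming a scikit-learn-style workflow with a built-in Concrete ML model (the dataset, model choice, and sizes here are illustrative assumptions, not taken from this page):

```python
import numpy
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Compile the quantized model into an FHE circuit (default parameters,
# i.e., a bootstrapping off-by-one error probability of 2^-40).
model.compile(X)

# With the default setting, encrypted inference is expected to match
# the clear, quantized inference.
clear_preds = model.predict(X[:5], fhe="disable")
fhe_preds = model.predict(X[:5], fhe="execute")  # runs on encrypted data
assert numpy.array_equal(clear_preds, fhe_preds)
```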

However, the bootstrapping off-by-one error probability can be configured by the user. Raising this probability results in lower latency when executing on encrypted data, but higher values void the correctness guarantee of the default setting. In practice this may not be an issue, as model accuracy is often maintained even though slight differences are observed in the model outputs. Moreover, as noted in the paragraph above, raising the off-by-one error probability may negatively impact the security model.
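
A hedged sketch of raising this probability at compilation time; the p_error value chosen here is an illustrative assumption:

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

# Allow a higher per-bootstrapping off-by-one error probability than the
# default 2^-40, trading the correctness guarantee for lower latency.
model.compile(X, p_error=2**-13)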

Furthermore, a second approach to reduce latency at the expense of correctness is approximate computation of univariate functions. This mode is enabled by using the rounding setting with fhe.Exactness.APPROXIMATE. When using this rounding method, off-by-one errors are always induced in the computation of activation functions, irrespective of the bootstrapping off-by-one error probability.
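
A sketch of enabling this mode when compiling a torch model, assuming the dictionary form of the rounding_threshold_bits parameter of compile_torch_model (the network, input set, and bit-width are illustrative assumptions):

```python
import torch
from concrete import fhe
from concrete.ml.torch.compile import compile_torch_model

# Toy network; the architecture and sizes are illustrative.
net = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
inputset = torch.randn(100, 4)

# Approximate rounding of accumulators before activation functions:
# this always induces off-by-one errors in those functions, but lowers
# latency, irrespective of the bootstrapping error probability.
quantized_module = compile_torch_model(
    net,
    inputset,
    rounding_threshold_bits={"n_bits": 6, "method": fhe.Exactness.APPROXIMATE},
)
```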

When trading off better latency for correctness, it is highly recommended to use the FHE simulation feature to measure accuracy on a held-out test set. In many cases the accuracy of the model is only slightly impacted by approximate computations.
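
For example, a sketch of comparing clear accuracy against simulated FHE accuracy on a held-out test set (the data and p_error value are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
model.compile(X_train, p_error=2**-13)  # latency/correctness trade-off

# Simulation executes in the clear while modeling the configured error
# probability, so accuracy can be measured without encrypted execution.
sim_acc = (model.predict(X_test, fhe="simulate") == y_test).mean()
clear_acc = (model.predict(X_test, fhe="disable") == y_test).mean()
print(f"clear: {clear_acc:.3f}  simulated FHE: {sim_acc:.3f}")
```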

References

[1] Li, Baiyu, et al. “Securing approximate homomorphic encryption using differential privacy.” Annual International Cryptology Conference. Cham: Springer Nature Switzerland, 2022. https://eprint.iacr.org/2022/816.pdf
