Concrete ML
0.2
Use Concrete ML ONNX Support


Last updated 2 years ago


Internally, Concrete-ML uses ONNX operators as an intermediate representation (IR) for manipulating machine learning models produced through export for PyTorch, Hummingbird, and skorch. As ONNX is becoming the standard exchange format for neural networks, this allows Concrete-ML to be flexible while also making model representation manipulation quite easy. In addition, it allows for straightforward mapping to NumPy operators, supported by Concrete-Numpy, to use the Concrete stack's FHE conversion capabilities.
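The mapping from ONNX operators to NumPy is largely one-to-one. As an illustrative sketch (the ONNX Gemm semantics expressed in NumPy, not Concrete-ML's actual implementation):

```python
import numpy as np

# Sketch of how an ONNX operator maps to NumPy (not Concrete-ML's
# actual code). ONNX Gemm computes Y = alpha * A' @ B' + beta * C,
# where A and B are optionally transposed.
def numpy_gemm(a, b, c=0.0, alpha=1.0, beta=1.0, trans_a=0, trans_b=0):
    a = a.T if trans_a else a
    b = b.T if trans_b else b
    return alpha * (a @ b) + beta * c

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.eye(2)
print(numpy_gemm(a, b))  # multiplying by the identity returns a itself
```

A graph of such NumPy functions can then be handed to the Concrete stack for FHE compilation.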

Below, we list the operators that are supported for evaluation, as well as those that have a quantized version, which allows automatic Post-Training Quantization (PTQ) of your models.

Please note that, due to the current precision constraints of the Concrete stack, PTQ may produce circuits with worse accuracy than your original model.
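To see why low precision can hurt accuracy, here is a minimal, self-contained sketch of uniform quantization in NumPy (an illustration of the general PTQ idea, not Concrete-ML's actual quantizer):

```python
import numpy as np

# Minimal uniform-quantization sketch (not Concrete-ML's quantizer):
# map floats to n-bit unsigned integers and back, and measure the error.
def quantize_dequantize(x, n_bits):
    scale = (x.max() - x.min()) / (2**n_bits - 1)
    zero_point = np.round(-x.min() / scale)
    q = np.clip(np.round(x / scale + zero_point), 0, 2**n_bits - 1)
    return (q - zero_point) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)
for n_bits in (7, 3):
    err = np.abs(weights - quantize_dequantize(weights, n_bits)).max()
    print(f"{n_bits} bits -> max reconstruction error {err:.4f}")
```

The reconstruction error grows quickly as the bit-width shrinks, which is the source of the accuracy gap mentioned above.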

Ops supported for evaluation/NumPy conversion

The following operators are supported for evaluation and conversion to an equivalent NumPy circuit. As long as your model exports to an ONNX graph using only these operators, it should be convertible to an FHE equivalent.

Do note that not all of these operators are fully supported for conversion to a circuit executable in FHE; you will get an error message if you use such an operator in a circuit you are trying to convert to FHE.

  • Abs
  • Acos
  • Acosh
  • Add
  • Asin
  • Asinh
  • Atan
  • Atanh
  • Celu
  • Clip
  • Constant
  • Conv
  • Cos
  • Cosh
  • Div
  • Elu
  • Equal
  • Erf
  • Exp
  • Gemm
  • Greater
  • HardSigmoid
  • Identity
  • LeakyRelu
  • Less
  • Log
  • MatMul
  • Mul
  • Not
  • Relu
  • Reshape
  • Selu
  • Sigmoid
  • Sin
  • Sinh
  • Softplus
  • Sub
  • Tan
  • Tanh
  • ThresholdedRelu
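A quick way to see whether a model is a candidate for conversion is to compare the operator types it uses against the list above. A minimal sketch (the supported set below is transcribed from that list; the example `op_types` is hypothetical, as if read from an ONNX graph's nodes via `node.op_type for node in model.graph.node`):

```python
# Sketch: check a model's operator types against the supported set above.
SUPPORTED_OPS = {
    "Abs", "Acos", "Acosh", "Add", "Asin", "Asinh", "Atan", "Atanh",
    "Celu", "Clip", "Constant", "Conv", "Cos", "Cosh", "Div", "Elu",
    "Equal", "Erf", "Exp", "Gemm", "Greater", "HardSigmoid", "Identity",
    "LeakyRelu", "Less", "Log", "MatMul", "Mul", "Not", "Relu",
    "Reshape", "Selu", "Sigmoid", "Sin", "Sinh", "Softplus", "Sub",
    "Tan", "Tanh", "ThresholdedRelu",
}

def unsupported_ops(op_types):
    """Return the operator types that are not in the supported list."""
    return sorted(set(op_types) - SUPPORTED_OPS)

# Hypothetical op types, as if collected from an exported ONNX graph.
print(unsupported_ops(["Gemm", "Relu", "Softmax"]))  # ['Softmax']
```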

Ops supported for Post Training Quantization

  • Abs: QuantizedAbs
  • Add: QuantizedAdd
  • Celu: QuantizedCelu
  • Clip: QuantizedClip
  • Conv: QuantizedConv
  • Elu: QuantizedElu
  • Exp: QuantizedExp
  • Gemm: QuantizedGemm
  • HardSigmoid: QuantizedHardSigmoid
  • Identity: QuantizedIdentity
  • LeakyRelu: QuantizedLeakyRelu
  • Linear: QuantizedLinear
  • Log: QuantizedLog
  • MatMul: QuantizedMatMul
  • Relu: QuantizedRelu
  • Reshape: QuantizedReshape
  • Selu: QuantizedSelu
  • Sigmoid: QuantizedSigmoid
  • Softplus: QuantizedSoftplus
  • Tanh: QuantizedTanh
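Conceptually, a quantized operator computes the floating-point operation using integer values plus per-tensor scales and zero-points. As a hedged illustration of the standard arithmetic behind an op like QuantizedMatMul or QuantizedGemm (the general quantized-inference technique, not Concrete-ML's actual implementation):

```python
import numpy as np

# Standard integer matmul used in quantized inference (illustrative, not
# Concrete-ML's code). Each tensor is represented by integer values q, a
# float scale s, and an integer zero-point z, so that x ≈ s * (q - z).
def quantized_matmul(q_a, s_a, z_a, q_b, s_b, z_b, s_y, z_y, n_bits=8):
    # Accumulate the product in the integer domain...
    acc = (q_a.astype(np.int64) - z_a) @ (q_b.astype(np.int64) - z_b)
    # ...then requantize into the output's scale and zero-point.
    q_y = np.round((s_a * s_b / s_y) * acc) + z_y
    return np.clip(q_y, 0, 2**n_bits - 1).astype(np.int64)

# a = 0.5 * [[2, 4]] = [[1, 2]] and b = [[2], [2]], so a @ b = [[6]].
q_a = np.array([[2, 4]])
q_b = np.array([[2], [2]])
print(quantized_matmul(q_a, 0.5, 0, q_b, 1.0, 0, s_y=1.0, z_y=0))  # [[6]]
```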
