Use Concrete ML ONNX Support
Internally, Concrete-ML uses ONNX operators as an intermediate representation (IR) for manipulating machine learning models produced through export from the supported frameworks. As ONNX is becoming the standard exchange format for neural networks, this allows Concrete-ML to remain flexible while keeping model manipulation simple. In addition, it allows for a straightforward mapping to NumPy operators, supported by Concrete-Numpy, which gives access to the Concrete stack's FHE conversion capabilities.
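To give an intuition of that mapping, here is a minimal sketch of how a couple of ONNX operators can be expressed with plain NumPy. These helper functions are illustrative only and are not Concrete-ML's actual implementations, which also handle operator attributes, broadcasting and quantization.

```python
import numpy

# Illustrative NumPy equivalents of the ONNX Gemm and Relu operators.
# This is a simplified sketch, not Concrete-ML's internal code.
def numpy_gemm(a, b, c, alpha=1.0, beta=1.0):
    # ONNX Gemm: alpha * A @ B + beta * C
    return alpha * (a @ b) + beta * c

def numpy_relu(x):
    return numpy.maximum(x, 0)

x = numpy.random.randn(2, 3)
w = numpy.random.randn(3, 4)
bias = numpy.zeros((1, 4))
print(numpy_relu(numpy_gemm(x, w, bias)).shape)  # (2, 4)
```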
Here we list the operators that are supported, as well as the operators that have a quantized version, which should allow you to perform automatic Post-Training Quantization (PTQ) of your models.
The following operators are supported for evaluation and conversion to an equivalent NumPy circuit. As long as your model converts to an ONNX graph using only these operators, it should be convertible to an FHE equivalent (a short sketch after the list below shows how to inspect the operators of an exported model).
Abs
Acos
Acosh
Add
Asin
Asinh
Atan
Atanh
Celu
Clip
Constant
Conv
Cos
Cosh
Div
Elu
Equal
Erf
Exp
Gemm
Greater
HardSigmoid
Identity
LeakyRelu
Less
Log
MatMul
Mul
Not
Relu
Reshape
Selu
Sigmoid
Sin
Sinh
Softplus
Sub
Tan
Tanh
ThresholdedRelu
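As referenced above, one way to check which ONNX operators your model actually uses is to export it and list the node types of the resulting graph. The sketch below uses standard PyTorch and ONNX APIs; the model, file name and opset version are placeholders, so adapt them to your own setup and to the opset your Concrete-ML version expects.

```python
import torch
import onnx

# Toy model; any torch.nn.Module you plan to convert works the same way.
model = torch.nn.Sequential(torch.nn.Linear(10, 5), torch.nn.ReLU())
dummy_input = torch.randn(1, 10)

# Export to ONNX (opset_version is an assumption for this example).
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=14)

# List the ONNX operators used by the exported graph so they can be
# checked against the supported operators listed above.
onnx_model = onnx.load("model.onnx")
print(sorted({node.op_type for node in onnx_model.graph.node}))
# e.g. ['Gemm', 'Relu']
```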
The following operators have a quantized equivalent, used during Post-Training Quantization:
Abs: QuantizedAbs
Add: QuantizedAdd
Celu: QuantizedCelu
Clip: QuantizedClip
Conv: QuantizedConv
Elu: QuantizedElu
Exp: QuantizedExp
Gemm: QuantizedGemm
HardSigmoid: QuantizedHardSigmoid
Identity: QuantizedIdentity
LeakyRelu: QuantizedLeakyRelu
Linear: QuantizedLinear
Log: QuantizedLog
MatMul: QuantizedMatMul
Relu: QuantizedRelu
Reshape: QuantizedReshape
Selu: QuantizedSelu
Sigmoid: QuantizedSigmoid
Softplus: QuantizedSoftplus
Tanh: QuantizedTanh
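To give a rough idea of what a quantized operator does during PTQ, the sketch below applies a Relu between a de-quantization and a re-quantization step. This is a simplified illustration using made-up helper functions, not Concrete-ML's QuantizedOp API: the real quantizers compute scales and zero-points differently and operate within the FHE constraints.

```python
import numpy

# Simplified uniform quantization helpers (illustrative assumptions only).
def quantize(x, n_bits=8):
    scale = (x.max() - x.min()) / (2**n_bits - 1)
    zero_point = numpy.round(-x.min() / scale)
    return numpy.round(x / scale + zero_point).astype(numpy.int64), scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(numpy.float64) - zero_point)

# A "QuantizedRelu"-style step: de-quantize, apply the float operator,
# then re-quantize the result.
x = numpy.array([-1.5, -0.2, 0.3, 2.0])
q_x, s, z = quantize(x)
result = numpy.maximum(dequantize(q_x, s, z), 0)
q_out, s_out, z_out = quantize(result)
print(q_out)
```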