Support new ONNX node

Last updated 1 year ago


Concrete ML supports a wide range of models through the integration of ONNX nodes. When a specific ONNX node is missing, developers need to add support for it themselves. The steps are described below.

Operator Implementation

Floating-point Implementation

The ops_impl.py file is responsible for implementing the computation of ONNX operators using floating-point arithmetic. The implementation must mirror the behavior of the corresponding ONNX operator precisely, including its expected inputs, outputs, and operational semantics.
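As an illustration, here is a minimal sketch of what such a floating-point implementation could look like for the ONNX HardSigmoid operator, which computes y = max(0, min(1, alpha * x + beta)). The function name and the tuple-return convention are assumptions for this example; the real conventions in ops_impl.py may differ (e.g. decorators or extra arguments).

```python
import numpy

# Hypothetical floating-point implementation of the ONNX HardSigmoid
# operator. The ONNX specification defines it as:
#   y = max(0, min(1, alpha * x + beta)), with defaults alpha=0.2, beta=0.5
def numpy_hardsigmoid(x, *, alpha=0.2, beta=0.5):
    """Compute HardSigmoid following the ONNX specification.

    Returns a tuple, mirroring the convention that ONNX nodes can have
    several outputs (an assumption made for this sketch).
    """
    return (numpy.maximum(0, numpy.minimum(1, alpha * x + beta)),)
```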

Refer to the ONNX documentation to grasp the expected behavior, inputs, and outputs of the operator.

Operator Mapping

After implementing the operator in ops_impl.py, import it into onnx_utils.py and map it within the ONNX_OPS_TO_NUMPY_IMPL dictionary. This mapping is crucial for the framework to recognize and utilize the new operator.
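The mapping itself is a plain dictionary from ONNX operator names to their numpy implementations. The sketch below shows the shape of such an entry; the real ONNX_OPS_TO_NUMPY_IMPL in onnx_utils.py contains many more operators, and the implementations shown inline here are simplified stand-ins.

```python
import numpy

# Simplified stand-ins for implementations that would live in ops_impl.py.
def numpy_sigmoid(x):
    """Sigmoid: 1 / (1 + exp(-x))."""
    return (1.0 / (1.0 + numpy.exp(-x)),)

def numpy_hardsigmoid(x, *, alpha=0.2, beta=0.5):
    """HardSigmoid: clip(alpha * x + beta, 0, 1)."""
    return (numpy.maximum(0, numpy.minimum(1, alpha * x + beta)),)

# Illustrative mapping from ONNX op names to numpy implementations.
# Adding the new operator is a matter of adding one entry here.
ONNX_OPS_TO_NUMPY_IMPL = {
    "Sigmoid": numpy_sigmoid,       # existing entry (illustrative)
    "HardSigmoid": numpy_hardsigmoid,  # newly added operator
}
```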

Quantized Operator

Quantized operators are defined in quantized_ops.py and handle integer arithmetic. Their implementation is required for the new ONNX operator to be executed in FHE.

There are two types of quantized operators:

  • Univariate Non-Linear Operators: These operators apply a transformation to every element of the input without changing its shape. Sigmoid, Tanh, and ReLU are examples of such operations. The sigmoid is supported in this file simply as follows:

class QuantizedSigmoid(QuantizedOp):
    """Quantized sigmoid op."""

    _impl_for_op_named: str = "Sigmoid"
  • Linear Layers: Linear layers like Gemm and Conv require specific implementations for integer arithmetic. Refer to the QuantizedGemm and QuantizedConv implementations for reference.
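For a univariate operator, the new class typically follows the same pattern as QuantizedSigmoid above: a subclass that names the ONNX operator it implements. The QuantizedOp stub below is a simplification standing in for Concrete ML's real base class, and QuantizedHardSigmoid is a hypothetical example.

```python
# Simplified stand-in for Concrete ML's QuantizedOp base class, which in
# the real code base dispatches to the floating-point implementation
# registered for _impl_for_op_named.
class QuantizedOp:
    _impl_for_op_named: str = ""

class QuantizedHardSigmoid(QuantizedOp):
    """Quantized HardSigmoid op (hypothetical example)."""

    _impl_for_op_named: str = "HardSigmoid"
```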

Adding Tests

Proper testing is essential to ensure the correctness of the new ONNX node support.

There are several locations where tests should be added:

  • test_onnx_ops_impl.py: Tests the implementation of the ONNX node in floating point.

  • test_quantized_ops.py: Tests the implementation of the ONNX node in integer arithmetic.

  • Optional: test_compile_torch.py: Tests the implementation of a specific torch model that contains the new ONNX operator. The model needs to be added in torch_models.py.

Update Documentation

Finally, update the documentation to reflect the newly supported ONNX node.
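To illustrate the first kind of test, here is a minimal sketch of a floating-point correctness check. The implementation under test is a hypothetical HardSigmoid defined inline for self-containment; a real test would import the operator from ops_impl.py and may use additional tooling from the test suite.

```python
import numpy

# Hypothetical implementation under test; in the real code base it would
# be imported from ops_impl.py.
def numpy_hardsigmoid(x, *, alpha=0.2, beta=0.5):
    return (numpy.maximum(0, numpy.minimum(1, alpha * x + beta)),)

def test_hardsigmoid_matches_onnx_spec():
    # Check values against the ONNX definition: y = clip(alpha*x + beta, 0, 1)
    x = numpy.array([-10.0, -1.0, 0.0, 1.0, 10.0])
    expected = numpy.clip(0.2 * x + 0.5, 0.0, 1.0)
    assert numpy.allclose(numpy_hardsigmoid(x)[0], expected)
```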
