Concrete ML

Torch

Concrete-ML allows you to compile a torch model to its FHE counterpart.

This process applies most of the concepts described in the documentation on how to use quantization and triggers the compilation, making it possible to run the model over homomorphically encrypted data.

import torch
from torch import nn


class LogisticRegression(nn.Module):
    """LogisticRegression with torch."""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(in_features=14, out_features=1)
        self.sigmoid1 = nn.Sigmoid()

    def forward(self, x):
        """Forward pass."""
        out = self.fc1(x)
        out = self.sigmoid1(out)
        return out


torch_model = LogisticRegression()

Once your model is trained, you can simply call the compile_torch_model function to execute the compilation.

import numpy

from concrete.ml.torch.compile import compile_torch_model

torch_input = torch.randn(100, 14)
quantized_numpy_module = compile_torch_model(
    torch_model,  # our model
    torch_input,  # a representative inputset to be used for both quantization and compilation
    n_bits=2,
)

You can then call quantized_numpy_module.forward_fhe.encrypt_run_decrypt() to run the FHE inference.

Your model is now ready to infer in an FHE setting.

# An example input that is going to be encrypted and used for homomorphic inference.
enc_x = numpy.array([numpy.random.randn(14)])
enc_x_q = quantized_numpy_module.quantize_input(enc_x)
fhe_prediction = quantized_numpy_module.forward_fhe.encrypt_run_decrypt(enc_x_q)

fhe_prediction contains the clear quantized output. The user can now dequantize the output to get the actual floating point prediction as follows:

clear_output = quantized_numpy_module.dequantize_output(
    numpy.array(fhe_prediction, dtype=numpy.float32)
)
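
As a quick sanity check (this step is not part of the original walkthrough), you can compare the dequantized FHE result with the floating point prediction of the unmodified torch model on the same input; with n_bits=2 the two values will differ noticeably because of quantization error.

# Compare the dequantized FHE result with the floating point torch prediction
# on the same input. Differences are expected with such low-bit quantization.
with torch.no_grad():
    torch_prediction = torch_model(torch.from_numpy(enc_x).float()).numpy()

print("FHE prediction:  ", clear_output)
print("Torch prediction:", torch_prediction)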

List of supported torch operators

Our torch conversion pipeline uses ONNX as an intermediate representation. We refer the user to the Concrete ML ONNX operator reference for more information.
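
If you want to see for yourself which ONNX operators your model maps to, you can export it with torch's standard exporter and list the nodes of the resulting graph. This is only an inspection aid, not part of the Concrete-ML workflow, and the exact operator names may vary with your torch and ONNX versions.

import onnx
import torch

# Export the torch model defined above with the standard torch ONNX exporter,
# purely to inspect the operators it maps to. Concrete-ML performs its own
# conversion internally; this snippet is only an illustration.
torch.onnx.export(torch_model, torch.randn(1, 14), "logistic_regression.onnx")

onnx_model = onnx.load("logistic_regression.onnx")
print([node.op_type for node in onnx_model.graph.node])
# For the Linear + Sigmoid model above, this prints something like ['Gemm', 'Sigmoid'].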

The following operators in torch will be exported as Concrete-ML compatible ONNX operators:

  • torch.abs
  • torch.clip
  • torch.exp
  • torch.nn.identity
  • torch.log
  • torch.reshape
  • torch.Tensor.view

Operators that take an encrypted input and unencrypted constants:

  • torch.add, torch.Tensor operator +
  • torch.conv2d, torch.nn.Conv2D
  • torch.matmul
  • torch.nn.Linear

List of supported activations

  • torch.nn.Celu
  • torch.nn.Elu
  • torch.nn.HardSigmoid
  • torch.nn.HardTanh
  • torch.nn.LeakyRelu
  • torch.nn.ReLU
  • torch.nn.ReLU6
  • torch.nn.Selu
  • torch.nn.Sigmoid
  • torch.nn.Softplus
  • torch.nn.Tanh

Note that the equivalent versions from torch.functional are also supported.
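
As an illustration (this model is not taken from the Concrete-ML examples), here is a small network assembled only from operators and activations listed above; it uses torch.nn.functional.relu to show that the functional variants are accepted as well. Whether it compiles within the FHE constraints depends on the chosen n_bits and input set.

import torch
from torch import nn
from torch.nn import functional as F


class TinyCNN(nn.Module):
    """A small model built only from operators/activations in the lists above."""

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 2, kernel_size=3)  # torch.nn.Conv2D
        self.fc1 = nn.Linear(2 * 6 * 6, 2)           # torch.nn.Linear
        self.sigmoid = nn.Sigmoid()                  # torch.nn.Sigmoid

    def forward(self, x):
        # torch.nn.functional.relu is the functional form of torch.nn.ReLU.
        x = F.relu(self.conv1(x))
        x = x.view(-1, 2 * 6 * 6)                    # torch.Tensor.view
        return self.sigmoid(self.fc1(x))


# Compilation would follow the same pattern as above (sketch only; the chosen
# n_bits and input shape are arbitrary and may need tuning to satisfy the FHE
# bit-width constraints mentioned below):
# quantized_module = compile_torch_model(TinyCNN(), torch.randn(10, 1, 8, 8), n_bits=2)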

Note that the architecture of the neural network passed for compilation must respect some hard constraints imposed by FHE. Please read our detailed documentation on these limitations.

If you want to see more compilation examples, you can check out the Fully Connected Neural Network example.