API
: Module for shared data structures and code.
: Check and conversion tools.
: Module for debugging.
: Provide some variants of assert.
: Serialization module.
: Custom decoder for serialization.
: Dump functions for serialization.
: Custom encoder for serialization.
: Load functions for serialization.
: Utils that can be re-used by other pieces of code in the module.
: Module for deployment of the FHE model.
: APIs for FHE deployment.
: ONNX module.
: ONNX conversion related code.
: Utility functions for onnx operator implementations.
: Some code to manipulate models.
: Utils to interpret an ONNX model with numpy.
: ONNX ops implementation in Python + NumPy.
: Public API for encrypted data-frames.
: Define the framework used for managing keys (encrypt, decrypt) for encrypted data-frames.
: Define the encrypted data-frame framework.
: Module containing common functions for pytest.
: Torch modules for our pytests.
: Common functions or lists for test files, which can't be put in fixtures.
: Modules for quantization.
: Base Quantized Op class that implements quantization for a float numpy op.
: GLWE backend for some supported layers.
: Post Training Quantization methods.
: QuantizedModule API.
: Optimization passes for QuantizedModules.
: Quantized versions of the ONNX operators for post training quantization.
: Quantization utilities for a numpy array/tensor.
: Modules for p_error search.
: p_error binary search for classification and regression tasks.
: Import sklearn models.
: Base classes for all estimators.
: Implement sklearn's Generalized Linear Models (GLM).
: Implement sklearn linear model.
: Implement sklearn neighbors model.
: Scikit-learn interface for fully-connected quantized neural networks.
: Sparse Quantized Neural Network torch module.
: Implement RandomForest models.
: Implement Support Vector Machine.
: Implement DecisionTree models.
: Implements the conversion of a tree model to a numpy function.
: Implements XGBoost models.
: Modules for torch to numpy conversion.
: torch compilation function.
: Linear layer implementations for backprop FHE-compatible models.
: Implement the conversion of a torch model to a hybrid fhe/torch inference.
: This module contains classes for LoRA (Low-Rank Adaptation) FHE training and custom layers.
: A torch to numpy module.
: File to manage the version of the package.
: Custom json decoder to handle non-native types found in serialized Concrete ML objects.
: Custom json encoder to handle non-native types found in serialized Concrete ML objects.
: Type of ciphertext used as input/output for a model.
: Enum representing the execution mode.
: Simple enum for different modes of execution of HybridModel.
: Mode for the FHE API.
: Client API to encrypt and decrypt FHE data.
: Dev API to save the model and then load and run the FHE circuit.
: Server API to load and run the FHE circuit.
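These three deployment classes are typically used together: FHEModelDev saves the compiled model on the developer side, FHEModelClient generates keys and encrypts inputs, and FHEModelServer runs the FHE circuit. Below is a minimal sketch of that flow; it assumes a model already compiled with Concrete ML, and the directory paths and variable names (`compiled_model`, `x_clear`) are illustrative only.

```python
from concrete.ml.deployment.fhe_client_server import (
    FHEModelClient,
    FHEModelDev,
    FHEModelServer,
)

# Developer side: save the compiled model artifacts to a directory
FHEModelDev(path_dir="deployment", model=compiled_model).save()

# Client side: generate keys, then quantize, encrypt and serialize an input
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(x_clear)

# Server side: load the circuit and run it on the encrypted payload
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Back on the client: decrypt and de-quantize the result
result = client.deserialize_decrypt_dequantize(encrypted_result)
```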
: A mixed quantized-raw valued onnx function.
: Type construct that marks an ndarray as a raw output of a quantized op.
: Define a framework that manages keys.
: Define an encrypted data-frame framework that supports Pandas operators and parameters.
: Torch model that performs a simple addition between two inputs.
: A CNN class that has all zero weights and biases.
: Torch model with some branching and skip connections.
: Torch model with some branching and skip connections.
: Torch CNN model for the tests.
: Torch CNN model with grouped convolution for compile torch tests.
: Torch CNN model for the tests.
: Torch CNN model for the tests with a max pool.
: Torch CNN model for the tests.
: Concat with fancy indexing.
: Small model that uses a 1D convolution operator.
: Torch model with two different quantizers on the input.
: A torch model with an embedding layer.
: PyTorch module for performing matrix multiplication between two encrypted values.
: Minimalist network that expands the input tensor to a larger size.
: Torch model for the tests.
: Torch model that should generate MatMul->Add ONNX patterns.
: Torch model that should generate MatMul->Add ONNX patterns.
: Torch model for the tests.
: Model that only adds an empty dimension at axis 0.
: Model that only adds an empty dimension at axis 0, and returns the initial input as well.
: PyTorch module for performing SGD training.
: Torch model to test multiple inputs forward.
: Torch model to test multiple inputs forward.
: Torch model to test multiple inputs with different shape in the forward pass.
: Network that applies two quantized operations on a single input.
: Multi-output model.
: Torch model to test the concat and unsqueeze operators.
: Torch QAT model that does not quantize the inputs.
: Torch model, where we reuse some elements in a loop.
: Torch QAT model that applies various padding patterns.
: A model with a QAT Module.
: Torch model that implements a simple non-uniform quantizer.
: A small quantized network with Brevitas, trained on make_classification.
: Torch QAT model that reshapes the input.
: Fake torch model used to generate some onnx.
: Torch model with a single conv layer that produces the output, e.g., a blur filter.
: Torch model that implements a step function that needs Greater, Cast and Where.
: Torch model that implements a step function that needs Greater, Cast and Where.
: A very small CNN.
: A very small QAT CNN to classify the sklearn digits data-set.
: A small network with Brevitas, trained on make_classification.
: Torch model that performs an encrypted division between two inputs.
: Torch model that performs an encrypted multiplication between two inputs.
: Torch model to test the ReduceSum ONNX operator in a leveled circuit.
: Torch model that calls univariate and shape functions of torch.
: Simple network with a where operation for testing.
: An operator that mixes (adds or multiplies) together encrypted inputs.
: Base class for quantized ONNX ops implemented in numpy.
: A univariate operator of an encrypted value.
: GLWE execution helper for pure linear layers.
: Simple enum for different modes of execution of HybridModel.
: Base ONNX to Concrete ML computation graph conversion class.
: Post-training Affine Quantization.
: Converter of Quantization Aware Training networks.
: Inference for a quantized model.
: Detect neural network patterns that can be optimized with round PBS.
: ConstantOfShape operator.
: Gather operator.
: Shape operator.
: Slice operator.
: Quantized Abs op.
: Quantized Addition operator.
: Quantized Average Pooling op.
: Quantized Batch normalization with encrypted input and in-the-clear normalization params.
: Brevitas uniform quantization with encrypted input.
: Cast the input to the required data type.
: Quantized Celu op.
: Quantized clip op.
: Concatenate operator.
: Quantized Conv op.
: Quantized Division operator.
: Quantized Elu op.
: Comparison operator ==.
: Quantized erf op.
: Quantized Exp op.
: Expand operator for quantized tensors.
: Quantized flatten for encrypted inputs.
: Quantized Floor op.
: Quantized Gemm op.
: Comparison operator >.
: Comparison operator >=.
: Quantized HardSigmoid op.
: Quantized Hardswish op.
: Quantized Identity op.
: Quantized LeakyRelu op.
: Comparison operator <.
: Comparison operator <=.
: Quantized Log op.
: Quantized MatMul op.
: Quantized Max op.
: Quantized Max Pooling op.
: Quantized Min op.
: Quantized Multiplication operator.
: Quantized Neg op.
: Quantized Not op.
: Or operator ||.
: Quantized PRelu op.
: Quantized Padding op.
: Quantized pow op.
: ReduceSum with encrypted input.
: Quantized Relu op.
: Quantized Reshape op.
: Quantized round op.
: Quantized Selu op.
: Quantized sigmoid op.
: Quantized Sign op.
: Quantized Softplus op.
: Squeeze operator.
: Subtraction operator.
: Quantized Tanh op.
: Transpose operator for quantized inputs.
: Quantized Unfold op.
: Unsqueeze operator.
: Where operator on quantized arrays.
: Calibration set statistics.
: Options for quantization.
: Abstraction of quantized array.
: Uniform quantizer with a PyTorch implementation.
: Quantization parameters for uniform quantization.
: Uniform quantizer.
: Class for p_error hyper-parameter search for classification and regression tasks.
: Base class for linear and tree-based classifiers in Concrete ML.
: Base class for all estimators in Concrete ML.
: Mixin class for tree-based classifiers.
: Mixin class for tree-based estimators.
: Mixin class for tree-based regressors.
: Mixin that provides quantization for a torch module and follows the Estimator API.
: A Mixin class for sklearn KNeighbors classifiers with FHE.
: A Mixin class for sklearn KNeighbors models with FHE.
: A Mixin class for sklearn linear classifiers with FHE.
: A Mixin class for sklearn linear models with FHE.
: A Mixin class for sklearn linear regressors with FHE.
: A Mixin class for sklearn SGD classifiers with FHE.
: A Mixin class for sklearn SGD regressors with FHE.
: A Gamma regression model with FHE.
: A Poisson regression model with FHE.
: A Tweedie regression model with FHE.
: An ElasticNet regression model with FHE.
: A Lasso regression model with FHE.
: A linear regression model with FHE.
: A logistic regression model with FHE.
: A Ridge regression model with FHE.
: An FHE linear classifier model fitted with stochastic gradient descent.
: An FHE linear regression model fitted with stochastic gradient descent.
: A k-nearest neighbors classifier model with FHE.
: A Fully-Connected Neural Network classifier with FHE.
: A Fully-Connected Neural Network regressor with FHE.
: Sparse Quantized Neural Network.
: Implements the RandomForest classifier.
: Implements the RandomForest regressor.
: A Classification Support Vector Machine (SVM).
: A Regression Support Vector Machine (SVM).
: Implements the sklearn DecisionTreeClassifier.
: Implements the sklearn DecisionTreeRegressor.
: Implements the XGBoost classifier.
: Implements the XGBoost regressor.
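All of the estimators above follow the scikit-learn interface, with an extra compilation step before FHE inference. A minimal sketch using the logistic regression model is shown below; data loading and the train/test split are omitted, and `n_bits` and the `fhe` argument are the usual knobs.

```python
from concrete.ml.sklearn import LogisticRegression

# Train in the clear, exactly like the scikit-learn equivalent
model = LogisticRegression(n_bits=8)
model.fit(X_train, y_train)

# Compile to an FHE circuit using a representative input set
model.compile(X_train)

# Predict in simulation, or with actual FHE execution
y_sim = model.predict(X_test, fhe="simulate")
y_fhe = model.predict(X_test, fhe="execute")
```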
: Backward module for linear layers.
: Custom linear module.
: Custom autograd function for forward and backward passes.
: Forward module for linear layers.
: Convert a model to a hybrid model.
: Hybrid FHE Model Server.
: Placeholder type for a typical logger like the one from loguru.
: A wrapper class for the modules to be evaluated remotely with FHE.
: Trainer class for LoRA fine-tuning with FHE support.
: LoraTraining module for fine-tuning with LoRA in a hybrid model setting.
: General interface to transform a torch.nn.Module to numpy module.
: sklearn.utils.check_X_y with an assert.
: sklearn.utils.check_X_y with an assert and multi-output handling.
: sklearn.utils.check_array with an assert.
: Provide a custom assert to check that the condition is False.
: Provide a custom assert to check that a piece of code is never reached.
: Provide a custom assert to check that the condition is True.
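As a rough illustration of these assert variants (assuming the custom_assert module path shown in the import, and passing only a condition and an error message):

```python
from concrete.ml.common.debugging.custom_assert import assert_true

def positive_sqrt(x: float) -> float:
    # Raise a descriptive error if the pre-condition is violated
    assert_true(x >= 0, "x must be non-negative")
    return x**0.5
```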
: Define a custom object hook that enables loading any supported serialized values.
: Dump any Concrete ML object in a file.
: Dump any object as a string.
: Dump the value into a custom dict format.
: Load any Concrete ML object that provides a load_dict method.
: Load any Concrete ML object that provides a dump_dict method.
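As a usage sketch for these dump/load helpers (assuming a Concrete ML object such as a fitted estimator, here named `model`), `dumps`/`loads` round-trip through a JSON string while `dump`/`load` work on open text files:

```python
from concrete.ml.common.serialization.dumpers import dump, dumps
from concrete.ml.common.serialization.loaders import load, loads

# Serialize to a JSON string and restore the object from it
serialized = dumps(model)
restored = loads(serialized)

# Or serialize to / from a file on disk
with open("model.json", "w") as f:
    dump(model, f)
with open("model.json", "r") as f:
    restored = load(f)
```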
: Indicate if all unpacked values are of a supported float dtype.
: Indicate if all unpacked values are of a supported integer dtype.
: Indicate if all unpacked values are of the specified dtype(s).
: Check if two numpy arrays are equal within a given tolerance and have the same shape.
: Check whether the device string for compilation or FHE execution is CUDA or CPU.
: Check whether the device string is valid or raise an exception.
: Convert any allowed type into an array and cast it if required.
: Check whether the circuit can be executed on the required device.
: Check that the user did not set p_error or global_p_error in the configuration.
: Compute the number of bits required to represent x.
: Generate a proxy function for a function accepting only *args type arguments.
: Return the class of the model (instantiated or not), which can be a partial() instance.
: Return the name of the model, which can be a partial() instance.
: Return the ONNX opset_version.
: Check if a model is a Brevitas type.
: Indicate if the model class represents a classifier.
: Indicate if a model class, which can be a partial() instance, is an element of a_list.
: Indicate if the input container is a Pandas DataFrame.
: Indicate if the input container is a Pandas Series.
: Indicate if the input container is a Pandas DataFrame or Series.
: Indicate if the model class represents a regressor.
: Return (p_error, global_p_error) that we want to give to Concrete.
: Check and process the rounding_threshold_bits parameter.
: Sanitize arg_name, replacing invalid chars by _.
: Make the input a tuple if it is not already the case.
: Check that current versions match the ones used in development.
: Fuse a sequence of matmul -> add into a gemm node.
: Get the numpy equivalent forward of the provided ONNX model.
: Get the numpy equivalent forward of the provided ONNX model for tree-based models only.
: Get the numpy equivalent forward of the provided torch Module.
: Preprocess the ONNX model to be used for numpy execution.
: Compute the output shape of a pool or conv operation.
: Compute any additional padding needed to compute pooling layers.
: Pad a tensor according to ONNX spec, using an optional custom pad value.
: Compute the average pooling normalization constant.
: Comparison operation using the round_bit_pattern function.
: Remove the nodes following first node matching node_op_type from the ONNX graph.
: Remove the first node matching node_op_type and its following nodes from the ONNX graph.
: Convert the first Gather node to a matrix multiplication node.
: Keep the outputs given in outputs_to_keep and remove the others from the model.
: Remove identity nodes from a model.
: Remove unnecessary nodes from the ONNX graph.
: Remove unused Constant nodes in the provided onnx model.
: Simplify an ONNX model, removes unused Constant nodes and Identity nodes.
: Check an ONNX model, handling large models (>2GB) by using external data.
: Execute the provided ONNX graph on the given inputs.
: Execute the provided ONNX graph on the given inputs for tree-based models only.
: Get the attribute from an ONNX AttributeProto.
: Construct the qualified type name of the ONNX operator.
: Remove initializers from model inputs.
: Cast values to floating points.
: Compute abs in numpy according to ONNX spec.
: Compute acos in numpy according to ONNX spec.
: Compute acosh in numpy according to ONNX spec.
: Compute add in numpy according to ONNX spec.
: Compute asin in numpy according to ONNX spec.
: Compute asinh in numpy according to ONNX spec.
: Compute atan in numpy according to ONNX spec.
: Compute atanh in numpy according to ONNX spec.
: Compute Average Pooling using Torch.
: Compute the batch normalization of the input tensor.
: Execute ONNX cast in Numpy.
: Compute celu in numpy according to ONNX spec.
: Apply concatenate in numpy according to ONNX spec.
: Return the constant passed as a kwarg.
: Compute N-D convolution using Torch.
: Compute cos in numpy according to ONNX spec.
: Compute cosh in numpy according to ONNX spec.
: Compute div in numpy according to ONNX spec.
: Compute elu in numpy according to ONNX spec.
: Compute equal in numpy according to ONNX spec.
: Compute equal in numpy according to ONNX spec and cast outputs to floats.
: Compute erf in numpy according to ONNX spec.
: Compute exponential in numpy according to ONNX spec.
: Flatten a tensor into a 2d array.
: Compute Floor in numpy according to ONNX spec.
: Compute Gemm in numpy according to ONNX spec.
: Compute greater in numpy according to ONNX spec.
: Compute greater in numpy according to ONNX spec and cast outputs to floats.
: Compute greater or equal in numpy according to ONNX spec.
: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
: Compute hardsigmoid in numpy according to ONNX spec.
: Compute hardswish in numpy according to ONNX spec.
: Compute identity in numpy according to ONNX spec.
: Compute leakyrelu in numpy according to ONNX spec.
: Compute less in numpy according to ONNX spec.
: Compute less in numpy according to ONNX spec and cast outputs to floats.
: Compute less or equal in numpy according to ONNX spec.
: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
: Compute log in numpy according to ONNX spec.
: Compute matmul in numpy according to ONNX spec.
: Compute Max in numpy according to ONNX spec.
: Compute Max Pooling using Torch.
: Compute Min in numpy according to ONNX spec.
: Compute mul in numpy according to ONNX spec.
: Compute Negative in numpy according to ONNX spec.
: Compute not in numpy according to ONNX spec.
: Compute not in numpy according to ONNX spec and cast outputs to floats.
: Compute or in numpy according to ONNX spec.
: Compute or in numpy according to ONNX spec and cast outputs to floats.
: Compute pow in numpy according to ONNX spec.
: Compute relu in numpy according to ONNX spec.
: Compute round in numpy according to ONNX spec.
: Compute selu in numpy according to ONNX spec.
: Compute sigmoid in numpy according to ONNX spec.
: Compute Sign in numpy according to ONNX spec.
: Compute sin in numpy according to ONNX spec.
: Compute sinh in numpy according to ONNX spec.
: Compute softmax in numpy according to ONNX spec.
: Compute softplus in numpy according to ONNX spec.
: Compute sub in numpy according to ONNX spec.
: Compute tan in numpy according to ONNX spec.
: Compute tanh in numpy according to ONNX spec.
: Compute thresholdedrelu in numpy according to ONNX spec.
: Transpose in numpy according to ONNX spec.
: Compute Unfold using Torch.
: Compute the equivalent of numpy.where.
: Compute the equivalent of numpy.where.
: Decorate a numpy onnx function to flag the raw/non quantized inputs.
: Compute rounded equal in numpy according to ONNX spec for tree-based models only.
: Compute rounded less in numpy according to ONNX spec for tree-based models only.
: Compute rounded less or equal in numpy according to ONNX spec for tree-based models only.
: Load a serialized encrypted data-frame.
: Merge two encrypted data-frames in FHE using Pandas parameters.
: Check that the given object can properly be serialized.
: Reduce size of the given data-set.
: Select n_sample random elements from a 2D NumPy array.
: Get the pytest parameters to use for testing all models available in Concrete ML.
: Get the pytest parameters to use for testing linear models.
: Get the pytest parameters to use for testing neighbor models.
: Get the pytest parameters to use for testing neural network models.
: Get the pytest parameters to use for testing tree-based models.
: Instantiate any Concrete ML model type.
: Load an object saved with torch.save() from a file or dict.
: Determine if both data-frames are identical.
: Indicate if two values are equal.
: Check if the GLWE backend is installed.
: Convert the n_bits parameter into a proper dictionary.
: Fill a parameter set structure from kwargs parameters.
: Get the quantized module of a given model in FHE, simulated or not.
: Add a transpose after the last node.
: Assert that an Add node with a specific constant exists in the ONNX graph.
: Create an ONNX model with the Hummingbird convert method.
: Build an FHE-compliant ONNX model using a fitted scikit-learn model.
: Apply post-processing from the graph.
: Apply pre-processing onto the ONNX graph.
: Convert the tree inference to a numpy function using Hummingbird.
: Pre-process tree values.
: Workaround for a torch issue where the proper axis is not exported in the ONNX Squeeze node.
: Build a quantized module from a Torch or ONNX model.
: Compile a Brevitas Quantization Aware Training model.
: Compile a torch module into an FHE equivalent.
: Compile a torch module into an FHE equivalent.
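The compile entry points above turn a torch module into a quantized module backed by an FHE circuit. The sketch below uses `compile_torch_model` on a small fully-connected network with a random calibration input set; the model architecture, `n_bits` value and simulation call are illustrative assumptions, not the only supported settings.

```python
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

# A small torch model and a representative (calibration) input set
torch_model = torch.nn.Sequential(
    torch.nn.Linear(10, 5),
    torch.nn.ReLU(),
    torch.nn.Linear(5, 2),
)
inputset = numpy.random.uniform(-1, 1, size=(100, 10)).astype(numpy.float32)

# Post-training quantization and compilation to an FHE-ready quantized module
quantized_module = compile_torch_model(torch_model, inputset, n_bits=6)

# Run inference in simulation (use fhe="execute" for actual FHE execution)
predictions = quantized_module.forward(inputset[:1], fhe="simulate")
```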
: Convert a torch tensor or a numpy array to a numpy array.
: Check if a torch model has QNN layers.
: Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear.
: Convert a tuple to a string representation.
: Convert a string representation of a tuple to a tuple.
: Get names of modules to be executed remotely.
: Move parameter gradient to device.
: Move optimizer object to device.
: Set up a logger that logs to both console and a file.