concrete.ml.common
: Module for shared data structures and code.
concrete.ml.common.check_inputs
: Check and conversion tools.
concrete.ml.common.debugging
: Module for debugging.
concrete.ml.common.debugging.custom_assert
: Provide some variants of assert.
concrete.ml.common.serialization
: Serialization module.
concrete.ml.common.serialization.decoder
: Custom decoder for serialization.
concrete.ml.common.serialization.dumpers
: Dump functions for serialization.
concrete.ml.common.serialization.encoder
: Custom encoder for serialization.
concrete.ml.common.serialization.loaders
: Load functions for serialization.
concrete.ml.common.utils
: Utils that can be re-used by other pieces of code in the module.
concrete.ml.deployment
: Module for deployment of the FHE model.
concrete.ml.deployment.fhe_client_server
: APIs for FHE deployment.
concrete.ml.onnx
: ONNX module.
concrete.ml.onnx.convert
: ONNX conversion related code.
concrete.ml.onnx.onnx_impl_utils
: Utility functions for onnx operator implementations.
concrete.ml.onnx.onnx_model_manipulations
: Some code to manipulate models.
concrete.ml.onnx.onnx_utils
: Utils to interpret an ONNX model with numpy.
concrete.ml.onnx.ops_impl
: ONNX ops implementation in Python + NumPy.
concrete.ml.pandas
: Public API for encrypted data-frames.
concrete.ml.pandas.client_engine
: Define the framework used for managing keys (encrypt, decrypt) for encrypted data-frames.
concrete.ml.pandas.dataframe
: Define the encrypted data-frame framework.
concrete.ml.pytest
: Module containing common functions for pytest.
concrete.ml.pytest.torch_models
: Torch modules for our pytests.
concrete.ml.pytest.utils
: Common functions or lists for test files, which can't be put in fixtures.
concrete.ml.quantization
: Modules for quantization.
concrete.ml.quantization.base_quantized_op
: Base Quantized Op class that implements quantization for a float numpy op.
concrete.ml.quantization.post_training
: Post Training Quantization methods.
concrete.ml.quantization.quantized_module
: QuantizedModule API.
concrete.ml.quantization.quantized_module_passes
: Optimization passes for QuantizedModules.
concrete.ml.quantization.quantized_ops
: Quantized versions of the ONNX operators for post training quantization.
concrete.ml.quantization.quantizers
: Quantization utilities for a numpy array/tensor.
concrete.ml.search_parameters
: Modules for p_error search.
concrete.ml.search_parameters.p_error_search
: p_error binary search for classification and regression tasks.
concrete.ml.sklearn
: Import sklearn models.
concrete.ml.sklearn.base
: Base classes for all estimators.
concrete.ml.sklearn.glm
: Implement sklearn's Generalized Linear Models (GLM).
concrete.ml.sklearn.linear_model
: Implement sklearn linear model.
concrete.ml.sklearn.neighbors
: Implement sklearn neighbors model.
concrete.ml.sklearn.qnn
: Scikit-learn interface for fully-connected quantized neural networks.
concrete.ml.sklearn.qnn_module
: Sparse Quantized Neural Network torch module.
concrete.ml.sklearn.rf
: Implement RandomForest models.
concrete.ml.sklearn.svm
: Implement Support Vector Machine.
concrete.ml.sklearn.tree
: Implement DecisionTree models.
concrete.ml.sklearn.tree_to_numpy
: Implements the conversion of a tree model to a numpy function.
concrete.ml.sklearn.xgb
: Implements XGBoost models.
concrete.ml.torch
: Modules for torch to numpy conversion.
concrete.ml.torch.compile
: torch compilation function.
concrete.ml.torch.hybrid_model
: Implement the conversion of a torch model to hybrid FHE/torch inference.
concrete.ml.torch.lora
: This module contains classes for LoRA (Low-Rank Adaptation) training and custom layers.
concrete.ml.torch.numpy_module
: A torch to numpy module.
concrete.ml.version
: File to manage the version of the package.
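The modules above cover the full pipeline from model training to FHE inference. As an illustration only, here is a minimal, hedged sketch of the typical scikit-learn-style workflow they support (train in the clear, compile to an FHE circuit, then predict); the dataset and the n_bits value are illustrative assumptions, not values taken from this listing.

```python
# Minimal sketch of the scikit-learn-style workflow (illustrative values, assumed API).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(n_bits=8)  # quantization bit-width, illustrative value
model.fit(X_train, y_train)           # training runs in the clear
model.compile(X_train)                # build the FHE circuit from a representative input set
y_pred = model.predict(X_test, fhe="simulate")  # use fhe="execute" for real FHE
```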
decoder.ConcreteDecoder
: Custom json decoder to handle non-native types found in serialized Concrete ML objects.
encoder.ConcreteEncoder
: Custom json encoder to handle non-native types found in serialized Concrete ML objects.
utils.FheMode
: Enum representing the execution mode.
fhe_client_server.DeploymentMode
: Mode for the FHE API.
fhe_client_server.FHEModelClient
: Client API to encrypt and decrypt FHE data.
fhe_client_server.FHEModelDev
: Dev API to save the model and then load and run the FHE circuit.
fhe_client_server.FHEModelServer
: Server API to load and run the FHE circuit.
ops_impl.ONNXMixedFunction
: A mixed quantized-raw valued onnx function.
ops_impl.RawOpOutput
: Type construct that marks an ndarray as a raw output of a quantized op.
client_engine.ClientEngine
: Define a framework that manages keys.
dataframe.EncryptedDataFrame
: Define an encrypted data-frame framework that supports Pandas operators and parameters.
torch_models.AddNet
: Torch model that performs a simple addition between two inputs.
torch_models.BranchingGemmModule
: Torch model with some branching and skip connections.
torch_models.BranchingModule
: Torch model with some branching and skip connections.
torch_models.CNN
: Torch CNN model for the tests.
torch_models.CNNGrouped
: Torch CNN model with grouped convolution for compile torch tests.
torch_models.CNNInvalid
: Torch CNN model for the tests.
torch_models.CNNMaxPool
: Torch CNN model for the tests with a max pool.
torch_models.CNNOther
: Torch CNN model for the tests.
torch_models.ConcatFancyIndexing
: Concat with fancy indexing.
torch_models.Conv1dModel
: Small model that uses a 1D convolution operator.
torch_models.DoubleQuantQATMixNet
: Torch model with two different quantizers on the input.
torch_models.EmbeddingModel
: A torch model with an embedding layer.
torch_models.EncryptedMatrixMultiplicationModel
: PyTorch module for performing matrix multiplication between two encrypted values.
torch_models.ExpandModel
: Minimalist network that expands the input tensor to a larger size.
torch_models.FC
: Torch model for the tests.
torch_models.FCSeq
: Torch model that should generate MatMul->Add ONNX patterns.
torch_models.FCSeqAddBiasVec
: Torch model that should generate MatMul->Add ONNX patterns.
torch_models.FCSmall
: Torch model for the tests.
torch_models.IdentityExpandModel
: Model that only adds an empty dimension at axis 0.
torch_models.IdentityExpandMultiOutputModel
: Model that only adds an empty dimension at axis 0, and returns the initial input as well.
torch_models.ManualLogisticRegressionTraining
: PyTorch module for performing SGD training.
torch_models.MultiInputNN
: Torch model to test multiple inputs forward.
torch_models.MultiInputNNConfigurable
: Torch model to test multiple inputs forward.
torch_models.MultiInputNNDifferentSize
: Torch model to test multiple inputs with different shape in the forward pass.
torch_models.MultiOpOnSingleInputConvNN
: Network that applies two quantized operations on a single input.
torch_models.MultiOutputModel
: Multi-output model.
torch_models.NetWithConcatUnsqueeze
: Torch model to test the concat and unsqueeze operators.
torch_models.NetWithConstantsFoldedBeforeOps
: Torch QAT model that does not quantize the inputs.
torch_models.NetWithLoops
: Torch model, where we reuse some elements in a loop.
torch_models.PaddingNet
: Torch QAT model that applies various padding patterns.
torch_models.PartialQATModel
: A model with a QAT Module.
torch_models.QATTestModule
: Torch model that implements a simple non-uniform quantizer.
torch_models.QuantCustomModel
: A small quantized network with Brevitas, trained on make_classification.
torch_models.ShapeOperationsNet
: Torch QAT model that reshapes the input.
torch_models.SimpleNet
: Fake torch model used to generate some onnx.
torch_models.SimpleQAT
: Torch model that implements a step function requiring Greater, Cast and Where.
torch_models.SingleMixNet
: Torch model with a single conv layer that produces the output, e.g., a blur filter.
torch_models.StepActivationModule
: Torch model that implements a step function requiring Greater, Cast and Where.
torch_models.TinyCNN
: A very small CNN.
torch_models.TinyQATCNN
: A very small QAT CNN to classify the sklearn digits data-set.
torch_models.TorchCustomModel
: A small network with Brevitas, trained on make_classification.
torch_models.TorchDivide
: Torch model that performs an encrypted division between two inputs.
torch_models.TorchMultiply
: Torch model that performs an encrypted multiplication between two inputs.
torch_models.TorchSum
: Torch model to test the ReduceSum ONNX operator in a leveled circuit.
torch_models.UnivariateModule
: Torch model that calls univariate and shape functions of torch.
base_quantized_op.QuantizedMixingOp
: An operator that mixes (adds or multiplies) together encrypted inputs.
base_quantized_op.QuantizedOp
: Base class for quantized ONNX ops implemented in numpy.
base_quantized_op.QuantizedOpUnivariateOfEncrypted
: A univariate operator of an encrypted value.
post_training.ONNXConverter
: Base ONNX to Concrete ML computation graph conversion class.
post_training.PostTrainingAffineQuantization
: Post-training Affine Quantization.
post_training.PostTrainingQATImporter
: Converter of Quantization Aware Training networks.
quantized_module.QuantizedModule
: Inference for a quantized model.
quantized_module_passes.PowerOfTwoScalingRoundPBSAdapter
: Detect neural network patterns that can be optimized with round PBS.
quantized_ops.ONNXConstantOfShape
: ConstantOfShape operator.
quantized_ops.ONNXGather
: Gather operator.
quantized_ops.ONNXShape
: Shape operator.
quantized_ops.ONNXSlice
: Slice operator.
quantized_ops.QuantizedAbs
: Quantized Abs op.
quantized_ops.QuantizedAdd
: Quantized Addition operator.
quantized_ops.QuantizedAvgPool
: Quantized Average Pooling op.
quantized_ops.QuantizedBatchNormalization
: Quantized Batch normalization with encrypted input and in-the-clear normalization params.
quantized_ops.QuantizedBrevitasQuant
: Brevitas uniform quantization with encrypted input.
quantized_ops.QuantizedCast
: Cast the input to the required data type.
quantized_ops.QuantizedCelu
: Quantized Celu op.
quantized_ops.QuantizedClip
: Quantized clip op.
quantized_ops.QuantizedConcat
: Concatenate operator.
quantized_ops.QuantizedConv
: Quantized Conv op.
quantized_ops.QuantizedDiv
: Quantized Division operator.
quantized_ops.QuantizedElu
: Quantized Elu op.
quantized_ops.QuantizedEqual
: Comparison operator ==.
quantized_ops.QuantizedErf
: Quantized erf op.
quantized_ops.QuantizedExp
: Quantized Exp op.
quantized_ops.QuantizedExpand
: Expand operator for quantized tensors.
quantized_ops.QuantizedFlatten
: Quantized flatten for encrypted inputs.
quantized_ops.QuantizedFloor
: Quantized Floor op.
quantized_ops.QuantizedGemm
: Quantized Gemm op.
quantized_ops.QuantizedGreater
: Comparison operator >.
quantized_ops.QuantizedGreaterOrEqual
: Comparison operator >=.
quantized_ops.QuantizedHardSigmoid
: Quantized HardSigmoid op.
quantized_ops.QuantizedHardSwish
: Quantized Hardswish op.
quantized_ops.QuantizedIdentity
: Quantized Identity op.
quantized_ops.QuantizedLeakyRelu
: Quantized LeakyRelu op.
quantized_ops.QuantizedLess
: Comparison operator <.
quantized_ops.QuantizedLessOrEqual
: Comparison operator <=.
quantized_ops.QuantizedLog
: Quantized Log op.
quantized_ops.QuantizedMatMul
: Quantized MatMul op.
quantized_ops.QuantizedMax
: Quantized Max op.
quantized_ops.QuantizedMaxPool
: Quantized Max Pooling op.
quantized_ops.QuantizedMin
: Quantized Min op.
quantized_ops.QuantizedMul
: Quantized Multiplication operator.
quantized_ops.QuantizedNeg
: Quantized Neg op.
quantized_ops.QuantizedNot
: Quantized Not op.
quantized_ops.QuantizedOr
: Or operator ||.
quantized_ops.QuantizedPRelu
: Quantized PRelu op.
quantized_ops.QuantizedPad
: Quantized Padding op.
quantized_ops.QuantizedPow
: Quantized pow op.
quantized_ops.QuantizedReduceSum
: ReduceSum with encrypted input.
quantized_ops.QuantizedRelu
: Quantized Relu op.
quantized_ops.QuantizedReshape
: Quantized Reshape op.
quantized_ops.QuantizedRound
: Quantized round op.
quantized_ops.QuantizedSelu
: Quantized Selu op.
quantized_ops.QuantizedSigmoid
: Quantized sigmoid op.
quantized_ops.QuantizedSign
: Quantized Sign op.
quantized_ops.QuantizedSoftplus
: Quantized Softplus op.
quantized_ops.QuantizedSqueeze
: Squeeze operator.
quantized_ops.QuantizedSub
: Subtraction operator.
quantized_ops.QuantizedTanh
: Quantized Tanh op.
quantized_ops.QuantizedTranspose
: Transpose operator for quantized inputs.
quantized_ops.QuantizedUnfold
: Quantized Unfold op.
quantized_ops.QuantizedUnsqueeze
: Unsqueeze operator.
quantized_ops.QuantizedWhere
: Where operator on quantized arrays.
quantizers.MinMaxQuantizationStats
: Calibration set statistics.
quantizers.QuantizationOptions
: Options for quantization.
quantizers.QuantizedArray
: Abstraction of quantized array.
quantizers.UniformQuantizationParameters
: Quantization parameters for uniform quantization.
quantizers.UniformQuantizer
: Uniform quantizer.
p_error_search.BinarySearch
: Class for p_error hyper-parameter search for classification and regression tasks.
base.BaseClassifier
: Base class for linear and tree-based classifiers in Concrete ML.
base.BaseEstimator
: Base class for all estimators in Concrete ML.
base.BaseTreeClassifierMixin
: Mixin class for tree-based classifiers.
base.BaseTreeEstimatorMixin
: Mixin class for tree-based estimators.
base.BaseTreeRegressorMixin
: Mixin class for tree-based regressors.
base.QuantizedTorchEstimatorMixin
: Mixin that provides quantization for a torch module and follows the Estimator API.
base.SklearnKNeighborsClassifierMixin
: A Mixin class for sklearn KNeighbors classifiers with FHE.
base.SklearnKNeighborsMixin
: A Mixin class for sklearn KNeighbors models with FHE.
base.SklearnLinearClassifierMixin
: A Mixin class for sklearn linear classifiers with FHE.
base.SklearnLinearModelMixin
: A Mixin class for sklearn linear models with FHE.
base.SklearnLinearRegressorMixin
: A Mixin class for sklearn linear regressors with FHE.
base.SklearnSGDClassifierMixin
: A Mixin class for sklearn SGD classifiers with FHE.
base.SklearnSGDRegressorMixin
: A Mixin class for sklearn SGD regressors with FHE.
glm.GammaRegressor
: A Gamma regression model with FHE.
glm.PoissonRegressor
: A Poisson regression model with FHE.
glm.TweedieRegressor
: A Tweedie regression model with FHE.
linear_model.ElasticNet
: An ElasticNet regression model with FHE.
linear_model.Lasso
: A Lasso regression model with FHE.
linear_model.LinearRegression
: A linear regression model with FHE.
linear_model.LogisticRegression
: A logistic regression model with FHE.
linear_model.Ridge
: A Ridge regression model with FHE.
linear_model.SGDClassifier
: An FHE linear classifier model fitted with stochastic gradient descent.
linear_model.SGDRegressor
: An FHE linear regression model fitted with stochastic gradient descent.
neighbors.KNeighborsClassifier
: A k-nearest neighbors classifier model with FHE.
qnn.NeuralNetClassifier
: A Fully-Connected Neural Network classifier with FHE.
qnn.NeuralNetRegressor
: A Fully-Connected Neural Network regressor with FHE.
qnn_module.SparseQuantNeuralNetwork
: Sparse Quantized Neural Network.
rf.RandomForestClassifier
: Implements the RandomForest classifier.
rf.RandomForestRegressor
: Implements the RandomForest regressor.
svm.LinearSVC
: A Classification Support Vector Machine (SVM).
svm.LinearSVR
: A Regression Support Vector Machine (SVM).
tree.DecisionTreeClassifier
: Implements the sklearn DecisionTreeClassifier.
tree.DecisionTreeRegressor
: Implements the sklearn DecisionTreeRegressor.
xgb.XGBClassifier
: Implements the XGBoost classifier.
xgb.XGBRegressor
: Implements the XGBoost regressor.
hybrid_model.HybridFHEMode
: Simple enum for different modes of execution of HybridModel.
hybrid_model.HybridFHEModel
: Convert a model to a hybrid model.
hybrid_model.HybridFHEModelServer
: Hybrid FHE Model Server.
hybrid_model.LoggerStub
: Placeholder type for a typical logger like the one from loguru.
hybrid_model.RemoteModule
: A wrapper class for the modules to be evaluated remotely with FHE.
lora.BackwardModuleLinear
: Backward module for linear layers.
lora.CustomLinear
: Custom linear module.
lora.ForwardBackwardModule
: Custom autograd function for forward and backward passes.
lora.ForwardModuleLinear
: Forward module for linear layers.
lora.LoraTraining
: LoraTraining module for fine-tuning with LoRA in a hybrid model setting.
numpy_module.NumpyModule
: General interface to transform a torch.nn.Module into a numpy module.
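To illustrate how the deployment classes listed above (fhe_client_server.FHEModelDev, FHEModelClient and FHEModelServer) are meant to interact, here is a hedged sketch of the client/server round trip; the paths, data and model parameters are illustrative assumptions.

```python
# Hedged sketch of the FHE deployment round trip (illustrative data and placeholder paths).
import numpy

from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer
from concrete.ml.sklearn import LogisticRegression

# Fit and compile a small model on illustrative data.
X = numpy.random.uniform(-1, 1, size=(100, 4))
y = (X[:, 0] > 0).astype(numpy.int64)
model = LogisticRegression(n_bits=8)
model.fit(X, y)
model.compile(X)

# Dev side: export the compiled model artifacts.
FHEModelDev(path_dir="deployment", model=model).save()

# Client side: generate keys and encrypt a clear input.
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X[:1])

# Server side: load the circuit and run it on encrypted data.
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client side: decrypt and de-quantize the result.
result = client.deserialize_decrypt_dequantize(encrypted_result)
```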
check_inputs.check_X_y_and_assert
: sklearn.utils.check_X_y with an assert.
check_inputs.check_X_y_and_assert_multi_output
: sklearn.utils.check_X_y with an assert and multi-output handling.
check_inputs.check_array_and_assert
: sklearn.utils.check_array with an assert.
custom_assert.assert_false
: Provide a custom assert to check that the condition is False.
custom_assert.assert_not_reached
: Provide a custom assert to check that a piece of code is never reached.
custom_assert.assert_true
: Provide a custom assert to check that the condition is True.
decoder.object_hook
: Define a custom object hook that enables loading any supported serialized values.
dumpers.dump
: Dump any Concrete ML object in a file.
dumpers.dumps
: Dump any object as a string.
encoder.dump_name_and_value
: Dump the value into a custom dict format.
loaders.load
: Load any Concrete ML object that provides a load_dict method.
loaders.loads
: Load any Concrete ML object that provides a dump_dict method.
utils.all_values_are_floats
: Indicate if all unpacked values are of a supported float dtype.
utils.all_values_are_integers
: Indicate if all unpacked values are of a supported integer dtype.
utils.all_values_are_of_dtype
: Indicate if all unpacked values are of the specified dtype(s).
utils.array_allclose_and_same_shape
: Check if two numpy arrays are equal within given tolerances and have the same shape.
utils.check_compilation_device_is_valid_and_is_cuda
: Check whether the device string for compilation or FHE execution is CUDA or CPU.
utils.check_device_is_valid
: Check whether the device string is valid or raise an exception.
utils.check_dtype_and_cast
: Convert any allowed type into an array and cast it if required.
utils.check_execution_device_is_valid_and_is_cuda
: Check whether the circuit can be executed on the required device.
utils.check_there_is_no_p_error_options_in_configuration
: Check that the user did not set p_error or global_p_error in the configuration.
utils.compute_bits_precision
: Compute the number of bits required to represent x.
utils.generate_proxy_function
: Generate a proxy function for a function accepting only *args type arguments.
utils.get_model_class
: Return the class of the model (instantiated or not), which can be a partial() instance.
utils.get_model_name
: Return the name of the model, which can be a partial() instance.
utils.get_onnx_opset_version
: Return the ONNX opset_version.
utils.is_brevitas_model
: Check if a model is a Brevitas type.
utils.is_classifier_or_partial_classifier
: Indicate if the model class represents a classifier.
utils.is_model_class_in_a_list
: Indicate if a model class, which can be a partial() instance, is an element of a_list.
utils.is_pandas_dataframe
: Indicate if the input container is a Pandas DataFrame.
utils.is_pandas_series
: Indicate if the input container is a Pandas Series.
utils.is_pandas_type
: Indicate if the input container is a Pandas DataFrame or Series.
utils.is_regressor_or_partial_regressor
: Indicate if the model class represents a regressor.
utils.manage_parameters_for_pbs_errors
: Return (p_error, global_p_error) that we want to give to Concrete.
utils.process_rounding_threshold_bits
: Check and process the rounding_threshold_bits parameter.
utils.replace_invalid_arg_name_chars
: Sanitize arg_name, replacing invalid chars by _.
utils.to_tuple
: Make the input a tuple if it is not already the case.
fhe_client_server.check_concrete_versions
: Check that current versions match the ones used in development.
convert.fuse_matmul_bias_to_gemm
: Fuse a sequence of MatMul -> Add into a Gemm node.
convert.get_equivalent_numpy_forward_from_onnx
: Get the numpy equivalent forward of the provided ONNX model.
convert.get_equivalent_numpy_forward_from_onnx_tree
: Get the numpy equivalent forward of the provided ONNX model for tree-based models only.
convert.get_equivalent_numpy_forward_from_torch
: Get the numpy equivalent forward of the provided torch Module.
convert.preprocess_onnx_model
: Preprocess the ONNX model to be used for numpy execution.
onnx_impl_utils.compute_conv_output_dims
: Compute the output shape of a pool or conv operation.
onnx_impl_utils.compute_onnx_pool_padding
: Compute any additional padding needed to compute pooling layers.
onnx_impl_utils.numpy_onnx_pad
: Pad a tensor according to ONNX spec, using an optional custom pad value.
onnx_impl_utils.onnx_avgpool_compute_norm_const
: Compute the average pooling normalization constant.
onnx_impl_utils.rounded_comparison
: Comparison operation using the round_bit_pattern function.
onnx_model_manipulations.clean_graph_after_node_op_type
: Remove the nodes following the first node matching node_op_type from the ONNX graph.
onnx_model_manipulations.clean_graph_at_node_op_type
: Remove the first node matching node_op_type and its following nodes from the ONNX graph.
onnx_model_manipulations.convert_first_gather_to_matmul
: Convert the first Gather node to a matrix multiplication node.
onnx_model_manipulations.keep_following_outputs_discard_others
: Keep the outputs given in outputs_to_keep and remove the others from the model.
onnx_model_manipulations.remove_identity_nodes
: Remove identity nodes from a model.
onnx_model_manipulations.remove_node_types
: Remove unnecessary nodes from the ONNX graph.
onnx_model_manipulations.remove_unused_constant_nodes
: Remove unused Constant nodes in the provided onnx model.
onnx_model_manipulations.simplify_onnx_model
: Simplify an ONNX model by removing unused Constant nodes and Identity nodes.
onnx_utils.check_onnx_model
: Check an ONNX model, handling large models (>2GB) by using external data.
onnx_utils.execute_onnx_with_numpy
: Execute the provided ONNX graph on the given inputs.
onnx_utils.execute_onnx_with_numpy_trees
: Execute the provided ONNX graph on the given inputs for tree-based models only.
onnx_utils.get_attribute
: Get the attribute from an ONNX AttributeProto.
onnx_utils.get_op_type
: Construct the qualified type name of the ONNX operator.
onnx_utils.remove_initializer_from_input
: Remove initializers from model inputs.
ops_impl.cast_to_float
: Cast values to floating points.
ops_impl.numpy_abs
: Compute abs in numpy according to ONNX spec.
ops_impl.numpy_acos
: Compute acos in numpy according to ONNX spec.
ops_impl.numpy_acosh
: Compute acosh in numpy according to ONNX spec.
ops_impl.numpy_add
: Compute add in numpy according to ONNX spec.
ops_impl.numpy_asin
: Compute asin in numpy according to ONNX spec.
ops_impl.numpy_asinh
: Compute asinh in numpy according to ONNX spec.
ops_impl.numpy_atan
: Compute atan in numpy according to ONNX spec.
ops_impl.numpy_atanh
: Compute atanh in numpy according to ONNX spec.
ops_impl.numpy_avgpool
: Compute Average Pooling using Torch.
ops_impl.numpy_batchnorm
: Compute the batch normalization of the input tensor.
ops_impl.numpy_cast
: Execute ONNX cast in Numpy.
ops_impl.numpy_celu
: Compute celu in numpy according to ONNX spec.
ops_impl.numpy_concatenate
: Apply concatenate in numpy according to ONNX spec.
ops_impl.numpy_constant
: Return the constant passed as a kwarg.
ops_impl.numpy_conv
: Compute N-D convolution using Torch.
ops_impl.numpy_cos
: Compute cos in numpy according to ONNX spec.
ops_impl.numpy_cosh
: Compute cosh in numpy according to ONNX spec.
ops_impl.numpy_div
: Compute div in numpy according to ONNX spec.
ops_impl.numpy_elu
: Compute elu in numpy according to ONNX spec.
ops_impl.numpy_equal
: Compute equal in numpy according to ONNX spec.
ops_impl.numpy_equal_float
: Compute equal in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_erf
: Compute erf in numpy according to ONNX spec.
ops_impl.numpy_exp
: Compute exponential in numpy according to ONNX spec.
ops_impl.numpy_flatten
: Flatten a tensor into a 2d array.
ops_impl.numpy_floor
: Compute Floor in numpy according to ONNX spec.
ops_impl.numpy_gemm
: Compute Gemm in numpy according to ONNX spec.
ops_impl.numpy_greater
: Compute greater in numpy according to ONNX spec.
ops_impl.numpy_greater_float
: Compute greater in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_greater_or_equal
: Compute greater or equal in numpy according to ONNX spec.
ops_impl.numpy_greater_or_equal_float
: Compute greater or equal in numpy according to ONNX specs and cast outputs to floats.
ops_impl.numpy_hardsigmoid
: Compute hardsigmoid in numpy according to ONNX spec.
ops_impl.numpy_hardswish
: Compute hardswish in numpy according to ONNX spec.
ops_impl.numpy_identity
: Compute identity in numpy according to ONNX spec.
ops_impl.numpy_leakyrelu
: Compute leakyrelu in numpy according to ONNX spec.
ops_impl.numpy_less
: Compute less in numpy according to ONNX spec.
ops_impl.numpy_less_float
: Compute less in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_less_or_equal
: Compute less or equal in numpy according to ONNX spec.
ops_impl.numpy_less_or_equal_float
: Compute less or equal in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_log
: Compute log in numpy according to ONNX spec.
ops_impl.numpy_matmul
: Compute matmul in numpy according to ONNX spec.
ops_impl.numpy_max
: Compute Max in numpy according to ONNX spec.
ops_impl.numpy_maxpool
: Compute Max Pooling using Torch.
ops_impl.numpy_min
: Compute Min in numpy according to ONNX spec.
ops_impl.numpy_mul
: Compute mul in numpy according to ONNX spec.
ops_impl.numpy_neg
: Compute Negative in numpy according to ONNX spec.
ops_impl.numpy_not
: Compute not in numpy according to ONNX spec.
ops_impl.numpy_not_float
: Compute not in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_or
: Compute or in numpy according to ONNX spec.
ops_impl.numpy_or_float
: Compute or in numpy according to ONNX spec and cast outputs to floats.
ops_impl.numpy_pow
: Compute pow in numpy according to ONNX spec.
ops_impl.numpy_relu
: Compute relu in numpy according to ONNX spec.
ops_impl.numpy_round
: Compute round in numpy according to ONNX spec.
ops_impl.numpy_selu
: Compute selu in numpy according to ONNX spec.
ops_impl.numpy_sigmoid
: Compute sigmoid in numpy according to ONNX spec.
ops_impl.numpy_sign
: Compute Sign in numpy according to ONNX spec.
ops_impl.numpy_sin
: Compute sin in numpy according to ONNX spec.
ops_impl.numpy_sinh
: Compute sinh in numpy according to ONNX spec.
ops_impl.numpy_softmax
: Compute softmax in numpy according to ONNX spec.
ops_impl.numpy_softplus
: Compute softplus in numpy according to ONNX spec.
ops_impl.numpy_sub
: Compute sub in numpy according to ONNX spec.
ops_impl.numpy_tan
: Compute tan in numpy according to ONNX spec.
ops_impl.numpy_tanh
: Compute tanh in numpy according to ONNX spec.
ops_impl.numpy_thresholdedrelu
: Compute thresholdedrelu in numpy according to ONNX spec.
ops_impl.numpy_transpose
: Transpose in numpy according to ONNX spec.
ops_impl.numpy_unfold
: Compute Unfold using Torch.
ops_impl.numpy_where
: Compute the equivalent of numpy.where.
ops_impl.numpy_where_body
: Compute the equivalent of numpy.where.
ops_impl.onnx_func_raw_args
: Decorate a numpy onnx function to flag the raw/non-quantized inputs.
ops_impl.rounded_numpy_equal_for_trees
: Compute rounded equal in numpy according to ONNX spec for tree-based models only.
ops_impl.rounded_numpy_less_for_trees
: Compute rounded less in numpy according to ONNX spec for tree-based models only.
ops_impl.rounded_numpy_less_or_equal_for_trees
: Compute rounded less or equal in numpy according to ONNX spec for tree-based models only.
pandas.load_encrypted_dataframe
: Load a serialized encrypted data-frame.
pandas.merge
: Merge two encrypted data-frames in FHE using Pandas parameters.
utils.check_serialization
: Check that the given object can properly be serialized.
utils.data_calibration_processing
: Reduce the size of the given data-set.
utils.get_random_samples
: Select n_sample random elements from a 2D NumPy array.
utils.get_sklearn_all_models_and_datasets
: Get the pytest parameters to use for testing all models available in Concrete ML.
utils.get_sklearn_linear_models_and_datasets
: Get the pytest parameters to use for testing linear models.
utils.get_sklearn_neighbors_models_and_datasets
: Get the pytest parameters to use for testing neighbor models.
utils.get_sklearn_neural_net_models_and_datasets
: Get the pytest parameters to use for testing neural network models.
utils.get_sklearn_tree_models_and_datasets
: Get the pytest parameters to use for testing tree-based models.
utils.instantiate_model_generic
: Instantiate any Concrete ML model type.
utils.load_torch_model
: Load an object saved with torch.save() from a file or dict.
utils.pandas_dataframe_are_equal
: Determine if both data-frames are identical.
utils.values_are_equal
: Indicate if two values are equal.
post_training.get_n_bits_dict
: Convert the n_bits parameter into a proper dictionary.
quantizers.fill_from_kwargs
: Fill a parameter set structure from kwargs parameters.
p_error_search.compile_and_simulated_fhe_inference
: Get the quantized module of a given model in FHE, simulated or not.
tree_to_numpy.add_transpose_after_last_node
: Add a transpose after the last node.
tree_to_numpy.assert_add_node_and_constant_in_xgboost_regressor_graph
: Assert that an Add node with a specific constant exists in the ONNX graph.
tree_to_numpy.get_onnx_model
: Create ONNX model with Hummingbird convert method.
tree_to_numpy.onnx_fp32_model_to_quantized_model
: Build an FHE-compliant ONNX model using a fitted scikit-learn model.
tree_to_numpy.preprocess_tree_predictions
: Apply post-processing from the graph.
tree_to_numpy.tree_onnx_graph_preprocessing
: Apply pre-processing onto the ONNX graph.
tree_to_numpy.tree_to_numpy
: Convert the tree inference to a numpy function using Hummingbird.
tree_to_numpy.tree_values_preprocessing
: Pre-process tree values.
tree_to_numpy.workaround_squeeze_node_xgboost
: Workaround for a torch issue that does not export the proper axis in the ONNX squeeze node.
compile.build_quantized_module
: Build a quantized module from a Torch or ONNX model.
compile.compile_brevitas_qat_model
: Compile a Brevitas Quantization Aware Training model.
compile.compile_onnx_model
: Compile an ONNX model into an FHE equivalent.
compile.compile_torch_model
: Compile a torch module into an FHE equivalent.
compile.convert_torch_tensor_or_numpy_array_to_numpy_array
: Convert a torch tensor or a numpy array to a numpy array.
compile.has_any_qnn_layers
: Check if a torch model has QNN layers.
hybrid_model.convert_conv1d_to_linear
: Convert all Conv1D layers in a module or a Conv1D layer itself to nn.Linear.
hybrid_model.tuple_to_underscore_str
: Convert a tuple to a string representation.
hybrid_model.underscore_str_to_tuple
: Convert a string representation of a tuple to a tuple.
lora.get_remote_names
: Get names of modules to be executed remotely.
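Finally, a hedged sketch of the torch compilation entry point listed above (compile.compile_torch_model); the toy model, input set and n_bits value are illustrative assumptions.

```python
# Hedged sketch of compile_torch_model on a toy model (illustrative values, assumed API).
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model

# Small float torch model; post-training quantization is applied during compilation.
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
inputset = numpy.random.uniform(-1, 1, size=(100, 4)).astype(numpy.float32)

quantized_module = compile_torch_model(model, inputset, n_bits=6)

# Run inference with FHE simulation (fhe="execute" would use real encryption).
y = quantized_module.forward(inputset[:1], fhe="simulate")
```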