concrete.ml.sklearn.qnn
Scikit-learn interface for fully-connected quantized neural networks.
QNN_AUTO_KWARGS
OPTIONAL_MODULE_PARAMS
ATTRIBUTE_PREFIXES
NeuralNetRegressor
A Fully-Connected Neural Network regressor with FHE.
This class wraps a quantized neural network, implemented using Torch tools, as a scikit-learn estimator. The skorch package is used to handle training and provide scikit-learn compatibility, while this class adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
Inputs and targets that are float64 will be cast to float32 before training, as Torch does not handle float64 types properly. This should not have a significant impact on the model's performance. An error is raised if these values are not floating points.
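The cast described above can be sketched with plain NumPy (this illustrates the behavior, not Concrete ML's internal code):

```python
import numpy as np

# NumPy arrays are float64 by default; Torch training inside Concrete ML
# runs in float32, so inputs and targets are cast down before fitting.
X = np.random.rand(100, 10)   # dtype is float64
X32 = X.astype(np.float32)    # the cast applied before training

# The float64 -> float32 precision loss is tiny relative to typical
# feature scales, which is why model performance is not noticeably affected.
max_err = np.abs(X - X32.astype(np.float64)).max()
print(max_err < 1e-6)  # rounding error is negligible
```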
__init__
property base_module
Get the Torch module.
Returns:
SparseQuantNeuralNetwork
: The fitted underlying module.
property fhe_circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[UniformQuantizer]
: The input quantizers.
property is_compiled
Indicate if the model is compiled.
Returns:
bool
: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool
: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
property output_quantizers
Get the output quantizers.
Returns:
List[UniformQuantizer]
: The output quantizers.
dump_dict
fit
fit_benchmark
load_dict
predict
predict_proba
NeuralNetClassifier
A Fully-Connected Neural Network classifier with FHE.
This class wraps a quantized neural network, implemented using Torch tools, as a scikit-learn estimator. The skorch package is used to handle training and provide scikit-learn compatibility, while this class adds quantization and compilation functionality. The neural network implemented by this class is a multi-layer, fully-connected network trained with Quantization Aware Training (QAT).
Inputs that are float64 will be cast to float32 before training, as Torch does not handle float64 types properly. This should not have a significant impact on the model's performance. If the targets are integers of a lower bit-width, they are safely cast to int64; otherwise, an error is raised.
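The target cast described above can be sketched with plain NumPy (an illustration of the behavior, not Concrete ML's internal code):

```python
import numpy as np

# Class labels stored in a lower bit-width integer type (e.g. int8)
# are widened to int64, the dtype Torch expects for classification targets.
y = np.array([0, 1, 2, 1], dtype=np.int8)
y64 = y.astype(np.int64)

# Widening an integer type cannot lose values, hence the cast is "safe".
print(np.array_equal(y, y64))  # True
```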
__init__
property base_module
Get the Torch module.
Returns:
SparseQuantNeuralNetwork
: The fitted underlying module.
property classes_
property fhe_circuit
property history
property input_quantizers
Get the input quantizers.
Returns:
List[UniformQuantizer]
: The input quantizers.
property is_compiled
Indicate if the model is compiled.
Returns:
bool
: Whether the model is compiled.
property is_fitted
Indicate if the model is fitted.
Returns:
bool
: Whether the model is fitted.
property onnx_model
Get the ONNX model.
Is None if the model is not fitted.
Returns:
onnx.ModelProto
: The ONNX model.
property output_quantizers
Get the output quantizers.
Returns:
List[UniformQuantizer]
: The output quantizers.
dump_dict
fit
fit_benchmark
load_dict
predict
predict_proba