Concrete ML
0.5
module concrete.ml.quantization.quantized_module

QuantizedModule API.

Global Variables

  • DEFAULT_P_ERROR_PBS


class QuantizedModule

Inference for a quantized model.
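The class wires together input quantization, an integer-only forward pass, and output de-quantization. As an illustration of that flow, here is a minimal numpy mock; `MockQuantizedModule`, its single `scale`, and its `zero_point` are invented for this sketch and are not the library's actual attributes, which live in the per-layer quantizers:

```python
import numpy as np

class MockQuantizedModule:
    """Toy stand-in mimicking QuantizedModule's quantize/forward/dequantize flow."""

    def __init__(self, scale: float, zero_point: int):
        self.scale = scale            # assumed learned quantization scale
        self.zero_point = zero_point  # assumed learned zero point

    def quantize_input(self, values: np.ndarray) -> np.ndarray:
        # Uniform quantization: map fp32 values onto unsigned integers
        q = np.round(values / self.scale) + self.zero_point
        return np.clip(q, 0, 255).astype(np.uint8)

    def forward(self, qvalues: np.ndarray) -> np.ndarray:
        # Integer-only computation (identity here, for illustration)
        return qvalues

    def dequantize_output(self, qvalues: np.ndarray) -> np.ndarray:
        # Invert the quantization of the last layer
        return self.scale * (qvalues.astype(np.float32) - self.zero_point)

module = MockQuantizedModule(scale=0.05, zero_point=128)
x = np.array([-1.0, 0.0, 1.0], dtype=np.float32)
q_x = module.quantize_input(x)                       # integers, FHE-friendly
y = module.dequantize_output(module.forward(q_x))    # back to floating point
```

The real class additionally tracks an ONNX model, per-layer `QuantizedOp`s, and a compiled FHE circuit, but the data path follows this shape.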

method __init__

__init__(
    ordered_module_input_names: Iterable[str] = None,
    ordered_module_output_names: Iterable[str] = None,
    quant_layers_dict: Dict[str, Tuple[Tuple[str, ...], QuantizedOp]] = None
)

property fhe_circuit

Get the FHE circuit.

Returns:

  • Circuit: the FHE circuit


property is_compiled

Return the compiled status of the module.

Returns:

  • bool: the compiled status of the module.


property onnx_model

Get the ONNX model.


Returns:

  • _onnx_model (onnx.ModelProto): the ONNX model


property post_processing_params

Get the post-processing parameters.

Returns:

  • Dict[str, Any]: the post-processing parameters


method compile

compile(
    q_inputs: Union[Tuple[ndarray, ...], ndarray],
    configuration: Optional[Configuration] = None,
    compilation_artifacts: Optional[DebugArtifacts] = None,
    show_mlir: bool = False,
    use_virtual_lib: bool = False,
    p_error: Optional[float] = 6.3342483999973e-05
) → Circuit

Compile the forward function of the module.

Args:

  • q_inputs (Union[Tuple[numpy.ndarray, ...], numpy.ndarray]): Needed for tracing and building the boundaries.

  • configuration (Optional[Configuration]): Configuration object to use during compilation.

  • compilation_artifacts (Optional[DebugArtifacts]): Artifacts object to fill during compilation.

  • show_mlir (bool): if set, the MLIR produced by the converter and sent to the compiler backend is printed on the screen, e.g., for debugging or demo purposes. Defaults to False.

  • use_virtual_lib (bool): set to True to use the so-called Virtual Lib, which simulates FHE computation. Defaults to False.

  • p_error (Optional[float]): probability of error of a single PBS (programmable bootstrapping).

Returns:

  • Circuit: the compiled Circuit.


method dequantize_output

dequantize_output(qvalues: ndarray) → ndarray

Take the quantized output of the last layer and de-quantize it using that layer's dequantization function.

Args:

  • qvalues (numpy.ndarray): Quantized values of the last layer.

Returns:

  • numpy.ndarray: Dequantized values of the last layer.
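The de-quantization step inverts the last layer's uniform quantization. A minimal numpy sketch of that inversion follows; `scale` and `zero_point` stand in for the last layer's learned parameters and are assumptions of this example, not the library's actual attribute names:

```python
import numpy as np

def dequantize(qvalues: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map quantized integers back to floating point (illustrative only)."""
    return scale * (qvalues.astype(np.float64) - zero_point)

vals = dequantize(np.array([0, 1, 4], dtype=np.uint32), scale=0.25, zero_point=0)
```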


method forward

forward(*qvalues: ndarray) → ndarray

Forward pass with numpy function only.

Args:

  • *qvalues (numpy.ndarray): numpy.ndarray containing the quantized values.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method forward_and_dequant

forward_and_dequant(*q_x: ndarray) → ndarray

Forward pass with numpy function only plus dequantization.

Args:

  • *q_x (numpy.ndarray): numpy.ndarray containing the quantized input values. Requires the input dtype to be uint8.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method post_processing

post_processing(qvalues: ndarray) → ndarray

Post-processing of the quantized output.

Args:

  • qvalues (numpy.ndarray): numpy.ndarray containing the quantized output values.

Returns:

  • (numpy.ndarray): Predictions of the quantized model


method quantize_input

quantize_input(*values: ndarray) → Union[ndarray, Tuple[ndarray, ...]]

Take the fp32 inputs and quantize them using the learned quantization parameters.

Args:

  • *values (numpy.ndarray): Floating point values.

Returns:

  • Union[numpy.ndarray, Tuple[numpy.ndarray, ...]]: Quantized (numpy.uint32) values.
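The underlying uniform quantization can be sketched in numpy as follows; `scale` and `zero_point` stand in for the learned parameters held by the module's input quantizers and are assumptions of this example:

```python
import numpy as np

def quantize(values: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Uniformly quantize fp32 values to unsigned integers (illustrative only)."""
    q = np.round(values / scale) + zero_point
    # The docstring above states the returned dtype is numpy.uint32
    return np.clip(q, 0, np.iinfo(np.uint32).max).astype(np.uint32)

q = quantize(np.array([0.0, 0.25, 1.0], dtype=np.float32), scale=0.25, zero_point=0)
```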


method set_inputs_quantization_parameters

set_inputs_quantization_parameters(*input_q_params: UniformQuantizer)

Set the quantization parameters for the module's inputs.

Args:

  • *input_q_params (UniformQuantizer): The quantizer(s) for the module's inputs.