Nearest neighbors


This document introduces the nearest neighbors non-parametric classification models that Concrete ML provides with a scikit-learn interface through the KNeighborsClassifier class.


Ciphertext format compatibility

These models only support Concrete ciphertexts. See the ciphertext format documentation for more details.

Example

from concrete.ml.sklearn import KNeighborsClassifier

concrete_classifier = KNeighborsClassifier(n_bits=2, n_neighbors=3)

Quantization parameters

The KNeighborsClassifier class quantizes the training data-set provided to .fit using the specified number of bits (n_bits). To comply with accumulator size constraints, you must keep this value low. The model's accuracy depends significantly on a well-chosen n_bits value and on the dimensionality of the data.
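As a rough illustration of what this quantization amounts to, the stdlib-only sketch below maps floating-point features onto 2**n_bits uniform levels. This is an assumption-laden simplification for intuition, not Concrete ML's actual quantizer:

```python
# Hedged sketch: uniform quantization of a feature vector to n_bits levels,
# loosely mimicking what .fit does to the training data-set.
def quantize(values, n_bits):
    lo, hi = min(values), max(values)
    # One scale for the whole vector; 2**n_bits - 1 is the top level index.
    scale = (hi - lo) / (2 ** n_bits - 1) if hi > lo else 1.0
    return [round((v - lo) / scale) for v in values]

print(quantize([0.0, 0.5, 1.0], 2))  # 2 bits -> only 4 levels: [0, 2, 3]
```

With only 4 levels at n_bits=2, nearby inputs collapse to the same integer, which is why accuracy is sensitive to this parameter.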

The predict method of the KNeighborsClassifier performs the following steps:

  1. Quantize the test vectors on clear data

  2. Compute the top-k class indices of the closest training set vectors on encrypted data

  3. Vote for the top-k class labels to find the class for each test vector, performed on clear data
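The steps above can be sketched in clear Python. This is a plain illustration with hypothetical names, not Concrete ML's implementation; in the real model only step 2 runs on encrypted data:

```python
from collections import Counter

def knn_predict(train_q, labels, test_q, k=3):
    # Step 2: squared distances to every training vector, then top-k indices
    # (in Concrete ML this part is evaluated homomorphically).
    dists = [sum((a - b) ** 2 for a, b in zip(vec, test_q)) for vec in train_q]
    top_k = sorted(range(len(dists)), key=dists.__getitem__)[:k]
    # Step 3: majority vote over the top-k labels, performed on clear data.
    return Counter(labels[i] for i in top_k).most_common(1)[0][0]

train_q = [(0, 0), (0, 1), (3, 3), (3, 2)]   # already-quantized training set
labels = ["a", "a", "b", "b"]
print(knn_predict(train_q, labels, (0, 0), k=3))  # -> "a"
```

Step 1 (quantizing the test vector) is assumed to have happened before the call, mirroring how the real predict method quantizes inputs on clear data first.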

Inference time considerations

The FHE inference latency of this model is heavily influenced by n_bits and the dimensionality of the data. The data-set size also has a linear impact on complexity, and the number of nearest neighbors (n_neighbors) affects performance as well.

The KNN computation executes in FHE in O(N·log²(k)) steps, where N is the training data-set size and k is n_neighbors. Each step requires several PBS operations, whose runtime is affected by the factors listed above. These factors determine the precision needed to represent the distances between test vectors and training data-set vectors; the PBS input precision required by the circuit is related to the precision of these distance values.
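For a back-of-envelope feel of how the step count grows, the snippet below plugs example values (assumed for illustration, not benchmarks) into the O(N·log²(k)) bound:

```python
import math

# Hypothetical sizes: 1000 training vectors, 3 nearest neighbors.
N, k = 1000, 3
steps = N * math.log2(k) ** 2  # O(N * log2(k)^2) top-k network steps
print(round(steps))            # about 2512 steps, each costing several PBS
```

Doubling N doubles the step count, while increasing k grows it only poly-logarithmically, which matches the linear data-set-size impact noted above.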
