Inference in the cloud
Concrete ML models and data-frames can be easily deployed in a client/server setting, enabling the creation of privacy-preserving services in the cloud.
As seen in the concepts section, once compiled to FHE, a Concrete ML model or data-frame generates machine code that executes prediction, training, or pre-processing on encrypted data. Secret encryption keys are needed so that the user can securely encrypt their data and decrypt the execution result. An evaluation key is also needed for the server to securely process the user's encrypted data.
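For example, compiling a built-in model follows the usual scikit-learn workflow. The snippet below is a minimal sketch using a logistic regression on synthetic data; the dataset and model choice are illustrative assumptions, not part of the protocol:

```python
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression

# Illustrative training data; any small dataset works for this sketch.
X_train, y_train = make_classification(n_samples=100, n_features=4, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)

# Compilation builds the FHE circuit and fixes its cryptographic parameters.
model.compile(X_train)
```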
Keys are generated by the user once for each service they use, based on the model the service provides and its cryptographic parameters.
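On the client side, key generation can be done with the FHEModelClient helper. This sketch assumes the server's deployment files were already saved to a local `./deployment` directory; the paths are hypothetical:

```python
from concrete.ml.deployment import FHEModelClient

# The client specs are read from the deployment files; keys are stored in key_dir.
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")

# Generates the secret key (kept on the client) and the evaluation key
# (later sent to the server).
client.generate_private_and_evaluation_keys()
```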
The overall communications protocol to enable cloud deployment of machine learning services can be summarized in the following diagram:
The steps shown in the diagram are:
1. The model developer deploys the compiled machine learning model to the server. This model includes the cryptographic parameters. The server is now ready to provide private inference. Cryptographic parameters and compiled programs for data-frames are included directly in Concrete ML.
2. The client requests the cryptographic parameters (also called "client specs"). Once it receives them from the server, the secret and evaluation keys are generated.
3. The client sends the evaluation key to the server. The server is now ready to accept requests from this client. The client sends their encrypted data. Serialized data-frames include client evaluation keys.
4. The server uses the evaluation key to securely run prediction, training, and pre-processing on the user's data and sends back the encrypted result.
5. The client now decrypts the result and can send back new requests; the full round trip is sketched below.
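The following sketch ties these five steps together with Concrete ML's deployment API (FHEModelDev, FHEModelClient, FHEModelServer). The paths, model, and input sample are illustrative assumptions; in a real service, steps 2 to 5 happen over the network rather than in one script:

```python
import numpy
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression
from concrete.ml.deployment import FHEModelDev, FHEModelClient, FHEModelServer

# Step 1: the developer compiles the model and saves the deployment artifacts.
X_train, y_train = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
model.compile(X_train)
FHEModelDev(path_dir="./deployment", model=model).save()

# Step 2: the client reads the client specs and generates its keys.
client = FHEModelClient(path_dir="./deployment", key_dir="./keys")
client.generate_private_and_evaluation_keys()
serialized_evaluation_keys = client.get_serialized_evaluation_keys()

# Step 3: the client encrypts a sample and sends it, with the evaluation key.
x = numpy.array([[0.5, -1.2, 3.1, 0.0]])
encrypted_input = client.quantize_encrypt_serialize(x)

# Step 4: the server runs the prediction on the encrypted data.
server = FHEModelServer(path_dir="./deployment")
server.load()
encrypted_result = server.run(encrypted_input, serialized_evaluation_keys)

# Step 5: the client decrypts and de-quantizes the result.
result = client.deserialize_decrypt_dequantize(encrypted_result)
```

Here the three roles share one process for readability; in production, the developer, client, and server each hold only their own directory, and the secret key never leaves the client.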
For more information on how to implement this basic secure inference protocol, refer to the Production Deployment section and to the client/server example. For information on training on encrypted data, see the corresponding section.