To compute on encrypted data, you first need to define the function you want to compute, then compile it into a Concrete `Circuit`, which you can use to perform homomorphic evaluation.
Here is the full example that we will walk through:
Everything you need to perform homomorphic evaluation is included in a single module:
In this example, we compile a simple addition function:
To compile the function, you need to create a `Compiler` by specifying the function to compile and the encryption status of its inputs:
An inputset is a collection representing the typical inputs to the function. It is used to determine the bit widths and shapes of the variables within the function.
It should be an iterable yielding tuples of the same length as the number of arguments of the function being compiled:
All inputs in the inputset will be evaluated in the graph, which takes time. If you're experiencing long compilation times, consider providing a smaller inputset.
You can use the `compile` method of a `Compiler` class with an inputset to perform the compilation and get the resulting circuit back:
You can use the `encrypt_run_decrypt` method of a `Circuit` class to perform homomorphic evaluation:

`circuit.encrypt_run_decrypt(*args)` is just a convenient way to do everything at once. It is implemented as `circuit.decrypt(circuit.run(circuit.encrypt(*args)))`.
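The composition can be sketched with a toy stand-in. The `ToyCircuit` class and its XOR-mask "encryption" are invented purely for illustration and have nothing to do with real FHE; the point is only how `encrypt_run_decrypt` composes the three steps:

```python
# Toy stand-in for a compiled circuit of f(x, y) = x + y.
# "Encryption" is a XOR mask; this only illustrates how
# encrypt_run_decrypt composes encrypt, run, and decrypt.
class ToyCircuit:
    MASK = 0b1010_1010

    def encrypt(self, *args):
        return tuple(a ^ self.MASK for a in args)

    def run(self, ciphertexts):
        # a real circuit would evaluate homomorphically here;
        # the toy just decodes, adds, and re-encodes
        x, y = (c ^ self.MASK for c in ciphertexts)
        return (x + y) ^ self.MASK

    def decrypt(self, ciphertext):
        return ciphertext ^ self.MASK

    def encrypt_run_decrypt(self, *args):
        # exactly the composition described above
        return self.decrypt(self.run(self.encrypt(*args)))

circuit = ToyCircuit()
assert circuit.encrypt_run_decrypt(2, 6) == 8
```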
If you are trying to compile a regular function, you can use the decorator interface instead of the explicit `Compiler` interface to simplify your code:

Think of this decorator as a way to add the `compile` method to the function object without changing its name elsewhere.
You can convert your compiled circuit into its textual representation by converting it to a string:
If you just want to see the output on your terminal, you can directly print it as well:
Formatting is just for debugging purposes. It's not possible to recreate the circuit from its textual representation. See How to Deploy if that's your goal.
One of the most common operations in Concrete is the Table Lookup (TLU). All operations except addition, subtraction, multiplication by non-encrypted values, tensor manipulation operations, and a few operations built from those primitives (e.g., matmul, conv) are converted to Table Lookups under the hood:
is exactly the same as
Table Lookups are very flexible. They allow Concrete to support many operations, but they are expensive. The exact cost depends on many variables (hardware used, error probability, etc.), but they are always much more expensive than other operations. You should try to avoid them as much as possible. It's not always possible to avoid them completely, but you might be able to reduce the number of TLUs or replace some of them with other primitive operations.
Concrete automatically parallelizes TLUs if they are applied to tensors.
One of the most common operations in Concrete is the Table Lookup (TLU). TLUs are performed with an FHE operation called Programmable Bootstrapping (PBS). PBSs have a certain probability of error which, when triggered, results in inaccurate results.
Let's say you have the table:
And you perform a Table Lookup using `4`. The result you should get is `16`, but because of the possibility of error, you could get any other value in the table.
The probability of this error can be configured through the `p_error` and `global_p_error` configuration options. The difference between these two options is that `p_error` applies to individual TLUs, while `global_p_error` applies to the whole circuit.
If you set `p_error` to `0.01`, for example, every TLU in the circuit will have a 99% chance of being exact and a 1% probability of error. If you have a single TLU in the circuit, `global_p_error` would be 1% as well. But if you have 2 TLUs, for example, `global_p_error` would be almost 2% (`1 - (0.99 * 0.99)`).
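The arithmetic above can be checked directly. This is a plain-Python sketch, assuming TLU errors are independent:

```python
# Relationship between the per-TLU p_error and the circuit-wide
# global_p_error, assuming independent TLU errors: the circuit is exact
# only if every TLU is exact.
def circuit_error_probability(p_error: float, num_tlus: int) -> float:
    return 1 - (1 - p_error) ** num_tlus

# a single TLU: the circuit-wide probability equals p_error
assert abs(circuit_error_probability(0.01, 1) - 0.01) < 1e-12

# two TLUs: "almost 2%", exactly 1 - (0.99 * 0.99) = 0.0199
assert abs(circuit_error_probability(0.01, 2) - 0.0199) < 1e-12
```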
However, if you set `global_p_error` to `0.01`, the whole circuit will have a 1% probability of error, no matter how many Table Lookups are included.
If you set both of them, both will be satisfied. Essentially, the stricter one will be used.
By default, both `p_error` and `global_p_error` are set to `None`, which results in a `global_p_error` of `1 / 100_000` being used.
Feel free to play with these configuration options to pick the one best suited for your needs! See How to Configure to learn how you can set a custom `p_error` and/or `global_p_error`.
Configuring either of these variables impacts computation time (compilation, key generation, circuit execution) and space requirements (size of the keys on disk and in memory). Lower error probabilities result in longer computation times and larger space requirements.
Here are the operations you can use inside the function you are compiling:
Some of these operations are not supported between two encrypted values. A detailed error will be raised if you try to do something that is not supported.
- `ndarray` methods.
- `ndarray` properties.

Some Python control flow statements are not supported. You cannot have an `if` statement or a `while` statement whose condition depends on an encrypted value. However, such statements are supported with constant values (e.g., `for i in range(SOME_CONSTANT)`, `if os.environ.get("SOME_FEATURE") == "ON":`).
You cannot have floating-point inputs or floating-point outputs. You can have floating-point intermediate values as long as they can be converted to an integer Table Lookup (e.g., `(60 * np.sin(x)).astype(np.int64)`).
There is a limit on the bit width of encrypted values. We are constantly working on increasing this bit width. If you go above the limit, you will get an error.
In this tutorial, we will review the ways to perform direct table lookups in Concrete.
Concrete provides a `LookupTable` class to create your own tables and apply them in your circuits.

`LookupTable`s can have any number of elements; let's call it N. As long as the lookup variable is in the range [-N, N), the Table Lookup is valid.
If you go outside this range, you will get the following error:
You can create the lookup table using a list of integers and apply it using indexing:
When you apply the table lookup to a tensor, you apply the scalar table lookup to each element of the tensor:
`LookupTable` mimics array indexing in Python, which means that if the lookup variable is negative, the table is looked up from the back:
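Since the class mimics Python indexing, the semantics can be sketched with a plain Python list (this is a conceptual model, not Concrete's implementation):

```python
# Plain-Python sketch of LookupTable indexing semantics: for a table of
# N elements, any lookup index in [-N, N) is valid, and negative indices
# wrap around from the back, exactly like Python list indexing.
table = [2, -1, 3, 0]  # N = 4, so the valid lookup range is [-4, 4)

assert table[2] == 3    # regular lookup
assert table[-1] == 0   # looked up from the back
assert table[-4] == 2   # -N maps to the first element

# going outside [-N, N) raises an error, mirroring Concrete's behavior
try:
    table[4]
    raise AssertionError("expected an IndexError")
except IndexError:
    pass
```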
In case you want to apply a different lookup table to each element of a tensor, you can have a `LookupTable` of `LookupTable`s:
In this example, we applied a `squared` table to the first column and a `cubed` table to the second one.
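The same per-column behavior can be sketched with plain NumPy. The `squared` and `cubed` tables here mirror the example above, but the indexing code is just NumPy, not Concrete:

```python
import numpy as np

# Plain-NumPy sketch of a "table of tables": a different lookup table is
# applied to each column of a 2-column tensor (squared for the first
# column, cubed for the second).
squared = np.array([i ** 2 for i in range(4)])  # table for 2-bit inputs
cubed = np.array([i ** 3 for i in range(4)])

x = np.array([[2, 3],
              [1, 2]])

# index each column through its own table, then stack the results back
result = np.stack([squared[x[:, 0]], cubed[x[:, 1]]], axis=1)
assert result.tolist() == [[4, 27], [1, 8]]
```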
Concrete tries to fuse some operations into table lookups automatically, so you don't need to create the lookup tables manually:
All lookup tables need to map integers to integers. So, without `.astype(np.int64)`, Concrete will not be able to fuse.
The function is first traced into:
Concrete then fuses appropriate nodes:
Fusing makes the code more readable and easier to modify, so try to utilize it over manual `LookupTable`s as much as possible.
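What fusing amounts to can be sketched in plain NumPy: a chain of float operations whose only variable input is a small integer can be precomputed into a single integer-to-integer table. This is a conceptual model of the transformation, not Concrete's actual implementation:

```python
import numpy as np

# A chain of float operations with a single small-integer input...
def f(x):
    return (60 * np.sin(x)).astype(np.int64)  # float intermediates, int output

# ...can be precomputed into one int -> int table over the input's range,
# which is what fusing conceptually produces.
bit_width = 3
table = np.array([f(i) for i in range(2 ** bit_width)])

# evaluating via the table gives exactly the same results as f itself
xs = np.array([0, 1, 5])
assert np.array_equal(table[xs], f(xs))
```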
Direct circuits are still experimental. It is very easy to shoot yourself in the foot (e.g., no overflow checks, no type coercion) while using direct circuits, so utilize them with care.
For some applications, the data types of inputs, intermediate values, and outputs are known (e.g., for manipulating bytes, you would want to use uint8). Using inputsets to determine bounds in these cases is not necessary, and can even be error-prone. Therefore, another interface for defining such circuits is introduced:
There are a few differences between direct circuits and traditional circuits:
Remember that the resulting dtype for each operation will be determined by its inputs. This can lead to some unexpected results if you're not careful (e.g., if you do `-x` where `x: fhe.uint8`, you'll fail to get the negative value, as the result will be `fhe.uint8` as well).
Use fhe types in `.astype(...)` calls (e.g., `np.sqrt(x).astype(fhe.uint4)`). There is no inputset evaluation, so the bit width of the output cannot be determined.
Specify the resulting data type in extensions (e.g., `fhe.univariate(function, outputs=fhe.uint4)(x)`), for the same reason as above.
Be careful with overflows. With inputset evaluation, you'll get bigger bit widths but no overflows. With direct definition, you must ensure there aren't any overflows!
Let's review a more complicated example to see how direct circuits behave:
This prints:
Here is the breakdown of assigned data types:
As you can see, `%8` is the subtraction of two unsigned values, and it's unsigned as well. In an overflow condition where `c > d`, it results in undefined behavior.
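The hazard can be sketched with NumPy's unsigned arithmetic, which wraps modulo 2^8. In Concrete the result is undefined rather than guaranteed to wrap; the sketch only shows why an unsigned subtraction can never produce the negative value you might expect:

```python
import numpy as np

# Subtracting two unsigned 8-bit values where the subtrahend is larger:
# the result type is still uint8, so no negative value can appear.
d = np.array([10], dtype=np.uint8)
c = np.array([25], dtype=np.uint8)

result = d - c
assert result.dtype == np.uint8
assert result.tolist() == [241]  # 10 - 25 wraps to 256 - 15, not -15
```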
Concrete supports native Python and NumPy operations as much as possible, but not everything is available in Python or NumPy. So, we provide some extensions ourselves to improve your experience.
Allows you to wrap any univariate function into a single table lookup:
The wrapped function:
shouldn't have any side effects (e.g., no modification of global state)
should be deterministic (e.g., no random numbers)
should have the same output shape as its input (i.e., `output.shape` should be the same as `input.shape`)
each output element should correspond to a single input element (e.g., `output[0]` should only depend on `input[0]`)
If any of these constraints are violated, the outcome is undefined.
Only 2D convolutions without padding and with a single group are supported for the time being.
Only 2D maxpooling without padding up to 15-bits is supported for the time being.
Allows you to create encrypted arrays:
Only scalars can be used to create arrays for the time being.
Allows you to create an encrypted scalar zero:
Allows you to create an encrypted tensor of zeros:
Allows you to create an encrypted scalar one:
Allows you to create an encrypted tensor of ones:
When you have big circuits, keeping track of which node corresponds to which part of your code becomes difficult. A tagging system can simplify such situations:
When you compile `f` with an inputset of `range(10)`, you get the following graph:
If you get an error, you'll see exactly where the error occurred (e.g., which layer of the neural network, if you tag layers).
In the future, we plan to use tags for additional features (e.g., to measure performance of tagged regions), so it's a good idea to start utilizing them for big circuits.
During development, the speed of homomorphic execution is a big blocker for fast prototyping. You could call the function you're trying to compile directly, of course, but it won't be exactly the same as FHE execution, which has a certain probability of error (see Exactness).
Considering this, simulation is introduced:
After the simulation runs, it prints this:
Currently, simulation is better than directly calling from Python, but it's not exactly the same as FHE execution. This is because it is implemented in Python.

Imagine you have an identity table lookup. It might be omitted from the generated FHE code by the Compiler, but it will still be present in simulation, as such optimizations are not done in Python. This will result in a bigger error in simulation.
Some operations also have multiple table lookups within them, and those cannot be simulated unless their actual implementations are ported to Python. In the future, simulation functionality will be provided by the Compiler, so all of these issues will be addressed. Until then, keep these in mind.
This is an interactive tutorial written as a Jupyter Notebook.
Rounded table lookups are not yet compilable. API is stable and will not change, so it's documented, but you might not be able to run the code samples provided in this document.
Table lookups have a strict constraint on the number of bits they support. This can be limiting, especially if you don't need exact precision.
To overcome this, a rounded table lookup operation is introduced. It's a way to extract the most significant bits of a large integer and then apply the table lookup to those bits.
Imagine you have an 8-bit value, but you want to have a 5-bit table lookup. You can call fhe.round_bit_pattern(input, lsbs_to_remove=3)
and use the value you get in the table lookup.
In Python, evaluation will work like this:
During homomorphic execution, it'll be converted like this:
A modified table lookup would be applied to the resulting 5 bits.
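One plausible plain-Python model of the rounding is below. The exact tie-breaking rule is an assumption on our part; the essential behavior is that the value is snapped to the nearest multiple of `2**lsbs_to_remove`, so only the most significant bits remain meaningful:

```python
def round_bit_pattern(x: int, lsbs_to_remove: int) -> int:
    # Round x to the nearest multiple of 2**lsbs_to_remove by adding
    # half of that step before truncating the low bits (ties round up).
    step = 1 << lsbs_to_remove
    return ((x + step // 2) >> lsbs_to_remove) << lsbs_to_remove

# An 8-bit value whose 3 least significant bits are rounded away: after
# shifting, the result fits a 5-bit table lookup.
assert round_bit_pattern(0b1010_0011, lsbs_to_remove=3) == 0b1010_0000
assert round_bit_pattern(0b1010_0100, lsbs_to_remove=3) == 0b1010_1000
assert round_bit_pattern(0b1010_0111, lsbs_to_remove=3) == 0b1010_1000
```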
If you want to apply ReLU to an 18-bit value, first look at the original ReLU:
The input range is [-100_000, 100_000), which means 18-bit table lookups are required, but they are not yet supported. You can apply a rounding operation to the input before passing it to the `ReLU` function:
We've removed the 10 least significant bits of the input and then applied the ReLU function to this value to get:
This is close enough to original ReLU for some cases. If your application is more flexible, you could remove more bits, let's say 12, to get:
This is very useful, but in some cases you don't know how many bits your input contains, so it's not reliable to specify `lsbs_to_remove` manually. For this reason, the `AutoRounder` class is introduced.
`AutoRounder`s allow you to set how many of the most significant bits to keep, but they need to be adjusted using an inputset to determine how many of the least significant bits to remove. This can be done manually using `fhe.AutoRounder.adjust(function, inputset)`, or by setting `auto_adjust_rounders` to `True` during compilation.
In this case, `6` of the most significant bits are kept to get:
You can adjust `target_msbs` depending on your requirements. If you set it to `4`, you get:
`AutoRounder`s should be defined outside the function being compiled. They are used to store the result of the adjustment process, so they shouldn't be created each time the function is called.
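What the adjustment computes can be sketched as follows. This is a simplified model (it ignores signed values and uses only the inputset maximum), but it reproduces the numbers from the 18-bit ReLU example above:

```python
# Sketch of AutoRounder adjustment: given the largest value observed in
# the inputset and the number of most significant bits to keep
# (target_msbs), derive how many least significant bits to remove.
def adjust(inputset_max: int, target_msbs: int) -> int:
    bit_width = inputset_max.bit_length()
    return max(bit_width - target_msbs, 0)

# an 18-bit input with target_msbs=6 removes 12 bits, as in the example
assert adjust(inputset_max=2**18 - 1, target_msbs=6) == 12

# with target_msbs=4, two more bits are removed
assert adjust(inputset_max=2**18 - 1, target_msbs=4) == 14

# inputs already small enough need no rounding at all
assert adjust(inputset_max=31, target_msbs=6) == 0
```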
Concrete partly supports floating points:
They cannot be inputs
They cannot be outputs
They can be intermediate values under certain constraints
The Concrete Compiler, which is used for compiling the circuit, doesn't support floating points at all. However, it supports table lookups, which take an integer and map it to another integer. It does not care how the lookup table is calculated. The constraints of this operation are that there should be a single integer input, and it should result in a single integer output.
As long as your floating point operations comply with those constraints, Concrete automatically converts them to a table lookup operation:
In the example above, `a`, `b`, and `c` are floating-point intermediates. They are used to calculate `d`, which is an integer whose value depends on `x`, another integer. Concrete detects this and fuses all of these operations into a single table lookup from `x` to `d`.
This approach works for a variety of use cases, but it comes up short for some:
This results in:
The reason for the error is that `d` no longer depends solely on `x`; it depends on `y` as well. Concrete cannot fuse these operations, so it raises an exception instead.
Allows you to perform a convolution operation:
Allows you to perform a maxpool operation:
In this section, you will learn how to debug the compilation process easily and get help in case you cannot resolve your issue.
Concrete has an artifact system to simplify the process of debugging issues.
In case of compilation failures, artifacts are exported automatically to the `.artifacts` directory under the working directory. Let's intentionally create a compilation failure to show what is exported.
This function fails to compile because Concrete does not support floating-point outputs. When you try to compile it, an exception will be raised and the artifacts will be exported automatically. If you go to the `.artifacts` directory under the working directory, you'll see the following files:
This file contains information about your setup (i.e., your operating system and Python version).
This file contains information about Python packages and their versions installed on your system.
This file contains information about the function you tried to compile.
This file contains information about the encryption status of the parameters of the function you tried to compile.
This file contains the textual representation of the initial computation graph right after tracing.
This file contains the textual representation of the final computation graph right before MLIR conversion.
This file contains information about the error you received.
Manual exports are mostly used for visualization. They can be very useful for demonstrations. Here is how to perform one:
If you go to the `/tmp/custom/export/path` directory, you'll see the following files:
This file contains the textual representation of the initial computation graph right after tracing.
This file contains the textual representation of the intermediate computation graph after fusing.
This file contains the textual representation of the final computation graph right before MLIR conversion.
This file contains information about the MLIR of the function you compiled using the inputset you provided.
This file contains information about the client parameters chosen by Concrete.
You can seek help with your issue by asking a question directly in the community forum.
If you cannot find a solution in the community forum, or you found a bug in the library, you could create an issue in our GitHub repository.
In case of a bug, try to:
minimize randomness;
minimize your function as much as possible while keeping the bug - this will help to fix the bug faster;
include your inputset in the issue;
include reproduction steps in the issue;
include debug artifacts in the issue.
In case of a feature request, try to:
give a minimal example of the desired behavior;
explain your use case.
There are two ways to contribute to Concrete. You can:
Open issues to report bugs and typos or suggest ideas;
Request to become an official contributor by emailing hello@zama.ai. Only approved contributors can send pull requests (PRs), so get in touch before you do.
Concrete generates keys for you implicitly when they are needed and haven't already been generated. This is useful for development, but it's not flexible (or secure!) for production. An explicit key management API is introduced for such cases, to easily generate and reuse keys.
Let's start by defining a circuit:
Circuits have a property called `keys` of type `fhe.Keys`, which has several utility functions dedicated to key management!
To explicitly generate keys for a circuit, you can use:
Generated keys are stored in memory upon generation, unencrypted.
And it's possible to set a custom seed for reproducibility:
Do not specify the seed manually in a production environment!
To serialize keys, say to send them across the network:

Keys are not encrypted when serialized! Please make sure you keep them in a safe environment, or encrypt them manually after serialization.
To deserialize the keys back, after receiving serialized keys:
Once you have a valid `fhe.Keys` object, you can directly assign it to the circuit:
If the assigned keys were generated for a different circuit, an exception will be raised.
You can also use the filesystem to store the keys directly, without needing to deal with serialization and file management yourself:
Keys are not encrypted when saved! Please make sure you store them in a safe environment, or encrypt them manually after saving.
After keys are saved to disk, you can load them back anytime:
Lastly, if you want to generate keys in the first run and reuse the keys in consecutive runs:
Some terms used throughout the project include:
computation graph: A data structure to represent a computation. This is basically a directed acyclic graph in which nodes are either inputs, constants, or operations on other nodes.
tracing: A technique that takes a Python function from the user and generates a corresponding computation graph.
bounds: Before computation graphs are converted to MLIR, we need to know which value should have which type (e.g., uint3 vs int5). We use inputsets for this purpose. We simulate the graph with the inputs in the inputset to remember the minimum and the maximum value for each node, which is what we call bounds, and use bounds to determine the appropriate type for each node.
circuit: The result of compilation. A circuit is made of the client and server components. It has methods for everything from printing to evaluation.
In this section, we briefly discuss the module structure of Concrete Python. You are encouraged to check individual `.py` files to learn more.
- `concrete`
  - `fhe`
    - `dtypes`: data type specifications (e.g., int4, uint5, float32)
    - `values`: value specifications (i.e., data type + shape + encryption status)
    - `representation`: representation of computation (e.g., computation graphs, nodes)
    - `tracing`: tracing of Python functions
    - `extensions`: custom functionality (see Extensions)
    - `mlir`: computation graph to MLIR conversion
    - `compilation`: configuration, compiler, artifacts, circuit, client/server, and anything else related to compilation
After developing your circuit, you may want to deploy it. However, sharing the details of your circuit with every client might not be desirable. You might want to perform the computation on dedicated servers as well. In this case, you can use the `Client` and `Server` features of Concrete.
You can develop your circuit like we've discussed in the previous chapters. Here is a simple example:
Once you have your circuit, you can save everything the server needs:
Then, send `server.zip` to your computation server.
You can load the `server.zip` you get from the development machine:
You will need to wait for requests from clients. The first likely request is for `ClientSpecs`.
Clients need `ClientSpecs` to generate keys and request computation. You can serialize `ClientSpecs`:
Then, you can send it to the clients requesting it.
After getting the serialized `ClientSpecs` from a server, you can create the client object:
Once you have the `Client` object, you can perform key generation:
This method generates encryption/decryption keys and evaluation keys.
The server requires evaluation keys linked to the encryption keys that you just generated. You can serialize your evaluation keys as shown:
After serialization, send the evaluation keys to the server.
Serialized evaluation keys are very large, so you may want to cache them on the server instead of sending them with each request.
Now encrypt your inputs and request the server to perform the computation. You can do it like so:
Then, send serialized args to the server.
Once you have serialized evaluation keys and serialized arguments, you can deserialize them:
You can perform the computation, as well:
Then, send the serialized public result back to the client, so they can decrypt it and get the result of the computation.
Once you have received the public result of the computation from the server, you can deserialize it:
Then, decrypt the result:
Concrete can be customized using `Configuration`s:
You can overwrite individual options as kwargs to the `compile` method:
Or you can combine both:
Additional kwargs to `compile` functions take higher precedence. So if you set an option in both the `configuration` and the `compile` method, the value in the `compile` method will be used.
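The precedence rule can be sketched with a plain dictionary model. This is a hypothetical illustration of the merge semantics, not Concrete's actual `Configuration` class:

```python
# Sketch of the precedence rule: options passed as kwargs to compile()
# override the same options from the Configuration object.
def effective_options(configuration: dict, compile_kwargs: dict) -> dict:
    merged = dict(configuration)
    merged.update(compile_kwargs)  # compile kwargs take higher precedence
    return merged

configuration = {"p_error": 0.001, "show_mlir": True}
options = effective_options(configuration, {"p_error": 0.01})

assert options["p_error"] == 0.01    # the compile kwarg wins
assert options["show_mlir"] is True  # untouched options survive
```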
show_graph: Optional[bool] = None
Whether to print the computation graph during compilation. `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
show_mlir: Optional[bool] = None
Whether to print MLIR during compilation. `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
show_optimizer: Optional[bool] = None
Whether to print optimizer output during compilation. `True` means always print, `False` means never print, and `None` means print depending on the verbose configuration below.
verbose: bool = False
Whether to print details related to compilation.
dump_artifacts_on_unexpected_failures: bool = True
Whether to export debugging artifacts automatically on compilation failures.
auto_adjust_rounders: bool = False
Whether to adjust rounders automatically.
p_error: Optional[float] = None
Error probability for individual table lookups. If set, all table lookups will have the probability of a non-exact result smaller than the set value. See Exactness to learn more.
global_p_error: Optional[float] = None
Global error probability for the whole circuit. If set, the whole circuit will have the probability of a non-exact result smaller than the set value. See Exactness to learn more.
single_precision: bool = True
Whether to use single precision for the whole circuit.
jit: bool = False
Whether to use JIT compilation.
loop_parallelize: bool = True
Whether to enable loop parallelization in the compiler.
dataflow_parallelize: bool = False
Whether to enable dataflow parallelization in the compiler.
auto_parallelize: bool = False
Whether to enable auto parallelization in the compiler.
enable_unsafe_features: bool = False
Whether to enable unsafe features.
use_insecure_key_cache: bool = False (Unsafe)
Whether to use the insecure key cache.
insecure_key_cache_location: Optional[Union[Path, str]] = None
Location of insecure key cache.
Fusing is the act of combining multiple nodes into a single node, which is converted to a table lookup.
Code related to fusing is in the `frontends/concrete-python/concrete/fhe/compilation/utils.py` file. Fusing can be performed using the `fuse` function.

Within `fuse`:
1. We loop until there are no more subgraphs to fuse.
2. Within each iteration:
   2.1. We find a subgraph to fuse.
   2.2. We search for a terminal node that is appropriate for fusing.
   2.3. We crawl backwards to find the closest integer nodes to this node.
   2.4. If there is a single such node, we return the subgraph from this node to the terminal node.
   2.5. Otherwise, we try to find the lowest common ancestor (lca) of this list of nodes.
   2.6. If an lca doesn't exist, we say this particular terminal node is not fusable, and we go back to search for another subgraph.
   2.7. Otherwise, we use this lca as the input of the subgraph and continue with `subgraph` node creation below.
   2.8. We convert the subgraph into a `subgraph` node by checking the fusability status of the nodes of the subgraph in this step.
   2.9. We substitute the `subgraph` node into the original graph.
With the current implementation, we cannot fuse subgraphs that depend on multiple encrypted values where those values don't have a common lca (e.g., `np.round(np.sin(x) + np.cos(y))`).
Compilation begins with tracing to get an easy-to-manipulate representation of the function. We call this representation a `Computation Graph`, which is basically a Directed Acyclic Graph (DAG) containing nodes that represent the computations done in the function. Working with graphs is good because they have been studied extensively, and there are a lot of available algorithms to manipulate them. Internally, we use an established graph library for Python.
The next step in compilation is transforming the computation graph. There are many transformations we perform, and they will be discussed in their own sections. The result of transformations is just another computation graph.
After transformations are applied, we need to determine the bounds (i.e., the minimum and the maximum values) of each intermediate node. This is required because FHE currently allows limited precision for computations. Bound measurement helps determine the required precision for the function.
The final step is to transform the computation graph into equivalent `MLIR` code. Once the MLIR is generated, our Compiler backend compiles it down to native binaries.
We start with a Python function `f`, such as this one:
The goal of tracing is to create the following computation graph without requiring any change from the user.
(Note that the edge labels are relevant for non-commutative operations. For example, a subtraction node represents `(predecessor with edge label 0) - (predecessor with edge label 1)`.)
To do this, we make use of `Tracer`s, which are objects that record the operations performed during their creation. We create a `Tracer` for each argument of the function and call the function with those tracers. `Tracer`s make use of Python's operator overloading feature to achieve their goal:
- `2 * y` will be performed first, and `*` is overloaded for `Tracer` to return another tracer: `Tracer(computation=Multiply(Constant(2), self.computation))`, which is equal to `Tracer(computation=Multiply(Constant(2), Input("y")))`
- `x + (2 * y)` will be performed next, and `+` is overloaded for `Tracer` to return another tracer: `Tracer(computation=Add(self.computation, (2 * y).computation))`, which is equal to `Tracer(computation=Add(Input("x"), Multiply(Constant(2), Input("y"))))`
In the end, we will have output tracers that can be used to create the computation graph. The implementation is a bit more complex than this, but the idea is the same.
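A minimal sketch of this mechanism follows, using nested tuples instead of Concrete's real node classes (the `Tracer` here is a toy, not the library's implementation):

```python
# Minimal sketch of tracing via operator overloading: each Tracer records
# the operation that created it, so calling f with tracers yields an
# expression tree (the computation graph) instead of a number.
class Tracer:
    def __init__(self, computation):
        self.computation = computation  # nested tuples stand in for nodes

    def __add__(self, other):
        return Tracer(("Add", self.computation, _lift(other).computation))

    def __radd__(self, other):
        return Tracer(("Add", _lift(other).computation, self.computation))

    def __mul__(self, other):
        return Tracer(("Multiply", self.computation, _lift(other).computation))

    def __rmul__(self, other):
        return Tracer(("Multiply", _lift(other).computation, self.computation))

def _lift(value):
    # wrap plain Python values as Constant nodes
    return value if isinstance(value, Tracer) else Tracer(("Constant", value))

def f(x, y):
    return x + 2 * y

# calling f with tracers produces the graph instead of evaluating it
output = f(Tracer(("Input", "x")), Tracer(("Input", "y")))
assert output.computation == (
    "Add",
    ("Input", "x"),
    ("Multiply", ("Constant", 2), ("Input", "y")),
)
```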
Tracing is also responsible for indicating whether the values in the node would be encrypted or not. The rule for that is: if a node has an encrypted predecessor, it is encrypted as well.
The goal of topological transforms is to make more functions compilable.
With the current version of Concrete, floating-point inputs and floating-point outputs are not supported. However, if the floating-point operations are intermediate operations, they can sometimes be fused into a single table lookup from integer to integer, thanks to some specific transforms.
Let's take a closer look at the transforms we can currently perform.
We have allocated a whole new chapter to explaining fusing. You can find it after this chapter.
Given a computation graph, the goal of the bounds measurement step is to assign the minimal data type to each node in the graph.
If we have an encrypted input that is always between `0` and `10`, we should assign the type `EncryptedScalar<uint4>` to its node, as this is the minimal encrypted integer type that supports all values between `0` and `10`.
If there were negative values in the range, we could have used `intX` instead of `uintX`.
Bounds measurement is necessary because FHE supports limited precision, and we don't want unexpected behavior while evaluating the compiled functions.
Let's take a closer look at how we perform bounds measurement.
This is a simple approach that requires an inputset to be provided by the user.
The inputset is not to be confused with the dataset, which is classical in machine learning, as it doesn't require labels. Rather, it is a set of values that are typical inputs of the function.
The idea is to evaluate each input in the inputset and record the result of each operation in the computation graph. Then we compare the evaluation results with the current minimum/maximum values of each node and update the minimum/maximum accordingly. After the entire inputset is evaluated, we assign a data type to each node using the minimum and maximum values it contains.
Here is an example. Given this computation graph, where `x` is encrypted:
and this inputset:
Evaluation result of `2`:
- `x`: 2
- `2`: 2
- `*`: 4
- `3`: 3
- `+`: 7

New bounds:
- `x`: [2, 2]
- `2`: [2, 2]
- `*`: [4, 4]
- `3`: [3, 3]
- `+`: [7, 7]

Evaluation result of `3`:
- `x`: 3
- `2`: 2
- `*`: 6
- `3`: 3
- `+`: 9

New bounds:
- `x`: [2, 3]
- `2`: [2, 2]
- `*`: [4, 6]
- `3`: [3, 3]
- `+`: [7, 9]

Evaluation result of `1`:
- `x`: 1
- `2`: 2
- `*`: 2
- `3`: 3
- `+`: 5

New bounds:
- `x`: [1, 3]
- `2`: [2, 2]
- `*`: [2, 6]
- `3`: [3, 3]
- `+`: [5, 9]

Assigned data types:
- `x`: EncryptedScalar<uint2>
- `2`: ClearScalar<uint2>
- `*`: EncryptedScalar<uint3>
- `3`: ClearScalar<uint2>
- `+`: EncryptedScalar<uint4>
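The whole pass can be sketched in plain Python. This is a toy model of the graph `x * 2 + 3` with hard-coded node names matching the labels above, not the library's graph machinery:

```python
# Sketch of the bounds-measurement pass for the graph of x * 2 + 3,
# evaluated over the inputset [2, 3, 1]: every node's min/max is updated
# after each evaluation, reproducing the walkthrough above.
def measure_bounds(inputset):
    bounds = {}

    def record(name, value):
        lo, hi = bounds.get(name, (value, value))
        bounds[name] = (min(lo, value), max(hi, value))

    for x in inputset:
        record("x", x)
        record("2", 2)
        record("*", x * 2)
        record("3", 3)
        record("+", x * 2 + 3)
    return bounds

bounds = measure_bounds([2, 3, 1])
assert bounds == {
    "x": (1, 3),
    "2": (2, 2),
    "*": (2, 6),
    "3": (3, 3),
    "+": (5, 9),
}
```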
Concrete is an open-source framework which simplifies the use of Fully Homomorphic Encryption (FHE).
FHE is a powerful cryptographic tool, allowing computation to be performed directly on encrypted data without needing to decrypt it. With FHE, you can build services that preserve privacy for all users. FHE also offers ideal protection against data breaches, as everything is done on encrypted data. Even if the server is compromised, no sensitive data is leaked.
Since writing FHE programs is hard, the Concrete framework contains a TFHE Compiler based on LLVM to make this process easier for developers.
This documentation is split into several sections:
Getting Started gives you the basics,
Tutorials provides essential examples on various features of the library,
How to helps you perform specific tasks,
Developer explains the inner workings of the library and everything related to contributing to the project.
Concrete Numpy was the former name of the Python frontend of the Concrete Compiler. The Concrete Compiler is now open source, and the package name has been updated from `concrete-numpy` to `concrete-python` (as `concrete` is already taken by a non-FHE-related project).
Support forum (we answer in less than 24 hours).
Live discussion on the FHE.org Discord server (inside the #concrete channel).
Do you have a question about Zama? Write to us or send us an email at hello@zama.ai.
Users of Concrete Numpy can safely update to Concrete with a few changes.
Before v1.0, Concrete was a set of Rust libraries implementing Zama's variant of TFHE. Starting with v1, Concrete is Zama's TFHE Compiler framework only; the original Rust library continues under a new name.