Concrete is natively supported on Linux and macOS for Python 3.8 to 3.11 inclusive. If you have Docker on your platform, you can use the Docker image to run Concrete.
You can install Concrete from PyPI:
Some optional features can be enabled by installing the full version:
The full version depends on pygraphviz, which requires graphviz to be installed on the operating system, so install the operating system dependencies before installing concrete-python[full].
Installing pygraphviz on macOS can be problematic (see https://github.com/pygraphviz/pygraphviz/issues/11). If you're using Homebrew, you may need to install graphviz and point pip at its headers and libraries before running the installation.
You can also get the Concrete Docker image (replace "v2.4.0" below with the version you want):
Docker is not supported on Apple Silicon.
To compute on encrypted data, you first need to define the function you want to compute, then compile it into a Concrete Circuit, which you can use to perform homomorphic evaluation.
Here is the full example that we will walk through:
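The original listing is not reproduced here; the following is a minimal sketch of the full flow using the concrete-python API (the inputset values are chosen to match the bounds discussed below):

```python
from concrete import fhe

def add(x, y):
    return x + y

# specify which inputs are encrypted
compiler = fhe.Compiler(add, {"x": "encrypted", "y": "encrypted"})

# 10 pairs of integers, between (0, 0) and (7, 7)
inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]
circuit = compiler.compile(inputset)

circuit.keygen()

encrypted_x, encrypted_y = circuit.encrypt(2, 6)
result = circuit.run(encrypted_x, encrypted_y)
decrypted = circuit.decrypt(result)

assert decrypted == add(2, 6)
```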
Everything you need to perform homomorphic evaluation is included in a single module:
In this example, we compile a simple addition function:
To compile the function, you need to create a Compiler
by specifying the function to compile and the encryption status of its inputs:
To specify that, e.g., y is in the clear, you would use:
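A sketch, continuing the add example from this walkthrough:

```python
compiler = fhe.Compiler(add, {"x": "encrypted", "y": "clear"})
```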
An inputset is a collection representing the typical inputs to the function. It is used to determine the bit widths and shapes of the variables within the function.
It should be an iterable yielding tuples of the same length as the number of arguments of the function being compiled:
Here, our inputset is made of 10 pairs of integers, of which the minimum pair is (0, 0) and the maximum is (7, 7).
Choosing a representative inputset is critical to allow the compiler to find accurate bounds for all the intermediate values (find more details here). If you later evaluate the circuit with values that underflow or overflow these bounds, the result is undefined behavior.
There is a utility function called fhe.inputset(...) for easily creating random inputsets; see its documentation to learn more!
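For example, a sketch that generates 10 random pairs of 3-bit values (assuming the fhe.uint3 type descriptor and the size keyword described later in this documentation):

```python
inputset = fhe.inputset(fhe.uint3, fhe.uint3, size=10)
```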
You can use the compile
method of the Compiler
class with an inputset to perform the compilation and get the resulting circuit back:
You can use the keygen
method of the Circuit
class to generate the keys (public and private):
If you don't call key generation explicitly, keys will be generated lazily when needed.
Now you can easily perform the homomorphic evaluation using the encrypt
, run
and decrypt
methods of the Circuit
:
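A sketch, continuing with the circuit compiled above:

```python
encrypted_x, encrypted_y = circuit.encrypt(2, 6)
result = circuit.run(encrypted_x, encrypted_y)
decrypted = circuit.decrypt(result)
assert decrypted == 8
```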
Some terms used throughout the project include:
computation graph: A data structure to represent a computation. This is basically a directed acyclic graph in which nodes are either inputs, constants, or operations on other nodes.
tracing: A technique that takes a Python function from the user and generates a corresponding computation graph.
bounds: Before computation graphs are converted to MLIR, we need to know which value should have which type (e.g., uint3 vs int5). We use inputsets for this purpose. We simulate the graph with the inputs in the inputset to remember the minimum and the maximum value for each node, which is what we call bounds, and use bounds to determine the appropriate type for each node.
circuit: The result of compilation. A circuit is made of the client and server components. It has methods for everything from printing to evaluation.
Some applications require directly manipulating bits of integers. Concrete provides a bit extraction operation for such applications.
Bit extraction is capable of extracting a slice of bits from an integer. Index 0 corresponds to the least significant bit. The cost of this operation is proportional to the index of the highest extracted bit.
Bit extraction only works in the Native
encoding, which is usually selected when all table lookups in the circuit are less than or equal to 8 bits.
Slices can be used for indexing fhe.bits(value)
as well.
Even slices with negative steps are supported!
Signed integers are supported as well.
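For illustration, a small sketch (the values are illustrative, not taken from the original examples):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def extract(x):
    return fhe.bits(x)[0], fhe.bits(x)[1:3]

circuit = extract.compile(range(32))  # 5-bit inputs

lsb, middle = circuit.encrypt_run_decrypt(0b_10110)
assert lsb == 0         # bit 0 of 0b_10110
assert middle == 0b_11  # bits 1 and 2 of 0b_10110, packed with bit 1 as the new lowest bit
```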
Lastly, here is a practical use case of bit extraction.
prints
Bits cannot be extracted using a negative index. This means that, for example, fhe.bits(x)[-1] or fhe.bits(x)[-4:-1] is not supported. The reason for this is that we don't know in advance (i.e., before inputset evaluation) how many bits x has.
For example, let's say you have x == 10 == 0b_000...0001010
, and you want to do fhe.bits(x)[-1]
. If the value is 4-bits (i.e., 0b_1010
), the result needs to be 1
, but if it's 6-bits (i.e., 0b_001010
), the result needs to be 0
. Since we don't know the bit-width of x
before inputset evaluation, we cannot calculate fhe.bits(x)[-1]
.
When extracting bits using slices in reverse order (i.e., step < 0), the start bit needs to be provided explicitly.
This means that, for example, fhe.bits(x)[::-1] or fhe.bits(x)[:2:-1] is not supported.
The reason is the same as above.
When extracting bits of signed values using slices, the stop bit needs to be provided explicitly.
This means that, for example, fhe.bits(x)[1:] or fhe.bits(x)[1::2] is not supported.
The reason is similar to above.
To explain a bit more, signed integers use two's complement representation. In this representation, negative values have their most significant bits set to 1 (e.g., -1 == 0b_11111
, -2 == 0b_11110
, -3 == 0b_11101
). Extracting bits always returns a positive value (e.g., fhe.bits(-1)[1:3] == 0b_11 == 3
). This means that if you were to do fhe.bits(x)[1:]
where x == -1
, if x
is 4 bits, the result would be 0b_111 == 7
, but if x
is 5 bits the result would be 0b_1111 == 15
. Since we don't know the bit-width of x
before inputset evaluation, we cannot calculate fhe.bits(x)[1:]
.
Bits of floats cannot be extracted.
Floats are partially supported but extracting their bits is not supported at all.
Key Concept: Extracting a specific bit requires clearing all the preceding lower bits. This involves extracting these previous bits as intermediate values and then subtracting them from the input.
Implications:
Bits are extracted sequentially, starting from the least significant bit to the more significant ones. The cost is proportional to the index of the highest extracted bit plus one.
No parallelization is possible. The computation time is proportional to the cost, independent of the number of CPUs.
Examples:
Extracting fhe.bits(x)[4]
is approximately five times costlier than extracting fhe.bits(x)[0]
.
Extracting fhe.bits(x)[4]
takes around five times more wall clock time than fhe.bits(x)[0]
.
The cost of extracting fhe.bits(x)[0:5]
is almost the same as that of fhe.bits(x)[5]
.
Key Concept: Common sub-expression elimination is applied to intermediate extracted bits.
Implications:
The overall cost for a series of fhe.bits(x)[m:n]
calls on the same input x
is almost equivalent to the cost of the single most computationally expensive extraction in the series, i.e. fhe.bits(x)[n]
.
The order of extraction in that series does not affect the overall cost.
Example:
The combined operation fhe.bits(x)[3] + fhe.bits(x)[2] + fhe.bits(x)[1]
has almost the same cost as fhe.bits(x)[3]
.
Each extracted bit incurs a cost of approximately one TLU of 1-bit input precision. Therefore, fhe.bits(x)[0]
is generally faster than any other TLU operation.
One of the most common operations in Concrete is the Table Lookup (TLU). All operations except addition, subtraction, multiplication with non-encrypted values, tensor manipulation operations, and a few operations built from those primitives (e.g., matmul, conv) are converted to Table Lookups under the hood.
Table Lookups are very flexible. They allow Concrete to support many operations, but they are expensive. The exact cost depends on many variables (hardware used, error probability, etc.), but they are always much more expensive than other operations. You should avoid them as much as possible. It's not always possible to avoid them completely, but you can often reduce the number of TLUs or replace some of them with other primitive operations.
Concrete automatically parallelizes TLUs if they are applied to tensors.
Concrete provides a LookupTable
class to create your own tables and apply them in your circuits.
LookupTable
s can have any number of elements. Let's call the number of elements N. As long as the lookup variable is within the range [-N, N), the Table Lookup is valid.
If you go outside of this range, you will receive the following error:
You can create the lookup table using a list of integers and apply it using indexing:
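A sketch of that usage:

```python
from concrete import fhe

table = fhe.LookupTable([2, -1, 3, 0])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

circuit = f.compile(range(4))
assert circuit.encrypt_run_decrypt(2) == 3  # table[2]
```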
When you apply a table lookup to a tensor, the scalar table lookup is applied to each element of the tensor:
LookupTable
mimics array indexing in Python, which means if the lookup variable is negative, the table is looked up from the back:
If you want to apply a different lookup table to each element of a tensor, you can have a LookupTable
of LookupTable
s:
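A sketch of what this can look like (the shape and values are illustrative; per my understanding, the position in the last axis selects which sub-table is applied):

```python
import numpy as np
from concrete import fhe

squared = fhe.LookupTable([i ** 2 for i in range(4)])
cubed = fhe.LookupTable([i ** 3 for i in range(4)])
table = fhe.LookupTable([squared, cubed])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

inputset = [np.random.randint(0, 4, size=(3, 2)) for _ in range(10)]
circuit = f.compile(inputset)

sample = np.array([[0, 1], [2, 3], [3, 2]])
assert np.array_equal(circuit.encrypt_run_decrypt(sample), [[0, 1], [4, 27], [9, 8]])
```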
In this example, we applied a squared
table to the first column and a cubed
table to the second column.
Concrete tries to fuse some operations into table lookups automatically so that lookup tables don't need to be created manually:
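For example, a sketch of a function whose floating-point intermediates get fused into a single table lookup:

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    # np.sin produces floats; casting back to integers lets Concrete fuse
    # the whole sub-expression into one table lookup from x
    return (10 * np.sin(x)).astype(np.int64)

circuit = f.compile(range(8))
```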
All lookup tables need to be from integers to integers. So, without .astype(np.int64)
, Concrete will not be able to fuse.
The function is first traced into:
Concrete then fuses appropriate nodes:
Fusing makes the code more readable and easier to modify, so try to utilize it over manual LookupTable
s as much as possible.
We refer users to this page for explanations about the fhe.univariate(function) and fhe.multivariate(function) features, which are convenient ways to use automatically created table lookups.
TLUs are performed with an FHE operation called Programmable Bootstrapping
(PBS). PBSs have a certain probability of error: when these errors happen, it results in inaccurate results.
Let's say you have the table:
And you perform a Table Lookup using 4
. The result you should get is lut[4] = 16
, but because of the possibility of error, you could get any other value in the table.
The probability of this error can be configured through the p_error
and global_p_error
configuration options. The difference between these two options is that, p_error
is for individual TLUs but global_p_error
is for the whole circuit.
If you set p_error
to 0.01
, for example, it means every TLU in the circuit will have a 99% chance (or more) of being exact. If there is a single TLU in the circuit, it corresponds to global_p_error = 0.01
as well. But if we have 2 TLUs, then global_p_error
would be higher: that's 1 - (0.99 * 0.99) ~= 0.02 = 2%
.
If you set global_p_error
to 0.01
, the whole circuit will have at most 1% probability of error, no matter how many Table Lookups are included (which means that p_error
will be smaller than 0.01
if there are more than a single TLU).
If you set both of them, both will be satisfied. Essentially, the stricter one will be used.
By default, both p_error
and global_p_error
are set to None
, which results in a global_p_error
of 1 / 100_000
being used.
Feel free to play with these configuration options to pick the one best suited for your needs! See How to Configure to learn how you can set a custom p_error
and/or global_p_error
.
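A sketch of setting them at compile time, assuming the options are passed as configuration keyword arguments (using the compiler and inputset from the quick start):

```python
circuit = compiler.compile(inputset, p_error=0.01)
# or
circuit = compiler.compile(inputset, global_p_error=0.001)
```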
Configuring either of those variables impacts compilation and execution times (compilation, keys generation, circuit execution) and space requirements (size of the keys on disk and in memory). Lower error probabilities result in longer compilation and execution times and larger space requirements.
PBSs are very expensive in terms of computation. Fortunately, it is sometimes possible to replace a PBS with a rounded, truncated, or even approximate PBS. These TLUs have slightly different semantics, but they are very useful in cases like machine learning, bringing more efficiency without a drop in accuracy.
Concrete is an open-source FHE Compiler that simplifies the use of Fully Homomorphic Encryption (FHE).
Concrete partly supports floating points. There is no support for floating point inputs or outputs. However, there is support for intermediate values to be floating points (under certain constraints).
The Concrete compiler, which is used for compiling the circuit, doesn't support floating points at all. However, it supports table lookups which take an integer and map it to another integer. The constraints of this operation are that there should be a single integer input and a single integer output.
As long as your floating point operations comply with those constraints, Concrete automatically converts them to a table lookup operation:
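The original snippet is not shown; here is a sketch with the same structure, where a, b and c are floating-point intermediates and d is an integer that depends only on x:

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    a = x + 1.5                    # float intermediate
    b = np.sin(x)                  # float intermediate
    c = a + b                      # float intermediate
    d = (10 * c).astype(np.int64)  # integer output, depends only on x
    return d

circuit = f.compile(range(8))
```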
In the example above, a
, b
, and c
are floating point intermediates. They are used to calculate d
, which is an integer with a value dependent upon x
, which is also an integer. Concrete detects this and fuses all of these operations into a single table lookup from x
to d
.
This approach works for a variety of use cases, but it comes up short for others:
This results in:
The reason for the error is that d
no longer depends solely on x
; it depends on y
as well. Concrete cannot fuse these operations, so it raises an exception instead.
Finding the minimum or maximum of two numbers is not a native operation in Concrete, so it needs to be implemented using existing native operations (i.e., additions, clear multiplications, negations, table lookups). Concrete offers two different implementations for this.
This is the most general implementation that can be used in any situation. The idea is:
Initial comparison is chunked as well, which is already very expensive.
Multiplications aren't allowed to increase the bit-width of the inputs, so they are very expensive as well.
Optimal chunk size is selected automatically to reduce the number of table lookups.
Chunked comparisons result in at least 9 and at most 21 table lookups.
It is used if no other implementation can be used.
Can be used with any integers.
Extremely expensive.
produces
This implementation uses the fact that [min,max](x, y)
is equal to [min, max](x - y, 0) + y
, which is just a subtraction, a table lookup and an addition!
There are two major problems with this implementation though:
subtraction before the TLU requires up to 2 additional bits to avoid overflows (it is 1 in most cases).
subtraction and addition require the same bit-width across operands.
What this means is that if we are comparing uint3
and uint6
, we need to convert both of them to uint7
in some way to do the subtraction and proceed with the TLU in 7-bits. There are 2 ways to achieve this behavior.
This strategy makes sure that during bit-width assignment, both operands are assigned the same bit-width, and that bit-width contains at least the amount of bits required to store x - y
. The idea is:
It will always result in a single table lookup.
It will increase the bit-width of both operands and the result, and lock them together across the whole circuit, which can result in significant slowdowns if the result or the operands are used in other costly operations.
produces
This strategy will not put any constraint on bit-widths during bit-width assignment. Instead, operands are cast to a bit-width that can store x - y
during runtime using table lookups. The idea is:
It can result in a single table lookup as well, if x and y are assigned (because of other operations) the same bit-width, and that bit-width can store x - y
.
Or in two table lookups if only one of the operands is assigned a bit-width bigger than or equal to the bit width that can store x - y
.
It will not put any constraints on bit-widths of the operands, which is amazing if they are used in other costly operations.
It will result in at most 3 table lookups, which is still good.
If you are not doing anything else with the operands, or doing less costly operations compared to comparison, it will introduce up to two unnecessary table lookups and slow down execution compared to fhe.MinMaxStrategy.ONE_TLU_PROMOTED
.
produces
| Min/Max strategy | Minimum # of TLUs | Maximum # of TLUs | Can increase the bit-width of the inputs |
|---|---|---|---|
| CHUNKED | 9 | 21 | |
| ONE_TLU_PROMOTED | 1 | 1 | ✓ |
| THREE_TLU_CASTED | 1 | 3 | |
Concrete will choose the best strategy available after bit-width assignment, regardless of the specified preference.
Different strategies are good for different circuits. If you want the best runtime for your use case, you can compile your circuit with all the different min/max strategy preferences, and pick the one with the lowest complexity.
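A sketch of picking a preference, assuming the min_max_strategy_preference configuration option:

```python
import numpy as np
from concrete import fhe

def f(x, y):
    return np.minimum(x, y)

compiler = fhe.Compiler(f, {"x": "encrypted", "y": "encrypted"})
inputset = fhe.inputset(fhe.uint3, fhe.uint6)

circuit = compiler.compile(inputset, min_max_strategy_preference=fhe.MinMaxStrategy.THREE_TLU_CASTED)
print(circuit.complexity)  # compare across preferences and pick the cheapest
```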
Here are the operations you can use inside the function you are compiling:
Some of these operations are not supported between two encrypted values. A detailed error will be raised if you try to do something that is not supported.
ndarray methods.
ndarray properties.
Some Python control flow statements are not supported. You cannot have an if statement or a while statement for which the condition depends on an encrypted value. However, such statements are supported with constant values (e.g., for i in range(SOME_CONSTANT), if os.environ.get("SOME_FEATURE") == "ON":).
You cannot have floating-point inputs or floating-point outputs. You can have floating-point intermediate values as long as they can be converted to an integer Table Lookup (e.g., (60 * np.sin(x)).astype(np.int64)
).
There is a limit on the bit width of encrypted values. We are constantly working on increasing this bit width. If you go above the limit, you will get an error.
Bitwise operations are not native operations in Concrete, so they need to be implemented using existing native operations (i.e., additions, clear multiplications, negations, table lookups). Concrete offers two different implementations for performing bitwise operations.
This is the most general implementation that can be used in any situation. The idea is:
Signed bitwise operations are not supported.
The optimal chunk size is selected automatically to reduce the number of table lookups.
Chunked bitwise operations result in at least 4 and at most 9 table lookups.
It is used if no other implementation can be used.
Can be used with any integers.
Very expensive.
produces
This implementation uses the fact that we can combine two values into a single value and apply a single table lookup to this combined value!
There are two major problems with this implementation:
packing requires the same bit-width across operands.
packing requires the bit-width of at least x.bit_width + y.bit_width
and that bit-width cannot exceed maximum TLU bit-width, which is 16
at the moment.
What this means is if we are comparing uint3
and uint6
, we need to convert both of them to uint9
in some way to do the packing and proceed with the TLU in 9-bits. There are 4 ways to achieve this behavior.
This strategy makes sure that during bit-width assignment, both operands are assigned the same bit-width, and that bit-width contains at least the amount of bits required to store pack(x, y)
. The idea is:
It will always result in a single table lookup.
It will significantly increase the bit-width of both operands and lock them to each other across the whole circuit, which can result in significant slowdowns if the operands are used in other costly operations.
produces
This strategy will not put any constraint on bit-widths during bit-width assignment; instead, operands are cast to a bit-width that can store pack(x, y) during runtime using table lookups. The idea is:
It can result in a single table lookup as well, if x and y are assigned (because of other operations) the same bit-width, and that bit-width can store pack(x, y)
.
Or in two table lookups if only one of the operands is assigned a bit-width bigger than or equal to the bit width that can store pack(x, y)
.
It will not put any constraints on bit-widths of the operands, which is amazing if they are used in other costly operations.
It will result in at most 3 table lookups, which is still good.
If you are not doing anything else with the operands, or doing less costly operations compared to bitwise, it will introduce up to two unnecessary table lookups and slow down execution compared to fhe.BitwiseStrategy.ONE_TLU_PROMOTED
.
produces
This strategy can be viewed as a middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store pack(x, y)
, and the smaller operand will be cast to that bit-width during runtime. The idea is:
It can result in a single table lookup as well, if the smaller operand is assigned (because of other operations) the same bit-width as the bigger operand.
It will only put a constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
It will result in at most 2 table lookups, which is great.
It will significantly increase the bit-width of the bigger operand which can result in significant slowdowns if the bigger operand is used in other costly operations.
If you are not doing anything else with the smaller operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to fhe.BitwiseStrategy.THREE_TLU_CASTED
.
produces
This strategy is like the exact opposite of the strategy above. With this, only the smaller operand will be constrained to have at least the required bit-width, and the bigger operand will be cast during runtime. The idea is:
It can result in a single table lookup as well, if the bigger operand is assigned (because of other operations) the same bit-width as the smaller operand.
It will only put constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
It will result in at most 2 table lookups, which is great.
It will increase the bit-width of the smaller operand which can result in significant slowdowns if the smaller operand is used in other costly operations.
If you are not doing anything else with the bigger operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to fhe.BitwiseStrategy.THREE_TLU_CASTED
.
produces
| Bitwise strategy | Minimum # of TLUs | Maximum # of TLUs | Can increase the bit-width of the inputs |
|---|---|---|---|
| CHUNKED | 4 | 9 | |
| ONE_TLU_PROMOTED | 1 | 1 | ✓ |
| THREE_TLU_CASTED | 1 | 3 | |
| TWO_TLU_BIGGER_PROMOTED_SMALLER_CASTED | 1 | 2 | ✓ |
| TWO_TLU_BIGGER_CASTED_SMALLER_PROMOTED | 1 | 2 | ✓ |
Concrete will choose the best strategy available after bit-width assignment, regardless of the specified preference.
Different strategies are good for different circuits. If you want the best runtime for your use case, you can compile your circuit with all the different bitwise strategy preferences, and pick the one with the lowest complexity.
The same configuration option is used to modify the behavior of encrypted shift operations. Shifts are much more complex to implement, so we won't go over the details. What is important is that, in the end, the result is computed using additions or subtractions on the original shifted operand. Since additions and subtractions require the same bit-width across operands, input and output bit-widths need to be synchronized at some point. There are two ways to do this:
Here, the shifted operand and shift result are assigned the same bit-width during bit-width assignment, which avoids an additional TLU on the shifted operand. On the other hand, it might increase the bit-width of the result or the shifted operand, and if they're used in other costly operations, it could result in significant slowdowns. This is the default behavior.
produces
The approach described above could be suboptimal for some circuits, so it is advised to check the complexity with it disabled before production. Here is how the implementation changes with it disabled.
produces
Concrete is an open source framework which simplifies the use of Fully Homomorphic Encryption (FHE).
Fully Homomorphic Encryption (FHE) enables performing computations on encrypted data directly without the need to decrypt it first. FHE allows developers to build services that ensure privacy for all users. FHE is also an excellent solution against data breaches as everything is performed on encrypted data. Even if the server is compromised, no sensitive data will be leaked.
Since writing FHE programs is difficult, the Concrete framework contains a TFHE Compiler based on LLVM to make this process easier for developers.
Concrete is a versatile library that can be used for a variety of purposes. For instance, Concrete ML is built on top of Concrete to simplify Machine-Learning oriented use cases.
FHE encrypts data as LWE ciphertexts. A ciphertext can be visually represented as a bit vector with the encrypted message in the higher-order bits, as well as a random part, called noise, that guarantees the security of the encrypted message.
Under the hood, each time you perform an operation on an encrypted value, the noise grows and at a certain point, it may overlap with the message and corrupt its value.
There is a way to decrease the noise of a ciphertext with the Bootstrap operation. The bootstrap operation takes as input a noisy ciphertext and generates a new ciphertext encrypting the same message, but with a lower noise. This allows additional operations to be performed on the encrypted message.
A typical FHE program will be made up of a series of operations followed by a Bootstrap, this is then repeated many times.
The amount of noise in a ciphertext is not as bounded as it may appear in the above illustration. As the errors are drawn randomly from a Gaussian distribution, they can be of varying size. This means that we need to be careful to ensure the noise terms do not affect the message bits. If the error terms do overflow into the message bits, this can cause an incorrect output (failure) when bootstrapping.
So far, we have only introduced arithmetic operations, but a typical program usually also involves functions (maximum, minimum, square root, etc.).
During the Bootstrap operation, in TFHE, you could perform a table lookup simultaneously to reduce noise, turning the Bootstrap operation into a Programmable Bootstrap (PBS).
Concrete uses the PBS to support function evaluation:
Let's take a simple example: a function (or circuit) that takes a 4-bit input variable and outputs the maximum value between a clear constant and the encrypted input.
example:
could be turned into a table lookup:
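A sketch of such a function and of the equivalent table (the exact listing is not shown in this document):

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return np.maximum(x, 12)  # 4-bit encrypted input vs. a clear constant

# equivalent table: every possible 4-bit input is mapped to max(input, 12)
lut = fhe.LookupTable([max(i, 12) for i in range(16)])
```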
The lookup table lut is applied during the Programmable Bootstrapping. You should not worry about PBSs: they are completely managed by Concrete during the compilation process. Each function evaluation will be turned into a lookup table and evaluated by a PBS.
To see this in action with the previous example, dump the MLIR code produced by the frontend; you will see the lookup table values in the generated code (don't worry about the MLIR syntax):
The only thing you should keep in mind is that it adds a constraint on the input type; that is the reason for having a maximum supported bit-width in Concrete.
The second takeaway is that PBSs are the most costly operations in FHE: the fewer PBSs in your circuit, the faster it will run. This makes it an interesting metric to optimize (Concrete can give you the number of PBSs used in your circuit).
Note also that PBS cost varies with the input variable precision (a circuit with 8-bit PBSs will run faster than one with 16-bit PBSs).
Allowing computation on encrypted data is particularly interesting in the client/server model, especially when the client data is sensitive and the server is not trusted. You could split the workflow into two main steps: development and deployment.
During development, you will turn your program into its FHE equivalent. Concrete automates this task with the compilation process but you can make this process even easier by reducing the precision required, reducing the number of PBSs or allowing more parallelization in your code (e.g. working on bit chunks instead of high bit-width variables).
Once happy with the code, the development process is over and you will create the compiler artifact that will be used during deployment.
A typical Concrete deployment hosts the compilation artifact on a server: the client specifications required by the compiled circuits and the FHE executable itself. The client asks for the circuit requirements, generates keys accordingly, then sends an encrypted payload and receives an encrypted result.
For more information on deployment, see Howto - Deploy.
Table lookups have a strict constraint on the number of bits they support. This can be limiting, especially if you don't need exact precision. As well as this, using larger bit-widths leads to slower table lookups.
To overcome these issues, truncated table lookups are introduced. This operation provides a way to zero the least significant bits of a large integer and then apply the table lookup on the resulting (smaller) value.
Imagine you have a 5-bit value. You can use fhe.truncate_bit_pattern(value, lsbs_to_remove=2) to truncate it (here the last 2 bits are discarded). Once truncated, the value will remain 5 bits wide (e.g., 22 = 0b10110 would be truncated to 20 = 0b10100), and its last 2 bits would be zero. Concrete uses this to optimize table lookups on the truncated value: the 5-bit table lookup gets optimized to a 3-bit table lookup, which is much faster!
Let's see how truncation works in practice:
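The original demonstration is not shown; here is a minimal sketch of the operation in FHE, matching the values above:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return fhe.truncate_bit_pattern(x, lsbs_to_remove=2)

circuit = f.compile(range(32))  # 5-bit inputs
assert circuit.encrypt_run_decrypt(22) == 20  # 0b_10110 is truncated to 0b_10100
```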
prints:
and displays:
Now, let's see how truncating can be used in FHE.
prints:
These speed-ups can vary from system to system.
The reason why the speed-up is not increasing with lsbs_to_remove
is because the truncating operation itself has a cost: each bit removal is a PBS. Therefore, if a lot of bits are removed, truncation itself could take longer than the bigger TLU which is evaluated afterwards.
and displays:
Truncating is very useful but, in some cases, you don't know how many bits your input contains, so it's not reliable to specify lsbs_to_remove
manually. For this reason, the AutoTruncator
class is introduced.
AutoTruncator
allows you to set how many of the most significant bits to keep, but they need to be adjusted using an inputset to determine how many of the least significant bits to remove. This can be done manually using fhe.AutoTruncator.adjust(function, inputset)
, or by setting auto_adjust_truncators
configuration to True
during compilation.
Here is how auto truncators can be used in FHE:
prints:
and displays:
AutoTruncator
s should be defined outside the function that is being compiled. They are used to store the result of the adjustment process, so they shouldn't be created each time the function is called. Furthermore, each AutoTruncator
should be used with exactly one truncate_bit_pattern
call.
Table lookups have a strict constraint on the number of bits they support. This can be limiting, especially if you don't need exact precision. As well as this, using larger bit-widths leads to slower table lookups.
To overcome these issues, rounded table lookups are introduced. This operation provides a way to round the least significant bits of a large integer and then apply the table lookup on the resulting (smaller) value.
Imagine you have a 5-bit value, but you want to have a 3-bit table lookup. You can call fhe.round_bit_pattern(input, lsbs_to_remove=2)
and use the 3-bit value you receive as input to the table lookup.
Let's see how rounding works in practice:
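The original demonstration is not shown; here is a minimal sketch of the operation in FHE (the sample values are illustrative):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return fhe.round_bit_pattern(x, lsbs_to_remove=2)

circuit = f.compile(range(28))  # stay below the overflow region of 5-bit inputs
assert circuit.encrypt_run_decrypt(21) == 20  # 0b_10101 rounds down to 0b_10100
assert circuit.encrypt_run_decrypt(23) == 24  # 0b_10111 rounds up to 0b_11000
```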
prints:
and displays:
If the rounded number is one of the last 2**(lsbs_to_remove - 1)
numbers in the input range [0, 2**original_bit_width)
, an overflow will happen.
By default, if an overflow is encountered during inputset evaluation, bit-widths will be adjusted accordingly. This results in a loss of speed, but ensures accuracy.
You can turn this overflow protection off (e.g., for performance) by using fhe.round_bit_pattern(..., overflow_protection=False)
. However, this could lead to unexpected behavior at runtime.
Now, let's see how rounding can be used in FHE.
prints:
These speed-ups can vary from system to system.
The reason why the speed-up is not increasing with lsbs_to_remove
is because the rounding operation itself has a cost: each bit removal is a PBS. Therefore, if a lot of bits are removed, rounding itself could take longer than the bigger TLU which is evaluated afterwards.
and displays:
Feel free to disable overflow protection and see what happens.
Rounding is very useful but, in some cases, you don't know how many bits your input contains, so it's not reliable to specify lsbs_to_remove
manually. For this reason, the AutoRounder
class is introduced.
AutoRounder
allows you to set how many of the most significant bits to keep, but they need to be adjusted using an inputset to determine how many of the least significant bits to remove. This can be done manually using fhe.AutoRounder.adjust(function, inputset)
, or by setting auto_adjust_rounders
configuration to True
during compilation.
Here is how auto rounders can be used in FHE:
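The original listing is not shown; here is a sketch, assuming AutoRounder takes a target_msbs argument:

```python
from concrete import fhe

rounder = fhe.AutoRounder(target_msbs=4)  # keep the 4 most significant bits

@fhe.compiler({"x": "encrypted"})
def f(x):
    return fhe.round_bit_pattern(x, lsbs_to_remove=rounder) ** 2

inputset = list(range(64))  # 6-bit inputs
fhe.AutoRounder.adjust(f, inputset)  # fills rounder.lsbs_to_remove from the inputset
circuit = f.compile(inputset)
```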
prints:
and displays:
AutoRounder
s should be defined outside the function that is being compiled. They are used to store the result of the adjustment process, so they shouldn't be created each time the function is called. Furthermore, each AutoRounder
should be used with exactly one round_bit_pattern
call.
One use of rounding is speeding up computation by ignoring the least significant bits. For this usage, you can get even faster results if you accept the rounding itself being slightly inexact. The speedup is usually around 2x-3x but can be higher for large precision reductions. This also enables higher precision values that are not possible otherwise.
*Using the default configuration in approximate mode, for 3, 4, 5 and 6 reduced precision bits and accumulator precision up to 32 bits.
You can turn on this mode either globally on the configuration:
or on/off locally:
In approximate mode, the rounding threshold is not perfectly centered. The off-centering:
is bounded, i.e. at worst an off-by-one on the reduced precision value compared to the exact result,
is pseudo-random, i.e. it will be different on each call,
is almost symmetrically distributed,
depends on cryptographic properties like the encryption mask, the encryption noise and the crypto-parameters.
In blue: the exact values; the red dots are approximate values due to off-centered transitions in approximate mode.
Histogram of the transition off-centering delta. Each count corresponds to a specific random mask and a specific encryption noise.
With approximate rounding, you can enable approximate clipping to further improve performance when handling overflows. Approximate clipping discards the extra overflow protection bit in the successor TLU. For consistency, a logical clipping is available when this optimization is not suitable.
When fast approximate clipping is not suitable (i.e., slower), it's better to apply logical clipping for consistency and better resilience to code changes. It has no extra cost since it's fused with the successor TLU.
Only the last step is clipped.
This sets the first precision at which approximate clipping is enabled. Starting from this precision, an extra small-precision TLU is introduced to safely remove the extra precision bit used to contain the overflow, making the successor TLU faster. For example, for a rounding to 7 bits that would otherwise end with an 8-bit TLU due to overflow, forcing a 7-bit TLU is 3x faster.
The last steps are decreased.
In this first example, we compute a minimum by taking the difference between two numbers y and x and conditionally removing this difference from y, to get either x if y > x, or y if x > y:
The companion example of above with the maximum value of two integers instead of the minimum:
And an extension for more than two values:
This example shows how to deal with an array and an encrypted index. It creates a "selection" array filled with 0s except at the requested index, which is set to 1, and sums the products of all array values with this selection array:
This example filters an encrypted array with an encrypted condition, here a greater-than comparison with an encrypted value. It packs all values with a selection bit resulting from the comparison, which allows unpacking only the filtered values:
This matrix-operation example introduces a key concept when using Concrete: trying to maximize parallelization. Instead of sequentially summing all values to compute a mean, we split the values into sub-groups and take the mean of the sub-group means:
When you have big circuits, keeping track of which node corresponds to which part of your code becomes difficult. A tagging system can simplify such situations:
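A sketch of tagging regions of a circuit (the original function is not shown; the body below is illustrative):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    with fhe.tag("preprocessing"):
        x = x + 42
    with fhe.tag("lookup"):
        x = x ** 2
    return x
```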
When you compile f
with inputset of range(10)
, you get the following graph:
If you get an error, you'll see exactly where the error occurred (e.g., which layer of the neural network, if you tag layers).
In the future, we plan to use tags for additional features (e.g., to measure performance of tagged regions), so it's a good idea to start utilizing them for big circuits.
Comparisons are not native operations in Concrete, so they need to be implemented using existing native operations (i.e., additions, clear multiplications, negations, table lookups). Concrete offers three different implementations for performing comparisons.
This is the most general implementation that can be used in any situation. The idea is:
Signed comparisons are more complex to explain, but they are supported!
The optimal chunk size is selected automatically to reduce the number of table lookups.
Chunked comparisons result in at least 5 and at most 13 table lookups.
It is used if no other implementation can be used.
== and != use a different chunk comparison and reduction strategy with fewer table lookups.
Can be used with any integers.
Very expensive.
produces
This implementation uses the fact that x [<,<=,==,!=,>=,>] y
is equal to x - y [<,<=,==,!=,>=,>] 0
, which is just a subtraction and a table lookup!
There are two major problems with this implementation:
subtraction before the TLU requires up to 2 additional bits to avoid overflows (it is 1 in most cases).
subtraction requires the same bit-width across operands.
What this means is if we are comparing uint3
and uint6
, we need to convert both of them to uint7
in some way to do the subtraction and proceed with the TLU in 7-bits. There are 4 ways to achieve this behavior.
This strategy makes sure that during bit-width assignment, both operands are assigned the same bit-width, and that bit-width contains at least the number of bits required to store x - y
. The idea is:
It will always result in a single table lookup.
It will increase the bit-width of both operands and lock them to each other across the whole circuit, which can result in significant slowdowns if the operands are used in other costly operations.
produces
This strategy will not put any constraint on bit-widths during bit-width assignment; instead, operands are cast to a bit-width that can store x - y during runtime using table lookups. The idea is:
It can result in a single table lookup, if x and y are assigned (because of other operations) the same bit-width and that bit-width can store x - y
.
Alternatively, two table lookups can be used if only one of the operands is assigned a bit-width bigger than or equal to the bit width that can store x - y
.
It will not put any constraints on the bit-widths of the operands, which is amazing if they are used in other costly operations.
It will result in at most 3 table lookups, which is still good.
If you are not doing anything else with the operands, or doing less costly operations compared to comparison, it will introduce up to two unnecessary table lookups and slow down execution compared to fhe.ComparisonStrategy.ONE_TLU_PROMOTED
.
produces
This strategy can be seen as a middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store x - y
, and the smaller operand will be cast to that bit-width during runtime. The idea is:
It can result in a single table lookup, if the smaller operand is assigned (because of other operations) the same bit-width as the bigger operand.
It will only put a constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
It will result in at most 2 table lookups, which is great.
It will increase the bit-width of the bigger operand, which can result in significant slowdowns if the bigger operand is used in other costly operations.
If you are not doing anything else with the smaller operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to fhe.ComparisonStrategy.THREE_TLU_CASTED
.
produces
This strategy can be seen as the exact opposite of the strategy above. With this, only the smaller operand will be constrained to have at least the required bit-width, and the bigger operand will be cast during runtime. The idea is:
It can result in a single table lookup, if the bigger operand is assigned (because of other operations) the same bit-width as the smaller operand.
It will only put a constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
It will result in at most 2 table lookups, which is great.
It will increase the bit-width of the smaller operand, which can result in significant slowdowns if the smaller operand is used in other costly operations.
If you are not doing anything else with the bigger operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to fhe.ComparisonStrategy.THREE_TLU_CASTED
.
produces
This implementation uses the fact that the subtraction trick is not optimal in terms of the required intermediate bit-width. The comparison result does not change whether we compare(3, 40) or compare(3, 4), so why not clip the bigger operand and then do the subtraction, to use fewer bits!
There are two major problems with this implementation:
it cannot be used when the bit-widths are the same (and in some cases even when they differ by only one bit)
subtraction still requires the same bit-width across operands.
What this means is that if we are comparing uint3 and uint6, we need to convert both of them to uint4 in some way to do the subtraction and proceed with the TLU in 4-bits. There are 2 ways to achieve this behavior.
This strategy will not put any constraint on bit-widths during bit-width assignment; instead, the smaller operand is cast to a bit-width that can store clipped(bigger) - smaller or smaller - clipped(bigger) during runtime using table lookups. The idea is:
This is a fallback implementation, so if there is a difference of 1-bit (or in some cases 2-bits) and the subtraction trick cannot be used optimally, this implementation will be used instead of fhe.ComparisonStrategy.CHUNKED
.
It can result in two table lookups if the smaller operand is assigned a bit-width bigger than or equal to the bit width that can store clipped(bigger) - smaller
or smaller - clipped(bigger)
.
It will not put any constraints on the bit-widths of the operands, which is amazing if they are used in other costly operations.
It will result in at most 3 table lookups, which is still good.
These table lookups will be on smaller bit-widths, which is great.
Cannot be used to compare integers with the same bit-width, which is very common.
produces
This strategy is similar to the strategy described above. The difference is that with this strategy, the smaller operand will be constrained to have at least the required bit-width to store clipped(bigger) - smaller
or smaller - clipped(bigger)
. The bigger operand will still be clipped to that bit-width during runtime. The idea is:
It will only put a constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
It will result in exactly 2 table lookups, which is great.
It will increase the bit-width of the bigger operand, which can result in significant slowdowns if the bigger operand is used in other costly operations.
produces
Concrete will choose the best strategy available after bit-width assignment, regardless of the specified preference.
Different strategies are good for different circuits. If you want the best runtime for your use case, you can compile your circuit with all different comparison strategy preferences, and pick the one with the lowest complexity.
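A sketch of specifying a preference, assuming the comparison_strategy_preference configuration option:

```python
from concrete import fhe

def f(x, y):
    return x < y

compiler = fhe.Compiler(f, {"x": "encrypted", "y": "encrypted"})
configuration = fhe.Configuration(
    comparison_strategy_preference=fhe.ComparisonStrategy.THREE_TLU_CASTED,
)
circuit = compiler.compile(fhe.inputset(fhe.uint3, fhe.uint6), configuration=configuration)
print(circuit.complexity)  # compare across preferences and pick the cheapest
```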
Encryption can take quite some time and memory, and transporting encrypted data can consume significant network bandwidth. Some applications use the same argument, or set of arguments, as one of the inputs. In such applications, it doesn't make sense to encrypt and transfer the arguments each time. Instead, arguments can be encrypted separately and reused:
If you have multiple arguments, the encrypt
method would return a tuple
, and if you specify None
as one of the arguments, None
is placed at the same location in the resulting tuple
(e.g., circuit.encrypt(a, None, b, c, None)
would return (encrypted_a, None, encrypted_b, encrypted_c, None)
). Each value returned by encrypt
can be stored and reused anytime.
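A sketch of this pattern, assuming a simple two-argument circuit:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted", "y": "encrypted"})
def f(x, y):
    return x + y

circuit = f.compile([(i, 7 - i) for i in range(8)])
circuit.keygen()

encrypted_x, _ = circuit.encrypt(2, None)  # encrypt x once
_, encrypted_y = circuit.encrypt(None, 5)  # encrypt y separately, possibly much later

result = circuit.run(encrypted_x, encrypted_y)
assert circuit.decrypt(result) == 7
```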
The ordering of the arguments must be kept consistent! Encrypting an x
and using it as a y
could result in undefined behavior.
Each integer in the circuit has a certain bit-width, which is determined by the inputset. These bit-widths can be observed when graphs are printed:
However, it's not possible to add 3-bit and 4-bit numbers together because their encoding is different:
The result of such an addition is a 5-bit number, which also has a different encoding:
Because of these encoding differences, we perform a graph processing step called bit-width assignment, which takes the graph and updates the bit-widths to be compatible with FHE.
After this graph processing step, the graph would look like:
Most operations cannot change the encoding, which means that the input and output bit-widths need to be the same. However, there is an operation which can change the encoding: the table lookup operation.
Let's say you have this graph:
This is the graph for (x**2) + y
where x
is 2-bits and y
is 5-bits. If the table lookup operation wasn't able to change the encoding, we'd need to make everything 6-bits. However, since the encoding can be changed, the bit-widths can be assigned like so:
In this case, we kept x
as 2-bits, but set the table lookup result and y
to be 6-bits, so that the addition can be performed.
This style of bit-width assignment is called multi-precision, and it is enabled by default. To disable it and use a single precision across the circuit, you can use the single_precision=True
configuration option.
concrete-python
supports circuit composition, which allows the output of a circuit execution to be used directly as an input without decryption. We can execute the circuit as many times as we want by forwarding outputs without decrypting intermediate values. This feature enables a new range of applications, including support for control flow in pure (cleartext) Python.
Here is a first simple example that uses composition to implement a simple counter in FHE:
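The original listing is not reproduced here; the following is a sketch of such a counter (the modulo keeps a table lookup in the circuit, so the noise is refreshed between iterations):

```python
from concrete import fhe

@fhe.compiler({"counter": "encrypted"})
def increment(counter):
    return (counter + 1) % 100

circuit = increment.compile(list(range(100)), composable=True)
circuit.keygen()

state = circuit.encrypt(0)
for _ in range(10):
    state = circuit.run(state)  # outputs are fed back as inputs, never decrypted in between

assert circuit.decrypt(state) == 10
```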
Composition is not limited to 1-to-1 circuits, it can also be used with circuits with multiple inputs and multiple outputs. Here is an example that computes the 10 first elements of the Fibonacci sequence in FHE:
Executing this script will provide the following output:
Though it is not visible in this example, there are no limitations on the number of inputs and outputs. There is also no need for specific logic regarding how we forward values from outputs to inputs; they could be switched, for instance.
The previous example shows that, to some extent, composition allows supporting iteration with cleartext iterands, that is, loops with the following shape:
Which prints:
Here we use a while loop that keeps iterating as long as the decryption of the running value is different from 1
. Again, the loop body is implemented in FHE, but the iteration control has to be in the clear.
Depending on the circuit, supporting composition may add a non-negligible overhead compared to a non-composable version. Indeed, to be composable, a circuit must satisfy two conditions:
All inputs and outputs must share the same precision and the same crypto-parameters: the most expensive parameters that would otherwise be used for a single input or output, are generalized to all inputs and outputs.
There must be a noise refresh in every path between an input and an output: some circuits will need extra PBSes to be added to allow composability.
The first point is handled automatically by the compiler, no change to the circuit is needed to ensure the right precisions are used.
For the second point, since adding a PBS has an impact on performance, we do not add them on behalf of the user. For instance, to implement a circuit that doubles an encrypted value, we would write something like:
This is a valid circuit when composable
is not used, but when compiled with composition activated, a RuntimeError: Program can not be composed: ...
error is reported, signalling that an extra PBS must be added. To solve this situation, and turn this circuit into a composable one, one can use the following snippet to add a PBS at the end of your circuit:
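One way to do this is to force an identity table lookup on the result; a sketch using fhe.univariate (this matches the noise_reset helper mentioned later in this documentation):

```python
from concrete import fhe

def noise_reset(x):
    # identity table lookup: the value is unchanged, but the noise is refreshed by a PBS
    return fhe.univariate(lambda value: value)(x)

@fhe.compiler({"counter": "encrypted"})
def double(counter):
    return noise_reset(counter * 2)

circuit = double.compile(list(range(16)), composable=True)
```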
Concrete supports native Python and NumPy operations as much as possible, but not everything in Python or NumPy is available. Therefore, we provide some extensions ourselves to improve your experience.
Allows you to wrap any univariate function into a single table lookup:
The wrapped function:
shouldn't have any side effects (e.g., no modification of global state)
should be deterministic (e.g., no random numbers)
should have the same output shape as its input (i.e., output.shape should be the same as input.shape)
each output element should correspond to a single input element (e.g., output[0]
should only depend on input[0]
)
If any of these constraints are violated, the outcome is undefined.
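For example, a sketch that wraps an arbitrary integer function (np.where keeps it element-wise, so it also works on tensors):

```python
import numpy as np
from concrete import fhe

def collatz_step(v):
    return np.where(v % 2 == 0, v // 2, 3 * v + 1)

@fhe.compiler({"x": "encrypted"})
def f(x):
    return fhe.univariate(collatz_step)(x)

circuit = f.compile(range(32))
assert circuit.encrypt_run_decrypt(7) == 22
assert circuit.encrypt_run_decrypt(8) == 4
```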
Allows you to wrap any multivariate function into a table lookup:
The wrapped function:
shouldn't have any side effects (e.g., no modification of global state)
should be deterministic (e.g., no random numbers)
should have input shapes which are broadcastable to the output shape (i.e., input.shape
should be broadcastable to output.shape
for all inputs)
each output element should correspond to a single input element (e.g., output[0]
should only depend on input[0]
of all inputs)
If any of these constraints are violated, the outcome is undefined.
Only 2D convolutions without padding and with one group are currently supported.
Only 2D maxpooling without padding and up to 15-bits is currently supported.
Allows you to create encrypted arrays:
Currently, only scalars can be used to create arrays.
Allows you to create an encrypted scalar zero:
Allows you to create an encrypted tensor of zeros:
Allows you to create an encrypted scalar one:
Allows you to create an encrypted tensor of ones:
Allows you to hint properties of a value. Imagine you have this circuit:
You'd expect all of a
, b
, and c
to be 8-bits, but because inputset is very small, this code could print:
The first solution in these cases should be to use a bigger inputset, but it can still be tricky to solve with the inputset alone. That's where the hint extension comes into play. Hints are a way to provide extra information to the compilation process:
Bit-width hints are for constraining the minimum number of bits in the encoded value. If you hint a value to be 8-bits, it means it should be at least uint8
or int8
.
To fix f
using hints, you can do:
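The original fix is not shown; here is a generic sketch of the idea (the function body is hypothetical):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted", "y": "encrypted"})
def f(x, y):
    # force the inputs to be considered at least 8-bits,
    # even if the inputset only exercises small values
    x = fhe.hint(x, bit_width=8)
    y = fhe.hint(y, bit_width=8)
    return x + y

circuit = f.compile([(0, 0), (1, 1), (2, 2)])
```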
Hints are only applied to the value being hinted, and no other value. If you want the hint to be applied to multiple values, you need to hint all of them.
you'll always see:
regardless of the bounds.
Alternatively, you can use it to make sure a value can store certain integers:
Allows you to perform ReLU operation, with the same semantic as x if x >= 0 else 0
:
ReLU extension can be converted in two different ways:
With a single TLU on the original bit-width.
With multiple TLUs on smaller bit-widths.
For small bit-widths, the first one is better as it'll have a single TLU on a small bit-width. For big bit-widths, the second one is better as it won't have a TLU on a big bit-width.
The decision between the two can be controlled with relu_on_bits_threshold: int = 7
configuration option:
relu_on_bits_threshold=5
means:
1-bit to 4-bits would be converted using the first way (i.e., using TLU)
5-bits and more would be converted using the second way (i.e., using bits)
There is another option to customize the implementation relu_on_bits_chunk_size: int = 2
:
relu_on_bits_chunk_size=4
means:
When using the second implementation:
Here is a script showing how execution cost is impacted when changing these values:
You might need to run the script twice to avoid crashing when plotting.
The script will show the following figure:
The default values of these options are set based on simple circuits. How they affect performance will depend on the circuit, so play around with them to get the most out of this extension.
Conversion with the second method (i.e., using chunks) only works in Native
encoding, which is usually selected when all table lookups in the circuit are below or equal to 8 bits.
Allows you to perform ternary if operation, with the same semantic as x if condition else y
:
Allows you to copy the value:
Identity extension can be used to clone an input while changing its bit-width. Imagine you have return x**2, x+100
where x
is 2-bits. Because of x+100
, x
will be assigned 7-bits and x**2
would be more expensive than it needs to be. If return x**2, fhe.identity(x)+100
is used instead, x
will be assigned 2-bits as it should and fhe.identity(x)
will be assigned 7-bits as necessary.
Identity extension only works in Native
encoding, which is usually selected when all table lookups in the circuit are below or equal to 8 bits.
Used for creating a random inputset with the given specifications:
The result will have 100 inputs by default which can be customized using the size keyword argument:
# Compression
Fully Homomorphic Encryption (FHE) needs both ciphertexts (encrypted data) and evaluation keys to carry out the homomorphic evaluation of a function. Both elements are large, which may critically affect the application's performance depending on the use case, application deployment, and the method for transmitting and storing ciphertexts and evaluation keys.
During compilation, you can enable compression options to enforce the use of compression features. The two available compression options are:
compress_evaluation_keys: bool = False,
This specifies that serialization takes the compressed form of evaluation keys.
compress_input_ciphertexts: bool = False,
This specifies that serialization takes the compressed form of input ciphertexts.
You can see the impact of compression by comparing the size of the serialized form of input ciphertexts and evaluation keys with a sample code.
The compression factor largely depends on the cryptographic parameters identified and the compression algorithms selected during the compilation.
Currently, Concrete uses the seeded compression algorithms. These algorithms rely on the fact that CSPRNGs are deterministic. Consequently, the chain of random values can be replaced by the seed and later recalculated using the same seed.
Typically, the size of a ciphertext is (lwe dimension + 1) * 8 bytes, while the size of a seeded ciphertext is constant, equal to 3 * 8 bytes. Thus, the compression factor for ciphertexts ranges from a hundred to several thousand. The compression factor of evaluation keys is harder to characterize; it typically ranges between 0 and 10.
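For instance, with a hypothetical LWE dimension of 2048, a regular ciphertext takes (2048 + 1) * 8 = 16,392 bytes while its seeded form takes 3 * 8 = 24 bytes, a compression factor of roughly 680.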
Please note that while compression saves bandwidth and disk space, it incurs the cost of decompression. Currently, decompression occurs more or less lazily during FHE evaluation, without any user control.
The idea of homomorphic encryption is that you can compute on ciphertexts without knowing the messages they encrypt. A scheme is said to be fully homomorphic if an unlimited number of additions and multiplications are supported (x is a plaintext and E[x] is the corresponding ciphertext):
homomorphic addition: E[x] + E[y] = E[x + y]
homomorphic multiplication: E[x] * E[y] = E[x * y]
homomorphic univariate function evaluation: f(E[x]) = E[f(x)]
The default failure probability in Concrete is set for the whole program and is 1/100,000 by default. This means that 1 execution out of every 100,000 may result in an incorrect output. To have a lower probability of error, you need to change the cryptographic parameters, likely resulting in worse performance. On the other side of this trade-off, allowing a higher probability of error will likely speed up operations.
As explained earlier, the challenge for developers is to adapt their code to fit FHE constraints. In this document we have collected some common examples to illustrate the kind of optimization one can do to get better performance.
All code snippets provided here are temporary workarounds. In future versions of Concrete, some functions described here could be directly available in a more generic and efficient form. These code snippets come from support answers in our community forum.
Note the use of the composable flag in the compile call. It instructs the compiler to ensure the circuit can be called on its own outputs (see the composition documentation for more details). Executing this script should give the following output:
See below for explanations about the use of noise_reset.
With this pattern, we can also support unbounded loops or complex dynamic conditions, as long as the condition is computed in pure cleartext Python. Here is an example:
Multivariate functions cannot be called with inputs.
Allows you to perform a convolution operation, with the same semantics as the corresponding ONNX operator:
Allows you to perform a maxpool operation, with the same semantics as the corresponding ONNX operator:
fhe.if_then_else
is just an alias for .
| Strategy | Minimum # of TLUs | Maximum # of TLUs | Can increase the bit width of the inputs |
| --- | --- | --- | --- |
| CHUNKED | 5 | 13 | |
| ONE_TLU_PROMOTED | 1 | 1 | ✓ |
| THREE_TLU_CASTED | 1 | 3 | |
| TWO_TLU_BIGGER_PROMOTED_SMALLER_CASTED | 1 | 2 | ✓ |
| TWO_TLU_BIGGER_CASTED_SMALLER_PROMOTED | 1 | 2 | ✓ |
| THREE_TLU_BIGGER_CLIPPED_SMALLER_CASTED | 2 | 3 | |
| TWO_TLU_BIGGER_CLIPPED_SMALLER_PROMOTED | 2 | 2 | ✓ |
Integers in Concrete are encrypted and processed according to a set of cryptographic parameters. By default, multiple sets of such parameters are selected by the Concrete Optimizer. This might not be the best approach for every use case, and there is the option to use mono parameters instead.
When multi parameters are enabled, a different set of parameters is selected for each bit-width in the circuit, which results in:
Faster execution (generally).
Slower key generation.
Larger keys.
Larger memory usage during execution.
To disable it, you can use the parameter_selection_strategy=fhe.ParameterSelectionStrategy.MONO configuration option.
When multi parameters are enabled, you can select the level of circuit partitioning with the multi_parameter_strategy configuration option.
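A sketch of both options (the toy function is only for illustration):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return (x + 1) ** 2

# a single set of parameters for the whole circuit
mono_circuit = f.compile(range(10), parameter_selection_strategy=fhe.ParameterSelectionStrategy.MONO)

# keep multi parameters, but choose how the circuit is partitioned
multi_circuit = f.compile(range(10), multi_parameter_strategy=fhe.MultiParameterStrategy.PRECISION)
```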
If you are trying to compile a regular function, you can use the decorator interface instead of the explicit Compiler
interface to simplify your code:
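A minimal sketch of the decorator interface:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return x + 42

circuit = f.compile(range(10))
assert circuit.encrypt_run_decrypt(3) == 45
```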
This decorator is a way to add the compile
method to the function object without changing its name elsewhere.
Modules are still experimental. They are only compatible with composition, which means the outputs of every function can be used directly as inputs for other functions. The crypto-parameters used in this mode are large, and thus execution is likely to be slow.
In some cases, deploying a server that can execute different functions is useful. Concrete can compile FHE modules, which can contain many different functions to execute at once. All the functions are compiled in a single step and can be deployed with the same artifacts. Here is an example:
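The following is a sketch of such a module, assuming the @fhe.module and @fhe.function decorators (modules are experimental, so the exact API may differ):

```python
from concrete import fhe

@fhe.module()
class MyModule:
    @fhe.function({"x": "encrypted"})
    def inc(x):
        return (x + 1) % 16

    @fhe.function({"x": "encrypted"})
    def dec(x):
        return (x - 1) % 16
```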
You can compile the FHE module MyModule using the compile method. To do that, you need to provide a dictionary of inputsets, one for every function:
Note that here we can see a current limitation of modules: the configuration must use the parameter_selection_strategy of v0 and activate the composable flag.
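A sketch of the corresponding compile call, honoring the configuration constraints just mentioned (the enum member name V0 is an assumption):

```python
my_module = MyModule.compile(
    {"inc": range(16), "dec": range(16)},
    parameter_selection_strategy=fhe.ParameterSelectionStrategy.V0,
    composable=True,
)
```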
After the module has been compiled, we can encrypt and call the different functions in the following way:
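A sketch of one possible flow, assuming each compiled function exposes encrypt/run/decrypt helpers:

```python
x_enc = my_module.inc.encrypt(5)
x_inc_enc = my_module.inc.run(x_enc)
assert my_module.inc.decrypt(x_inc_enc) == 6

y_enc = my_module.dec.encrypt(5)
y_dec_enc = my_module.dec.run(y_enc)
assert my_module.dec.decrypt(y_dec_enc) == 4
```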
Direct circuits are still experimental. It is very easy to make mistakes (e.g., due to no overflow checks or type coercion) while using direct circuits, so utilize them with care.
For some applications, the data types of inputs, intermediate values, and outputs are known (e.g., for manipulating bytes, you would want to use uint8). Using inputsets to determine bounds in these cases is not necessary, and can even be error-prone. Therefore, another interface for defining such circuits is introduced:
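A minimal sketch, assuming the @fhe.circuit decorator and fhe type annotations:

```python
from concrete import fhe

@fhe.circuit({"x": "encrypted"})
def circuit(x: fhe.uint8):
    return x + 10

assert circuit.encrypt_run_decrypt(100) == 110
```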
There are a few differences between direct circuits and traditional circuits:
Remember that the resulting dtype for each operation will be determined by its inputs. This can lead to some unexpected results if you're not careful (e.g., if you do -x
where x: fhe.uint8
, you won't receive a negative value as the result will be fhe.uint8
as well)
There is no inputset evaluation when using fhe types in .astype(...)
calls (e.g., np.sqrt(x).astype(fhe.uint4)
), so the bit width of the output cannot be determined.
Specify the resulting data type in univariate extension (e.g., fhe.univariate(function, outputs=fhe.uint4)(x)
), for the same reason as above.
Be careful with overflows. With inputset evaluation, you'll get bigger bit widths but no overflows. With direct definition, you must ensure that there aren't any overflows manually.
Let's review a more complicated example to see how direct circuits behave:
This prints:
Here is the breakdown of the assigned data types:
As you can see, %8
is subtraction of two unsigned values, and the result is unsigned as well. In the case that c > d
, we have an overflow, and this results in undefined behavior.
During development, the speed of homomorphic execution can be a blocker for fast prototyping. You could call the function you're trying to compile directly, of course, but it won't be exactly the same as FHE execution, which has a certain probability of error (see Exactness).
To overcome this issue, simulation is introduced:
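A minimal sketch:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return (x + 1) ** 2

circuit = f.compile(range(10), fhe_simulation=True)
print(circuit.simulate(4))  # evaluates on clear values while mimicking FHE behavior
```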
After the simulation runs, it prints the following:
There are some operations which are not supported in simulation yet; they will result in compilation failures. In that case, you can fall back to graph execution using circuit.graph(...) instead of circuit.simulate(...). This won't simulate FHE, but it will evaluate the computation graph, which is like simulating the operations without any of the errors introduced by FHE.
Big circuits can take a long time to execute, and waiting for execution to finish without having any indication of its progress can be frustrating. For this reason, a progressbar feature is introduced:
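A minimal sketch of enabling it:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return (x + 1) ** 2

circuit = f.compile(range(10), show_progress=True, progress_title="Evaluation:")
result = circuit.encrypt_run_decrypt(5)
```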
When you run this code, you will see a progressbar like:
And as the circuit progresses, this progressbar would fill:
It is not a uniform progressbar. For example, when the progressbar shows 50%, this does not mean that half of the execution is performed in terms of seconds. Instead, it means that half of the nodes in the graph have been calculated. Since different node types can take a different amount of time, this should not be used to get an ETA.
Once the progressbar fills and execution completes, you will see the following figure:
You can convert your compiled circuit into its textual representation by converting it to string:
If you just want to see the output on your terminal, you can directly print it as well:
Formatting is just for debugging purposes; it's not possible to recreate the circuit from its textual representation.
You can use the draw
method of your compiled circuit to draw it:
This method will draw the circuit on a temporary PNG file and return the path to this file.
Drawing functionality requires installing the package with the full feature set (see the installation section to learn how to do that).
You can show the drawing in a Jupyter notebook, like this:
Or, you can use the show
option of the draw
method to show the drawing with matplotlib
.
Beware that this will clear the matplotlib plots you have.
Lastly, you can save the drawing to a specific path:
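A sketch of the three usages, assuming a compiled circuit named circuit and assuming the saving parameter is called save_to:

```python
path_to_drawing = circuit.draw()                  # draws to a temporary PNG and returns its path
circuit.draw(show=True)                           # shows the drawing with matplotlib (clears existing plots)
circuit.draw(save_to="/tmp/circuit/drawing.png")  # saves to a specific path (assumed keyword)
```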
In this section, you will learn how to debug the compilation process easily and find help in the case that you cannot resolve your issue.
There are two options that you can use to understand what's happening under the hood during the compilation process.
compiler_verbose_mode will print the passes applied by the compiler and let you see the transformations it performs. In the case of a crash, it can also help narrow down the crash location.
compiler_debug_mode is a much more detailed version of the verbose mode. This is even better for crashes.
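A sketch of enabling them at compilation time:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return x ** 2

circuit = f.compile(range(10), compiler_verbose_mode=True)
# or, for much more detail:
# circuit = f.compile(range(10), compiler_debug_mode=True)
```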
These flags might not work as expected in Jupyter notebooks as they output to stderr directly from C++.
Concrete has an artifact system to simplify the process of debugging issues.
In case of compilation failures, artifacts are exported automatically to the .artifacts
directory under the working directory. Let's intentionally create a compilation failure to show what is exported.
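The original snippet is not shown here, but any function with a floating-point output reproduces the failure, for example:

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return np.sin(x)  # floating-point output: not supported

try:
    circuit = f.compile(range(10))
except Exception as error:
    print(error)  # artifacts are exported to .artifacts automatically
```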
This function fails to compile because Concrete does not support floating-point outputs. When you try to compile it, an exception will be raised and the artifacts will be exported automatically. If you go to the .artifacts
directory under the working directory, you'll see the following files:
This file contains information about your setup (i.e., your operating system and python version).
This file contains information about Python packages and their versions installed on your system.
This file contains information about the function you tried to compile.
This file contains information about the encryption status of the parameters of the function you tried to compile.
This file contains the textual representation of the initial computation graph right after tracing.
This file contains the textual representation of the final computation graph right before MLIR conversion.
This file contains information about the error that was received.
Manual exports are mostly used for visualization. They can be very useful for demonstrations. Here is how to perform one:
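A sketch, assuming the fhe.DebugArtifacts helper and the artifacts compile keyword:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return x + 42

artifacts = fhe.DebugArtifacts("/tmp/custom/export/path")
circuit = f.compile(range(10), artifacts=artifacts)
artifacts.export()
```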
If you go to the /tmp/custom/export/path
directory, you'll see the following files:
This file contains the textual representation of the initial computation graph right after tracing.
This file contains the textual representation of the intermediate computation graph after fusing.
This file contains the textual representation of the final computation graph right before MLIR conversion.
This file contains information about the MLIR of the function which was compiled using the provided inputset.
This file contains information about the client parameters chosen by Concrete.
You can seek help with your issue by asking a question directly in the community forum.
If you cannot find a solution in the community forum, or if you have found a bug in the library, you could create an issue in our GitHub repository.
In case of a bug, try to:
minimize randomness;
minimize your function as much as possible while keeping the bug - this will help to fix the bug faster;
include your inputset in the issue;
include reproduction steps in the issue;
include debug artifacts in the issue.
In case of a feature request, try to:
give a minimal example of the desired behavior;
explain your use case.
Concrete can be customized using Configurations:
You can overwrite individual options as kwargs to the compile
method:
Or you can combine both:
Additional kwargs passed to the compile method take higher precedence: if you set an option both in the configuration and in the compile call, the value given to compile will be used.
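A sketch of the three styles:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return x + 42

configuration = fhe.Configuration(p_error=0.01, show_graph=True)

circuit = f.compile(range(10), configuration=configuration)                  # configuration object
circuit = f.compile(range(10), show_mlir=True)                               # individual kwargs
circuit = f.compile(range(10), configuration=configuration, p_error=0.001)   # both; the kwarg wins
```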
show_graph: Optional[bool] = None
Print computation graph during compilation. True
means always print, False
means never print, None
means print depending on verbose configuration below.
show_mlir: Optional[bool] = None
Print MLIR during compilation. True
means always print, False
means never print, None
means print depending on verbose configuration below.
show_optimizer: Optional[bool] = None
Print optimizer output during compilation. True
means always print, False
means never print, None
means print depending on verbose configuration below.
show_statistics: Optional[bool] = None
Print circuit statistics during compilation. True
means always print, False
means never print, None
means print depending on verbose configuration below.
verbose: bool = False
Print details related to compilation.
dump_artifacts_on_unexpected_failures: bool = True
Export debugging artifacts automatically on compilation failures.
auto_adjust_rounders: bool = False
Adjust rounders automatically.
p_error: Optional[float] = None
Error probability for individual table lookups. If set, all table lookups will have a probability of a non-exact result smaller than the set value. See Exactness to learn more.
global_p_error: Optional[float] = None
Global error probability for the whole circuit. If set, the whole circuit will have a probability of a non-exact result smaller than the set value. See Exactness to learn more.
single_precision: bool = False
Use single precision for the whole circuit.
parameter_selection_strategy: (fhe.ParameterSelectionStrategy) = fhe.ParameterSelectionStrategy.MULTI
Set how cryptographic parameters are selected.
multi_parameter_strategy: fhe.MultiParameterStrategy = fhe.MultiParameterStrategy.PRECISION
Set the level of circuit partionning when using fhe.ParameterSelectionStrategy.MULTI
.
PRECISION
: all TLUs with the same input precision have their own parameters.
PRECISION_AND_NORM2
: all TLUs with the same input precision and output norm2 have their own parameters.
loop_parallelize: bool = True
Enable loop parallelization in the compiler.
dataflow_parallelize: bool = False
Enable dataflow parallelization in the compiler.
auto_parallelize: bool = False
Enable auto parallelization in the compiler.
enable_unsafe_features: bool = False
Enable unsafe features.
use_insecure_key_cache: bool = False (Unsafe)
Use the insecure key cache.
insecure_key_cache_location: Optional[Union[Path, str]] = None
Location of insecure key cache.
show_progress: bool = False,
Display a progress bar during circuit execution
progress_title: str = "",
Title of the progress bar
progress_tag: Union[bool, int] = False,
How many nested tag elements to display with the progress bar. True
means all tag elements and False
disables the display. 2
will display elmt1.elmt2
fhe_simulation: bool = False
Enable FHE simulation. Can be enabled later using circuit.enable_fhe_simulation()
.
fhe_execution: bool = True
Enable FHE execution. Can be enabled later using circuit.enable_fhe_execution()
.
compiler_debug_mode: bool = False,
Enable/disable debug mode of the compiler. This can show a lot of information, including passes and pattern rewrites.
compiler_verbose_mode: bool = False,
Enable/disable verbose mode of the compiler. This mainly shows logs from the compiler, and is less verbose than the debug mode.
comparison_strategy_preference: Optional[Union[ComparisonStrategy, str, List[Union[ComparisonStrategy, str]]]] = None
Specify a preference for comparison strategies; can be a single strategy or an ordered list of strategies. See the comparison documentation to learn more.
bitwise_strategy_preference: Optional[Union[BitwiseStrategy, str, List[Union[BitwiseStrategy, str]]]] = None
Specify a preference for bitwise strategies; can be a single strategy or an ordered list of strategies. See the bitwise documentation to learn more.
shifts_with_promotion: bool = True,
Enable promotions in encrypted shifts instead of casting at runtime. See the bitwise documentation to learn more.
composable: bool = False,
Specify that the function must be composable with itself.
relu_on_bits_threshold: int = 7,
Bit-width at which the ReLU extension starts to be implemented using bits.
relu_on_bits_chunk_size: int = 3,
Chunk size of the ReLU extension when the bits-based implementation is used.
if_then_else_chunk_size: int = 3
Chunk size to use when converting fhe.if_then_else
extension.
rounding_exactness : Exactness = fhe.Exactness.EXACT
Set the default exactness mode for the rounding operation:
EXACT
: the threshold for rounding up or down is exactly centered between the upper and lower values,
APPROXIMATE
: faster, but the threshold for rounding up or down is only approximately centered, with a pseudo-random shift.
Precise and more complete behavior is described in the rounding documentation.
approximate_rounding_config : ApproximateRoundingConfig = fhe.ApproximateRoundingConfig():
Provides finer control over approximate rounding:
to enable exact clipping,
and/or approximate clipping, which makes overflow protection faster.
optimize_tlu_based_on_measured_bounds : bool = False
Enables TLU optimizations based on measured bounds.
Not enabled by default as it could result in unexpected overflows during runtime.
enable_tlu_fusing : bool = True
Enables TLU fusing to reduce the number of table lookups.
print_tlu_fusing : bool = False
Enables printing TLU fusing to see which table lookups are fused.
compress_evaluation_keys: bool = False,
This specifies that serialization takes the compressed form of evaluation keys.
compress_input_ciphertexts: bool = False,
This specifies that serialization takes the compressed form of input ciphertexts.
Concrete generates keys for you implicitly when they are needed and if they have not already been generated. This is useful for development, but it's not flexible (or secure!) for production. For such cases, an explicit key management API is provided to easily generate and reuse keys.
Let's start by defining a circuit:
Circuits have a property called keys
of type fhe.Keys
, which has several utility functions dedicated to key management!
To explicitly generate keys for a circuit, you can use:
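Assuming circuit is the compiled circuit defined above:

```python
circuit.keys.generate()
```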
Generated keys are stored in memory upon generation, unencrypted.
And it's possible to set a custom seed for reproducibility:
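For example (assuming the seed keyword):

```python
circuit.keys.generate(seed=111)  # same seed, same keys
```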
Do not specify the seed manually in a production environment!
To serialize keys, say to send it across the network:
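A sketch:

```python
serialized_keys: bytes = circuit.keys.serialize()
```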
Keys are not serialized in encrypted form! Please make sure you keep them in a safe environment, or encrypt them manually after serialization.
To deserialize the keys back, after receiving serialized keys:
Once you have a valid fhe.Keys
object, you can directly assign it to the circuit:
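A sketch:

```python
keys = fhe.Keys.deserialize(serialized_keys)
circuit.keys = keys
```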
If assigned keys are generated for a different circuit, an exception will be raised.
You can also use the filesystem to store the keys directly, without needing to deal with serialization and file management yourself:
Keys are not saved encrypted! Please make sure you store them in a safe environment, or encrypt them manually after saving.
After keys are saved to disk, you can load them back via:
If you want to generate keys in the first run and reuse the keys in consecutive runs:
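A combined sketch of the three filesystem helpers mentioned above (the last method name is an assumption):

```python
circuit.keys.save("/tmp/my-keys")  # stored unencrypted
circuit.keys.load("/tmp/my-keys")

# generate and save on the first run, load on subsequent runs
circuit.keys.load_if_exists_generate_and_save_otherwise("/tmp/my-keys")
```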
Concrete analyzes all compiled circuits and calculates some statistics. These statistics can be used to find bottlenecks and compare circuits. Statistics are calculated in terms of the following basic operations:
clear addition: x + y where x is encrypted and y is clear
encrypted addition: x + y where both x and y are encrypted
clear multiplication: x * y where x is encrypted and y is clear
encrypted negation: -x where x is encrypted
key switch: building block for table lookups
packing key switch: building block for table lookups
programmable bootstrapping: building block for table lookups
You can print all statistics using the show_statistics
configuration option:
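A minimal sketch:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return (x + 42) * 2

circuit = f.compile(range(10), show_statistics=True)
```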
This code will print:
Each of these properties can be directly accessed on the circuit (e.g., circuit.programmable_bootstrap_count
).
Circuit analysis also considers tags!
Imagine you have a neural network with 10 layers, each of them tagged. You can easily see the number of additions and multiplications required for matrix multiplications per layer:
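A smaller sketch of the tagging mechanism itself, assuming the fhe.tag context manager:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    with fhe.tag("preprocessing"):
        x = x + 1
    with fhe.tag("squaring"):
        x = x ** 2
    return x

circuit = f.compile(range(10), show_statistics=True)  # statistics are also broken down per tag
```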
The concrete backends are implementations of the cryptographic primitives of the Zama variant of TFHE. The compiler emits code which combines call into these backends to perform more complex homomorphic operations.
There are client and server features.
Client features are:
private (G)LWE key generation (currently random bits)
encryption of ciphertexts using a private key
public key generation from private keys for keyswitch, bootstrap or private packing
(de)serialization of ciphertexts and public keys (also needed server side)
Server features are homomorphic operations on ciphertexts:
linear operations (multisums with plain weights)
keyswitch
simple PBS
WoP PBS
There are currently 2 backends:
concrete-cpu
which implements both client and server features targeting the CPU.
concrete-cuda
which implements only server features, targeting GPUs to accelerate homomorphic circuit evaluation.
The compiler uses concrete-cpu
for the client and can use either concrete-cpu
or concrete-cuda
for the server.
Fusing is the act of combining multiple nodes into a single node, which is converted to a table lookup.
Code related to fusing is in the frontends/concrete-python/concrete/fhe/compilation/utils.py
file. Fusing can be performed using the fuse
function.
Within fuse
:
We loop until there are no more subgraphs to fuse.
Within each iteration:
2.1. We find a subgraph to fuse.
2.2. We search for a terminal node that is appropriate for fusing.
2.3. We crawl backwards to find the closest integer nodes to this node.
2.4. If there is a single node as such, we return the subgraph from this node to the terminal node.
2.5. Otherwise, we try to find the lowest common ancestor (lca) of this list of nodes.
2.6. If an lca doesn't exist, we say this particular terminal node is not fusable, and we go back to search for another subgraph.
2.7. Otherwise, we use this lca as the input of the subgraph and continue with subgraph
node creation below.
2.8. We convert the subgraph into a subgraph
node by checking fusability status of the nodes of the subgraph in this step.
2.9. We substitute the subgraph
node to the original graph.
With the current implementation, we cannot fuse subgraphs that depend on multiple encrypted values where those values don't have a common lca (e.g., np.round(np.sin(x) + np.cos(y))
).
The Encrypted Game of Life in Python Using Concrete - November 7, 2023
Encrypted Key-value Database Using Homomorphic Encryption - March 16, 2023
Compile composable functions with Concrete - February 22, 2024
How to use dynamic table look-ups using Concrete - October 27, 2023
Dive into Concrete - Zama's Fully Homomorphic Encryption Compiler - October 4, 2023
After developing your circuit, you may want to deploy it. However, sharing the details of your circuit with every client might not be desirable. As well as this, you might want to perform the computation on dedicated servers. In this case, you can use the Client
and Server
features of Concrete.
You can develop your circuit using the techniques discussed in previous chapters. Here is a simple example:
Once you have your circuit, you can save everything the server needs:
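Assuming circuit is the compiled circuit from the example above:

```python
circuit.server.save("server.zip")
```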
Then, send server.zip
to your computation server.
You can load the server.zip
you get from the development machine:
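A sketch:

```python
from concrete import fhe

server = fhe.Server.load("server.zip")
```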
You will need to wait for requests from clients. The first likely request is for ClientSpecs
.
Clients need ClientSpecs
to generate keys and request computation. You can serialize ClientSpecs
:
Then, you can send it to the clients requesting it.
After getting the serialized ClientSpecs
from a server, you can create the client object:
Once you have the Client
object, you can perform key generation:
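Assuming client is the fhe.Client created from the received ClientSpecs:

```python
client.keys.generate()
```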
This method generates encryption/decryption keys and evaluation keys.
The server needs access to the evaluation keys that were just generated. You can serialize your evaluation keys as shown:
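A sketch (the evaluation_keys attribute is an assumption):

```python
serialized_evaluation_keys: bytes = client.evaluation_keys.serialize()
```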
After serialization, send the evaluation keys to the server.
Serialized evaluation keys are very large, so you may want to cache them on the server instead of sending them with each request.
The next step is to encrypt your inputs and request the server to perform some computation. This can be done in the following way:
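A sketch for a circuit with a single encrypted argument (names are assumptions based on the serialization API):

```python
argument = client.encrypt(7)
serialized_argument: bytes = argument.serialize()
```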
Then, send the serialized arguments to the server.
Once you have serialized evaluation keys and serialized arguments, you can deserialize them:
You can perform the computation, as well:
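A sketch covering both the deserialization and the execution steps described above (the deserialization helpers are assumptions):

```python
deserialized_evaluation_keys = fhe.EvaluationKeys.deserialize(serialized_evaluation_keys)
deserialized_argument = fhe.Value.deserialize(serialized_argument)

public_result = server.run(deserialized_argument, evaluation_keys=deserialized_evaluation_keys)
serialized_result: bytes = public_result.serialize()
```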
Then, send the serialized result back to the client. After this, the client can decrypt to receive the result of the computation.
Once you have received the serialized result of the computation from the server, you can deserialize it:
Then, decrypt the result:
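A sketch (fhe.Value.deserialize is an assumption):

```python
deserialized_result = fhe.Value.deserialize(serialized_result)
result = client.decrypt(deserialized_result)
```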
Deploying a module follows the same logic as the deployment of circuits. Assuming a module compiled in the following way:
You can extract the server from the module and save it in a file:
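A sketch, assuming modules expose a server attribute like circuits do:

```python
my_module.server.save("server.zip")
```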
The only noticeable difference between the deployment of modules and the deployment of circuits is that the methods Client::encrypt
, Client::decrypt
and Server::run
must contain an extra function_name
argument specifying the name of the targeted function.
The encryption of an argument for the inc
function of the module would be:
The execution of the inc
function would be:
Finally, decrypting a result from the execution of dec
would be:
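A combined sketch of the three calls above (the surrounding client, server, and evaluation key objects are assumed to exist, as in the circuit deployment flow):

```python
x_enc = client.encrypt(5, function_name="inc")
result_enc = server.run(x_enc, evaluation_keys=evaluation_keys, function_name="inc")
result = client.decrypt(result_enc, function_name="inc")
# results coming from `dec` would be decrypted with function_name="dec" instead
```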
The concrete backends are implementations of the cryptographic primitives of the Zama variant of TFHE.
There are client features (private and public key generation, encryption and decryption) and server features (homomorphic operations on ciphertexts using public keys).
Considering that performance improvements are mostly beneficial for the server operations, and that the client needs to be portable across the variety of clients that may exist, we expect mostly server backends to be added to the compiler to improve performance (e.g., by using specialized hardware).
The server backend should expose C or C++ functions to do TFHE operations using the current ciphertext and key memory representation (or functions to change representation). A backend can support only a subset of the current TFHE operations.
The most common operations one would be expected to add are WP-PBS (standard TFHE programmable bootstrap), keyswitch and WoP (without padding bootstrap).
Linear operations may also be supported but may need more work since their introduction may interfere with other compilation passes. The following example does not include this.
We will detail how concrete-cuda
is integrated in the compiler. Adding a new server feature backend (for non linear operations) should be quite similar. However, if you want to integrate a backend but it does not fit with this description, please open an issue or contact us to discuss the integration.
In compilers/concrete-compiler/Makefile
the variable CUDA_SUPPORT
has been added and set to OFF
(CUDA_SUPPORT?=OFF
) by default
the variables CUDA_SUPPORT
and CUDA_PATH
are passed to CMake
In compilers/concrete-compiler/compiler/include/concretelang/Runtime/context.h
, the RuntimeContext
struct is enriched with state to manage the backend resources (behind a #ifdef CONCRETELANG_CUDA_SUPPORT
).
In compilers/concrete-compiler/compiler/lib/Runtime/wrappers.cpp
, the cuda backend server functions are added (behind a #ifdef CONCRETELANG_CUDA_SUPPORT
)
The pass ConcreteToCAPI
is modified to have a flag to insert calls to these new wrappers instead of the cpu ones (the code calling this pass is modified accordingly).
It may be possible to replace the cpu wrappers (with a compilation flag) instead of adding new ones to avoid having to change the pass.
In compilers/concrete-compiler/CMakeLists.txt
a Section #Concrete Cuda Configuration
has been added Other CMakeLists.txt
have also been modified (or added) with if(CONCRETELANG_CUDA_SUPPORT)
guard to handle header includes, linking...
concrete-optimizer
is a tool that selects appropriate cryptographic parameters for a given fully homomorphic encryption (FHE) computation. These parameters have an impact on the security, correctness, and efficiency of the computation.
The computation is guaranteed to be secure with the given level of security (see here for details) which is typically 128 bits. The correctness of the computation is guaranteed up to a given failure probability. A surrogate of the execution time is minimized which allows for efficient FHE computation.
The cryptographic parameters are degrees of freedom in the FHE algorithms (bootstrapping, keyswitching, etc.) that need to be fixed. The search space for possible crypto-parameters is finite but extremely large. The role of the optimizer is to quickly find the most efficient crypto-parameters possible while guaranteeing security and correctness.
The security level is chosen by the user. We typically operate at a fixed security level, such as 128 bits, to ensure that there is never a trade-off between security and efficiency. This constraint imposes a minimum amount of noise in all ciphertexts.
An independent public research tool, the lattice estimator, is used to estimate the security level. The lattice estimator is maintained by FHE experts. For a given set of crypto-parameters, this tool considers all possible attacks and returns a security level.
For each security level, a parameter curve of the appropriate minimal error level is pre-computed using the lattice estimator, and is used as an input to the optimizer. Learn more about the parameter curves here.
Correctness decreases as the level of noise increases. Noise accumulates during homomorphic computation until it is actively reduced via bootstrapping. Too much noise can lead to the result of a computation being inaccurate or completely incorrect.
Before optimization, we compute a noise bound that guarantees a given error level (under the assumption that noise growth is correctly managed via bootstrapping). The noise growth depends on a critical quantity: the 2-norm of any dot product (or equivalent) present in the calculus. This 2-norm changes the scale of the noise, so we must reduce it sufficiently for the next dot product operation whenever we reduce the noise.
The user can control error probability in two ways: via the PBS error probability and the global error probability.
The PBS error probability controls correctness locally (i.e., represents the error probability of a single PBS operation), while the global error probability focuses on the overall computation result (i.e., represents the error probability of the entire computation). These probabilities are related, and choosing which one to use may depend on the specific use case.
Efficiency decreases as more precision is required, e.g. 7-bits versus 8-bits. The larger the 2-norm is, the bigger the noise will be after a dot product. To remain below the noise bound, we must ensure that the inputs to the dot product have a sufficiently small noise level. The smaller this noise is, the slower the previous bootstrapping will be. Therefore, the larger the 2-norm is, the slower the computation will be.
The optimization prioritizes security and correctness. This means that the security level (or the probability of correctness) could, in practice, be a bit higher than the level which is requested by the user.
In the simplest case, the optimizer performs an exhaustive search in the full parameter space and selects the best solution. While the space to explore is huge, exact lower bound cuts are used to avoid exploring regions which are guaranteed to not contain an optimal point. This makes the process both fast and exhaustive. This case is called mono-parameter, where all parameters are shared by the whole computation graph.
In more complex cases, the optimizer iteratively performs an exhaustive search, with lower bound cuts in a wide subspace of the full parameter space, until it converges to a locally optimal solution. Since the wide subspace is large and multi-dimensional, it should not be trapped in a poor locally optimal solution. The more complex case is called multi-parameter, where different calculus operations have tailored parameters.
One can have a look at reference crypto-parameters for each security level (for a given correctness). This provides insight into the relation between the calculus content (i.e., maximum precision, maximum dot 2-norm, etc.) and the cost.
Then one can manually explore crypto-parameters space using a CLI tool.
If you use this tool in your work, please cite:
Bergerat, Loris and Boudi, Anas and Bourgerie, Quentin and Chillotti, Ilaria and Ligier, Damien and Orfila, Jean-Baptiste and Tap, Samuel, Parameter Optimization and Larger Precision for (T)FHE, Journal of Cryptology, 2023, Volume 36
A pre-print is available as Cryptology ePrint Archive Paper 2022/704
High Level Fully Homomorphic Encryption Linalg dialect A dialect for representation of high level linalg operations on fully homomorphic ciphertexts.
FHELinalg.add_eint_int
(::mlir::concretelang::FHELinalg::AddEintIntOp)Returns a tensor that contains the addition of a tensor of encrypted integers and a tensor of clear integers.
Performs an addition following the broadcasting rules between a tensor of encrypted integers and a tensor of clear integers. The width of the clear integers must be less than or equal to the width of encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEintInt, TensorBroadcastingRules
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.add_eint
(::mlir::concretelang::FHELinalg::AddEintOp)Returns a tensor that contains the addition of two tensors of encrypted integers.
Performs an addition following the broadcasting rules between two tensors of encrypted integers. The width of the encrypted integers must be equal.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEint, TensorBroadcastingRules
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.apply_lookup_table
(::mlir::concretelang::FHELinalg::ApplyLookupTableEintOp)Returns a tensor that contains the result of the lookup on a table.
For each encrypted index, performs a lookup table of clear integers.
The %lut
argument must be a tensor with one dimension, where its dimension is 2^p
where p
is the width of the encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
t
lut
«unnamed»
FHELinalg.apply_mapped_lookup_table
(::mlir::concretelang::FHELinalg::ApplyMappedLookupTableEintOp)Returns a tensor that contains the result of the lookup on a table, using a different lookup table for each element, specified by a map.
Performs for each encrypted index a lookup table of clear integers. Multiple lookup tables are passed, and the application of lookup tables is performed following the broadcasting rules. The precise lookup is specified by a map.
Examples:
Other examples: // [0,1] [1, 0] = [3,2] // [3,0] lut [[1,3,5,7], [0,2,4,6]] with [0, 1] = [7,0] // [2,3] [1, 0] = [4,7]
// [0,1] [0, 0] = [1,3] // [3,0] lut [[1,3,5,7], [0,2,4,6]] with [1, 1] = [6,0] // [2,3] [1, 0] = [4,7]
// [0,1] [0] = [1,3] // [3,0] lut [[1,3,5,7], [0,2,4,6]] with [1] = [6,0] // [2,3] [0] = [5,7]
// [0,1] = [1,2] // [3,0] lut [[1,3,5,7], [0,2,4,6]] with [0, 1] = [7,0] // [2,3] = [5,6]
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
t
luts
map
«unnamed»
FHELinalg.apply_multi_lookup_table
(::mlir::concretelang::FHELinalg::ApplyMultiLookupTableEintOp)Returns a tensor that contains the result of the lookup on a table, using a different lookup table for each element.
Performs for each encrypted index a lookup table of clear integers. Multiple lookup tables are passed, and the application of lookup tables is performed following the broadcasting rules.
The %luts
argument should be a tensor with M dimensions, where the first M-1 dimensions are broadcastable with the N dimensions of the encrypted tensor, and where the last dimension is equal to 2^p
where p
is the width of the encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
t
luts
«unnamed»
FHELinalg.concat
(::mlir::concretelang::FHELinalg::ConcatOp)Concatenates a sequence of tensors along an existing axis.
Concatenates several tensors along a given axis.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
axis
::mlir::IntegerAttr
64-bit signless integer attribute
ins
out
FHELinalg.conv2d
(::mlir::concretelang::FHELinalg::Conv2dOp)Returns the 2D convolution of a tensor in the form NCHW with weights in the form FCHW
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
padding
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
strides
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
dilations
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
group
::mlir::IntegerAttr
64-bit signless integer attribute
input
weight
bias
«unnamed»
FHELinalg.dot_eint_int
(::mlir::concretelang::FHELinalg::Dot)Returns the encrypted dot product between a vector of encrypted integers and a vector of clear integers.
Performs a dot product between a vector of encrypted integers and a vector of clear integers.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
out
FHELinalg.dot_eint_eint
(::mlir::concretelang::FHELinalg::DotEint)Returns the encrypted dot product between two vectors of encrypted integers.
Performs a dot product between two vectors of encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
out
FHELinalg.from_element
(::mlir::concretelang::FHELinalg::FromElementOp)Creates a tensor with a single element.
Creates a tensor with a single element.
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
«unnamed»
any type
«unnamed»
FHELinalg.lsb
(::mlir::concretelang::FHELinalg::LsbEintOp)Extract the lowest significant bit at a given precision.
This operation extracts the lsb of a ciphertext tensor in a specific precision.
Extracting only 1 bit:
Traits: AlwaysSpeculatableImplTrait, TensorUnaryEint
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
input
output
FHELinalg.matmul_eint_eint
(::mlir::concretelang::FHELinalg::MatMulEintEintOp)Returns a tensor that contains the result of the matrix multiplication of a matrix of encrypted integers and a second matrix of encrypted integers.
Performs a matrix multiplication of a matrix of encrypted integers and a second matrix of encrypted integers.
The behavior depends on the arguments in the following way:
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEint
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.matmul_eint_int
(::mlir::concretelang::FHELinalg::MatMulEintIntOp)Returns a tensor that contains the result of the matrix multiplication of a matrix of encrypted integers and a matrix of clear integers.
Performs a matrix multiplication of a matrix of encrypted integers and a matrix of clear integers. The width of the clear integers must be less than or equal to the width of encrypted integers.
The behavior depends on the arguments in the following way:
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEintInt
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.matmul_int_eint
(::mlir::concretelang::FHELinalg::MatMulIntEintOp)Returns a tensor that contains the result of the matrix multiplication of a matrix of clear integers and a matrix of encrypted integers.
Performs a matrix multiplication of a matrix of clear integers and a matrix of encrypted integers. The width of the clear integers must be less than or equal to the width of encrypted integers.
The behavior depends on the arguments in the following way:
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryIntEint
Interfaces: Binary, BinaryIntEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.maxpool2d
(::mlir::concretelang::FHELinalg::Maxpool2dOp)Returns the 2D maxpool of a tensor in the form NCHW
Interfaces: UnaryEint
kernel_shape
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
strides
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
dilations
::mlir::DenseIntElementsAttr
64-bit signless integer elements attribute
input
«unnamed»
FHELinalg.mul_eint_int
(::mlir::concretelang::FHELinalg::MulEintIntOp)Returns a tensor that contains the multiplication of a tensor of encrypted integers and a tensor of clear integers.
Performs a multiplication following the broadcasting rules between a tensor of encrypted integers and a tensor of clear integers. The width of the clear integers must be less than or equal to the width of encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEintInt, TensorBroadcastingRules
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.mul_eint
(::mlir::concretelang::FHELinalg::MulEintOp)Returns a tensor that contains the multiplication of two tensors of encrypted integers.
Performs a multiplication following the broadcasting rules between two tensors of encrypted integers. The width of the encrypted integers must be equal.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEint, TensorBroadcastingRules
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.neg_eint
(::mlir::concretelang::FHELinalg::NegEintOp)Returns a tensor that contains the negation of a tensor of encrypted integers.
Performs a negation to a tensor of encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorUnaryEint
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
input
«unnamed»
FHELinalg.reinterpret_precision
(::mlir::concretelang::FHELinalg::ReinterpretPrecisionEintOp)Reinterpret the ciphertext tensor with a different precision.
It's a reinterpretation cast which changes only the precision. On the CRT representation, it does nothing. On the Native representation, it moves the message/noise further forward, effectively changing the precision. Changing to a bigger precision is safe, as the crypto-parameters are chosen such that only zeros will come from the noise part; this is equivalent to a shift left for the value. Changing to a smaller precision is only safe if you clear the lowest message bits first; if not, you should expect small errors with high probability and bigger errors frequently, which can be contained to small errors using margins. This is equivalent to a shift right for the value.
Example:
Traits: AlwaysSpeculatableImplTrait, TensorUnaryEint
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
input
output
FHELinalg.round
(::mlir::concretelang::FHELinalg::RoundOp)Rounds a tensor of ciphertexts into a smaller precision.
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEintInt, TensorBroadcastingRules
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.sub_eint
(::mlir::concretelang::FHELinalg::SubEintOp)Returns a tensor that contains the subtraction of two tensors of encrypted integers.
Performs a subtraction following the broadcasting rules between two tensors of encrypted integers. The width of the encrypted integers must be equal.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryEint, TensorBroadcastingRules
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.sub_int_eint
(::mlir::concretelang::FHELinalg::SubIntEintOp)Returns a tensor that contains the subtraction of a tensor of clear integers and a tensor of encrypted integers.
Performs a subtraction following the broadcasting rules between a tensor of clear integers and a tensor of encrypted integers. The width of the clear integers must be less than or equal to the width of encrypted integers.
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorBinaryIntEint, TensorBroadcastingRules
Interfaces: Binary, BinaryIntEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
lhs
rhs
«unnamed»
FHELinalg.sum
(::mlir::concretelang::FHELinalg::SumOp)Returns the sum of elements of a tensor of encrypted integers along specified axes.
Attributes:
keep_dims: boolean = false. Whether to keep the rank of the tensor after the sum operation; if true, reduced axes will have a size of 1.
axes: I64ArrayAttr = []. List of dimensions to perform the sum along; think of them as the dimensions to reduce (see the examples below to get an intuition).
Examples:
Traits: AlwaysSpeculatableImplTrait, TensorUnaryEint
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
axes
::mlir::ArrayAttr
64-bit integer array attribute
keep_dims
::mlir::BoolAttr
bool attribute
tensor
out
FHELinalg.to_signed
(::mlir::concretelang::FHELinalg::ToSignedOp)Cast an unsigned integer tensor to a signed one
Cast an unsigned integer tensor to a signed one. The result must have the same width and the same shape as the input.
The behavior is undefined on overflow/underflow.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
input
output
FHELinalg.to_unsigned
(::mlir::concretelang::FHELinalg::ToUnsignedOp)Cast a signed integer tensor to an unsigned one
Cast a signed integer tensor to an unsigned one. The result must have the same width and the same shape as the input.
The behavior is undefined on overflow/underflow.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
input
output
FHELinalg.transpose
(::mlir::concretelang::FHELinalg::TransposeOp)Returns a tensor that contains the transposition of the input tensor.
Performs a transpose operation on an N-dimensional tensor.
Attributes:
axes: I64ArrayAttr = []. List of dimensions for the transposition; contains a permutation of [0,1,..,N-1] where N is the number of axes. Think of it as a way to rearrange axes (see the example below).
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
axes
::mlir::ArrayAttr
64-bit integer array attribute
tensor
any type
«unnamed»
any type
There are two main entry points to the Concrete Compiler. The first is to use the Concrete Python frontend. The second is to use the Compiler directly, which takes MLIR as input. Concrete Python is more high level and uses the Compiler under the hood.
Compilation begins in the frontend with tracing to get an easy-to-manipulate representation of the function. We call this representation a Computation Graph
, which is a Directed Acyclic Graph (DAG) containing nodes representing computations done in the function. Working with graphs is useful because they have been studied extensively and there are a lot of available algorithms to manipulate them. Internally, we use networkx, which is an excellent graph library for Python.
The next step in compilation is transforming the computation graph. There are many transformations we perform, and these are discussed in their own sections. The result of a transformation is another computation graph.
After transformations are applied, we need to determine the bounds (i.e., the minimum and the maximum values) of each intermediate node. This is required because FHE allows limited precision for computations. Measuring these bounds helps determine the required precision for the function.
The frontend is almost done at this stage and only needs to transform the computation graph to equivalent MLIR
code. Once the MLIR
is generated, our Compiler backend takes over. Any other frontend wishing to use the Compiler needs to plug in at this stage.
The Compiler takes MLIR
code that makes use of both the FHE
and FHELinalg
dialects for scalar and tensor operations respectively.
Compilation then ends with a series of passes that generates a native binary which contains executable code. Crypto parameters are generated along the way as well.
We start with a Python function f
, such as this one:
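The exact function is not shown here, but based on the trace described below it is equivalent to:

```python
def f(x, y):
    return x + 2 * y
```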
The goal of tracing is to create the following computation graph without requiring any change from the user.
(Note that the edge labels are for non-commutative operations. To give an example, a subtraction node represents (predecessor with edge label 0) - (predecessor with edge label 1)
)
To do this, we make use of Tracer
s, which are objects that record the operation performed during their creation. We create a Tracer
for each argument of the function and call the function with those Tracer
s. Tracer
s make use of the operator overloading feature of Python to achieve their goal:
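A heavily simplified sketch of the idea (the class and helper names here are illustrative, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Input:
    name: str

@dataclass
class Constant:
    value: int

@dataclass
class Add:
    lhs: object
    rhs: object

@dataclass
class Multiply:
    lhs: object
    rhs: object

class Tracer:
    def __init__(self, computation):
        self.computation = computation

    def __add__(self, other):
        return Tracer(Add(self.computation, _to_computation(other)))

    def __radd__(self, other):
        return Tracer(Add(_to_computation(other), self.computation))

    def __mul__(self, other):
        return Tracer(Multiply(self.computation, _to_computation(other)))

    def __rmul__(self, other):
        return Tracer(Multiply(_to_computation(other), self.computation))

def _to_computation(value):
    return value.computation if isinstance(value, Tracer) else Constant(value)

x, y = Tracer(Input("x")), Tracer(Input("y"))
output = x + 2 * y
print(output.computation)
# Add(lhs=Input(name='x'), rhs=Multiply(lhs=Constant(value=2), rhs=Input(name='y')))
```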
2 * y
will be performed first, and *
is overloaded for Tracer
to return another tracer: Tracer(computation=Multiply(Constant(2), self.computation))
, which is equal to Tracer(computation=Multiply(Constant(2), Input("y")))
.
x + (2 * y)
will be performed next, and +
is overloaded for Tracer
to return another tracer: Tracer(computation=Add(self.computation, (2 * y).computation))
, which is equal to Tracer(computation=Add(Input("x"), Multiply(Constant(2), Input("y")))
.
In the end, we will have output tracers that can be used to create the computation graph. The implementation is a bit more complex than this, but the idea is the same.
Tracing is also responsible for indicating whether the values in the node would be encrypted or not. The rule for that is: if a node has an encrypted predecessor, it is encrypted as well.
The goal of topological transforms is to make more functions compilable.
With the current version of Concrete, floating-point inputs and floating-point outputs are not supported. However, if the floating-point operations are intermediate operations, they can sometimes be fused into a single table lookup from integer to integer, thanks to some specific transforms.
Let's take a closer look at the transforms we can currently perform.
We have allocated a whole new chapter to explaining fusing. You can find it here.
Given a computation graph, the goal of the bounds measurement step is to assign the minimal data type to each node in the graph.
If we have an encrypted input that is always between 0 and 10, we should assign the type EncryptedScalar<uint4> to the node of this input, as this is the minimal encrypted integer type that supports all values between 0 and 10.
If there were negative values in the range, we could have used intX
instead of uintX
.
Bounds measurement is necessary because FHE supports limited precision, and we don't want unexpected behaviour while evaluating the compiled functions.
Let's take a closer look at how we perform bounds measurement.
This is a simple approach that requires an inputset to be provided by the user.
The inputset is not to be confused with the dataset, which is classical in ML, as it doesn't require labels. Rather, the inputset is a set of values which are typical inputs of the function.
The idea is to evaluate each input in the inputset and record the result of each operation in the computation graph. Then we compare the evaluation results with the current minimum/maximum values of each node and update the minimum/maximum accordingly. After the entire inputset is evaluated, we assign a data type to each node using the minimum and maximum values it contains.
Here is an example, given this computation graph where x
is encrypted:
and this inputset:
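Concretely, the graph pictured above corresponds to a function equivalent to the following:

```python
def f(x):
    return (x * 2) + 3

inputset = [2, 3, 1]
```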
Evaluation result of 2:
x: 2
2: 2
*: 4
3: 3
+: 7
New bounds:
x: [2, 2]
2: [2, 2]
*: [4, 4]
3: [3, 3]
+: [7, 7]
Evaluation result of 3:
x: 3
2: 2
*: 6
3: 3
+: 9
New bounds:
x: [2, 3]
2: [2, 2]
*: [4, 6]
3: [3, 3]
+: [7, 9]
Evaluation result of 1:
x: 1
2: 2
*: 2
3: 3
+: 5
New bounds:
x: [1, 3]
2: [2, 2]
*: [2, 6]
3: [3, 3]
+: [5, 9]
Assigned data types:
x: EncryptedScalar<uint2>
2: ClearScalar<uint2>
*: EncryptedScalar<uint3>
3: ClearScalar<uint2>
+: EncryptedScalar<uint4>
We describe below some of the main passes in the compilation pipeline.
This pass converts high level operations which are not crypto specific to lower level operations from the TFHE scheme. Ciphertexts get introduced in the code as well. TFHE operations and ciphertexts require some parameters which need to be chosen, and the TFHE Parameterization pass does just that.
TFHE Parameterization takes care of introducing the chosen parameters in the Intermediate Representation (IR). After this pass, you should be able to see the dimension of ciphertexts, as well as other parameters in the IR.
This pass lowers TFHE operations to low level operations that are closer to the backend implementation, working on tensors and memory buffers (after a bufferization pass).
This pass lowers everything to LLVM-IR in order to generate the final binary.
Compilation of a Python program starts with Concrete's Python frontend, which first traces and transforms it and then converts it into an intermediate representation (IR) that is further processed by Concrete Compiler. This IR is based on the MLIR subproject of the LLVM compiler infrastructure. This document provides an overview of Concrete's FHE-specific representations based on the MLIR framework.
In contrast to traditional infrastructure for compilers, the set of operations and data types that constitute the IR, as well as the level of abstraction that the IR represents, are not fixed in MLIR and can easily be extended. All operations and data types are grouped into dialects, with each dialect representing a specific domain or a specific level of abstraction. Mixing operations and types from different dialects within the same IR is allowed and even encouraged, with all dialects--builtin or developed as an extension--being first-class citizens.
Concrete compiler takes advantage of these concepts by defining a set of dialects, capable of representing an FHE program from an abstract specification that is independent of the actual cryptosystem down to a program that can easily be mapped to function calls of a cryptographic library. The dialects for the representation of an FHE program are:
The FHELinalg Dialect (documentation, source)
The FHE Dialect (documentation, source)
The TFHE Dialect (documentation, source)
The Concrete Dialect (documentation, source)
and for debugging purposes, the Tracing Dialect (documentation, source).
In addition, the project further defines two dialects that help expose dynamic task-parallelism and static data-flow graphs in order to benefit from multi-core, multi-accelerator and distributed systems. These are:
The RT Dialect (documentation, source) and
The SDFG Dialect (documentation, source).
The figure below illustrates the relationship between the dialects and their embedding into the compilation pipeline.
The following sections focus on the FHE-related dialects, i.e., on the FHELinalg Dialect, the FHE Dialect, the TFHE Dialect and the Concrete Dialect.
The top part of the figure shows the components which are involved in the generation of the initial IR, ending with the step labelled MLIR translation. When the initial IR is passed on to Concrete Compiler through its Python bindings, all FHE-related operations are specified using either the FHE or FHELinalg Dialect. Both of these dialects provide operations and data types for the abstract specification of an FHE program, completely independently of a cryptosystem. At this point, the IR simply indicates whether an operand is encrypted (via the type FHE.eint<n>, where n stands for the precision in bits) and what operations are applied to encrypted values. Plaintext values simply use MLIR's builtin integer types (e.g., i3 or i64).
The FHE Dialect provides scalar operations on encrypted integers, such as additions (FHE.add_eint) or multiplications (FHE.mul_eint), while the FHELinalg Dialect offers operations on tensors of encrypted integers, e.g., matrix products (FHELinalg.matmul_eint_eint) or convolutions (FHELinalg.conv2d).
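For reference, the sketch below shows a Python-level function of the kind that is traced into such tensor operations; the shapes and inputset are illustrative assumptions, not values taken from the example discussed next.

```python
# Hedged sketch: a matrix product between an encrypted matrix and a clear one.
# Tensor operations like this are traced into FHELinalg operations, while the
# scalar arithmetic they decompose into uses the FHE dialect.
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted", "y": "clear"})
def matmul(x, y):
    return x @ y

inputset = [
    (np.random.randint(0, 4, size=(2, 3)), np.random.randint(0, 4, size=(3, 2)))
    for _ in range(10)
]
circuit = matmul.compile(inputset)
```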
In a first lowering step of the pipeline, all FHELinalg operations are lowered to operations from MLIR's builtin Linalg Dialect using scalar operations from the FHE Dialect. Consider the following example, which consists of a function that performs a multiplication of a matrix of encrypted integers and a matrix of cleartext values:
Upon conversion, the FHELinalg.matmul operation is converted to a linalg.generic operation whose body contains a scalar multiplication (FHE.mul_eint_int) and a scalar addition (FHE.add_eint_int):
This is then further lowered to a nest of loops from MLIR's SCF Dialect, implementing the parallel and reduction dimensions from the linalg.generic operation above:
In order to obtain an executable program at the end of the compilation pipeline, the abstract specification of the FHE program must at some point be bound to a specific cryptosystem. This is the role of the TFHE Dialect, whose purpose is:
to indicate operations to be carried out using an implementation of the TFHE cryptosystem
to parametrize the cryptosystem with key sizes, and
to provide a mapping between keys and encrypted values
When lowering the IR based on the FHE Dialect to the TFHE Dialect, the compiler first generates a generic form, in which FHE operations are lowered to TFHE operations and where values are converted to unparametrized TFHE.glwe values. The unparametrized form TFHE.glwe<sk?> simply indicates that a TFHE.glwe value is to be used, but without any indication of the cryptographic parameters and the actual key.
The IR below shows the example program after lowering to unparametrized TFHE:
All operations from the FHE dialect have been replaced with corresponding operations from the TFHE Dialect.
During subsequent parametrization, the compiler can either use a set of default parameters or can obtain a set of parameters from Concrete's optimizer. Either way, an additional pass injects the parameters into the IR, replacing all TFHE.glwe<sk?> instances with TFHE.glwe<i,d,n>, where i is a sequential identifier for a key, d the number of GLWE dimensions and n the size of the GLWE polynomial.
The result of such a parametrization for the example is given below:
In this parametrization, a single key with the ID 0 is used, with a single dimension and a polynomial of size 512.
In the next step of the pipeline, operations and types are lowered to the Concrete Dialect. This dialect provides operations which are implemented by one of Concrete's backend libraries, but still abstracts from any technical details required for interaction with an actual library. The goal is to maintain a high-level representation with value-based semantics and actual operations instead of buffer semantics and library calls, while ensuring that all operations can effectively be lowered to a library call later in the pipeline. However, the abstract types from TFHE are already lowered to tensors of integers with a suitable shape that will hold the binary data of the encrypted values.
The result of the lowering of the example to the Concrete Dialect is shown below:
The remaining stages of the pipeline are rather technical. Before any binding to an actual Concrete backend library, the compiler first invokes MLIR's bufferization infrastructure to convert the value-based IR into an IR with buffer semantics. In particular, this means that keys and encrypted values are no longer abstract values in a mathematical sense, but values backed by a memory location that holds the actual data. This form of IR is then suitable for a pass emitting actual library calls that implement the corresponding operations from the Concrete Dialect for a specific backend.
The result for the example is given below:
At this stage, the IR is only composed of operations from builtin Dialects and thus amenable to lowering to LLVM-IR using the lowering passes provided by MLIR.
High Level Fully Homomorphic Encryption dialect: A dialect for the representation of high-level operations on fully homomorphic ciphertexts.
FHE.add_eint_int
(::mlir::concretelang::FHE::AddEintIntOp)Adds an encrypted integer and a clear integer
The clear integer must have at most one more bit than the encrypted integer and the result must have the same width and the same signedness as the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.add_eint
(::mlir::concretelang::FHE::AddEintOp)Adds two encrypted integers
The encrypted integers and the result must have the same width and the same signedness.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.apply_lookup_table
(::mlir::concretelang::FHE::ApplyLookupTableEintOp)Applies a clear lookup table to an encrypted integer
The width of the result can be different than the width of the operand. The lookup table must be a tensor of size 2^p, where p is the width of the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
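At the Python level, a direct table lookup such as the hedged sketch below is the kind of construct that ends up as an FHE.apply_lookup_table operation after lowering; the table contents are arbitrary illustrative values.

```python
from concrete import fhe

# A table of size 2^4 = 16, matching a 4-bit encrypted input.
table = fhe.LookupTable([(x ** 2) % 16 for x in range(16)])

@fhe.compiler({"x": "encrypted"})
def square_mod_16(x):
    return table[x]

circuit = square_mod_16.compile(range(16))
assert circuit.encrypt_run_decrypt(3) == 9
```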
FHE.and
(::mlir::concretelang::FHE::BoolAndOp)Applies an AND gate to two encrypted boolean values
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.nand
(::mlir::concretelang::FHE::BoolNandOp)Applies a NAND gate to two encrypted boolean values
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.not
(::mlir::concretelang::FHE::BoolNotOp)Applies a NOT gate to an encrypted boolean value
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.or
(::mlir::concretelang::FHE::BoolOrOp)Applies an OR gate to two encrypted boolean values
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.xor
(::mlir::concretelang::FHE::BoolXorOp)Applies an XOR gate to two encrypted boolean values
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.from_bool
(::mlir::concretelang::FHE::FromBoolOp)Cast a boolean to an unsigned integer
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.gen_gate
(::mlir::concretelang::FHE::GenGateOp)Applies a truth table based on two boolean inputs
Truth table must be a tensor of four boolean values.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.lsb
(::mlir::concretelang::FHE::LsbEintOp)Extracts the least significant bit at a given precision.
This operation extracts the LSB of a ciphertext at a specific precision.
Extracting the lsb with the smallest precision:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
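From Python, the bit-extraction extension listed later in this documentation provides the corresponding functionality; the sketch below is a minimal, hedged example of extracting the least significant bit of an encrypted value.

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def parity(x):
    # Index 0 selects the least significant bit of the encrypted integer.
    return fhe.bits(x)[0]

circuit = parity.compile(range(16))
assert circuit.encrypt_run_decrypt(5) == 1  # 5 = 0b101
```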
FHE.max_eint
(::mlir::concretelang::FHE::MaxEintOp)Retrieve the maximum of two encrypted integers.
Retrieve the maximum of two encrypted integers using the formula max(x, y) = max(x - y, 0) + y. The input and output types should be the same.
If x - y inside the max overflows or underflows, the behavior is undefined. To support the full range, you should increase the bit width by 1 manually.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
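At the Python level, computing a maximum of two encrypted values (for example with np.maximum, as in the hedged sketch below) is the kind of computation this operation represents; bit widths are inferred from the inputset.

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted", "y": "encrypted"})
def maximum(x, y):
    return np.maximum(x, y)

circuit = maximum.compile([(x, y) for x in range(8) for y in range(8)])
assert circuit.encrypt_run_decrypt(3, 5) == 5
```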
FHE.mul_eint_int
(::mlir::concretelang::FHE::MulEintIntOp)Multiply an encrypted integer with a clear integer
The clear integer must have one more bit than the encrypted integer and the result must have the same width and the same signedness as the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.mul_eint
(::mlir::concretelang::FHE::MulEintOp)Multiplies two encrypted integers
The encrypted integers and the result must have the same width and signedness. Also, due to the current implementation, one supplementary bit of width must be provided, in addition to the number of bits needed to encode the largest output value.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.mux
(::mlir::concretelang::FHE::MuxOp)Multiplexer for two encrypted boolean inputs, based on an encrypted condition
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.neg_eint
(::mlir::concretelang::FHE::NegEintOp)Negates an encrypted integer
The result must have the same width and the same signedness as the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.reinterpret_precision
(::mlir::concretelang::FHE::ReinterpretPrecisionEintOp)Reinterpret the ciphertext with a different precision.
Changes the precision of a ciphertext. This affects the precision, the value, and in certain cases the correctness of the ciphertext.
Changing to:
- a bigger precision is always safe. This is equivalent to a shift left for the value.
- a smaller precision is only safe if you clear the lowest bits that are discarded. If not, you can expect small errors on the next TLU. This is equivalent to a shift right for the value.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.round
(::mlir::concretelang::FHE::RoundEintOp)Rounds a ciphertext to a smaller precision.
Assuming a ciphertext whose message is implemented over p bits, this operation rounds it to fit into q bits, where p > q.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
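From Python, rounding an encrypted value to fewer significant bits is exposed through the round_bit_pattern extension mentioned later in this documentation; the sketch below is a minimal, hedged example (the inputset bound is an illustrative choice so that rounding stays within the inferred bit width).

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def round_low_bits(x):
    # Clear the two least significant bits of the message (p -> q = p - 2).
    return fhe.round_bit_pattern(x, lsbs_to_remove=2)

circuit = round_low_bits.compile(range(28))
```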
FHE.sub_eint_int
(::mlir::concretelang::FHE::SubEintIntOp)Subtract a clear integer from an encrypted integer
The clear integer must have one more bit than the encrypted integer and the result must have the same width and the same signedness as the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryEintInt, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.sub_eint
(::mlir::concretelang::FHE::SubEintOp)Subtract an encrypted integer from an encrypted integer
The encrypted integers and the result must have the same width and the same signedness.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: BinaryEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.sub_int_eint
(::mlir::concretelang::FHE::SubIntEintOp)Subtract an encrypted integer from a clear integer
The clear integer must have one more bit than the encrypted integer and the result must have the same width and the same signedness as the encrypted integer.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: Binary, BinaryIntEint, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.to_bool
(::mlir::concretelang::FHE::ToBoolOp)Cast an unsigned integer to a boolean
The input must be of width one or two, two being the current representation of an encrypted boolean, leaving one bit for the carry.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.to_signed
(::mlir::concretelang::FHE::ToSignedOp)Cast an unsigned integer to a signed one
The result must have the same width as the input.
The behavior is undefined on overflow/underflow.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.to_unsigned
(::mlir::concretelang::FHE::ToUnsignedOp)Cast a signed integer to an unsigned one
The result must have the same width as the input.
The behavior is undefined on overflow/underflow.
Examples:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), UnaryEint
Effects: MemoryEffects::Effect{}
FHE.zero
(::mlir::concretelang::FHE::ZeroEintOp)Returns a trivial encrypted integer of 0
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
FHE.zero_tensor
(::mlir::concretelang::FHE::ZeroTensorOp)Creates a new tensor with all elements initialized to an encrypted zero.
Creates a new tensor with the shape specified in the result type and initializes its elements with an encrypted zero.
Example:
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, ConstantNoise, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
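From Python, encrypted tensors of zeros can be created with the zeros extension listed later in this documentation; the hedged sketch below shows typical usage, with an illustrative shape and inputset.

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def accumulate(x):
    acc = fhe.zeros((3,))  # encrypted tensor of zeros, shape (3,)
    return acc + x

inputset = [np.random.randint(0, 8, size=(3,)) for _ in range(10)]
circuit = accumulate.compile(inputset)
```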
An encrypted boolean
Syntax: !FHE.ebool
An encrypted boolean.
An encrypted signed integer
An encrypted signed integer with width bits, used to perform FHE operations.
Examples:
An encrypted unsigned integer
An encrypted unsigned integer with width bits, used to perform FHE operations.
Examples:
Runtime dialect: A dialect representing the abstractions needed by the runtime.
RT.await_future
(::mlir::concretelang::RT::AwaitFutureOp)Wait for a future and access its data.
The results of a dataflow task are always futures, which can be further used as inputs to subsequent tasks. When the result of a task is needed in the outer execution context, the result future needs to be synchronized and its data accessed using RT.await_future.
RT.build_return_ptr_placeholder (::mlir::concretelang::RT::BuildReturnPtrPlaceholderOp)
RT.clone_future (::mlir::concretelang::RT::CloneFutureOp)
Interfaces: AllocationOpInterface, MemoryEffectOpInterface
RT.create_async_task
(::mlir::concretelang::RT::CreateAsyncTaskOp)Create a dataflow task.
RT.dataflow_task
(::mlir::concretelang::RT::DataflowTaskOp)Dataflow task operation
RT.dataflow_task allows specifying a task that will be executed concurrently once its operands are ready. Operands are either the results of computations in other RT.dataflow_task operations (dataflow dependences) or obtained from the execution context (immediate operands). Operands are synchronized using futures and, in the case of immediate operands, copied when the task is created. Caution is required when an operand is a pointer, as no deep copy will occur.
Example:
Traits: AutomaticAllocationScope, SingleBlockImplicitTerminator
Interfaces: AllocationOpInterface, MemoryEffectOpInterface, RegionBranchOpInterface
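From the Python frontend, such dataflow tasks are typically introduced when dataflow parallelization is enabled in the compilation configuration; the sketch below is a hedged example, and the option name is the one exposed by concrete-python's Configuration (treat it as an assumption for other versions).

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    # Two independent sub-expressions that can be evaluated as parallel tasks.
    return (x + 1) * (x + 2)

configuration = fhe.Configuration(dataflow_parallelize=True)
circuit = f.compile(range(8), configuration)
```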
RT.dataflow_yield
(::mlir::concretelang::RT::DataflowYieldOp)Dataflow yield operation
RT.dataflow_yield is a special terminator operation for blocks inside the region of an RT.dataflow_task. It allows specifying the return values of an RT.dataflow_task.
Example:
Traits: ReturnLike, Terminator
RT.deallocate_future_data (::mlir::concretelang::RT::DeallocateFutureDataOp)
RT.deallocate_future (::mlir::concretelang::RT::DeallocateFutureOp)
RT.deref_return_ptr_placeholder (::mlir::concretelang::RT::DerefReturnPtrPlaceholderOp)
RT.deref_work_function_argument_ptr_placeholder (::mlir::concretelang::RT::DerefWorkFunctionArgumentPtrPlaceholderOp)
RT.make_ready_future (::mlir::concretelang::RT::MakeReadyFutureOp) Build a ready future.
Data passed to dataflow tasks must be encapsulated in futures, including immediate operands. These must be converted into futures using RT.make_ready_future.
Interfaces: AllocationOpInterface, MemoryEffectOpInterface
RT.register_task_work_function
(::mlir::concretelang::RT::RegisterTaskWorkFunctionOp)Register the task work-function with the runtime system.
RT.work_function_return (::mlir::concretelang::RT::WorkFunctionReturnOp)
Future with a parameterized element type
The value of a !RT.future type represents the result of an asynchronous operation.
Examples:
Pointer to a parameterized element type
Tracing dialect: A dialect to print program values at runtime.
Tracing.trace_ciphertext
(::mlir::concretelang::Tracing::TraceCiphertextOp)Prints a ciphertext.
Tracing.trace_message
(::mlir::concretelang::Tracing::TraceMessageOp)Prints a message.
Tracing.trace_plaintext
(::mlir::concretelang::Tracing::TracePlaintextOp)Prints a plaintext.
Data Acquisition
Model Verification.
These models are then used as input for Concrete, to ensure that the parameter space explored by the compiler attains the required security level. Note that we consider the RC.BDGL16 lattice reduction cost model within the Lattice Estimator. Therefore, when computing our security estimates, we use the call LWE.estimate(params, red_cost_model = RC.BDGL16) on the input parameter set params.
The cryptographic parameters are chosen considering the IND-CPA security model, and are selected with a bootstrapping failure probability fixed by the user. In particular, it is assumed that the results of decrypted computations are not shared by the secret key owner with any third parties, as such an action can lead to leakage of the secret encryption key. If you are designing an application where decryptions must be shared, you will need to craft custom encryption parameters which are chosen in consideration of the IND-CPA^D security model [1].
[1] Li, Baiyu, et al. “Securing approximate homomorphic encryption using differential privacy.” Annual International Cryptology Conference. Cham: Springer Nature Switzerland, 2022. https://eprint.iacr.org/2022/816.pdf
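For illustration, the sketch below shows how such an estimate is obtained with the Lattice Estimator under SageMath; the parameter values are placeholders rather than recommended sets, and the import style is an assumption about the estimator's packaging.

```python
# Run under SageMath with the Lattice Estimator available on the path.
from estimator import LWE, ND, RC

params = LWE.Parameters(
    n=2048,                                # LWE dimension (placeholder)
    q=2**64,                               # ciphertext modulus (placeholder)
    Xs=ND.UniformMod(2),                   # secret distribution (assumption)
    Xe=ND.DiscreteGaussian(stddev=2**17),  # noise distribution (placeholder)
)

# Cost of the relevant attacks under the RC.BDGL16 reduction cost model,
# as used for Concrete's security estimates.
LWE.estimate(params, red_cost_model=RC.BDGL16)
```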
To generate the raw data from the lattice estimator, use:
To compare the current curves with the output of the lattice estimator, use:
To generate the associated cpp and rust code, use:
Further advanced options can be found inside the Makefile.
This object is a tuple containing the information required for the four security curves ({80, 112, 128, 192} bits of security). Looking at one of the entries:
Dialect for the construction of static data flow graphs. The data flow graph is composed of a set of processes, connected through data streams. Special streams allow data to be injected into and retrieved from the data flow graph.
SDFG.get
(::mlir::concretelang::SDFG::Get)Retrieves a data element from a stream
Retrieves a single data element from the specified stream (i.e., an instance of the element type of the stream).
Example:
SDFG.init
(::mlir::concretelang::SDFG::Init)Initializes the streaming framework
Initializes the streaming framework. This operation must be performed before control reaches any other operation from the dialect.
Example:
SDFG.make_process
(::mlir::concretelang::SDFG::MakeProcess)Creates a new SDFG process
Creates a new SDFG process and connects it to the input and output streams.
Example:
SDFG.make_stream
(::mlir::concretelang::SDFG::MakeStream)Returns a new SDFG stream
Returns a new SDFG stream, transporting data either between processes on the device, from the host to the device or from the device to the host. All streams are typed, allowing data to be read / written through SDFG.get
and SDFG.put
only using the stream's type.
Example:
SDFG.put
(::mlir::concretelang::SDFG::Put)Writes a data element to a stream
Writes the input operand to the specified stream. The operand's type must meet the element type of the stream.
Example:
SDFG.shutdown
(::mlir::concretelang::SDFG::Shutdown)Shuts down the streaming framework
Shuts down the streaming framework. This operation must be performed after any other operation from the dialect.
Example:
SDFG.start
(::mlir::concretelang::SDFG::Start)Finalizes the creation of an SDFG and starts execution of its processes
Finalizes the creation of an SDFG and starts execution of its processes. Any creation of streams and processes must take place before control reaches this operation.
Example:
Process kind
Syntax:
Stream kind
Syntax:
An SDFG data flow graph
Syntax: !SDFG.dfg
A handle to an SDFG data flow graph
An SDFG data stream
An SDFG stream to connect SDFG processes.
High Level Fully Homomorphic Encryption dialect: A dialect for the representation of high-level operations on fully homomorphic ciphertexts.
TFHE.batched_add_glwe_cst_int
(::mlir::concretelang::TFHE::ABatchedAddGLWECstIntOp)Batched version of AddGLWEIntOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_add_glwe_int_cst
(::mlir::concretelang::TFHE::ABatchedAddGLWEIntCstOp)Batched version of AddGLWEIntOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_add_glwe_int
(::mlir::concretelang::TFHE::ABatchedAddGLWEIntOp)Batched version of AddGLWEIntOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_add_glwe
(::mlir::concretelang::TFHE::ABatchedAddGLWEOp)Batched version of AddGLWEOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.add_glwe_int
(::mlir::concretelang::TFHE::AddGLWEIntOp)Returns the sum of a clear integer and an lwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.add_glwe
(::mlir::concretelang::TFHE::AddGLWEOp)Returns the sum of two lwe ciphertexts
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_bootstrap_glwe
(::mlir::concretelang::TFHE::BatchedBootstrapGLWEOp)Batched version of BootstrapGLWEOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_keyswitch_glwe
(::mlir::concretelang::TFHE::BatchedKeySwitchGLWEOp)Batched version of KeySwitchGLWEOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_mapped_bootstrap_glwe
(::mlir::concretelang::TFHE::BatchedMappedBootstrapGLWEOp)Batched version of BootstrapGLWEOp which also batches the lookup table
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_mul_glwe_cst_int
(::mlir::concretelang::TFHE::BatchedMulGLWECstIntOp)Batched version of MulGLWECstIntOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_mul_glwe_int_cst
(::mlir::concretelang::TFHE::BatchedMulGLWEIntCstOp)Batched version of MulGLWEIntCstOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_mul_glwe_int
(::mlir::concretelang::TFHE::BatchedMulGLWEIntOp)Batched version of MulGLWEIntOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.batched_neg_glwe
(::mlir::concretelang::TFHE::BatchedNegGLWEOp)Batched version of NegGLWEOp
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.bootstrap_glwe
(::mlir::concretelang::TFHE::BootstrapGLWEOp)Programmable bootstrapping of a GLWE ciphertext with a lookup table
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.encode_expand_lut_for_bootstrap
(::mlir::concretelang::TFHE::EncodeExpandLutForBootstrapOp)Encode and expand a lookup table so that it can be used for a bootstrap.
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.encode_lut_for_crt_woppbs
(::mlir::concretelang::TFHE::EncodeLutForCrtWopPBSOp)Encode and expand a lookup table so that it can be used for a wop pbs.
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.encode_plaintext_with_crt
(::mlir::concretelang::TFHE::EncodePlaintextWithCrtOp)Encodes a plaintext by decomposing it on a crt basis.
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.keyswitch_glwe
(::mlir::concretelang::TFHE::KeySwitchGLWEOp)Change the encryption parameters of a glwe ciphertext by applying a keyswitch
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.mul_glwe_int
(::mlir::concretelang::TFHE::MulGLWEIntOp)Returns the product of a clear integer and an lwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.neg_glwe
(::mlir::concretelang::TFHE::NegGLWEOp)Negates a glwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: BatchableOpInterface, ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.sub_int_glwe
(::mlir::concretelang::TFHE::SubGLWEIntOp)Subtracts an integer and a GLWE ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.wop_pbs_glwe (::mlir::concretelang::TFHE::WopPBSGLWEOp)
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.zero
(::mlir::concretelang::TFHE::ZeroGLWEOp)Returns a trivial encryption of 0
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
TFHE.zero_tensor
(::mlir::concretelang::TFHE::ZeroTensorGLWEOp)Returns a tensor containing trivial encryptions of 0
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
An attribute representing bootstrap key.
Syntax:
An attribute representing keyswitch key.
Syntax:
An attribute representing Wop Pbs key.
Syntax:
A GLWE ciphertext
A GLWE ciphertext.
Low Level Fully Homomorphic Encryption dialect: A dialect for the representation of low-level operations on fully homomorphic ciphertexts.
Concrete.add_lwe_buffer
(::mlir::concretelang::Concrete::AddLweBufferOp)Returns the sum of 2 lwe ciphertexts
Concrete.add_lwe_tensor
(::mlir::concretelang::Concrete::AddLweTensorOp)Returns the sum of 2 lwe ciphertexts
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.add_plaintext_lwe_buffer
(::mlir::concretelang::Concrete::AddPlaintextLweBufferOp)Returns the sum of a clear integer and an lwe ciphertext
Concrete.add_plaintext_lwe_tensor
(::mlir::concretelang::Concrete::AddPlaintextLweTensorOp)Returns the sum of a clear integer and an lwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_add_lwe_buffer
(::mlir::concretelang::Concrete::BatchedAddLweBufferOp)Batched version of AddLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_add_lwe_tensor
(::mlir::concretelang::Concrete::BatchedAddLweTensorOp)Batched version of AddLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_add_plaintext_cst_lwe_buffer
(::mlir::concretelang::Concrete::BatchedAddPlaintextCstLweBufferOp)Batched version of AddPlaintextLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_add_plaintext_cst_lwe_tensor
(::mlir::concretelang::Concrete::BatchedAddPlaintextCstLweTensorOp)Batched version of AddPlaintextLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_add_plaintext_lwe_buffer
(::mlir::concretelang::Concrete::BatchedAddPlaintextLweBufferOp)Batched version of AddPlaintextLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_add_plaintext_lwe_tensor
(::mlir::concretelang::Concrete::BatchedAddPlaintextLweTensorOp)Batched version of AddPlaintextLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_bootstrap_lwe_buffer
(::mlir::concretelang::Concrete::BatchedBootstrapLweBufferOp)Batched version of BootstrapLweOp, which performs the same operation on multiple elements
Concrete.batched_bootstrap_lwe_tensor
(::mlir::concretelang::Concrete::BatchedBootstrapLweTensorOp)Batched version of BootstrapLweOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_keyswitch_lwe_buffer
(::mlir::concretelang::Concrete::BatchedKeySwitchLweBufferOp)Batched version of KeySwitchLweOp, which performs the same operation on multiple elements
Concrete.batched_keyswitch_lwe_tensor
(::mlir::concretelang::Concrete::BatchedKeySwitchLweTensorOp)Batched version of KeySwitchLweOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_mapped_bootstrap_lwe_buffer
(::mlir::concretelang::Concrete::BatchedMappedBootstrapLweBufferOp)Batched, mapped version of BootstrapLweOp, which performs the same operation on multiple elements
Concrete.batched_mapped_bootstrap_lwe_tensor
(::mlir::concretelang::Concrete::BatchedMappedBootstrapLweTensorOp)Batched, mapped version of BootstrapLweOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_mul_cleartext_cst_lwe_buffer
(::mlir::concretelang::Concrete::BatchedMulCleartextCstLweBufferOp)Batched version of MulCleartextLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_mul_cleartext_cst_lwe_tensor
(::mlir::concretelang::Concrete::BatchedMulCleartextCstLweTensorOp)Batched version of MulCleartextLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_mul_cleartext_lwe_buffer
(::mlir::concretelang::Concrete::BatchedMulCleartextLweBufferOp)Batched version of MulCleartextLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_mul_cleartext_lwe_tensor
(::mlir::concretelang::Concrete::BatchedMulCleartextLweTensorOp)Batched version of MulCleartextLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.batched_negate_lwe_buffer
(::mlir::concretelang::Concrete::BatchedNegateLweBufferOp)Batched version of NegateLweBufferOp, which performs the same operation on multiple elements
Concrete.batched_negate_lwe_tensor
(::mlir::concretelang::Concrete::BatchedNegateLweTensorOp)Batched version of NegateLweTensorOp, which performs the same operation on multiple elements
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.bootstrap_lwe_buffer
(::mlir::concretelang::Concrete::BootstrapLweBufferOp)Bootstraps a LWE ciphertext with a GLWE trivial encryption of the lookup table
Concrete.bootstrap_lwe_tensor
(::mlir::concretelang::Concrete::BootstrapLweTensorOp)Bootstraps an LWE ciphertext with a GLWE trivial encryption of the lookup table
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.encode_expand_lut_for_bootstrap_buffer
(::mlir::concretelang::Concrete::EncodeExpandLutForBootstrapBufferOp)Encode and expand a lookup table so that it can be used for a bootstrap
Concrete.encode_expand_lut_for_bootstrap_tensor
(::mlir::concretelang::Concrete::EncodeExpandLutForBootstrapTensorOp)Encode and expand a lookup table so that it can be used for a bootstrap
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.encode_lut_for_crt_woppbs_buffer
(::mlir::concretelang::Concrete::EncodeLutForCrtWopPBSBufferOp)Encode and expand a lookup table so that it can be used for a crt wop pbs
Concrete.encode_lut_for_crt_woppbs_tensor
(::mlir::concretelang::Concrete::EncodeLutForCrtWopPBSTensorOp)Encode and expand a lookup table so that it can be used for a wop pbs
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.encode_plaintext_with_crt_buffer
(::mlir::concretelang::Concrete::EncodePlaintextWithCrtBufferOp)Encodes a plaintext by decomposing it on a crt basis
Concrete.encode_plaintext_with_crt_tensor
(::mlir::concretelang::Concrete::EncodePlaintextWithCrtTensorOp)Encodes a plaintext by decomposing it on a crt basis
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.keyswitch_lwe_buffer
(::mlir::concretelang::Concrete::KeySwitchLweBufferOp)Performs a keyswitching operation on an LWE ciphertext
Concrete.keyswitch_lwe_tensor
(::mlir::concretelang::Concrete::KeySwitchLweTensorOp)Performs a keyswitching operation on an LWE ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.mul_cleartext_lwe_buffer
(::mlir::concretelang::Concrete::MulCleartextLweBufferOp)Returns the product of a clear integer and a lwe ciphertext
Concrete.mul_cleartext_lwe_tensor
(::mlir::concretelang::Concrete::MulCleartextLweTensorOp)Returns the product of a clear integer and a lwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.negate_lwe_buffer
(::mlir::concretelang::Concrete::NegateLweBufferOp)Negates an lwe ciphertext
Concrete.negate_lwe_tensor
(::mlir::concretelang::Concrete::NegateLweTensorOp)Negates an lwe ciphertext
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
Concrete.wop_pbs_crt_lwe_buffer (::mlir::concretelang::Concrete::WopPBSCRTLweBufferOp)
Concrete.wop_pbs_crt_lwe_tensor (::mlir::concretelang::Concrete::WopPBSCRTLweTensorOp)
Traits: AlwaysSpeculatableImplTrait
Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface)
Effects: MemoryEffects::Effect{}
A runtime context
Syntax: !Concrete.context
An abstract runtime context to pass contextual values, such as public keys.
: Compiler submodule.
: Client parameters.
: Client support.
: CompilationContext.
: Compilation feedback.
: CompilationOptions.
: EvaluationKeys.
: KeySet.
: KeySetCache.
: LambdaArgument.
: LibraryCompilationResult.
: LibraryLambda.
: LibrarySupport.
: Parameter.
: PublicArguments.
: PublicResult.
: ServerCircuit.
: ServerProgram.
: SimulatedValueDecrypter.
: SimulatedValueExporter.
: Common utils for the compiler submodule.
: Value.
: ValueDecrypter.
: ValueExporter.
: Wrapper for native Cpp objects.
: Concrete.
: Glue the compilation process together.
: Declaration of DebugArtifacts
class.
: Declaration of Circuit
class.
: Declaration of Client
class.
: Declaration of Compiler
class.
: Declaration of Configuration
class.
: Declaration of circuit
and compiler
decorators.
: Declaration of Keys
class.
: Declaration of FheModule
classes.
: Declaration of MultiCompiler
class.
: Declaration of Server
class.
: Declaration of ClientSpecs
class.
: Declaration of various functions and constants related to compilation.
: Declaration of Value
class.
: Define available data types and their semantics.
: Declaration of BaseDataType
abstract class.
: Declaration of Float
class.
: Declaration of Integer
class.
: Declaration of various functions and constants related to data types.
: Provide additional features that are not present in numpy.
: Declaration of array
function, to simplify creation of encrypted arrays.
: Bit extraction extensions.
: Tracing and evaluation of convolution.
: Declaration of hinting extensions, to provide more information to Concrete.
: Declaration of identity
extension.
: Tracing and evaluation of maxpool.
: Declaration of multivariate
extension.
: Declaration of ones
and one
functions, to simplify creation of encrypted ones.
: Declaration of relu
extension.
: Declaration of round_bit_pattern
function, to provide an interface for rounded table lookups.
: Declaration of LookupTable
class.
: Declaration of tag
context manager, to allow tagging certain nodes.
: Declaration of truncate_bit_pattern
extension.
: Declaration of univariate
function.
: Declaration of zeros
and zero
functions, to simplify creation of encrypted zeros.
: Declaration of various functions and constants related to the entire project.
: Provide computation graph
to mlir
functionality.
: Declaration of Context
class.
: Declaration of ConversionType
and Conversion
classes.
: Declaration of Converter
class.
: All graph processors.
: Declaration of AssignBitWidths
graph processor.
: Declaration of CheckIntegerOnly
graph processor.
: Declaration of ProcessRounding
graph processor.
: Declaration of various functions and constants related to MLIR conversion.
: Define structures used to represent computation.
: Declaration of various Evaluator
classes, to make graphs picklable.
: Declaration of Graph
class.
: Declaration of Node
class.
: Declaration of Operation
enum.
: Declaration of various functions and constants related to representation of computation.
: Provide function
to computation graph
functionality.
: Declaration of Tracer
class.
: Declaration of type annotation.
: Define the available values and their semantics.
: Declaration of ClearScalar
and EncryptedScalar
wrappers.
: Declaration of ClearTensor
and EncryptedTensor
wrappers.
: Declaration of ValueDescription
class.
: Concretelang python module
: FHE dialect module
: FHELinalg dialect module
: Tracing dialect module
: ClientParameters are public parameters used for key generation.
: Client interface for doing key generation and encryption.
: Support class for compilation context.
: CircuitCompilationFeedback is a set of hints computed by the compiler engine for a circuit.
: CompilationFeedback is a set of hints computed by the compiler engine.
: CompilationOptions holds different flags and options of the compilation process.
: EvaluationKeys required for execution.
: KeySet stores the different keys required for an encrypted computation.
: KeySetCache is a cache for KeySet to avoid generating similar keys multiple times.
: LambdaArgument holds scalar or tensor values.
: LibraryCompilationResult holds the result of the library compilation.
: LibraryLambda references a compiled library and can be run using LibrarySupport.
: Support class for library compilation and execution.
: An FHE parameter.
: PublicArguments holds encrypted and plain arguments, as well as public materials.
: PublicResult holds the result of an encrypted execution and can be decrypted using ClientSupport.
: ServerCircuit references a circuit that can be called for execution and simulation.
: ServerProgram references compiled circuit objects.
: A helper class to decrypt Value
s.
: A helper class to create Value
s.
: An encrypted/clear value which can be scalar/tensor.
: A helper class to decrypt Value
s.
: A helper class to create Value
s.
: Wrapper base class for native Cpp objects.
: DebugArtifacts class, to export information about the compilation process for single function.
: An object containing debug artifacts for a certain function in an fhe module.
: An object containing debug artifacts for an fhe module.
: Circuit class, to combine computation graph, mlir, client and server into a single object.
: Client class, which can be used to manage keys, encrypt arguments and decrypt results.
: Compiler class, to glue the compilation pipeline.
: EncryptionStatus enum, to represent encryption status of parameters.
: Controls the behavior of approximate rounding.
: BitwiseStrategy, to specify implementation preference for bitwise operations.
: ComparisonStrategy, to specify implementation preference for comparisons.
: Configuration class, to allow the compilation process to be customized.
: MinMaxStrategy, to specify implementation preference for minimum and maximum operations.
: MultiParamStrategy, to set optimization strategy for multi-parameter.
: MultivariateStrategy, to specify implementation preference for multivariate operations.
: ParameterSelectionStrategy, to set optimization strategy.
: Compilable class, to wrap a function and provide methods to trace and compile it.
: Keys class, to manage the generation and reuse of keys.
: Runtime object class for execution.
: Fhe function class, allowing to run or simulate one function of an fhe module.
: Fhe module class, to combine computation graphs, mlir, runtime objects into a single object.
: Runtime object class for simulation.
: A debug manager, allowing streamlined debugging.
: An object representing the definition of a function as used in an fhe module.
: Compiler class for multiple functions, to glue the compilation pipeline.
: Server class, which can be used to perform homomorphic computation.
: ClientSpecs class, to create Client objects.
: Value class, to store scalar or tensor values which can be encrypted or clear.
: BaseDataType abstract class, to form a basis for data types.
: Float class, to represent floating point numbers.
: Integer class, to represent integers.
: Bits class, to provide indexing into the bits of integers.
: Adjusting class, to be used as early stop signal during adjustment.
: AutoRounder class, to optimize the number of MSBs to keep during the round bit pattern operation.
: LookupTable class, to provide a way to do direct table lookups.
: Adjusting class, to be used as early stop signal during adjustment.
: AutoTruncator class, to optimize for the number of msbs to keep during truncate operation.
: Context class, to perform operations on conversions.
: Conversion class, to store MLIR operations with additional information.
: ConversionType class, to make it easier to work with MLIR types.
: Converter class, to convert a computation graph to MLIR.
: AdditionalConstraints class to customize bit-width assignment step easily.
: AssignBitWidths graph processor, to assign proper bit-widths to be compatible with FHE.
: CheckIntegerOnly graph processor, to make sure the graph only contains integer nodes.
: ProcessRounding graph processor, to analyze rounding and support regular operations on it.
: Comparison enum, to store the result comparison in 2-bits as there are three possible outcomes.
: HashableNdarray class, to use numpy arrays in dictionaries.
: ConstantEvaluator class, to evaluate Operation.Constant nodes.
: GenericEvaluator class, to evaluate Operation.Generic nodes.
: GenericEvaluator class, to evaluate Operation.Generic nodes where args are packed in a tuple.
: InputEvaluator class, to evaluate Operation.Input nodes.
: Graph class, to represent computation graphs.
: GraphProcessor base class, to define the API for a graph processing pipeline.
: MultiGraphProcessor base class, to define the API for a multiple graph processing pipeline.
: Node class, to represent computation in a computation graph.
: Operation enum, to distinguish nodes within a computation graph.
: Base annotation for direct definition.
: Base scalar annotation for direct definition.
: Base tensor annotation for direct definition.
: Tracer class, to create computation graphs from python functions.
: Scalar f32 annotation.
: Scalar f64 annotation.
: Scalar int1 annotation.
: Scalar int10 annotation.
: Scalar int11 annotation.
: Scalar int12 annotation.
: Scalar int13 annotation.
: Scalar int14 annotation.
: Scalar int15 annotation.
: Scalar int16 annotation.
: Scalar int17 annotation.
: Scalar int18 annotation.
: Scalar int19 annotation.
: Scalar int2 annotation.
: Scalar int20 annotation.
: Scalar int21 annotation.
: Scalar int22 annotation.
: Scalar int23 annotation.
: Scalar int24 annotation.
: Scalar int25 annotation.
: Scalar int26 annotation.
: Scalar int27 annotation.
: Scalar int28 annotation.
: Scalar int29 annotation.
: Scalar int3 annotation.
: Scalar int30 annotation.
: Scalar int31 annotation.
: Scalar int32 annotation.
: Scalar int33 annotation.
: Scalar int34 annotation.
: Scalar int35 annotation.
: Scalar int36 annotation.
: Scalar int37 annotation.
: Scalar int38 annotation.
: Scalar int39 annotation.
: Scalar int4 annotation.
: Scalar int40 annotation.
: Scalar int41 annotation.
: Scalar int42 annotation.
: Scalar int43 annotation.
: Scalar int44 annotation.
: Scalar int45 annotation.
: Scalar int46 annotation.
: Scalar int47 annotation.
: Scalar int48 annotation.
: Scalar int49 annotation.
: Scalar int5 annotation.
: Scalar int50 annotation.
: Scalar int51 annotation.
: Scalar int52 annotation.
: Scalar int53 annotation.
: Scalar int54 annotation.
: Scalar int55 annotation.
: Scalar int56 annotation.
: Scalar int57 annotation.
: Scalar int58 annotation.
: Scalar int59 annotation.
: Scalar int6 annotation.
: Scalar int60 annotation.
: Scalar int61 annotation.
: Scalar int62 annotation.
: Scalar int63 annotation.
: Scalar int64 annotation.
: Scalar int7 annotation.
: Scalar int8 annotation.
: Scalar int9 annotation.
: Tensor annotation.
: Scalar uint1 annotation.
: Scalar uint10 annotation.
: Scalar uint11 annotation.
: Scalar uint12 annotation.
: Scalar uint13 annotation.
: Scalar uint14 annotation.
: Scalar uint15 annotation.
: Scalar uint16 annotation.
: Scalar uint17 annotation.
: Scalar uint18 annotation.
: Scalar uint19 annotation.
: Scalar uint2 annotation.
: Scalar uint20 annotation.
: Scalar uint21 annotation.
: Scalar uint22 annotation.
: Scalar uint23 annotation.
: Scalar uint24 annotation.
: Scalar uint25 annotation.
: Scalar uint26 annotation.
: Scalar uint27 annotation.
: Scalar uint28 annotation.
: Scalar uint29 annotation.
: Scalar uint3 annotation.
: Scalar uint30 annotation.
: Scalar uint31 annotation.
: Scalar uint32 annotation.
: Scalar uint33 annotation.
: Scalar uint34 annotation.
: Scalar uint35 annotation.
: Scalar uint36 annotation.
: Scalar uint37 annotation.
: Scalar uint38 annotation.
: Scalar uint39 annotation.
: Scalar uint4 annotation.
: Scalar uint40 annotation.
: Scalar uint41 annotation.
: Scalar uint42 annotation.
: Scalar uint43 annotation.
: Scalar uint44 annotation.
: Scalar uint45 annotation.
: Scalar uint46 annotation.
: Scalar uint47 annotation.
: Scalar uint48 annotation.
: Scalar uint49 annotation.
: Scalar uint5 annotation.
: Scalar uint50 annotation.
: Scalar uint51 annotation.
: Scalar uint52 annotation.
: Scalar uint53 annotation.
: Scalar uint54 annotation.
: Scalar uint55 annotation.
: Scalar uint56 annotation.
: Scalar uint57 annotation.
: Scalar uint58 annotation.
: Scalar uint59 annotation.
: Scalar uint6 annotation.
: Scalar uint60 annotation.
: Scalar uint61 annotation.
: Scalar uint62 annotation.
: Scalar uint63 annotation.
: Scalar uint64 annotation.
: Scalar uint7 annotation.
: Scalar uint8 annotation.
: Scalar uint9 annotation.
: ValueDescription class, to combine data type, shape, and encryption status into a single object.
: Initialize dataflow parallelization.
: Parse the MLIR input, then return it back.
: Extract tag of the operation from its location.
: Try to find the absolute path to the runtime library.
: Provide a direct interface for compilation of single circuit programs.
: Provide an easy interface for the compilation of single-circuit programs.
: Provide an easy interface to define a function within an fhe module.
: Provide an easy interface for the compilation of multi functions modules.
: Add nodes from from_nodes
to to_nodes
, to all_nodes
.
: Determine if a subgraph can be fused.
: Convert a subgraph to Operation.Generic node.
: Find the closest upstream integer output nodes to a set of start nodes in a graph.
: Find a subgraph with float computations that end with an integer output.
: Find the single lowest common ancestor of a list of nodes.
: Find a subgraph with a tlu computation that has multiple variable inputs where all variable inputs share a common ancestor.
: Convert a type to a string. Remove package name and class/type keywords.
: Fuse appropriate subgraphs in a graph to a single Operation.Generic node.
: Get the terminal size.
: Generate a random inputset.
: Determine if a node is the single common ancestor of a list of nodes.
: Validate input arguments.
: Get the 'BaseDataType' that can represent a set of 'BaseDataType's.
: Create an encrypted array from either encrypted or clear values.
: Extract bits of integers.
: Trace and evaluate convolution operations.
: Hint the compilation process about properties of a value.
: Apply identity function to x.
: Evaluate or trace MaxPool operation.
: Wrap a multivariate function so that it is traced into a single generic node.
: Create an encrypted scalar with the value of one.
: Create an encrypted array of ones.
: Create an encrypted array of ones with the same shape as another array.
: Rectified linear unit extension.
: Round the bit pattern of an integer.
: Introduce a new tag to the tag stack.
: Round the bit pattern of an integer.
: Wrap a univariate function so that it is traced into a single generic node.
: Create an encrypted scalar with the value of zero.
: Create an encrypted array of zeros.
: Create an encrypted array of zeros with the same shape as another array.
: Assert a condition.
: Raise a RuntimeError to indicate unreachable code is entered.
: Construct lookup tables for each cell of the input for an Operation.Generic node.
: Construct the lookup table for an Operation.Generic node.
: Construct the lookup table for a multivariate node.
: Use flooding algorithm to replace None
values.
: Get the textual representation of a constant.
: Format an indexing element.
: Build a clear scalar value.
: Build an encrypted scalar value.
: Build a clear scalar value.
: Build an encrypted scalar value.
: Build a clear tensor value.
: Build an encrypted tensor value.
: Build a clear tensor value.
: Build an encrypted tensor value.
To select secure cryptographic parameters for usage in Concrete, we utilize the Lattice Estimator. In particular, we use the following workflow:
For a given LWE dimension n and noise standard deviation σ, we obtain raw data from the Lattice Estimator, which ultimately leads to a security level λ. All relevant attacks in the Lattice Estimator are considered.
Increase the value of σ until the pair (n, σ) satisfies the target level of security λ_target.
Repeat for several values of n.
Model Generation for λ_target.
At this point, we have several sets of points (n, σ) satisfying the target level of security λ_target. From here, we fit a model to this raw data (σ as a function of n).
For each model, we perform a verification check to ensure that the values output from the function provide the claimed level of security, λ_target.
by default, this script will generate parameter curves for {80, 112, 128, 192} bits of security, using .
this will compare the four curves generated above against the output of the version of the lattice estimator found in the .
To look at the raw data gathered in step 1., we can look in the . These objects can be loaded in the following way using SageMath:
Entries are tuples of the form: . We can view individual entries via:
To view the interpolated curves we load the verified_curves.sobj
object inside the .
Here we can see the linear model parameters (slope and intercept) along with the security level 128. This linear model can be used to generate secure parameters in the following way: for 128 bits of security, given an LWE dimension n, the required noise size is obtained by evaluating the linear model at n.
This value corresponds to the logarithm of the relative error size. Using the parameter set in the Lattice Estimator confirms a 128-bit security level.
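As a hedged illustration of how such a linear model is applied, the sketch below uses placeholder coefficients (not the verified curve values) to read off the required noise size for a given LWE dimension.

```python
# Placeholder slope/intercept for illustration only; the verified values are
# stored in the curve objects described above.
a, b = -0.026, 2.0
n = 2048  # example LWE dimension

log2_relative_noise = a * n + b
print(f"required log2(noise / q): {log2_relative_noise:.2f}")
```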
Operand, attribute, and result listings from the auto-generated MLIR dialect reference (encrypted boolean and integer operations, runtime futures and dataflow tasks, SDFG processes and streams, TFHE/GLWE ciphertext operations, and low-level tensor and memref operations).
There are two ways to contribute to Concrete. You can:
Open issues to report bugs and typos or suggest ideas;
Request to become an official contributor by emailing hello@zama.ai. Only approved contributors can send pull requests (PRs), so get in touch before you do.
Concrete is a modular framework composed of sub-projects using different technologies, each with its own build system and test suite. Each sub-project has its own README explaining how to set up the developer environment, how to build it, and how to run its tests.
Concrete is made of four main categories of sub-projects, organized in subdirectories at the root of the Concrete repository:
frontends
contains high-level transpilers that target developers who want to use the Concrete stack easily from their usual environment. For now, the Concrete project provides a single frontend: a Python frontend named concrete-python.
compilers
contains the sub-projects in charge of actually solving the compilation problem: turning a high-level abstraction of an FHE program into an actual executable. concrete-optimizer is a Rust-based project that solves the optimization problem of mapping an FHE dag to a TFHE dag, and concrete-compiler, which uses concrete-optimizer, is an end-to-end MLIR-based compiler that takes a crypto-free FHE dialect and generates compilation artifacts for both the client and the server. In addition to the compilation engine, the concrete-compiler project provides client and server libraries that make it easy to use the compilation artifacts to implement a client-server protocol.
backends
contains the C APIs that the concrete-compiler runtime can call to perform the cryptographic operations. There are currently two backends:
concrete-cpu, which uses TFHE-rs and provides the fastest implementation of TFHE on CPU;
concrete-cuda, which provides GPU acceleration of TFHE primitives.
tools
contains every other sub-project that does not fit into the three previous categories and that serves as common support for the others.
This is the module structure of Concrete Python. You are encouraged to check individual .py files to learn more.
concrete
fhe
dtypes: data type specifications (e.g., int4, uint5, float32)
values: value specifications (i.e., data type + shape + encryption status)
representation: representation of computation (e.g., computation graphs, nodes)
tracing: tracing of python functions
extensions: custom functionality (see Extensions)
mlir: computation graph to mlir conversion
compilation: configuration, compiler, artifacts, circuit, client/server, and anything else related to compilation
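As a quick, non-authoritative way to explore this layout from an installed copy (module names follow the listing above; the exact public contents may differ between versions):

```python
# Inspect the installed package layout described above.
import concrete.fhe as fhe

print(fhe.__file__)   # location of the concrete.fhe package on disk
print(dir(fhe))       # names exposed at the top level of concrete.fhe
```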
After compilation, we end up with a set of artifacts, including the crypto parameters and a binary file containing the executable circuit. In order to encrypt and run the circuit properly, we need to know how to interpret these artifacts, and there are utility functions to load them. These utility functions can be accessed from a variety of languages, including Python and C++.
We will use a very simple example for this demo, but the same steps can be followed for any other circuit. example.mlir will contain the MLIR below:
You can use the concretecompiler binary to compile this MLIR program. The same can be done with concrete-python, as we only need the compilation artifacts at the end.
You should be able to see the artifacts listed in the python-demo directory.
Now we want to use the Python bindings in order to call the compiled circuit.
The main struct for managing compilation artifacts is LibrarySupport. You will have to create one with the path you used during compilation, then load the result of the compilation.
Using the compilation result, you can load the server lambda (the entry point to the executable compiled circuit) as well as the client parameters (containing the crypto parameters).
The client parameters allow the client to generate keys and encrypt arguments for the circuit.
Only evaluation keys are required for the execution of the circuit. You can execute the circuit on the encrypted arguments via server_lambda_call.
At this point you have the encrypted result, which you can decrypt using the keyset that holds the secret key.
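Putting these steps together, here is a rough sketch of the whole flow; the method names (LibrarySupport.new, reload, load_server_lambda, load_client_parameters, key_set, encrypt_arguments, server_lambda_call, get_evaluation_keys, decrypt_result) are assumptions reconstructed from the walkthrough above and may not match the exact bindings API of your Concrete version, so treat it as annotated pseudocode rather than a definitive implementation.

```python
# Illustrative sketch only: names below are assumptions based on the walkthrough above
# and may differ from the actual concrete.compiler bindings of your version.
from concrete import compiler

support = compiler.LibrarySupport.new("python-demo")              # path used during compilation
compilation_result = support.reload()                             # load the existing artifacts

server_lambda = support.load_server_lambda(compilation_result)    # entry point of the circuit
client_parameters = support.load_client_parameters(compilation_result)

keyset = compiler.ClientSupport.key_set(client_parameters)        # secret + evaluation keys
args = compiler.ClientSupport.encrypt_arguments(client_parameters, keyset, [5, 7])

evaluation_keys = keyset.get_evaluation_keys()                    # only these go to the server
encrypted_result = support.server_lambda_call(server_lambda, args, evaluation_keys)

result = compiler.ClientSupport.decrypt_result(client_parameters, keyset, encrypted_result)
print(result)
```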
There are also a couple of tests in test_compilation.py that show how to both compile and run a circuit between a client and a server using serialization.
Fundamentals
Explore the core features.
Guides
Deploy your project.
Tutorials
Learn more with tutorials.