One of the most common operations in Concrete is the Table Lookup (TLU). All operations except addition, subtraction, multiplication by non-encrypted values, tensor manipulation operations, and a few operations built from those primitives (e.g., matmul, conv) are converted to Table Lookups under the hood.
Table Lookups are very flexible: they allow Concrete to support many operations, but they are expensive. The exact cost depends on many variables (hardware used, error probability, etc.), but TLUs are always much more expensive than other operations. You should try to avoid them as much as possible. It's not always possible to avoid them completely, but you can often reduce the number of TLUs or replace some of them with other primitive operations.
Concrete automatically parallelizes TLUs if they are applied to tensors.
Concrete provides a `LookupTable` class to create your own tables and apply them in your circuits.
`LookupTable`s can have any number of elements; let's call the number of elements N. As long as the lookup variable is within the range [-N, N), the Table Lookup is valid.
If you go outside of this range, you will receive an error.
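The exact message depends on the version, the index, and the table size, but it is an index error along these lines:

```
IndexError: index 10 is out of range for LookupTable of size 6
```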
You can create the lookup table using a list of integers and apply it using indexing:
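For example (a minimal sketch; the table values here are arbitrary, chosen only for illustration):

```python
from concrete import fhe

table = fhe.LookupTable([2, -1, 3, 0])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

inputset = range(4)
circuit = f.compile(inputset)

assert circuit.encrypt_run_decrypt(0) == table[0] == 2
assert circuit.encrypt_run_decrypt(3) == table[3] == 0
```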
When you apply a table lookup to a tensor, the scalar table lookup is applied to each element of the tensor:
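For instance (a sketch reusing the table above, applied element-wise to a 2×3 tensor):

```python
import numpy as np
from concrete import fhe

table = fhe.LookupTable([2, -1, 3, 0])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

inputset = [np.random.randint(0, 4, size=(2, 3)) for _ in range(10)]
circuit = f.compile(inputset)

sample = np.array([[0, 1, 3], [2, 3, 1]])
expected = np.array([[2, -1, 0], [3, 0, -1]])
assert np.array_equal(circuit.encrypt_run_decrypt(sample), expected)
```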
`LookupTable` mimics array indexing in Python, which means if the lookup variable is negative, the table is looked up from the back:
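For example (a sketch, same table as before):

```python
from concrete import fhe

table = fhe.LookupTable([2, -1, 3, 0])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[-x]

inputset = range(1, 5)
circuit = f.compile(inputset)

assert circuit.encrypt_run_decrypt(1) == table[-1] == 0
assert circuit.encrypt_run_decrypt(4) == table[-4] == 2
```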
If you want to apply a different lookup table to each element of a tensor, you can have a `LookupTable` of `LookupTable`s:
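For example (a sketch: a 3×2 input where each position gets its own table):

```python
import numpy as np
from concrete import fhe

squared = fhe.LookupTable([i ** 2 for i in range(4)])
cubed = fhe.LookupTable([i ** 3 for i in range(4)])

table = fhe.LookupTable([
    [squared, cubed],
    [squared, cubed],
    [squared, cubed],
])

@fhe.compiler({"x": "encrypted"})
def f(x):
    return table[x]

inputset = [np.random.randint(0, 4, size=(3, 2)) for _ in range(10)]
circuit = f.compile(inputset)

sample = np.array([[0, 1], [2, 3], [3, 0]])
expected = np.array([[0, 1], [4, 27], [9, 0]])
assert np.array_equal(circuit.encrypt_run_decrypt(sample), expected)
```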
In this example, we applied a `squared` table to the first column and a `cubed` table to the second column.
Concrete tries to fuse some operations into table lookups automatically so that lookup tables don't need to be created manually:
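For example (a sketch): the floating-point operations below sit between an integer input and an integer output, so they can be fused into a single TLU:

```python
import numpy as np
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    # sin produces floats; the whole float subgraph is fused into one table lookup
    return (42 * np.sin(x)).astype(np.int64) // 10

inputset = range(8)
circuit = f.compile(inputset)
```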
All lookup tables need to be from integers to integers. So, without `.astype(np.int64)`, Concrete will not be able to fuse.
The function is first traced into a computation graph of primitive operations; Concrete then fuses the appropriate nodes (here, the floating-point subgraph between the integer input and the integer cast) into a single table lookup.
Fusing makes the code more readable and easier to modify, so try to utilize it over manual `LookupTable`s as much as possible.
We refer users to this page for explanations about the `fhe.univariate(function)` and `fhe.multivariate(function)` features, which are convenient ways to use automatically created table lookups.
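As a quick taste (a sketch assuming the `fhe.univariate` helper; `mix` is a hypothetical function name, and the linked page is the authoritative reference):

```python
from concrete import fhe

def mix(x):
    # arbitrary integer-to-integer function that would be
    # awkward to express with primitive operations
    return x ** 2 if x % 2 == 0 else x ** 3

@fhe.compiler({"x": "encrypted"})
def f(x):
    return fhe.univariate(mix)(x)

inputset = range(4)
circuit = f.compile(inputset)

assert circuit.encrypt_run_decrypt(3) == 27
```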
TLUs are performed with an FHE operation called Programmable Bootstrapping (PBS). PBSs have a certain probability of error: when these errors happen, they result in inaccurate outputs.
Let's say you have the table:
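For instance, a table of squares (chosen here so that `lut[4] = 16`, as in the example below):

```python
from concrete import fhe

lut = fhe.LookupTable([0, 1, 4, 9, 16, 25, 36, 49, 64])
```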
And you perform a Table Lookup using `4`. The result you should get is `lut[4] = 16`, but because of the possibility of error, you could get any other value in the table.
The probability of this error can be configured through the `p_error` and `global_p_error` configuration options. The difference between these two options is that `p_error` is for individual TLUs, while `global_p_error` is for the whole circuit.
If you set `p_error` to `0.01`, for example, it means every TLU in the circuit will have a 99% chance (or more) of being exact. If there is a single TLU in the circuit, it corresponds to `global_p_error = 0.01` as well. But if there are 2 TLUs, `global_p_error` would be higher: 1 - (0.99 * 0.99) ≈ 0.02 = 2%.
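More generally, for n independent TLUs that each fail with probability `p_error`, the circuit-level error probability behaves like the following (a sketch of the arithmetic, not library code):

```python
def global_error(p_error: float, n: int) -> float:
    # probability that at least one of n independent TLUs is wrong
    return 1 - (1 - p_error) ** n

assert abs(global_error(0.01, 2) - 0.0199) < 1e-12
```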
If you set `global_p_error` to `0.01`, the whole circuit will have at most a 1% probability of error, no matter how many Table Lookups are included (which means that `p_error` will be smaller than `0.01` if there is more than a single TLU).
If you set both of them, both will be satisfied. Essentially, the stricter one will be used.
By default, both `p_error` and `global_p_error` are set to `None`, which results in a `global_p_error` of `1 / 100_000` being used.
Feel free to play with these configuration options to pick the one best suited for your needs! See How to Configure to learn how you can set a custom `p_error` and/or `global_p_error`.
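For reference, here is a sketch of how these options can be set at compilation time (assuming they are accepted as keyword arguments of `compile`, as covered in How to Configure):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    return x ** 2

inputset = range(16)

# bound the error probability of each individual TLU
circuit = f.compile(inputset, p_error=0.001)

# or bound the error probability of the whole circuit
circuit = f.compile(inputset, global_p_error=0.01)
```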
Configuring either of these variables impacts compilation and execution times (compilation, key generation, circuit execution) and space requirements (size of the keys on disk and in memory). Lower error probabilities result in longer compilation and execution times and larger space requirements.
PBSs are very expensive in terms of computation. Fortunately, it is sometimes possible to replace a PBS with a rounded PBS, a truncated PBS, or even an approximate PBS. These TLUs have slightly different semantics, but they are very useful in cases like machine learning, where they bring more efficiency without a drop in accuracy.
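For instance, here is a sketch assuming the `fhe.truncate_bit_pattern` helper: clearing the least-significant bits of the input before the lookup lets the compiler use a smaller, cheaper table, at the cost of a controlled approximation of the input:

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def f(x):
    # drop the 2 least-significant bits, so the following TLU
    # effectively operates on a 3-bit value instead of a 5-bit one
    x = fhe.truncate_bit_pattern(x, lsbs_to_remove=2)
    return x ** 2

inputset = range(32)
circuit = f.compile(inputset)

# inputs are truncated down to a multiple of 4 before the lookup
assert circuit.encrypt_run_decrypt(6) == 16
```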