Table Lookups (advanced)
This document details the management of Table Lookups (TLUs) within Concrete for advanced usage. For a simpler introduction, refer to the basic Table Lookup guide.
One of the most common operations in Concrete is the Table Lookup (TLU). All operations except addition, subtraction, multiplication by non-encrypted values, tensor manipulation operations, and a few operations built from those primitives (e.g., matmul, conv) are converted to Table Lookups under the hood.
Table Lookups are very flexible. They allow Concrete to support many operations, but they are expensive. The exact cost depends on many variables (hardware used, error probability, etc.), but they are always much more expensive than other operations. You should try to avoid them as much as possible. It's not always possible to avoid them completely, but you can try to reduce the number of TLUs or replace some of them with other primitive operations.
Concrete provides a `LookupTable` class to create your own tables and apply them in your circuits. You can create a lookup table from a list of integers and apply it using indexing:
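As a plain-Python sketch of these semantics (the identifiers below are illustrative; in Concrete itself the table would be an `fhe.LookupTable` used inside a compiled circuit), a table lookup is simply indexing:

```python
# Illustrative sketch: a lookup table is a list of integers,
# and applying it is just indexing with the input value.
table = [2, -1, 3, 0]

def f(x):
    return table[x]

print(f(2))  # 3
```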
When you apply a table lookup to a tensor, the scalar table lookup is applied to each element of the tensor:
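This elementwise behavior can be sketched with NumPy integer-array indexing (an illustrative model, not the Concrete API itself):

```python
import numpy as np

table = np.array([2, -1, 3, 0])
x = np.array([[0, 1], [2, 3]])

# Indexing a NumPy array with an integer array applies the
# lookup to each element independently.
result = table[x]
print(result)
# [[ 2 -1]
#  [ 3  0]]
```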
`LookupTable` mimics array indexing in Python, which means that if the lookup variable is negative, the table is indexed from the back:
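A plain-Python illustration of this wrap-around behavior (regular Python list indexing behaves the same way):

```python
table = [2, -1, 3, 0]

# Negative inputs index from the back, as in regular Python.
print(table[-1])  # 0
print(table[-2])  # 3
```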
If you want to apply a different lookup table to each element of a tensor, you can use a `LookupTable` of `LookupTable`s:
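A plain-Python sketch of the idea (illustrative model, not the Concrete API): one table per column, with the `squared` table on the first column and the `cubed` table on the second.

```python
import numpy as np

squared = [v ** 2 for v in range(4)]  # [0, 1, 4, 9]
cubed = [v ** 3 for v in range(4)]    # [0, 1, 8, 27]
tables = [squared, cubed]

x = np.array([[2, 3], [1, 2]])

# Column j of the input is looked up in tables[j].
result = np.array([
    [tables[j][x[i, j]] for j in range(x.shape[1])]
    for i in range(x.shape[0])
])
print(result)
# [[ 4 27]
#  [ 1  8]]
```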
In this example, we applied a `squared` table to the first column and a `cubed` table to the second column.
Concrete tries to fuse some operations into table lookups automatically, so lookup tables don't need to be created manually.
The function is first traced into a computation graph of primitive operations; Concrete then fuses the appropriate nodes of that graph into table lookups.
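What fusion amounts to can be sketched in plain Python (the function below is hypothetical and the bit-width is an assumption): a chain of univariate steps on a single input is equivalent to one table over that input's range, so the whole chain can be replaced by a single TLU.

```python
# Hypothetical function made of several univariate steps on one input;
# Concrete would trace each step as a separate node in the graph.
def f(x):
    return round((x ** 2) / 2)

# After fusion, the whole chain becomes a single table over the
# input's range (here assumed to be 3-bit unsigned, i.e. 0..7).
fused_table = [round((x ** 2) / 2) for x in range(8)]

# The fused table produces exactly the same outputs as the chain.
assert all(f(x) == fused_table[x] for x in range(8))
```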
TLUs are performed with an FHE operation called programmable bootstrapping (PBS). PBSs have a certain probability of error: when these errors happen, they result in inaccurate outputs.
Let's say you have the table:
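For instance (a table consistent with the example below, where `lut[4] = 16`), the table of squares:

```python
# The example table: squares of 0..7, so that lut[4] == 16.
lut = [x ** 2 for x in range(8)]
print(lut)  # [0, 1, 4, 9, 16, 25, 36, 49]
```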
And you perform a Table Lookup using `4`. The result you should get is `lut[4] = 16`, but because of the possibility of error, you could get any other value in the table.
The probability of this error can be configured through the `p_error` and `global_p_error` configuration options. The difference between the two is that `p_error` applies to individual TLUs, while `global_p_error` applies to the whole circuit.
If you set `p_error` to `0.01`, for example, every TLU in the circuit will have a 99% chance (or more) of being exact. If there is a single TLU in the circuit, this corresponds to `global_p_error = 0.01` as well. But with 2 TLUs, `global_p_error` would be higher: `1 - (0.99 * 0.99) ≈ 0.02 = 2%`.
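The arithmetic behind this can be checked directly (the helper function below is illustrative, not part of the Concrete API): with independent per-TLU error probability `p_error`, the chance that at least one of `n` TLUs is wrong is `1 - (1 - p_error) ** n`.

```python
# Probability that at least one TLU in the circuit is inexact,
# given a per-TLU error probability and the number of TLUs.
def circuit_error_probability(p_error, n_tlus):
    return 1 - (1 - p_error) ** n_tlus

print(circuit_error_probability(0.01, 1))  # ~0.01 (single TLU)
print(circuit_error_probability(0.01, 2))  # ~0.0199, i.e. about 2%
```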
If you set `global_p_error` to `0.01`, the whole circuit will have at most a 1% probability of error, no matter how many Table Lookups it contains (which means `p_error` will be smaller than `0.01` if there is more than a single TLU).
If you set both of them, both will be satisfied. Essentially, the stricter one will be used.
By default, both `p_error` and `global_p_error` are set to `None`, which results in a `global_p_error` of `1 / 100_000` being used.
We refer users to the relevant documentation for explanations about the `fhe.univariate(function)` and `fhe.multivariate(function)` features, which are convenient ways to use automatically created table lookups.
Feel free to play with these configuration options to pick the one best suited for your needs! See the configuration documentation to learn how to set a custom `p_error` and/or `global_p_error`.
PBSs are very expensive in terms of computation. Fortunately, it is sometimes possible to replace an exact PBS with a cheaper variant, such as a rounded or even an approximate table lookup. These TLUs have slightly different semantics, but they are very useful in cases like machine learning, where they bring more efficiency without a drop in accuracy.