
An improved threshold ring signature scheme based on error correcting codes

Tuesday, 3 April 2012, by Cayrel, Gerhard Hoffman

The following tables show the timings we have obtained for a C implementation.

The test system was an Intel(R) Core(TM)2 Duo CPU E8400@3.00GHz, running Debian 6.0.3. The sources have been compiled using gcc 4.6.2.
In all cases, we used parity check matrices in systematic form. Due to the row-major order of C, the transposed matrices have been stored. The number of the ring members was always set to 100, the number of ring provers to 50.

The tables show the setup time and the time for running the protocol, where the setup time is the time spent generating the necessary public and private keys.

Finally, the use of quasi-dyadic matrices does not allow for all theoretically possible parameters.

For instance, dyadic matrices have dimension 2^p × 2^p (p ∈ N), which means that for quasi-dyadic matrices r = d·2^p for some d ∈ N. In order to have comparable results and a uniform implementation, we have used this restriction for the random and the quasi-cyclic case as well.
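The compact storage comes from the fact that a dyadic matrix is completely determined by its first row h: the entry at position (i, j) equals h[i XOR j]. The following minimal C sketch is purely illustrative and not taken from the actual sources; it shows how any entry can be recovered on demand from the stored first row.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch (not from the actual implementation): a dyadic
 * matrix of size 2^p x 2^p is fully described by its first row h,
 * since entry (i, j) equals h[i XOR j].  Only the first row needs to
 * be kept in memory; any entry can be reconstructed on demand. */
static uint8_t dyadic_entry(const uint8_t *h, size_t i, size_t j)
{
    return h[i ^ j];
}

int main(void)
{
    /* Example first row of a 4 x 4 dyadic matrix (p = 2). */
    const uint8_t h[4] = { 1, 0, 1, 1 };

    for (size_t i = 0; i < 4; i++) {
        for (size_t j = 0; j < 4; j++)
            printf("%d ", dyadic_entry(h, i, j));
        printf("\n");
    }
    return 0;
}
```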

Aguilar et al. scheme
The number of rounds for the scheme has been set to 28 (probability of cheating 2^−16). The dimension of the parity check matrix H^T over F_2 has been set to 704 × 352, but only the redundancy part, of dimension 352 × 352, has been stored in memory. The weight of the secrets has been set to 76.

Aguilar et al. timings for 28 protocol rounds:

Matrix type   | Dim. [n × r] | Weight | Setup [ms] | Protocol [ms] | Total [ms] | Security
Random        | 704 × 352    | 76     | 108.539    | 98.662        | 207.200    | 2^80
Quasi-dyadic  | 704 × 352    | 76     | 811.202    | 474.737       | 1285.939   | 2^80
Quasi-cyclic  | 704 × 352    | 76     | 476.796    | 302.935       | 779.731    | 2^80

Our scheme
For our scheme, the parity check matrices H^T have been chosen over F_2^8, mainly because in this case a field element fits exactly into one byte. The number of rounds has been set to 16 (probability of cheating 2^−16), and the weight of the secrets has been set to 54. The number of ring members was again set to 100 and the number of ring provers to 50.
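Since every element of F_2^8 fits into a single byte, field arithmetic can be carried out directly on bytes. As a rough illustration only (the actual sources may use table lookups and a different reduction polynomial), multiplication in F_2^8 can be sketched as follows, here with the reduction polynomial x^8 + x^4 + x^3 + x + 1:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative multiplication in F_2^8 using shift-and-add.  The
 * reduction polynomial x^8 + x^4 + x^3 + x + 1 is an assumption for
 * this sketch; the actual implementation may use a different
 * polynomial and/or log/antilog tables for speed. */
static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1)
            p ^= a;                 /* add a (XOR) if the current bit of b is set */
        uint8_t carry = a & 0x80;   /* would shifting a overflow degree 7? */
        a <<= 1;
        if (carry)
            a ^= 0x1B;              /* reduce modulo x^8 + x^4 + x^3 + x + 1 */
        b >>= 1;
    }
    return p;
}

int main(void)
{
    /* With this polynomial, 0x57 * 0x83 = 0xC1. */
    printf("0x57 * 0x83 = 0x%02X\n", (unsigned)gf256_mul(0x57, 0x83));
    return 0;
}
```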

Timings for our scheme and 16 protocol rounds:

Matrix type   | Dim. [n × r] | Weight | Setup [ms] | Protocol [ms] | Total [ms] | Security
Random        | 144 × 72     | 54     | 32.979     | 18.499        | 51.477     | 2^80
Quasi-dyadic  | 144 × 72     | 54     | 44.331     | 29.109        | 73.439     | 2^80
Quasi-cyclic  | 144 × 72     | 54     | 38.747     | 26.550        | 65.298     | 2^80

As one can see, the computational cost in the quasi-dyadic and quasi-cyclic cases is always higher than with random parity check matrices. The reason is that the vector-matrix product is more expensive in those cases: each row of the matrix has to be reconstructed on the fly during the multiplication, without ever building the whole matrix in memory. The savings in memory are paid for with additional runtime.
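As a rough illustration of this on-the-fly reconstruction, the following C sketch multiplies a binary vector by a single circulant block that is stored only through its first row. The shift convention and the byte-per-bit representation are choices of this sketch, not necessarily those of the actual implementation.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative sketch: multiply a binary vector x (entries 0/1, one per
 * byte) by a b x b circulant block given only by its first row.  Row i
 * of the block is the first row cyclically shifted by i positions, so
 * each entry is reconstructed on the fly instead of materialising the
 * whole block in memory. */
static void circulant_vec_mul(const uint8_t *first_row, const uint8_t *x,
                              uint8_t *y, size_t b)
{
    for (size_t j = 0; j < b; j++) {
        uint8_t acc = 0;
        for (size_t i = 0; i < b; i++) {
            /* entry (i, j) of the circulant block: first_row[(j - i) mod b] */
            uint8_t entry = first_row[(j + b - i) % b];
            acc ^= (uint8_t)(x[i] & entry);   /* addition in F_2 is XOR */
        }
        y[j] = acc;
    }
}

int main(void)
{
    const uint8_t first_row[4] = { 1, 0, 1, 0 };
    const uint8_t x[4]         = { 1, 1, 0, 0 };
    uint8_t y[4];

    circulant_vec_mul(first_row, x, y, 4);
    printf("%d %d %d %d\n", y[0], y[1], y[2], y[3]);
    return 0;
}
```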

The implementation is provided as a proof of concept. For instance, the communication between the leader and the provers takes place on the same machine, even inside the same executable. In reality, the provers would be located on different computers, possibly with different architectures, connected to the leader via network connections and the like. In such a heterogeneous scenario, the communication latency of those network connections would have to be taken into account. It is also possible that some provers run on very fast machines while others run on very slow ones; the interaction would then be dominated by the slowest prover.

Transforming into a signature scheme
Similar to the general technique of Fiat and Shamir,
we can transform our scheme into a signature scheme. The idea is that the signing and verifying parts are handled separately: first, the leader simulates the challenge step of the verifier using a public stream cipher or a public hash function with some predefined starting value involving the document to sign. The protocol is then run without further verification, but all the data needed for verification, in particular the master commitments, have to be recorded. These data form the signature.

The verifier uses the recorded data to run the protocol without the signing side. For the challenge step, the verifier uses the same starting value for the stream cipher or hash function as the signing part did. The document is part of this starting value. A consequence of this approach is that the signatures become quite large as everything needed for verification has to be recorded in the signature.
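The following sketch illustrates the idea of deriving the per-round challenges deterministically from the document and the master commitments. The toy absorb/squeeze functions are only a stand-in for a real hash or stream cipher (the implementation relies on Keccak for this purpose); they are not cryptographically secure and serve purely as an illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Toy extendable-output mixer used ONLY as a placeholder for a real
 * hash/stream cipher such as Keccak.  Both signer and verifier feed in
 * the same starting value (document plus master commitments) and thus
 * recompute the same challenge sequence. */
typedef struct { uint64_t state; } toy_xof;

static void toy_absorb(toy_xof *x, const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)   /* mix every input byte into the state */
        x->state = x->state * 6364136223846793005ULL
                 + data[i] + 1442695040888963407ULL;
}

static uint8_t toy_squeeze(toy_xof *x)
{
    x->state ^= x->state >> 33;
    x->state *= 0xff51afd7ed558ccdULL;
    x->state ^= x->state >> 33;
    return (uint8_t)(x->state & 0xff);
}

int main(void)
{
    const char    document[]   = "document to sign";
    const uint8_t commitment[] = { 0xde, 0xad, 0xbe, 0xef }; /* placeholder master commitment */

    toy_xof x = { 0 };
    toy_absorb(&x, (const uint8_t *)document, sizeof document - 1);
    toy_absorb(&x, commitment, sizeof commitment);

    /* One challenge b in {0, 1, 2} per protocol round. */
    for (int round = 0; round < 16; round++)
        printf("round %2d: challenge b = %d\n", round, toy_squeeze(&x) % 3);

    return 0;
}
```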

In the next table, we give some timings for the resulting signature scheme. We used the same settings as above, but ran the protocol with random matrices only; the memory savings offered by the other matrix types are negligible compared to the resulting signature sizes.

The signature sizes are not fixed, but show a small variation depending on the values chosen during the challenge step. More specifically, the answers transmitted for the cases b = 0, 1, 2 vary in size, which leads to varying signature sizes as well. The values given are therefore averages obtained while running the protocol for 80 rounds (probability of cheating 2^−80).

Doc. [MiB] | Sig. [MiB] | Dim. [n × r] | Weight | Signing [ms] | Verification [ms] | Total [ms] | Security
1          | 4          | 144 × 72     | 54     | 544          | 454               | 998        | 2^80
10         | 13         | 144 × 72     | 54     | 3643         | 3551              | 7194       | 2^80
25         | 28         | 144 × 72     | 54     | 8803         | 8700              | 17503      | 2^80

Below, one can find some information concerning the core source files on which our results are based.

Note that one needs to have the Keccak files installed, as well as a secure random generator and an implementation of a bitvector. A bitvector allows one to address the bits of an arbitrary byte stream in a convenient way; a minimal sketch is given below.
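For illustration, such a bitvector could look as follows; the bit ordering within a byte is a choice of this sketch and not necessarily that of the actual sources.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Minimal bitvector sketch (illustrative, not the interface of the
 * actual sources): address single bits of an arbitrary byte buffer. */
static int bv_get(const uint8_t *buf, size_t bit)
{
    return (buf[bit / 8] >> (bit % 8)) & 1;
}

static void bv_set(uint8_t *buf, size_t bit, int value)
{
    if (value)
        buf[bit / 8] |= (uint8_t)(1u << (bit % 8));
    else
        buf[bit / 8] &= (uint8_t)~(1u << (bit % 8));
}

int main(void)
{
    uint8_t buf[2] = { 0 };

    bv_set(buf, 3, 1);
    bv_set(buf, 12, 1);
    printf("bit 3 = %d, bit 12 = %d, bit 5 = %d\n",
           bv_get(buf, 3), bv_get(buf, 12), bv_get(buf, 5));
    return 0;
}
```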
