Introduction

Ferveo is a fast platform for distributed key generation and threshold decryption that can be integrated into Tendermint. Using Ferveo, a validator set can generate a distributed private key where each validator's share of the private key is weighted proportionally to their staked amount. The distributed key generation (DKG) is efficient enough to perform once per epoch and the underlying cryptographic schemes allow for real-time decryption of encrypted transactions. Ferveo allows the validator set to commit to encrypted transactions prior to threshold decryption, preventing public visibility of pending transactions and helping prevent transaction front-running.

Overview

Goals

The cryptography in Ferveo should allow:

  • Fast distributed key generation
  • Fast threshold signatures
  • Fast threshold decryption

with a set of weighted participants, where the relative weights of participants are determined by cryptoeconomics/staking values. Because relative weighting requires many more shares of the distributed key than other distributed key protocols, all primitives in Ferveo are highly optimized and run in nearly linear time.

Synchronization

Ferveo does not use an asynchronous DKG protocol. Instead, Ferveo runs on top of a synchronizing consensus layer such as Tendermint, though some messages may be gossiped peer-to-peer when synchrony is not required. The use of an underlying consensus layer allows better performance than using an asynchronous DKG because the DKG can use the consensus and censorship resistance features of the blockchain to save on computation and communication costs.

Preliminaries

Ferveo is intended to integrate into an existing blockchain framework such as Tendermint, so the validator set is the natural participant set for Ferveo. Therefore, Ferveo assumes that all \(n\) validators have an associated Ed25519 public key identity and have staked some amount of tokens for the current epoch. Ferveo derives from the staking amounts an associated relative weight \(w_i\). The total weight \(\sum_i w_i = W\) is a fixed parameter of the system, determined by a performance tradeoff (higher total weight allows more identities and higher resolution of weights, but larger computation and communication complexity). For performance reasons, \(W\) is ideally a power of two.

Lower bounds on \(W\)

In general, only performance concerns place a practical upper bound on how large \(W\) may be. The practical lower bound on \(W\) is determined by two factors:

  • In order to ensure liveness of transaction decryption, ideally every subset of validators with network stake at least 2/3 should have at least 2/3 of key shares; that way an otherwise live network will not stop waiting for threshold decryption. However, because of rounding this is difficult to guarantee. Having a larger value of \(W\) with increased resolution of weights narrows the gap between the relative network stake of a validator subset and the relative key share of that subset.
  • There should not be significant incentive for validators to strategically allocate stake among multiple identities to gain additional key shares through resolution or rounding, for example by creating many minimally staked validator identities.

System model and assumptions

The security model is intended to match the Tendermint security model. Among the validator set, safety and liveness are guaranteed as long as less than 1/3 by weight is Byzantine (malicious, or otherwise not following the protocol) or faulty. When faulty validators crash and recover, they can resume the protocol, but liveness is only guaranteed while sufficient weight is honest and live. If proactive secret sharing is used to refresh a key during an epoch, it is assumed that less than 1/3 by weight is Byzantine during each key phase.

Synchronous VSS can achieve resilience with threshold \(t\) when \(W \ge 2t+1\), implying \(t < W/2\). The privacy threshold \(p\) is the value such that subsets of nodes of weight at least \(p+1\) can always recover the key or perform operations using the key, while subsets of nodes of weight at most \(p\) are unable to recover the key or perform operations using the key. It must hold that \(p < W - t\). The default values \(t = W/3 - 1\) and \(p = 2W/3\) are designed to match the resilience of Tendermint.

Liveness guarantees

Ferveo depends on the liveness guarantees of the Tendermint protocol; if Tendermint fails to provide liveness, the DKG and threshold operations of Ferveo will also stop. Conversely, whenever Tendermint can provide liveness, Ferveo's DKG and threshold operations will also be live. Therefore, any assumptions about or improvements to Tendermint's liveness guarantees apply to Ferveo as well. Ferveo uses a Publicly Verifiable DKG specifically to match the liveness of the consensus layer.

Cryptographic Primitives

Curve

Ferveo's distributed key generator and threshold cryptographic schemes rely on the use of a bilinear group where the Gap Diffie-Hellman assumption holds.

The default curve used is BLS12-381. Optionally, BLS12-377 may also be implemented, which would allow easier SNARK verification of the DKG or other cryptographic primitives, or BLS12-461 at a higher security level. For documentation purposes, an abstract bilinear group is assumed.

\(\mathbb{G}_1\) and \(\mathbb{G}_2\) denote the prime order subgroups of order \(r\), and \(\mathbb{F}_r\) is the scalar field of the curve with prime order \(r\). The pairing operation is \(e(P,Q) : \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T\). The generators of \(\mathbb{G}_1\) and \(\mathbb{G}_2\) are denoted \(G_1\) and \(G_2\) respectively.

Let \(\omega\) denote a \(W\)th root of unity in \(\mathbb{F}_r\). For highest performance, \(\mathbb{F}_r\) should be a highly 2-adic field, such as the scalar field of BLS12-381, to allow for FFT operations.

Let \(G \in \mathbb{G}_1\) and \(H \in \mathbb{G}_2\) denote an independent generator in each group.

Notation

\(\mathbb{G}_1\) and \(\mathbb{G}_2\) are written as additive groups, where the repeated group operation, multiplication of a point \(P\) by a scalar \(s\), is written \([s] P\). \(\mathbb{G}_T\) is written as a multiplicative group, where the repeated group operation, exponentiation of a point \(g\) by a scalar \(s\), is written \(g^s\).

Fast subgroup checks

All subgroup checks of membership in the subgroup \(\mathbb{G}_1\) and \(\mathbb{G}_2\) are performed as described in https://eprint.iacr.org/2019/814.pdf for performance reasons.

Hashing

Let \(\operatorname{H}_{\mathbb{G}}: \{0,1\}^* \rightarrow \mathbb{G}\) be the hash to curve function into the group \(\mathbb{G}\), as specified in the IRTF hash-to-curve draft: https://datatracker.ietf.org/doc/draft-irtf-cfrg-hash-to-curve/

Let \(\operatorname{H}_{\mathbb{F}}: \{0,1\}^* \rightarrow \mathbb{F}\) be the hash to field function into the field \(\mathbb{F}\)

Let \(\operatorname{H}_{\ell}: \{0,1\}^* \rightarrow \{0,1\}^\ell\) be a hash function into \(\ell\) bits. The default hash function is BLAKE2b.

Symmetric Cryptography

The authenticated encryption and decryption operations with key \(k\), ciphertext \(C\), and plaintext \(M\) are denoted:

\[C = \operatorname{encrypt}(k, M)\] \[M = \operatorname{decrypt}(k, C)\]

Symmetric key encryption and decryption are provided by the ChaCha20Poly1305 (RFC8439) cipher, implemented as the chacha20poly1305 crate.
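
A minimal sketch of how this notation maps onto the chacha20poly1305 crate (0.10-series API). The fixed zero nonce is an assumption that holds only because each key \(k\) is derived via HKDF from a fresh ephemeral shared secret and never reused:

```rust
use chacha20poly1305::{
    aead::{Aead, KeyInit},
    ChaCha20Poly1305, Key, Nonce,
};

// C = encrypt(k, M)
fn encrypt(k: &[u8; 32], m: &[u8]) -> Vec<u8> {
    let cipher = ChaCha20Poly1305::new(Key::from_slice(k));
    // Zero nonce: assumed safe only because k is a single-use HKDF output.
    cipher
        .encrypt(Nonce::from_slice(&[0u8; 12]), m)
        .expect("encryption failure")
}

// M = decrypt(k, C); returns None if authentication fails
fn decrypt(k: &[u8; 32], c: &[u8]) -> Option<Vec<u8>> {
    let cipher = ChaCha20Poly1305::new(Key::from_slice(k));
    cipher.decrypt(Nonce::from_slice(&[0u8; 12]), c).ok()
}
```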

Publicly Verifiable Distributed Key Generation

Ferveo uses a Publicly Verifiable Distributed Key Generator.

The Aggregatable DKG scheme of Kobi Gurkan, Philipp Jovanovic, Mary Maller, Sarah Meiklejohn, Gilad Stern, and Alin Tomescu uses a similar approach to obtain an \( O(n \log n)\) time asynchronous DKG; Ferveo instead assumes a synchronous communication model.

The primary advantage of a Publicly Verifiable DKG is that no complaint or dispute round is necessary; every validator can check that the DKG succeeded correctly, even validators that remain offline during the entire DKG. Such faulty validators can come online at any point during the epoch and, after syncing the missed blocks, immediately recover their key shares.

The primary disadvantage of a Publicly Verifiable DKG is that most schemes produce private key shares consisting of group elements instead of scalar field elements, and thus are incompatible with many existing cryptographic primitives. Ferveo works around this issue by using novel cryptographic primitives, still based on standard cryptographic assumptions, that are compatible with the private key shares generated.

Some Publicly Verifiable DKG schemes, such as Groth21, produce field private key shares. Such a scheme may be evaluated for use in Ferveo at a later date.

Parameters

In addition to the two independent generators \(G \in \mathbb{G}_1\) and \(H \in \mathbb{G}_2\), a third independent generator \(\hat{u}_1 \in \mathbb{G}_2\) is selected.

Epoch keys

Each validator \(i\) generates an epoch keypair: a private decryption key \(dk_i \in \mathbb{F}_r\), and a public encryption key \(ek_i \in \mathbb{G}_2\). The encryption key is derived from the decryption key:

\[ek_i = [dk_i] H \]

Each validator is required to generate an epoch keypair at genesis, or upon joining the validator set. Each validator should generate and announce a new epoch public key once per epoch, but in the event that a validator does not announce a new epoch public key during an epoch, the last announced epoch public key should be used in the DKG. For this reason, each validator should persist their latest epoch private key on disk.
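
A sketch of epoch keypair generation, assuming the arkworks 0.4-series crates (ark-bls12-381, ark-ec, ark-std) and, for illustration only, the standard \(\mathbb{G}_2\) generator in place of the independent generator \(H\):

```rust
use ark_bls12_381::{Fr, G2Projective};
use ark_ec::Group;
use ark_std::{rand::Rng, UniformRand};

/// dk is a uniformly random scalar; ek = [dk] H.
fn generate_epoch_keypair<R: Rng>(rng: &mut R) -> (Fr, G2Projective) {
    let dk = Fr::rand(rng);
    // Illustrative: the standard G2 generator stands in for H.
    let ek = G2Projective::generator() * dk;
    (dk, ek)
}
```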

Publicly Verifiable Secret Sharing

The validators should each generate exactly one PVSS instance as a dealer, and include that instance as a VoteExtension to a specially designated DKG block. The next block proposer is responsible for verifying and aggregating at least 2/3 by weight of PVSS instances, and including the aggregation in the next block.

For performance reasons, the aggregating validator may sort the PVSS instances by decreasing validator weight, and only include sufficient instances to reach the necessary 2/3 total weight. PVSS instances above the 2/3 weight threshold are ignored.

In case a dealer's PVSS instance does not verify as correct, that instance is discarded (and penalties may be imposed).

Output

Once 2/3 by weight of correct PVSS instances have been aggregated into a single PVSS instance, the commitment to the constant term of the aggregated PVSS instance, \(F_0\), is the public key output \(Y\) of the PVDKG, and each validator's aggregated private key shares \(Z_{i,\omega_j}\) are the private key shares associated with \(Y\).

Initialization

Each DKG session begins by choosing a unique integer session id \(\tau\). This can begin at 0 and then be incremented from the previous \(\tau\). When strongly integrated into Tendermint, the epoch number can be used as \(\tau\), with the note that additional DKG sessions within an epoch (for example, to do key refresh) must use a unique \(\tau\).

Share partitioning

In general the validators' staking weights will total much more than \(W\), the number of shares issued in the DKG; therefore, the staking weights have to be scaled and rounded.

The weight-assignment algorithm achieves exactly the desired total weight. Initially, every participant's weight is scaled and rounded down to the nearest integer. The weight assigned this way falls short of the desired total by less than the number of participants, so weight at most 1 can be added to each participant, in order of decreasing staked weight, until the total desired weight is reached. After all weight is assigned, each participant's weight is at most 1 away from their fractional scaled weight. A sketch of this assignment appears below.
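
A minimal sketch of this assignment, assuming plain integer stakes (the helper name and the choice to hand out leftover shares in order of decreasing stake are illustrative, not Ferveo's actual implementation):

```rust
/// Scale stakes to exactly `total_weight` shares: round everyone down,
/// then hand out the (fewer than stakes.len()) leftover shares one each.
fn partition_weights(stakes: &[u64], total_weight: u64) -> Vec<u64> {
    let total_stake: u128 = stakes.iter().map(|s| *s as u128).sum();
    // Scale and round down: floor(stake * W / total_stake).
    let mut weights: Vec<u64> = stakes
        .iter()
        .map(|s| (*s as u128 * total_weight as u128 / total_stake) as u64)
        .collect();
    let mut assigned: u64 = weights.iter().sum();
    // Distribute the remainder, at most one extra share per participant.
    let mut order: Vec<usize> = (0..stakes.len()).collect();
    order.sort_by_key(|&i| std::cmp::Reverse(stakes[i]));
    for &i in &order {
        if assigned == total_weight {
            break;
        }
        weights[i] += 1;
        assigned += 1;
    }
    weights
}
```

For example, stakes \((100, 50, 30)\) with \(W = 16\) scale down to \((8, 4, 2)\), leaving two shares to distribute, for final weights \((9, 5, 2)\).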

Using the consensus layer, all validators should agree on a canonical ordering of \((pk_i, w_i)\) where \(pk_i\) is the public key of the \(i\)th validator and \(w_i\) is the number of shares belonging to node \(i\). The value \(i\) is the integer id of the node with public key \(pk_i\).

Let \(\Psi_{i} = \{ a, a+1, \ldots, a+w_i - 1 \}\) be the disjoint partition described above such that \(\cup_i \Psi_{i} = \{0, 1, \ldots, W-1\}\), and let \(\Omega_{i} = \{ \omega^k \mid k \in \Psi_{i} \}\). \(\Psi_i\) is the set of share indexes assigned to the \(i\)th validator and \(\Omega_i\) is the share domain of the \(i\)th validator.

Publicly Verifiable Secret Sharing

The PVSS scheme used is a modified Scrape PVSS.

Dealer's role

  1. The dealer chooses a uniformly random polynomial \(f(x) = \sum_{i=0}^{t} a_i x^i \) of degree \(t\).
  2. Let \(F_0, \ldots, F_t \leftarrow [a_0] G_1, \ldots, [a_t] G_1 \)
  3. Let \(\hat{u}_2 \leftarrow [a_0] \hat{u}_1 \)
  4. For each validator \(i\), for each \(\omega_j \in \Omega_i\), encrypt the evaluation \( \hat{Y}_{i, \omega_j} \leftarrow [f(\omega_j)] ek_i \)
  5. Post the signed message \(\tau, (F_0, \ldots, F_t), \hat{u}_2, (\hat{Y}_{i,\omega_j})\) to the blockchain

Public verification

  1. Check \(e(F_0, \hat{u}_1) = e(G_1, \hat{u}_2)\)
  2. Compute by FFT \(A_1, \ldots, A_W \leftarrow [f(\omega_0)] G_1, \ldots, [f(\omega_{W-1})] G_1 \)
  3. Partition \(A_1, \ldots, A_W\) into \(A_{i,\omega_j}\) for validator \(i\)'s shares \(\omega_j\)
  4. For each encrypted share \(\hat{Y}_{i,\omega_j}\), check \(e(G_1, \hat{Y}_{i,\omega_j}) = e(A_{i,\omega_j}, ek_i)\)

Aggregation

Two PVSS instances \( (\{F_j\}, \hat{u}_2, \{\hat{Y}_{i,\omega_j}\}) \) may be aggregated into a single PVSS instance by adding elementwise each of the corresponding group elements.

Validator decryption of private key shares

A validator \(i\) recovers their private key shares \(Z_{i,\omega_j}\) from the shares encrypted to their public encryption key \(ek_i\):

\[Z_{i, \omega_j} = [dk_i^{-1}] \hat{Y}_{i, \omega_j} \]
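
A sketch of this recovery step under the same assumed arkworks setup; since \(\hat{Y}_{i,\omega_j} = [f(\omega_j)\, dk_i] H\), multiplying by \(dk_i^{-1}\) yields \(Z_{i,\omega_j} = [f(\omega_j)] H\):

```rust
use ark_bls12_381::{Fr, G2Projective};
use ark_ff::Field;

/// Z_{i,ω} = [dk^{-1}] Ŷ_{i,ω}
fn decrypt_private_key_share(dk: Fr, y_hat: G2Projective) -> G2Projective {
    y_hat * dk.inverse().expect("dk is a nonzero scalar")
}
```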

Public Aggregation

Multiple PVSS instances can be aggregated into one by a single validator, speeding up verification time. The aggregation and verification are similar to the Aggregatable DKG paper.

Consensus

It is critical that all validators agree on which PVSS instances are used to create the final key; in particular, this is exactly what makes Ferveo depend on a synchronous consensus protocol like Tendermint. Therefore, the validators must all verify the PVSS instances and agree on the set of valid PVSS instances; or in the case where a validator has aggregated all PVSS instances, the validator set must agree on a valid aggregation of PVSS instances.

Full nodes can certainly perform the verification of a PVSS instance or aggregation, but they are not required to verify either.

Threshold Encryption Scheme

Based on A Simple and Efficient Threshold Cryptosystem from the Gap Diffie-Hellman Group

Overview

The threshold encryption scheme allows the encrypter to derive a shared secret \(s\) from the threshold public key \(Y\), such that sufficient threshold of validators holding private key shares \(Z_i\) associated with \(Y\) can also derive the shared secret. Both encrypter and decrypter use the shared secret to derive a symmetric key for a key-committing AEAD via HKDF.

To encrypt

  1. Let \(r\) be a random scalar
  2. Let \(s = e([r] Y, H)\)
  3. Let \(U = [r] G\)
  4. Let \(W = [r] H_{\mathbb{G}_2} (U)\)
  5. Let \(k = HKDF(s)\)

The public key portion of the ciphertext is \((U,W)\) and the derived shared secret is \(s\).
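
A sketch of steps 1–4 under the same assumed arkworks setup; `hash_to_g2` is an assumed helper standing in for the hash-to-curve function \(H_{\mathbb{G}_2}\), and the standard generators stand in for \(G\) and \(H\):

```rust
use ark_bls12_381::{Bls12_381, Fr, G1Projective, G2Projective};
use ark_ec::{pairing::{Pairing, PairingOutput}, Group};
use ark_std::{rand::Rng, UniformRand};

/// Returns the ciphertext header (U, W) and the shared secret s.
fn encrypt_header<R: Rng>(
    rng: &mut R,
    y: G1Projective, // threshold public key Y
    hash_to_g2: impl Fn(&G1Projective) -> G2Projective,
) -> ((G1Projective, G2Projective), PairingOutput<Bls12_381>) {
    let r = Fr::rand(rng);
    let s = Bls12_381::pairing(y * r, G2Projective::generator()); // s = e([r]Y, H)
    let u = G1Projective::generator() * r;                        // U = [r]G
    let w = hash_to_g2(&u) * r;                                   // W = [r] H_G2(U)
    ((u, w), s) // k = HKDF(s) is then used as the AEAD key
}
```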

To Validate Ciphertext (for IND-CCA2 security)

Check that \(e(U, H_{\mathbb{G}_2} (U))= e(G, W)\) for ciphertext validity.
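
The same check as a sketch, with `hash_to_g2` again an assumed stand-in for \(H_{\mathbb{G}_2}\):

```rust
use ark_bls12_381::{Bls12_381, G1Projective, G2Projective};
use ark_ec::{pairing::Pairing, Group};

/// Returns true iff e(U, H_G2(U)) == e(G, W).
fn is_valid_ciphertext(
    u: G1Projective,
    w: G2Projective,
    hash_to_g2: impl Fn(&G1Projective) -> G2Projective,
) -> bool {
    Bls12_381::pairing(u, hash_to_g2(&u))
        == Bls12_381::pairing(G1Projective::generator(), w)
}
```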

To Decrypt

  1. Check ciphertext validity.
  2. Let \(s = e(U, Z)\)

Threshold Decryption (simple method)

  1. Check ciphertext validity.
  2. Each decryption share is \(C_i = e(U, Z_i)\).
  3. To combine decryption shares, \(s = \prod_i C_i^{\lambda_i(0)}\) where \(\lambda_i(0)\) is the Lagrange coefficient over the appropriate size domain.

Threshold Decryption (fast method)

Thanks to Kobi Gurkan for this approach.

Each validator generates a random scalar \(b\) and blinds their private key shares \([b] Z_{i, \omega_j}\). The blinded private key shares are publicly distributed, either by gossip or by posting to the blockchain.

  1. Check ciphertext validity
  2. The validator's decryption share is \(D_i = [b^{-1}] U\)

Note that to create a decryption share takes only one \(\mathbb{G}_1\) multiplication, and not one pairing as in the simple method.

To combine decryption shares, an aggregator computes for each decryption share \(D_i = [b^{-1}] U\):

\[ S_i = e\Big( D_i, \sum_{\omega \in \Omega_i} [\lambda_{\omega}(0)]\, [b] Z_{i,\omega} \Big) \]

The shared secret is then \(s = \prod S_i\).
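
Bilinearity is what lets the blinding factor cancel; writing out one factor under the definitions above:

\[ S_i = e\Big([b^{-1}] U, \sum_{\omega \in \Omega_i} [\lambda_{\omega}(0)]\, [b] Z_{i,\omega}\Big) = e\Big(U, \sum_{\omega \in \Omega_i} [\lambda_{\omega}(0)]\, Z_{i,\omega}\Big) \]

so the product over validators interpolates \([f(0)] H\) in the second argument, giving \(\prod_i S_i = e(U, [f(0)] H) = e([r] Y, H) = s\).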

Public verification of decryption shares

The decryption shares can be made verifiable if each validator posts a blinded public key \(P_i = [b] H\). Validity of a decryption share \(D_i\) is then checked by:

\[ e(D_i, P_i) = e(U, H) \]

Proof sketch

The non-threshold version of the scheme is IND-CCA2 secure.

Suppose there is an adversary that wins the IND-CCA2 game. Then on input \((G, [a]G, [b]G, H)\), there exists an adversary that computes \(e(G,H)^{ab}\), violating the assumed hardness of the Diffie-Hellman problem in the bilinear group.

Rekeyability

The shared secret \(s\) can be rekeyed with respect to the secret key \(Z_1\) to a new secret key \(\hat{Z} = [\alpha] Z_1 + Z_2\), as the new shared secret \(\hat{s} = s^{\alpha}\, e(U, Z_2) = e(U, [\alpha] Z_1)\, e(U, Z_2) = e(U, [\alpha] Z_1 + Z_2)\).

The shared secret \(s\) can be rekeyed with respect to the public key \(Y_1\) to a new public key \(\hat{Y} = [\alpha] Y_1 + Y_2\) as the new shared secret \(\hat{s} = s^{\alpha} e([r] Y_2, H) = e([r\alpha] Y_1, H)e([r]Y_2, H) = e([r]([\alpha]Y_1 + Y_2), H)\).

Lagrange Interpolation

In the threshold decryption procedure, many \(\mathbb{G}_2\) points must be multiplied by Lagrange coefficients \(\mathcal{L}^T_{i}(0)\) before being summed, to interpolate the shared secret committed inside the \(\mathbb{G}_2\) points, where \(T\) is the sufficiently large threshold subset of evaluation points.

To compute this value \(\mathcal{L}^T_{i} (0)\), note the equality:

\[\mathcal{L}^T_{i} (0) = \frac{\prod_{j\in T} (-\omega_j)}{-\omega_i \lambda_i} \]

where \(\frac{1}{\lambda_i}\) is the inverse Lagrange coefficient as computed using the Subproduct Domain technique.
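
For reference, a direct \(O(|T|^2)\) computation of the coefficients over an arbitrary domain; this naive version only pins down the definition, while Ferveo computes the \(1/\lambda_i\) terms in nearly linear time via the Subproduct Domain technique:

```rust
use ark_bls12_381::Fr;
use ark_ff::Field;

/// Lagrange coefficients evaluated at 0 for distinct domain points {x_i}.
fn lagrange_at_zero(domain: &[Fr]) -> Vec<Fr> {
    domain
        .iter()
        .enumerate()
        .map(|(i, &xi)| {
            let mut num = Fr::from(1u64);
            let mut den = Fr::from(1u64);
            for (j, &xj) in domain.iter().enumerate() {
                if i != j {
                    num *= -xj;     // (0 - x_j)
                    den *= xi - xj; // (x_i - x_j)
                }
            }
            num * den.inverse().expect("domain points are distinct")
        })
        .collect()
}
```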

Caching of Lagrange Interpolation results

Since the block signer set may, in the worst case, change for every block, and may be as small as 2/3 of the total validator set, it is possible that the Lagrange coefficients are distinct for every new block. Therefore, the protocol must accommodate the computation load required to recompute the Lagrange coefficients, and all operations that are data-dependent on the coefficients, which is significant.

However, in the expected case, validators are penalized for insufficient liveness, and the block signer set should rarely deviate from block to block. Therefore it makes sense to aggressively cache the Lagrange coefficients of the longest-lived 2/3 subset of validators, along with other derived data. In particular, the G2Prepared form of the blinded private key shares multiplied by the Lagrange coefficients is quite expensive to compute and can be cached in case the block signer set does not change significantly, potentially saving a substantial amount of compute time.

Decryption Share Aggregation

In order to provide guarantees that every valid transaction is executed in the order it appears in a finalized block, it is necessary that validators cannot censor valid transactions by pretending that the transaction was invalidly constructed. In the case of a valid transaction, the resulting symmetric key is sufficient to prove correct decryption; however, in case of invalid transactions, the resulting symmetric key may not correctly decrypt the payload. Therefore, the decryption shares must be retained to allow verification that the decryption procedure was followed correctly and that all transactions not processed were actually invalid.

Since decryption shares are rather large in size, it is possible to consume a large amount of storage space by submitting many invalid transactions. This waste of storage space can be mitigated by use of decryption share aggregation, where many valid decryption shares for many transactions can be aggregated together into one set of decryption shares. The resulting shared secrets cannot be obtained directly from the aggregated decryption shares, but an aggregated shared secret can be obtained, and compared against stored shared secrets. Therefore, the storage cost of decryption shares across many invalid transactions is amortized and the incremental cost of an invalid transaction in a block is only the bytes of the shared secret (one \(\mathbb{G}_T\) element).

To aggregate decryption shares from many transactions

Given many valid ciphertexts \((U_j,W_j)\), on input 2/3 weight of potential decryption shares for each ciphertext \({D_{i,j}}\) sharing the same validator set, if decryption shares are only needed to check the validity of the decryption process, the decryption shares of many ciphertexts can be aggregated into one decryption share set.

For each ciphertext \(j\) compute the scalar coefficient:

\[ \rho_j = H(U_1, \ldots, U_k, j) \]

which can be used to compute the aggregated decryption share for validator \(i\):

\[\hat{D}_i = \sum_j [\rho_j] D_{i,j} \]
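
A sketch of this aggregation for a single validator, under the assumed arkworks setup (the \(\rho_j\) coefficients are computed as above by hashing the \(U_j\)):

```rust
use ark_bls12_381::{Fr, G1Projective};

/// \hat{D}_i = Σ_j [ρ_j] D_{i,j}
fn aggregate_decryption_shares(shares: &[G1Projective], rhos: &[Fr]) -> G1Projective {
    assert_eq!(shares.len(), rhos.len());
    shares.iter().zip(rhos).map(|(d, rho)| *d * *rho).sum()
}
```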

To verify the correctness of an aggregation against aggregated ciphertexts

Given many valid ciphertexts \((U_j,W_j)\) and an aggregated decryption share set for those ciphertexts, the validity of the aggregation can be checked by computing the publicly known coefficients:

\[ \rho_j = H(U_1, \ldots, U_k, j) \]

and checking, for each validator \(i\), the pairing equation:

\[ e(\hat{D}_{i}, P_i) = e\Big(\sum_{j} [\rho_j] U_j, H\Big) \]

Proof sketch

(TODO: turn into an actual proof)

The aggregation does not reduce security, by a Forking Lemma argument. Assume each validator provides an aggregated decryption share \(D_i\) that passes the aggregated check \(e(D_i, P_i) = e([a] U_1 + [b] U_2, H)\) where \(a, b\) are derived from \(H(U_1, U_2)\). Assume an adversary can create a fake \(D_i\); rewind the adversary and replay with fresh \(a', b'\) to get a new forgery \(D_i'\). Then \(\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} a & b \\ a' & b' \end{pmatrix}^{-1} \begin{pmatrix} D_i \\ D_i' \end{pmatrix}\) satisfies \(e(X, P_i) = e(U_1, H)\) and \(e(Y, P_i) = e(U_2, H)\), so \(X, Y\) are valid decryption shares for each ciphertext. Therefore every adversary that passes the aggregated check also passes the original check.

Complete Concrete Scheme

There are optimizations that can be done to increase decryption throughput when decrypting many transactions within a block. For completeness, the full concrete scheme is described here.

Summary

The DKG and TPKE schemes support the following key operations:

  • DKG.GenerateEpochKeypair() -> (ek, dk)
  • DKG.PartitionDomain({ek_i, s_i}) -> {(ek_i, Omega_i)}
  • DKG.GeneratePVSS(tau, total_weight, {(ek_i, Omega_i)}) -> PVSS
  • DKG.VerifyPVSS(tau, PVSS) -> bool
  • DKG.AggregatePVSS({PVSS_i}) -> PVSS
  • DKG.VerifyAggregatedPVSS({PVSS_i}, PVSS) -> bool

And supports the following ciphertext operations:

  • TPKE.Encrypt(Y, aad) inputs a public threshold key \(Y\) and outputs a random ciphertext \((U,W)\) encrypted to that public key
  • TPKE.CiphertextValidity(U,W) tests if \((U,W)\) is a valid ciphertext
  • TPKE.CreateDecryptionShare(dk_j, U_i,W_i) -> D_{i,j}
  • TPKE.VerifyDecryptionShares(ek_i, { U_j }, { D_{i,j} }) -> bool
  • TPKE.BatchVerifyDecryptionShares({ek_i}, { U_j }, { D_{i,j} }) -> bool
  • TPKE.CombineDecryptionShares( {U_j}, {D_{i,j}} ) -> {S_j} combines decryption shares
  • TPKE.VerifyCombination verifies a combination of decryption shares for many ciphertexts at once
  • TPKE.DeriveSymmetricKey(S_j) -> k_j

DKG.GenerateEpochKeypair() -> (ek, dk)

Choose a uniformly random scalar \(dk \in \mathbb{F}_r \) and compute \( ek = [dk] H \)

DKG.PartitionDomain({ek_i, s_i}) -> {(ek_i, Omega_i)}

Assign each validator its contiguous set of share indexes \(\Psi_i\), and hence its share domain \(\Omega_i\), using the share partitioning algorithm described above.

DKG.GeneratePVSS(tau, total_weight, {(ek_i, Omega_i)}) -> PVSS

  1. Choose a uniformly random polynomial \(f(x) = \sum_{i=0}^{t} a_i x^i \) of degree \(t\).
  2. Let \(F_0, \ldots, F_t \leftarrow [a_0] G, \ldots, [a_t] G \)
  3. For each validator \(i\), for each \(\omega_j \in \Omega_i\), encrypt the evaluation \( Z_{i, \omega_j} \leftarrow [f(\omega_j)] ek_i \)
  4. \(\sigma = [a_0] H_{\mathbb{G}_2}(\tau, F_0) \)

Output PVSS = \( ((F_0, \sigma), (F_1, \ldots, F_t), \{Z_{i,\omega_j}\}) \)

DKG.VerifyPVSS(tau, PVSS) -> bool

  1. Decode \( ((F_0, \sigma), (F_1, \ldots, F_t), \{Z_{i,\omega_j}\}) \leftarrow \) PVSS
  2. Compute by FFT \(A_1, \ldots, A_W \leftarrow \operatorname{FFT}(F_0, \ldots, F_t) \)
  3. Compute \(W\) random scalars \(\alpha_{i,\omega_j} \)
  4. Check \(\prod_{i,\omega_j} \left( e(-G_1, Z_{i,\omega_j})\, e(A_{i,\omega_j}, ek_i) \right)^{\alpha_{i,\omega_j}} = 1\)

DKG.AggregatePVSS({PVSS_i}) -> PVSS

Add elementwise each corresponding group element of the input PVSS instances, as described in Aggregation above.

DKG.VerifyAggregatedPVSS({PVSS_i}, PVSS) -> bool

Lagrange Coefficients

Given a validator subset \({i}\), the Lagrange coefficients \(\lambda_{\omega}(0)\) for the domain \(\cup \Omega_i \) can be computed efficiently using the Subproduct Domain technique.

Total cost: \( O(p \log p) \)

TPKE.Encrypt(Y, aad) -> (U,W)

TPKE.Encrypt(Y, aad) creates a new, random ciphertext \((U,W)\) encrypted to the public key \(Y\), and a corresponding ephemeral shared secret \(S\) such that the private key associated with \(Y\) can efficiently compute \(S\) from the ciphertext \((U,W)\). Additional authenticated data aad may be attached to the ciphertext.

The ephemeral shared secret \(S\) can be used to derive a shared symmetric encryption key.

  1. Let \(r\) be a uniformly random scalar.
  2. Let \(S = e([r] Y, H)\)
  3. Let \(U = [r] G\)
  4. Let \(W = [r] H_{\mathbb{G}_2} (U, aad)\)

Then \((U,W)\) is the ciphertext and \(S\) is the ephemeral shared secret.

TPKE.CiphertextValidity(U,W) -> bool

To provide chosen ciphertext security, ciphertext validity must be checked for each ciphertext \((U,W)\) separately. The ciphertext can be checked by:

\[e(U, H_{\mathbb{G}_2} (U, aad)) = e(G, W)\]

Total cost:

  • 1 \(\mathbb{G}_1\) and 1 \(\mathbb{G}_2\) deserialize per ciphertext
  • 1 product of pairings

TPKE.BatchCiphertextValidity( {U,W} ) -> bool

Once a block proposer has verified ciphertext validity, the entire block can be optimistically verified using random scalars \(\alpha_j\):

\[\prod_j e([\alpha_j] U_j, H_{\mathbb{G}_2} (U_j)) = e(G, \sum_j [\alpha_j] W_j) \]

Total cost:

  • 1 \(\mathbb{G}_1\) and 1 \(\mathbb{G}_2\) deserialize per ciphertext
  • 1 product of pairings

TPKE.CreateDecryptionShare(dk_i, U_j) -> D_{i,j}

\[D_{i,j} = [dk_i^{-1}] U_j\]

TPKE.VerifyDecryptionShares(ek_i, { U_j }, { D_{i,j} }) -> bool

Given many valid ciphertexts \((U_j, W_j)\), on input potential decryption shares \(\{D_{i,j}\}\) for each ciphertext from a single validator \(i\) with epoch public key \(ek_i\), each share is purportedly:

\[D_{i,j} = [dk_i^{-1}] U_j\]

and the validity of the shares can be checked, using random scalars \(\alpha_j\), by:

\[ e(\sum_j [\alpha_j] D_{i,j}, ek_i) = e(\sum_j [\alpha_j] U_j, H) \]

Total cost:

  • 1 \(\mathbb{G}_1\) deserialize per validator per ciphertext
  • 2 pairings per validator
  • 1 \(\mathbb{G}_1\) multiply and 1 \(\mathbb{G}_2\) multiply per ciphertext.

TPKE.BatchVerifyDecryptionShares({ek_i}, { U_j }, { D_{i,j} }) -> bool

Given many valid ciphertexts \((U_j, W_j)\), on input 2/3 weight of potential decryption shares \(\{D_{i,j}\}\) for each ciphertext, corresponding to validator set \(\{i\}\) with epoch public keys \(ek_i\), the validity of those shares can be checked, using random scalars \(\alpha_{i,j}\), by:

\[ \prod_i e(\sum_{j} [\alpha_{i,j}] D_{i,j}, ek_i) = e(\sum_{i,j} [\alpha_{i,j}] U_j, H) \]

Total cost:

  • 1 \(\mathbb{G}_1\) deserialize per validator per ciphertext
  • V+1 pairings
  • 1 \(\mathbb{G}_1\) multiply and 1 \(\mathbb{G}_2\) multiply, per ciphertext.

TPKE.AggregateDecryptionShares( {U_j}, {D_{i,j}} ) -> {\hat{D}_i}

Given many valid ciphertexts \((U_j,W_j)\), on input 2/3 weight of potential decryption shares for each ciphertext \({D_{i,j}}\) sharing the same validator set, if decryption shares are only needed to check the validity of the decryption process, the decryption shares of many ciphertexts can be aggregated into one decryption share set.

For each ciphertext \(j\) compute the scalar coefficient:

\[ \rho_j = H(U_1, \ldots, U_k, j) \]

which can be used to compute the aggregated decryption share for validator \(i\):

\[\hat{D}_i = \sum_j [\rho_j] D_{i,j} \]

TPKE.VerifyAggregatedDecryptionShares({U_j}, {\hat{D}_i}) -> bool

Given many valid ciphertexts \((U_j,W_j)\) and an aggregated decryption share set for those ciphertexts, the validity of the aggregation can be checked by computing the publicly known coefficients:

\[ \rho_j = H(U_1, \ldots, U_k, j) \]

and checking, for each validator \(i\), the pairing equation:

\[ e(\hat{D}_{i}, P_i) = e\Big(\sum_{j} [\rho_j] U_j, H\Big) \]

TPKE.CombineDecryptionShares( {U_j}, {D_{i,j}} ) -> {S_j}

For a given ciphertext \((U_j, W_j)\), on input 2/3 weight of valid decryption shares \(\{D_{i,j}\}\) (as checked by TPKE.VerifyDecryptionShares) corresponding to validator set \(\{i\}\), a partial combined share \(S_{i,j}\) for that ciphertext can be computed with one pairing:

\[ S_{i,j} = e\Big( D_{i,j}, \sum_{\omega \in \Omega_i} [\lambda_{\omega}(0)] Z_{i,\omega} \Big) \]

and combined to get the final combined share \(S_j = \prod_i S_{i,j}\).

Total cost:

  • 1 pairing and 1 \(\mathbb{G}_T\) multiply per validator

TPKE.VerifyAggregatedCombination

Verifying \(\prod_j S_j\) for many ciphertexts with the same decrypting validator set can be done faster than generating each \(S_j\) separately. Given ciphertexts \((U_j, W_j)\) with valid aggregated decryption shares \(\hat{D}_{i}\) (checked by TPKE.VerifyAggregatedDecryptionShares) and combined shares \(\{S_j\}\), for each validator \(i\) the aggregated decryption share:

\[\hat{D}_i = \sum_j [\rho_j] D_{i,j} \]

computed from the unknown \(D_{i,j}\) but with the publicly known coefficients:

\[ \rho_j = H(U_1, \ldots, U_k, j) \]

can be used to compute an aggregated partial combined share \(\hat{S}_i \):

\[ \hat{S}_i = e\Big( \hat{D}_i, \sum_{\omega \in \Omega_i} [\lambda_{\omega}(0)] Z_{i,\omega} \Big) \]

and combined to get an aggregated final combined share \( \hat{S} = \prod_i \hat{S}_i\), which can be checked against the computed \(\{S_j\}\) by:

\[ \hat{S} = \prod_j S_j^{\rho_j} \]

Total cost:

  • 1 pairing and 1 \(\mathbb{G}_T\) multiply per validator
  • 1 \(\mathbb{G}_1\) multiply and 1 \(\mathbb{G}_T\) multiply per ciphertext.

TPKE.DeriveSymmetricKey(S_j) -> k_j

Derive \(k_j = \operatorname{HKDF}(S_j)\).

Therefore the total work done per block in the optimistic case, with \(T\) transactions and \(V\) validators, is, by role:

  • Block proposer:

    • ciphertext validity check: 2T pairings
    • combine decryption shares: T*V pairings plus T multiplies in \(\mathbb{G}_T\)
  • Block signer:

    • ciphertext validity check: T+1 pairings plus T multiplies in each \(\mathbb{G}_1\) and \(\mathbb{G}_2\)
    • create decryption shares: T multiplies in \(\mathbb{G}_1\), T multiplies in \(\mathbb{G}_T\)
    • verify decryption shares: V+1 pairings plus 2T multiplies in \(\mathbb{G}_1\)
    • verify combination: V pairings plus V+T multiplies in \(\mathbb{G}_T\)
  • Full node:

    • verify decryption shares: V+1 pairings plus 2T multiplies in \(\mathbb{G}_1\)
    • verify combination: V pairings plus V+T multiplies in \(\mathbb{G}_T\)
  • Long term storage per block (this does assume ephemeral data is pruned):

    • aggregated decryption shares (of invalid tx): V elements of \(\mathbb{G}_1\) (48V bytes)
    • combined shares (of invalid tx): T elements of \(\mathbb{G}_T\) (400T bytes)

Encrypted Transactions

When Ferveo is integrated into Tendermint, the validators should run the DKG once per epoch, and the generated public key \(Y\) is broadcast to all nodes in the network.

Transactions sent to the mempool should be encrypted to this public key, and block proposers select encrypted transactions from the mempool and commit to them in block proposals. Therefore, the execution and ordering of transactions are determined before they are decrypted and revealed.

An encrypted transaction consists of:

  • The public key ciphertext \(U, W\) associated with this transaction
  • The key-committing AEAD encrypted payload of the transaction, with symmetric key derived from \(U, W\)
  • Transaction fee payment details
  • The epoch number that the tx is being encrypted to.

The inclusion of fee payment outside of the payload ensures that the network is not saturated with invalid transactions.
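
A sketch of the wire format this list implies; field names and fixed sizes (compressed BLS12-381 points) are illustrative, not Ferveo's actual types:

```rust
/// An encrypted transaction as described above.
struct EncryptedTx {
    u: [u8; 48],      // U ∈ G1, compressed
    w: [u8; 96],      // W ∈ G2, compressed
    payload: Vec<u8>, // key-committing AEAD ciphertext of the tx body
    fee: Vec<u8>,     // fee payment details, kept outside the encrypted payload
    epoch: u64,       // epoch whose DKG public key Y was used
}
```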

The encryption method then runs roughly as:

  1. Sample an ephemeral private key k.
  2. Compute the ciphertext as CT = KC_AEAD.Encrypt(key=Hash(k * threshold_pubkey), msg={state machine tx}, additional_data={empty})
  3. Run threshold encryption as TE.Encrypt(private_key=k, threshold_pubkey, ciphertext=CT, additional_data={tx fee details, epoch number})

Block proposal

A block proposal, therefore, consists of:

  1. Validator-selected encrypted transactions (likely ordered by fee)
  2. Decryption data for all transactions in the previous block

Availability of decryption shares for those transactions is guaranteed by new block finalization rules, and it is the block proposer's responsibility to combine the decryption shares to derive each transaction's symmetric key, and to compute the AEAD decryption of each encrypted transaction's payload.

The decryption data for a tx is one of: the encryption private key, or the list of decryption shares. For a validly constructed transaction, the decryption shares can be combined to get the symmetric key. The combined share can then be included within the block, and every node can check its validity by correctness of the key-committing AEAD.

If the tx was invalidly constructed, then all of the constituent decryption shares must be posted on-chain for verifiability.**

Constructing a valid block proposal therefore executes 1 TPKE.CombineDecryptionShares operation per transaction per block signer in the previous block.

Verifying the validity of a block proposal therefore executes 1 TPKE.VerifyCombination operation per block signer in the previous block.

** There is an optimization where we only need one list of 'cross-tx combined' decryption shares, for all invalid txs per block.

Block finalization

In addition to the standard 2/3 weight requirements for block finalization, Ferveo adds an additional requirement: every validator signature on a block must include valid, signed decryption shares corresponding to that validator, for every encrypted transaction committed to in that block, totaling at least 2/3 weight of decryption shares.

Signing a block proposal therefore executes 1 TPKE.CiphertextValidity operation and 1 TPKE.CreateDecryptionShare() operation per transaction in that block proposal.

Verifying the block proposal executes 1 TPKE.CiphertextValidity operation and 1 TPKE.BatchVerifyDecryptionShares() operation per transaction in that block proposal.

This ties decryption liveness to signing liveness: the network will not stop while waiting for decryption shares unless it would also stop while waiting for signatures.

Invalid transactions

The primary issue in accepting the finality of a block is verification that the decryption of every transaction succeeded correctly.

In the optimistic case, where all transactions are constructed exactly as required by the protocol, the block proposer can run TPKE.CombineDecryptionShares() to obtain the shared secret, and therefore the 256 bit symmetric key, for each transaction. Since all transactions are valid, this symmetric key successfully decrypts the transaction plaintext which matches the BLAKE2b hash. (Alternatively, a key-committing scheme could be used). Therefore, only one 32 byte symmetric key per transaction needs to be added to the block, to assist in block verification.

Problems arise with an invalid transaction, where either the symmetric key or the BLAKE2b hash does not match the transaction payload. The protocol must verify that the transaction is actually invalid and that no validator submitted invalid decryption shares or failed to combine shares according to protocol. Otherwise, a malicious validator would be able to deny execution of specific transactions upon discovering their contents.

To avoid the high cost (and DoS attack vector) incurred by many invalid transactions, all invalid transactions are handled in an aggregated way. The decryption shares of all invalid transactions are aggregated using TPKE.AggregateDecryptionShares; the aggregated decryption shares, along with the result of TPKE.CombineDecryptionShares for each invalid transaction, are attached to the block instead of the 32 byte symmetric key.

Therefore, full nodes can run the relatively inexpensive TPKE.VerifyAggregatedCombination to check the validity of the combined shares of each allegedly invalid transaction, and attempt symmetric decryption using each transaction's derived symmetric key. Upon the expected failure to decrypt an invalid transaction, that transaction is discarded instead of executed (consuming any fee) which mitigates the minor additional cost of invalidity verification.

Correctness of decryption shares

The validity of all decryption shares must be checked before accepting a block.

In case TPKE.BatchVerifyDecryptionShares fails, indicating that some validator may have sent faulty decryption shares, a fallback protocol must be initiated to discover which validators sent valid decryption shares, by executing TPKE.VerifyDecryptionShares separately for each validator's decryption shares.

This is significantly more expensive than TPKE.BatchVerifyDecryptionShares; however, since penalties can be enforced on validators for sending bad decryption shares, the optimistic verification TPKE.BatchVerifyDecryptionShares should succeed most of the time.

Privacy

Ferveo only provides additional privacy while pending within the mempool; once transactions are decrypted, all details of the payload become public. In addition, fee payment information reveals who the fee payer is (who may be a proxy).

Invalid transactions

In the case where the transaction creator does not follow the protocol, for example by having an invalid payload, a payload that does not match the hash, or using a bad nonce \(r\), the network does not make any guarantees regarding privacy or execution order (or execution at all) of an invalid transaction.

Side Channel Analysis

As Ferveo is an online protocol that depends on complex cryptographic primitives, this section will include an analysis of potential side channels where private data and keys may be leaked.

Threshold Signature Scheme

A Schnorr-like signature scheme based on Gap Diffie-Hellman in ROM.

Setup

  • Let \(G\) be a generator of \(\mathbb{G}_1\) and let \(H\) be a generator of \(\mathbb{G}_2\)
  • Let \(Y = [x] G\) be the public key and \(Y_i = [x_i] G\) be the public key of the \(i\)th share
  • Let \(Z = [x] H\) be the private key, and \(Z_i = [x_i] H\) be the private key of the \(i\)th share

Notation

  • Let \(H_\ell\) denote hash to \(\{0,1\}^{\ell}\) for some integer \(\ell\)
  • Let \(H_\mathbb{F}\) denote hash to scalar field \(\mathbb{F}\)
  • Let \(H_\mathbb{G}\) denote hash to group \(\mathbb{G}\)

To sign with group element as private key

Let \(m\) be the message to sign.

  1. Let \(k\) be a random scalar
  2. Let \(R = e([k] G, H)\)
  3. Let \(c = H_\mathbb{F}(R || Y || m)\)
  4. Let \(z = [k] H + [c] Z\)
  5. Let the signature be \(\sigma = (R,z)\)

Note that \(z = [k+xc] H\), but is computable without knowledge of \(x\).
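
A sketch of the signing flow under the assumed arkworks setup; `hash_to_fr` is an assumed helper standing in for \(H_\mathbb{F}\), and the standard generators stand in for \(G\) and \(H\):

```rust
use ark_bls12_381::{Bls12_381, Fr, G1Projective, G2Projective};
use ark_ec::{pairing::{Pairing, PairingOutput}, Group};
use ark_std::{rand::Rng, UniformRand};

/// Returns σ = (R, z) with R = e([k]G, H) and z = [k]H + [c]Z.
fn sign<Rand: Rng>(
    rng: &mut Rand,
    z_key: G2Projective, // private key Z = [x] H
    y: G1Projective,     // public key Y = [x] G
    msg: &[u8],
    hash_to_fr: impl Fn(&PairingOutput<Bls12_381>, &G1Projective, &[u8]) -> Fr,
) -> (PairingOutput<Bls12_381>, G2Projective) {
    let k = Fr::rand(rng);
    let big_r = Bls12_381::pairing(G1Projective::generator() * k, G2Projective::generator());
    let c = hash_to_fr(&big_r, &y, msg); // c = H_F(R || Y || m)
    let z = G2Projective::generator() * k + z_key * c;
    (big_r, z)
}
```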

To verify

  1. Let \(\sigma = (R,z)\) be the signature of \(m\)
  2. Let \(c = H_\mathbb{F}( R || Y || m)\)
  3. Let \(R' = e( G, z)*e([c]Y, H)^{-1}\)
  4. If \(R = R'\) the signature is valid.

Threshold Signature

The technique of FROST can be applied here; the individual commitments \(R_i\) are computed similarly to FROST as:

\(R_i = e(D_i + [\rho_i] E_i, H)\)

and

\(R = \prod R_i\)

and the value \(z_i = [d_i + e_i \rho_i] H + [\lambda_i c] Z_i\), where \(Z_i = [s_i] H\) is the \(i\)th private key share

Key Refresh

It is possible to refresh a key (also called proactive secret sharing) to change all the issued key shares without changing the actual public key.

The primary purpose is to limit vulnerability windows for key shares to leak or compromise. Instead of compromising sufficient key shares of the distributed key, an attacker must compromise those key shares within the refresh window. The secondary purpose may be to invalidate and/or issue new key shares of the same key to adjust for dynamic weights (e.g. change in stake) without changing the public key.

This is accomplished by running the DKG again, except the VSS instances all share the secret 0, and an opening of each \(R\) polynomial at 0 is revealed. When the DKG succeeds, the new shares of secret 0 are added to the old shares.
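
Concretely, if the refresh DKG deals a fresh polynomial \(g\) with \(g(0) = 0\), each private key share and the public key update as:

\[ Z'_{i,\omega} = Z_{i,\omega} + [g(\omega)] H, \qquad Y' = [f(0) + g(0)] G = Y \]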

Secure enclave computation

The hardware-based security enclaves built into certain CPUs, such as Intel SGX, can both positively and negatively affect the security model.

Since the VSS, DKG, and threshold operations include storage and computation on secret data that should not be publicly revealed, a node may run these sensitive computations inside of a secure enclave as a layer of protection against attack by an external adversary. Since HSM support for VSS, DKG, and threshold operations is unlikely, use of a secure enclave may be useful.

On the other hand, secure enclaves can undermine the dispute process by facilitating silent collusion among adversarial or honest-but-curious participants. Since dispute procedures rely on incentivizing nodes (even adversarial or dishonest ones) to report dishonest behavior, a dispute process can only work if evidence of dishonest behavior is available; however, if collusion occurs entirely within a secure enclave or a cryptographic multiparty computation, then such evidence may never be revealed.

At this time, it does not appear practical to implement a DKG whose security model depends on the underlying security of a secure hardware enclave.

Alternative Schemes

Several alternative DKG and threshold operation protocols are implemented in Ferveo for comparison and benchmarking purposes. The major alternative schemes are:

  • A Pedersen DKG using a Feldman VSS. A traditional DKG scheme, this can be implemented either over a bilinear group, or a non-pairing friendly elliptic curve such as Pallas. Performance is excellent, but communication complexity can be high, and liveness guarantees are complicated by the need for a complaint/dispute round to prevent malicious VSS instances.
  • A scalable DKG based on fast KZG commitments. KZG commitments can make VSS verification faster and communication complexity is smaller, at the cost of a slightly more complex protocol and the same complaint/dispute round is necessary.

For completeness, these DKG and VSS schemes are documented in this Appendix.

Pedersen DKG

Fast KZG DKG

The Verifiable Secret Sharing scheme using Fast KZG commitments is a modified version of the ETHDKG scheme (https://eprint.iacr.org/2021/118.pdf) and the DKG from (https://people.csail.mit.edu/devadas/pubs/scalable_thresh.pdf), with performance enhancements and weighted shares. All polynomial evaluations are at powers of \(\omega\) to allow FFT based polynomial operations.

KZG commitment scheme

Batched fast KZG opening proofs contribute to scalability. Fast KZG openings are based off of the following:

  • https://github.com/khovratovich/Kate/blob/66aae66cd4e99db3182025c27f02e147dfa0c034/Kate_amortized.pdf
  • https://alinush.github.io/2020/03/12/towards-scalable-vss-and-dkg.html

The Feist and Khovratovich batch KZG opening allows for simultaneous fast computation of \(2^n\) opening proofs of a single polynomial at \(2^n\) roots of unity, which motivates the choice of evaluation points.

Additionally, nearly linear time fast polynomial evaluation and fast polynomial interpolation algorithms allow the dealer to combine all openings belonging to a participant into a single \(\mathbb{G}_1\) element, and allow the recipient to fast verify the opening proof.

Dealer messages

  1. The dealer \(d\) chooses a uniformly random polynomial \(S\) of degree \(p\).
  2. KZG commit to \(S\) to obtain \(\hat{S}\)
  3. KZG commit to \(S(0)\) to obtain \([S(0)] G_1\)
  4. Create an opening proof \(\pi_0\) of commitment \(\hat{S} - [S(0)] G_1\) opening to \(0\) at \(0\).
  5. For all \(i \in [1,n]\):
    • create an opening proof \(\pi_i\) of commitment \(\hat{S}\) at all points \(\alpha \in \Omega_i\).
    • compute \(\hat{T}_i = \hat{R} - \hat{S}_i\) where \(T_i = R - S_i\)

The dealer signs and posts \((\tau,d,\hat{S}, [S(0)] G_1, \pi_0)\) to the blockchain.

For each node \(i\), the dealer encrypts (with MAC) to \(pk_i\) the message \((\tau, d, \langle S(\alpha) \mid \alpha \in \Omega_i \rangle, \pi_i)\) which consists of:

  • \(\tau\), the round number
  • \(d\), the dealer id
  • \(\hat{S}\), the KZG commitment to the share polynomial
  • \(\langle S(\alpha) \mid \alpha \in \Omega_i\rangle\), the share evaluations belonging to node \(i\).
  • \(\pi_{i}\), the opening proof of \(\hat{S}\) at \(\Omega_i\)

For all \(i \in [1,n]\) the dealer posts the \(i\)th message to the blockchain.

The cost of each plaintext is \(2\) integers, ?? \(\mathbb{F}_r\) elements and \(2\) \(\mathbb{G}_1\) elements. (TODO)

Offline precompute

Much of the dealer's computation can be done offline/precomputed, potentially saving online computation time. The secret polynomial, commitment, and opening proofs can be speculatively computed, subject to maintaining secrecy of the secret data.

Receiving dealer message

Node \(i\) receives, decrypts, and authenticates \((\tau, d, \hat{S}, \pi_0)\) and \((\tau, d, \hat{S}, \langle s_{\alpha} \rangle, \pi)\) from dealer \(d\) through the blockchain.

  1. Check that commitment \(\hat{S}\) and proof \(\pi\) open to \(\langle s_{\alpha} \rangle\)
  2. Check that commitment \(\hat{S} - [S(0)] G_1\) and proof \(\pi_0\) opens to \(0\) at \(0\).
  3. If the decryption or opening check fails, then initiate pre-success or post-success dispute process
  4. Else, the VSS has succeeded for node \(i\). Sign and post the ready message \((\tau, d, \hat{S})\).

Pre-success dispute

In the case where the dealer \(d\) posts an invalid distribution of shares, a node can initiate the pre-success dispute process as long as no DKG has finalized using that VSS. A node \(i\) signs and posts a message \((\text{dispute}, \tau, d, k, \pi)\) where \(k\) is the shared ephemeral secret key used to encrypt the message from \(d\) to \(i\) and \(\pi\) is a NIZK proof that it is the ephemeral key. The remaining nodes adjudicate the dispute in favor of the dealer or the complainer. In case the dealer is found to be faulty, that dealer's VSS is terminated. Additionally, penalties and rewards may be allocated.

VSS finalization

To ensure the DKG results in an unbiased key, a new debiasing base \(H_1 = \operatorname{HTC}(\text{DKG session } \tau)\) is used for every generated public key.

Once sufficient weight of signatures are posted, the dealer \(d\) posts the public share \([S(0)] H_1\) along with a NIZK proof that \([S(0)] G_1\) and \([S(0)] H_1\) share the same discrete log.

Once these are posted and verified, the VSS initiated by dealer \(d\) in round \(\tau\) with commitment \(\hat{S}\) is considered succeeded.

The remaining slow \(t+f\) weight nodes can complete the VSS from information on the blockchain. In case those shares are invalid, the slow nodes can initiate the post-success dispute procedure.

If the dealer fails to finalize its VSS session after a timeout period, then a new set of VSS instances of sufficient weight are initiated.

Post-success dispute

A slow node may discover the dealer has distributed invalid shares after the relevant DKG has already been finalized, or may have been unable to post a dispute beforehand. The difficulty is that removing the troublesome VSS instance may change an actively used distributed key, but leaving the VSS instance in place reduces the resilience of the network. Furthermore, the dispute process might publicly reveal valid shares of the VSS (for example, if a node receives some but not all valid shares from the dealer).

Penalties and rewards can still be allocated for disputes posted after the validity of the distributed key expires, but this process also must happen in a defined window of time between the expiry of the key and the release of staked value for the DKG session.

Proof sketch

DKG

The DKG will consist of some number of nodes dealing VSS instances until consensus is reached that enough VSS instances have completed to recover a distributed key. The final shares of the distributed key are equal to the pointwise sum of the shares of each VSS instance.

Optimistic phase

The ideal scenario is that the fewest possible number of VSS instances are used, and everyone computationally prioritizes the same instances (so there are fewer VSS instances that consume computation time but are left unused).

Since VSS instances from dealers of total weight at least \(p+1\) are needed to finish, the dealer identities can be sorted by weight such that the \(\ell\) highest weighted dealers sum to total weight at least \(p+1\) and \(\ell\) is minimized.

In the optimistic phase, if there are no faults that prevent these \(\ell\) VSS instances from being finalized, and no disputes against those dealers, then these VSS instances are used in the DKG. This can be confirmed by \(W-t-f\) weight signing a message indicating the optimistic phase succeeded along with the resulting public key. Since BLS signature aggregation can be used, this consumes \(O(\log m)\) on-chain storage.

Pessimistic phase

Assume the optimistic phase does not succeed before a timeout period expires. Then the DKG enters the pessimistic phase where an arbitrary set of VSS instances of total weight at least \(t\) can be used for DKG. The nodes should initiate additional VSS instances in order of decreasing weight of dealers to again minimize the total number of VSS instances that need to be dealt, and also to minimize the number of VSS instances a node spends computation to verify but remain unused. Every time a timeout period expires, more nodes should begin dealing VSS instances.

In the pessimistic phase, as soon as \(W-t-f\) weight of VSS instances finalize (as determined by ready signatures with no disputes and dealer finalization), then the first sufficient subset of finalized VSS instances is used for the DKG, as determined by the order that the finalization messages are posted.

Debiasing the final key

The approach of Neji et al. can be used to debias the final key. The public key \([s] H_1\) is the sum of all finalization values \([S(0)] H_1\) over all successful VSS instances, and the shared secret key is the sharewise sum of all VSS shares.

Threshold Encryption

Curve specification

Let \(P_1, P_2\) be the generators of \(G_1, G_2\) respectively.

Let \(H\) be a hash function from \(\{0,1\}^* \to G_2\).

Let \(G\) be a PRF from \(G_1 \to \{0,1\}^*\)

In the protocol, \(U\) is on \(G_1\), pubkeys \(Y\) are on \(G_1\), and \(W\) is on \(G_2\).

Encrypt

\(Enc(m, AD):\)

  1. Let \(r\) be a random scalar
  2. Let \(U = r P_1\)
  3. Let \(V = G(rY) \oplus m\)
  4. Let \(W = r H(U, V, AD)\)

The public key portion of the ciphertext is \((U, W)\) and \(V\) is the ciphertext.

Calculate Decryption Share

\(Dec(U, V, W, AD, x_i):\)

  1. Let \(H = H(U,V, AD)\)
  2. Check that \(e(P_1, W) = e(U, H)\)

If the above check passes, the decryption share is \(U_i = x_i U\).

Decryption share validation

\(VerifyDec(C, AD, U_i, Y_i):\)

  1. Parse C as \((U, V, W)\)
  2. Compute \(H = H(U,V,AD)\)
  3. Check that \(e(P_1, W) = e(U, H)\)
  4. If the above check passes, check \(e(U_i, H) = e(Y_i, W)\)

Decryption share aggregation

Given \(t\) evaluations \((i, f(i + 1) P_1)\), for a polynomial \(f\) of degree less than \(t\), we can compute \(f(0) P_1\) via Lagrange interpolation.

Since we are given many evaluations \(U_i = f(i + 1) U = x_i r P_1\), we can use Lagrange interpolation to obtain \(x r P_1 = rY\).

Then the decrypted message is \(m = G(rY) \oplus V\).