Logup GKR 2

Following up on the previous post.

Input Layer Encoding (witness encoding)

trace columns are interpreted as functions whose inputs are bits, with enough bits to index the \(n\) witness values in the column.

for trace column \(i\), with \(n=2^v\) rows (the trace degree / trace length):

\(f_i(b_0,b_1,…,b_{v-1})=a_j\), where \(j=\sum\limits_{k=0}^{v-1}b_k \cdot 2^{k}\)
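
To make the indexing concrete, here is a minimal sketch in plain Python (the column data and the helper name are made up for illustration; bits are little-endian, matching the decomposition above):

```python
# A length-2^v trace column, read as a function of v bits.
def column_as_bit_function(a):
    v = len(a).bit_length() - 1
    assert len(a) == 1 << v, "column length must be a power of two"

    def f(*bits):
        assert len(bits) == v and all(b in (0, 1) for b in bits)
        j = sum(b << k for k, b in enumerate(bits))  # j = sum_k b_k * 2^k
        return a[j]

    return f

column = [7, 1, 8, 2, 8, 1, 8, 2]   # n = 8 rows, so v = 3 bits
f = column_as_bit_function(column)
assert f(1, 0, 1) == column[5]      # bits (b_0,b_1,b_2) = (1,0,1) -> row 5
```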

at the last layer (the input layer) of GKR, the numerators and denominators of the fractions all come from the STARK trace. We need to define the constraints that bind these values to the STARK trace, and prove this binding.

The question can be described in different ways:

  • how to simulate a Multi-linear Polynomial Commitment scheme from a uni-variate polynomial commitment scheme
  • how to implement the final oracle query by the verifier at the random point resulting from the GKR protocol interaction. 

or, as phrased in this blog:

Multi-linear Polynomial Commitment scheme from a uni-variate polynomial commitment scheme

The essential question: at the end of the GKR protocol, the prover needs to prove an evaluation of a multi-linear polynomial, and this polynomial encodes the trace.

First, let’s see clearly how the witness encoding translates from uni-variate to multi-variate:

in a uni-variate polynomial IOP like STARK, the projection goes from a cyclic-group element to a base-field element (an element of a subgroup of M31 to an element of M31; both live in M31, but the interpretations are different).

\(H:=\{g^i \mid i\in \{0,…,2^{v}-1\}\}\) is a cyclic group whose generator \(g\) is a primitive \(2^v\)-th root of unity.

use the bit decomposition of \(i\): \(i=\sum\limits_{k=0}^{v-1}i_k \cdot 2^{k}, i_k \in \{0,1\}\)

for any \(x=g^i\in H\), a trace encoded by a uni-variate polynomial \(t(x)\) is equivalently encoded by a multi-variate polynomial \(f(i_0,…,i_{v-1})\):

\(t(x)=f(i_0,…,i_{v-1})\) where \(x= g^{\sum\limits_{k=0}^{v-1}i_k \cdot 2^{k}}\)

or define \(y(i_0,…,i_{v-1})=g^{\sum\limits_{k=0}^{v-1}i_k \cdot 2^{k}}\)

\(t(x)=t(y(i_0,…,i_{v-1}))\), which makes the correspondence clearer
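
A small numeric check of this correspondence, sketched in Python. The toy prime \(p=97\) is used purely for illustration, chosen so that a \(2^v\)-th root of unity exists; all parameters here are assumptions of the sketch:

```python
# Toy parameters: p = 97, v = 4; 5 generates F_97^*, so g has order 2^4.
p = 97
v = 4
g = pow(5, (p - 1) // (1 << v), p)   # a primitive 2^v-th root of unity

def y(bits):
    """y(i_0,...,i_{v-1}) = g^(sum_k i_k * 2^k)"""
    return pow(g, sum(b << k for k, b in enumerate(bits)), p)

def t(x):
    """Any univariate polynomial works; take t(x) = 3x^2 + x + 5 over F_p."""
    return (3 * x * x + x + 5) % p

# The hypercube {0,1}^v maps exactly onto the subgroup H = <g>,
# and the two encodings agree point by point: t(g^i) = t(y(bits of i)).
for i in range(1 << v):
    bits = [(i >> k) & 1 for k in range(v)]
    assert t(pow(g, i, p)) == t(y(bits))
```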

then, since \(t\circ y\) is a function on the Boolean hypercube, we can take its multi-linear extension:

\(t(y(r_0,…,r_{v-1}))=\sum\limits_{i_k \in \{0,1\},\, k\in \{0,…,v-1\}} t(y(i_0,…,i_{v-1}))\, eq[\left(r_0,…,r_{v-1}\right), \left(i_0,…,i_{v-1}\right)]\)

(with a slight abuse of notation: the left-hand side denotes the multi-linear extension of \(t\circ y\) evaluated at \((r_0,…,r_{v-1})\))

this \(eq[\left(r_0,…,r_{v-1}\right), \left(i_0,…,i_{v-1}\right)]\) is the Lagrange kernel, also called the multi-linear Lagrange basis, or simply the eq function; written out fully:

\(eq[\left(r_0,…,r_{v-1}\right), \left(i_0,…,i_{v-1}\right)]=\prod\limits^{v-1}_{k=0}(r_k\cdot i_k + (1-r_k)(1-i_k))\)

so \(t(y(r_0,…,r_{v-1}))\) is the inner product of the value vector \(t(y(i_0,…,i_{v-1}))\) with the weight vector \(eq[\left(r_0,…,r_{v-1}\right), \left(i_0,…,i_{v-1}\right)]\), taken over the hypercube
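
This inner-product view is easy to test numerically. A sketch in Python (toy field arithmetic; the helper names are made up): it builds all \(2^v\) eq weights, takes the inner product with the table of values, and cross-checks against evaluating the multi-linear extension by folding out one variable at a time:

```python
import random

p = 2**31 - 1          # M31 as a plain prime field, just for the arithmetic
v = 3
values = [random.randrange(p) for _ in range(1 << v)]  # t(y(i)) on the cube
r = [random.randrange(p) for _ in range(v)]            # the random point

def eq_weights(r):
    """All eq[(r_0..r_{v-1}), (i_0..i_{v-1})], indexed by j = sum_k i_k 2^k."""
    w = [1]
    for rk in r:
        w = [wi * (1 - rk) % p for wi in w] + [wi * rk % p for wi in w]
    return w

# MLE evaluation at r as an inner product with the eq vector:
mle_inner = sum(a * e for a, e in zip(values, eq_weights(r))) % p

# Cross-check: fold out one variable at a time (i_0 first).
def mle_fold(vals, r):
    for rk in r:
        vals = [(vals[2*j] + rk * (vals[2*j+1] - vals[2*j])) % p
                for j in range(len(vals) // 2)]
    return vals[0]

assert mle_inner == mle_fold(values, r)
```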

Recap: the last round of the sum-check protocol in GKR

suppose that at the second-to-last layer, the prover wants to prove

\(\widetilde{p_i}(\rho_i) +\lambda_i\widetilde{q_{i}}(\rho_i)=\)

\(\sum\limits_{x\in \{0,1\}^{k_i}}\widetilde{eq}(x,\rho_i)\left[\widetilde{p_{i+1}}(0,x)\,\widetilde{q_{i+1}}(1,x)+\widetilde{p_{i+1}}(1,x)\,\widetilde{q_{i+1}}(0,x)+\lambda_i\,\widetilde{q_{i+1}}(0,x)\,\widetilde{q_{i+1}}(1,x) \right]\)

to make things clear, let’s write the randomness out in its full “decomposition” (taking \(i=d-2\)); the equation actually is

\(\widetilde{p_{d-2}}(\gamma_0,…,\gamma_{\log{k}-1}) +\lambda_{d-2}\,\widetilde{q_{d-2}}(\gamma_0,…,\gamma_{\log{k}-1})=\)

\(\sum\limits_{x\in \{0,1\}^{\log k}}\widetilde{eq}(x,(\gamma_0,…,\gamma_{\log{k}-1}))\left[\widetilde{p_{d-1}}(0,x)\,\widetilde{q_{d-1}}(1,x)+\widetilde{p_{d-1}}(1,x)\,\widetilde{q_{d-1}}(0,x)+\lambda_{d-2}\,\widetilde{q_{d-1}}(0,x)\,\widetilde{q_{d-1}}(1,x) \right]\)

where \(d\) is the depth of the GKR circuit and \(k\) is the number of fraction pairs in layer \(d-2\) (so the input layer \(d-1\) holds \(2k\) pairs, and \(\widetilde{p_{d-1}},\widetilde{q_{d-1}}\) take \(\log k + 1\) variables)
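
The bracketed expression is just projective fraction addition: \(\frac{p_{d-1}(0,x)}{q_{d-1}(0,x)}+\frac{p_{d-1}(1,x)}{q_{d-1}(1,x)}\) has numerator \(p(0,x)q(1,x)+p(1,x)q(0,x)\) and denominator \(q(0,x)q(1,x)\). A sketch of building the layers this way (plain Python, toy modulus; the first variable is taken as the most significant bit, so \((0,x)\) selects the first half of the array):

```python
P = 2**31 - 1   # toy modulus

def next_layer(p_vals, q_vals):
    """One GKR layer: add fraction pairs projectively.

    p_i(x) = p_{i+1}(0,x) q_{i+1}(1,x) + p_{i+1}(1,x) q_{i+1}(0,x)
    q_i(x) = q_{i+1}(0,x) q_{i+1}(1,x)
    """
    h = len(p_vals) // 2
    p_out = [(p_vals[j] * q_vals[h + j] + p_vals[h + j] * q_vals[j]) % P
             for j in range(h)]
    q_out = [(q_vals[j] * q_vals[h + j]) % P for j in range(h)]
    return p_out, q_out

# Running all layers sums the fractions: here 1/3 + 1/5 + 1/7 + 1/9.
p_vals, q_vals = [1, 1, 1, 1], [3, 5, 7, 9]
while len(p_vals) > 1:
    p_vals, q_vals = next_layer(p_vals, q_vals)
root = p_vals[0] * pow(q_vals[0], P - 2, P) % P         # p_root / q_root
assert root == sum(pow(d, P - 2, P) for d in (3, 5, 7, 9)) % P
```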

the prover and verifier run one more sum-check to finally prove this equation; let’s expand the last step of this sum-check:

the prover sends a univariate polynomial \(s(z)\) in the last variable \(z\) (the randomness fixed in the earlier rounds being \(r_0,…,r_{\log k -2}\)); \(s\) has degree 3, since the \(\widetilde{eq}\) factor is linear in \(z\) and each product inside the bracket is quadratic in \(z\):

\(s(z)=\widetilde{eq}((r_0,…,r_{\log k -2},z),(\gamma_0,…,\gamma_{\log{k}-1}))\)

\(\cdot \left[\widetilde{p_{d-1}}(0,r_0,…,r_{\log k -2},z)\,\widetilde{q_{d-1}}(1,r_0,…,r_{\log k -2},z)+\widetilde{p_{d-1}}(1,r_0,…,r_{\log k -2},z)\,\widetilde{q_{d-1}}(0,r_0,…,r_{\log k -2},z)+\lambda_{d-2}\,\widetilde{q_{d-1}}(0,r_0,…,r_{\log k -2},z)\,\widetilde{q_{d-1}}(1,r_0,…,r_{\log k -2},z) \right]\)

\(=\alpha_3 z^3+\alpha_2 z^2+\alpha_1 z + \alpha_0\)

The verifier first checks that \(s(0)+s(1)\) equals the claim from the previous round, then chooses the last randomness \(r_{\log k -1}\) and verifies

\(\widetilde{eq}((r_0,…,r_{\log k -1}),(\gamma_0,…,\gamma_{\log{k}-1}))\)

\(\cdot \left[\widetilde{p_{d-1}}(0,r_0,…,r_{\log k -1})\,\widetilde{q_{d-1}}(1,r_0,…,r_{\log k -1})+\widetilde{p_{d-1}}(1,r_0,…,r_{\log k -1})\,\widetilde{q_{d-1}}(0,r_0,…,r_{\log k -1})+\lambda_{d-2}\,\widetilde{q_{d-1}}(0,r_0,…,r_{\log k -1})\,\widetilde{q_{d-1}}(1,r_0,…,r_{\log k -1}) \right]\)

\(= s(r_{\log k -1})\)

The \(\widetilde{p_{d-1}}\) and \(\widetilde{q_{d-1}}\) evaluations on the left are supplied by the prover, and since layer \(d-1\) is the input layer, they are exactly the evaluations that must be tied back to the (multi-linearly encoded) STARK trace, which is the question posed at the start of this post.
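
A sketch of this final round in Python (toy modulus; sending \(s\) as four evaluations and passing the oracle value as a plain argument are assumptions of the sketch, not a fixed wire format): the verifier checks \(s(0)+s(1)\) against the running claim, interpolates \(s\) at the fresh randomness, and compares it with the evaluation of the \(\widetilde{eq}\cdot[\dots]\) expression.

```python
P = 2**31 - 1   # toy modulus

def lagrange_eval(xs, ys, z):
    """Evaluate the unique degree-<len(xs) polynomial through (xs, ys) at z."""
    acc = 0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for m, xm in enumerate(xs):
            if m != j:
                num = num * (z - xm) % P
                den = den * (xj - xm) % P
        acc = (acc + yj * num * pow(den, P - 2, P)) % P
    return acc

def verify_last_round(claim, s_evals, r_last, oracle_eval):
    """s_evals = [s(0), s(1), s(2), s(3)] for the degree-3 round polynomial."""
    assert (s_evals[0] + s_evals[1]) % P == claim % P, "s(0)+s(1) != claim"
    # Reduce to one evaluation of eq(...)*[...] at the fully random point:
    assert oracle_eval % P == lagrange_eval([0, 1, 2, 3], s_evals, r_last)

# Toy usage: pretend eq(...)*[...] collapses to the cubic s below.
s = lambda z: (13*z**3 + 11*z**2 + 7*z + 5) % P
claim = (s(0) + s(1)) % P
r_last = 123456789
verify_last_round(claim, [s(u) for u in (0, 1, 2, 3)], r_last, s(r_last))
```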
