following this post
Witness encoding:
Trace columns are interpreted as functions: the inputs of the function are bits, with enough bits to index the \(n\) witness values in the column.
For trace column \(i\) with \(n = 2^{\nu}\) rows (the trace length):
\(f_i(b_0,b_1,b_2,\dots,b_{\nu-1})=a_{b_{\nu-1},\dots,b_2,b_1,b_0}\)
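To make the indexing concrete, here is a minimal Python sketch of this encoding, assuming the leftmost argument \(b_0\) is the least significant bit of the row index (the helper name `column_as_function` is just for illustration):

```python
# Sketch: view a trace column of n = 2^nu values as a function of nu bits.
# Assumed convention (matching the note): the leftmost argument b_0 is the
# least significant bit of the row index.

def column_as_function(column):
    n = len(column)
    nu = n.bit_length() - 1
    assert n == 1 << nu, "trace length must be a power of two"

    def f(*bits):
        assert len(bits) == nu
        row = sum(b << i for i, b in enumerate(bits))  # row = b_0 + 2*b_1 + 4*b_2 + ...
        return column[row]

    return f

# Example: a column with 8 witness values.
col = [10, 11, 12, 13, 14, 15, 16, 17]
f = column_as_function(col)
assert f(1, 0, 0) == col[1]  # b_0 = 1 -> row 1
assert f(0, 1, 0) == col[2]  # b_1 = 1 -> row 2
assert f(1, 1, 1) == col[7]
```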
Circuit for computing the sum of the fractions
The LogUp sum is a sum of many fractions. The strategy here is to add the fractions as (numerator, denominator) pairs without dividing along the way (at the end I suppose the numerator will be divided by the denominator, but for now only consider the addition).
adding two fractions: \(\frac{a}{b}+\frac{c}{d}=\frac{a\cdot d + c \cdot b}{b\cdot d}\)
Let a fraction be represented by two elements: \((numerator, denominator)\).
With this representation, the addition of two fractions is expressed as
\((a,b)+(c,d)=(a\cdot d + c \cdot b, b\cdot d)\)
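A minimal sketch of this pair representation (plain Python integers stand in for field elements; the comparison against `fractions.Fraction` is only a sanity check for the example):

```python
from fractions import Fraction

# Represent a fraction as a (numerator, denominator) pair and add pairs
# without dividing; the division only happens when the result is read.

def frac_add(x, y):
    a, b = x
    c, d = y
    return (a * d + c * b, b * d)

p, q = frac_add((1, 3), (2, 5))                           # 1/3 + 2/5
assert Fraction(p, q) == Fraction(1, 3) + Fraction(2, 5)  # = 11/15
```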
The LogUp equation to be proved in the note is:

Then the circuit is something like:


(figure: second pic from this blog)
Compare this to the traditional GKR circuit presentation, where each layer is represented by one polynomial:
$$V_i(q',q)=\sum_{h'\in\{0,1\}^{b_N}}\sum_{h_L<h_R\in\{0,1\}^{b_G}}P_{q',q,i}(h',h_L,h_R)$$
where
$$P_{q',q,i}(h',h_L,h_R)=add_i(q,h_L, h_R)\cdot (V_{i+1}(q',h_L)+V_{i+1}(q',h_R))+mul_i(q,h_L,h_R)\cdot V_{i+1}(q',h_L)\cdot V_{i+1}(q',h_R)$$
\(q'\) and \(q\) together index a gate in this layer, and \(V_i\) is the polynomial whose evaluations encode the outputs of the gates in this layer.
In LogUp-GKR, each gate has two values, so there are two polynomials encoding the outputs of the gates in a layer. Each of these polynomials takes one input, the gate index; prefixing that index with one bit gives the index of an input gate in the next layer.
\(p_i(x)\): the numerator of the fraction addition. It follows the same pattern as the first element of the pair above, \(a\cdot d + c \cdot b\):
\(p_i(x) =p_{i+1}(0,x)\,q_{i+1}(1,x)+p_{i+1}(1,x)\,q_{i+1}(0,x)\)
Recall that the leftmost bit is the least significant bit, so the 0 and 1 here pick out exactly the left and right input gates of the current gate indexed by \(x\).
\(q_i(x)=q_{i+1}(0,x)q_{i+1}(1,x)\)
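A small sketch of this layered fraction tree under the conventions above (plain integers stand in for field elements; the helper name `next_layer` is just for this example):

```python
from fractions import Fraction

# Layer i is built from layer i+1 by pairing gate (0, x) with gate (1, x).
# With the "leftmost bit is least significant" convention, (0, x) and (1, x)
# are rows 2*x and 2*x + 1 of the next layer.

def next_layer(p_next, q_next):
    half = len(p_next) // 2
    p, q = [], []
    for x in range(half):
        a, b = p_next[2 * x], q_next[2 * x]          # left input, gate (0, x)
        c, d = p_next[2 * x + 1], q_next[2 * x + 1]  # right input, gate (1, x)
        p.append(a * d + c * b)  # p_i(x)
        q.append(b * d)          # q_i(x)
    return p, q

# Input layer: the four fractions 1/2, 1/3, 1/4, 1/5.
p, q = [1, 1, 1, 1], [2, 3, 4, 5]
while len(p) > 1:
    p, q = next_layer(p, q)

# The single output gate holds the full sum 1/2 + 1/3 + 1/4 + 1/5 = 77/60.
assert Fraction(p[0], q[0]) == Fraction(77, 60)
```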
Take their multilinear extensions:
\(\widetilde{p_i}(\hat{x}) =\sum\limits_{x\in \{0,1\}^{k_i}}\widetilde{eq}(x,\hat{x})\left(\widetilde{p_{i+1}}(0,x)\widetilde{q_{i+1}}(1,x)+\widetilde{p_{i+1}}(1,x)\widetilde{q_{i+1}}(0,x)\right)\)
\(\widetilde{q_i}(\hat{x})=\sum\limits_{x\in \{0,1\}^{k_i}}\widetilde{eq}(x,\hat{x})\widetilde{q_{i+1}}(0,x)\widetilde{q_{i+1}}(1,x)\)
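A minimal sketch of \(\widetilde{eq}\) and of evaluating a multilinear extension by summing over the hypercube (the prime \(P\) is an arbitrary choice for the example; function names are illustrative):

```python
# Toy prime field for the example.
P = 2**31 - 1

def eq(x_bits, r):
    # eq(x, r) = prod_j (x_j * r_j + (1 - x_j) * (1 - r_j))
    out = 1
    for xj, rj in zip(x_bits, r):
        out = out * ((xj * rj + (1 - xj) * (1 - rj)) % P) % P
    return out

def mle_eval(values, r):
    # \tilde{f}(r) = sum over x in {0,1}^k of eq(x, r) * f(x)
    k = len(r)
    assert len(values) == 1 << k
    total = 0
    for idx, v in enumerate(values):
        x_bits = [(idx >> j) & 1 for j in range(k)]  # b_0 is least significant
        total = (total + eq(x_bits, r) * v) % P
    return total

# On Boolean points the multilinear extension agrees with the original values.
vals = [3, 1, 4, 1, 5, 9, 2, 6]
assert mle_eval(vals, [1, 0, 0]) == vals[1]
assert mle_eval(vals, [0, 1, 1]) == vals[6]
```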
GKR logistics
The core idea of GKR is to reduce a claim about the layer polynomial(s) at layer \(i\) to a claim about the layer polynomial(s) at layer \(i+1\), namely:
\(\widetilde{p_i}(\rho_i) \) and \(\widetilde{q_i}(\rho_i) \) with random \(\rho_i\in \mathbb{F}^{k_i}\)
==>
\(\widetilde{p_{i+1}}(\rho_{i+1}) \) and \(\widetilde{q_{i+1}}(\rho_{i+1}) \) with random \(\rho_{i+1}\in \mathbb{F}^{k_{i+1}}\)
Combine the two evaluation claims into one using a random linear combination:
\(\widetilde{p_i}(\rho_i) +\lambda_i\widetilde{q_{i}}(\rho_i)=\)
\(\sum\limits_{x\in \{0,1\}^{k_i}}\widetilde{eq}(x,\rho_i)\left[\widetilde{p_{i+1}}(0,x)\widetilde{q_{i+1}}(1,x)+\widetilde{p_{i+1}}(1,x)\widetilde{q_{i+1}}(0,x)+\lambda_i\,\widetilde{q_{i+1}}(0,x)\widetilde{q_{i+1}}(1,x) \right]\)
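To make this concrete, here is a small numeric sanity check of the identity on a toy example (the prime, the layer values, \(\rho_i\) and \(\lambda_i\) are all arbitrary choices for the demo; the `eq`/`mle_eval` helpers are the same as in the earlier sketch, repeated so the snippet runs on its own):

```python
import random

P = 2**31 - 1

def eq(x_bits, r):
    out = 1
    for xj, rj in zip(x_bits, r):
        out = out * ((xj * rj + (1 - xj) * (1 - rj)) % P) % P
    return out

def mle_eval(values, r):
    total = 0
    for idx, v in enumerate(values):
        x_bits = [(idx >> j) & 1 for j in range(len(r))]
        total = (total + eq(x_bits, r) * v) % P
    return total

# Layer i+1 holds eight (numerator, denominator) values; layer i is built
# from them by the p/q recursion above (k_i = 2, i.e. four gates).
p_next = [random.randrange(P) for _ in range(8)]
q_next = [random.randrange(1, P) for _ in range(8)]
p_i = [(p_next[2*x] * q_next[2*x+1] + p_next[2*x+1] * q_next[2*x]) % P for x in range(4)]
q_i = [(q_next[2*x] * q_next[2*x+1]) % P for x in range(4)]

rho = [random.randrange(P) for _ in range(2)]
lam = random.randrange(P)

# Left-hand side: the combined claim about layer i.
lhs = (mle_eval(p_i, rho) + lam * mle_eval(q_i, rho)) % P

# Right-hand side: the sum over the hypercube, expressed via layer i+1 values.
rhs = 0
for x in range(4):
    x_bits = [(x >> j) & 1 for j in range(2)]
    term = (p_next[2*x] * q_next[2*x+1] + p_next[2*x+1] * q_next[2*x]
            + lam * q_next[2*x] * q_next[2*x+1]) % P
    rhs = (rhs + eq(x_bits, rho) * term) % P

assert lhs == rhs
```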
Now only one sumcheck is needed. In the last round of this sumcheck, assume the accumulated randomness ends up being \(\gamma \in \mathbb{F}^{k_i}\) (attention! \(\gamma\) is not a single element of the field, it is \(k_i\) elements). The verifier then needs to evaluate \(\widetilde{p_{i+1}}(0,\gamma),\widetilde{p_{i+1}}(1,\gamma)\) and \(\widetilde{q_{i+1}}(0,\gamma),\widetilde{q_{i+1}}(1,\gamma)\).
We can still shrink \(\widetilde{p_{i+1}}(0,\gamma),\widetilde{p_{i+1}}(1,\gamma)\) to one evaluation using the technique below, and the same applies to \(\widetilde{q_{i+1}}(0,\gamma),\widetilde{q_{i+1}}(1,\gamma)\).
Prover: send \(\widetilde{p_{i+1}}(0,\gamma),\widetilde{p_{i+1}}(1,\gamma)\) (and likewise \(\widetilde{q_{i+1}}(0,\gamma),\widetilde{q_{i+1}}(1,\gamma)\)).
Verifier: sample a random \(\gamma_0\) and compute \(\widetilde{p_{i+1}}(\gamma_0,\gamma)=(1-\gamma_0)\cdot \widetilde{p_{i+1}}(0,\gamma)+\gamma_0\cdot \widetilde{p_{i+1}}(1,\gamma)\) (check this post to see why this equation holds; a short derivation is also sketched at the end of this section),
then compute \(\widetilde{p_{i+1}}(\gamma_0,\gamma) +\lambda_{i+1}\widetilde{q_{i+1}}(\gamma_0,\gamma)\) with a fresh random \(\lambda_{i+1}\) and send the challenges to the prover;
the prover takes this as the claim to be proved in the next layer, with \(\rho_{i+1}=(\gamma_0,\gamma)\). (So the prover is indeed left with two values, \(\widetilde{p_{i+1}}(\rho_{i+1})\) and \(\widetilde{q_{i+1}}(\rho_{i+1})\), but the random linear combination folds them back into a single sumcheck, exactly as at this layer.)
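Here is the short derivation promised above, as a quick sketch. Since \(\widetilde{p_{i+1}}\) is multilinear, for a fixed \(\gamma\) the map \(t\mapsto \widetilde{p_{i+1}}(t,\gamma)\) has degree at most 1 in \(t\), so it is of the form \(u+v\cdot t\). Evaluating at \(t=0\) and \(t=1\) gives \(u=\widetilde{p_{i+1}}(0,\gamma)\) and \(u+v=\widetilde{p_{i+1}}(1,\gamma)\), hence
$$\widetilde{p_{i+1}}(t,\gamma)=(1-t)\cdot\widetilde{p_{i+1}}(0,\gamma)+t\cdot\widetilde{p_{i+1}}(1,\gamma),$$
and plugging in \(t=\gamma_0\) gives exactly the equation the verifier uses. The same argument applies to \(\widetilde{q_{i+1}}\).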
The next post is about the input layer and how to embed LogUp-GKR into a STARK.