Sumcheck optimization 2: Quasilinear Time and Square-Root Space old implementation

explanation for this equation:

\(p(r_1, \dots, r_i, x')\), with \(x'\in\{0,1\}^{l-i}\), can be expanded via the multilinear extension:

\(p(r_1, \dots, r_i, x')= \sum\limits_{y\in \{0,1\}^l}\widetilde{eq}\big((r_1, \dots, r_i, x'),y\big)\cdot p(y)\)

splitting \(y=(y_1,y_2)\) with \(y_1\in \{0,1\}^{i}\) and \(y_2\in \{0,1\}^{l-i}\):

\(=\sum\limits_{y_1\in \{0,1\}^{i}}\sum\limits_{y_2\in \{0,1\}^{l-i}}\widetilde{eq}\big((r_1, \dots, r_i), y_1\big)\cdot\widetilde{eq}(x', y_2)\cdot p(y_1, y_2)\)

since \(\widetilde{eq}(x', y_2)=1\) iff \(y_2=x'\) (both are Boolean), the inner sum collapses:

\(=\sum\limits_{y_1\in \{0,1\}^{i}}\widetilde{eq}\big((r_1, \dots, r_i), y_1\big)\cdot p(y_1, x')\)
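The collapse of the inner sum can be checked numerically. A minimal sketch over a toy prime field; the helper names (`eq_tilde`, `mle_eval`) and the bit ordering (low bit first) are my own choices, not from any implementation:

```python
# Toy check of p(r_1..r_i, x') = sum_{y in {0,1}^i} eq(r, y) * p(y, x') over F_P.
P = 97

def eq_tilde(a, b):
    """Multilinear eq polynomial: product of (a_k*b_k + (1-a_k)*(1-b_k))."""
    out = 1
    for x, y in zip(a, b):
        out = out * ((x * y + (1 - x) * (1 - y)) % P) % P
    return out

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` (length 2^l) at `point`."""
    l = len(point)
    acc = 0
    for idx in range(1 << l):
        bits = [(idx >> k) & 1 for k in range(l)]  # b_0 is the low bit
        acc = (acc + eq_tilde(point, bits) * table[idx]) % P
    return acc

l, i = 3, 2
table = [3, 1, 4, 1, 5, 9, 2, 6]   # p on {0,1}^3
r = [10, 20]                        # random field elements r_1, r_2
for xp in ([0], [1]):               # x' in {0,1}^{l-i}
    lhs = mle_eval(table, r + xp)
    rhs = 0
    for y in range(1 << i):
        ybits = [(y >> k) & 1 for k in range(i)]
        idx = y + (xp[0] << i)      # first i coordinates vary, x' fixed on top
        rhs = (rhs + eq_tilde(r, ybits) * table[idx]) % P
    assert lhs == rhs
```

The left side costs a sum over all \(2^l\) entries, while the right side only sums over \(2^i\) terms once the last \(l-i\) coordinates are Boolean; that gap is what the optimization exploits.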

Sumcheck optimization 1: linear space and linear time old implementation

protocol:

Optimization in special case:

linear time, linear space prover:

Analysis:

To what I can think of:

The \(d\cdot (d-1) \cdot 2^{l-1}\) small multiplications come from the fact that \(s_1(u)\) is a degree-\(d\) polynomial, which means the prover needs to send \(d\) evaluation points to the verifier (shouldn't it be \(d+1\)?); for each point, there will …

Committing to matrices with one-hot rows

public lookup table

\(T \in \mathbb{F}^{m \times m}\)

\(T=\begin{bmatrix}1&0&0&…\\0&1&0&…\\0&0&1&…\\…&…&…&…\end{bmatrix}\)

the table to be looked up

\(M \in \mathbb{F}^{n \times m}\); the goal is to prove that each row of \(M\) equals some row of \(T\)

for example, \(M\) could be as below (to make it more intuitive to look at)

\(M=\begin{bmatrix}0&0&1&…\\1&0&0&…\\0&0&1&…\\…&…&…&…\end{bmatrix}\)…
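Since \(T\) is the identity, "each row of \(M\) is a row of \(T\)" is exactly the condition that each row of \(M\) is one-hot. A minimal sketch of that check (the helper name is mine):

```python
def is_row_of_identity(row):
    """A row of M matches some row of the m x m identity T
    iff it is one-hot: exactly one entry is 1, the rest are 0."""
    return row.count(1) == 1 and all(v in (0, 1) for v in row)

M = [
    [0, 0, 1],
    [1, 0, 0],
    [0, 0, 1],
]
assert all(is_row_of_identity(r) for r in M)
assert not is_row_of_identity([1, 1, 0])  # two hot positions: not a row of T
```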

Twist – 1

  • \(T\): number of reads in the read-only case; number of alternating reads and writes in the read/write case.
  • \(\widetilde{\mathsf{rv}}\): values returned by the read operations.
  • \(\mathsf{Val}(k)\): value of register \(k\).
  • \(\mathsf{ra}(k,j)=\begin{cases}1, & \text{register } k \text{ is read at cycle } j\\ 0, & \text{otherwise}\end{cases}\)
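A sketch of how the one-hot read-address indicator \(\mathsf{ra}\) can be materialized from a read trace; the function name and trace layout are illustrative, not Twist's actual encoding:

```python
def build_ra(reads, num_registers):
    """ra[k][j] = 1 iff register k is read at cycle j.
    `reads[j]` is the index of the register read at cycle j."""
    T = len(reads)
    ra = [[0] * T for _ in range(num_registers)]
    for j, k in enumerate(reads):
        ra[k][j] = 1
    return ra

reads = [2, 0, 2, 1]  # register read at each of T = 4 cycles
ra = build_ra(reads, num_registers=3)
assert ra[2][0] == 1 and ra[0][1] == 1
# each column is one-hot: exactly one register is read per cycle
assert all(sum(ra[k][j] for k in range(3)) == 1 for j in range(4))
```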

Logup-GKR batching concept

IOP: prove evaluation of a multi-linear extension polynomial

a cyclic group \(\mathbb{H}\) in the base field (the STARK trace domain)

base field \(\mathcal{F}\)

secure field \(\mathcal{SF}\): a degree-4 extension of the base field

\(a(w)\) is a univariate polynomial of degree \(<2^d\) (the STARK trace), \(w\in \mathbb{H}\)

\(f(b_0,b_1,…,b_{d-1})\) is a multi-variate function (STARK …

Logup GKR in an example

traces:

\(a: [a_0,a_1,a_2,a_3,a_4,a_5,a_6,a_7]\)

\(b: [b_0,b_1,b_2,b_3,b_4,b_5,b_6,b_7]\)

goal:

prove logup relation:

\(\sum\limits_{i=0}^{7}\frac{1}{a_i+\alpha}=\sum\limits_{i=0}^{7}\frac{1}{b_i+\alpha}\)
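A quick numeric sanity check of this relation over a toy prime field, for the case where \(b\) is a permutation of \(a\); the modulus, \(\alpha\), and helper names are arbitrary choices for illustration:

```python
P = 101  # toy prime modulus

def inv(x):
    """Fermat inverse in F_P (P prime, x nonzero mod P)."""
    return pow(x, P - 2, P)

def logup_sum(vals, alpha):
    """sum_i 1 / (v_i + alpha) in F_P."""
    return sum(inv((v + alpha) % P) for v in vals) % P

a = [3, 7, 7, 1, 4, 4, 4, 9]
b = [7, 4, 3, 9, 4, 1, 4, 7]  # a permutation of a
alpha = 42                     # verifier challenge
assert logup_sum(a, alpha) == logup_sum(b, alpha)
```

The equality holds because each side sums the same multiset of fractions; a random \(\alpha\) makes an accidental match between different multisets unlikely.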

what are available:
  • commitments to the traces via FRI:

\(\text{commit}[a(x)]\)

\(\text{commit}[b(x)]\)

note: \(a(x), b(x)\) denote the polynomials evaluated over the trace domain; \(x\) ranges over the cyclic group.

  • GKR circuit: gives a proof for the accumulation of fractions, reducing the problem to proving
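The "accumulation of fractions" the circuit performs is pairwise: \(\frac{p_1}{q_1}+\frac{p_2}{q_2}=\frac{p_1 q_2+p_2 q_1}{q_1 q_2}\), applied layer by layer until one fraction remains. A sketch with hypothetical names, not the actual circuit code:

```python
P = 101  # toy prime modulus

def combine(f1, f2):
    """Add two fractions (p, q) without division: one GKR layer gate."""
    (p1, q1), (p2, q2) = f1, f2
    return ((p1 * q2 + p2 * q1) % P, (q1 * q2) % P)

def accumulate(fracs):
    """Fold a power-of-two-length list of fractions down to one (p, q)."""
    layer = fracs
    while len(layer) > 1:
        layer = [combine(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]

# sum of 1/(a_i + alpha): numerator 1, denominator a_i + alpha
alpha, a = 42, [3, 7, 1, 9]
p, q = accumulate([(1, (v + alpha) % P) for v in a])
# cross-check against direct evaluation of the sum of inverses
direct = sum(pow((v + alpha) % P, P - 2, P) for v in a) % P
assert p * pow(q, P - 2, P) % P == direct
```

Keeping numerator and denominator separate is what lets the circuit avoid field inversions until the very end.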

logup GKR stwo implementation steps

Include challenge in input MLE

question 1: does the challenge in the logup denominators really need to be part of the MLE?

yes; note that this challenge is unrelated to the GKR protocol itself (don't confuse it with the challenges in sum-check, or with the challenges used for the later random linear combination of MLE evaluations). In \(\frac{1}{\alpha +x}\), \(\alpha\) guarantees …

Logup GKR – 3: Air constraint for STARK

Committed Traces

The traces from STARK that will be interpreted as multilinear polynomial evaluations:

this describes a single lookup relation; for multiple lookups, it means the

Extra traces that are dedicated to Logup-GKR and need to be built additionally

Circuit Example:

using a simple example to describe the GKR circuit

bus …

Logup GKR 2

following the previous post

Input Layer Encoding (witness encoding)

trace columns are interpreted as functions whose inputs are bits: enough bits to represent the \(n\) witness values in the column.

for trace \(i\), with \(n=2^d\) rows (trace degree, trace length)

\(f_i(b_0,b_1,\dots,b_{d-1})=a_{(b_{d-1}\dots b_1 b_0)}\), where \((b_{d-1}\dots b_1 b_0)\) is the binary row index
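A minimal sketch of this interpretation, taking \(b_0\) as the least-significant index bit (an assumption for illustration; the actual bit ordering in stwo may differ):

```python
def column_as_function(column):
    """Interpret a length-2^d trace column as a function on d bits."""
    d = len(column).bit_length() - 1
    assert len(column) == 1 << d, "column length must be a power of two"

    def f(*bits):
        assert len(bits) == d
        idx = sum(b << k for k, b in enumerate(bits))  # b_0 = low bit
        return column[idx]

    return f

col = [10, 11, 12, 13, 14, 15, 16, 17]  # n = 8 rows, d = 3 bits
f = column_as_function(col)
assert f(1, 0, 1) == col[0b101]          # bits (b0,b1,b2) = (1,0,1) -> row 5
```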

at …