Version: v6 (Hypercube)

Jagged PCS

Code: slop: crates/jagged/src/verifier.rs

The result of the zerocheck protocol is a point $z$ and claims on $p_1(z), \cdots, p_n(z)$ for all chips we want to prove. Note that $z$ is of size `max_log_row_count`, and the $p_i$ are padded with zeroes beyond the "real" chip heights. We aggregate these claims into a single claim about the polynomial $P$, which is the sparse representation of the computation trace (i.e., the concatenation of all of the padded tables).

Now we want to connect this to the "main trace", i.e. the commitment of the buffer of "real" data. This is where the Jagged Polynomial Commitment comes into play. We sample a random $z_{\text{col}}$ to combine these claims via the multilinear extension. Writing $Q$ for the dense representation (to which the prover committed), we need to prove that

$$P(z_{\text{row}}, z_{\text{col}}) = \sum_{i} Q(i) \cdot \text{eq}(z_{\text{row}}, \text{row}(i)) \cdot \text{eq}(z_{\text{col}}, \text{col}(i))$$

which is true since both sides are multilinear and the equation holds over the boolean hypercube.
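As a toy sanity check (made-up values and sizes, not code from slop), one can verify this identity numerically over a small prime field: two chips of heights 2 and 1 are padded into a $2 \times 2$ grid, and the dense side sums only over the real entries, since the zero padding contributes nothing.

```python
# Toy check of the sparse/dense identity over F_97 (illustrative only).
# Two chips of heights 2 and 1 padded into a 2 x 2 grid; the dense side
# keeps only the "real" entries, so the zero padding drops out of the sum.
P_MOD = 97

def eq(x, y):
    # eq(x, y) = prod_j (x_j * y_j + (1 - x_j) * (1 - y_j))
    out = 1
    for a, b in zip(x, y):
        out = out * (a * b + (1 - a) * (1 - b)) % P_MOD
    return out

# padded trace P: column 0 holds [11, 22], column 1 holds [33, 0 (padding)]
grid = {(0, 0): 11, (1, 0): 22, (0, 1): 33, (1, 1): 0}
# dense representation Q: (value, row bits, col bits) of the real entries
entries = [(11, [0], [0]), (22, [1], [0]), (33, [0], [1])]

z_row, z_col = [5], [7]  # arbitrary field points (one bit each)

# multilinear extension of the padded grid at (z_row, z_col)
lhs = sum(v * eq([r], z_row) * eq([c], z_col) for (r, c), v in grid.items()) % P_MOD
# sum over dense entries, weighted by eq(z_row, row(i)) * eq(z_col, col(i))
rhs = sum(v * eq(rb, z_row) * eq(cb, z_col) for v, rb, cb in entries) % P_MOD
assert lhs == rhs
```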

Consider the table heights $h_1, \cdots, h_n$, and let $t_i = \sum_{j=1}^{i-1} h_j$ be the prefix sums.

Denote by $f(i)$ the multilinear extension of $\text{eq}(z_{\text{row}}, \text{row}(i)) \cdot \text{eq}(z_{\text{col}}, \text{col}(i))$. Then what we have is essentially a sumcheck claim on $Q(i) \cdot f(i)$. Running the sumcheck procedure reduces this to the verifier checking an evaluation claim of $Q$ and an evaluation claim of $f$. The former will be handled by a PCS opening of $Q$, the dense representation. The latter is the harder part, and the verifier will compute it on its own.

First, note that

$$f(i) = \sum_{c \in [1, n]} \text{eq}(z_{\text{col}}, c) \cdot \hat{g}(z_{\text{row}}, i, t_c, t_{c+1})$$

where $\hat{g}$ is the multilinear extension of

$$g(a, b, c, d) = \begin{cases} 1 & a = b - c \text{ and } b \in [c, d) \\ 0 & \text{otherwise} \end{cases}$$

which can be seen from the fact that both sides are multilinear in $i$ and are equal over the hypercube. Therefore, computing $f$ is equivalent to computing $\hat{g}$ at $n$ points.

The Jagged Assist

Code: slop: crates/jagged/src/jagged_eval/sumcheck_eval.rs::jagged_evaluation

To reduce the recursion cost, the prover helps out. Basically, the prover sends the evaluations of $\hat{g}$ at the $n$ points and proves that these evaluations are correct, so the verifier only needs to evaluate $\hat{g}$ once. Suppose the prover wants to show $h(x_i) = y_i$ for $1 \le i \le k$, where $h$ is multilinear. The verifier first samples $r$ via Fiat-Shamir, and then we show

$$\sum_{i} r^i \cdot h(x_i) = \sum_{i} r^i \cdot y_i$$

The RHS can be computed by the verifier directly. The LHS is

$$\sum_{i} r^i \cdot \sum_{b} \text{eq}(x_i, b) \cdot h(b) = \sum_{b} h(b) \cdot \sum_{i} \left( r^i \cdot \text{eq}(x_i, b) \right)$$

Now, if one runs a sumcheck protocol here over $b$, ending at a random point $v$, then the verifier needs to evaluate

$$h(v) \cdot \sum_{i} \left( r^i \cdot \text{eq}(x_i, v) \right)$$

which means that the verifier can just compute a single evaluation $h(v)$ along with some $\text{eq}$ evaluations.
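The batching step above can be sketched numerically (toy values over a small field, not slop code): honest claims make the two sides match, and swapping the order of summation gives exactly the form handed to the sumcheck over $b$.

```python
# Sketch of the batched-claims trick over F_97 (toy values, not slop code):
# k claims h(x_i) = y_i on a multilinear h collapse into one claim via r.
P_MOD = 97
N = 3  # number of variables of h

def eq(x, y):
    out = 1
    for a, b in zip(x, y):
        out = out * (a * b + (1 - a) * (1 - b)) % P_MOD
    return out

def mle(table, z):
    # multilinear extension of table: {0,1}^N -> F_p, evaluated at z
    return sum(table[b] * eq([(b >> j) & 1 for j in range(N)], z)
               for b in range(2**N)) % P_MOD

table = [3, 1, 4, 1, 5, 9, 2, 6]            # h on the hypercube
xs = [[10, 20, 30], [7, 7, 7], [2, 4, 8]]   # claimed evaluation points
ys = [mle(table, x) for x in xs]            # honest claimed values
r = 42                                      # stand-in for the Fiat-Shamir challenge

# sum_i r^i h(x_i) == sum_i r^i y_i  (the verifier computes the RHS directly)
lhs = sum(pow(r, i + 1, P_MOD) * mle(table, x) for i, x in enumerate(xs)) % P_MOD
rhs = sum(pow(r, i + 1, P_MOD) * y for i, y in enumerate(ys)) % P_MOD
assert lhs == rhs

# swapping the order of summation: sum_b h(b) * (sum_i r^i eq(x_i, b)),
# which is exactly the claim handed to the sumcheck over b
swapped = sum(table[b] * sum(pow(r, i + 1, P_MOD)
                             * eq([(b >> j) & 1 for j in range(N)], x)
                             for i, x in enumerate(xs))
              for b in range(2**N)) % P_MOD
assert swapped == lhs
```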

Utilizing this, we can write

$$\begin{aligned} f(i) &= \sum_{c \in [1, n]} \text{eq}(z_{\text{col}}, c) \cdot \hat{g}(z_{\text{row}}, i, t_c, t_{c+1}) \\ &= \sum_{c \in [1, n]} \text{eq}(z_{\text{col}}, c) \cdot \left( \sum_{u, v} \hat{g}(z_{\text{row}}, i, u, v) \cdot \text{eq}(u \,||\, v, t_c \,||\, t_{c+1}) \right) \\ &= \sum_{u, v} \hat{g}(z_{\text{row}}, i, u, v) \cdot \left( \sum_{c \in [1, n]} \text{eq}(z_{\text{col}}, c) \cdot \text{eq}(u \,||\, v, t_c \,||\, t_{c+1}) \right) \end{aligned}$$

and now, with a sumcheck over $u, v$ ending at a random point $(u', v')$, we need one evaluation of $\hat{g}(z_{\text{row}}, i, u', v')$ as well as

$$\sum_{c \in [1, n]} \text{eq}(z_{\text{col}}, c) \cdot \text{eq}(u' \,||\, v', t_c \,||\, t_{c+1})$$
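This last sum is cheap for the verifier: it is $n$ products of $\text{eq}$ evaluations. A minimal sketch with toy parameters (illustrative values, not slop code):

```python
# What the verifier computes itself after the assist sumcheck, at the final
# random point (u', v'): an O(n) sum of eq terms (toy parameters over F_97).
P_MOD = 97

def eq(x, y):
    out = 1
    for a, b in zip(x, y):
        out = out * (a * b + (1 - a) * (1 - b)) % P_MOD
    return out

def bits(x, m):
    # LSB-first bit decomposition
    return [(x >> j) & 1 for j in range(m)]

t = [0, 2, 3]                          # prefix sums for heights [2, 1]
z_col = [11]                           # one bit indexes the two columns
u_prime, v_prime = [5, 7], [13, 17]    # final sumcheck point, 2 bits each

total = sum(
    eq(bits(c, 1), z_col)
    * eq(u_prime + v_prime, bits(t[c], 2) + bits(t[c + 1], 2)) % P_MOD
    for c in range(2)
) % P_MOD
```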

The Jagged Evaluation

Code: slop: crates/jagged/src/poly.rs, BranchingProgram::eval

This means the remaining task at hand for the verifier is to compute $\hat{g}$ at a single point.

Note that $z_{\text{row}}$ and the evaluation point for $f$, denoted $z_{\text{trace}}$, are fixed, so we can just define

$$h(u, v) = \hat{g}(z_{\text{row}}, z_{\text{trace}}, u, v)$$

and work with the evaluation of a single $h$. We show how to compute $h$ with a dynamic-programming approach.

Since $\hat{g}$ is multilinear, one can write

$$h(u, v) = \sum_{a, b, c, d} \text{eq}(a, z_{\text{row}}) \cdot \text{eq}(b, z_{\text{trace}}) \cdot \text{eq}(c, u) \cdot \text{eq}(d, v) \cdot g(a, b, c, d)$$

If we can define some state over $a, b, c, d$ and a transition that consumes their bits one at a time, we can compute this via dynamic programming. For example, suppose we can define some state $S_i$ over the first $i$ bits of $a, b, c, d$, and can transition it to $S_{i+1}$ based on their $(i+1)$-th bits. Then, we can compute something like

$$H(S_{i, st}, u, v) = \sum_{a, b, c, d \in \{0, 1\}^i} \text{eq}(a, z_{\text{row}}) \cdot \text{eq}(b, z_{\text{trace}}) \cdot \text{eq}(c, u) \cdot \text{eq}(d, v) \cdot [(a, b, c, d) \in st]$$

where the $\text{eq}$ polynomials use only the first $i$ coordinates. In this case, we can say

$$H(S_{i+1, st'}, u, v) = \sum_{st \rightarrow st' \text{ via } a', b', c', d'} H(S_{i, st}, u, v) \cdot \text{tr}(a', b', c', d')$$

where the transition coefficient is simply

$$\text{tr}(a', b', c', d') = \text{eq}(a' \,||\, b' \,||\, c' \,||\, d', \; z_{\text{row}}[i+1] \,||\, z_{\text{trace}}[i+1] \,||\, u[i+1] \,||\, v[i+1])$$

so now it just reduces to whether or not the conditions $a = b - c$ and $b \in [c, d)$ can be represented in such a form. Basically, we need to check $b = a + c$ and $b < d$ at the same time. Therefore, we keep the following two booleans as state while we traverse from the LSB to the MSB:

  • the carry when adding $a + c$
  • whether or not $b < d$ so far

as well as whether the state has already failed (i.e., $b = a + c$ is already violated). It is clear that this state can be updated as the four bits are consumed one step at a time, as desired. Note that ending with a zero carry guarantees $b = a + c$ exactly, which in particular gives $b \ge c$ for free.
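A minimal sketch of this dynamic program (illustrative Python in the spirit of `BranchingProgram::eval`, not the slop implementation; all names and sizes are made up): the state is the carry of $a + c$ together with the $b < d$ comparison so far, paths violating $b = a + c$ are dropped, and the result is cross-checked against the $2^{4m}$-term defining sum.

```python
# Illustrative dynamic program for h(u, v) = g_hat(z_row, z_trace, u, v).
# State: (carry of a + c, "b < d on the bits seen so far"); LSB to MSB.
P_MOD = 2**31 - 1
M = 3  # bits per argument (toy size)

def w(z, bit):
    # per-coordinate eq factor: z if bit = 1, (1 - z) if bit = 0
    return z % P_MOD if bit else (1 - z) % P_MOD

def g(a, b, c, d):
    # indicator: a = b - c and b in [c, d)
    return 1 if (a == b - c and c <= b < d) else 0

def bits(x, m):
    # LSB-first bit decomposition
    return [(x >> j) & 1 for j in range(m)]

def h_bruteforce(zr, zt, u, v):
    # the 2^(4M)-term defining sum, kept for cross-checking
    total = 0
    for a in range(2**M):
        for b in range(2**M):
            for c in range(2**M):
                for d in range(2**M):
                    if not g(a, b, c, d):
                        continue
                    term = 1
                    for i in range(M):
                        term = term * w(zr[i], (a >> i) & 1) % P_MOD
                        term = term * w(zt[i], (b >> i) & 1) % P_MOD
                        term = term * w(u[i], (c >> i) & 1) % P_MOD
                        term = term * w(v[i], (d >> i) & 1) % P_MOD
                    total = (total + term) % P_MOD
    return total

def h_branching_program(zr, zt, u, v):
    # paths violating b = a + c carry zero weight and are simply dropped
    state = {(0, 0): 1}  # (carry, lt) -> accumulated weight
    for i in range(M):
        nxt = {}
        for (carry, lt), wgt in state.items():
            for ab in (0, 1):
                for cb in (0, 1):
                    s = ab + cb + carry
                    bb = s & 1        # b's bit is forced by b = a + c
                    new_carry = s >> 1
                    for db in (0, 1):
                        # strict b < d is decided by the top differing bit
                        new_lt = lt if bb == db else int(bb < db)
                        coeff = (w(zr[i], ab) * w(zt[i], bb) % P_MOD
                                 * w(u[i], cb) % P_MOD * w(v[i], db) % P_MOD)
                        key = (new_carry, new_lt)
                        nxt[key] = (nxt.get(key, 0) + wgt * coeff) % P_MOD
        state = nxt
    # accept iff no leftover carry (so b = a + c, hence b >= c) and b < d
    return state.get((0, 1), 0)

# boolean inputs recover g itself; field inputs match the brute-force sum
assert h_branching_program(bits(2, M), bits(3, M), bits(1, M), bits(5, M)) == g(2, 3, 1, 5)
assert h_branching_program([5, 7, 11], [2, 3, 4], [9, 8, 6], [1, 12, 10]) == \
       h_bruteforce([5, 7, 11], [2, 3, 4], [9, 8, 6], [1, 12, 10])
```

The linear-size state space is the whole point: the verifier does $O(m)$ work with a constant number of states instead of summing $2^{4m}$ terms.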