FAQ

tip

For organizations prioritizing security, minimum latency, and proving at scale with reserved capacity, please get in touch. In other words, if you're interested in volume-based discounts, latency-optimized proving, or uptime and latency SLAs, let's talk.

Why do my proofs seem slow?

Confirm network usage

First make sure that you are using the Prover Network rather than generating proofs locally. You should have SP1_PROVER=network set in your environment if you're using ProverClient::from_env(), or you should be using ProverClient::builder().network().build().
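As a quick sanity check, you can inspect the same environment setting that ProverClient::from_env() reads. The helper below is purely illustrative (it is not part of the SP1 SDK):

```rust
use std::env;

/// Illustrative helper (not part of the SP1 SDK): true when the
/// SP1_PROVER setting routes proving to the network.
fn is_network(prover: Option<&str>) -> bool {
    prover == Some("network")
}

fn main() {
    let setting = env::var("SP1_PROVER").ok();
    if is_network(setting.as_deref()) {
        println!("proving on the Succinct Prover Network");
    } else {
        println!("proving locally; set SP1_PROVER=network to use the network");
    }
}
```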

Confirm proof mode

If you're using the Plonk proof mode, note that Groth16 and Compressed proofs complete roughly 50-70 seconds faster than Plonk proofs, so consider switching if your application can accept either.

Confirm efficient SDK usage

Also ensure that you are not recreating the prover client and/or the proving key for each proof request. These steps can take a few seconds and should be done once at the start of your program.

// Create the client and proving key once
let client = ProverClient::builder().network().build();
let (pk, vk) = client.setup(elf);

// Reuse the client and pk for every request
let proof = client.prove(&pk, &stdin)
    .compressed()
    .run()
    .expect("proof generation failed");

Adjust request timeout

Proofs can be fulfilled by provers at any point within the specified timeout. The default timeout is calculated in the SDK based on the gas limit of the proof request. If you override the timeout to be much sooner than the default, provers that pick up the request must fulfill it in a shorter amount of time. However, be careful not to set it too low as it's possible no prover will pick it up.

To set a custom timeout:

use std::time::Duration;

let proof = client.prove(&pk, &stdin)
    .compressed()
    .timeout(Duration::from_secs(120)) // 2 minute timeout
    .run()
    .expect("proof generation failed");

Consider using Reserved Capacity

Reserved capacity is also available for anyone who has high-throughput or low-latency requirements.

Benchmarking latency-sensitive apps with the default prover configuration might be suboptimal for various reasons. For accurate results or production use, we recommend setting up a latency-optimized environment. Contact us for more information.

Benchmarking on Small vs. Large programs

In SP1, there is a fixed overhead for proving that is independent of your program's prover gas usage. This means that benchmarking on small programs is not representative of the performance of larger programs. To get an idea of the scale of programs for real-world workloads, you can refer to our benchmarking blog post and also some numbers below:

  • An average Ethereum block ranges between 300-500M PGUs (including Merkle proof verification for storage and execution of transactions) with our keccak and secp256k1 precompiles.
  • For a Tendermint light client, the average resource consumption is ~100M PGUs (including our ed25519 precompiles).
  • We consider programs with <2M PGUs to be "small" and by default, the fixed overhead of proving will dominate the proof latency. If latency is incredibly important for your use-case, we can specialize the prover network for your program if you reach out to us.
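To make the scale concrete, here is a small illustrative sketch. The <2M "small" threshold and the reference points come from the numbers above; the 150M cutoff for "medium" and the function itself are hypothetical, not part of the SDK:

```rust
/// Illustrative only: rough program-size buckets in prover gas units (PGUs),
/// based on the reference workloads above. The 150M medium/large cutoff is
/// a hypothetical boundary chosen for this example.
fn describe_workload(pgus: u64) -> &'static str {
    const M: u64 = 1_000_000;
    match pgus {
        p if p < 2 * M => "small: fixed proving overhead dominates latency",
        p if p <= 150 * M => "medium: e.g. a Tendermint light client (~100M PGUs)",
        _ => "large: e.g. an average Ethereum block (300-500M PGUs)",
    }
}

fn main() {
    println!("{}", describe_workload(1_500_000));
    println!("{}", describe_workload(100_000_000));
    println!("{}", describe_workload(400_000_000));
}
```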

Note that if you generate Groth16 or PLONK proofs on the prover network, you will encounter a fixed overhead for the STARK -> SNARK wrapping step (~6s and ~70s, respectively). We are working on optimizing these wrapping latencies.

How am I charged for proofs?

Proof generation costs are currently tied to the amount of resources your proof consumes, which is a function of prover gas units (PGUs) and proof type (Groth16 and PLONK each have a small fixed cost).

The cost of proofs is denominated in $PROVE, which is the native token of the Succinct Prover Network. The value of $PROVE is determined by the market. When a request is submitted, provers bid the price per PGU they are willing to accept, and the proof is assigned to the prover with the lowest bid. More information can be found in the protocol section.

You can specify a max price per PGU you're willing to accept when submitting a request.
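The auction described above can be sketched as follows. The struct fields, prover names, and prices are hypothetical; the real protocol types live in the network SDK:

```rust
/// Hypothetical bid record for illustration; the actual protocol types differ.
struct Bid {
    prover: &'static str,
    price_per_pgu: u64, // smallest $PROVE denomination, illustrative units
}

/// Pick the lowest bid that does not exceed the requester's max price per PGU.
fn assign(bids: &[Bid], max_price_per_pgu: u64) -> Option<&Bid> {
    bids.iter()
        .filter(|b| b.price_per_pgu <= max_price_per_pgu)
        .min_by_key(|b| b.price_per_pgu)
}

fn main() {
    let bids = [
        Bid { prover: "prover-a", price_per_pgu: 12 },
        Bid { prover: "prover-b", price_per_pgu: 9 },
        Bid { prover: "prover-c", price_per_pgu: 15 },
    ];
    // With a cap of 10, the cheapest qualifying bid wins; a cap below every
    // bid means no prover picks up the request.
    if let Some(winner) = assign(&bids, 10) {
        println!("assigned to {} at {} per PGU", winner.prover, winner.price_per_pgu);
    }
}
```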

If you are planning to use the Succinct Prover Network regularly for an application, please reach out to us. For applications above a certain scale, we offer volume based discounts through reserved capacity.