SP1

Documentation for SP1 users and developers.


SP1 is a performant, 100% open-source, contributor-friendly zero-knowledge virtual machine (zkVM) that verifies the execution of arbitrary Rust (or any LLVM-compiled language) programs.

The future of truth is programmable

The future of ZK is writing normal code. Zero-knowledge proofs (ZKPs) are a powerful primitive that will enable a new generation of more secure, scalable and innovative blockchain architectures that rely on truth, not trust. But ZKP adoption has been held back because it is “moon math”, requiring specialized knowledge of obscure ZKP frameworks and hard-to-maintain one-off deployments.

Performant, general-purpose zkVMs, like SP1, will obsolete the current paradigm of specialized teams hand-rolling their own custom ZK stacks and create a future where all blockchain infrastructure, including rollups, bridges, coprocessors, and more, utilizes ZKPs via maintainable software written in Rust (or other LLVM-compiled languages).

Built from day one to be customizable and maintained by a diverse ecosystem of contributors

SP1 is 100% open-source (MIT / Apache 2.0) with no code obfuscation and built to be contributor friendly, with all development done in the open. Unlike existing zkVMs whose constraint logic is closed-source and impossible to modify, SP1 is modularly architected and designed to be customizable from day one. This customizability (unique to SP1) allows users to add “precompiles” to the core zkVM logic that yield substantial performance gains, making SP1’s performance not only SOTA vs. existing zkVMs, but also competitive with circuits in a variety of use cases.

Installation

SP1 currently runs on Linux and macOS. You can either use prebuilt binaries through sp1up or build the toolchain and CLI from source.

Requirements

Currently our prebuilt binaries are built on Ubuntu 20.04 (22.04 on ARM) and macOS. If your OS uses an older GLIBC version, they may not work and you will need to build the toolchain from source.
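To check which GLIBC version your Linux system provides, one quick option is:

```shell
# Print the system's GLIBC version; prebuilt binaries require a version at
# least as new as the one they were built against.
ldd --version | head -n 1
```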

Option 1: Prebuilt Binaries (Recommended)

sp1up is the SP1 toolchain installer. Open your terminal and run:

curl -L https://sp1.succinct.xyz | bash

Follow the on-screen instructions, which will make the sp1up command available in your shell.

Then run sp1up to install the toolchain:

sp1up

This will install two things:

  1. The succinct Rust toolchain, which has support for the riscv32im-succinct-zkvm-elf compilation target.
  2. The cargo prove CLI tool, which lets you compile provable programs and then prove their correctness.

You can verify the installation by running cargo prove --version:

cargo prove --version

If this works, go to the next section to compile and prove a simple zkVM program.

Troubleshooting

If you have installed cargo-prove from source, it may conflict with sp1up's cargo-prove installation or vice versa. You can remove the cargo-prove that was installed from source with the following command:

rm ~/.cargo/bin/cargo-prove

Or, you can remove the cargo-prove that was installed through sp1up:

rm ~/.sp1/bin/cargo-prove

Option 2: Building from Source

Make sure you have installed the dependencies needed to build the Rust toolchain from source.

Clone the sp1 repository and navigate to the root directory.

git clone git@github.com:succinctlabs/sp1.git
cd sp1
cd cli
cargo install --locked --path .
cd ~
cargo prove build-toolchain

Building the toolchain can take a while, ranging from 30 minutes to an hour depending on your machine. If you're on a machine that we have prebuilt binaries for (ARM Mac, or x86 or ARM Linux), you can use the following to download a prebuilt version.

cargo prove install-toolchain

To verify the installation of the toolchain, run the following and make sure you see succinct:

rustup toolchain list

You can delete your existing installation of the toolchain with:

rustup toolchain remove succinct

Option 3: Using Docker

SP1 can also be used entirely within a Docker container. If you don't have it, Docker can be installed directly from Docker's website.

Then you can use:

cargo prove --docker

to automatically use the latest image of SP1 in a container.

Alternatively, you can build the Docker image locally by running:

docker build -t succinctlabs/sp1:latest ./cli/docker

You can then run the cargo prove command by mounting your program directory into the container:

docker run -v "$(pwd):/root/program" -it succinctlabs/sp1:latest prove build

Quickstart

In this section, we will show you how to create a simple Fibonacci program using the SP1 zkVM.

Create Project

The first step is to create a new project using the cargo prove new <name> command. This command will create a new folder in your current directory.

cargo prove new fibonacci
cd fibonacci

This will create a new project with the following structure:

.
├── program
│   ├── Cargo.toml
│   ├── elf
│   └── src
│       └── main.rs
└── script
    ├── Cargo.toml
    └── src
        └── main.rs

6 directories, 4 files

There are 2 directories (each a crate) in the project:

  • program: the source code that will be proven inside the zkVM.
  • script: the code that generates and verifies proofs.

We recommend you install the rust-analyzer extension. Note that if you use cargo prove new inside a monorepo, you will need to add the manifest file to rust-analyzer.linkedProjects to get full IDE support.
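For example, a sketch of that setting in VS Code's .vscode/settings.json (the paths are illustrative; point them at your actual manifests):

```json
{
  "rust-analyzer.linkedProjects": [
    "fibonacci/program/Cargo.toml",
    "fibonacci/script/Cargo.toml"
  ]
}
```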

Generate Proof

The program simply computes the n-th Fibonacci number.

Before we can run the program inside the zkVM, it must be compiled to a RISC-V executable using the succinct Rust toolchain. This executable format is called an ELF (Executable and Linkable Format). The build.rs file in the script directory uses the cargo prove toolchain to automatically build the ELF.

To generate a proof, we take the ELF file generated by the build.rs file and execute it within the SP1 zkVM. The code in the script directory is already scaffolded with a script that has logic to generate a proof, save the proof to disk, and verify it.

cd script
cargo run --release

The output should show:

...
    Compiling fibonacci-script v0.1.0 (.../fibonacci/script)
    Finished release [optimized] target(s) in 26.14s
    Running `target/release/fibonacci-script`
a: 205697230343233228174223751303346572685
b: 332825110087067562321196029789634457848
successfully generated and verified proof for the program!

The program is quite small by default, so proof generation will only take a few seconds locally. After it completes, the proof will be saved in the proof-with-io.bin file and also verified for correctness.

Modifying the Program

You can experiment with how many rounds of Fibonacci are executed by changing n (set to 186 by default) in the file script/src/main.rs. For larger n, integer overflow will cause the output to diverge from the true Fibonacci sequence, although the proof will still be generated and verified.
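As a plain-Rust illustration of that overflow behavior (outside the zkVM; fib_wrapping is a hypothetical helper, not part of the generated project), wrapping arithmetic shows why a large n no longer yields true Fibonacci numbers even though the computation still completes:

```rust
// Hypothetical standalone sketch: u32 Fibonacci with wrapping addition.
// fib(47) already exceeds u32::MAX, so later values silently wrap and stop
// matching the true sequence, yet the program still runs (and proves) fine.
fn fib_wrapping(n: u32) -> u32 {
    let (mut a, mut b) = (0u32, 1u32);
    for _ in 0..n {
        let c = a.wrapping_add(b);
        a = b;
        b = c;
    }
    a
}

fn main() {
    println!("fib(10) = {}", fib_wrapping(10)); // small n is still exact: 55
    println!("fib(50) = {}", fib_wrapping(50)); // wrapped; true fib(50) is 12586269025
}
```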

The ELF is automatically rebuilt every time you modify the program. You can verify that the ELF was regenerated by checking the elf directory for a file called riscv32im-succinct-zkvm-elf:

ls elf # should show riscv32im-succinct-zkvm-elf

Project Template

Another option for getting started with SP1 is to use the SP1 Project Template.

Writing Programs: Setup

In this section, we will teach you how to set up a self-contained crate which can be compiled as a program that can be executed inside the zkVM.

The recommended way to set up your first program to prove inside the zkVM is the method described in Quickstart, which will create a program folder.

cargo prove new <name>
cd program

Build

To build the program, simply run:

cargo prove build

This will compile the ELF that can be executed in the zkVM and put the executable in elf/riscv32im-succinct-zkvm-elf.

Build with Docker

Another option is to build your program in a Docker container. This is useful if you are on a platform that does not have prebuilt binaries for the succinct toolchain, or if you are looking to get a reproducible ELF output. To do so, just use the --docker flag.

cargo prove build --docker

Manual

You can also manually set up a project. First create a new cargo project:

cargo new program
cd program

Cargo Manifest

Inside this crate, add the sp1-zkvm crate as a dependency. Your Cargo.toml should look as follows:

[workspace]
[package]
version = "0.1.0"
name = "program"
edition = "2021"

[dependencies]
sp1-zkvm = { git = "https://github.com/succinctlabs/sp1.git" }

The sp1-zkvm crate includes necessary utilities for your program, including handling inputs and outputs, precompiles, patches, and more.

main.rs

Inside the src/main.rs file, you must include these two lines to ensure that the crate compiles properly.

#![no_main]
sp1_zkvm::entrypoint!(main);

These two lines of code wrap your main function with some additional logic to ensure that your program compiles correctly for the RISC-V target.

Build

To build the program, simply run:

cargo prove build

This will compile the ELF (RISC-V binary) that can be executed in the zkVM and put the executable in elf/riscv32im-succinct-zkvm-elf.

Writing Programs: Basics

A zero-knowledge proof generally proves that some function f when applied to some input x produces some output y (i.e. f(x) = y). In the context of the SP1 zkVM:

  • f is written in normal Rust code.
  • x are bytes that can be serialized / deserialized into objects.
  • y are bytes that can be serialized / deserialized into objects.

To make this more concrete, let's walk through a simple example of writing a Fibonacci program inside the zkVM.

Fibonacci

This program is from the examples directory in the SP1 repo which contains several example programs of varying complexity.

//! A simple program that takes a number `n` as input, and writes the `n-1`th and `n`th fibonacci
//! number as an output.

// These two lines are necessary for the program to properly compile.
//
// Under the hood, we wrap your main function with some extra code so that it behaves properly
// inside the zkVM.
#![no_main]
sp1_zkvm::entrypoint!(main);

pub fn main() {
    // Read an input to the program.
    //
    // Behind the scenes, this compiles down to a custom system call which handles reading inputs
    // from the prover.
    let n = sp1_zkvm::io::read::<u32>();

    // Write n to public input
    sp1_zkvm::io::commit(&n);

    // Compute the n'th fibonacci number, using normal Rust code.
    let mut a = 0;
    let mut b = 1;
    for _ in 0..n {
        let mut c = a + b;
        c %= 7919; // Modulus to prevent overflow.
        a = b;
        b = c;
    }

    // Write the output of the program.
    //
    // Behind the scenes, this also compiles down to a custom system call which handles writing
    // outputs to the prover.
    sp1_zkvm::io::commit(&a);
    sp1_zkvm::io::commit(&b);
}

As you can see, writing programs is as simple as writing normal Rust. To read more about how inputs and outputs work, refer to the section on Inputs & Outputs.

Inputs and Outputs

In real world applications of zero-knowledge proofs, you almost always want to verify your proof in the context of some inputs and outputs. For example:

  • Rollups: Given a list of transactions, prove the new state of the blockchain.
  • Coprocessors: Given a block header, prove the historical state of some storage slot inside a smart contract.
  • Attested Images: Given a signed image, prove that you made a restricted set of image transformations.

In this section, we cover how you pass inputs and outputs to the zkVM and create new types that support serialization.

Reading Data

Data that is read is not public to the verifier by default. Use the sp1_zkvm::io::read::<T> method:

let a = sp1_zkvm::io::read::<u32>();
let b = sp1_zkvm::io::read::<u64>();
let c = sp1_zkvm::io::read::<String>();

Note that T must implement the serde::Serialize and serde::Deserialize traits. If you want to read bytes directly, you can also use the sp1_zkvm::io::read_vec method.

let my_vec = sp1_zkvm::io::read_vec();

Committing Data

Committing to data makes the data public to the verifier. Use the sp1_zkvm::io::commit::<T> method:

sp1_zkvm::io::commit::<u32>(&a);
sp1_zkvm::io::commit::<u64>(&b);
sp1_zkvm::io::commit::<String>(&c);

Note that T must implement the Serialize and Deserialize traits. If you want to write bytes directly, you can also use the sp1_zkvm::io::commit_slice method:

let my_slice = [0_u8; 32];
sp1_zkvm::io::commit_slice(&my_slice);

Creating Serializable Types

Typically, you can implement the Serialize and Deserialize traits using a simple derive macro on a struct.

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
struct MyStruct {
    a: u32,
    b: u64,
    c: String
}

For more complex use cases, refer to the Serde docs.

Example

Here is a basic example of using inputs and outputs with more complex types.

#![no_main]
sp1_zkvm::entrypoint!(main);

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct MyPointUnaligned {
    pub x: usize,
    pub y: usize,
    pub b: bool,
}

pub fn main() {
    let p1 = sp1_zkvm::io::read::<MyPointUnaligned>();
    println!("Read point: {:?}", p1);

    let p2 = sp1_zkvm::io::read::<MyPointUnaligned>();
    println!("Read point: {:?}", p2);

    let p3: MyPointUnaligned = MyPointUnaligned {
        x: p1.x + p2.x,
        y: p1.y + p2.y,
        b: p1.b && p2.b,
    };
    println!("Addition of 2 points: {:?}", p3);
    sp1_zkvm::io::commit(&p3);
}

Precompiles

Precompiles are built into the SP1 zkVM and accelerate commonly used operations such as elliptic curve arithmetic and hashing. Under the hood, precompiles are implemented as custom tables dedicated to proving one or a few operations. They typically improve the performance of executing expensive operations by a few orders of magnitude.

Inside the zkVM, precompiles are exposed as system calls executed through the ecall RISC-V instruction. Each precompile has a unique system call number and implements an interface for the computation.

SP1 has also been designed specifically to make it easy for external contributors to create and extend the zkVM with their own precompiles. To learn more, you can look at the implementations of existing precompiles in the precompiles folder. More documentation on this will be coming soon.

Supported Precompiles

Typically, we recommend you interact with precompiles through patches, which are crates patched to use these precompiles under the hood. However, if you are an advanced user you can interact with the precompiles directly using extern system calls.

Here is a list of extern system calls that use precompiles.

SHA256 Extend

Executes the SHA256 extend operation on a word array.

pub extern "C" fn syscall_sha256_extend(w: *mut u32);

SHA256 Compress

Executes the SHA256 compress operation on a word array and a given state.

pub extern "C" fn syscall_sha256_compress(w: *mut u32, state: *mut u32);

Keccak256 Permute

Executes the Keccak256 permutation function on the given state.

pub extern "C" fn syscall_keccak_permute(state: *mut u64);

Ed25519 Add

Adds two points on the ed25519 curve. The result is stored in the first point.

pub extern "C" fn syscall_ed_add(p: *mut u32, q: *mut u32);

Ed25519 Decompress

Decompresses a compressed Ed25519 point.

The second half of the input array should contain the compressed Y point with the final bit as the sign bit. The first half of the input array will be overwritten with the decompressed point, and the sign bit will be removed.

pub extern "C" fn syscall_ed_decompress(point: &mut [u8; 64])

Secp256k1 Add

Adds two Secp256k1 points. The result is stored in the first point.

pub extern "C" fn syscall_secp256k1_add(p: *mut u32, q: *mut u32)

Secp256k1 Double

Doubles a Secp256k1 point in place.

pub extern "C" fn syscall_secp256k1_double(p: *mut u32)

Secp256k1 Decompress

Decompresses a Secp256k1 point.

The input array should be 64 bytes long, with the first 32 bytes containing the X coordinate in big-endian format. The second half of the input will be overwritten with the decompressed point.

pub extern "C" fn syscall_secp256k1_decompress(point: &mut [u8; 64], is_odd: bool);

Bn254 Add

Adds two Bn254 points. The result is stored in the first point.

pub extern "C" fn syscall_bn254_add(p: *mut u32, q: *mut u32)

Bn254 Double

Doubles a Bn254 point in place.

pub extern "C" fn syscall_bn254_double(p: *mut u32)

Bls12-381 Add

Adds two Bls12-381 points. The result is stored in the first point.

pub extern "C" fn syscall_bls12381_add(p: *mut u32, q: *mut u32)

Bls12-381 Double

Doubles a Bls12-381 point in place.

pub extern "C" fn syscall_bls12381_double(p: *mut u32)

Patched Crates

We maintain forks of commonly used libraries in blockchain infrastructure to significantly accelerate the execution of certain operations. Under the hood, we use precompiles to achieve tremendous performance improvements in proof generation time.

If you know of a library or library version that you think should be patched, please open an issue or a pull request!

Supported Libraries

Crate Name            Repository                        Notes
sha2                  sp1-patches/RustCrypto-hashes     sha256
tiny-keccak           sp1-patches/tiny-keccak           keccak256
ed25519-consensus     sp1-patches/ed25519-consensus     ed25519 verify
curve25519-dalek-ng   sp1-patches/curve25519-dalek-ng   ed25519 verify
curve25519-dalek      sp1-patches/curve25519-dalek      ed25519 verify
revm-precompile       sp1-patches/revm                  ecrecover precompile
reth-primitives       sp1-patches/reth                  ecrecover transactions

Using Patched Crates

To use the patched libraries, add the corresponding patch entries to your program's Cargo.toml, such as:

[patch.crates-io]
sha2-v0-9-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", branch = "patch-v0.9.8" }
sha2-v0-10-6 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", branch = "patch-v0.10.6" }
sha2-v0-10-8 = { git = "https://github.com/sp1-patches/RustCrypto-hashes", package = "sha2", branch = "patch-v0.10.8" }
curve25519-dalek = { git = "https://github.com/sp1-patches/curve25519-dalek", branch = "patch-v4.1.1" }
curve25519-dalek-ng = { git = "https://github.com/sp1-patches/curve25519-dalek-ng", branch = "patch-v4.1.1" }
ed25519-consensus = { git = "https://github.com/sp1-patches/ed25519-consensus", branch = "patch-v2.1.0" }
tiny-keccak = { git = "https://github.com/sp1-patches/tiny-keccak", branch = "patch-v2.0.2" }
revm = { git = "https://github.com/sp1-patches/revm", branch = "patch-v5.0.0" }
reth-primitives = { git = "https://github.com/sp1-patches/reth", default-features = false, branch = "sp1-reth" }

You may also need to update your Cargo.lock file. For example:

cargo update -p ed25519-consensus

If you encounter issues relating to cargo / git, you can try setting CARGO_NET_GIT_FETCH_WITH_CLI:

CARGO_NET_GIT_FETCH_WITH_CLI=true cargo update -p ed25519-consensus

You can permanently set this value in ~/.cargo/config:

[net]
git-fetch-with-cli = true

Sanity Checks

You must make sure your patch is in the workspace root; otherwise it will not be applied.

You can check if the patch was applied by running a command like the following:

cargo tree -p sha2
cargo tree -p sha2@0.9.8

Next to the package name, it should show a link to the GitHub repository that you patched with.

Checking whether a precompile is used

To check if a precompile is used by your program, set the RUST_LOG=info environment variable and call utils::setup_logger() in your script when generating a proof. Then, when you run the script, you should see a log message like the following:

2024-03-02T19:10:39.570244Z  INFO runtime.run(...): ... 
2024-03-02T19:10:39.570244Z  INFO runtime.run(...): ... 
2024-03-02T19:10:40.003907Z  INFO runtime.prove(...): Sharding the execution record.
2024-03-02T19:10:40.003916Z  INFO runtime.prove(...): Generating trace for each chip.
2024-03-02T19:10:40.003918Z  INFO runtime.prove(...): Record stats before generate_trace (incomplete): ShardStats {
    nb_cpu_events: 7476561,
    nb_add_events: 2126546,
    nb_mul_events: 11116,
    nb_sub_events: 54075,
    nb_bitwise_events: 646940,
    nb_shift_left_events: 142595,
    nb_shift_right_events: 274016,
    nb_divrem_events: 0,
    nb_lt_events: 81862,
    nb_field_events: 0,
    nb_sha_extend_events: 0,
    nb_sha_compress_events: 0,
    nb_keccak_permute_events: 2916,
    nb_ed_add_events: 0,
    nb_ed_decompress_events: 0,
    nb_weierstrass_add_events: 0,
    nb_weierstrass_double_events: 0,
    nb_k256_decompress_events: 0,
}

The ShardStats struct contains the number of events for each "table" from the execution of the program, including precompile tables. In the example above, the nb_keccak_permute_events field is 2916, indicating that the precompile for the Keccak permutation was used.

Cycle Tracking

When writing a program, it is useful to know how many RISC-V cycles a portion of the program takes to identify potential performance bottlenecks. SP1 provides a way to track the number of cycles spent in a portion of the program.

Tracking Cycles

To track the number of cycles spent in a portion of the program, you can either wrap the portion you want to profile in println!("cycle-tracker-start: block name") and println!("cycle-tracker-end: block name") statements (the block name must match between start and end), or annotate a function with the #[sp1_derive::cycle_tracker] macro. An example is shown below:

#![no_main]
sp1_zkvm::entrypoint!(main);

#[sp1_derive::cycle_tracker]
pub fn expensive_function(x: usize) -> usize {
    let mut y = 1;
    for _ in 0..100 {
        y *= x;
        y %= 7919;
    }
    y
}

pub fn main() {
    let mut nums = vec![1, 1];

    // Setup a large vector with Fibonacci-esque numbers.
    println!("cycle-tracker-start: setup");
    for _ in 0..100 {
        let mut c = nums[nums.len() - 1] + nums[nums.len() - 2];
        c %= 7919;
        nums.push(c);
    }
    println!("cycle-tracker-end: setup");

    println!("cycle-tracker-start: main-body");
    for i in 0..2 {
        let result = expensive_function(nums[nums.len() - i - 1]);
        println!("result: {}", result);
    }
    println!("cycle-tracker-end: main-body");
}

Note that to use the macro, you must add the sp1-derive crate to your dependencies for your program.

[dependencies]
sp1-derive = { git = "https://github.com/succinctlabs/sp1.git" }

In the script for proof generation, set up the logger with utils::setup_logger() and run the script with RUST_LOG=info cargo run --release. You should see the following output:

$ RUST_LOG=info cargo run --release
    Finished release [optimized] target(s) in 0.21s
     Running `target/release/cycle-tracking-script`
2024-03-13T02:03:40.567500Z  INFO execute: loading memory image
2024-03-13T02:03:40.567751Z  INFO execute: starting execution
2024-03-13T02:03:40.567760Z  INFO execute: clk = 0 pc = 0x2013b8    
2024-03-13T02:03:40.567822Z  INFO execute: ┌╴setup    
2024-03-13T02:03:40.568095Z  INFO execute: └╴4,398 cycles    
2024-03-13T02:03:40.568122Z  INFO execute: ┌╴main-body    
2024-03-13T02:03:40.568149Z  INFO execute: │ ┌╴expensive_function    
2024-03-13T02:03:40.568250Z  INFO execute: │ └╴1,368 cycles    
stdout: result: 5561
2024-03-13T02:03:40.568373Z  INFO execute: │ ┌╴expensive_function    
2024-03-13T02:03:40.568470Z  INFO execute: │ └╴1,368 cycles    
stdout: result: 2940
2024-03-13T02:03:40.568556Z  INFO execute: └╴5,766 cycles    
2024-03-13T02:03:40.568566Z  INFO execute: finished execution clk = 11127 pc = 0x0
2024-03-13T02:03:40.569251Z  INFO execute: close time.busy=1.78ms time.idle=21.1µs

Note that nested cycle-tracking regions are handled, as you can see above.

Generating Proofs: Setup

In this section, we will teach you how to set up a self-contained crate which can generate proofs of programs that have been compiled with the SP1 toolchain inside the SP1 zkVM.

The recommended way to set up your first program to prove inside the zkVM is the method described in Quickstart, which will create a script folder.

cargo prove new <name>
cd script

Manual

You can also manually set up a project. First create a new cargo project:

cargo new script
cd script

Cargo Manifest

Inside this crate, add the sp1-sdk crate as a dependency. Your Cargo.toml should look as follows:

[workspace]
[package]
version = "0.1.0"
name = "script"
edition = "2021"

[dependencies]
sp1-sdk = { git = "https://github.com/succinctlabs/sp1.git" }

The sp1-sdk crate includes necessary utilities to generate, save, and verify proofs.

Generating Proofs: Basics

An end-to-end flow of proving f(x) = y with the SP1 zkVM involves the following steps:

  • Define f using normal Rust code and compile it to an ELF (covered in the writing programs section).
  • Setup a proving key (pk) and verifying key (vk) for the program given the ELF. The proving key contains all the information needed to generate a proof and includes some post-processing on top of the ELF, while the verifying key is a compact representation of the ELF that contains all the information needed to verify a proof and is much smaller than the ELF itself.
  • Generate a proof π using the SP1 zkVM that f(x) = y with prove(pk, x).
  • Verify the proof π using verify(vk, x, y, π).

To make this more concrete, let's walk through a simple example of generating a proof for a Fibonacci program inside the zkVM.

Fibonacci

use sp1_sdk::{utils, ProverClient, SP1Proof, SP1Stdin};

/// The ELF we want to execute inside the zkVM.
const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");

fn main() {
    // Setup logging.
    utils::setup_logger();

    // Create an input stream and write '500' to it.
    let n = 500u32;

    let mut stdin = SP1Stdin::new();
    stdin.write(&n);

    // Generate the proof for the given program and input.
    let client = ProverClient::new();
    let (pk, vk) = client.setup(ELF);
    let mut proof = client.prove(&pk, stdin).unwrap();

    println!("generated proof");

    // Read and verify the output.
    let _ = proof.public_values.read::<u32>();
    let a = proof.public_values.read::<u32>();
    let b = proof.public_values.read::<u32>();

    println!("a: {}", a);
    println!("b: {}", b);

    // Verify proof and public values
    client.verify(&proof, &vk).expect("verification failed");

    // Test a round trip of proof serialization and deserialization.
    proof
        .save("proof-with-pis.bin")
        .expect("saving proof failed");
    let deserialized_proof = SP1Proof::load("proof-with-pis.bin").expect("loading proof failed");

    // Verify the deserialized proof.
    client
        .verify(&deserialized_proof, &vk)
        .expect("verification failed");

    println!("successfully generated and verified proof for the program!")
}

You can run the above script in the script directory with RUST_LOG=info cargo run --release.

Build Script

If you want your program crate to be built automatically whenever you build/run your script crate, you can add a build.rs file inside of script/ (at the same level as Cargo.toml):

fn main() {
    sp1_helper::build_program(&format!("{}/../program", env!("CARGO_MANIFEST_DIR")));
}

Make sure to also add sp1-helper as a build dependency in script/Cargo.toml:

[build-dependencies]
sp1-helper = { git = "https://github.com/succinctlabs/sp1.git" }

If you run RUST_LOG=info cargo run --release -vv, you will see the following output from the build script if the program has changed, indicating that the program was rebuilt:

[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/src
[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/Cargo.toml
[fibonacci-script 0.1.0] cargo:rerun-if-changed=../program/Cargo.lock
[fibonacci-script 0.1.0] cargo:warning=fibonacci-program built at 2024-03-02 22:01:26
[fibonacci-script 0.1.0] [sp1]    Compiling fibonacci-program v0.1.0 (/Users/umaroy/Documents/fibonacci/program)
[fibonacci-script 0.1.0] [sp1]     Finished release [optimized] target(s) in 0.15s
warning: fibonacci-script@0.1.0: fibonacci-program built at 2024-03-02 22:01:26

Advanced Usage

Execution Only

We recommend that during development of large programs (> 1 million cycles) you do not generate proofs each time. Instead, you should have your script only execute the program with the RISC-V runtime and read public_values. Here is an example:

use sp1_sdk::{utils, ProverClient, SP1Stdin};

/// The ELF we want to execute inside the zkVM.
const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");

fn main() {
    // Setup logging.
    utils::setup_logger();

    // Create an input stream and write '500' to it.
    let n = 500u32;

    let mut stdin = SP1Stdin::new();
    stdin.write(&n);

    // Only execute the program and get a `SP1PublicValues` object.
    let client = ProverClient::new();
    let (mut public_values, _) = client.execute(ELF, stdin).unwrap();

    println!("generated proof");

    // Read and verify the output.
    let _ = public_values.read::<u32>();
    let a = public_values.read::<u32>();
    let b = public_values.read::<u32>();

    println!("a: {}", a);
    println!("b: {}", b);
}

If execution of your program succeeds, then proof generation should succeed as well! (Unless there is a bug in our zkVM implementation.)

Compressed Proofs

With the ProverClient, the default prove function generates a proof that is succinct, but whose size scales with the number of cycles of the program. To generate a compressed proof of constant size (around 7 KB), you can use the prove_compressed function instead. This uses STARK recursion to combine the core SP1 proof into a single constant-sized proof, but is slower than just calling prove.

use sp1_sdk::{utils, ProverClient, SP1Stdin};

/// The ELF we want to execute inside the zkVM.
const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");

fn main() {
    // Setup logging.
    utils::setup_logger();

    // Create an input stream and write '500' to it.
    let n = 500u32;
    let mut stdin = SP1Stdin::new();
    stdin.write(&n);

    // Generate the constant-sized proof for the given program and input.
    let client = ProverClient::new();
    let (pk, vk) = client.setup(ELF);
    let mut proof = client.prove_compressed(&pk, stdin).unwrap();

    println!("generated proof");
    // Read and verify the output.
    let a = proof.public_values.read::<u32>();
    let b = proof.public_values.read::<u32>();
    println!("a: {}, b: {}", a, b);

    // Verify proof and public values
    client
        .verify_compressed(&proof, &vk)
        .expect("verification failed");

    // Save the proof.
    proof
        .save("compressed-proof-with-pis.bin")
        .expect("saving proof failed");

    println!("successfully generated and verified proof for the program!")
}

You can run the above script with RUST_LOG=info cargo run --bin compressed --release from examples/fibonacci/script.

Logging and Tracing Information

You can use utils::setup_logger() to enable logging information.

Logging:

utils::setup_logger();

You must run your command with:

RUST_LOG=info cargo run --release

CPU Acceleration

To enable CPU acceleration, you can use the RUSTFLAGS environment variable to enable the target-cpu=native flag when running your script. This will enable the compiler to generate code that is optimized for your CPU.

RUSTFLAGS='-C target-cpu=native' cargo run --release

Currently there is support for AVX512 and NEON SIMD instructions. For NEON, you must also enable the neon feature of the sp1-sdk crate in your script's Cargo.toml file.

sp1-sdk = { git = "https://github.com/succinctlabs/sp1", features = ["neon"] }

Performance

For maximal performance, run proof generation with the following command, varying SHARD_SIZE depending on your program's number of cycles.

SHARD_SIZE=4194304 RUST_LOG=info RUSTFLAGS='-C target-cpu=native' cargo run --release

Memory Usage

To reduce memory usage, set the SHARD_BATCH_SIZE environment variable based on how much RAM your machine has. A higher value uses more memory but proves faster.

SHARD_BATCH_SIZE=1 SHARD_SIZE=2097152 RUST_LOG=info RUSTFLAGS='-C target-cpu=native' cargo run --release

Prover Network: Setup

So far we've explored how to generate proofs locally, but this can be inconvenient due to the high memory and CPU requirements of proving, especially for very large programs.

Succinct has been building the Succinct Network, a distributed network of provers that can generate proofs of any size quickly and reliably. It's currently in private beta, but you can get access by following the steps below.

Get access

Currently the network is permissioned, so you need to gain access through Succinct. After you have completed the key setup below, you can submit your address in this form and we'll contact you shortly.

Key Setup

The prover network uses secp256k1 keypairs for authentication, similar to Ethereum wallets. You may generate a new keypair specifically for the prover network, or use an existing one. You do not currently need to hold any funds in this account; it is used solely for access control.

Prover network keypair credentials can be generated using the cast CLI tool:

Install:

curl -L https://foundry.paradigm.xyz | bash

Generate a new keypair:

cast wallet new

Or, retrieve your address from an existing key:

cast wallet address --private-key $PRIVATE_KEY

Make sure to keep your private key somewhere safe and secure; you'll need it to interact with the prover network.

Prover Network: Usage

Sending a proof request

To use the prover network to generate a proof, you can run your script that uses sp1_sdk::ProverClient as you would normally but with additional environment variables set:

// Generate the proof for the given program.
let client = ProverClient::new();
let (pk, vk) = client.setup(ELF);
let mut proof = client.prove(&pk, stdin).unwrap();

SP1_PROVER=network SP1_PRIVATE_KEY=... RUST_LOG=info cargo run --release
  • SP1_PROVER should be set to network when using the prover network.

  • SP1_PRIVATE_KEY should be set to your private key. You will need to be using a permissioned key to use the network.

When you call any of the prove functions in ProverClient, it will first simulate your program, then wait for the proof to be generated through the network, and finally return the proof.

View the status of your proof

You can view your proof and other running proofs on the explorer. The page for your proof will show details such as the stage of your proof and the cycles used. It also shows the program hash which is the keccak256 of the program bytes.

[Screenshot from explorer.succinct.xyz showing the details of a proof, including status, stage, type, program, requester, prover, CPU cycles used, time requested, and time claimed.]

Advanced Usage

Skip simulation

To skip the simulation step and directly submit the program for proof generation, you can set the SKIP_SIMULATION environment variable to true. This will save some time if you are sure that your program is correct. If your program panics, the proof will fail and ProverClient will panic.
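Combining this with the network settings from above (the private key value is a placeholder, as before), a request that skips simulation might look like:

```shell
SKIP_SIMULATION=true SP1_PROVER=network SP1_PRIVATE_KEY=... RUST_LOG=info cargo run --release
```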

Use NetworkProver directly

By using the sp1_sdk::NetworkProver struct directly, you can call async functions directly and have programmatic access to the proof ID.

impl NetworkProver {
    /// Creates a new [NetworkProver] with the private key set in `SP1_PRIVATE_KEY`.
    pub fn new() -> Self;

    /// Creates a new [NetworkProver] with the given private key.
    pub fn new_from_key(private_key: &str) -> Self;

    /// Requests a proof from the prover network, returning the proof ID.
    pub async fn request_proof(
        &self,
        elf: &[u8],
        stdin: SP1Stdin,
        mode: ProofMode,
    ) -> Result<String>;

    /// Waits for a proof to be generated and returns the proof.
    pub async fn wait_proof<P: DeserializeOwned>(&self, proof_id: &str) -> Result<P>;

    /// Requests a proof from the prover network and waits for it to be generated.
    pub async fn prove<P: ProofType>(&self, elf: &[u8], stdin: SP1Stdin) -> Result<P>;
}
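As a sketch of how these functions might fit together (assuming a tokio async runtime and the same ELF constant as in the earlier examples; the exact import paths for ProofMode and the proof type are assumptions and may differ between SP1 versions):

```rust
use sp1_sdk::{NetworkProver, SP1Stdin, SP1ProofWithPublicValues};
// Assumed location of ProofMode; check your sp1-sdk version.
use sp1_sdk::proto::network::ProofMode;

/// The ELF we want to execute inside the zkVM.
const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Reads the private key from the SP1_PRIVATE_KEY environment variable.
    let prover = NetworkProver::new();

    // Same input as the local examples.
    let mut stdin = SP1Stdin::new();
    stdin.write(&500u32);

    // Request a proof and keep the proof ID for later polling.
    let proof_id = prover.request_proof(ELF, stdin, ProofMode::Core).await?;
    println!("proof request submitted: {}", proof_id);

    // Block until the network has generated the proof, then save it.
    let proof: SP1ProofWithPublicValues = prover.wait_proof(&proof_id).await?;
    proof.save("network-proof.bin")?;
    Ok(())
}
```

Holding on to the proof ID this way lets you resume waiting for a proof after a restart, which the blocking ProverClient interface does not expose.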

Verifying Proofs: Solidity & EVM

SP1 recently added support for verifying proofs on-chain. To see an end-to-end example of using SP1 for on-chain use cases, refer to the SP1 Project Template.

Generating a Plonk Bn254 Proof

By default, the proofs generated by SP1 are not verifiable onchain, as they are non-constant size and STARK verification on Ethereum is very expensive. To generate a proof that can be verified onchain, we use performant STARK recursion to combine SP1 shard proofs into a single STARK proof and then wrap that in a SNARK proof. Our ProverClient has a function for this called prove_plonk. Behind the scenes, this function will first generate a normal SP1 proof, then recursively combine all of them into a single proof using the STARK recursion protocol. Finally, the proof is wrapped in a SNARK proof using PLONK.

The PLONK Bn254 prover is only guaranteed to work on official releases of SP1. To use PLONK proving & verification locally, ensure that you have Docker installed.

Example

use sp1_sdk::{utils, ProverClient, SP1Stdin};

/// The ELF we want to execute inside the zkVM.
const ELF: &[u8] = include_bytes!("../../program/elf/riscv32im-succinct-zkvm-elf");

fn main() {
    // Setup logging.
    utils::setup_logger();

    // Create an input stream and write '500' to it.
    let n = 500u32;

    let mut stdin = SP1Stdin::new();
    stdin.write(&n);

    // Generate the proof for the given program and input.
    let client = ProverClient::new();
    let (pk, vk) = client.setup(ELF);
    let mut proof = client.prove_plonk(&pk, stdin).unwrap();

    println!("generated proof");

    // Read and verify the output. The first public value is the input `n`; skip it.
    let _ = proof.public_values.read::<u32>();
    let a = proof.public_values.read::<u32>();
    let b = proof.public_values.read::<u32>();
    println!("a: {}", a);
    println!("b: {}", b);

    // Verify proof and public values
    client
        .verify_plonk(&proof, &vk)
        .expect("verification failed");

    // Save the proof.
    proof
        .save("proof-with-pis.bin")
        .expect("saving proof failed");

    println!("successfully generated and verified proof for the program!")
}

You can run the above script with RUST_LOG=info cargo run --bin plonk_bn254 --release in examples/fibonacci/script.

Advanced: PLONK without Docker

If you would like to run the PLONK prover directly without Docker, you must have Go 1.22 installed and enable the native-plonk feature in sp1-sdk. This path is not recommended and may require additional native dependencies.

sp1-sdk = { features = ["native-plonk"] }

Install SP1 Contracts

SP1 Contracts

This repository contains the smart contracts for verifying SP1 EVM proofs.

Installation

[!WARNING] Foundry installs the latest release version initially, but subsequent forge update commands will use the main branch. This branch is the development branch and should be avoided in favor of tagged releases. The release process matches a specific SP1 version.

To install the latest release version:

forge install succinctlabs/sp1-contracts

To install a specific version:

forge install succinctlabs/sp1-contracts@<version>

Add @sp1-contracts/=lib/sp1-contracts/contracts/src/ in remappings.txt.

Usage

Once installed, you can use the contracts in the library by importing them:

pragma solidity ^0.8.19;

import {SP1Verifier} from "@sp1-contracts/SP1Verifier.sol";

contract MyContract is SP1Verifier {
}

For more details on the contracts, refer to the sp1-contracts repo.

Building Plonk BN254 Artifacts

To build the Plonk Bn254 artifacts from scratch, you can use the Makefile inside the prover directory.

RUST_LOG=info make plonk-bn254

Alloy Errors

If you are using a library that depends on alloy_sol_types, and encounter an error like this:

perhaps two different versions of crate `alloy_sol_types` are being used?

This is likely due to two different versions of alloy_sol_types being used. To fix this, you can set default-features to false for the sp1-sdk dependency in your Cargo.toml.

[dependencies]
sp1-sdk = { version = "0.1.0", default-features = false }

This disables the network feature, which removes the dependency on alloy_sol_types and compiles out the NetworkProver.