34 changes: 17 additions & 17 deletions EIPS/eip-4844.md
@@ -53,7 +53,7 @@ Compared to full data sharding, this EIP has a reduced cap on the number of these
| `MAX_CALLDATA_SIZE` | `2**24` |
| `MAX_ACCESS_LIST_SIZE` | `2**24` |
| `MAX_ACCESS_LIST_STORAGE_KEYS` | `2**24` |
-| `MAX_TX_WRAP_KZG_COMMITMENTS` | `2**12` |
+| `MAX_TX_WRAP_COMMITMENTS` | `2**12` |
| `LIMIT_BLOBS_PER_TX` | `2**12` |
| `DATA_GAS_PER_BLOB` | `2**17` |
| `HASH_OPCODE_BYTE` | `Bytes1(0x49)` |
@@ -81,8 +81,8 @@ Specifically, we use the following methods from [`polynomial-commitments.md`](ht
### Helpers

```python
-def kzg_to_versioned_hash(kzg: KZGCommitment) -> VersionedHash:
-    return BLOB_COMMITMENT_VERSION_KZG + sha256(kzg)[1:]
+def kzg_to_versioned_hash(commitment: KZGCommitment) -> VersionedHash:
+    return BLOB_COMMITMENT_VERSION_KZG + sha256(commitment)[1:]
```
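For reference, a self-contained, runnable sketch of this helper (assumptions: `hashlib` stands in for the spec's `sha256`, and the version byte `BLOB_COMMITMENT_VERSION_KZG` is `0x01`):

```python
# A runnable sketch, not the spec itself. Assumes 48-byte commitments and
# BLOB_COMMITMENT_VERSION_KZG == b"\x01" (an assumed value).
import hashlib

BLOB_COMMITMENT_VERSION_KZG = b"\x01"  # assumed value of the version byte

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    # Replace the first hash byte with the version byte: 1 + 31 = 32 bytes.
    return BLOB_COMMITMENT_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

# Usage: a 48-byte commitment maps to a 32-byte versioned hash.
assert len(kzg_to_versioned_hash(b"\x00" * 48)) == 32
```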

Approximates `factor * e ** (numerator / denominator)` using Taylor expansion:
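(The body of this helper is collapsed in the diff view and untouched by this PR; as context, a hedged sketch of such an integer Taylor-series approximation, with the function name assumed rather than taken from the diff:)

```python
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Sketch: approximate factor * e**(numerator / denominator) in integer
    # arithmetic by summing Taylor terms until they vanish.
    i = 1
    output = 0
    numerator_accum = factor * denominator  # k-th term, scaled by denominator
    while numerator_accum > 0:
        output += numerator_accum
        # Next term: multiply the running term by (numerator / denominator) / i.
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```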
@@ -150,14 +150,14 @@ The `TransactionNetworkPayload` wraps a `TransactionPayload` with additional dat
this wrapping data SHOULD be verified directly before or after signature verification.

When a blob transaction is passed through the network (see the [Networking](#networking) section below),
-the `TransactionNetworkPayload` version of the transaction also includes `blobs`, `kzgs` (commitments list) and `proofs`.
+the `TransactionNetworkPayload` version of the transaction also includes `blobs`, `commitments` and `proofs`.
The execution layer verifies the wrapper validity against the inner `TransactionPayload` after signature verification as:

- All hashes in `blob_versioned_hashes` must start with the byte `BLOB_COMMITMENT_VERSION_KZG`
- There may be at most `MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB` total blob commitments in a valid block.
-- There is an equal amount of versioned hashes, kzg commitments, blobs and proofs.
-- The KZG commitments hash to the versioned hashes, i.e. `kzg_to_versioned_hash(kzg[i]) == versioned_hash[i]`
-- The KZG commitments match the blob contents. (Note: this can be optimized with additional data, using a proof for a
+- There is an equal amount of versioned hashes, commitments, blobs and proofs.
+- The commitments hash to the versioned hashes, i.e. `kzg_to_versioned_hash(commitment[i]) == versioned_hash[i]`
+- The commitments match the blob contents. (Note: this can be optimized with additional data, using a proof for a
random evaluation at two points derived from the commitment and blob data)


@@ -266,13 +266,13 @@ def point_evaluation_precompile(input: Bytes) -> Bytes:
    z = input[32:64]
    y = input[64:96]
    commitment = input[96:144]
-    kzg_proof = input[144:192]
+    proof = input[144:192]

    # Verify commitment matches versioned_hash
    assert kzg_to_versioned_hash(commitment) == versioned_hash

    # Verify KZG proof
-    assert verify_kzg_proof(commitment, z, y, kzg_proof)
+    assert verify_kzg_proof(commitment, z, y, proof)

    # Return FIELD_ELEMENTS_PER_BLOB and BLS_MODULUS as padded 32 byte big endian values
    return Bytes(U256(FIELD_ELEMENTS_PER_BLOB).to_be_bytes32() + U256(BLS_MODULUS).to_be_bytes32())
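The precompile reads a fixed 192-byte input. A hedged sketch of how a caller might pack it (the helper name is an assumption; the offsets simply mirror the slicing above):

```python
# A sketch, not spec code: assemble the 192-byte point-evaluation input as
# versioned_hash (32) | z (32) | y (32) | commitment (48) | proof (48).
def pack_point_evaluation_input(versioned_hash: bytes, z: bytes, y: bytes,
                                commitment: bytes, proof: bytes) -> bytes:
    assert len(versioned_hash) == 32 and len(z) == 32 and len(y) == 32
    assert len(commitment) == 48 and len(proof) == 48
    return versioned_hash + z + y + commitment + proof  # 3*32 + 2*48 = 192
```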
@@ -330,19 +330,19 @@ the payload is a SSZ encoded container:
class BlobTransactionNetworkWrapper(Container):
    tx: SignedBlobTransaction
    # KZGCommitment = Bytes48
-    blob_kzgs: List[KZGCommitment, MAX_TX_WRAP_KZG_COMMITMENTS]
+    commitments: List[KZGCommitment, MAX_TX_WRAP_COMMITMENTS]
    # BLSFieldElement = uint256
    blobs: List[Vector[BLSFieldElement, FIELD_ELEMENTS_PER_BLOB], LIMIT_BLOBS_PER_TX]
    # KZGProof = Bytes48
-    proofs: List[KZGProof, MAX_TX_WRAP_KZG_COMMITMENTS]
+    proofs: List[KZGProof, MAX_TX_WRAP_COMMITMENTS]
```

We do network-level validation of `BlobTransactionNetworkWrapper` objects as follows:

```python
def validate_blob_transaction_wrapper(wrapper: BlobTransactionNetworkWrapper):
    versioned_hashes = wrapper.tx.message.blob_versioned_hashes
-    commitments = wrapper.blob_kzgs
+    commitments = wrapper.commitments
    blobs = wrapper.blobs
    proofs = wrapper.proofs
    # note: assert blobs are not malformatted
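    # (The rest of this function is collapsed in the diff view. A hedged
    #  sketch of the remaining checks, following the validation rules listed
    #  in the Networking discussion above -- not the verbatim spec code:)
    assert len(versioned_hashes) == len(commitments) == len(blobs) == len(proofs)
    for versioned_hash, commitment in zip(versioned_hashes, commitments):
        assert kzg_to_versioned_hash(commitment) == versioned_hash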
@@ -388,7 +388,7 @@ The work that is already done in this EIP includes:

The work that remains to be done to get to full sharding includes:

-- A low-degree extension of the `blob_kzgs` in the consensus layer to allow 2D sampling
+- A low-degree extension of the `commitments` in the consensus layer to allow 2D sampling
- An actual implementation of data availability sampling
- PBS (proposer/builder separation), to avoid requiring individual validators to process 32 MB of data in one slot
- Proof of custody or similar in-protocol requirement for each validator to verify a particular part of the sharded data in each block
@@ -411,13 +411,13 @@ For each value it would provide a KZG proof and use the point evaluation precomp
and then perform the fraud proof verification on that data as is done today.

ZK rollups would provide two commitments to their transaction or state delta data:
-the kzg in the blob and some commitment using whatever proof system the ZK rollup uses internally.
-They would use a commitment proof of equivalence protocol, using the point evaluation precompile,
-to prove that the kzg (which the protocol ensures points to available data) and the ZK rollup's own commitment refer to the same data.
+the blob commitment (which the protocol ensures points to available data) and the ZK rollup's own commitment using whatever proof system the rollup uses internally.
+They would use a proof of equivalence protocol, using the point evaluation precompile,
+to prove that the two commitments refer to the same data.
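A toy, runnable sketch of the core idea behind such an equivalence check (everything here is illustrative: a small stand-in prime field and plain hash commitments replace KZG and the rollup's proof system):

```python
# Schwartz-Zippel intuition: if two commitments decode to different data,
# their evaluations at a randomly derived point differ with high probability.
import hashlib

MODULUS = 2**31 - 1  # small stand-in prime; the real protocol uses BLS_MODULUS

def evaluate(coeffs, z):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule over the data-as-polynomial
        acc = (acc * z + c) % MODULUS
    return acc

def derive_point(c1: bytes, c2: bytes) -> int:
    # Fiat-Shamir: the evaluation point depends on both commitments.
    return int.from_bytes(hashlib.sha256(c1 + c2).digest(), "big") % MODULUS

blob_data   = [3, 1, 4, 1, 5, 9]
rollup_data = [3, 1, 4, 1, 5, 9]  # same underlying data
c1 = hashlib.sha256(repr(blob_data).encode()).digest()    # stand-in blob commitment
c2 = hashlib.sha256(repr(rollup_data).encode()).digest()  # stand-in rollup commitment
z = derive_point(c1, c2)
assert evaluate(blob_data, z) == evaluate(rollup_data, z)          # same data: equal
assert evaluate(blob_data, z) != evaluate([3, 1, 4, 1, 5, 8], z)   # tampered: differs
```

In the EIP's setting, the blob side of this check is one call to the point evaluation precompile.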

### Versioned hashes & precompile return data

-We use versioned hashes (rather than kzgs) as references to blobs in the execution layer to ensure forward compatibility with future changes.
+We use versioned hashes (rather than commitments) as references to blobs in the execution layer to ensure forward compatibility with future changes.
For example, if we need to switch to Merkle trees + STARKs for quantum-safety reasons, then we would add a new version,
allowing the point evaluation precompile to work with the new format.
Rollups would not have to make any EVM-level changes to how they work;