upstream: merge geth-v1.16.1 #3261
Conversation
This PR fixes a bug in the map renderer that sometimes used an obsolete block log value pointer to initialize the iterator for rendering from a snapshot. This bug was triggered by chain reorgs and sometimes caused indexing errors and invalid search results. A few other conditions that have not been reported to cause issues, but could be unsafe in some corner cases, are also made safer. A new unit test is added that reproduced the bug and passes with the new fixes. Fixes ethereum/go-ethereum#31593. Might also fix ethereum/go-ethereum#31589; that issue has not been reproduced yet, but it appears to be related to log index database corruption around a specific block, similar to the other issue. Note that running this branch resets and regenerates the log index database. For this purpose a `Version` field has been added to `rawdb.FilterMapsRange`, which will also make this easier in the future if a breaking database change is needed or the existing index is considered potentially broken due to a bug, as in this case.
Log `key` in hexadecimal string format.
This fixes an issue where running geth with `--history.chain postmerge` would not work on an empty database.
```
ERROR[04-16|23:11:12.913] Chain history database is pruned to unknown block tail=0
Fatal: Failed to register the Ethereum service: unexpected database tail
```
This is an attempt at fixing #31601. I think what happens is that the startup logic (`bc.loadLastState`) tries to get the full block body and fails because the genesis block has been pruned from the freezer. This causes it to keep repeating the reset logic, resulting in a deadlock. This can happen when, due to an unsuccessful sync, we don't have the full state for the head (or any other state) and try to redo the snap sync. --------- Co-authored-by: Gary Rong <[email protected]>
BroadcastTransactions needs the sender address to route message flows from the same sender consistently to the same random subset of peers. However, it spent considerable time calculating the sender addresses, even when the sender address had already been calculated and cached in other parts of the code. Since we only need the mapping, we can use any signer, and the one that has already been used is a better choice because of cache reuse.
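The caching behaviour this relies on can be shown with a minimal sketch: `types.Sender` in go-ethereum memoizes the recovered address inside the transaction for a given signer, so reusing the signer that already recovered the sender skips a second ECDSA recovery. The wiring below is illustrative only.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	key, _ := crypto.GenerateKey()
	signer := types.LatestSignerForChainID(big.NewInt(1))
	tx := types.MustSignNewTx(key, signer, &types.LegacyTx{
		Nonce:    0,
		GasPrice: big.NewInt(1),
		Gas:      21000,
		Value:    big.NewInt(0),
	})

	// The first call performs signature recovery and caches the result in tx.
	from1, _ := types.Sender(signer, tx)
	// The second call with the same signer is answered from the cache.
	from2, _ := types.Sender(signer, tx)
	fmt.Println(from1 == from2) // true, and the second recovery was free
}
```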
https://eips.ethereum.org/EIPS/eip-7600#activation Timestamp: `1746612311` Fork id: `0xc376cf8b`
closes #31310
This has been requested a few times in the past and I think it is a nice
quality-of-life improvement for users. At a predetermined interval,
there will now be a "Fork ready" log when a future fork is scheduled,
but not yet active.
It can only print after block import, which largely avoids the scenario where the client isn't progressing or is still syncing and the user thinks it's "ready" because they see a ready log.
New output:
```console
INFO [03-08|21:32:57.472] Imported new potential chain segment number=7 hash=aa24ee..f09e62 blocks=1 txs=0 mgas=0.000 elapsed="874.916µs" mgasps=0.000 snapdiffs=973.00B triediffs=7.05KiB triedirty=0.00B
INFO [03-08|21:32:57.473] Ready for fork activation fork=Prague date="18 Mar 25 19:29 CET" remaining=237h57m0s timestamp=1,742,322,597
INFO [03-08|21:32:57.475] Chain head was updated number=7 hash=aa24ee..f09e62 root=19b0de..8d32f2 elapsed="129.125µs"
```
The easiest way to verify this behavior is to apply this patch and run `geth --dev --dev.period=12`:
```patch
diff --git a/params/config.go b/params/config.go
index 9c7719d901..030c4f80e7 100644
--- a/params/config.go
+++ b/params/config.go
@@ -174,7 +174,7 @@ var (
ShanghaiTime: newUint64(0),
CancunTime: newUint64(0),
TerminalTotalDifficulty: big.NewInt(0),
- PragueTime: newUint64(0),
+ PragueTime: newUint64(uint64(time.Now().Add(time.Hour * 300).Unix())),
BlobScheduleConfig: &BlobScheduleConfig{
Cancun: DefaultCancunBlobConfig,
Prague: DefaultPragueBlobConfig,
```
…ime.Duration (#31407) closes #31401 --------- Co-authored-by: Marius van der Wijden <[email protected]> Co-authored-by: Guillaume Ballet <[email protected]> Co-authored-by: Felix Lange <[email protected]>
This pull request improves error handling for local transaction submissions. Specifically, if a transaction fails with a temporary error but might be accepted later, the error is not returned to the user; instead, the transaction is tracked locally for resubmission. However, if the transaction fails with a permanent error (e.g., invalid transaction or insufficient balance), the error is propagated to the user. The following errors returned by the legacy pool are regarded as temporary failures:
- `ErrOutOfOrderTxFromDelegated`
- `txpool.ErrInflightTxLimitReached`
- `ErrAuthorityReserved`
- `txpool.ErrUnderpriced`
- `ErrTxPoolOverflow`
- `ErrFutureReplacePending`

Notably, insufficient balance is treated as a permanent error, as it's highly unlikely that users will transfer funds into the sender account after submitting the transaction. Otherwise, users may be confused, seeing their transaction submitted but unaware that the sender lacks sufficient funds, and continue waiting for it to be included. --------- Co-authored-by: lightclient <[email protected]>
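The classification described above can be sketched as follows. The sentinel errors here are local stand-ins (the real ones live in the `core/txpool` and `core/txpool/legacypool` packages), so this is only the shape of the logic, not the actual implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the pool errors listed above.
var (
	errInflightTxLimitReached = errors.New("in-flight transaction limit reached for delegated accounts")
	errUnderpriced            = errors.New("transaction underpriced")
	errTxPoolOverflow         = errors.New("txpool is full")
	errInsufficientFunds      = errors.New("insufficient funds for gas * price + value")
)

// isTemporaryError reports whether a submission failure may resolve on its
// own later (so the transaction should be tracked locally and resubmitted)
// rather than being a permanent rejection returned to the user.
func isTemporaryError(err error) bool {
	switch {
	case errors.Is(err, errInflightTxLimitReached),
		errors.Is(err, errUnderpriced),
		errors.Is(err, errTxPoolOverflow):
		return true
	default:
		// Everything else (invalid tx, insufficient balance, ...) is permanent.
		return false
	}
}

func main() {
	for _, err := range []error{errTxPoolOverflow, errInsufficientFunds} {
		fmt.Printf("%v -> temporary=%v\n", err, isTemporaryError(err))
	}
}
```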
This PR makes the conditions for using a map rendering snapshot stricter, so that whenever a reorg happens, only a snapshot of a common ancestor block can be used. The issue fixed in ethereum/go-ethereum#31642 originated from using a snapshot that wasn't a common ancestor. For example, in the reorg scenario `A->B`, then `A->B2`, then `A->B2->C2`, then `A->B->C`, the last reorg triggered a render from snapshot `B` saved earlier. This is possible under certain conditions but extra care is needed; for example, if block `B` crosses a map boundary then it should not be allowed. With the latest fix the checks are sufficient, but I realized I would just feel safer if we disallowed this rare and risky scenario altogether and simply rendered from snapshot `A` after the last reorg in the example above. The performance difference is a few milliseconds and it occurs rarely (about once a day on Holesky, probably much rarer on Mainnet). Note that this PR only makes the snapshot conditions stricter, and `TestIndexerRandomRange` does check that snapshots are still used whenever it's obviously possible (adding blocks after the current head without a reorg), so this change can be considered safe. I am also running the unit tests and the fuzzer and everything seems to be fine.
This PR makes `filtermaps.ChainView` thread safe because it is used concurrently both by the indexer and by multiple matcher threads. Even though it represents an immutable view of the chain, adding a mutex to the `blockHash` function is necessary because it may extend its list of non-canonical hashes when the underlying blockchain has changed. The unsafe concurrency did cause a panic once after running the unit tests for several hours, and it could also happen during live operation.
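The shape of the fix is conventional: even a logically immutable view needs a lock once a getter lazily extends an internal cache. Below is a self-contained sketch of that pattern; the type and field names are illustrative, not the actual `filtermaps.ChainView` internals.

```go
package main

import (
	"fmt"
	"sync"
)

// chainView is a toy stand-in for a "read-only" view whose accessor lazily
// extends an internal slice, which is why it needs a mutex once it is shared
// between an indexer goroutine and several matcher goroutines.
type chainView struct {
	mu     sync.Mutex
	hashes []string // lazily populated per-block hashes
}

// blockHash returns the hash for the given block number, extending the
// cached list on demand. Without the lock, concurrent calls could append
// to the slice at the same time and corrupt it.
func (cv *chainView) blockHash(number int) string {
	cv.mu.Lock()
	defer cv.mu.Unlock()
	for len(cv.hashes) <= number {
		cv.hashes = append(cv.hashes, fmt.Sprintf("hash-%d", len(cv.hashes)))
	}
	return cv.hashes[number]
}

func main() {
	cv := new(chainView)
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			_ = cv.blockHash(n * 10)
		}(i)
	}
	wg.Wait()
	fmt.Println("blocks cached:", len(cv.hashes))
}
```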
The `eth_feeHistory` API returns
`{"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"json: unsupported value: NaN"}}`
when we query `eth_feeHistory` for an old block that has no blobs, or when the field `config.blobSchedule.cancun.max` in genesis.config is 0 (which happens for some projects that fork geth but do not support blobs).
So we now specially handle the case where `maxBlobGas == 0` to prevent this issue from happening.
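The NaN comes from dividing by a zero blob-gas limit when computing the blob utilisation ratio. A minimal sketch of the guard (the function and variable names are illustrative, not the exact `feeHistory` code):

```go
package main

import "fmt"

// blobGasUsedRatio returns the fraction of the blob gas limit used by a
// block. When the limit is zero (pre-blob blocks, or chains whose
// blobSchedule sets max to 0), dividing would yield NaN and break JSON
// encoding, so we return 0 instead.
func blobGasUsedRatio(blobGasUsed, maxBlobGas uint64) float64 {
	if maxBlobGas == 0 {
		return 0
	}
	return float64(blobGasUsed) / float64(maxBlobGas)
}

func main() {
	fmt.Println(blobGasUsedRatio(131072, 786432)) // normal blob-carrying block
	fmt.Println(blobGasUsedRatio(0, 0))           // no blob support: 0, not NaN
}
```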
This PR changes the chain view update mechanism of the log filter. Previously, head updates were all wired through the indexer, even in unindexed mode. This was both a bit weird and also unsafe, as the indexer's chain view was updated asynchronously with some delay, making some log related tests flaky. Also, the reorg safety of the indexed search was integrated with unindexed search in a weird way, relying on `syncRange.ValidBlocks` in the unindexed case too, with a special condition added to only consider the head of the valid range but not the tail in the unindexed case. In this PR the current chain view is directly accessible through the filter backend and unindexed search is also chain view based, making it inherently safe. The matcher sync mechanism is now only used for indexed search as originally intended, removing a few ugly special conditions. The PR is currently based on top of ethereum/go-ethereum#31642. Together they fix ethereum/go-ethereum#31518 and replace ethereum/go-ethereum#31542. --------- Co-authored-by: Gary Rong <[email protected]>
This PR updates checkpoints for blsync and filtermaps.
This PR ensures that caching a slice, or a slice of slices, can never affect the original version, by always cloning a slice fetched from the cache unless it is used in a guaranteed read-only way.
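A minimal sketch of the pattern, using a hypothetical cache type (the point is that the cached backing array is never handed to callers who might mutate it):

```go
package main

import (
	"fmt"
	"slices"
)

// sliceCache is a hypothetical cache of byte slices. Get returns a clone so
// callers can never mutate the cached backing array in place.
type sliceCache struct {
	data map[string][]byte
}

func (c *sliceCache) Put(key string, val []byte) {
	if c.data == nil {
		c.data = make(map[string][]byte)
	}
	// Clone on the way in as well, in case the caller keeps mutating val.
	c.data[key] = slices.Clone(val)
}

func (c *sliceCache) Get(key string) ([]byte, bool) {
	v, ok := c.data[key]
	if !ok {
		return nil, false
	}
	return slices.Clone(v), true // never expose the cached slice itself
}

func main() {
	c := new(sliceCache)
	c.Put("k", []byte{1, 2, 3})

	got, _ := c.Get("k")
	got[0] = 99 // mutating the returned copy...

	again, _ := c.Get("k")
	fmt.Println(again[0]) // ...does not affect the cached value: prints 1
}
```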
This PR adds the `AuthorizationList` field to the `CallMsg` interface to support `eth_call` and `eth_estimateGas` of set-code transactions. --------- Co-authored-by: Sina Mahmoodi <[email protected]>
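For callers, this means an `eth_call` against a 7702-delegated account can carry the authorization directly in the message. A hedged usage sketch follows; it assumes the new field takes a list of `types.SetCodeAuthorization` values as in upstream go-ethereum, and the endpoint, addresses and authorization values are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/holiman/uint256"
)

func main() {
	client, err := ethclient.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	key, _ := crypto.GenerateKey() // placeholder EOA key
	eoa := crypto.PubkeyToAddress(key.PublicKey)
	delegate := common.HexToAddress("0x000000000000000000000000000000000000dEaD") // placeholder delegation target

	// Sign an authorization delegating the EOA's code to `delegate`.
	auth, err := types.SignSetCode(key, types.SetCodeAuthorization{
		ChainID: *uint256.NewInt(1),
		Address: delegate,
		Nonce:   0,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The authorization rides along with the call message (field name per this PR).
	msg := ethereum.CallMsg{
		From:              eoa,
		To:                &eoa,
		Gas:               100000,
		Value:             big.NewInt(0),
		AuthorizationList: []types.SetCodeAuthorization{auth},
	}
	out, err := client.CallContract(context.Background(), msg, nil)
	fmt.Println(out, err)
}
```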
This PR adds the electra beacon chain configuration for mainnet.
This PR fixes a deadlock in deleteTailEpoch that might arise when range delete is running in the iterator based fallback mode (either using the leveldb database or the hashdb state storage scheme). In this case a stopCb callback is called periodically that checks events, including matcher sync requests, in which case it tries to acquire indexLock for read access while deleteTailEpoch already holds it for write access. This pull request removes the indexLock acquisition in `FilterMapsMatcherBackend.synced`, as this function is only called in the indexLoop. Fixes ethereum/go-ethereum#31700
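The underlying pattern is a classic `sync.RWMutex` self-deadlock: the writer holds the lock and, through a callback, ends up waiting for a read lock on the same mutex. A stripped-down, self-contained illustration (the names mirror the description above, but this is not the actual filtermaps code):

```go
package main

import (
	"fmt"
	"sync"
)

// indexer illustrates the deadlock shape: deleteTailEpoch holds indexLock
// for writing and periodically invokes stopCb; before the fix, the callback
// path could call synced(), which took the same lock for reading and blocked.
type indexer struct {
	indexLock sync.RWMutex
}

func (ix *indexer) synced() bool {
	// Before the fix this acquired ix.indexLock.RLock(), which deadlocks
	// when reached from within deleteTailEpoch's write-locked section.
	return true
}

func (ix *indexer) deleteTailEpoch(stopCb func() bool) {
	ix.indexLock.Lock()
	defer ix.indexLock.Unlock()
	for i := 0; i < 3; i++ {
		// ... delete a batch of entries ...
		if stopCb() { // must not re-acquire indexLock
			return
		}
	}
}

func main() {
	ix := new(indexer)
	ix.deleteTailEpoch(func() bool {
		_ = ix.synced() // safe now: synced no longer takes indexLock
		return false    // keep deleting
	})
	fmt.Println("done without deadlock")
}
```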
TruncatePending shows up bright red on our nodes, because it computes
the length of a map multiple times.
I don't know why this is so expensive, but around 20% of our time is
spent on this, which is super weird.
```
//PR: BenchmarkTruncatePending-24 17498 69397 ns/op 32872 B/op 3 allocs/op
//Master: BenchmarkTruncatePending-24 9960 123954 ns/op 32872 B/op 3 allocs/op
```
```
benchmark old ns/op new ns/op delta
BenchmarkTruncatePending-24 123954 69397 -44.01%
benchmark old allocs new allocs delta
BenchmarkTruncatePending-24 3 3 +0.00%
benchmark old bytes new bytes delta
BenchmarkTruncatePending-24 32872 32872 +0.00%
```
This simple PR is a 44% improvement over the old state
```
ROUTINE ======================== github.com/ethereum/go-ethereum/core/txpool/legacypool.(*LegacyPool).truncatePending in github.com/ethereum/go-ethereum/core/txpool/legacypool/legacypool.go
1.96s 18.02s (flat, cum) 19.57% of Total
. . 1495:func (pool *LegacyPool) truncatePending() {
. . 1496: pending := uint64(0)
60ms 2.99s 1497: for _, list := range pool.pending {
250ms 5.48s 1498: pending += uint64(list.Len())
. . 1499: }
. . 1500: if pending <= pool.config.GlobalSlots {
. . 1501: return
. . 1502: }
. . 1503:
. . 1504: pendingBeforeCap := pending
. . 1505: // Assemble a spam order to penalize large transactors first
. 510ms 1506: spammers := prque.New[int64, common.Address](nil)
140ms 2.50s 1507: for addr, list := range pool.pending {
. . 1508: // Only evict transactions from high rollers
50ms 5.08s 1509: if uint64(list.Len()) > pool.config.AccountSlots {
. . 1510: spammers.Push(addr, int64(list.Len()))
. . 1511: }
. . 1512: }
. . 1513: // Gradually drop transactions from offenders
. . 1514: offenders := []common.Address{}
```
```go
// Benchmarks the speed of batch transaction insertion in case of multiple accounts.
func BenchmarkTruncatePending(b *testing.B) {
// Generate a batch of transactions to enqueue into the pool
pool, _ := setupPool()
defer pool.Close()
b.ReportAllocs()
batches := make(types.Transactions, 4096+1024+1)
for i := range len(batches) {
key, _ := crypto.GenerateKey()
account := crypto.PubkeyToAddress(key.PublicKey)
pool.currentState.AddBalance(account, uint256.NewInt(1000000), tracing.BalanceChangeUnspecified)
tx := transaction(uint64(0), 100000, key)
batches[i] = tx
}
for _, tx := range batches {
pool.addRemotesSync([]*types.Transaction{tx})
}
b.ResetTimer()
// benchmark truncating the pending
for range b.N {
pool.truncatePending()
}
}
```
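The diff itself isn't shown above; a sketch of the likely shape of the fix, assuming each pending list's length is computed once and reused instead of calling `Len()` again in the second loop (toy types below, not the real pool structures):

```go
package main

import "fmt"

// txList is a toy stand-in for the pool's per-account pending list.
type txList struct{ items []int }

func (l *txList) Len() int { return len(l.items) }

func main() {
	pending := map[string]*txList{
		"0xaa": {items: make([]int, 20)},
		"0xbb": {items: make([]int, 3)},
	}
	const globalSlots, accountSlots = 10, 16

	// One pass: total count and per-account lengths, each Len() called once.
	total := 0
	lengths := make(map[string]int, len(pending))
	for addr, list := range pending {
		n := list.Len()
		lengths[addr] = n
		total += n
	}
	if total <= globalSlots {
		return
	}
	// The second pass reuses the cached lengths instead of calling Len() again.
	for addr, n := range lengths {
		if n > accountSlots {
			fmt.Printf("penalize spammer %s with %d pending txs\n", addr, n)
		}
	}
}
```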
This PR adds a check for an edge case which can theoretically happen in the range prover. Right now, we check that a key does not overwrite a previous one by checking that the key is increasing. However, if keys are of different lengths, it is possible to create a key which is increasing _and_ overwrites the previous key. Example: `0xaabbcc` followed by `0xaabbccdd`. This cannot happen in go-ethereum, which always uses fixed-size paths for accounts and storage slot paths in the trie, but it might happen if the range prover is used without guaranteed fixed-size keys. This PR also adds some test cases for the errors that are expected.
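The check can be sketched as: a new key must be strictly greater than the previous one and, when key lengths may differ, must not have the previous key as a prefix (otherwise it would overwrite the entry already proved for the previous key). A self-contained illustration of that idea, not the actual range-prover code:

```go
package main

import (
	"bytes"
	"fmt"
)

// validNextKey reports whether next may follow prev in a range proof:
// it must be strictly increasing and must not extend prev (prev being a
// prefix of next would let next overwrite the entry proved for prev).
func validNextKey(prev, next []byte) bool {
	if bytes.Compare(next, prev) <= 0 {
		return false // not increasing
	}
	if len(next) > len(prev) && bytes.HasPrefix(next, prev) {
		return false // increasing, but overwrites prev (e.g. 0xaabbcc -> 0xaabbccdd)
	}
	return true
}

func main() {
	fmt.Println(validNextKey([]byte{0xaa, 0xbb, 0xcc}, []byte{0xaa, 0xbb, 0xcd}))       // true
	fmt.Println(validNextKey([]byte{0xaa, 0xbb, 0xcc}, []byte{0xaa, 0xbb, 0xcc, 0xdd})) // false: overwrite
}
```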
This PR applies the config overrides to the new config as well, otherwise they will not be applied to defined configs, making shadowforks impossible. To test:
```
> ./build/bin/geth --override.prague 123 --dev --datadir /tmp/geth
INFO [04-28|21:20:47.009] - Prague: @123
> ./build/bin/geth --override.prague 321 --dev --datadir /tmp/geth
INFO [04-28|21:23:59.760] - Prague: @321
```
Correct the error message in the ExecuteStatelessPayloadV4 function to reference newPayloadV4 and the Prague fork, instead of incorrectly referencing newPayloadV3 and Cancun. This improves clarity during debugging and aligns the error message with the actual function and fork being validated. No logic is changed. --------- Co-authored-by: rjl493456442 <[email protected]>
This PR addresses flakiness in the rollback test discussed in ethereum/go-ethereum#32252. I found that a `nonce` collision occasionally caused transactions to fail to send. I changed the error message in the failing test like this:
```
if err = client.SendTransaction(ctx, signedTx); err != nil {
	t.Fatalf("failed to send transaction: %v, nonce: %d", err, signedTx.Nonce())
}
```
and occasionally got a test failure with this message:
```
=== CONT TestFlakyFunction/Run_#100
rollback_test.go:44: failed to send transaction: already known, nonce: 0
--- FAIL: TestFlakyFunction/Run_#100 (0.07s)
```
Although nonces are obtained via `PendingNonceAt`, we observed that, in rare cases (approximately 1 in 1000), two transactions from the same sender end up with the same nonce. This likely happens because `tx0` has not yet propagated to the transaction pool before `tx1` requests its nonce. When the test succeeds, `tx0` and `tx1` have nonces `0` and `1`, respectively; in the rare failures, both transactions end up with nonce `0`. We modified the test to explicitly assign nonces to each transaction. By controlling the nonce values manually, we eliminated the race condition and ensured consistent behavior. After several thousand runs, the flakiness was no longer reproducible in my local environment. We also reduced the internal polling interval in `pendingStateHasTx()` to speed up test execution without impacting stability; this reduces the test time for `TestTransactionRollbackBehavior` from about 7 seconds to 2 seconds.
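The change to explicit nonces can be sketched as follows; the helper, endpoint and addresses below are illustrative placeholders, not the actual test code. Instead of querying `PendingNonceAt` per transaction, the nonces come from a counter controlled by the test, so two transactions can never race to the same value.

```go
package main

import (
	"context"
	"crypto/ecdsa"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

// sendWithExplicitNonce signs and sends a simple transfer using a nonce
// supplied by the caller, avoiding the PendingNonceAt race described above.
func sendWithExplicitNonce(ctx context.Context, client *ethclient.Client, key *ecdsa.PrivateKey, chainID *big.Int, nonce uint64, to common.Address) error {
	tx := types.NewTx(&types.LegacyTx{
		Nonce:    nonce, // explicit: no per-tx nonce query
		To:       &to,
		Value:    big.NewInt(1),
		Gas:      21000,
		GasPrice: big.NewInt(1_000_000_000),
	})
	signed, err := types.SignTx(tx, types.LatestSignerForChainID(chainID), key)
	if err != nil {
		return err
	}
	return client.SendTransaction(ctx, signed)
}

func main() {
	client, err := ethclient.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	key, _ := crypto.GenerateKey() // placeholder key; the real test funds its sender
	to := common.HexToAddress("0x000000000000000000000000000000000000dEaD")
	// tx0 deterministically gets nonce 0 and tx1 gets nonce 1.
	for nonce := uint64(0); nonce < 2; nonce++ {
		if err := sendWithExplicitNonce(context.Background(), client, key, big.NewInt(1337), nonce, to); err != nil {
			log.Printf("send failed: %v", err)
		}
	}
}
```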
…(#31877) This introduces an error when the filter has both `blockHash` and `fromBlock`/`toBlock`, since these are mutually exclusive. It seems the tests were actually returning a `not found` error, which went undetected since there was no check on the actual returned error in the test.
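From the client side, the rejected combination looks like this. A hedged sketch using `ethereum.FilterQuery` with a placeholder endpoint and hash; after this change the node is expected to return an explicit error rather than `not found`.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://127.0.0.1:8545") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	blockHash := common.HexToHash("0x00") // placeholder hash

	// Invalid: blockHash and fromBlock/toBlock are mutually exclusive.
	q := ethereum.FilterQuery{
		BlockHash: &blockHash,
		FromBlock: big.NewInt(0),
		ToBlock:   big.NewInt(100),
	}
	_, err = client.FilterLogs(context.Background(), q)
	fmt.Println("expected error:", err)
}
```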
Pull Request Overview
This PR merges selected commits from go-ethereum v1.15.2 to v1.16.1 into the BSC codebase, implementing various new features, performance improvements, refactoring, and cleanup. The key changes include a new blockchain constructor API, removal of deprecated assembler components, introduction of history indexing capabilities, new workload testing tools, and various optimizations.
- Major refactoring of the BlockChain constructor to use a new `BlockChainConfig` structure instead of multiple parameters
- Removal of deprecated assembler/disassembler modules from `core/asm`
- Introduction of new workload testing tools for RPC performance validation
- Addition of new utility types and functions for range operations and LRU cache improvements
Reviewed Changes
Copilot reviewed 154 out of 807 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| core/block_validator_test.go | Updates BlockChain constructor calls to use new API |
| core/block_validator.go | Updates bloom filter creation to use MergeBloom instead of CreateBloom |
| core/bench_test.go | Updates BlockChain constructor calls and configuration |
| core/asm/* | Complete removal of assembler/disassembler modules |
| consensus/parlia/*.go | Updates to consensus engine APIs and constructor calls |
| consensus/misc/eip4844/eip4844.go | Adds Osaka fork support for blob gas calculations |
| consensus/ethash/ethash.go | Removes deprecated APIs method |
| consensus/consensus.go | Removes APIs method from Engine interface |
| consensus/clique/*.go | Updates constructor calls and removes deprecated API components |
| consensus/beacon/consensus.go | Updates merge detection logic and removes APIs method |
| cmd/workload/* | New comprehensive workload testing tool suite |
| cmd/utils/* | Updates to database handling and configuration management |
| cmd/geth/* | Updates to CLI configuration and database operations |
| common/* | New Range utility type and LRU cache improvements |
Summary of Upstream Changes (go-ethereum v1.15.2 → v1.16.1)
This PR merges selected commits from go-ethereum v1.15.2 to v1.16.1 into the BSC codebase.
The updates are grouped into four categories: New Features, Performance Improvements, Refactoring, and Cleanup.
New Features
Archive Mode & Historical State Access
- (#31156)
- (#31161)

History Pruning (Partial Integration, Disabled in BSC)
- `HistoryMode` configuration for node operation modes. (#31365)
- (#31384)
- (#31414)
- `eradb` as a backend to serve pruned historical data over RPC. (#31604)

Log Filtering Improvements (Filtermap, Requires BSC Checkpoints)
- Replaces `bloombits` with the new filtermaps log index. (#31080)

Other Notable Features
- (#29158)
- (#31006)
- (#31887)
- (#31189)

Performance Improvements
- (#31557) — Also replaces `sharedstoragepool` in BSC (flag `cache.enablesharedpool` is deleted).
- (#30971) — Replaces the original BSC implementation.
- `SyncFlush` to `NoAsyncFlush` in config. (#30464)
- (#30932)
- (#31839) — Replaces original BSC logic.
- (#31306)
- `GetTransactionReceipt` RPC. (#32021)

Refactoring
- (#30661)
- `BlockChain` constructor. (#31925)
- (#31061)

Cleanup
- `core/vm` and `cmd/evm`. (#32000)
- (#31211)