Conversation
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

```
@@           Coverage Diff            @@
##             main    #3597    +/-  ##
========================================
  Coverage   75.87%   75.87%
========================================
  Files         387      393      +6
  Lines       34918    35330    +412
========================================
+ Hits        26493    26806    +313
- Misses       6556     6614     +58
- Partials     1869     1910     +41
```

☔ View full report in Codecov by Sentry.
```go
l1Head, err := core.GetL1Head(database)
if err != nil {
	return nil, err
}
```
If we have blocks but no L1 head I agree we should stop, but won't a freshly synced node with no L1 head yet fail this check? Could having no L1 head while on a fresh new head cause bad UX 🤔
I've tested it: it will crash, and with no extra info about why it is crashing:

```
16:13:19.401 09/05/2026 +01:00 ERROR node/node.go:662 Error while running migrations {"error": "key not found"}
```
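A minimal sketch of one way to soften this, assuming the `db` package exposes an `ErrKeyNotFound` sentinel (an assumption; the exact name may differ): treat a missing L1 head on a fresh node as "nothing to migrate yet" instead of surfacing a bare "key not found".

```go
l1Head, err := core.GetL1Head(database)
if errors.Is(err, db.ErrKeyNotFound) {
	// Fresh node: no L1 head recorded yet, so there is no window to
	// migrate; let the rolling pruner take over once blocks arrive.
	// The log call is illustrative; use whatever StructuredLogger exposes.
	logger.Infow("no L1 head yet, skipping history-pruning migration")
	return nil, nil
}
if err != nil {
	return nil, err
}
```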
rodrodros left a comment
Overall looks like really good work. Just sharing some questions and nitpicks
I am curious about the cases where we start the migration close to the prune threshold (see the sketch below):
- L1Head == L2Head: do we delete everything at this stage?
- L1Head - 1 == L2Head: do we end up with only one block?
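A minimal sketch of the window math these questions are probing, assuming the PR's formula `window = [minHead - retainedBlocks + 1, chainHeight]` with `minHead = min(l1Head, chainHeight)`; the numbers are illustrative.

```go
// Returns the inclusive keeper window; assumes retainedBlocks <= minHead
// (the PR no-ops otherwise, which also rules out underflow here).
func window(l1Head, chainHeight, retainedBlocks uint64) (oldestKept, newest uint64) {
	minHead := min(l1Head, chainHeight)
	return minHead - retainedBlocks + 1, chainHeight
}

// With retainedBlocks = 0 (bare --prune-mode):
//   l1Head == chainHeight == 100     -> window [101, 100]: empty, everything pruned
//   l1Head == 99, chainHeight == 100 -> window [100, 100]: a single block kept
```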
```go
func (m *Migrator) Migrate(
	ctx context.Context,
	database db.KeyValueStore,
	network *networks.Network,
	logger log.StructuredLogger,
) ([]byte, error) {
```
Please add descriptive errors to each of the err returns in this function, otherwise the error cause is obscure in many cases. Other functions are optional, but this is the main one.
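A minimal sketch of the kind of wrapping being requested, using the standard `fmt.Errorf` `%w` verb; the message text is illustrative, not the PR's actual wording.

```go
if err := pruner.PruneBlockDataUpto(batch, oldestBlockKept); err != nil {
	return nil, fmt.Errorf("pruning block data up to block %d: %w", oldestBlockKept, err)
}
```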
```go
		return err
	}
}

for addr := range diff.Nonces {
	if err := move(
		fillNonceHistoryKey(scratch.historyKey[:], &addr, blockBE),
		fillNonceScratchKey(scratch.scratchKey[:], &addr, blockBE),
	); err != nil {
		return err
	}
}

for addr := range diff.ReplacedClasses {
	if err := move(
		fillClassHashHistoryKey(scratch.historyKey[:], &addr, blockBE),
		fillClassHashScratchKey(scratch.scratchKey[:], &addr, blockBE),
	); err != nil {
		return err
```
An error wrapper for this as well, please.
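A hedged sketch of wrapping the `move` error with identifying context (the contract address), assuming the surrounding loop from the diff above; the message is illustrative.

```go
for addr := range diff.ReplacedClasses {
	if err := move(
		fillClassHashHistoryKey(scratch.historyKey[:], &addr, blockBE),
		fillClassHashScratchKey(scratch.scratchKey[:], &addr, blockBE),
	); err != nil {
		// Including which contract failed makes a mid-migration crash debuggable.
		return fmt.Errorf("staging class-hash history for contract %v: %w", &addr, err)
	}
}
```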
```go
batch := database.NewBatch()
if err := pruner.PruneBlockDataUpto(batch, oldestBlockKept); err != nil {
	return err
}
```
This would benefit from error wrapping
```go
batch := database.NewBatch()
if err := pruner.PruneBlockDataUpto(batch, oldestBlockKept); err != nil {
	return err
}
if err := wipeReverseLookupBuckets(batch); err != nil {
	return err
}
if err := batch.Write(); err != nil {
	return err
}
```
I am thinking that this batch can potentially be very big 🤔: you're pruning everything up to the first block to keep, and the bigger mainnet gets, the more changes the batch has to hold.
Add to that wipeReverseLookupBuckets(batch), which I am not sure how much it contributes, but it compounds.
I think for safety, even if we are good now, think of a huge mainnet (10 TB of data): would this still hold, or would it be better to batch it through?
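A minimal sketch of the chunked alternative being suggested, assuming a hypothetical `pruneBlockRange(batch, from, to)` helper (the PR's `PruneBlockDataUpto` prunes everything below the cutoff in one call) and an illustrative chunk size.

```go
const chunk = uint64(10_000) // blocks per flushed batch; illustrative

// Prune [0, oldestBlockKept) in bounded slices so no single batch has to
// hold the whole delete set in memory.
for from := uint64(0); from < oldestBlockKept; from += chunk {
	to := min(from+chunk, oldestBlockKept)
	batch := database.NewBatch()
	if err := pruneBlockRange(batch, from, to); err != nil { // hypothetical helper
		return err
	}
	if err := batch.Write(); err != nil {
		return err
	}
}
```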
I am also thinking that instead of having prune mode default to 0 we could use a higher number (?). Couldn't that keep these edge cases from being the default?
```go
batch := database.NewBatch()
err = core.WriteBlockHeaderNumberByHash(batch, header.Hash, oldestBlockKept-uint64(1))
if err != nil {
	return nil, false, err
}
if err := batch.Write(); err != nil {
	return nil, false, err
}
```
Maybe this can be part of setupRestorer, what do you think?
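A hedged sketch of the suggestion: fold the boundary-header write into `setupRestorer`. The signature is hypothetical; only the batch body comes from the diff above.

```go
func setupRestorer(database db.KeyValueStore, header *core.Header, oldestBlockKept uint64) error {
	// Record the hash -> number mapping for the block just below the
	// retention window, alongside the rest of the restorer setup.
	batch := database.NewBatch()
	if err := core.WriteBlockHeaderNumberByHash(batch, header.Hash, oldestBlockKept-1); err != nil {
		return err
	}
	return batch.Write()
}
```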
Adds an optional migration for history pruning (migration/historyprunner) that brings an existing node's database in line with the rolling pruner's retention window. Activated whenever `--prune-mode` is passed (with or without a value, including `--prune-mode=0`); no-op otherwise.

The migration runs in two pipelined, resumable phases. The keeper window is anchored on the L1-verified head (matching the rolling pruner's reorg-safe floor): with `minHead = min(l1Head, chainHeight)` and `retainedBlocks` taken from the `--prune-mode=N` value (defaulting to 0 when the flag is passed bare), the window covers `[minHead - retainedBlocks + 1, chainHeight]`. If `minHead < retainedBlocks` the chain is shorter than the window and the migration no-ops, leaving the rolling pruner to take over once blocks arrive.

The stager phase wipes the reverse-lookup buckets (`TransactionBlockNumbersAndIndicesByHash`, `L1HandlerTxnHashByMsgHash`, `BlockHeaderNumbersByHash`), then copies in-window entries from the live history buckets (`ContractStorageHistory`, `ContractClassHashHistory`, `ContractNonceHistory`) into a scratch namespace.
Resume state (`stagerProgress|restorerProgress|numRetainedBlocks`, 24 bytes BE) is persisted between runs; the originally-configured retention pins the cutoff, so a `--prune-mode` change mid-flight cannot retarget an in-progress migration (logged when ignored).
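A minimal sketch of that 24-byte big-endian progress record, assuming the three fields are `uint64`s laid out back to back; the struct and helper names mirror the description, not the PR's actual code.

```go
import "encoding/binary"

type migrationProgress struct {
	stagerProgress    uint64
	restorerProgress  uint64
	numRetainedBlocks uint64
}

// encode packs the three counters as big-endian uint64s, 24 bytes total.
func (p migrationProgress) encode() []byte {
	buf := make([]byte, 24)
	binary.BigEndian.PutUint64(buf[0:8], p.stagerProgress)
	binary.BigEndian.PutUint64(buf[8:16], p.restorerProgress)
	binary.BigEndian.PutUint64(buf[16:24], p.numRetainedBlocks)
	return buf
}

// decodeProgress is the inverse; callers should check len(buf) == 24 first.
func decodeProgress(buf []byte) migrationProgress {
	return migrationProgress{
		stagerProgress:    binary.BigEndian.Uint64(buf[0:8]),
		restorerProgress:  binary.BigEndian.Uint64(buf[8:16]),
		numRetainedBlocks: binary.BigEndian.Uint64(buf[16:24]),
	}
}
```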