We insist that block numbers increase, e.g. replies should be (first_block=100, num_blocks=10), (first_block=110, num_blocks=10).
This requires implementations to split on block boundaries, and also has the (previously raised but currently purely theoretical) problem that a single block with more than 8k channels would be unsendable without compressed encoding.
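To make the 8k figure concrete, here's a rough back-of-envelope in Go. The 65535-byte message payload limit and 8-byte `short_channel_id` size are from the spec; the fixed-field overhead used here is an approximation, not a spec value:

```go
package main

import "fmt"

// maxUncompressedSCIDs estimates how many uncompressed 8-byte
// short_channel_ids fit in one reply_channel_range, given the
// message payload cap and the reply's fixed-field overhead
// (chain_hash, first_blocknum, number_of_blocks, etc.).
func maxUncompressedSCIDs(maxPayload, overhead int) int {
	return (maxPayload - overhead) / 8
}

func main() {
	// ~64 bytes of fixed fields is an illustrative guess.
	fmt.Println(maxUncompressedSCIDs(65535, 64)) // prints 8183
}
```

So a single block containing more than roughly 8k channels could not be described in one uncompressed reply, regardless of how the sender splits.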
Moreover, lnd simply packs channel ids up to the limit, so in the edge case it can repeat a block across replies, and its receive path explicitly allows this:
syncer.go:736

```go
// If we've previously received a reply for this query, look at
// its last block to ensure the current reply properly follows
// it.
if g.prevReplyChannelRange != nil {
	prevReply := g.prevReplyChannelRange
	prevReplyLastHeight := prevReply.LastBlockHeight()

	// The current reply can either start from the previous
	// reply's last block, if there are still more channels
	// for the same block, or the block after.
	if msg.FirstBlockHeight != prevReplyLastHeight &&
		msg.FirstBlockHeight != prevReplyLastHeight+1 {

		return fmt.Errorf("first block of reply %v "+
			"does not continue from last block of "+
			"previous %v", msg.FirstBlockHeight,
			prevReplyLastHeight)
	}
}
```

So I think we should allow this: in particular, c-lightning will probably start doing the same thing, packing up to 8k descriptors into a reply chunk, then compressing. (We currently do a binary split when a block doesn't fit, which is pretty dumb.)
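A sender-side packing strategy along these lines might look like the sketch below. The names and the per-chunk cap are illustrative, taken from neither lnd nor c-lightning; the point is that consecutive chunks may share a boundary block, which is exactly the case lnd's receive-side check tolerates:

```go
package main

import "fmt"

// scid pairs a short channel id with the block it was mined in.
type scid struct {
	block uint32
	id    uint64
}

// chunk is one reply_channel_range worth of channel ids.
type chunk struct {
	firstBlock uint32
	lastBlock  uint32
	ids        []uint64
}

// packChunks fills each reply up to maxPerChunk ids without regard
// to block boundaries. When a block's channels straddle two chunks,
// the next chunk's firstBlock equals the previous chunk's lastBlock.
func packChunks(sorted []scid, maxPerChunk int) []chunk {
	var out []chunk
	for len(sorted) > 0 {
		n := maxPerChunk
		if n > len(sorted) {
			n = len(sorted)
		}
		c := chunk{
			firstBlock: sorted[0].block,
			lastBlock:  sorted[n-1].block,
		}
		for _, s := range sorted[:n] {
			c.ids = append(c.ids, s.id)
		}
		out = append(out, c)
		sorted = sorted[n:]
	}
	return out
}

func main() {
	// Block 100 has three channels; with a cap of 2 the block is
	// split across replies, and the second reply starts at 100 again.
	in := []scid{{100, 1}, {100, 2}, {100, 3}, {101, 4}}
	for _, c := range packChunks(in, 2) {
		fmt.Printf("first=%d last=%d ids=%v\n",
			c.firstBlock, c.lastBlock, c.ids)
	}
}
```

With this approach there is no need to split on block boundaries at all, and the degenerate binary-split path goes away.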