fix(nat): 8 NAT traversal fixes proven by Mac upload to NAT testnet #55
grumbach wants to merge 14 commits into saorsa-labs:rc-2026.4.1
Conversation
The per-connection rate limit of 5 coordination requests per 60-second window is too restrictive for nodes acting as hole-punch coordinators. Any node can be selected as a coordinator — either a bootstrap node during initial discovery, or a regular network node chosen as a DHT referrer. On a network with many NAT-restricted nodes, a peer may need to send dozens of PUNCH_ME_NOW frames through a single coordinator during connection establishment.

The low limit causes relay requests to be silently dropped, preventing hole punches from succeeding and forcing expensive MASQUE relay fallback. Observed 50% rejection rate on a 52-node testnet.

Raise to 50 per minute to allow burst hole-punching while still providing protection against amplification attacks.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
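A fixed-window per-connection limiter of the kind being tuned here can be sketched as follows. This is illustrative only — the names (`CoordinationRateLimiter`, `check`) and the fixed-window policy are assumptions, not the crate's actual implementation:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// Hypothetical per-connection fixed-window rate limiter sketching the
/// "N coordination requests per 60s window" policy described above.
struct CoordinationRateLimiter {
    max_per_window: u32,
    window: Duration,
    /// Per-peer state: (start of current window, count in window).
    windows: HashMap<SocketAddr, (Instant, u32)>,
}

impl CoordinationRateLimiter {
    fn new(max_per_window: u32, window: Duration) -> Self {
        Self { max_per_window, window, windows: HashMap::new() }
    }

    /// Returns true if the request is admitted, false if it must be dropped.
    fn check(&mut self, peer: SocketAddr, now: Instant) -> bool {
        let entry = self.windows.entry(peer).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired — start a fresh one
        }
        if entry.1 < self.max_per_window {
            entry.1 += 1;
            true
        } else {
            false // over budget for this window
        }
    }
}
```

With `max_per_window = 5` a burst of dozens of PUNCH_ME_NOW relays through one coordinator exhausts the budget immediately; raising it to 50 admits the burst while a storm across the whole minute is still capped.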
…eout

Adds two coordinated changes that move coordinator load off the small set of bootstrap nodes that today serve as the de-facto hole-punch coordinator for every cold-starting peer.

Tier 2 — list-form preferred coordinators with rotation:

- hole_punch_preferred_coordinators changes from DashMap<SocketAddr, SocketAddr> to DashMap<SocketAddr, Vec<SocketAddr>>. Callers (saorsa-core's DHT layer) supply a ranked list best-first; the existing single-coordinator setter remains as a thin wrapper.
- A new merge_preferred_coordinators helper inserts the ranked list at the front of coordinator_candidates, deduping any pre-existing copies. Pure function, unit-tested.
- The hole-punch loop in connect_with_fallback_inner rotates through the first N candidates (where N = preferred list length). All but the final attempt use a new short timeout (PER_COORDINATOR_QUICK_HOLEPUNCH_TIMEOUT = 1.5s) so a busy or unreachable coordinator is abandoned quickly; the final attempt uses the strategy's full hole-punch timeout to give it time to complete the punch. When no preferred list is set, the legacy single-coordinator retry behaviour is preserved.
- Bounds-safe rotation via .get() instead of indexing.

Tier 4 (lite) — coordinator-side back-pressure with silent refusal:

- New NatTraversalConfig fields coordinator_max_active_relays (default 32) and coordinator_relay_slot_timeout (default 5s), validated in 1..=256 and 100ms..=60s respectively.
- Plumbed through TransportConfig via setters mirroring allow_loopback, into NatTraversalState::new and BootstrapCoordinator::new.
- BootstrapCoordinator gains a relay_slots HashMap keyed by (initiator_peer_id, target_peer_id) recording arrival Instant, plus a backpressure_refusals stat.
- process_punch_me_now_frame inline-sweeps stale slots before the back-pressure check (so ghost slots from crashed peers cannot leak the counter), then either accepts and inserts/refreshes the slot or silently returns Ok(None) and increments the refusal stat. Re-arming the same (initiator, target) pair refreshes the slot timestamp without consuming additional capacity.
- Only the relay branch (frames carrying target_peer_id) consumes a slot — non-relay echo frames are unchanged.

The two tiers compose: a coordinator at capacity silently drops the relay; the initiator's per-attempt timeout (Tier 2) drives it to its next preferred coordinator. No new wire frame is needed, so this is backwards-compatible with older peers.

Tests added:

- 5 unit tests for merge_preferred_coordinators (front insertion, ordering preservation, dedup, empty handling)
- 5 unit tests for BootstrapCoordinator back-pressure (under-cap accept, at-cap silent refuse, slot re-arm without capacity consumption, sweep reclaims stale slots, non-relay frames don't consume slots)

Test files (security_regression_tests.rs, relay_queue_tests.rs) updated to include the two new fields in their NatTraversalConfig struct literals.

This is the second of three PRs landing smarter hole-punch coordinator selection. The first (saorsa-core) added round-aware referrer ranking in the DHT layer; the third will wire saorsa-core to call the new list-form set_hole_punch_preferred_coordinators API.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
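The merge helper described above can be sketched as a pure function. The signature is assumed from the commit description (ranked preferred list inserted at the front of the candidate list, existing copies deduped, single allocation):

```rust
use std::net::SocketAddr;

/// Sketch of merge_preferred_coordinators as described in the commit:
/// preferred entries go first (best-first order preserved), and any
/// candidate already present in the preferred list is dropped so each
/// address appears exactly once. One allocation, O(N·M) membership scans
/// are acceptable for the small lists involved.
fn merge_preferred_coordinators(
    preferred: &[SocketAddr],
    candidates: &[SocketAddr],
) -> Vec<SocketAddr> {
    let mut merged = Vec::with_capacity(preferred.len() + candidates.len());
    merged.extend_from_slice(preferred);
    for c in candidates {
        if !merged.contains(c) {
            merged.push(*c); // keep non-preferred candidates in original order
        }
    }
    merged
}
```

The rotation loop can then index this list front-to-back, so the DHT's ranked referrers are always tried before the legacy bootstrap fallbacks.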
…lotTable

Addresses PR saorsa-labs#43 deep-review findings H1, H2, H3, M1, M2, L1–L6.

## H1 — node-wide scope (breaking)

Back-pressure state moves out of BootstrapCoordinator into a new shared `RelaySlotTable` (`src/relay_slot_table.rs`) that every QUIC connection spawned by a NatTraversalEndpoint references via Arc. The 32-slot cap is now enforced *across all connections at the node*, not per-connection — which is what the PR description originally promised but the code didn't deliver. One table is instantiated in `create_inner_endpoint` and injected into both server-side and client-side TransportConfig. The old `TransportConfig::{coordinator_max_active_relays, coordinator_relay_slot_timeout}` fields are gone; they are replaced by `TransportConfig::relay_slot_table: Option<Arc<RelaySlotTable>>`.

BREAKING: `TransportConfig::coordinator_max_active_relays` and `coordinator_relay_slot_timeout` setters removed. Callers using `NatTraversalConfig` should switch to `coordinator_max_active_relays` (unchanged) and the renamed `coordinator_relay_slot_idle_timeout`.

## H2 — explicit release on connection close

`impl Drop for BootstrapCoordinator` calls `RelaySlotTable::release_for_initiator(addr)`, reclaiming every slot owned by the closed connection immediately. The idle timeout is now an honest safety net for crashes/NAT rebinds, not the only release path. The coordinator cannot directly observe hole-punch outcomes (traffic flows initiator↔target, bypassing the coordinator), so the three release mechanisms are: connection close (fast), re-arm refresh, and idle sweep. All three are documented on the config field.

## H3 — drop dead `from_peer` from slot key

Slot key is now `(SocketAddr, [u8; 32])` — `(initiator_addr, target_peer_id)`. The old `from_peer = derive_peer_id_from_connection()` was a connection-id hash that was constant per BootstrapCoordinator instance, so keying on it added zero discrimination in production. The socket address is stable across rounds within a session and distinct across initiators, which is exactly what dedup needs.

## M1 — rotation / max_holepunch_rounds interaction documented

New paragraph on `set_hole_punch_preferred_coordinators` explains how the rotation index and strategy round counter advance together, and the edge case where a caller raising `max_holepunch_rounds` above K gives the final coordinator extra retries.

## M2 — fields plumbed through `P2pConfig::nat`

`NatConfig` gains `coordinator_max_active_relays` and `coordinator_relay_slot_idle_timeout`; `to_nat_config()` now reads from `self.nat.*` instead of hardcoding defaults, so downstream P2pConfig callers can tune the cap.

## Minor fixes

- L1: misleading "Coordination completed" trace on the refusal path is replaced with an accurate "refused by node-wide back-pressure" trace.
- L2: RelaySlotTable warns at the first refusal and every 16 thereafter so operators see a log line at the start of a storm.
- L3: debug_assert! in the connect loop proves `coordinator_candidates[idx] == strategy_coordinator` while rotating.
- L4: sweep is amortized — `sweep_if_due` runs the `retain` only if the previous sweep was at least 100ms ago, bounding per-frame cost.
- L5: `merge_preferred_coordinators` builds the merged list with one allocation instead of `Vec::insert(0, ..)` in a loop (O(N+M) not O(N·M)).
- L6: test helpers use `VarInt::from_u32` directly; dead `unwrap_or` fallback gone.

## Tests

- 6 new `RelaySlotTable` unit tests (under-cap, at-cap refuse, re-arm, idle sweep, release_for_initiator, refusal counter).
- 4 new `BootstrapCoordinator`-integration tests verifying the shared table is consulted for the relay branch, the non-relay branch doesn't consume a slot, capacity refusal is silent, and — new for H2 — that dropping the coordinator releases exactly the slots it owned while leaving other initiators' slots alone.
- Old 5 per-coordinator back-pressure tests are gone (the data structure they exercised no longer exists).
- Existing relay_queue_tests.rs and security_regression_tests.rs struct literals updated for the renamed field.

1485 lib tests + 24 integration tests pass; cargo fmt clean; cargo clippy --all-targets -- -D warnings -D clippy::unwrap_used -D clippy::expect_used clean.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
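The node-wide table described above — capacity cap, idle sweep, re-arm refresh, and release-on-close — can be sketched in a minimal single-threaded form. The real `RelaySlotTable` is shared via `Arc` and internally synchronized; this sketch only models the slot accounting:

```rust
use std::collections::HashMap;
use std::net::SocketAddr;
use std::time::{Duration, Instant};

/// Single-threaded sketch of the RelaySlotTable slot accounting.
/// Key is (initiator_addr, target_peer_id), matching the H3 fix.
struct RelaySlotTable {
    max_slots: usize,
    idle_timeout: Duration,
    slots: HashMap<(SocketAddr, [u8; 32]), Instant>,
    refusals: u64,
}

impl RelaySlotTable {
    fn new(max_slots: usize, idle_timeout: Duration) -> Self {
        Self { max_slots, idle_timeout, slots: HashMap::new(), refusals: 0 }
    }

    /// Sweep stale slots, then accept (insert or refresh) or silently refuse.
    fn try_acquire(&mut self, initiator: SocketAddr, target: [u8; 32], now: Instant) -> bool {
        let idle = self.idle_timeout;
        // Reclaim ghost slots from crashed peers before the capacity check.
        self.slots.retain(|_, t| now.duration_since(*t) < idle);
        let key = (initiator, target);
        if self.slots.contains_key(&key) {
            self.slots.insert(key, now); // re-arm: refresh, no extra capacity
            true
        } else if self.slots.len() < self.max_slots {
            self.slots.insert(key, now);
            true
        } else {
            self.refusals += 1; // silent refusal — caller returns Ok(None)
            false
        }
    }

    /// Release every slot owned by a closing connection (the H2 fix).
    fn release_for_initiator(&mut self, initiator: SocketAddr) {
        self.slots.retain(|(addr, _), _| *addr != initiator);
    }
}
```

Because refusal is silent on the wire, the initiator's short per-coordinator timeout (Tier 2) is what actually moves it on to the next candidate.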
…ordinator-rotation feat(nat): rotate hole-punch coordinators with capped per-attempt timeout
…ions 500ms was too tight for hole-punched and cross-region connections where the path includes 3+ RTTs (open_uni + data + peer ACK). Cross-continent RTTs of 200-300ms meant the identity announce frequently failed silently (fire-and-forget, never retried), leaving both peers stuck in a 15s identity exchange timeout. 5 seconds is generous enough for any real connection path while still detecting dead connections quickly (well under the 15s identity timeout). Fast profile raised proportionally (250ms -> 2.5s).
…ing it

The accept loop was closing the NEWER incoming connection when a live connection to the same address existed, keeping the older one. This is backwards: the newer connection is the one the remote peer just completed a handshake on and is actively using. Closing it kills their identity exchange, causing 15s timeouts.

This was the root cause of the regression from PR saorsa-labs#43 (coordinator rotation): when the rotation opened a new connection to a coordinator that already had an old connection, the acceptor closed the new one, leaving neither side with a working connection.

Fix: replace the old connection with the new one (consistent with add_connection, which also always overwrites with the newer connection). Also reset the emitted set so the replacement gets a reader task and a PeerConnected event.

Defense in depth: try_hole_punch now checks the DashMap in addition to connected_peers before opening a new coordinator connection, preventing unnecessary duplicates from being created in the first place.
…ed direct reachability
Port of David Irvine reachability fix (a5b9db46 from ant-quic).
Key changes:
- New src/reachability.rs module with ReachabilityScope, TraversalMethod,
socket_addr_scope() classifier, and DIRECT_REACHABILITY_TTL (15min)
- NodeStatus: has_public_ip -> has_global_address (address property, not
reachability proof), added direct_reachability_scope field
- can_receive_direct now requires peer-verified evidence (active direct
incoming connections or fresh scope-aware timestamps)
- can_help_traversal() simplified to just can_receive_direct
- TraversalMethod moved from node_event to reachability module
- PeerConnection gains traversal_method and side fields
- P2pEvent::PeerConnected gains traversal_method field
- EndpointStats tracks active_direct_incoming_connections and
last_direct_{loopback,local,global}_at timestamps
- detect_nat_type simplified to soft debug hint only
- Removed get_nat_stats() / nat_stats() placeholder methods
- link_transport Capabilities gains direct_reachability_scope
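The `socket_addr_scope()` classifier in the list above can be sketched roughly as follows. This is a hedged reconstruction from the description — the real module likely covers more ranges (CGNAT, documentation prefixes, etc.), and the variant names beyond `ReachabilityScope` itself are assumptions:

```rust
use std::net::{IpAddr, SocketAddr};

/// Scope classes used by the reachability model (names assumed).
#[derive(Debug, PartialEq, Eq)]
enum ReachabilityScope {
    Loopback,
    Local,
    Global,
}

/// Sketch of a socket_addr_scope() classifier: loopback, then private /
/// link-local (IPv4) and link-local / unique-local (IPv6), else global.
fn socket_addr_scope(addr: &SocketAddr) -> ReachabilityScope {
    match addr.ip() {
        IpAddr::V4(v4) if v4.is_loopback() => ReachabilityScope::Loopback,
        IpAddr::V4(v4) if v4.is_private() || v4.is_link_local() => ReachabilityScope::Local,
        IpAddr::V4(_) => ReachabilityScope::Global,
        IpAddr::V6(v6) if v6.is_loopback() => ReachabilityScope::Loopback,
        // fe80::/10 (link-local) or fc00::/7 (unique-local)
        IpAddr::V6(v6)
            if (v6.segments()[0] & 0xffc0) == 0xfe80
                || (v6.segments()[0] & 0xfe00) == 0xfc00 =>
        {
            ReachabilityScope::Local
        }
        IpAddr::V6(_) => ReachabilityScope::Global,
    }
}
```

The key point of the fix is that this classification is only an *address property*; can_receive_direct additionally demands peer-verified evidence (live direct incoming connections or fresh scope-aware timestamps) before trusting it.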
…stdoc

- Remove record_direct_incoming_stats() which double-counted active_direct_incoming_connections (event_callback already handles it via spawn_connection_handler -> ConnectionEstablished)
- Only set supports_relay/supports_coordination when side.is_client() (we connected to them, proving they accept inbound). A peer that connected to us only proves they can make outbound connections.
- Fix all pre-existing broken rustdoc links (LinkTransport::dial -> dial_addr, UpnpMappingService/RelaySlotTable scope issues, escaped brackets in relay_server.rs)
- Add missing PortPrediction test coverage
The cleanup_expired_sessions() method existed but was never called periodically. Add spawn_cleanup_task() which uses a Weak<Self> reference and tokio::time::interval to reap timed-out sessions at the configured cleanup_interval (default 60s). The task stops automatically when the server Arc is dropped. Wire up the cleanup task at both relay server creation sites in nat_traversal_api.rs so it starts as soon as the node boots.
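The Weak-reference shutdown pattern described above can be sketched with std threads standing in for the tokio task (the PR uses `tokio::time::interval`; the stop-on-drop mechanism is the same — `Weak::upgrade` fails once the last strong `Arc` is gone, and the loop exits). Type and field names here are illustrative stand-ins:

```rust
use std::sync::{Arc, Mutex, Weak};
use std::thread;
use std::time::Duration;

/// Stand-in for the relay server; the Mutex<Vec<..>> models the session map.
struct RelayServer {
    sessions: Mutex<Vec<&'static str>>,
}

impl RelayServer {
    /// Stand-in for reaping timed-out sessions.
    fn cleanup_expired_sessions(&self) {
        self.sessions.lock().unwrap().clear();
    }
}

/// Spawn the periodic cleanup loop holding only a Weak reference, so the
/// task cannot keep the server alive and stops once the Arc is dropped.
fn spawn_cleanup_task(server: &Arc<RelayServer>, interval: Duration) -> thread::JoinHandle<()> {
    let weak: Weak<RelayServer> = Arc::downgrade(server);
    thread::spawn(move || loop {
        thread::sleep(interval);
        match weak.upgrade() {
            Some(server) => server.cleanup_expired_sessions(),
            None => break, // server dropped — task exits automatically
        }
    })
}
```

In the tokio version the sleep becomes `interval.tick().await`, but the ownership logic — downgrade before spawn, upgrade per tick, break on failure — is identical.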
Point at branches with 9 proven fixes (clock skew tolerance + 8 transport fixes) that enabled the first successful Mac upload to a NAT-protected testnet.

Tested: 3/3 uploads from macOS (31s clock skew) to 14-node NAT testnet across lon1/ams3/nyc1/sfo3. 30-33s warm uploads.

Dependencies:
- saorsa-core: grumbach/fix/clock-skew-tolerance (PR saorsa-labs/saorsa-core#77)
- saorsa-transport: grumbach/round4-combined (PR saorsa-labs/saorsa-transport#55)

Remove the [patch] section once those PRs are merged into rc-2026.4.1.
```rust
let load_score = 1.0 / (node.coordination_count as f64 + 1.0);

// Combined score: RTT matters more (weight 3) than load (weight 1)
rtt_score * 3.0 + load_score
```
**Load-balancing score is always 1.0 — `coordination_count` never incremented**
coordination_count is initialised to 0 in every code path that creates a BootstrapNode (BootstrapNode::new, add_connection, the known_peers init loops) and there is no site in the codebase that increments it. The load score therefore evaluates to 1.0 / (0 + 1.0) = 1.0 for every candidate, making the load-balancing weight a constant additive term. Coordinator selection degrades to pure RTT-weighted selection with no actual load feedback, which is still better than the old approach — but the PR description's claim of "load-balanced" will confuse future readers.
Consider incrementing coordination_count when a PUNCH_ME_NOW coordination request is successfully dispatched, e.g.:
```rust
// In send_coordination_request_with_peer_id, after the frame is queued:
let mut nodes = self.bootstrap_nodes.write();
if let Some(node) = nodes.iter_mut().find(|n| n.address == coordinator) {
    node.coordination_count = node.coordination_count.saturating_add(1);
}
```
(Review comment anchored at src/nat_traversal_api.rs, lines 5845–5848.)
…flight DHT queries

The accept loop was closing the old connection immediately with "superseded" when replacing it with a newer connection to the same address. This caused the remote peer to tear down all state on that connection, including in-flight DHT FindNode queries and quote requests. The lost responses forced the DHT to re-walk, triggering more connections, more supersedes, and a cascading slowdown.

On a 60-node testnet with 80% port-restricted NAT, this caused:
- 264 superseded connections per upload
- Quote collection taking 8+ minutes (vs expected ~1.5 minutes)
- 212 hole-punch timeout events

Fix: instead of closing the old connection synchronously, spawn a task that waits 5 seconds before closing it. This gives in-flight operations time to complete on the old connection while new operations use the replacement. The DashMap is updated immediately so sends go on the new connection.

Tested on 60-node testnet (48 NAT / 12 standard):
- 0 supersede-related disruptions
- Upload times back to ~1m 45s steady state
- 8 consecutive 50MB uploads with 0 failures

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
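The delayed-close behaviour described above — swap the map entry immediately, close the displaced connection only after a grace period — can be sketched with std threads. `Conn` and the short grace used in the test are illustrative stand-ins (the PR uses 5 seconds and an async task):

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

/// Toy connection handle whose close state we can observe.
#[derive(Clone)]
struct Conn {
    closed: Arc<Mutex<bool>>,
}

impl Conn {
    fn new() -> Self {
        Self { closed: Arc::new(Mutex::new(false)) }
    }
    fn close(&self) {
        *self.closed.lock().unwrap() = true;
    }
    fn is_closed(&self) -> bool {
        *self.closed.lock().unwrap()
    }
}

/// After the map has been updated to the replacement connection, spawn a
/// task that waits out the grace period before closing the old one, so
/// in-flight DHT queries on it can drain.
fn supersede(old: Conn, grace: Duration) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        thread::sleep(grace);
        old.close();
    })
}
```

New sends go to the replacement immediately (the map swap is synchronous); only the teardown of the old connection is deferred.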
…lookups

saorsa-core stores hole_punch_target_peer_ids and hole_punch_preferred_coordinators under IPv4 keys (e.g. 63.177.242.27:10008), but connect_with_fallback receives IPv6-mapped addresses ([::ffff:63.177.242.27]:10008) from dual-stack sockets. The DashMap lookup misses, causing:

1. target_peer_id falls back to wire_id_from_addr (address-based ID)
2. The coordinator can't match the address-based ID against its peer connection table (which stores real ML-DSA peer IDs)
3. PUNCH_ME_NOW relay fails with "No connection found"
4. Hole-punch times out after 2 rounds → MASQUE relay fallback

At 60 nodes this was masked because bootstrap coordinators (public IP) had connections to most peers and the address fallback happened to work. At 990 nodes, NAT nodes are selected as coordinators more often and the address-based wire ID never matches.

Fix: normalize all addresses to plain IPv4 when inserting into and looking up from both hole-punch DashMaps, using the existing normalize_socket_addr() helper.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
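The normalization the fix relies on can be sketched with the standard library alone — a minimal version of what a `normalize_socket_addr()` helper does (the project's actual helper may handle more cases):

```rust
use std::net::{IpAddr, SocketAddr};

/// Unmap IPv6-mapped IPv4 addresses ([::ffff:a.b.c.d]:p) back to plain
/// IPv4 (a.b.c.d:p) so keys written under IPv4 match lookups coming from
/// dual-stack sockets. Genuine IPv6 addresses pass through unchanged.
fn normalize_socket_addr(addr: SocketAddr) -> SocketAddr {
    match addr.ip() {
        IpAddr::V6(v6) => match v6.to_ipv4_mapped() {
            Some(v4) => SocketAddr::new(IpAddr::V4(v4), addr.port()),
            None => addr,
        },
        IpAddr::V4(_) => addr,
    }
}
```

Applying this on *both* the insert and lookup paths is what matters: normalizing only one side reintroduces the exact mismatch described above.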
Superseded by #52, which includes all 8 fixes from this PR plus TLS-key-based connection dedup and RelaySlotTable back-pressure. Our fixes were validated by Mac-to-NAT-testnet uploads; Mick's PR takes them further with structural improvements (TLS-derived identity eliminates the identity exchange timeout entirely).
Proven by live testnet upload from Mac
3/3 uploads from a macOS client (31s clock skew) succeeded to a 14-node NAT testnet across 4 regions. Without these fixes: 0/3.
Upload 1 (cold): 61s SUCCESS
Upload 2 (warm): 30s SUCCESS
Upload 3 (warm): 33s SUCCESS
The 8 fixes
Incremental validation
Round 1 (send_ack): 3/3 at 44s warm
Round 2 (+dedup): 3/3 at 40s warm
Round 3 (+reachability): 3/3 at 67s warm
Round 4 (+relay+coordinator): 3/3 at 37s warm
Mac upload (all fixes): 3/3 at 30-33s warm
Total: 15/15 uploads succeeded across 5 test rounds.
Test plan
Greptile Summary
This PR bundles 8 targeted NAT traversal fixes validated by 15/15 successful uploads across 5 test rounds including a Mac-to-NAT-testnet scenario. The changes address a mix of timeout tuning, accept-loop deduplication regression (#43), reachability model correctness, relay fallback rotation/reuse, and coordinator quality selection. All fixes are well-scoped and supported by new tests.
`coordination_count` is initialized to `0` and never incremented anywhere in the codebase, so the load score is always `1.0 / (0 + 1) = 1.0` for every eligible coordinator. RTT weighting still works correctly, but the advertised load-balancing is a no-op until an increment call is wired in.

Confidence Score: 4/5
Safe to merge — all fixes are well-validated by testnet data and the single open issue (non-functional load balancing) does not affect correctness.
One P1-adjacent finding: the load-balancing component of the new quality-aware coordinator selector is silently inert because coordination_count is never incremented. This means coordinator rotation degrades to RTT-only selection rather than the RTT+load selection described in the PR. The system still works correctly with RTT weighting, but the load-balancing half of fix #8 is incomplete and could mislead future maintainers. All other fixes are correct, well-tested, and proven by live testnet data.
src/nat_traversal_api.rs — specifically the select_coordinator function (lines 5817–5870) and the missing coordination_count increment site in send_coordination_request_with_peer_id.
Vulnerabilities
No security concerns identified. The reachability model change (peer-verified scope instead of address classification) reduces the attack surface by requiring active peer confirmation before marking a node as coordinator/relay-capable, which prevents spoofed addresses from gaining coordinator trust. The accept loop fix correctly closes the old connection rather than the new one, and the relay session cleanup uses a `Weak` reference, preventing any memory or resource leaks.

Important Files Changed
Sequence Diagram
```mermaid
sequenceDiagram
    participant Peer
    participant AcceptLoop
    participant ConnMap as DashMap[addr→conn]
    participant EmittedSet as DashSet[emitted]
    participant HandshakeTx
    Note over Peer,HandshakeTx: Fix #2 — Accept loop keeps newer connection
    Peer->>AcceptLoop: TLS Handshake (new connection)
    AcceptLoop->>ConnMap: has_live(remote_addr)?
    ConnMap-->>AcceptLoop: true (stale old conn)
    AcceptLoop->>ConnMap: get(old_conn).close("superseded")
    AcceptLoop->>EmittedSet: remove(remote_addr)
    AcceptLoop->>ConnMap: insert(remote_addr, new_conn)
    AcceptLoop->>EmittedSet: insert(remote_addr) → true
    AcceptLoop->>HandshakeTx: send(Ok((addr, new_conn)))
    Note over Peer,HandshakeTx: Fix #6 — Relay session reuse returns socket
    Peer->>AcceptLoop: establish_relay_session(relay_addr)
    AcceptLoop->>ConnMap: relay_sessions.get(relay_addr)?
    ConnMap-->>AcceptLoop: active session found
    AcceptLoop->>Peer: open_relay_stream_and_handshake(conn, addr, existing_pub_addr)
    Peer-->>AcceptLoop: (pub_addr, relay_socket) ← socket now returned
    Note over Peer,HandshakeTx: Fix #5 — Relay rotation on failure
    AcceptLoop->>Peer: try_relay_connection(target, relay[0])
    Peer-->>AcceptLoop: Error / Timeout
    AcceptLoop->>AcceptLoop: strategy.transition_to_next_relay()
    AcceptLoop->>Peer: try_relay_connection(target, relay[1])
    Peer-->>AcceptLoop: OK → success
```