feat(monero): Remote node load balancing #420
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

This update introduces a new "monero-rpc-pool" Rust crate providing a Monero node load balancer and health-checking proxy, with database-backed node discovery, health monitoring, and HTTP proxy endpoints. The pool is integrated into the backend, CLI, and GUI, allowing users to toggle between custom nodes and the pool. Supporting changes include Redux state management, GUI settings, event propagation, and utility scripts.

Changes
Sequence Diagram(s)

    sequenceDiagram
        participant GUI
        participant Redux
        participant Tauri
        participant PoolServer
        participant DB
        GUI->>Redux: Toggle useMoneroRpcPool
        Redux->>Tauri: Send settings (incl. useMoneroRpcPool)
        Tauri->>PoolServer: If enabled, start server (random port)
        PoolServer->>DB: Load/init node list, run health checks
        PoolServer->>PoolServer: Periodically discover nodes, update health
        PoolServer->>Tauri: Emit PoolStatusUpdate events
        Tauri->>GUI: Forward PoolStatusUpdate to frontend
        GUI->>Redux: Dispatch poolStatusReceived
        GUI->>GUI: Render MoneroPoolHealthBox with status
        GUI->>Tauri: Make Monero RPC request (proxy)
        Tauri->>PoolServer: Proxy request via HTTP handler
        PoolServer->>DB: Select best node(s), forward request
        PoolServer->>Tauri: Return response or error
        Tauri->>GUI: Return response to frontend
📜 Recent review details

Configuration used: CodeRabbit UI

📒 Files selected for processing (6)
✨ Finishing Touches
Actionable comments posted: 8
🧹 Nitpick comments (9)
swap/src/common/tracing_util.rs (1)
92-95: Consistent log enrichment – good, but mind possible PII

Enabling file & line numbers across all layers keeps logs consistent.
Small caveat: terminal / GUI logs may now expose absolute paths on some platforms, which can leak user names. Consider putting `.with_file(true /* relative? */)` behind a debug/verbose flag if this is a concern.

Also applies to: 105-107, 123-125, 135-137
swap/src/cli/api/request.rs (1)
1376-1383: Error path now logged – consider propagating the reason

Logging the `is_available` error is great.
Returning `available: false` for all failures, however, hides why the node is unavailable (network vs. auth vs. timeout). If the caller could use this signal, bubble the error up (e.g. `Result<CheckMoneroNodeResponse>` where `available` is `bool` and `err` is optional).

monero-rpc-pool/Cargo.toml (1)
1-6: Add missing manifest metadata

Cargo best practice strongly advises declaring a `license`, `repository`, and (optionally) `homepage`/`keywords`.
Without them, crates published to crates.io are rejected and internal consumers lose provenance data.

     [package]
     name = "monero-rpc-pool"
     version = "0.1.0"
     authors = ["CoBloX Team <[email protected]>"]
     edition = "2021"
    +license = "Apache-2.0"
    +repository = "https://github.com/UnstoppableSwap/core"
70-71: Permissive CORS may be too open
`CorsLayer::permissive()` allows every origin, header, and method.
Unless the service is intentionally public, consider tightening this or making it configurable.

monero-rpc-pool/src/simple_handlers.rs (1)
20-28: Non-standard JSON-RPC error codes

All errors map to `code = -1`.
The JSON-RPC 2.0 spec reserves `-32000..-32099` for server errors. Using distinct codes greatly aids client troubleshooting.

monero-rpc-pool/src/pool.rs (2)
49-61: Potential overflow in back-off calculation
`2_u64.pow(self.consecutive_failures.min(5))` is safe up to 2⁵, but once the cap is ever raised, the multiplication with 60 s risks overflow. Consider `saturating_pow` (stable since Rust 1.34) or clamping explicitly.
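A std-only sketch of the saturating variant (the 60 s base and failure counter come from the snippet above; the cap of 10 is an illustrative choice):

```rust
/// Back-off in seconds after `consecutive_failures` failed attempts.
/// `saturating_pow` and `saturating_mul` keep the arithmetic safe even
/// if the exponent cap is later raised or removed entirely.
fn backoff_secs(consecutive_failures: u32) -> u64 {
    let factor = 2_u64.saturating_pow(consecutive_failures.min(10));
    factor.saturating_mul(60)
}
```

With this shape, raising the `min(10)` cap can at worst saturate at `u64::MAX` instead of panicking in debug builds or wrapping in release builds.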
39-47: `request_count` only tracks successes

`request_count` is incremented only inside `mark_success`, so failed attempts are invisible. Either rename it to `success_count` or increment it on every attempt.

monero-rpc-pool/src/discovery.rs (2)
107-180: Regex misses IPv6 / brackets
`server_regex` only matches `host:port` with no brackets, excluding IPv6 records like `2001:db8::1:18081` or `[2001:db8::1]:18081`. Extend the pattern or normalise beforehand.
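One way to normalise without touching the regex at all is to split from the right and strip brackets, which handles hostnames, bare IPv6, and bracketed IPv6 alike. A std-only sketch (the function name is illustrative, not from the PR):

```rust
/// Splits a "host:port" record into (host, port). Splitting on the *last*
/// colon keeps unbracketed IPv6 addresses intact, and brackets are
/// stripped when present.
fn split_host_port(record: &str) -> Option<(&str, u16)> {
    let (host, port) = record.rsplit_once(':')?;
    let port: u16 = port.parse().ok()?;
    let host = host
        .strip_prefix('[')
        .and_then(|h| h.strip_suffix(']'))
        .unwrap_or(host);
    Some((host, port))
}
```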
29-105: Bulk upsert loop is DB-chatty

Each node incurs an `UPSERT`; thousands of nodes mean thousands of round-trips. Use `sqlx::QueryBuilder` to batch, or wrap the loop in a single transaction to cut I/O.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (13)
- `Cargo.toml` (1 hunks)
- `monero-rpc-pool/Cargo.toml` (1 hunks)
- `monero-rpc-pool/src/config.rs` (1 hunks)
- `monero-rpc-pool/src/database.rs` (1 hunks)
- `monero-rpc-pool/src/discovery.rs` (1 hunks)
- `monero-rpc-pool/src/lib.rs` (1 hunks)
- `monero-rpc-pool/src/main.rs` (1 hunks)
- `monero-rpc-pool/src/pool.rs` (1 hunks)
- `monero-rpc-pool/src/simple_handlers.rs` (1 hunks)
- `monero-rpc-pool/src/smart_pool.rs` (1 hunks)
- `swap/src/cli/api/request.rs` (1 hunks)
- `swap/src/common/tracing_util.rs` (4 hunks)
- `swap/src/tracing_ext.rs` (1 hunks)
🔇 Additional comments (6)
Cargo.toml (1)
3-3: Workspace member added – ensure CI & docs are updated

`monero-rpc-pool` is now part of the workspace.
Double-check that:

- CI (build / test / clippy / fmt) picks the new crate up.
- Any top-level docs (`README.md`, release notes, etc.) mention the extra binary/library.

No code issues spotted here.

swap/src/tracing_ext.rs (1)
swap/src/tracing_ext.rs (1)
21-22: Nice – file/line metadata improves debuggability

The added `.with_file(true).with_line_number(true)` is a low-cost way to make log messages actionable. 👍

monero-rpc-pool/src/config.rs (1)
26-34: Compilation fails due to partial move of `default`

After `default.host` is moved into `unwrap_or`, the struct `default` is partially moved and cannot be used again. Accessing `default.port`/`default.nodes` will not compile.

    -let default = Self::default();
    -Self {
    -    host: host.unwrap_or(default.host),
    -    port: port.unwrap_or(default.port),
    -    nodes: nodes.unwrap_or(default.nodes),
    -}
    +let Config { host: def_host, port: def_port, nodes: def_nodes } = Self::default();
    +Self {
    +    host: host.unwrap_or(def_host),
    +    port: port.unwrap_or(def_port),
    +    nodes: nodes.unwrap_or(def_nodes),
    +}

Please fix before merging.

Likely an incorrect or invalid review comment.
monero-rpc-pool/Cargo.toml (1)
11-31: Confirm dependency security & feature choices

A few dependencies (`reqwest`, `sqlx`, `rand`) have had recent CVEs.
Pinning exact versions is good, but please verify none of the selected versions are flagged before cutting a release.
Also note that `sqlx`'s compile-time checked queries require a `DATABASE_URL` at build time (or prepared offline query data); CI will fail if this is missing.

monero-rpc-pool/src/main.rs (1)
34-40: Log filter syntax – double-check target name
`EnvFilter::new("monero_rpc_pool=debug,warn")` sets the default to `warn`, and applies `debug` only if the crate root is called `monero_rpc_pool`.
If your crate/root module is `monero_rpc_pool` you're good; if the path is still `monero-rpc-pool` (hyphen) the debug logs will be missing.
Validate with a quick run in verbose mode.monero-rpc-pool/src/database.rs (1)
333-371: `MIN()` misuse may cause undefined ordering
`MIN(avg_latency_ms, 2000)` is the scalar function, not the aggregate, and `MIN(success_count + failure_count, 100)` is MySQL-style syntax that is not valid in SQLite ≤ 3.43. Replace it with `CASE WHEN success_count + failure_count > 100 THEN 100 ELSE success_count + failure_count END`.
    fn create_client() -> Result<reqwest::Client, HandlerError> {
        reqwest::Client::builder()
            .timeout(std::time::Duration::from_secs(30))
            .build()
            .map_err(|e| HandlerError::ClientError(e.to_string()))
    }
🛠️ Refactor suggestion
`reqwest::Client` rebuilt on every request

Constructing a client per call is expensive (TLS config & connection pool).
Store one `Client` inside `AppState` and reuse it.
    -pub fn create_client() -> Result<reqwest::Client, HandlerError> { … }
    +// initialise once in `AppState`
    +lazy_static::lazy_static! {
    +    static ref CLIENT: reqwest::Client = reqwest::Client::builder()
    +        .timeout(std::time::Duration::from_secs(30))
    +        .build()
    +        .expect("reqwest client");
    +}

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In monero-rpc-pool/src/simple_handlers.rs around lines 70 to 75, the
reqwest::Client is being rebuilt on every request, which is inefficient due to
TLS setup and connection pooling overhead. To fix this, create a single
reqwest::Client instance during application initialization and store it in the
AppState struct. Then update the code to reuse this stored client for all
requests instead of building a new one each time.
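The `lazy_static` approach shown above works; since Rust 1.70 the standard library offers the same build-once pattern via `std::sync::OnceLock`, with no extra crate. A sketch with a stand-in type (`HttpClient` substitutes for `reqwest::Client`, whose builder call would go inside `get_or_init`):

```rust
use std::sync::OnceLock;

// Stand-in for `reqwest::Client`; the real builder (timeout, TLS config,
// connection pool) would be constructed once inside `get_or_init`.
#[derive(Debug)]
struct HttpClient {
    timeout_secs: u64,
}

/// Returns the process-wide client, building it exactly once on first use.
fn client() -> &'static HttpClient {
    static CLIENT: OnceLock<HttpClient> = OnceLock::new();
    CLIENT.get_or_init(|| HttpClient { timeout_secs: 30 })
}
```

Storing the client in `AppState` (as the prompt suggests) is the cleaner option when the app already threads state through handlers; a process-wide static is the low-ceremony fallback.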
monero-rpc-pool/src/pool.rs
Outdated
    if let Some(node_info) = self.nodes.get(url) {
        if node_info.is_healthy || node_info.should_retry() {
            debug!("Selected node: {}", url);
            return Some(url.clone());
        }
    }
🛠️ Refactor suggestion
Fallback picks an unhealthy node
If every node is unhealthy, `get_next_node` logs a warning and then returns `first().cloned()`, which is guaranteed to be unhealthy. This sacrifices the whole circuit until the back-off expires.
Add a second pass that waits for the shortest back-off, or return `None` so the caller can decide.
🤖 Prompt for AI Agents
In monero-rpc-pool/src/pool.rs around lines 102 to 108, the current
get_next_node method returns the first node even if it is unhealthy, causing the
system to use an unhealthy node until its back-off expires. Modify the method to
add a second pass that, if no healthy or retryable nodes are found, waits for
the node with the shortest back-off period before returning it, or returns None
to let the caller decide. This involves scanning nodes again to find the one
with the shortest retry wait time and returning it only after that wait or
returning None if no suitable node is available.
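A sketch of the two-pass shape the prompt describes, with illustrative types (plain `u64` timestamps stand in for the pool's real clock and back-off fields): the first pass returns a healthy or retry-eligible node; the second surfaces the soonest-to-retry candidate so the caller can decide whether to wait rather than silently hammering a known-bad node.

```rust
struct NodeInfo {
    url: String,
    is_healthy: bool,
    retry_at: u64, // seconds since epoch when the back-off expires
}

fn get_next_node(nodes: &[NodeInfo], now: u64) -> Option<&NodeInfo> {
    nodes
        .iter()
        // First pass: healthy, or back-off already expired.
        .find(|n| n.is_healthy || n.retry_at <= now)
        // Second pass: nothing eligible – expose the soonest-to-retry node
        // (its `retry_at` tells the caller how long to wait) instead of
        // blindly returning the first, guaranteed-unhealthy entry.
        .or_else(|| nodes.iter().min_by_key(|n| n.retry_at))
}
```

An empty pool yields `None`, letting the caller fail fast instead of looping.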
monero-rpc-pool/src/discovery.rs
Outdated
    pub async fn health_check_all_nodes(&self) -> Result<()> {
        info!("Starting health check for all nodes");

        // Get all nodes from database
        let all_nodes = sqlx::query_as::<_, MoneroNode>(
            "SELECT * FROM monero_nodes ORDER BY last_checked ASC NULLS FIRST",
        )
        .fetch_all(&self.db.pool)
        .await?;

        let mut checked_count = 0;
        let mut healthy_count = 0;

        for node in all_nodes {
            match self.check_node_health(&node.full_url).await {
                Ok((is_healthy, latency)) => {
                    if is_healthy {
                        self.db.update_node_success(&node.full_url, latency).await?;
                        healthy_count += 1;
                    } else {
                        self.db.update_node_failure(&node.full_url).await?;
                    }
                    checked_count += 1;
                }
                Err(e) => {
                    warn!("Failed to check node {}: {}", node.full_url, e);
                    self.db.update_node_failure(&node.full_url).await?;
                }
            }

            // Small delay to avoid hammering nodes
            tokio::time::sleep(Duration::from_millis(100)).await;
        }

        info!(
            "Health check completed: {}/{} nodes are healthy",
            healthy_count, checked_count
        );

        // Update reliable nodes after health check
        self.db.update_reliable_nodes().await?;

        Ok(())
    }
🛠️ Refactor suggestion
Health checks are fully sequential
With hundreds of nodes, a per-node 10 s timeout + 100 ms delay means hours per sweep. Use `FuturesUnordered` + `Semaphore` to limit concurrency (e.g. 50 in-flight).
🤖 Prompt for AI Agents
In monero-rpc-pool/src/discovery.rs between lines 229 and 272, the
health_check_all_nodes function performs node health checks sequentially,
causing long total runtime due to per-node timeouts and delays. Refactor this
function to run health checks concurrently using FuturesUnordered combined with
a Semaphore to limit concurrency to around 50 simultaneous checks. This involves
spawning async tasks for each node health check, acquiring a semaphore permit
before starting each check, and collecting results as they complete, while
preserving the update logic for success and failure.
Actionable comments posted: 2
♻️ Duplicate comments (2)
monero-rpc-pool/src/simple_handlers.rs (2)
76-81: `reqwest::Client` still rebuilt for every request

This was flagged earlier; the costly TLS handshake defeats pooling. Reuse a single client stored in `AppState` (or `lazy_static`).
400-420: Error body is lost – client only receives 500 status

Returning `Err(StatusCode::INTERNAL_SERVER_ERROR)` discards the JSON crafted just above.
Return the tuple `(StatusCode, Json(error_response))` (or any `impl IntoResponse`) so callers get details.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- `monero-rpc-pool/src/config.rs` (1 hunks)
- `monero-rpc-pool/src/lib.rs` (1 hunks)
- `monero-rpc-pool/src/main.rs` (1 hunks)
- `monero-rpc-pool/src/simple_handlers.rs` (1 hunks)
- `src-gui/src/renderer/components/pages/help/SettingsBox.tsx` (5 hunks)
- `src-gui/src/store/features/settingsSlice.ts` (4 hunks)
- `src-tauri/Cargo.toml` (1 hunks)
- `src-tauri/src/lib.rs` (4 hunks)
- `swap/src/cli/api/request.rs` (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- src-tauri/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (4)
- swap/src/cli/api/request.rs
- monero-rpc-pool/src/main.rs
- monero-rpc-pool/src/lib.rs
- monero-rpc-pool/src/config.rs
🔇 Additional comments (1)
src-gui/src/store/features/settingsSlice.ts (1)
20-21: Backend ignores this new flag – toggle may appear to work but is silently overridden
`initialize_context` in `src-tauri/src/lib.rs` decides solely on `settings.monero_node_url`.
If a user once configured a custom node, switching the GUI back to "Pool" leaves `monero_node_url` populated ⇒ the pool is never started.

Action items:
- When the toggle is set to “Pool”, clear the stored custom URL (or pass the boolean to the backend and let it override).
- Add an integration test to cover the two scenarios.
Also applies to: 124-124, 212-214
    function MoneroRpcPoolSetting() {
      const useMoneroRpcPool = useSettings((s) => s.useMoneroRpcPool);
      const dispatch = useAppDispatch();

      const handleChange = (
        event: React.MouseEvent<HTMLElement>,
        newValue: string,
      ) => {
        if (newValue !== null) {
          dispatch(setUseMoneroRpcPool(newValue === "pool"));
        }
      };

      return (
        <TableRow>
          <TableCell>
            <SettingLabel
              label="Monero Node Selection"
              tooltip="Choose between using a load-balanced pool of Monero nodes for better reliability, or configure custom Monero nodes."
            />
          </TableCell>
          <TableCell>
            <ToggleButtonGroup
              color="primary"
              value={useMoneroRpcPool ? "pool" : "custom"}
              exclusive
              onChange={handleChange}
              aria-label="Monero node selection"
              size="small"
            >
              <ToggleButton value="pool">Pool (Recommended)</ToggleButton>
              <ToggleButton value="custom">Custom</ToggleButton>
            </ToggleButtonGroup>
          </TableCell>
        </TableRow>
      );
🛠️ Refactor suggestion
UI toggle does not clear stale custom URL
The `MoneroRpcPoolSetting` component dispatches only the boolean; previously entered custom node URLs remain in state.
Unless those URLs are cleared (or backend checks the flag), the pool won’t be used. Consider:
     if (newValue === "pool") {
       dispatch(setUseMoneroRpcPool(true));
    +  dispatch(clearMoneroNodeUrl()); // new action to wipe stored URL(s)
     }

This keeps front- and back-end behaviour consistent.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src-gui/src/renderer/components/pages/help/SettingsBox.tsx around lines 298
to 333, the MoneroRpcPoolSetting component dispatches only a boolean to toggle
between pool and custom nodes, but it does not clear any previously entered
custom node URLs, causing stale URLs to persist in state. To fix this, update
the handleChange function to also clear the custom node URLs from the state when
switching to the pool option, ensuring the front-end state and back-end behavior
remain consistent.
Actionable comments posted: 5
♻️ Duplicate comments (5)
src-gui/src/renderer/components/pages/help/SettingsBox.tsx (1)
302-309: Toggling to “Pool” leaves stale custom node(s) in state
When the user flips from "Manual" to "Pool", only `useMoneroRpcPool` is updated; previously entered custom URLs remain and may still be picked up by backend code that ignores the flag. Clear them (or ignore them on the backend) when switching modes.

This is the same issue flagged in an earlier review.
monero-rpc-pool/src/simple_handlers.rs (2)
59-63: `reqwest::Client` rebuilt for every request – heavy overhead

Creating a client per call disables connection pooling and re-does TLS handshakes, hurting latency.
Instantiate one `Client` during app startup and store it in `AppState`, then reuse it here.

Same remark was made previously.
340-347: Error body is lost – client only receives 500 status

Returning `Response::builder().status(500)` with a plain string discards useful diagnostic info.
Return `(StatusCode, Body)` or a struct implementing `IntoResponse` so the JSON error reaches the caller.

Issue already raised in an earlier review of this file.
monero-rpc-pool/src/database.rs (1)
116-118: SQLite URL still malformed

The connection string lacks the triple slash required by `sqlx` (`sqlite:///path`). This was pointed out in a previous review but remains unfixed.

    -let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
    +// Note the *three* slashes after `sqlite:`
    +let database_url = format!("sqlite://{}?mode=rwc", db_path.display());

Without this, Windows paths and some Unix setups fail to open the database.
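A sketch of the corrected builder from the diff above: with an absolute Unix path, `sqlite://` plus `Path::display` yields the triple-slash form.

```rust
use std::path::Path;

/// Builds an sqlx-style SQLite URL. For `/data/nodes.sqlite` this yields
/// `sqlite:///data/nodes.sqlite?mode=rwc` (scheme, empty authority, path).
fn sqlite_url(db_path: &Path) -> String {
    format!("sqlite://{}?mode=rwc", db_path.display())
}
```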
monero-rpc-pool/src/discovery.rs (1)
199-261: Health-check loop is still fully sequential

With a 10 s HTTP timeout, `for node in all_nodes { … await … sleep 50 ms }` scales linearly and will take minutes or hours on large pools.

Previous review suggested `FuturesUnordered` + `Semaphore` to run, say, 50 checks in parallel. This change hasn't been applied.

Consider:

    use futures::stream::{FuturesUnordered, StreamExt};

    let sem = Arc::new(Semaphore::new(50));
    let mut tasks = FuturesUnordered::new();

    for node in all_nodes {
        let permit = sem.clone().acquire_owned().await?;
        let db = self.db.clone();
        let url = node.full_url.clone();
        tasks.push(tokio::spawn(async move {
            let outcome = self.check_node_health(&url).await;
            drop(permit);
            (node, outcome)
        }));
    }

    while let Some(Ok((node, outcome))) = tasks.next().await {
        // existing recording / identification logic
    }

This keeps total wall-clock time bounded while protecting nodes from excessive concurrency.
🧹 Nitpick comments (13)
justfile (2)
103-103: Avoid re-chmod; invoke the script directly with `bash` instead

Running `chmod +x` on every call is unnecessary noise on each invocation and can create merge-conflict churn when the executable bit is already committed.
You can spare the extra syscall and keep the intent clear by invoking the script through the interpreter:

    -cd dev_scripts && chmod +x ./brew_dependencies_install.sh && ./brew_dependencies_install.sh
    +cd dev_scripts && bash ./brew_dependencies_install.sh
105-108: Parametrised recipe looks solid – minor quoting nit

The interpolation with `{{crate}}` is correct and the single-line command keeps the `cd` context intact.
Consider double-quoting the interpolated variable to guard against crate paths that might contain spaces (unlikely, but cheap to harden):

    -cd {{crate}} && code2prompt . --exclude "*.lock" --exclude ".sqlx/*" --exclude "target"
    +cd "{{crate}}" && code2prompt . --exclude "*.lock" --exclude ".sqlx/*" --exclude "target"

dev_scripts/bump-version.sh (3)
1-2: Suggest adding pipefail for robust error handling
Consider extending `set -eu` to `set -euo pipefail` so the script also fails if any command in a pipeline fails, improving reliability.

Apply this diff:

    -set -eu
    +set -euo pipefail
4-7: Ensure usage message is written to stderr
It's best practice to send usage/help diagnostics to stderr. Change `echo "Usage: …"` to `echo >&2 "Usage: …"` so it doesn't pollute stdout.

    -echo "Usage: $0 <version>"
    +echo >&2 "Usage: $0 <version>"
9-12: Optional: Validate the version string format
Right now any argument is accepted. You may want to guard against invalid bump inputs (e.g. non-semver) with a regex check and error out early.

Example snippet:

    if ! [[ "$VERSION" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
        echo >&2 "Error: version must match semver (X.Y.Z)"
        exit 1
    fi

swap/src/cli/api/tauri_bindings.rs (1)
302-304: Minor: keep emitter helpers alphabetically grouped

For readability keep the helper list ordered; consider moving `emit_pool_status_update` just below the `emit_balance_update_event` block.

src-gui/src/store/features/poolSlice.ts (2)
4-12: Rename slice state interface to avoid shadowing

The name `PoolSlice` is very close to the slice variable `poolSlice`, which can confuse code completion & lint rules.

    -interface PoolSlice {
    +interface PoolState {

(and adjust usages).
18-25: Consider handling an explicit “loading failed” state
`isLoading` flips to `false` on the first payload, but there is no way to express an error or timeout from the backend. Adding an `error?: string` field would allow the UI to surface failures gracefully.

src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx (2)
11-13: Remove unused import `LinearProgress`

`LinearProgress` is imported but never used, triggering the `noUnusedLocals`/`no-unused-vars` lint rule in strict TS projects.

    - LinearProgress,
35-39: Dead code: `getHealthColor` is never referenced

The helper is defined but unused. Delete it or wire it into the Chips to avoid dead-code warnings.
dev_scripts/test_monero_rpc_pool.sh (1)
15-21: Quote the default-value object to silence `shellcheck` and avoid brace-expansion surprises

`json_rpc()` initialises `params` via `${1:-{}}`.
Because `{}` is unquoted, some shells treat it as a brace-expansion pattern; shellcheck SC1083 flags exactly this.

    -local params=${1:-{}}
    +local params=${1:-'{}'}

This keeps the literal `{}` intact and removes the warning. (Quote only the default value; wrapping the whole expansion in single quotes, as in `params='${1:-{}}'`, would suppress the parameter expansion entirely.)
645-646: Possible undefined access when looking up node status
`nodeStatuses[blockchain][node]` assumes both nested keys exist.
If either the `blockchain` map or the `node` entry is missing, React will throw.

    -<NodeStatus status={nodeStatuses[blockchain][node]} />
    +<NodeStatus status={nodeStatuses[blockchain]?.[node]} />

Adds a tiny safeguard without changing behaviour.
monero-rpc-pool/src/main.rs (1)
8-8: Unnecessary `use url;` import

`url` is only referenced as `url::Url::parse`; importing the crate alias adds nothing and can confuse IDEs.

    -use url;

Safe to delete.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (23)
- `.gitignore` (1 hunks)
- `dev_scripts/bump-version.sh` (1 hunks)
- `dev_scripts/test_monero_rpc_pool.sh` (1 hunks)
- `justfile` (1 hunks)
- `monero-rpc-pool/Cargo.toml` (1 hunks)
- `monero-rpc-pool/src/database.rs` (1 hunks)
- `monero-rpc-pool/src/discovery.rs` (1 hunks)
- `monero-rpc-pool/src/lib.rs` (1 hunks)
- `monero-rpc-pool/src/main.rs` (1 hunks)
- `monero-rpc-pool/src/pool.rs` (1 hunks)
- `monero-rpc-pool/src/simple_handlers.rs` (1 hunks)
- `src-gui/package.json` (1 hunks)
- `src-gui/src/renderer/background.ts` (2 hunks)
- `src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx` (1 hunks)
- `src-gui/src/renderer/components/pages/help/SettingsBox.tsx` (5 hunks)
- `src-gui/src/renderer/components/pages/help/SettingsPage.tsx` (2 hunks)
- `src-gui/src/renderer/rpc.ts` (2 hunks)
- `src-gui/src/store/combinedReducer.ts` (2 hunks)
- `src-gui/src/store/features/poolSlice.ts` (1 hunks)
- `src-tauri/src/lib.rs` (5 hunks)
- `swap/Cargo.toml` (1 hunks)
- `swap/src/cli/api/tauri_bindings.rs` (3 hunks)
- `swap/src/common/tracing_util.rs` (5 hunks)
✅ Files skipped from review due to trivial changes (5)
- src-gui/src/renderer/components/pages/help/SettingsPage.tsx
- swap/Cargo.toml
- .gitignore
- src-gui/src/store/combinedReducer.ts
- src-gui/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
- src-tauri/src/lib.rs
- swap/src/common/tracing_util.rs
- monero-rpc-pool/Cargo.toml
🧰 Additional context used
🪛 Shellcheck (0.10.0)
dev_scripts/test_monero_rpc_pool.sh
[warning] 17-17: This } is literal. Check expression (missing ;/\n?) or quote it.
(SC1083)
🔇 Additional comments (8)
dev_scripts/bump-version.sh (2)
15-17: Approve JSON version replacement
The `sed -i 's/"version": "[^"]*"/"version": "'"$VERSION"'"/'` invocation correctly updates `src-tauri/tauri.conf.json` and handles quoting properly for `$VERSION`.
26-26: Approve completion message
The final echo clearly informs the user of success and the new version.src-gui/src/renderer/background.ts (1)
131-134: Pool status wired up – looks good

The new `"PoolStatusUpdate"` case cleanly forwards the payload to Redux, mirroring the existing pattern for other events.
No issues spotted.
No issues spotted.swap/src/cli/api/tauri_bindings.rs (1)
6-7: Ensure `PoolStatus` is `Clone + Serialize`

`PoolStatus` travels through a `#[derive(Clone, Serialize)]` enum and the Tauri event system.
Double-check that `monero_rpc_pool::pool::PoolStatus` implements both traits; otherwise this will fail to compile or panic at runtime.
226-257: Promise.any may need a polyfill in older Electron runtimes
`Promise.any` is only available from Chrome 85 / Node 15. Verify the WebView runtime version shipped with Tauri; if it's older, replace it with a manual implementation or a `promise-any` polyfill to avoid runtime crashes.
334-343: Bitcoin node health never refreshed – confirm this is intended

The refactor intentionally skips Electrum node checks on every refresh.
If UI components still rely on per-node status, this may regress UX.
Please confirm the design decision or re-introduce periodic checks for BTC nodes.dev_scripts/test_monero_rpc_pool.sh (1)
30-34: `jq` is assumed to exist – bail early when missing

Every check (`get_info`, `get_fee_estimate`, …) pipes into `jq`.
If the developer runs the script on a machine without `jq`, it exits with the confusing message "command not found" after the first curl.

Add an upfront dependency check:

    command -v jq >/dev/null || die "jq is required but not installed"

Placing this right after `set -euo pipefail` fails fast with a clear error.
143-156: Port inference for HTTPS nodes is likely wrong for Monero

When parsing manual nodes you fall back to `443` for HTTPS URLs, yet the restricted RPC port on most public HTTPS nodes is `18089`.
This will silently rewrite e.g. `https://my-node.io` into `https://my-node.io:443`, leading to connection failures.

Either:

- Require the port to be present in the CLI list and reject otherwise, or
- Default to `18089` (and maybe `28089` for test/stagenet).

Failing fast avoids populating the DB with unusable entries.
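The second option could be a tiny helper; a sketch assuming the conventional restricted-RPC ports rather than the generic web defaults (function name is illustrative):

```rust
/// Default Monero RPC port when the node URL omits one. `443`/`80` are
/// deliberately *not* used, since public nodes serve RPC on 1808x ports.
fn default_monero_rpc_port(scheme: &str) -> u16 {
    match scheme {
        "https" => 18089, // common restricted RPC behind TLS
        _ => 18081,       // plain-HTTP mainnet default
    }
}
```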
    # Update changelog
    sed -i "s/^## \\[Unreleased\\]/## [$VERSION] - $TODAY/" CHANGELOG.md
    # Add a new [Unreleased] section at the top
    sed -i '3i## [Unreleased]\n' CHANGELOG.md
🛠️ Refactor suggestion
Fix CHANGELOG insertion syntax
The `sed -i '3i## [Unreleased]\n'` invocation will insert a literal `\n`. Use the proper `i` command with an escaped newline so sed inserts the new heading on its own line:
    -sed -i '3i## [Unreleased]\n' CHANGELOG.md
    +sed -i '3i\
    +## [Unreleased]
    +' CHANGELOG.md

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    # Update changelog
    sed -i "s/^## \\[Unreleased\\]/## [$VERSION] - $TODAY/" CHANGELOG.md
    # Add a new [Unreleased] section at the top
    sed -i '3i\
    ## [Unreleased]
    ' CHANGELOG.md
🤖 Prompt for AI Agents
In dev_scripts/bump-version.sh around lines 21 to 25, the sed command inserting
the new [Unreleased] section uses a literal \n which does not create a new line.
Replace the sed insertion command with the proper syntax using a backslash
followed by an actual newline character after the i command to ensure the
heading is inserted on its own line.
    # Update version in Cargo.toml files
    sed -i -E 's/^version = "[0-9]+\.[0-9]+\.[0-9]+"/version = "'"$VERSION"'"/' swap/Cargo.toml src-tauri/Cargo.toml
🛠️ Refactor suggestion
Include the new monero-rpc-pool crate in Cargo.toml updates
The script only bumps versions in `swap/Cargo.toml` and `src-tauri/Cargo.toml`; it omits the newly added `monero-rpc-pool/Cargo.toml`. Extend the invocation or glob all crates to ensure every workspace member is updated.
For example:

    -sed -i -E 's/^version = "[0-9]+\.[0-9]+\.[0-9]+"/version = "'"$VERSION"'"/' swap/Cargo.toml src-tauri/Cargo.toml
    +sed -i -E 's/^version = "[0-9]+\.[0-9]+\.[0-9]+"/version = "'"$VERSION"'"/' monero-rpc-pool/Cargo.toml swap/Cargo.toml src-tauri/Cargo.toml

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

    # Update version in Cargo.toml files
    sed -i -E 's/^version = "[0-9]+\.[0-9]+\.[0-9]+"/version = "'"$VERSION"'"/' monero-rpc-pool/Cargo.toml swap/Cargo.toml src-tauri/Cargo.toml
🤖 Prompt for AI Agents
In dev_scripts/bump-version.sh around lines 18 to 20, the version update only
targets swap/Cargo.toml and src-tauri/Cargo.toml, missing the
monero-rpc-pool/Cargo.toml file. Modify the sed command to also include
monero-rpc-pool/Cargo.toml or adjust it to update all Cargo.toml files in the
workspace, ensuring the new crate's version is bumped consistently with the
others.
Actionable comments posted: 3
♻️ Duplicate comments (3)
monero-rpc-pool/src/simple_handlers.rs (1)
31-35: `reqwest::Client` rebuilt per request

Same concern as flagged previously – large TLS cost on every call. Reuse a singleton client stored in `AppState`.

monero-rpc-pool/src/pool.rs (1)
60-64: P2C can choose the same node twice
`choose` samples with replacement; `node1` and `node2` may be identical, nullifying the "power-of-two-choices".
Sample without replacement:

    let mut rng = thread_rng();
    let picks = candidate_nodes.choose_multiple(&mut rng, 2).collect::<Vec<_>>();
    let (node1, node2) = (picks[0], picks[1]);

monero-rpc-pool/src/database.rs (1)
116-118: SQLite URL still malformed
`"sqlite:{}?mode=rwc"` produces `sqlite:/path`; `sqlx` expects three slashes (`sqlite:///path`). Update the format string.

    -let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
    +let database_url = format!("sqlite:///{}?mode=rwc", db_path.display());
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `.vscode/settings.json` (1 hunks)
- `monero-rpc-pool/src/database.rs` (1 hunks)
- `monero-rpc-pool/src/pool.rs` (1 hunks)
- `monero-rpc-pool/src/simple_handlers.rs` (1 hunks)
- `swap/src/asb/config.rs` (1 hunks)
- `swap/src/bin/asb.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .vscode/settings.json
Actionable comments posted: 1
♻️ Duplicate comments (6)
monero-rpc-pool/src/lib.rs (2)
75-119: Background tasks leak & cannot be shut down gracefully
`tokio::spawn` is used multiple times, but the resulting `JoinHandle`s are dropped. During tests or when embedding the pool as a library, these detached tasks outlive the router and keep the runtime alive, making clean shutdown impossible.
Consider collecting the handles and exposing a `shutdown()` on `AppState` (or returning a guard implementing `Drop`) that:
- Cancels the tasks via `handle.abort()`.
- Awaits their termination in `Drop`/graceful-shutdown logic.
This keeps resource ownership clear and prevents dangling threads.
184-188: Spawned server task handle is not stored
The `tokio::spawn` call creates a detached background task for the server, but the `JoinHandle` is dropped. This prevents graceful shutdown and error monitoring of the server task.
Consider returning the `JoinHandle` or storing it in the `ServerInfo` struct to allow callers to monitor or shut down the server gracefully.

```diff
 // Start the server in a background task
-tokio::spawn(async move {
+let server_handle = tokio::spawn(async move {
     if let Err(e) = axum::serve(listener, app).await {
         error!("Server error: {}", e);
     }
 });
```

monero-rpc-pool/src/simple_handlers.rs (1)
53-56: `reqwest::Client` rebuilt on every request
Constructing a client per call is expensive (TLS config & connection pool). Store one `Client` inside `AppState` and reuse it.

```diff
-let client = reqwest::Client::builder()
-    .timeout(std::time::Duration::from_secs(30))
-    .build()
-    .map_err(|e| HandlerError::RequestError(e.to_string()))?;
+// Store in AppState and reuse
```

monero-rpc-pool/src/pool.rs (1)
60-70: Selection algorithm still favours unhealthy / duplicate picks
`candidate_nodes` is sampled indiscriminately; a node with `success_count == 0` (or a back-off in effect) can be returned, and `choose` may pick the same element twice, negating the "power-of-two-choices".
Refine the logic to prefer nodes with successful health checks and ensure distinct selection:

```diff
-// Power of Two Choices: pick 2 random nodes, select the better one
-let mut rng = thread_rng();
-let node1 = candidate_nodes.choose(&mut rng).unwrap();
-let node2 = candidate_nodes.choose(&mut rng).unwrap();
+// Prefer nodes that have recorded at least one successful check
+let healthy: Vec<_> = candidate_nodes.iter().filter(|n| n.success_count > 0).collect();
+let population = if healthy.is_empty() { &candidate_nodes } else { &healthy };
+
+// P2C with distinct samples
+let mut rng = thread_rng();
+let picks = population.choose_multiple(&mut rng, 2).collect::<Vec<_>>();
+let (node1, node2) = (picks[0], picks[1]);
```

monero-rpc-pool/src/database.rs (1)
116-116: SQLite URL format is invalid
`format!("sqlite:{}?mode=rwc", db_path.display())` produces `sqlite:/tmp/foo.db?…`, missing the double slash expected by `sqlx` (`sqlite:///tmp/foo.db`), and fails on Windows.

```diff
-let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
+let database_url = format!("sqlite://{}?mode=rwc", db_path.display());
```

monero-rpc-pool/src/discovery.rs (1)
227-288: Health checks are fully sequential
With hundreds of nodes, per-node 10 s timeout + 2 s delay ⇒ hours per sweep. Use `FuturesUnordered` + `Semaphore` to limit concurrency (e.g. 50 in-flight).

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::sync::Semaphore;

let semaphore = Arc::new(Semaphore::new(50));
let mut tasks = FuturesUnordered::new();

for node in all_nodes {
    let permit = semaphore.clone().acquire_owned().await?;
    let self_clone = self.clone();
    let node_clone = node.clone();
    tasks.push(async move {
        let _permit = permit;
        self_clone.check_node_health(&node_clone.full_url).await
    });
}

while let Some(result) = tasks.next().await {
    // Process result
}
```
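The bounded-concurrency idea is independent of the async runtime. A std-only, thread-based analogue (with a doubling function standing in for the real HTTP health check, which obviously is not part of this sketch) illustrates the pattern of capping in-flight work and harvesting completions before admitting more:

```rust
use std::sync::mpsc;
use std::thread;

// Process `nodes` with at most `limit` checks in flight at once.
fn check_all(nodes: Vec<u32>, limit: usize) -> Vec<u32> {
    let (tx, rx) = mpsc::channel();
    let mut pending = nodes.into_iter();
    let mut in_flight = 0usize;
    let mut results = Vec::new();
    loop {
        // Top up to `limit` concurrent workers
        while in_flight < limit {
            match pending.next() {
                Some(node) => {
                    let tx = tx.clone();
                    thread::spawn(move || {
                        // Stand-in for the real HTTP health check
                        tx.send(node * 2).unwrap();
                    });
                    in_flight += 1;
                }
                None => break,
            }
        }
        if in_flight == 0 {
            break; // nothing pending and nothing in flight
        }
        // Harvest one completion before admitting more work
        results.push(rx.recv().unwrap());
        in_flight -= 1;
    }
    results
}

fn main() {
    let mut out = check_all((1..=10).collect(), 3);
    out.sort();
    println!("{:?}", out);
}
```

Results arrive out of order, just as with `buffer_unordered`, which is why the caller sorts before comparing.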
🧹 Nitpick comments (6)
monero-rpc-pool/src/simple_handlers.rs (2)
157-163: Unused error variable in error handling
In the error handling block, the error `e` is captured but not used in the logging or error reporting, which reduces debugging capabilities.

```rust
Err(e) => {
    Ok(HealthCheckOutcome {
        was_successful: false,
        latency,
        discovered_network: None,
    })
}
```

Consider logging the error for debugging:

```diff
 Err(e) => {
+    debug!("JSON parsing failed for response: {}", e);
     Ok(HealthCheckOutcome {
         was_successful: false,
         latency,
         discovered_network: None,
     })
 }
```
396-418: Handler function name doesn't match actual implementation
The function is named `simple_proxy_handler`, but the actual function being called from the router is `simple_http_handler`. This creates confusion about which handler is being used.
Consider renaming this function to match its usage or updating the router configuration to use the correct handler name.
monero-rpc-pool/src/pool.rs (1)
256-256: Typo in TODO comment
Minor spelling error in the TODO comment.

```diff
-pub avg_reliable_latency_ms: Option<f64>, // TOOD: Why is this an Option, we hate Options
+pub avg_reliable_latency_ms: Option<f64>, // TODO: Why is this an Option, we hate Options
```

monero-rpc-pool/src/database.rs (1)
313-339: Complex nested reliability score calculation
The nested subquery for calculating reliability scores is complex and repeated in multiple places, making it hard to maintain and potentially slow for large datasets.
Consider extracting the reliability score calculation into a database view or a separate function to improve maintainability:

```sql
CREATE VIEW reliable_nodes_view AS
SELECT
    node_id,
    (success_rate * request_weight * 0.8 + latency_factor * 0.2) AS reliability_score
FROM (
    -- reliability calculation logic here
);
```

Then reference this view in the main queries instead of repeating the complex calculation.
monero-rpc-pool/src/discovery.rs (2)
157-163: Unused error in JSON parsing failure
The error `e` from JSON parsing is captured but not used, reducing debugging capabilities.

```diff
 Err(e) => {
+    debug!("JSON parsing failed for health check response: {}", e);
     Ok(HealthCheckOutcome {
         was_successful: false,
         latency,
         discovered_network: None,
     })
 }
```

173-179: Unused error in request failure
The error `e` from the request is captured but not used, making debugging network issues difficult.

```diff
 Err(e) => {
+    debug!("Health check request failed for {}: {}", url, e);
     Ok(HealthCheckOutcome {
         was_successful: false,
         latency,
         discovered_network: None,
     })
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
- monero-rpc-pool/src/pool.rs (1 hunks)
- monero-rpc-pool/src/simple_handlers.rs (1 hunks)
- swap/src/asb/config.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- swap/src/asb/config.rs
Actionable comments posted: 2
♻️ Duplicate comments (5)
src-gui/src/renderer/components/pages/help/SettingsBox.tsx (1)
311-318: UI toggle does not clear stale custom URL.
The `MoneroRpcPoolSetting` dispatches only the boolean; previously entered custom node URLs remain in state. Unless those URLs are cleared (or the backend checks the flag), the pool won't be used.
Consider clearing custom node URLs when switching to pool mode:

```diff
 const handleChange = (
   event: React.MouseEvent<HTMLElement>,
   newValue: string,
 ) => {
   if (newValue !== null) {
     dispatch(setUseMoneroRpcPool(newValue === "pool"));
+    if (newValue === "pool") {
+      // Clear any existing custom node URLs to ensure pool is used
+      dispatch(clearMoneroNodeUrl()); // Assuming this action exists
+    }
   }
 };
```

monero-rpc-pool/src/simple_handlers.rs (1)
68-71: `reqwest::Client` rebuilt on every request.
Constructing a client per call is expensive (TLS config & connection pool). Store one `Client` inside `AppState` and reuse it.
Consider initializing the client once in `AppState`:

```diff
+// In AppState struct
+pub struct AppState {
+    pub node_pool: Arc<RwLock<NodePool>>,
+    pub http_client: reqwest::Client,
+}

+// Initialize once during app creation
+let http_client = reqwest::Client::builder()
+    .timeout(std::time::Duration::from_secs(30))
+    .build()?;

-async fn raw_http_request(
-    node_url: &str,
-    path: &str,
-    method: &str,
-    headers: &HeaderMap,
-    body: Option<&[u8]>,
-) -> Result<Response, HandlerError> {
-    let client = reqwest::Client::builder()
-        .timeout(std::time::Duration::from_secs(30))
-        .build()
-        .map_err(|e| HandlerError::RequestError(e.to_string()))?;
+async fn raw_http_request(
+    client: &reqwest::Client,
+    node_url: &str,
+    path: &str,
+    method: &str,
+    headers: &HeaderMap,
+    body: Option<&[u8]>,
+) -> Result<Response, HandlerError> {
```

monero-rpc-pool/src/pool.rs (1)
60-70: Power of Two Choices can select the same node twice.
The current implementation uses `choose()` twice independently, which can return the same node, negating the benefit of the P2C algorithm.
Ensure distinct node selection:

```diff
 // Power of Two Choices: pick 2 random nodes, select the better one
 let mut rng = thread_rng();
-let node1 = candidate_nodes.choose(&mut rng).unwrap();
-let node2 = candidate_nodes.choose(&mut rng).unwrap();
+let distinct_nodes: Vec<_> = candidate_nodes.choose_multiple(&mut rng, 2).collect();
+let (node1, node2) = if distinct_nodes.len() == 2 {
+    (distinct_nodes[0], distinct_nodes[1])
+} else {
+    // Fallback if only one node available
+    (candidate_nodes[0], candidate_nodes[0])
+};
```

monero-rpc-pool/src/database.rs (1)
116-116: SQLite URL format is invalid.
`format!("sqlite:{}?mode=rwc", db_path.display())` produces `sqlite:/tmp/foo.db?…`, missing the double slash expected by `sqlx` (`sqlite:///tmp/foo.db`), and fails on Windows.
Apply this fix:

```diff
-let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
+let database_url = format!("sqlite://{}?mode=rwc", db_path.display());
```

monero-rpc-pool/src/discovery.rs (1)
319-406: Sequential health checks cause scalability issues.
This function processes nodes sequentially with a 2-second delay between checks, which will cause extremely long execution times with large node lists.
The previous review comment about using `FuturesUnordered` with `Semaphore` for concurrent health checks (limited to ~50 simultaneous) remains valid and should be implemented.
🧹 Nitpick comments (6)
src-gui/src/renderer/components/pages/help/SettingsBox.tsx (1)
64-64: Remove unused import.
The `useAppSelector` import is not used anywhere in this file. Apply this diff to remove the unused import:

```diff
-import { useAppSelector } from "store/hooks";
```

monero-rpc-pool/src/discovery.rs (5)
25-38: Consider documenting the port-to-scheme mapping logic.
The hardcoded port numbers (18089, 38089, 443) used for HTTPS scheme determination appear in multiple places throughout the code. Consider documenting why these specific ports default to HTTPS or extracting this logic into a helper function.
54-62: LGTM with minor suggestion.
The constructor is well-structured with reasonable defaults. The 10-second timeout and user agent are appropriate choices.
Consider handling the potential panic from `.unwrap()`:

```diff
-    let client = Client::builder()
-        .timeout(Duration::from_secs(10))
-        .user_agent("monero-rpc-pool/1.0")
-        .build()
-        .unwrap();
+    let client = Client::builder()
+        .timeout(Duration::from_secs(10))
+        .user_agent("monero-rpc-pool/1.0")
+        .build()
+        .expect("Failed to build HTTP client");
```
326-330: Simplify the complex SQL query construction.
The query contains many hardcoded NULL values and zero defaults, which suggests a schema mismatch or incomplete data model.
Consider either:
- Simplifying the query to only fetch needed fields:
```diff
-    let all_nodes = sqlx::query_as::<_, MoneroNode>(
-        "SELECT *, 0 as success_count, 0 as failure_count, NULL as last_success, NULL as last_failure, NULL as last_checked, 0 as is_reliable, NULL as avg_latency_ms, NULL as min_latency_ms, NULL as max_latency_ms, NULL as last_latency_ms FROM monero_nodes ORDER BY id",
-    )
+    let all_nodes = sqlx::query_as::<_, MoneroNode>(
+        "SELECT id, scheme, host, port, full_url, network, created_at FROM monero_nodes ORDER BY id",
+    )
```
- Or creating a separate struct for this specific query if all fields are truly needed.
397-397: Consider making the delay configurable.
The 2-second delay between health checks is hardcoded. Consider making this configurable to allow tuning based on network conditions and node count.

```diff
-    tokio::time::sleep(Duration::from_secs(2)).await;
+    tokio::time::sleep(Duration::from_millis(self.config.health_check_delay_ms)).await;
```
410-410: Consider making the discovery interval configurable.
The 1-hour periodic discovery interval is hardcoded. Different deployments might benefit from different discovery frequencies.

```diff
-    let mut interval = tokio::time::interval(Duration::from_secs(3600)); // Every hour
+    let mut interval = tokio::time::interval(Duration::from_secs(self.config.discovery_interval_secs));
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (22)
- monero-rpc-pool/Cargo.toml (1 hunks)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
- monero-rpc-pool/src/main.rs (1 hunks)
- monero-rpc-pool/src/pool.rs (1 hunks)
- monero-rpc-pool/src/simple_handlers.rs (1 hunks)
- monero-sys/src/lib.rs (1 hunks)
- src-gui/src/renderer/components/alert/UnfinishedSwapsAlert.tsx (1 hunks)
- src-gui/src/renderer/components/modal/swap/SwapStateStepper.tsx (1 hunks)
- src-gui/src/renderer/components/modal/swap/pages/in_progress/BitcoinLockTxInMempoolPage.tsx (4 hunks)
- src-gui/src/renderer/components/modal/swap/pages/in_progress/XmrLockInMempoolPage.tsx (1 hunks)
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx (1 hunks)
- src-gui/src/renderer/components/pages/help/SettingsBox.tsx (9 hunks)
- src-gui/src/renderer/rpc.ts (3 hunks)
- src-gui/src/utils/formatUtils.ts (1 hunks)
- src-tauri/src/lib.rs (4 hunks)
- swap/src/bin/asb.rs (2 hunks)
- swap/src/cli/api/request.rs (5 hunks)
- swap/src/cli/api/tauri_bindings.rs (4 hunks)
- swap/src/monero/wallet.rs (1 hunks)
- swap/src/protocol/bob/swap.rs (7 hunks)
✅ Files skipped from review due to trivial changes (4)
- src-gui/src/renderer/components/alert/UnfinishedSwapsAlert.tsx
- src-gui/src/renderer/components/modal/swap/pages/in_progress/XmrLockInMempoolPage.tsx
- src-gui/src/utils/formatUtils.ts
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx
🚧 Files skipped from review as they are similar to previous changes (5)
- src-gui/src/renderer/rpc.ts
- monero-rpc-pool/Cargo.toml
- swap/src/bin/asb.rs
- swap/src/cli/api/tauri_bindings.rs
- src-tauri/src/lib.rs
🔇 Additional comments (32)
src-gui/src/renderer/components/modal/swap/SwapStateStepper.tsx (1)
66-69: LGTM: Improved type safety for optional confirmation counts.
The explicit undefined check before comparing `btc_lock_confirmations` to zero is a good defensive programming practice that aligns with the backend changes where confirmation fields were changed to optional types. This prevents potential runtime errors from undefined comparisons.
swap/src/monero/wallet.rs (1)
220-234:

```shell
#!/bin/bash
set -euxo pipefail

# Confirm the struct definition and its fields
rg -n "pub struct" swap/src/monero/wallet.rs

# Show the MoneroWallet struct fields
awk '/pub struct .*MoneroWallet/,/}/' swap/src/monero/wallet.rs

# Find switch_daemon method and any references to self.daemon
rg -n "fn switch_daemon" swap/src/monero/wallet.rs
rg -n "self.daemon" swap/src/monero/wallet.rs || true

# Check for other wallet fields (e.g., swap wallets) in this file
rg -n "wallets?" swap/src/monero/wallet.rs || true
```
</details> <details> <summary>monero-sys/src/lib.rs (1)</summary> `1057-1057`: **LGTM: Appropriate visibility change for daemon switching functionality.** Making `set_daemon_address` public is necessary to support the new dynamic daemon switching feature in the wallet module. The method already has proper error handling and follows existing patterns. </details> <details> <summary>src-gui/src/renderer/components/modal/swap/pages/in_progress/BitcoinLockTxInMempoolPage.tsx (4)</summary> `2-2`: **Good practice: Using utility function for consistent formatting.** Importing and using `formatConfirmations` ensures consistent display of confirmation counts across the UI. --- `19-26`: **LGTM: Proper handling of optional confirmation counts.** The conditional logic correctly handles the case where `btc_lock_confirmations` may be `undefined`, providing appropriate user messaging for both confirmed and unconfirmed states. This aligns well with the backend type changes. --- `35-38`: **LGTM: Consistent undefined checking for warning threshold.** The conditional rendering logic properly checks for undefined before comparing against the warning threshold, maintaining type safety throughout the component. --- `49-49`: **LGTM: Consistent formatting with utility function.** Using `formatConfirmations` instead of raw number display ensures consistent formatting across the application and properly handles the optional nature of the confirmation count. </details> <details> <summary>swap/src/protocol/bob/swap.rs (5)</summary> `149-196`: **Excellent timing improvement for wallet height recording.** The refactoring to record the Monero wallet blockchain height after user approval but before broadcasting the Bitcoin lock transaction is a smart improvement. This ensures the recorded height is as close as possible to the actual Bitcoin lock time, reducing the risk of missing the Monero lock transaction if Bob goes offline. 
The detailed comment explains the rationale clearly, making the code maintainable. --- `227-227`: **Consistent confirmation count representation using Option<u64>.** Changing from `u64` to `Option<u64>` for confirmation counts provides better semantic representation where `None` indicates zero confirmations and `Some(count)` represents actual confirmation counts. This aligns with the broader changes in the Tauri event system. --- `292-292`: **Proper confirmation count handling in transaction monitoring.** The change to `Some(u64::from(confirmed.confirmations()))` correctly wraps the confirmation count in an Option, maintaining consistency with the new event emission pattern throughout the codebase. --- `337-337`: **Consistent XMR lock transaction confirmation handling.** Setting `xmr_lock_tx_confirmations: None` for initial transaction detection is semantically correct, indicating the transaction is in mempool but not yet confirmed. --- `372-372`: **Accurate confirmation count propagation in callback.** The use of `Some(confirmations)` in the confirmation callback properly represents actual confirmation counts, maintaining consistency with the new event emission pattern and providing accurate progress information to the frontend. </details> <details> <summary>monero-rpc-pool/src/main.rs (6)</summary> `12-22`: **Well-structured network parsing with clear error messages.** The network parsing function properly handles case-insensitive input and provides clear error messages for invalid networks. Good defensive programming practice. --- `64-77`: **Appropriate logging configuration with granular control.** The logging setup correctly differentiates between verbose and normal modes, showing debug logs from the application crate while filtering noise from dependencies. Including file and line numbers aids debugging. 
--- `90-112`: **Robust URL parsing with proper error handling.** The manual node parsing logic correctly handles URL validation, extracts components safely, and provides appropriate fallback port values for different schemes. The error handling with warnings allows the application to continue with valid nodes even if some are malformed. --- `158-164`: **Graceful fallback on node discovery failure.** The error handling for node discovery failure is appropriate - it logs the error, warns about the fallback, and continues with an empty node list rather than failing completely. This allows the service to start and potentially discover nodes later. --- `185-188`: **Proper error handling with process exit.** Using `std::process::exit(1)` on server startup failure is appropriate for a main application. The error is logged before exiting, providing useful debugging information. --- `138-141`: **Verify default port assignment for discovered nodes.** The default port assignment uses 18089 for HTTPS and 18081 for HTTP. Ensure these are the correct standard Monero RPC ports for the respective protocols. ```web What are the standard Monero RPC ports for HTTP and HTTPS connections?swap/src/cli/api/request.rs (5)
1200-1215: Well-implemented approval request resolution with proper error handling.
The function correctly parses the UUID, validates the Tauri handle availability, and provides meaningful error messages. The implementation follows the established error handling patterns in the codebase.
1371-1407: Significantly improved error handling and logging for node availability checks.
The enhanced error handling provides much better debugging information by logging the URL, network, error details, and error chain. This will be invaluable for troubleshooting node connectivity issues. The pattern of returning `available: false` on error is appropriate for this use case.
1440-1440: Enhanced struct derives improve debugging capabilities.
Adding `Debug`, `Eq`, and `PartialEq` derives to the approval-related structs enables better testing, debugging, and comparison operations. This is a good practice for data structures used in APIs.
1456-1456: Proper delegation to the new approval resolution function.
The Request trait implementation correctly delegates to the new `resolve_approval_request` function, maintaining the established pattern while enabling the new functionality.
1195-1198: Simplified implementation but verify if context removal is intentional.
The function signature still accepts `_context` but doesn't use it, and now returns an empty JSON object. This suggests the functionality may be deprecated or moved elsewhere.

```shell
#!/bin/bash
# Search for other implementations or calls to get_current_swap to understand the change
rg -A 5 -B 5 "get_current_swap"
```

src-gui/src/renderer/components/pages/help/SettingsBox.tsx (1)
359-366: LGTM on the conditional UI behavior.
The tooltip and disabled-state logic correctly prevents users from editing custom nodes when the RPC pool is active, maintaining UI consistency with the backend behavior.
monero-rpc-pool/src/simple_handlers.rs (1)
261-284: LGTM on the Power of Two Choices implementation.
The node selection logic correctly implements racing between two nodes and tracks errors for all attempted nodes. The circuit-breaking approach with `tried_nodes` tracking prevents infinite retries.
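For reference, the power-of-two-choices selection discussed throughout this review can be sketched std-only. The deterministic `pick_two_distinct` index helper below is purely illustrative, standing in for `rand`'s sampling; it guarantees two different indices whenever at least two candidates exist:

```rust
#[derive(Debug, Clone)]
struct Node {
    url: &'static str,
    latency_ms: u64,
}

// Derive two distinct indices in [0, len); the offset is always in 1..len,
// so the second index can never equal the first.
fn pick_two_distinct(len: usize, seed: u64) -> (usize, usize) {
    let i = (seed as usize) % len;
    let j = (i + 1 + (seed as usize / 7) % (len - 1)) % len;
    (i, j)
}

// Power of two choices: sample two distinct candidates, keep the better one
// (here: lower latency; the real crate's scoring is richer).
fn p2c(nodes: &[Node], seed: u64) -> &Node {
    assert!(nodes.len() >= 2, "P2C needs at least two candidates");
    let (i, j) = pick_two_distinct(nodes.len(), seed);
    if nodes[i].latency_ms <= nodes[j].latency_ms {
        &nodes[i]
    } else {
        &nodes[j]
    }
}

fn main() {
    let nodes = [
        Node { url: "node-a", latency_ms: 50 },
        Node { url: "node-b", latency_ms: 20 },
        Node { url: "node-c", latency_ms: 80 },
    ];
    println!("picked {}", p2c(&nodes, 0).url);
}
```

The point of sampling without replacement is exactly what the review flags: with replacement, both "choices" can be the same node, and the comparison degenerates to uniform random selection.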
monero-rpc-pool/src/lib.rs (2)
30-41: LGTM on task management approach.
The `TaskManager` struct with a `Drop` implementation properly handles background task cleanup, addressing the previous concern about task leaks. This ensures graceful shutdown and prevents dangling threads.
156-195: Well-designed library integration function.
The `start_server_with_random_port` function provides a clean interface for embedding the RPC pool in other applications, returning server info, status updates, and task management handles. The port 0 binding for random port assignment is elegant.
monero-rpc-pool/src/pool.rs (2)
147-220: LGTM on multi-step node selection strategy.
The `get_top_reliable_nodes` implementation provides robust fallback logic: primary fetch by recent success, fallback to any successful nodes, and final fill-up with random nodes. The detailed logging helps with debugging the selection process.
79-100: Well-balanced goodness score calculation.
The scoring algorithm appropriately weighs success rate, request volume, and latency. The 80/20 split between success rate and latency provides good balance, and the recency weighting prevents stale data from skewing selections.
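A hedged sketch of such an 80/20 weighting follows; the function name, field names, and the latency normalization are assumptions chosen for illustration, not the crate's exact formula:

```rust
// Illustrative goodness score: 80% weight on success rate, 20% on an
// inverted latency factor. A node with no recorded checks scores 0.
fn goodness(successes: u64, failures: u64, avg_latency_ms: f64) -> f64 {
    let total = successes + failures;
    if total == 0 {
        return 0.0;
    }
    let success_rate = successes as f64 / total as f64;
    // Map latency onto (0, 1]: 0 ms -> 1.0, larger latencies decay toward 0
    let latency_factor = 1.0 / (1.0 + avg_latency_ms / 1000.0);
    0.8 * success_rate + 0.2 * latency_factor
}

fn main() {
    println!("{:.3}", goodness(95, 5, 120.0));
}
```

The heavy weight on success rate means a fast-but-flaky node cannot outrank a slightly slower node that answers reliably, which matches the selection goals described above.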
monero-rpc-pool/src/database.rs (2)
125-184: LGTM on database schema design.
The migration creates a well-structured schema with proper indexes for performance. The separation of `monero_nodes` for identity and `health_checks` for events provides good normalization and efficient querying patterns.
281-355: Complex but efficient node selection query.
The `get_identified_nodes` query efficiently computes aggregated health statistics and reliability scores in a single query. The reliability scoring logic is correctly implemented in SQL, avoiding expensive post-processing.
monero-rpc-pool/src/discovery.rs (1)
235-291: Good error handling with proper logging.
The error handling in the health check function is well-structured, returning a proper `HealthCheckOutcome` even for failures and logging appropriately.
monero-rpc-pool/src/discovery.rs
Outdated
```rust
let scheme = if port == 18089 || port == 38089 || port == 443 {
    "https"
} else {
    "http"
};
```
🛠️ Refactor suggestion
Extract duplicated scheme determination logic.
The same port-based scheme determination logic appears in multiple places (lines 172-176, 212-216). This violates the DRY principle and makes maintenance harder.
Extract to a helper function:
```diff
+impl NodeDiscovery {
+    fn determine_scheme(port: u16) -> &'static str {
+        match port {
+            18089 | 38089 | 443 => "https",
+            _ => "http",
+        }
+    }
+}

-    let scheme = if port == 18089 || port == 38089 || port == 443 {
-        "https"
-    } else {
-        "http"
-    };
+    let scheme = Self::determine_scheme(port);
```

🤖 Prompt for AI Agents
In monero-rpc-pool/src/discovery.rs around lines 172 to 176, the logic
determining the scheme based on the port number is duplicated elsewhere (lines
212-216). To fix this, extract this port-to-scheme mapping into a single helper
function that takes a port number as input and returns the appropriate scheme
string. Replace all duplicated instances with calls to this new helper function
to adhere to the DRY principle and simplify maintenance.
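The suggested helper is small enough to verify standalone; the port list (18089, 38089, 443) is taken from the diff above:

```rust
// Port-to-scheme mapping extracted into one place, per the review suggestion.
fn determine_scheme(port: u16) -> &'static str {
    match port {
        18089 | 38089 | 443 => "https",
        _ => "http",
    }
}

fn main() {
    for port in [443u16, 18081, 18089, 38089] {
        println!("{} -> {}", port, determine_scheme(port));
    }
}
```

Centralizing the mapping means a future port addition changes one `match` arm instead of several scattered `if` chains.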
Actionable comments posted: 1
♻️ Duplicate comments (6)
monero-rpc-pool/src/simple_handlers.rs (2)
68-71: Inefficient HTTP client creation on every request.
Creating a new `reqwest::Client` for each request is expensive due to TLS setup and connection pool overhead. This design will significantly impact performance under load.
Store a single client instance in `AppState` and reuse it:

```diff
-pub async fn raw_http_request(
-    node_url: &str,
-    path: &str,
-    method: &str,
-    headers: &HeaderMap,
-    body: Option<&[u8]>,
-) -> Result<Response, HandlerError> {
-    let client = reqwest::Client::builder()
-        .timeout(std::time::Duration::from_secs(30))
-        .build()
-        .map_err(|e| HandlerError::RequestError(e.to_string()))?;
+pub async fn raw_http_request(
+    client: &reqwest::Client,
+    node_url: &str,
+    path: &str,
+    method: &str,
+    headers: &HeaderMap,
+    body: Option<&[u8]>,
+) -> Result<Response, HandlerError> {
```
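The "build once, reuse" fix can also be illustrated std-only with `OnceLock`; a `String` stands in for the real `reqwest::Client`, which is not part of the standard library:

```rust
use std::sync::OnceLock;

// The expensive resource is constructed on first access and shared afterwards.
// In the pool this would be a reqwest::Client stored in AppState.
static CLIENT: OnceLock<String> = OnceLock::new();

fn client() -> &'static String {
    // Stand-in for reqwest::Client::builder().timeout(...).build()
    CLIENT.get_or_init(|| String::from("configured-client"))
}

fn main() {
    // Both calls return the same shared instance
    println!("{}", client());
    println!("same instance: {}", std::ptr::eq(client(), client()));
}
```

Passing the client into `raw_http_request` (as in the diff above) is the more testable design, since callers can inject a mock-configured client; the static is merely the smallest demonstration of the reuse pattern.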
439-442: Error responses lose detailed information.
The proxy returns a generic "All nodes failed" message, discarding detailed error information that would be valuable for debugging client issues.
Return structured error responses with details:

```diff
-    Err(_) => Response::builder()
-        .status(StatusCode::INTERNAL_SERVER_ERROR)
-        .body(Body::from("All nodes failed"))
-        .unwrap_or_else(|_| Response::new(Body::empty())),
+    Err(e) => {
+        let error_json = serde_json::json!({
+            "error": "All nodes failed",
+            "details": e.to_string()
+        });
+        Response::builder()
+            .status(StatusCode::INTERNAL_SERVER_ERROR)
+            .header("content-type", "application/json")
+            .body(Body::from(error_json.to_string()))
+            .unwrap_or_else(|_| Response::new(Body::empty()))
+    }
```

monero-rpc-pool/src/pool.rs (1)
60-70: Power of Two Choices algorithm can select duplicate nodes.
The current implementation uses `choose()`, which samples with replacement, potentially selecting the same node twice and defeating the purpose of the algorithm.
Use `choose_multiple()` to ensure distinct node selection:

```diff
-    // Power of Two Choices: pick 2 random nodes, select the better one
-    let mut rng = thread_rng();
-    let node1 = candidate_nodes.choose(&mut rng).unwrap();
-    let node2 = candidate_nodes.choose(&mut rng).unwrap();
+    // Power of Two Choices: pick 2 distinct random nodes, select the better one
+    let mut rng = thread_rng();
+    let picks: Vec<_> = candidate_nodes.choose_multiple(&mut rng, 2).collect();
+    let (node1, node2) = match picks.len() {
+        2 => (picks[0], picks[1]),
+        1 => (picks[0], picks[0]), // fallback for single node
+        0 => return Ok(None), // no nodes available
+        _ => unreachable!(),
+    };
```

monero-rpc-pool/src/database.rs (1)
116-116: Incorrect SQLite URL format.
The SQLite connection string is missing the required double slash, which can cause connection failures, especially on Windows.
Apply this fix:

```diff
-    let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
+    let database_url = format!("sqlite://{}?mode=rwc", db_path.display());
```

monero-rpc-pool/src/discovery.rs (2)
282-343: Sequential health checks will cause performance bottlenecks.
The current implementation processes nodes sequentially with a 2-second delay between checks, which will cause extremely long execution times with large node lists.
Implement concurrent health checking with bounded parallelism:

```diff
+use futures::stream::{self, StreamExt};
+use std::sync::Arc;
+use tokio::sync::Semaphore;

 pub async fn health_check_all_nodes(&self, target_network: &str) -> Result<()> {
     // ... existing setup code ...
-    for node in all_nodes {
-        // ... health check logic ...
-        tokio::time::sleep(Duration::from_secs(2)).await;
-    }
+    // Process nodes concurrently with limited parallelism
+    let semaphore = Arc::new(Semaphore::new(50)); // Limit to 50 concurrent checks
+    let results = stream::iter(all_nodes.into_iter())
+        .map(|node| {
+            let semaphore = semaphore.clone();
+            let discovery = self.clone();
+            async move {
+                let _permit = semaphore.acquire().await.unwrap();
+                let outcome = discovery.check_node_health(&node.full_url).await;
+                (node, outcome)
+            }
+        })
+        .buffer_unordered(50)
+        .collect::<Vec<_>>()
+        .await;
+
+    // Process results...
+    for (node, outcome_result) in results {
+        // ... existing processing logic ...
+    }
```
157-161: Duplicate scheme determination logic violates the DRY principle.
The port-to-scheme mapping logic is duplicated in multiple places, making maintenance harder and potentially introducing inconsistencies.
Extract to a helper function:

```diff
+impl NodeDiscovery {
+    fn determine_scheme(port: u64) -> &'static str {
+        match port {
+            18089 | 38089 | 443 => "https",
+            _ => "http",
+        }
+    }
+}

-    let scheme = if port == 18089 || port == 443 {
-        "https"
-    } else {
-        "http"
-    };
+    let scheme = Self::determine_scheme(port);
```

Apply this pattern to all instances where scheme determination occurs.
Also applies to: 212-216
🧹 Nitpick comments (2)
monero-rpc-pool/src/simple_handlers.rs (1)
265-290: Potential inefficiency in node selection strategy.
The racing logic could repeatedly select the same nodes in the pool, reducing the effectiveness of the "Power of Two Choices" algorithm when there are few healthy nodes available.
Consider implementing a more sophisticated selection that ensures better distribution:

```diff
 // Power of Two Choices within the exclusive pool
-while pool_index < available_pool.len() && tried_nodes.len() < POOL_SIZE {
+let max_attempts = (available_pool.len() * 2).min(POOL_SIZE * 2);
+let mut attempt_count = 0;
+while pool_index < available_pool.len() && tried_nodes.len() < POOL_SIZE && attempt_count < max_attempts {
+    attempt_count += 1;
```

This prevents infinite loops when the pool is nearly exhausted and ensures fair node distribution.
monero-rpc-pool/src/pool.rs (1)
259-259: Fix typo in comment.
There's a typo in the comment that should be corrected for clarity.

```diff
-    pub avg_reliable_latency_ms: Option<f64>, // TOOD: Why is this an Option, we hate Options
+    pub avg_reliable_latency_ms: Option<f64>, // TODO: Why is this an Option, we hate Options
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- .gitignore (1 hunks)
- .vscode/settings.json (1 hunks)
- monero-rpc-pool/Cargo.toml (1 hunks)
- monero-rpc-pool/migrations/20250618212026_initial_schema.sql (1 hunks)
- monero-rpc-pool/migrations/20250618212059_insert_default_nodes.sql (1 hunks)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/main.rs (1 hunks)
- monero-rpc-pool/src/pool.rs (1 hunks)
- monero-rpc-pool/src/simple_handlers.rs (1 hunks)
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx (1 hunks)
- src-gui/src/renderer/components/pages/help/SettingsBox.tsx (9 hunks)
- swap/src/asb/config.rs (5 hunks)
- swap/src/protocol/bob/swap.rs (7 hunks)
✅ Files skipped from review due to trivial changes (1)
- monero-rpc-pool/migrations/20250618212059_insert_default_nodes.sql
🚧 Files skipped from review as they are similar to previous changes (7)
- .gitignore
- swap/src/asb/config.rs
- .vscode/settings.json
- swap/src/protocol/bob/swap.rs
- monero-rpc-pool/Cargo.toml
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx
- monero-rpc-pool/src/main.rs
🔇 Additional comments (1)
monero-rpc-pool/migrations/20250618212026_initial_schema.sql (1)
1-31: LGTM! Well-designed database schema. The migration script establishes a solid foundation for the Monero RPC pool with:
- Proper table normalization separating node metadata from health check events
- Appropriate foreign key constraints with CASCADE deletion
- Strategic indexes on frequently queried columns (full_url, network, node_id, timestamp)
- SQLite-compatible data types and syntax
The schema supports the application's requirements for node discovery, health tracking, and pool management effectively.
monero-rpc-pool/src/database.rs (Outdated)

```rust
// Complex case - we need to exclude specific IDs
// For simplicity, we'll fetch more nodes than needed and filter them in memory
// This avoids the dynamic SQL complexity
let extra_factor = 3; // Fetch 3x more to account for exclusions
let fetch_limit = limit * extra_factor;

let all_nodes = sqlx::query_as::<_, MoneroNode>(
    r#"
    SELECT
        n.*,
        COALESCE(stats.success_count, 0) as success_count,
        COALESCE(stats.failure_count, 0) as failure_count,
        stats.last_success,
        stats.last_failure,
        stats.last_checked,
        0 as is_reliable,
        stats.avg_latency_ms,
        stats.min_latency_ms,
        stats.max_latency_ms,
        stats.last_latency_ms
    FROM monero_nodes n
    LEFT JOIN (
        SELECT
            node_id,
            SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) as success_count,
            SUM(CASE WHEN NOT was_successful THEN 1 ELSE 0 END) as failure_count,
            MAX(CASE WHEN was_successful THEN timestamp END) as last_success,
            MAX(CASE WHEN NOT was_successful THEN timestamp END) as last_failure,
            MAX(timestamp) as last_checked,
            AVG(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as avg_latency_ms,
            MIN(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as min_latency_ms,
            MAX(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as max_latency_ms,
            (SELECT latency_ms FROM health_checks hc2 WHERE hc2.node_id = health_checks.node_id ORDER BY timestamp DESC LIMIT 1) as last_latency_ms
        FROM health_checks
        GROUP BY node_id
    ) stats ON n.id = stats.node_id
    WHERE n.network = ?
    ORDER BY RANDOM()
    LIMIT ?
    "#,
)
.bind(network)
.bind(fetch_limit)
.fetch_all(&self.pool)
.await?;

// Filter out excluded IDs and limit the result
let exclude_set: std::collections::HashSet<i64> = exclude_ids.iter().copied().collect();
let filtered_nodes: Vec<MoneroNode> = all_nodes
    .into_iter()
    .filter(|node| {
        if let Some(id) = node.id {
            !exclude_set.contains(&id)
        } else {
            true // Include nodes without IDs
        }
    })
    .take(limit as usize)
    .collect();

debug!(
    "Retrieved {} random nodes for network {} (excluding {} IDs)",
    filtered_nodes.len(),
    network,
    exclude_ids.len()
);
Ok(filtered_nodes)
```
🛠️ Refactor suggestion
Inefficient exclusion logic with potential memory issues.
The get_random_nodes method fetches 3x more nodes than needed when exclusions are present, which could be memory-intensive for large node sets and doesn't guarantee finding enough non-excluded nodes.
Consider a more efficient approach:
```diff
-// Complex case - we need to exclude specific IDs
-// For simplicity, we'll fetch more nodes than needed and filter them in memory
-// This avoids the dynamic SQL complexity
-let extra_factor = 3; // Fetch 3x more to account for exclusions
-let fetch_limit = limit * extra_factor;
+// Use SQL NOT IN clause for more efficient exclusion
+let placeholders = exclude_ids.iter().map(|_| "?").collect::<Vec<_>>().join(",");
+let query = format!(
+    r#"SELECT ... WHERE n.network = ? AND n.id NOT IN ({}) ORDER BY RANDOM() LIMIT ?"#,
+    placeholders
+);
+
+let mut query_builder = sqlx::query_as::<_, MoneroNode>(&query)
+    .bind(network);
+for &exclude_id in exclude_ids {
+    query_builder = query_builder.bind(exclude_id);
+}
+let nodes = query_builder.bind(limit).fetch_all(&self.pool).await?;
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In monero-rpc-pool/src/database.rs between lines 624 and 691, the current
approach fetches three times more nodes than needed and filters exclusions in
memory, which can be inefficient and memory-heavy. To fix this, modify the SQL
query to exclude the specified IDs directly using a WHERE clause with a NOT IN
condition for the exclude_ids set, and adjust the LIMIT accordingly to fetch
only the required number of nodes. This will reduce memory usage and ensure the
query returns the correct number of nodes without post-filtering.
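As a standalone illustration of the placeholder-building step, the sketch below constructs the `NOT IN (…)` clause sized to the exclusion list so filtering happens in SQL rather than in memory. The table and column names are assumptions for the sketch, not necessarily the crate's actual query:

```rust
/// Build a parameterized SELECT whose NOT IN clause is sized to the
/// exclusion list, so excluded ids are filtered by the database.
fn build_exclusion_query(exclude_count: usize) -> String {
    if exclude_count == 0 {
        // No exclusions: omit the NOT IN clause entirely.
        return "SELECT * FROM monero_nodes WHERE network = ? ORDER BY RANDOM() LIMIT ?".to_string();
    }
    let placeholders = std::iter::repeat("?")
        .take(exclude_count)
        .collect::<Vec<_>>()
        .join(", ");
    format!(
        "SELECT * FROM monero_nodes WHERE network = ? AND id NOT IN ({placeholders}) ORDER BY RANDOM() LIMIT ?"
    )
}

fn main() {
    // One bind for the network, three for excluded ids, one for the limit.
    println!("{}", build_exclusion_query(3));
}
```

The caller then binds the network, each excluded id in order, and finally the limit, keeping the bind count in lockstep with the generated placeholders.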
Actionable comments posted: 2
♻️ Duplicate comments (2)
monero-rpc-pool/src/database.rs (2)
124-125: SQLite URL format is still incorrect - this appears to be a duplicate of a previous issue. The SQLite URL format `sqlite:{}?mode=rwc` is missing the required double slashes. This was flagged in previous reviews but appears unresolved.

```diff
-    let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
+    let database_url = format!("sqlite://{}?mode=rwc", db_path.display());
```
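Isolated as a helper (the function name is illustrative, not part of the crate), the corrected construction looks like this; `mode=rwc` asks the driver to read/write and create the file if it is missing:

```rust
use std::path::Path;

/// Format an SQLx-style SQLite connection URL with the scheme-plus-`//`
/// prefix the comment asks for.
fn sqlite_url(db_path: &Path) -> String {
    format!("sqlite://{}?mode=rwc", db_path.display())
}

fn main() {
    println!("{}", sqlite_url(Path::new("/tmp/monero-pool/nodes.db")));
}
```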
632-699: Inefficient exclusion logic remains unresolved. This method still fetches 3x more nodes than needed and filters exclusions in memory, which was flagged in previous reviews. This approach is memory-intensive and doesn't guarantee finding enough non-excluded nodes.

The suggested SQL-based exclusion using a NOT IN clause from previous reviews would be more efficient:

```rust
// Use SQL NOT IN clause for more efficient exclusion
let placeholders = exclude_ids.iter().map(|_| "?").collect::<Vec<_>>().join(",");
let query = format!(
    r#"SELECT ... WHERE n.network = ? AND n.id NOT IN ({}) ORDER BY RANDOM() LIMIT ?"#,
    placeholders
);

let mut query_builder = sqlx::query_as::<_, MoneroNode>(&query).bind(network);
for &exclude_id in exclude_ids {
    query_builder = query_builder.bind(exclude_id);
}
let nodes = query_builder.bind(limit).fetch_all(&self.pool).await?;
```
🧹 Nitpick comments (3)
monero-rpc-pool/src/config.rs (1)
23-23: Address the TODO: Consider using a builder pattern to reduce code duplication. The three constructor methods (`from_args`, `for_library`, `for_library_with_data_dir`) have significant code duplication. A builder pattern would improve maintainability and reduce repetition. Consider implementing a builder pattern:
```diff
+impl Config {
+    pub fn builder() -> ConfigBuilder {
+        ConfigBuilder::new()
+    }
+}
+
+pub struct ConfigBuilder {
+    host: Option<String>,
+    port: Option<u16>,
+    nodes: Option<Vec<String>>,
+    data_dir: Option<PathBuf>,
+}
+
+impl ConfigBuilder {
+    pub fn new() -> Self {
+        Self {
+            host: None,
+            port: None,
+            nodes: None,
+            data_dir: None,
+        }
+    }
+
+    pub fn host(mut self, host: String) -> Self {
+        self.host = Some(host);
+        self
+    }
+
+    pub fn port(mut self, port: u16) -> Self {
+        self.port = Some(port);
+        self
+    }
+
+    pub fn data_dir(mut self, data_dir: PathBuf) -> Self {
+        self.data_dir = Some(data_dir);
+        self
+    }
+
+    pub fn build(self) -> Config {
+        let default = Config::default();
+        Config {
+            host: self.host.unwrap_or(default.host),
+            port: self.port.unwrap_or(default.port),
+            nodes: self.nodes.unwrap_or(default.nodes),
+            data_dir: self.data_dir,
+        }
+    }
+}
```

monero-rpc-pool/src/database.rs (1)
14-14: Consider using `u16` for the port field instead of `i64`. Network ports are typically represented as `u16` (0-65535), but this field uses `i64`. This could lead to invalid port values and wastes memory.

```diff
-    pub port: i64,
+    pub port: u16,
```

Note: This change would require updating related SQL queries and bindings throughout the codebase.
monero-rpc-pool/src/lib.rs (1)
157-243: Consolidate server startup functions to reduce duplication. The pattern of having separate `_with_data_dir` variants (`run_server_with_data_dir`, `start_server_with_random_port_and_data_dir`) creates unnecessary code duplication. Consider consolidating by making the base functions accept an optional data directory:

```diff
-pub async fn run_server(config: Config, network: String) -> Result<()> {
+pub async fn run_server(config: Config, network: String, data_dir: Option<std::path::PathBuf>) -> Result<()> {
+    let final_config = if let Some(data_dir) = data_dir {
+        Config { data_dir: Some(data_dir), ..config }
+    } else {
+        config
+    };
-    let app = create_app(config.clone(), network).await?;
+    let app = create_app(final_config.clone(), network).await?;
     // ... rest of implementation
```

Then the `_with_data_dir` variants can simply call the base function with `Some(data_dir)`.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- monero-rpc-pool/src/config.rs (1 hunks)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
🔇 Additional comments (2)
monero-rpc-pool/src/lib.rs (2)
30-41: Excellent task lifecycle management. The `TaskManager` with its `Drop` implementation properly addresses the previous review concern about background task leakage. This ensures clean shutdown and prevents dangling tasks.
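The pattern being praised can be sketched independently of tokio. This stand-in uses OS threads and an atomic stop flag — the names are illustrative, and the crate's actual `TaskManager` manages tokio tasks rather than threads — but the `Drop` guarantee is the same: no worker outlives its manager.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::JoinHandle;
use std::time::Duration;

struct TaskManager {
    stop: Arc<AtomicBool>,
    handles: Vec<JoinHandle<()>>,
}

impl TaskManager {
    fn new() -> Self {
        Self {
            stop: Arc::new(AtomicBool::new(false)),
            handles: Vec::new(),
        }
    }

    /// Spawn a worker that repeats `work` until the manager is dropped.
    fn spawn<F: FnMut() + Send + 'static>(&mut self, mut work: F) {
        let stop = Arc::clone(&self.stop);
        self.handles.push(std::thread::spawn(move || {
            while !stop.load(Ordering::Relaxed) {
                work();
                std::thread::sleep(Duration::from_millis(1));
            }
        }));
    }
}

impl Drop for TaskManager {
    fn drop(&mut self) {
        // Signal every worker, then join so no task outlives the manager.
        self.stop.store(true, Ordering::Relaxed);
        for handle in self.handles.drain(..) {
            let _ = handle.join();
        }
    }
}

fn main() {
    let mut mgr = TaskManager::new();
    mgr.spawn(|| { /* a periodic health check would go here */ });
    std::thread::sleep(Duration::from_millis(5));
    // `mgr` dropped at end of scope: the worker is signalled and joined.
}
```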
51-135: Config parameter is now properly utilized. The function correctly uses the `Config` parameter, addressing the previous review concern. Database initialization respects `config.data_dir`, configured nodes are inserted, and all relevant config fields are used appropriately.
monero-rpc-pool/src/config.rs (Outdated)
```rust
pub fn for_library(host: Option<String>, port: Option<u16>) -> Self {
    Self {
        host: host.unwrap_or_else(|| "127.0.0.1".to_string()),
        port: port.unwrap_or(0), // 0 for random port
        nodes: vec![],           // Empty - rely on discovery
        data_dir: None,          // Use default data directory
    }
}
```
🛠️ Refactor suggestion
Eliminate code duplication between library constructor methods.
The for_library and for_library_with_data_dir methods have nearly identical logic except for the data_dir field.
Consolidate these methods:
```diff
-    /// Creates a new config suitable for library usage with automatic discovery
-    pub fn for_library(host: Option<String>, port: Option<u16>) -> Self {
-        Self {
-            host: host.unwrap_or_else(|| "127.0.0.1".to_string()),
-            port: port.unwrap_or(0), // 0 for random port
-            nodes: vec![], // Empty - rely on discovery
-            data_dir: None, // Use default data directory
-        }
-    }
+    /// Creates a new config suitable for library usage with automatic discovery
+    pub fn for_library(host: Option<String>, port: Option<u16>) -> Self {
+        Self::for_library_with_data_dir(host, port, None)
+    }

-    /// Creates a new config for library usage with a custom data directory
-    pub fn for_library_with_data_dir(
-        host: Option<String>,
-        port: Option<u16>,
-        data_dir: PathBuf,
-    ) -> Self {
-        Self {
-            host: host.unwrap_or_else(|| "127.0.0.1".to_string()),
-            port: port.unwrap_or(0), // 0 for random port
-            nodes: vec![], // Empty - rely on discovery
-            data_dir: Some(data_dir),
-        }
-    }
+    /// Creates a new config for library usage with optional custom data directory
+    pub fn for_library_with_data_dir(
+        host: Option<String>,
+        port: Option<u16>,
+        data_dir: Option<PathBuf>,
+    ) -> Self {
+        Self {
+            host: host.unwrap_or_else(|| "127.0.0.1".to_string()),
+            port: port.unwrap_or(0), // 0 for random port
+            nodes: vec![], // Empty - rely on discovery
+            data_dir,
+        }
+    }
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Creates a new config suitable for library usage with automatic discovery
pub fn for_library(host: Option<String>, port: Option<u16>) -> Self {
    Self::for_library_with_data_dir(host, port, None)
}

/// Creates a new config for library usage with optional custom data directory
pub fn for_library_with_data_dir(
    host: Option<String>,
    port: Option<u16>,
    data_dir: Option<PathBuf>,
) -> Self {
    Self {
        host: host.unwrap_or_else(|| "127.0.0.1".to_string()),
        port: port.unwrap_or(0), // 0 for random port
        nodes: vec![],           // Empty - rely on discovery
        data_dir,
    }
}
```
🤖 Prompt for AI Agents
In monero-rpc-pool/src/config.rs around lines 36 to 43, the for_library and
for_library_with_data_dir methods contain duplicated logic except for the
data_dir field. Refactor by creating a single private constructor method that
accepts an Option for data_dir along with host and port, and have both public
methods call this private method with the appropriate data_dir value. This will
consolidate the shared initialization logic and eliminate duplication.
```sql
LEFT JOIN (
    SELECT
        node_id,
        SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) as success_count,
        SUM(CASE WHEN NOT was_successful THEN 1 ELSE 0 END) as failure_count,
        MAX(CASE WHEN was_successful THEN timestamp END) as last_success,
        MAX(CASE WHEN NOT was_successful THEN timestamp END) as last_failure,
        MAX(timestamp) as last_checked,
        AVG(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as avg_latency_ms,
        MIN(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as min_latency_ms,
        MAX(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as max_latency_ms,
        (SELECT latency_ms FROM health_checks hc2 WHERE hc2.node_id = health_checks.node_id ORDER BY timestamp DESC LIMIT 1) as last_latency_ms
    FROM health_checks
    GROUP BY node_id
) stats ON n.id = stats.node_id
LEFT JOIN (
    SELECT DISTINCT node_id FROM (
        SELECT
            n2.id as node_id,
            COALESCE(s2.success_count, 0) as success_count,
            COALESCE(s2.failure_count, 0) as failure_count,
            s2.avg_latency_ms,
            (CAST(COALESCE(s2.success_count, 0) AS REAL) / CAST(COALESCE(s2.success_count, 0) + COALESCE(s2.failure_count, 0) AS REAL)) *
            (MIN(COALESCE(s2.success_count, 0) + COALESCE(s2.failure_count, 0), 200) / 200.0) * 0.8 +
            CASE
                WHEN s2.avg_latency_ms IS NOT NULL THEN (1.0 - (MIN(s2.avg_latency_ms, 2000) / 2000.0)) * 0.2
                ELSE 0.0
            END as reliability_score
        FROM monero_nodes n2
        LEFT JOIN (
            SELECT
                node_id,
                SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) as success_count,
                SUM(CASE WHEN NOT was_successful THEN 1 ELSE 0 END) as failure_count,
                AVG(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as avg_latency_ms
            FROM health_checks
            GROUP BY node_id
        ) s2 ON n2.id = s2.node_id
        WHERE n2.network = ? AND (COALESCE(s2.success_count, 0) + COALESCE(s2.failure_count, 0)) > 0
        ORDER BY reliability_score DESC
        LIMIT 4
    )
) reliable_nodes ON n.id = reliable_nodes.node_id
```
🛠️ Refactor suggestion
Extensive SQL duplication violates DRY principle.
The reliability score calculation and node statistics subqueries are duplicated across multiple methods (get_identified_nodes, get_reliable_nodes, get_node_stats, get_identified_nodes_with_success). This makes maintenance difficult and error-prone.
Consider extracting common SQL fragments into constants or helper methods:
```rust
const NODE_STATS_SUBQUERY: &str = r#"
    SELECT
        node_id,
        SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) as success_count,
        SUM(CASE WHEN NOT was_successful THEN 1 ELSE 0 END) as failure_count,
        MAX(CASE WHEN was_successful THEN timestamp END) as last_success,
        MAX(CASE WHEN NOT was_successful THEN timestamp END) as last_failure,
        MAX(timestamp) as last_checked,
        AVG(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as avg_latency_ms,
        MIN(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as min_latency_ms,
        MAX(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as max_latency_ms,
        (SELECT latency_ms FROM health_checks hc2 WHERE hc2.node_id = health_checks.node_id ORDER BY timestamp DESC LIMIT 1) as last_latency_ms
    FROM health_checks
    GROUP BY node_id
"#;

const RELIABILITY_SCORE_CALC: &str = r#"
    (CAST(COALESCE(s2.success_count, 0) AS REAL) / CAST(COALESCE(s2.success_count, 0) + COALESCE(s2.failure_count, 0) AS REAL)) *
    (MIN(COALESCE(s2.success_count, 0) + COALESCE(s2.failure_count, 0), 200) / 200.0) * 0.8 +
    CASE
        WHEN s2.avg_latency_ms IS NOT NULL THEN (1.0 - (MIN(s2.avg_latency_ms, 2000) / 2000.0)) * 0.2
        ELSE 0.0
    END
"#;
```
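Translated out of SQL, the score that `RELIABILITY_SCORE_CALC` computes is plain arithmetic, which makes the weights easy to unit-test. This is a direct transcription of the formula, not code from the crate:

```rust
/// Success rate, damped by sample size (capped at 200 checks), carries 80%
/// of the score; the capped average latency (2000 ms ceiling) carries 20%.
fn reliability_score(success: u32, failure: u32, avg_latency_ms: Option<f64>) -> f64 {
    let total = success + failure;
    if total == 0 {
        return 0.0; // The SQL filters these rows out with a `total > 0` predicate.
    }
    let success_rate = f64::from(success) / f64::from(total);
    let volume_factor = f64::from(total.min(200)) / 200.0;
    let latency_component = match avg_latency_ms {
        Some(ms) => 1.0 - ms.min(2000.0) / 2000.0,
        None => 0.0,
    };
    success_rate * volume_factor * 0.8 + latency_component * 0.2
}

fn main() {
    // A perfect, well-sampled, zero-latency node reaches the maximum score.
    println!("{}", reliability_score(200, 0, Some(0.0)));
}
```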
"#;🤖 Prompt for AI Agents
In monero-rpc-pool/src/database.rs around lines 253 to 295, the SQL queries for
node statistics and reliability score calculation are duplicated across multiple
methods, violating the DRY principle. To fix this, extract the common SQL
fragments into reusable constants or helper methods, such as defining
NODE_STATS_SUBQUERY and RELIABILITY_SCORE_CALC as string constants. Then replace
the duplicated SQL snippets in all affected methods with references to these
constants to centralize and simplify maintenance.
Actionable comments posted: 2
♻️ Duplicate comments (4)
monero-rpc-pool/src/simple_handlers.rs (2)
68-71: `reqwest::Client` still rebuilt on every request. The past review comment about expensive client construction per request was not addressed. This creates unnecessary overhead for TLS setup and connection pooling.

Store the client in `AppState` and reuse it:

```rust
// In AppState struct
pub struct AppState {
    pub node_pool: Arc<RwLock<NodePool>>,
    pub http_client: reqwest::Client,
}

// Initialize once during app setup
let client = reqwest::Client::builder()
    .timeout(std::time::Duration::from_secs(30))
    .build()
    .map_err(|e| HandlerError::RequestError(e.to_string()))?;
```
437-443: Error response still loses detailed information. The past review comment about losing detailed error information in `proxy_request` was not addressed.

Return structured error responses with details:

```diff
 async fn proxy_request(
     state: &AppState,
     path: &str,
     method: &str,
     headers: &HeaderMap,
     body: Option<&[u8]>,
 ) -> Response {
     match race_requests(state, path, method, headers, body).await {
         Ok(res) => res,
-        Err(_) => Response::builder()
-            .status(StatusCode::INTERNAL_SERVER_ERROR)
-            .body(Body::from("All nodes failed"))
-            .unwrap_or_else(|_| Response::new(Body::empty())),
+        Err(e) => {
+            let error_json = serde_json::json!({
+                "error": "All nodes failed",
+                "details": e.to_string()
+            });
+            Response::builder()
+                .status(StatusCode::INTERNAL_SERVER_ERROR)
+                .header("content-type", "application/json")
+                .body(Body::from(error_json.to_string()))
+                .unwrap_or_else(|_| Response::new(Body::empty()))
+        }
     }
 }
```

monero-rpc-pool/src/discovery.rs (2)
240-302: Health checks remain fully sequential. The past review comment about sequential health checks causing long execution times was not addressed. With 2-second delays between checks, this could take hours for hundreds of nodes.

Implement concurrent health checks with limited parallelism:

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use tokio::sync::Semaphore;
use std::sync::Arc;

pub async fn health_check_all_nodes(&self, target_network: Network) -> Result<()> {
    const MAX_CONCURRENT: usize = 50;
    let semaphore = Arc::new(Semaphore::new(MAX_CONCURRENT));
    let mut futures = FuturesUnordered::new();

    for node in all_nodes {
        let semaphore = semaphore.clone();
        let self_clone = self.clone();
        futures.push(async move {
            let _permit = semaphore.acquire().await.unwrap();
            self_clone.check_single_node_health(node).await
        });
    }

    while let Some(result) = futures.next().await {
        // Process result
    }
}
```
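The same bounded-parallelism idea can be shown standalone with plain threads: a fixed pool of workers drains a shared queue, so at most `max_concurrent` probes run at once. The hypothetical `check` callback stands in for the real RPC probe; nothing here is the crate's API.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

/// Run `check` over every node URL with at most `max_concurrent` in flight.
fn check_all_nodes(
    nodes: Vec<String>,
    max_concurrent: usize,
    check: fn(&str) -> bool,
) -> Vec<(String, bool)> {
    let queue = Arc::new(Mutex::new(nodes));
    let (tx, rx) = mpsc::channel();
    let mut workers = Vec::new();
    for _ in 0..max_concurrent {
        let queue = Arc::clone(&queue);
        let tx = tx.clone();
        workers.push(thread::spawn(move || loop {
            // Lock only long enough to pop the next URL.
            let node = match queue.lock().unwrap().pop() {
                Some(n) => n,
                None => break, // Queue drained; this worker exits.
            };
            let healthy = check(&node);
            tx.send((node, healthy)).expect("receiver alive");
        }));
    }
    drop(tx); // Channel closes once the last worker finishes.
    let results: Vec<_> = rx.iter().collect();
    for worker in workers {
        worker.join().expect("worker panicked");
    }
    results
}

fn main() {
    let nodes: Vec<String> = (0..10).map(|i| format!("node{i}")).collect();
    let results = check_all_nodes(nodes, 3, |url: &str| !url.ends_with('3'));
    println!("checked {} nodes", results.len());
}
```

The tokio version above is the better fit for the crate; this sketch just makes the concurrency bound concrete without an async runtime.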
364-366: Code duplication in scheme determination logic. The past review comment about duplicated port-based scheme logic was not addressed. The same logic appears here and elsewhere.

Extract to a helper function:

```diff
+impl NodeDiscovery {
+    fn determine_scheme_from_port(port: u16) -> &'static str {
+        match port {
+            18089 | 38089 | 443 => "https",
+            _ => "http",
+        }
+    }
+}

 let port = url
     .port()
     .unwrap_or(if scheme == "https" { 18089 } else { 18081 }) as i64;
```

Call sites that currently repeat the literal port comparison can then use `Self::determine_scheme_from_port` instead.
🧹 Nitpick comments (13)
monero-rpc-pool/.sqlx/query-549f5ef13ec7bf5d987dcb893753a9c903edcafa3a66bd82965b40a9e7f238b6.json (2)
3-3: Refactor query for readability and performance. The embedded SQL is highly nested and repeats heavy calculations inline. Consider breaking the logic into named CTEs to:

- Isolate the health-check aggregates (`stats`)
- Compute the `reliability_score` once
- Reference that CTE in both the `reliable_nodes` join and your final ordering

This will improve maintainability, reduce duplication, and may speed up execution by avoiding multiple scans.
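A sketch of that CTE structure, assuming the `monero_nodes`/`health_checks` schema shown elsewhere in this review (SQLite supports both CTEs and window functions, so the latest latency can come from `ROW_NUMBER()` in one pass instead of a correlated subquery):

```sql
WITH stats AS (
    SELECT
        node_id,
        SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) AS success_count,
        AVG(CASE WHEN was_successful THEN latency_ms END) AS avg_latency_ms
    FROM health_checks
    GROUP BY node_id
),
latest AS (
    SELECT node_id, latency_ms AS last_latency_ms
    FROM (
        SELECT node_id, latency_ms,
               ROW_NUMBER() OVER (PARTITION BY node_id ORDER BY timestamp DESC) AS rn
        FROM health_checks
    )
    WHERE rn = 1
)
SELECT n.*, stats.success_count, stats.avg_latency_ms, latest.last_latency_ms
FROM monero_nodes n
LEFT JOIN stats  ON stats.node_id  = n.id
LEFT JOIN latest ON latest.node_id = n.id;
```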
3-3: Optimize the `last_latency_ms` subquery. The correlated subquery for `last_latency_ms` runs per grouping and can be expensive. You could leverage a window function (e.g., `ROW_NUMBER() OVER (PARTITION BY node_id ORDER BY timestamp DESC)`) in a CTE to fetch the latest `latency_ms` in one pass.

monero-rpc-pool/regenerate_sqlx_cache.sh (2)
1-22: Enable stricter error handling and precondition checks. Add `set -euo pipefail` to catch undefined variables and pipeline failures. Also assert you're in the crate root (e.g., check for `Cargo.toml` or the `migrations/` directory) before running the script to avoid silent errors in other contexts.
35-41: Harden cleanup on signals and errors. Extend your `trap` to include `ERR SIGINT SIGTERM` and handle potential failures in `rm -rf`. For example:

```shell
trap 'cleanup; exit 1' ERR SIGINT SIGTERM
```

This ensures temp files are always cleaned, even on interrupts.
monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json (1)
3-3: Use CTEs to simplify multi-level aggregation. The current inline `LEFT JOIN` for `stats` and the nested `SELECT … LIMIT 4` for `reliable` duplicate work and reduce clarity. Extract these steps into CTEs:

```sql
WITH stats AS (...),
     reliable_nodes AS (...)
SELECT
    COUNT(*) AS total,
    CAST(SUM(CASE WHEN stats.success_count > 0 THEN 1 ELSE 0 END) AS INTEGER) AS reachable,
    (SELECT COUNT(*) FROM reliable_nodes) AS reliable
FROM monero_nodes n
LEFT JOIN stats ON ...
```

This will make the logic more transparent and may improve performance.

monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json (1)
monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json (1)
3-3: Extractreliable_nodeslogic into its own CTE.Currently, the reliability formula and top-4 selection are embedded in a nested join. Pulling this into a CTE will reduce duplication and make the core node query easier to follow:
WITH reliable_nodes AS ( SELECT DISTINCT n2.id AS node_id, … reliability_score … FROM … ) SELECT … LEFT JOIN reliable_nodes USING (node_id) …monero-rpc-pool/.sqlx/query-5ff27bdd9b6e7aadc8dd4936e0ee7e6a611aaef28697a0e9535dfb30d1c4861d.json (1)
3-3: Refactor deep joins into CTEs. The nested joins for `stats` and `reliable_nodes` are hard to maintain. Consolidating them in CTEs will make the main `SELECT` concise and improve clarity:

```sql
WITH stats AS (...),
     reliable_nodes AS (...)
SELECT …
FROM monero_nodes n
LEFT JOIN stats USING (node_id)
LEFT JOIN reliable_nodes USING (node_id)
…
```

monero-rpc-pool/.sqlx/query-5a25c95c04b11a60a04ad97b5fb684e9a0cc2eb5daf64f33e924f0c38a2edfec.json (1)
1-116: Complex SQL query with potential performance concerns. The query implements a sophisticated reliability scoring algorithm with multiple nested subqueries. While functionally correct, there are several considerations:

- Performance: The nested subqueries and multiple aggregations could be expensive with many nodes
- Reliability Score Logic: The scoring formula `(success_rate * min(total_checks, 200) / 200.0) * 0.8 + latency_component * 0.2` is reasonable, but the hardcoded values (200 max checks, 0.8/0.2 weights) should be documented
- Random Selection: Using `ORDER BY RANDOM()` can be expensive on large datasets

Consider adding database indexes on frequently queried columns:

```sql
CREATE INDEX IF NOT EXISTS idx_health_checks_node_timestamp ON health_checks(node_id, timestamp);
CREATE INDEX IF NOT EXISTS idx_monero_nodes_network ON monero_nodes(network);
```

monero-rpc-pool/src/main.rs (1)
58-67: Improve logging filter configuration. The custom filter creation could be more robust and the function name is misleading.

```diff
-// Custom filter function that overrides log levels for our crate
-fn create_level_override_filter(base_filter: &str) -> EnvFilter {
-    // Parse the base filter and modify it to treat all monero_rpc_pool logs as trace
-    let mut filter = EnvFilter::new(base_filter);
-
-    // Add a directive that treats all levels from our crate as trace
-    filter = filter.add_directive("monero_rpc_pool=trace".parse().unwrap());
-
-    filter
-}
+fn create_logging_filter(base_filter: &str) -> EnvFilter {
+    EnvFilter::new(base_filter)
+        .add_directive("monero_rpc_pool=trace".parse().expect("Invalid log directive"))
+}
```

monero-rpc-pool/src/simple_handlers.rs (2)
240-266: Inefficient node selection strategy. The "Power of Two Choices" implementation fetches 20 nodes upfront but only uses 2 at a time, potentially wasting the pool selection effort.

```diff
-const POOL_SIZE: usize = 20;
+const BATCH_SIZE: usize = 2;
+const MAX_RETRIES: usize = 10;

 let mut tried_nodes = std::collections::HashSet::new();
-let mut pool_index = 0;
 let mut collected_errors: Vec<(String, String)> = Vec::new();

-// Get the exclusive pool of 20 nodes once at the beginning
-let available_pool = {
+let mut retry_count = 0;
+while retry_count < MAX_RETRIES && tried_nodes.len() < MAX_RETRIES * BATCH_SIZE {
+    retry_count += 1;
+    // Get fresh nodes for each retry
     let node_pool_guard = state.node_pool.read().await;
     let reliable_nodes = node_pool_guard
-        .get_top_reliable_nodes(POOL_SIZE)
+        .get_top_reliable_nodes(BATCH_SIZE + tried_nodes.len())
         .await
         .map_err(|e| HandlerError::PoolError(e.to_string()))?;
```

This approach gets fresher node selections and avoids holding onto potentially stale node lists.
300-301: Potential race condition in node health tracking. In concurrent racing scenarios, health metrics might be recorded multiple times for the same failed batch, skewing the statistics.

Consider tracking which specific nodes failed in each race to avoid duplicate failure recordings:

```diff
 // Since we don't know which specific node failed in the race,
-// record the error for all nodes in this batch
-for node_url in &current_nodes {
-    collected_errors.push((node_url.clone(), e.to_string()));
-}
+// track errors without immediately recording failures
+for node_url in &current_nodes {
+    collected_errors.push((node_url.clone(), e.to_string()));
+}
+// Health recording happens in single_raw_request for accurate tracking
```

monero-rpc-pool/src/discovery.rs (2)
88-94: Inefficient duplicate removal. The duplicate removal logic uses `Vec::contains()`, which is O(n) for each check, making the overall complexity O(n²).

```diff
-// Remove duplicates
-let mut unique_nodes = Vec::new();
-for node in nodes {
-    if !unique_nodes.contains(&node) {
-        unique_nodes.push(node);
-    }
-}
+// Remove duplicates efficiently
+let mut seen = std::collections::HashSet::new();
+let unique_nodes: Vec<String> = nodes
+    .into_iter()
+    .filter(|node| seen.insert(node.clone()))
+    .collect();
```
314-314: Hardcoded discovery interval may be too frequent. A 1-hour interval for full node discovery and health checking could be resource-intensive, especially with sequential health checks.

Consider making the interval configurable and potentially longer:

```diff
-let mut interval = tokio::time::interval(Duration::from_secs(3600)); // Every hour
+let mut interval = tokio::time::interval(Duration::from_secs(4 * 3600)); // Every 4 hours
```

Or add a configuration parameter:

```rust
pub async fn periodic_discovery_task(&self, target_network: Network, interval_hours: u64) -> Result<()> {
    let mut interval = tokio::time::interval(Duration::from_secs(interval_hours * 3600));
    // ...
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json (1 hunks)
- monero-rpc-pool/.sqlx/query-1766f6ee1f0d61c8eaadfc7e208181b24c04cc911407fb64e04a1e74e304c3dc.json (1 hunks)
- monero-rpc-pool/.sqlx/query-340255b201ff024dfabb719fad11a7d4138d85a626390eb596dc59dd7fff1b3b.json (1 hunks)
- monero-rpc-pool/.sqlx/query-549f5ef13ec7bf5d987dcb893753a9c903edcafa3a66bd82965b40a9e7f238b6.json (1 hunks)
- monero-rpc-pool/.sqlx/query-56549d93f0e2106297b85565a52b2d9ac64d5b50fb7aa6028be3fcf266fc1d5d.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5736de2aac47eb69d7f6835d266aa28732b02a5e8e055ffaebcb452ed1b5044c.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5a25c95c04b11a60a04ad97b5fb684e9a0cc2eb5daf64f33e924f0c38a2edfec.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5ff27bdd9b6e7aadc8dd4936e0ee7e6a611aaef28697a0e9535dfb30d1c4861d.json (1 hunks)
- monero-rpc-pool/.sqlx/query-a032eb9773d4553aeaff4fb15ed99dbaef7d16d48750ee7bd4ab83233a9a732b.json (1 hunks)
- monero-rpc-pool/.sqlx/query-ba231efaf208a42fa857f716ef296b428c937f2eb7c8ce9c631f7f721e914c14.json (1 hunks)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json (1 hunks)
- monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json (1 hunks)
- monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json (1 hunks)
- monero-rpc-pool/regenerate_sqlx_cache.sh (1 hunks)
- monero-rpc-pool/src/config.rs (1 hunks)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
- monero-rpc-pool/src/main.rs (1 hunks)
- monero-rpc-pool/src/simple_handlers.rs (1 hunks)
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx (1 hunks)
- src-tauri/src/lib.rs (5 hunks)
✅ Files skipped from review due to trivial changes (8)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json
- monero-rpc-pool/.sqlx/query-1766f6ee1f0d61c8eaadfc7e208181b24c04cc911407fb64e04a1e74e304c3dc.json
- monero-rpc-pool/.sqlx/query-5736de2aac47eb69d7f6835d266aa28732b02a5e8e055ffaebcb452ed1b5044c.json
- monero-rpc-pool/.sqlx/query-340255b201ff024dfabb719fad11a7d4138d85a626390eb596dc59dd7fff1b3b.json
- monero-rpc-pool/.sqlx/query-a032eb9773d4553aeaff4fb15ed99dbaef7d16d48750ee7bd4ab83233a9a732b.json
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json
- monero-rpc-pool/.sqlx/query-56549d93f0e2106297b85565a52b2d9ac64d5b50fb7aa6028be3fcf266fc1d5d.json
- monero-rpc-pool/.sqlx/query-ba231efaf208a42fa857f716ef296b428c937f2eb7c8ce9c631f7f721e914c14.json
🚧 Files skipped from review as they are similar to previous changes (5)
- src-gui/src/renderer/components/pages/help/MoneroPoolHealthBox.tsx
- src-tauri/src/lib.rs
- monero-rpc-pool/src/config.rs
- monero-rpc-pool/src/lib.rs
- monero-rpc-pool/src/database.rs
🔇 Additional comments (5)
monero-rpc-pool/.sqlx/query-549f5ef13ec7bf5d987dcb893753a9c903edcafa3a66bd82965b40a9e7f238b6.json (2)
3-3: Verify intended ordering vs. reliability score.

The main `ORDER BY` uses success ratio and latency metrics, but doesn't sort by the computed `reliability_score`. Please confirm whether the pool should prioritize nodes by this combined score rather than re-deriving ratio and latency here.

4-115: Confirm `describe` metadata accuracy.

The `describe` block shows `type_info: "Null"` for columns annotated with `!: i64` (e.g., `success_count!: i64`), which implies they are non-nullable. Please ensure the SQLx cache was generated correctly and that the nullability/type mapping matches your schema.

monero-rpc-pool/regenerate_sqlx_cache.sh (1)

29-47: Validate necessity of `cargo sqlx database create` for SQLite.

SQLite will auto-create the database file on first access. Confirm whether `cargo sqlx database create` is required for SQLite, or whether you can streamline by only running migrations against the file directly.

monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json (1)

3-3: Confirm health-check filtering semantics.

The `WHERE n.network = ? AND stats.success_count > 0` condition retains only nodes with at least one success. Please verify whether you intend to exclude nodes that were never health-checked (`success_count IS NULL`) or only those that have truly succeeded at least once.

monero-rpc-pool/.sqlx/query-5ff27bdd9b6e7aadc8dd4936e0ee7e6a611aaef28697a0e9535dfb30d1c4861d.json (1)

3-3: Ensure nodes without any health checks are handled intentionally.

This query selects all `monero_nodes` for the given network, including those with no health-check records (`stats.success_count` and `stats.failure_count` both NULL). If untested nodes should be excluded or flagged differently, consider adding a filter like:

```sql
WHERE n.network = ? AND (stats.success_count IS NOT NULL OR stats.failure_count IS NOT NULL)
```
```rust
// Use manual nodes for discovery
discovery.discover_and_insert_nodes(args.network, manual_nodes).await?;
parsed_nodes
```
🛠️ Refactor suggestion
Redundant node insertion after discovery.
The manual nodes are inserted twice - once in the loop above and again in the discovery call.
```diff
- // Use manual nodes for discovery
- discovery.discover_and_insert_nodes(args.network, manual_nodes).await?;
- parsed_nodes
+ parsed_nodes
```

The `discover_and_insert_nodes` call is redundant since the nodes were already inserted in the previous loop.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
parsed_nodes
```
🤖 Prompt for AI Agents
In monero-rpc-pool/src/main.rs around lines 130 to 132, the manual nodes are
being inserted twice: once in the previous loop and again in the
discover_and_insert_nodes call. To fix this, remove the
discover_and_insert_nodes call to avoid redundant insertion of manual nodes
since they have already been inserted earlier.
```rust
for node_url in &manual_nodes {
    // Parse the URL to extract components
    if let Ok(url) = url::Url::parse(node_url) {
        let scheme = url.scheme().to_string();
        let _protocol = if scheme == "https" { "ssl" } else { "tcp" };
        let host = url.host_str().unwrap_or("").to_string();
        let port = url
            .port()
            .unwrap_or(if scheme == "https" { 443 } else { 80 })
            as i64;

        let full_url = format!("{}://{}:{}", scheme, host, port);

        // Insert into database
        if let Err(e) = db.upsert_node(&scheme, &host, port).await {
            warn!("Failed to insert manual node {}: {}", node_url, e);
        } else {
            parsed_nodes.push(full_url);
        }
    } else {
        warn!("Failed to parse manual node URL: {}", node_url);
    }
}
```
Improve URL parsing and error handling.
The URL parsing logic has several issues that could lead to incorrect node configuration.
```diff
 for node_url in &manual_nodes {
-    // Parse the URL to extract components
     if let Ok(url) = url::Url::parse(node_url) {
         let scheme = url.scheme().to_string();
-        let _protocol = if scheme == "https" { "ssl" } else { "tcp" };
-        let host = url.host_str().unwrap_or("").to_string();
+        let host = url.host_str()
+            .ok_or_else(|| format!("Invalid host in URL: {}", node_url))?
+            .to_string();
         let port = url
             .port()
-            .unwrap_or(if scheme == "https" { 443 } else { 80 })
+            .unwrap_or(if scheme == "https" { 18089 } else { 18081 })
             as i64;
```

Issues:

- Using HTTP default ports (80/443) instead of Monero RPC ports (18081/18089)
- Silent fallback to an empty string for a missing host
- Unused `_protocol` variable
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
for node_url in &manual_nodes {
    if let Ok(url) = url::Url::parse(node_url) {
        let scheme = url.scheme().to_string();
        let host = url.host_str()
            .ok_or_else(|| format!("Invalid host in URL: {}", node_url))?
            .to_string();
        let port = url
            .port()
            .unwrap_or(if scheme == "https" { 18089 } else { 18081 })
            as i64;
        let full_url = format!("{}://{}:{}", scheme, host, port);
        // Insert into database
        if let Err(e) = db.upsert_node(&scheme, &host, port).await {
            warn!("Failed to insert manual node {}: {}", node_url, e);
        } else {
            parsed_nodes.push(full_url);
        }
    } else {
        warn!("Failed to parse manual node URL: {}", node_url);
    }
}
```
🤖 Prompt for AI Agents
In monero-rpc-pool/src/main.rs lines 106 to 128, improve URL parsing by
replacing default HTTP ports 80 and 443 with Monero RPC default ports 18081 for
HTTP and 18089 for HTTPS. Handle missing host by returning an error or skipping
the node instead of using an empty string. Remove the unused _protocol variable
to clean up the code.
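As a side note, the scheme-to-port mapping the prompt describes can be captured in one small helper. This is an illustrative sketch rather than code from the PR (the function name is made up); the ports themselves are monerod's conventional defaults: 18081 for the standard RPC endpoint and 18089 as the commonly used restricted/TLS-fronted port.

```rust
/// Default Monero RPC port implied by a URL scheme when none is given.
/// 18081 is monerod's standard RPC port; 18089 is the conventional
/// restricted/TLS-fronted port exposed by many public nodes.
fn default_monero_rpc_port(scheme: &str) -> u16 {
    match scheme {
        "https" => 18089,
        _ => 18081,
    }
}

fn main() {
    println!("{}", default_monero_rpc_port("https"));
    println!("{}", default_monero_rpc_port("http"));
}
```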
Actionable comments posted: 2
♻️ Duplicate comments (1)
monero-rpc-pool/src/discovery.rs (1)
231-293: Sequential health checks cause performance bottleneck.

This is the same issue flagged in previous reviews. With hundreds of nodes, a 2-second delay per node plus 10-second timeouts means a full health-check sweep could take hours. The sequential processing should be replaced with concurrent processing using bounded concurrency.
The solution remains the same: use `FuturesUnordered` with a `Semaphore` to limit concurrency to around 50 simultaneous checks while preserving the update logic for success and failure.
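The real fix belongs in async code (`FuturesUnordered` plus a `Semaphore`), but the bounded-concurrency idea itself can be sketched with plain threads. Everything below is illustrative: `check_all` and the toy `check` predicate are hypothetical stand-ins for the pool's async health probe.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Check all `urls` with at most `max_concurrency` checks in flight at once.
/// `check` stands in for the real (async, networked) health probe.
fn check_all(urls: Vec<String>, max_concurrency: usize,
             check: fn(&str) -> bool) -> Vec<(String, bool)> {
    let total = urls.len();
    let (job_tx, job_rx) = mpsc::channel::<String>();
    let (res_tx, res_rx) = mpsc::channel::<(String, bool)>();
    let job_rx = Arc::new(Mutex::new(job_rx));

    for url in urls {
        job_tx.send(url).unwrap();
    }
    drop(job_tx); // close the job channel so workers terminate

    let workers: Vec<_> = (0..max_concurrency)
        .map(|_| {
            let job_rx = Arc::clone(&job_rx);
            let res_tx = res_tx.clone();
            thread::spawn(move || loop {
                // Hold the lock only long enough to pull one job off the queue.
                let url = match job_rx.lock().unwrap().recv() {
                    Ok(u) => u,
                    Err(_) => break, // queue drained
                };
                let ok = check(&url);
                res_tx.send((url, ok)).unwrap();
            })
        })
        .collect();
    drop(res_tx);

    let results: Vec<_> = res_rx.iter().collect();
    for w in workers {
        w.join().unwrap();
    }
    results
}

fn main() {
    let urls: Vec<String> = (0..10).map(|i| format!("http://node{}:18081", i)).collect();
    // Toy predicate: pretend only "node3" is down.
    let results = check_all(urls, 3, |u| !u.contains("node3"));
    let healthy = results.iter().filter(|(_, ok)| *ok).count();
    println!("{} of {} nodes healthy", healthy, results.len());
}
```

The same shape carries over to tokio: the job channel becomes a stream of futures and the worker count becomes the semaphore's permit count.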
🧹 Nitpick comments (3)
monero-rpc-pool/src/discovery.rs (3)
79-85: Optimize deduplication algorithm for better performance.

The current deduplication uses `Vec::contains()`, which is O(n) per element and O(n²) overall. Use a `HashSet` for O(n) deduplication:

```diff
-    // Remove duplicates
-    let mut unique_nodes = Vec::new();
-    for node in nodes {
-        if !unique_nodes.contains(&node) {
-            unique_nodes.push(node);
-        }
-    }
+    // Remove duplicates using HashSet for O(n) complexity
+    let unique_nodes: Vec<String> = nodes
+        .into_iter()
+        .collect::<std::collections::HashSet<_>>()
+        .into_iter()
+        .collect();
```
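If insertion order matters (for example, keeping seed nodes first), note that collecting through a `HashSet` does not preserve order. An order-preserving variant with the same O(n) expected time, as a sketch:

```rust
use std::collections::HashSet;

/// Deduplicate while preserving first-seen order, in O(n) expected time.
fn dedup_preserving_order(nodes: Vec<String>) -> Vec<String> {
    let mut seen = HashSet::new();
    // `insert` returns false for values already present, filtering duplicates.
    nodes.into_iter().filter(|n| seen.insert(n.clone())).collect()
}

fn main() {
    let nodes = vec![
        "http://a:18081".to_string(),
        "http://b:18081".to_string(),
        "http://a:18081".to_string(),
    ];
    println!("{:?}", dedup_preserving_order(nodes));
}
```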
284-288: Improve error handling in health check loop.

The error handling in the health check loop discards the error information, which could make debugging difficult. Consider logging the specific error before recording the failed health check:

```diff
 Err(_e) => {
+    warn!("Health check failed for node {}: {}", node.full_url, _e);
     self.db
         .record_health_check(&node.full_url, false, None)
         .await?;
 }
```
326-336: Consider error handling for node stats logging.

The node stats logging silently ignores errors from `get_node_stats()`. While this might be intentional to avoid breaking the main loop, consider logging a warning when stats retrieval fails:

```diff
 if let Ok((total, reachable, reliable)) = self.db.get_node_stats(&network_str).await {
     if total > 0 {
         info!(
             "Node stats for {}: {} total, {} reachable, {} reliable",
             network_str, total, reachable, reliable
         );
     }
+} else {
+    warn!("Failed to retrieve node stats for network: {}", network_str);
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- src-tauri/src/lib.rs (4 hunks)
- swap/src/bin/asb.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src-tauri/src/lib.rs
- swap/src/bin/asb.rs
🔇 Additional comments (1)
monero-rpc-pool/src/discovery.rs (1)
354-357: Verify default port selection logic.

The default port selection logic assumes HTTPS uses port 18089 and HTTP uses 18081, but this might not always be correct. Verify that these are the standard Monero RPC ports for the respective protocols.
What are the standard Monero RPC ports for HTTP and HTTPS connections?
Actionable comments posted: 1
♻️ Duplicate comments (7)
monero-rpc-pool/src/discovery.rs (4)
48-56: Replace `unwrap()` with proper error handling in constructor.

The constructor uses `unwrap()`, which could cause a panic if the HTTP client fails to build. Apply this fix to return a `Result` instead:

```diff
-    pub fn new(db: Database) -> Self {
+    pub fn new(db: Database) -> Result<Self> {
         let client = Client::builder()
             .timeout(Duration::from_secs(10))
             .user_agent("monero-rpc-pool/1.0")
             .build()
-            .unwrap();
+            .map_err(|e| anyhow::anyhow!("Failed to build HTTP client: {}", e))?;

-        Self { client, db }
+        Ok(Self { client, db })
     }
```
229-272: Health checks are fully sequential, causing a performance bottleneck.

With hundreds of nodes, a per-node 10s timeout plus a 2s delay results in hours per sweep. This severely impacts the system's responsiveness. Implement concurrent health checks with controlled parallelism:

```diff
+use futures::stream::{FuturesUnordered, StreamExt};
+use tokio::sync::Semaphore;
+use std::sync::Arc;

-    for node in all_nodes {
-        match self.check_node_health(&node.full_url).await {
+    let semaphore = Arc::new(Semaphore::new(50)); // Limit to 50 concurrent checks
+    let mut tasks = FuturesUnordered::new();
+
+    for node in all_nodes {
+        let semaphore = semaphore.clone();
+        let self_clone = self.clone();
+        let node_clone = node.clone();
+
+        tasks.push(async move {
+            let _permit = semaphore.acquire().await.unwrap();
+            let outcome = self_clone.check_node_health(&node_clone.full_url).await;
+            (node_clone, outcome)
+        });
+    }
+
+    while let Some((node, outcome_result)) = tasks.next().await {
+        match outcome_result {
             // ... rest of the health check logic
-        tokio::time::sleep(Duration::from_secs(2)).await;
     }
```
331-337: Extract duplicated scheme determination logic.

The port-based scheme determination logic appears multiple times, violating the DRY principle. Extract it into a helper function:

```diff
+impl NodeDiscovery {
+    fn determine_scheme_from_port(port: u16) -> &'static str {
+        match port {
+            18089 | 38089 | 443 => "https",
+            _ => "http",
+        }
+    }

-    let port = url
-        .port()
-        .unwrap_or(if scheme == "https" { 18089 } else { 18081 })
-        as i64;
+    let default_port = if scheme == "https" { 18089 } else { 18081 };
+    let port = url.port().unwrap_or(default_port) as i64;
```
283-318: Add graceful shutdown mechanism for periodic task.

The periodic discovery task runs indefinitely with no way to stop it gracefully. Add shutdown signal support:

```diff
-    pub async fn periodic_discovery_task(&self, target_network: Network) -> Result<()> {
+    pub async fn periodic_discovery_task(
+        &self,
+        target_network: Network,
+        mut shutdown_rx: tokio::sync::oneshot::Receiver<()>,
+    ) -> Result<()> {
         let mut interval = tokio::time::interval(Duration::from_secs(3600));

         loop {
-            interval.tick().await;
+            tokio::select! {
+                _ = interval.tick() => {
+                    // Continue with discovery logic
+                }
+                _ = &mut shutdown_rx => {
+                    info!("Shutting down periodic discovery task");
+                    break;
+                }
+            }
```

monero-rpc-pool/src/database.rs (3)
825-932: Inefficient exclusion logic with potential memory issues.

The method fetches extra nodes and filters in memory, which is inefficient and doesn't guarantee finding enough non-excluded nodes. Use dynamic SQL with a parameterized NOT IN clause for better performance:

```rust
// Create a parameterized query with a NOT IN clause
let placeholders = exclude_ids.iter().map(|_| "?").collect::<Vec<_>>().join(",");
let query = format!(
    r#"SELECT ... WHERE n.network = ? AND n.id NOT IN ({}) ORDER BY RANDOM() LIMIT ?"#,
    placeholders
);

let mut query_builder = sqlx::query_as::<_, MoneroNode>(&query).bind(network);
for &exclude_id in exclude_ids {
    query_builder = query_builder.bind(exclude_id);
}
let nodes = query_builder.bind(limit).fetch_all(&self.pool).await?;
```
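The placeholder-building step is easy to get wrong at the edges, so here it is in isolation. This is a sketch (the function name is invented); note that an empty exclusion list would yield invalid `NOT IN ()` SQL, so the caller must skip the clause in that case:

```rust
/// Build a "?,?,?" placeholder list for a SQL `NOT IN (...)` clause.
/// Returns an empty string for n == 0; callers must omit the clause entirely
/// in that case, since `NOT IN ()` is not valid SQL.
fn not_in_placeholders(n: usize) -> String {
    std::iter::repeat("?").take(n).collect::<Vec<_>>().join(",")
}

fn main() {
    let exclude_ids = [3_i64, 7, 12];
    let sql = format!(
        "SELECT * FROM monero_nodes WHERE network = ? AND id NOT IN ({}) LIMIT ?",
        not_in_placeholders(exclude_ids.len())
    );
    println!("{}", sql);
}
```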
124-125: SQLite URL format is invalid.

The current format `sqlite:{path}` is missing the required double slash and will fail on Windows paths. Fix the SQLite connection URL format:

```diff
-    let database_url = format!("sqlite:{}?mode=rwc", db_path.display());
+    let database_url = format!("sqlite://{}?mode=rwc", db_path.display());
```
262-304: Extensive SQL duplication violates the DRY principle.

The reliability score calculation and node statistics subqueries are duplicated across multiple methods, making maintenance difficult and error-prone. Extract common SQL fragments into constants:

```diff
+const NODE_STATS_SUBQUERY: &str = r#"
+    SELECT
+        node_id,
+        SUM(CASE WHEN was_successful THEN 1 ELSE 0 END) as success_count,
+        SUM(CASE WHEN NOT was_successful THEN 1 ELSE 0 END) as failure_count,
+        MAX(CASE WHEN was_successful THEN timestamp END) as last_success,
+        MAX(CASE WHEN NOT was_successful THEN timestamp END) as last_failure,
+        MAX(timestamp) as last_checked,
+        AVG(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as avg_latency_ms,
+        MIN(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as min_latency_ms,
+        MAX(CASE WHEN was_successful AND latency_ms IS NOT NULL THEN latency_ms END) as max_latency_ms,
+        (SELECT latency_ms FROM health_checks hc2 WHERE hc2.node_id = health_checks.node_id ORDER BY timestamp DESC LIMIT 1) as last_latency_ms
+    FROM health_checks
+    GROUP BY node_id
+"#;
```

Then replace the duplicated queries in all methods with this constant.
🧹 Nitpick comments (1)
monero-rpc-pool/src/database.rs (1)
85-101: Reliability score calculation may have edge cases.

The calculation doesn't handle the case where `total_requests` is 0, which could lead to unexpected behavior. Add explicit handling for zero requests:

```diff
 pub fn reliability_score(&self) -> f64 {
+    let total_requests = self.success_count + self.failure_count;
+    if total_requests == 0 {
+        return 0.0;
+    }
+
     let success_rate = self.success_rate();
-    let total_requests = self.success_count + self.failure_count;

     // Weight success rate by total requests (more requests = more reliable data)
     let request_weight = (total_requests as f64).min(200.0) / 200.0;
```
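As a standalone illustration of the guard, here is the weighting logic with the zero-request case handled up front. The struct and field names are assumed from the review excerpt, not taken verbatim from the crate:

```rust
struct NodeStats {
    success_count: i64,
    failure_count: i64,
}

impl NodeStats {
    fn success_rate(&self) -> f64 {
        let total = self.success_count + self.failure_count;
        if total == 0 { 0.0 } else { self.success_count as f64 / total as f64 }
    }

    fn reliability_score(&self) -> f64 {
        let total_requests = self.success_count + self.failure_count;
        if total_requests == 0 {
            return 0.0; // never checked: no evidence either way
        }
        // More observations mean more trust in the rate, capped at 200 requests.
        let request_weight = (total_requests as f64).min(200.0) / 200.0;
        self.success_rate() * request_weight
    }
}

fn main() {
    let fresh = NodeStats { success_count: 0, failure_count: 0 };
    let seasoned = NodeStats { success_count: 180, failure_count: 20 };
    println!("{} {}", fresh.reliability_score(), seasoned.reliability_score());
}
```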
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- .gitignore (1 hunks)
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json (1 hunks)
- monero-rpc-pool/.sqlx/query-3e8f39a6ec4443cec6497672891d12bbf7c1d0aca061827740af88ced863ae23.json (1 hunks)
- monero-rpc-pool/.sqlx/query-549f5ef13ec7bf5d987dcb893753a9c903edcafa3a66bd82965b40a9e7f238b6.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5798d9589772742f074e0ecc2551a40d943bfb7ed2e295f09f12d77cb65ce821.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5a25c95c04b11a60a04ad97b5fb684e9a0cc2eb5daf64f33e924f0c38a2edfec.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5ff27bdd9b6e7aadc8dd4936e0ee7e6a611aaef28697a0e9535dfb30d1c4861d.json (1 hunks)
- monero-rpc-pool/.sqlx/query-a032eb9773d4553aeaff4fb15ed99dbaef7d16d48750ee7bd4ab83233a9a732b.json (1 hunks)
- monero-rpc-pool/.sqlx/query-ba231efaf208a42fa857f716ef296b428c937f2eb7c8ce9c631f7f721e914c14.json (1 hunks)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json (1 hunks)
- monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json (1 hunks)
- monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json (1 hunks)
- monero-rpc-pool/Cargo.toml (1 hunks)
- monero-rpc-pool/migrations/20250618212026_initial_schema.sql (1 hunks)
- monero-rpc-pool/src/config.rs (1 hunks)
- monero-rpc-pool/src/database.rs (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
- monero-rpc-pool/src/main.rs (1 hunks)
- src-tauri/src/lib.rs (4 hunks)
- swap/src/bin/asb.rs (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- monero-rpc-pool/.sqlx/query-5798d9589772742f074e0ecc2551a40d943bfb7ed2e295f09f12d77cb65ce821.json
- monero-rpc-pool/.sqlx/query-3e8f39a6ec4443cec6497672891d12bbf7c1d0aca061827740af88ced863ae23.json
🚧 Files skipped from review as they are similar to previous changes (16)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json
- monero-rpc-pool/.sqlx/query-a032eb9773d4553aeaff4fb15ed99dbaef7d16d48750ee7bd4ab83233a9a732b.json
- monero-rpc-pool/Cargo.toml
- monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json
- monero-rpc-pool/src/config.rs
- monero-rpc-pool/migrations/20250618212026_initial_schema.sql
- monero-rpc-pool/.sqlx/query-5ff27bdd9b6e7aadc8dd4936e0ee7e6a611aaef28697a0e9535dfb30d1c4861d.json
- monero-rpc-pool/.sqlx/query-5a25c95c04b11a60a04ad97b5fb684e9a0cc2eb5daf64f33e924f0c38a2edfec.json
- monero-rpc-pool/.sqlx/query-fac12e3ca6ac1db1a4812a5390a333ec95a2e5e2cd554c169ceecc61b7ff2864.json
- src-tauri/src/lib.rs
- swap/src/bin/asb.rs
- monero-rpc-pool/.sqlx/query-549f5ef13ec7bf5d987dcb893753a9c903edcafa3a66bd82965b40a9e7f238b6.json
- monero-rpc-pool/.sqlx/query-ba231efaf208a42fa857f716ef296b428c937f2eb7c8ce9c631f7f721e914c14.json
- monero-rpc-pool/src/main.rs
- monero-rpc-pool/src/lib.rs
🔇 Additional comments (1)
.gitignore (1)
6-10: LGTM! Proper exclusion of build artifacts and temporary files.

The new .gitignore entries appropriately exclude build scripts, temporary directories, and SQLite database files that shouldn't be committed to version control.
Actionable comments posted: 1
♻️ Duplicate comments (4)
monero-rpc-pool/src/discovery.rs (3)
79-85: Inefficient duplicate removal still present.

The O(n²) complexity of removing duplicates via `contains()` checks has not been addressed.
229-272: Sequential health checks remain a performance bottleneck.

Health checks are still performed sequentially, which will cause long delays with hundreds of nodes due to per-node timeouts and delays.
282-318: Periodic task lacks graceful shutdown mechanism.

The infinite loop still has no way to stop gracefully for clean termination.
monero-rpc-pool/src/simple_handlers.rs (1)
69-72: `reqwest::Client` still rebuilt on every request.

The HTTP client is still being constructed for each request, which is inefficient due to TLS setup and connection pooling overhead.
🧹 Nitpick comments (2)
monero-rpc-pool/src/simple_handlers.rs (2)
241-241: Magic number should be configurable.

The pool size of 20 is hardcoded, making it difficult to tune performance for different deployment scenarios.

```diff
-    const POOL_SIZE: usize = 20;
+    const DEFAULT_POOL_SIZE: usize = 20; // Consider making this configurable
```
267-291: Inefficient node selection logic.

The nested loops for finding untried nodes have O(n²) complexity in the worst case. Consider a more efficient data structure:

```diff
+    let mut available_indices: std::collections::VecDeque<usize> =
+        (0..available_pool.len()).collect();
+
     while pool_index < available_pool.len() && tried_nodes.len() < POOL_SIZE {
-        let mut node1_option = None;
-        let mut node2_option = None;
-
-        // Select first untried node from pool
-        for (i, node) in available_pool.iter().enumerate().skip(pool_index) {
-            if !tried_nodes.contains(node) {
-                node1_option = Some(node.clone());
-                pool_index = i + 1;
-                break;
-            }
-        }
+        let node1_option = available_indices.pop_front()
+            .map(|i| available_pool[i].clone());
+        let node2_option = available_indices.pop_front()
+            .map(|i| available_pool[i].clone());
```
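The `VecDeque` idea in isolation: each pick is an O(1) `pop_front` instead of rescanning the pool, and a popped node can never be selected twice. A minimal sketch with hypothetical names:

```rust
use std::collections::VecDeque;

/// Pop up to two untried nodes per attempt; O(1) per pick.
fn next_pair(queue: &mut VecDeque<String>) -> (Option<String>, Option<String>) {
    (queue.pop_front(), queue.pop_front())
}

fn main() {
    let mut queue: VecDeque<String> = ["http://a:18081", "http://b:18081", "http://c:18081"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    let (first, second) = next_pair(&mut queue);
    println!("{:?} {:?} remaining={}", first, second, queue.len());
}
```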
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json (1 hunks)
- monero-rpc-pool/.sqlx/query-3e8f39a6ec4443cec6497672891d12bbf7c1d0aca061827740af88ced863ae23.json (1 hunks)
- monero-rpc-pool/.sqlx/query-5798d9589772742f074e0ecc2551a40d943bfb7ed2e295f09f12d77cb65ce821.json (1 hunks)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json (1 hunks)
- monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json (1 hunks)
- monero-rpc-pool/src/discovery.rs (1 hunks)
- monero-rpc-pool/src/lib.rs (1 hunks)
- monero-rpc-pool/src/main.rs (1 hunks)
- monero-rpc-pool/src/simple_handlers.rs (1 hunks)
- src-gui/src/utils/formatUtils.ts (1 hunks)
- src-tauri/src/lib.rs (4 hunks)
- swap/src/bin/asb.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (10)
- monero-rpc-pool/.sqlx/query-e0865335c2dcb040a34e3f1305fe1a823d6fcde4a061def602cba30971817781.json
- monero-rpc-pool/.sqlx/query-5798d9589772742f074e0ecc2551a40d943bfb7ed2e295f09f12d77cb65ce821.json
- src-gui/src/utils/formatUtils.ts
- monero-rpc-pool/.sqlx/query-132666c849bf0db14e50ef41f429e17b7c1afd21031edf3af40fadfb79ef2597.json
- monero-rpc-pool/.sqlx/query-ffa1b76d20c86d6bea02bd03e5e7de159adbb7c7c0ef585ce4df9ec648bea7f8.json
- monero-rpc-pool/.sqlx/query-3e8f39a6ec4443cec6497672891d12bbf7c1d0aca061827740af88ced863ae23.json
- src-tauri/src/lib.rs
- swap/src/bin/asb.rs
- monero-rpc-pool/src/main.rs
- monero-rpc-pool/src/lib.rs
🔇 Additional comments (1)
monero-rpc-pool/src/simple_handlers.rs (1)
440-494: Excellent error handling implementation.

The structured JSON error responses provide comprehensive debugging information while maintaining clean error handling. This addresses the previous concern about losing detailed error information.
```rust
if let Ok(url) = url::Url::parse(node_url) {
    let scheme = url.scheme();
    let host = url.host_str().unwrap_or("");
    let Some(port) = url.port() else {
        continue;
    };
    let port = port as i64;
```
🛠️ Refactor suggestion
URL parsing lacks comprehensive validation.
The code only checks for port presence but doesn't validate other URL components. This could lead to issues with malformed URLs.
```diff
 for node_url in nodes.iter() {
     if let Ok(url) = url::Url::parse(node_url) {
         let scheme = url.scheme();
-        let host = url.host_str().unwrap_or("");
+        let Some(host) = url.host_str() else {
+            error!("Node URL missing host: {}", node_url);
+            continue;
+        };
         let Some(port) = url.port() else {
+            error!("Node URL missing port: {}", node_url);
             continue;
         };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
for node_url in nodes.iter() {
    if let Ok(url) = url::Url::parse(node_url) {
        let scheme = url.scheme();
        let Some(host) = url.host_str() else {
            error!("Node URL missing host: {}", node_url);
            continue;
        };
        let Some(port) = url.port() else {
            error!("Node URL missing port: {}", node_url);
            continue;
        };
        let port = port as i64;
```
🤖 Prompt for AI Agents
In monero-rpc-pool/src/discovery.rs around lines 331 to 337, the URL parsing
only verifies the presence of a port but does not validate other critical
components like scheme and host. Enhance the validation by checking that the
scheme is one of the expected protocols (e.g., http or https) and that the host
is a valid non-empty string. If any of these validations fail, skip processing
that URL to avoid issues with malformed URLs.
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Chores
Documentation