Conversation


@samhvw8 samhvw8 commented Jun 9, 2025

Describe Your Changes

Implement retry mechanism for MCP server activation with exponential backoff

Adds robust error handling for MCP server startup failures by implementing a retry mechanism with exponential backoff.

Key changes:

  • Added retry logic with up to 3 attempts for failed MCP server activations
  • Implemented exponential backoff starting at 1s, capped at 30s
  • Added new event emissions for retry tracking (mcp-retry-attempt, mcp-server-started, mcp-max-retries-exceeded, mcp-retry-scheduled)
  • Refactored server startup into separate internal function for cleaner retry handling
  • Enhanced logging with detailed retry attempt information

Benefits:

  • Improves reliability of MCP server initialization
  • Provides better visibility into server startup issues through events
  • Prevents permanent failures from transient startup problems
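
The retry flow described above can be sketched as follows. This is a minimal illustration, not the PR's exact code: the function name, the closure-based signature, and the use of blocking `std::thread::sleep` (the real code is async) are assumptions.

```rust
use std::{thread, time::Duration};

const MAX_RETRY_ATTEMPTS: u32 = 3; // up to 3 attempts, as in the PR description
const BASE_BACKOFF_MS: u64 = 1_000; // start at 1s
const MAX_BACKOFF_MS: u64 = 30_000; // cap at 30s

// Hypothetical helper: call `try_start` until it succeeds or attempts run out,
// doubling the delay between attempts and capping it at MAX_BACKOFF_MS.
fn start_with_retry(mut try_start: impl FnMut() -> Result<(), String>) -> Result<(), String> {
    let mut backoff_ms = BASE_BACKOFF_MS;
    let mut last_err = String::new();
    for attempt in 1..=MAX_RETRY_ATTEMPTS {
        match try_start() {
            Ok(()) => return Ok(()),
            Err(e) => {
                last_err = e;
                // Only sleep if another attempt will actually follow.
                if attempt < MAX_RETRY_ATTEMPTS {
                    thread::sleep(Duration::from_millis(backoff_ms));
                    backoff_ms = (backoff_ms * 2).min(MAX_BACKOFF_MS);
                }
            }
        }
    }
    Err(last_err)
}
```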

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

Important

Adds retry mechanism with exponential backoff for MCP server activation, enhancing reliability and error handling in mcp.rs.

  • Behavior:
    • Implements retry mechanism for MCP server activation with exponential backoff in mcp.rs.
    • Adds retry logic with up to 3 attempts and exponential backoff starting at 1s, capped at 30s.
    • Introduces new event emissions for retry tracking: mcp-retry-attempt, mcp-server-started, mcp-max-retries-exceeded, mcp-retry-scheduled.
    • Refactors server startup into start_mcp_server_with_restart() for cleaner retry handling.
    • Enhances logging with detailed retry attempt information.
  • State Management:
    • Adds mcp_restart_counts, mcp_active_servers, and mcp_successfully_connected to AppState in state.rs.
  • Setup:
    • Modifies setup_mcp() in setup.rs to include event listener for kill-mcp-servers and clean up logic.
  • Misc:
    • Updates activate_mcp_server() to use the new retry mechanism.
    • Adds reset_mcp_restart_count() command for resetting retry counts.
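
The `AppState` additions could be shaped roughly like this. Field types and the use of `std::sync::Mutex` are assumptions for illustration; the actual `state.rs` (which runs under Tauri's async runtime) may use different wrappers.

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Mutex;

// Hypothetical shape of the new MCP-related state in state.rs.
pub struct AppState {
    // How many times each server has been restarted, keyed by server name.
    pub mcp_restart_counts: Mutex<HashMap<String, u32>>,
    // Servers the app is currently trying to keep alive.
    pub mcp_active_servers: Mutex<HashSet<String>>,
    // Servers that have completed a successful startup at least once.
    pub mcp_successfully_connected: Mutex<HashSet<String>>,
}
```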

This description was created by Ellipsis for 08ed045. You can customize this summary. It will automatically update as commits are pushed.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 4f47b80 in 2 minutes and 8 seconds.
  • Reviewed 157 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/src/core/mcp.rs:16
  • Draft comment:
    The new constants for MAX_MCP_RETRY_ATTEMPTS and BASE_BACKOFF_MS are hard-coded. Consider making these configurable for easier tuning in different deployment environments.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% While making retry parameters configurable can be useful in some cases, here it seems unnecessary. The values chosen (3 retries, 1 second base backoff) are reasonable defaults that should work well in most scenarios. The exponential backoff already provides adaptability. Making these configurable would add complexity without clear benefit. The code is a desktop app, not a distributed system where tuning retry parameters per environment would be critical. The comment has some merit - in distributed systems, retry parameters often need tuning. And the code does show care in implementing a proper retry mechanism. However, this is a desktop application where the retry parameters are unlikely to need adjustment. The current implementation with exponential backoff up to 30 seconds is already quite robust. The comment should be deleted as it suggests adding unnecessary configuration complexity for parameters that are unlikely to need adjustment in this context.
2. src-tauri/src/core/mcp.rs:56
  • Draft comment:
    In run_mcp_commands the server start is triggered via the new retry mechanism but its error isn’t propagated (the call is awaited then ignored). Ensure that relying solely on event emissions for failure (e.g. 'mcp-max-retries-exceeded') is the intended design.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
3. src-tauri/src/core/mcp.rs:107
  • Draft comment:
    The retry loop in start_mcp_server effectively retries with exponential backoff. However, when the maximum retries are exceeded the function simply returns without propagating an error. Consider whether returning an error to the caller would provide clearer failure semantics.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The code appears to handle failures through logging and event emission rather than error propagation. This is a valid design choice given the async nature of server startup and the fact that callers treat this as a background task. The error handling is actually quite thorough with logs and events. The comment raises a valid point about API design - functions that can fail should generally communicate that failure to callers. Not doing so could hide important errors. However, in this case the errors are deliberately handled through events and logging rather than return values, which is appropriate for a background task that callers don't wait for. The error isn't hidden, it's communicated through other channels. The comment should be deleted as the current error handling approach through events and logging is intentional and appropriate for this use case.
4. src-tauri/src/core/mcp.rs:161
  • Draft comment:
    The Tauri command activate_mcp_server calls start_mcp_server and always returns Ok(()), meaning failures aren’t propagated back to the frontend via the command result. Confirm that using event emissions as the sole error notification mechanism is the desired design here.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The comment raises a valid point about error handling design, but it's phrased as a request to "confirm" the design choice rather than suggesting a clear improvement. The code shows a deliberate pattern of using events for error notification, with multiple event types for different failure scenarios. This appears to be an intentional architectural decision. The comment identifies a real architectural pattern, but asking for confirmation doesn't help improve the code. If there's an issue with the error handling approach, the comment should propose a specific alternative. The code shows a comprehensive event-based error handling system that appears intentional. If there were issues with this approach, they should be raised with specific problems or alternatives, not just asking for confirmation. Delete the comment. While it identifies a real pattern, asking for confirmation of an apparently intentional design choice doesn't help improve the code. The comment would be more useful if it proposed specific improvements to the error handling approach.
5. src-tauri/src/core/mcp.rs:146
  • Draft comment:
    The exponential backoff update (backoff_ms = (backoff_ms * 2).min(30000)) is clear. It might be useful to document the rationale behind doubling the delay and capping it at 30 seconds, especially if these numbers could be adjusted in the future.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None
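
For context on the draft comment above, the doubling-with-cap update produces the delay progression below. `backoff_sequence` is a hypothetical helper written only to show the sequence, not code from the PR:

```rust
// Return the first `n` delays produced by doubling `base_ms` and capping at `cap_ms`.
fn backoff_sequence(base_ms: u64, cap_ms: u64, n: usize) -> Vec<u64> {
    let mut delays = Vec::with_capacity(n);
    let mut d = base_ms;
    for _ in 0..n {
        delays.push(d);
        d = (d * 2).min(cap_ms); // same update as backoff_ms = (backoff_ms * 2).min(30000)
    }
    delays
}
// backoff_sequence(1_000, 30_000, 7)
//   → [1000, 2000, 4000, 8000, 16000, 30000, 30000]
```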

Workflow ID: wflow_oUIC4fZ5AwI1tJ7E

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 60fcc81 in 1 minute and 28 seconds.
  • Reviewed 157 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:16
  • Draft comment:
    Consider externalizing retry constants (MAX_MCP_RETRY_ATTEMPTS and BASE_BACKOFF_MS) to configuration or environment variables for greater flexibility.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
2. src-tauri/src/core/mcp.rs:81
  • Draft comment:
    The retry loop applies a sleep on every retry (when retry_count > 1), including before the final attempt. While not a bug, consider avoiding an unnecessary delay if no further attempt will be made.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
3. src-tauri/src/core/mcp.rs:129
  • Draft comment:
    Emitting events like 'mcp-max-retries-exceeded' improves observability. Ensure the frontend handles these events appropriately.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
4. src-tauri/src/core/mcp.rs:179
  • Draft comment:
    If extract_command_args returns None due to misconfiguration, the function ends up returning Ok(()) silently. It might be better to return an error to avoid hiding config issues.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
5. src-tauri/src/core/mcp.rs:174
  • Draft comment:
    Using expect on env::current_exe() can cause a panic. Consider handling this error more gracefully by propagating it.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_2ohqY6DkpJwmXG48



@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed a637a4e in 2 minutes and 10 seconds.
  • Reviewed 44 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:55
  • Draft comment:
    The call in run_mcp_commands now uses schedule_mcp_start_task, which performs a single attempt without retries. This bypasses the exponential backoff retry logic implemented in start_mcp_server. Ensure this design aligns with the intended auto-restart feature.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 0% vs. threshold = 50% While the comment accurately describes the change, it doesn't clearly indicate if this is a problem that needs fixing. The comment ends with "Ensure this design aligns with intended auto-restart feature" which is asking for verification rather than pointing out a clear issue. The code change seems intentional given the comment "Start the server directly without retry mechanism". The comment identifies a real architectural change, so maybe we should keep it to highlight this significant change in behavior? The comment violates our rules by asking for verification ("Ensure that...") rather than stating a clear problem. If removing retry logic was unintentional, the code comment wouldn't explicitly say it's being removed. Delete the comment. It's asking for verification rather than identifying a clear issue, and the code change appears intentional based on the inline comment.
2. src-tauri/src/core/mcp.rs:56
  • Draft comment:
    There is an inconsistency between run_mcp_commands and activate_mcp_server: the former uses schedule_mcp_start_task (single attempt) while the latter calls start_mcp_server (which includes retry logic). Ensure the retry behavior is consistent across server activations if auto-restart is desired.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The code change deliberately removed the retry mechanism from run_mcp_commands, as evidenced by the commit message "Start the server directly without retry mechanism". This suggests it was an intentional design choice, not an oversight. The two functions likely have different requirements - initial startup may want to fail fast, while manual activation may want to be more persistent. The comment is suggesting consistency for consistency's sake without clear evidence that the inconsistency is problematic. The comment raises a valid point about inconsistent behavior that could be confusing to maintainers. Maybe there's a good reason for the different behaviors that should be documented. While documenting the different behaviors could be helpful, the comment itself doesn't provide evidence that the inconsistency is actually problematic. The code change deliberately introduced this difference, suggesting it's intentional. Delete the comment. The inconsistency appears to be an intentional design choice rather than an oversight that needs fixing. The comment doesn't provide evidence that this inconsistency causes actual problems.

Workflow ID: wflow_FgsPjhNxAccItG3a



@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed 5b84908 in 2 minutes and 5 seconds.
  • Reviewed 198 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:13
  • Draft comment:
    Removal of the constant MAX_MCP_RETRY_ATTEMPTS introduces a literal default value (0) in start_mcp_server. Consider extracting this default into a constant or documenting the default behavior to avoid magic numbers.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% The value 0 here isn't really a magic number - it's a sensible default meaning "no retries". The value is only used in one place as a default, and its meaning is clear from context. The comment also ignores that the actual retry value of 3 is used in activate_mcp_server, which would be a better candidate for a constant if we wanted one. I could be wrong about the clarity - maybe future maintainers would find it confusing why 0 was chosen as the default. And constants can make it easier to change values in one place. The meaning of 0 as "no retries" is very clear from the context and variable name. This isn't a case where the number's meaning is obscure or where it's used in multiple places. The comment is making a mountain out of a molehill. The comment should be deleted. The suggestion to extract 0 into a constant would not meaningfully improve code quality, as the value's meaning is clear and it's only used once as a sensible default.
2. src-tauri/src/core/mcp.rs:76
  • Draft comment:
    The start_mcp_server signature now accepts an Option for max_retry_attempts and defaults to 0 retries if None is provided. Consider clarifying in a comment that this value represents the number of additional retries (i.e. total attempts = initial attempt + max_retry_attempts) to avoid confusion.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_dNvex3seFjEz1zNN


@samhvw8 samhvw8 force-pushed the feat/auto-restart-mcp branch from 5b84908 to 99b6fec on June 12, 2025 12:13
@samhvw8 samhvw8 changed the base branch from release/v0.5.18 to dev on June 13, 2025 04:54
@samhvw8 samhvw8 force-pushed the feat/auto-restart-mcp branch from e3054f0 to 0bcd859 on June 13, 2025 04:55

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 0bcd859 in 51 seconds.
  • Reviewed 56 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:471
  • Draft comment:
    Good detailed logging and state updates in deactivate_mcp_server. Note that active_servers, successfully_connected, and restart_counts are updated in separate lock scopes. If atomic consistency across these states is desired, consider grouping them into a single lock.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50% None
2. src-tauri/src/core/mcp.rs:499
  • Draft comment:
    Explicitly dropping the servers_map lock before calling service.cancel() is a good practice to avoid holding locks during asynchronous operations.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None

Workflow ID: wflow_zQ3h31TbOyPWeonQ


@louis-jan

Please help resolve the conflict @samhvw8

@github-actions

Preview URL: https://5644d787.docs-9ba.pages.dev


@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed a53b180 in 2 minutes and 28 seconds.
  • Reviewed 927 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:232
  • Draft comment:
    The PR description mentions emitting events like 'mcp-retry-attempt' and 'mcp-retry-scheduled' for better retry tracking, but only 'mcp_max_restarts_reached' is emitted. Consider adding the missing event emissions.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% The comment references a PR description that I don't have access to, so I can't verify if these events were actually promised. The code shows a working restart system that already emits critical events like max restarts. Adding more granular events would be a feature request rather than fixing a bug. The comment is speculative since it's based on an unseen PR description. I could be wrong about the PR description - maybe there was an explicit commitment to add these events that should be honored. The events could be useful for frontend monitoring. Even if the events were mentioned in the PR description, this comment relies on context I don't have. The code works as is, and adding more events would be an enhancement rather than fixing an issue. We should avoid speculative comments based on unseen requirements. Delete this comment. It relies on external context about PR description promises that we can't verify, and the existing code has a functional restart system with appropriate event handling.
2. src-tauri/src/core/mcp.rs:399
  • Draft comment:
    Avoid using unwrap() on path conversions like 'app_path.to_str().unwrap()'; consider proper error handling to prevent potential panics.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. src-tauri/src/core/mcp.rs:349
  • Draft comment:
    Typo detected: The function header in this diff appears to contain an extraneous colon with a missing function name. The line reads :) -> Result<(), String> { which seems incorrect. Please verify that the function definition is complete.
  • Reason this comment was not posted:
    Comment was on unchanged code.

Workflow ID: wflow_rI14bwPQaHZBlMnd


@samhvw8 samhvw8 force-pushed the feat/auto-restart-mcp branch 3 times, most recently from 9544b7e to a116c6f on June 19, 2025 09:44

@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed a116c6f in 2 minutes and 23 seconds.
  • Reviewed 924 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:229
  • Draft comment:
    The PR description mentions additional event emissions like 'mcp-retry-attempt' and 'mcp-retry-scheduled', but these are not present in the code. Please ensure the event tracking behavior matches the description.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that the event tracking behavior matches the description, which violates the rule against asking the author to ensure behavior is intended or matches descriptions. Therefore, this comment should be removed.
2. src-tauri/src/core/mcp.rs:396
  • Draft comment:
    In schedule_mcp_start_task, if opening the log file fails the error is logged but execution continues. Confirm if this non-fatal handling is the intended behavior.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. src-tauri/src/core/setup.rs:205
  • Draft comment:
    The new 'kill-mcp-servers' event listener is implemented to clean up MCP servers. Ensure that the cleanup logic is idempotent and that any errors during cleanup are handled appropriately.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to ensure that the cleanup logic is idempotent and errors are handled, which violates the rule against asking the author to ensure behavior is intended or tested. It doesn't provide a specific suggestion or point out a specific issue.
4. src-tauri/src/lib.rs:69
  • Draft comment:
    MCP commands (including reset_mcp_restart_count) and state initialization have been updated. Verify that the new MCP restart logic is utilized as intended in the frontend.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% The comment is asking the PR author to verify that the new MCP restart logic is utilized as intended in the frontend. This falls under the category of asking the author to ensure the behavior is intended, which is against the rules.

Workflow ID: wflow_B9VQO9RZfomPdejw


feat: implement retry mechanism for MCP server activation with exponential backoff

feat: enhance MCP server activation with configurable retry attempts

feat: implement MCP server restart monitoring and cleanup functionality

feat: enhance MCP server restart logic with improved monitoring and configuration handling

feat: add manual deactivation for MCP servers to prevent automatic restarts
@samhvw8 samhvw8 force-pushed the feat/auto-restart-mcp branch from a116c6f to b276965 on June 20, 2025 08:31

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed b276965 in 2 minutes and 6 seconds.
  • Reviewed 980 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:80
  • Draft comment:
    Consider using a non‐deterministic random generator for jitter rather than hashing the attempt number, which produces predictable delays. This may help better distribute restart attempts in bursty environments.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 30% vs. threshold = 50% The current implementation uses deterministic hashing which means the same attempt number will always produce the same jitter value. While this provides variation between different attempts, it could potentially lead to synchronized retries across multiple instances if they fail at the same time. Non-deterministic randomness would provide better distribution. However, the impact may be minimal since the base delays are already exponentially increasing. The comment makes a valid technical point - deterministic jitter is less effective at preventing thundering herd than random jitter. However, in this specific case with exponential backoff and relatively few retries, the practical impact is likely minimal. While the suggestion is technically correct, the current implementation is likely "good enough" given the context - we have exponential backoff as the primary mechanism and jitter is just an additional safeguard. The comment should be deleted as it suggests a technically valid but likely unnecessary optimization. The current implementation is sufficient for the use case.
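
A minimal sketch of the hash-based deterministic jitter this draft comment refers to, hashing the attempt number to derive an extra delay. The function name and the 25% jitter bound are assumptions for illustration; the PR's actual implementation may differ.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministic jitter: hash the attempt number and add up to 25% of the base delay.
// DefaultHasher::new() uses fixed keys, so the same attempt always yields the same jitter.
fn jittered_delay(base_delay_ms: u64, attempt: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    attempt.hash(&mut hasher);
    let jitter = hasher.finish() % (base_delay_ms / 4 + 1);
    base_delay_ms + jitter
}
```

This illustrates the trade-off the reviewer raises: hashing is reproducible and dependency-free, but a true RNG would better desynchronize concurrent restart bursts.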
2. src-tauri/src/core/mcp.rs:429
  • Draft comment:
    Using unwrap on conversion of paths to strings may panic if paths are not valid UTF-8. Consider handling errors instead of unwrapping.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. src-tauri/src/core/mcp.rs:348
  • Draft comment:
    Good job breaking out of the restart loop when the server wasn’t successfully connected. Ensure that this logic meets the intended restart policy in transient failure cases.
  • Reason this comment was not posted:
    Confidence changes required: 30% <= threshold 50% None
4. src-tauri/src/lib.rs:119
  • Draft comment:
    Ensure that the new MCP restart-related command (reset_mcp_restart_count) is documented in the API docs so frontend developers are aware of its behavior.
  • Reason this comment was not posted:
    Confidence changes required: 40% <= threshold 50% None

Workflow ID: wflow_qSHsGBsffQP05wBg


@louis-jan

Tested, and here are a couple of things that need to be fixed:

  • It restarts the process on kill, but does not update server states
  • When the user repeatedly starts and stops the MCP server, it creates many dangling processes
  • The MCP server then fails to start


@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed 08ed045 in 2 minutes and 29 seconds.
  • Reviewed 246 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. src-tauri/src/core/mcp.rs:348
  • Draft comment:
    Event naming inconsistency: The emitted event 'mcp_max_restarts_reached' does not match the spec which indicates 'mcp-max-retries-exceeded'. Please align the event names for consistency.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. src-tauri/src/core/mcp.rs:592
  • Draft comment:
    The 500ms verification delay after starting an MCP server is hardcoded. Consider making this delay configurable to accommodate slower environments.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50% While making timeouts configurable can be good practice, this is a relatively minor implementation detail. The current 500ms seems like a reasonable default that should work in most cases. If it proves to be an issue in practice, it can be changed later. The comment is more of a "nice to have" suggestion rather than identifying a clear problem. The comment raises a valid point about system variability - some environments may need longer verification times. Not having this configurable could cause issues in slower systems. However, there's no evidence that 500ms is insufficient. The code already handles failed verifications gracefully by returning an error. Making every timeout configurable adds complexity that may not be needed. This comment should be removed as it suggests a "nice to have" configuration option without evidence that the current fixed delay is causing problems.
3. src-tauri/src/core/mcp.rs:556
  • Draft comment:
    Logging the full command details with log::trace!("Command: {cmd:#?}") may expose sensitive information. Consider redacting sensitive flags or using a less verbose log level in production.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.

Workflow ID: wflow_wUPrzM26EPV4GW64



@louis-jan louis-jan left a comment


Works great!

@louis-jan louis-jan merged commit 64a7822 into dev Jun 23, 2025
8 checks passed
@louis-jan louis-jan deleted the feat/auto-restart-mcp branch June 23, 2025 05:20
@github-project-automation github-project-automation bot moved this to QA in Jan Jun 23, 2025
@github-actions github-actions bot added this to the v0.6.2 milestone Jun 23, 2025
urmauur pushed a commit that referenced this pull request Jun 25, 2025
* feat: implement retry mechanism for MCP server activation with exponential backoff

feat: enhance MCP server activation with configurable retry attempts

feat: implement MCP server restart monitoring and cleanup functionality

feat: enhance MCP server restart logic with improved monitoring and configuration handling

feat: add manual deactivation for MCP servers to prevent automatic restarts

* feat: enhance MCP server startup with initial attempt tracking and health monitoring
samhvw8 added a commit that referenced this pull request Jun 25, 2025
* feat: implement retry mechanism for MCP server activation with exponential backoff

feat: enhance MCP server activation with configurable retry attempts

feat: implement MCP server restart monitoring and cleanup functionality

feat: enhance MCP server restart logic with improved monitoring and configuration handling

feat: add manual deactivation for MCP servers to prevent automatic restarts

* feat: enhance MCP server startup with initial attempt tracking and health monitoring
louis-jan pushed a commit that referenced this pull request Jun 25, 2025
* feat: implement retry mechanism for MCP server activation with exponential backoff

feat: enhance MCP server activation with configurable retry attempts

feat: implement MCP server restart monitoring and cleanup functionality

feat: enhance MCP server restart logic with improved monitoring and configuration handling

feat: add manual deactivation for MCP servers to prevent automatic restarts

* feat: enhance MCP server startup with initial attempt tracking and health monitoring
@louis-jan louis-jan mentioned this pull request Jun 25, 2025
louis-jan added a commit that referenced this pull request Jun 25, 2025
* Feat: auto restart mcp (#5226)

* feat: implement retry mechanism for MCP server activation with exponential backoff

feat: enhance MCP server activation with configurable retry attempts

feat: implement MCP server restart monitoring and cleanup functionality

feat: enhance MCP server restart logic with improved monitoring and configuration handling

feat: add manual deactivation for MCP servers to prevent automatic restarts

* feat: enhance MCP server startup with initial attempt tracking and health monitoring

* 🐛fix: prevent render error when additional information missing from hardware (#5413)

---------

Co-authored-by: Sam Hoang Van <[email protected]>
Co-authored-by: Faisal Amir <[email protected]>