✨ Added file accept workflow and AST hint generation #821
Conversation
Signed-off-by: JonahSussman <[email protected]>
Walkthrough

The changes refactor the server's solution and incident management by separating core logic from tool interface functions, introducing new batch and file acceptance/rejection operations, and enhancing data models. The database schema is updated to use explicit ORM models and relationships for solutions and files, with new event-driven status updates and extended metrics.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ToolWrapper
    participant CoreLogic
    participant Database

    Client->>ToolWrapper: call tool_create_solution(args)
    ToolWrapper->>CoreLogic: create_solution(...)
    CoreLogic->>Database: create DBSolution, link DBFile (before/after)
    Database-->>CoreLogic: confirmation
    CoreLogic-->>ToolWrapper: solution_id
    ToolWrapper-->>Client: solution_id

    Client->>ToolWrapper: call tool_accept_file(solution_file)
    ToolWrapper->>CoreLogic: accept_file(...)
    CoreLogic->>Database: update DBFile status, possibly create DBFile
    Database-->>CoreLogic: confirmation
    CoreLogic-->>ToolWrapper: None
    ToolWrapper-->>Client: None
```
Signed-off-by: JonahSussman <[email protected]>
Actionable comments posted: 3
🧹 Nitpick comments (3)
kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py (1)
394-420: Consider optimizing the file comparison logic.

The nested loops for comparing files have O(n²) complexity. Consider using a dictionary for O(n) lookup:

```diff
 def update_solution_status(self) -> None:
     if len(self.final_files) == 0:
         if len(self.change_set.after) == 0:
             self.solution_status = SolutionStatus.ACCEPTED
             return
         self.solution_status = SolutionStatus.PENDING
         return

     after_uris = {file.uri for file in self.change_set.after}
     final_uris = {file.uri for file in self.final_files}
     if len(after_uris) > len(final_uris):
         self.solution_status = SolutionStatus.MODIFIED
         return

+    # Create a mapping for O(1) lookup
+    final_files_map = {f.uri: f for f in self.final_files}
+
     for after_file in self.change_set.after:
-        for final_file in self.final_files:
-            if after_file.uri != final_file.uri:
-                continue
-            if after_file.content != final_file.content:
-                self.solution_status = SolutionStatus.MODIFIED
-                return
+        final_file = final_files_map.get(after_file.uri)
+        if final_file and after_file.content != final_file.content:
+            self.solution_status = SolutionStatus.MODIFIED
+            return

     self.solution_status = SolutionStatus.ACCEPTED
     return
```

kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (2)
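The map-based matching suggested above can be exercised in isolation. This is a minimal sketch: `FileStub`, `files_modified`, and the sample data are hypothetical stand-ins for the ORM objects, not code from this PR.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the ORM file objects; only uri/content matter here.
@dataclass(frozen=True)
class FileStub:
    uri: str
    content: str

def files_modified(after: list[FileStub], final: list[FileStub]) -> bool:
    """Return True if any matching-uri pair differs in content, in one O(n) pass."""
    # Build the uri -> file map once instead of scanning `final` per after-file.
    final_map = {f.uri: f for f in final}
    return any(
        (m := final_map.get(a.uri)) is not None and m.content != a.content
        for a in after
    )

after = [FileStub("a.py", "new"), FileStub("b.py", "same")]
final = [FileStub("a.py", "old"), FileStub("b.py", "same")]
print(files_modified(after, final))  # True: a.py content differs
```

The dictionary build is a single pass over `final_files`, after which every `after` file resolves its counterpart in constant time.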
514-534: Simplify redundant condition checks.

The `incident.solution is not None` check is redundant since it's already verified at line 511:

```diff
 # TODO: Make this cleaner
 metric.counted_solutions += 1
 metric.accepted_solutions += int(
-    incident.solution is not None
-    and incident.solution.solution_status == SolutionStatus.ACCEPTED
+    incident.solution.solution_status == SolutionStatus.ACCEPTED
 )
 metric.rejected_solutions += int(
-    incident.solution is not None
-    and incident.solution.solution_status == SolutionStatus.REJECTED
+    incident.solution.solution_status == SolutionStatus.REJECTED
 )
 metric.modified_solutions += int(
-    incident.solution is not None
-    and incident.solution.solution_status == SolutionStatus.MODIFIED
+    incident.solution.solution_status == SolutionStatus.MODIFIED
 )
 metric.pending_solutions += int(
-    incident.solution is not None
-    and incident.solution.solution_status == SolutionStatus.PENDING
+    incident.solution.solution_status == SolutionStatus.PENDING
 )
 metric.unknown_solutions += int(
-    incident.solution is not None
-    and incident.solution.solution_status == SolutionStatus.UNKNOWN
+    incident.solution.solution_status == SolutionStatus.UNKNOWN
 )
```
590-594: Simplify status check using 'in' operator.

```diff
 if not (
-    solution.solution_status == SolutionStatus.ACCEPTED
-    or solution.solution_status == SolutionStatus.MODIFIED
+    solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)
 ):
     all_solutions_accepted_or_modified = False
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py (7 hunks)
- kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (12 hunks)
🧰 Additional context used
🪛 Pylint (3.3.7)
kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py
[refactor] 206-206: Too few public methods (0/2)
(R0903)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py
[refactor] 248-248: Too many arguments (6/5)
(R0913)
[refactor] 248-248: Too many positional arguments (6/5)
(R0917)
[refactor] 295-295: Too many arguments (6/5)
(R0913)
[refactor] 295-295: Too many positional arguments (6/5)
(R0917)
[refactor] 476-476: Too few public methods (0/2)
(R0903)
[refactor] 591-592: Consider merging these comparisons with 'in' by using 'solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)'. Use a set instead if elements are hashable.
(R1714)
🪛 GitHub Actions: Check Types on PR
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py
[error] 599-599: mypy: Value of type "Task[None]" must be used (unused-awaitable). Are you missing an await?
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Run e2e test (ubuntu-24.04, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (macos-13, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (windows-latest, cmd, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-22.04-arm, bash, ChatOpenAI, kai-test-generation)
🔇 Additional comments (5)
kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py (3)
95-148: Well-implemented diff generation logic!

The computed property correctly generates unified diffs between before/after files with proper handling of matched and unmatched files. The TODO comment about similarity scoring is noted for future enhancement.
159-159: Documentation fix looks good.
451-459: Excellent use of SQLAlchemy event listeners!

The automatic status update ensures data consistency by recalculating the solution status before any database insert or update operation.
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (2)
37-41: Helpful documentation addition.
161-246: Clean separation of core logic from tool interface!

The refactoring properly separates business logic from MCP tool wrappers, and the addition of batch incident creation improves performance for bulk operations.
```python
def process_result_value(
    self, value: Any | None, dialect: Dialect
) -> list[SolutionFile] | None:
    if value is None:
        return None
    if isinstance(value, str):
        return [SolutionFile.model_validate_json(file) for file in value]
    return [SolutionFile.model_validate(file) for file in value]
```
Fix JSON deserialization bug in process_result_value.
The current implementation incorrectly attempts to iterate over a string and call model_validate_json on each character. When value is a string, it represents the entire JSON-encoded list.
Apply this fix:
```diff
 def process_result_value(
     self, value: Any | None, dialect: Dialect
 ) -> list[SolutionFile] | None:
     if value is None:
         return None
     if isinstance(value, str):
-        return [SolutionFile.model_validate_json(file) for file in value]
+        import json
+        parsed_list = json.loads(value)
+        return [SolutionFile.model_validate(file) for file in parsed_list]
     return [SolutionFile.model_validate(file) for file in value]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def process_result_value(
    self, value: Any | None, dialect: Dialect
) -> list[SolutionFile] | None:
    if value is None:
        return None
    if isinstance(value, str):
        import json
        parsed_list = json.loads(value)
        return [SolutionFile.model_validate(file) for file in parsed_list]
    return [SolutionFile.model_validate(file) for file in value]
```
🤖 Prompt for AI Agents
In kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py around lines 195
to 203, the method process_result_value incorrectly iterates over the string
value and calls model_validate_json on each character. Instead, parse the entire
JSON string once to get the list, then call model_validate on each item in that
list. This fixes the JSON deserialization bug by properly handling the
JSON-encoded list.
This seems legit right?
@djzager Yes, absolutely! CodeRabbit's analysis is spot-on. This is a legitimate bug.
The current code has a critical flaw: when value is a JSON string like `'[{"uri": "test.py", "content": "code"}]'`, the line:

```python
return [SolutionFile.model_validate_json(file) for file in value]
```

iterates over individual characters of the string (since strings are iterable), not the JSON array items. So it would try to call:

```python
SolutionFile.model_validate_json('[')
SolutionFile.model_validate_json('{')
SolutionFile.model_validate_json('"')
```
This would definitely fail at runtime.
The proposed fix correctly:
- Parses the entire JSON string with `json.loads(value)` to get the actual list
- Then validates each dictionary item in that list with `model_validate`
You can see the correct pattern is already used in SolutionChangeSetJSON.process_result_value() for single objects - it calls model_validate_json(value) on the entire string. For lists, we need that extra parsing step first.
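The character-iteration failure mode and the parse-then-validate fix are easy to verify with the standard library alone; the dict access stands in for the Pydantic `model_validate` call:

```python
import json

value = '[{"uri": "test.py", "content": "code"}]'

# Iterating the raw string yields characters, not array elements:
print([c for c in value][:3])  # ['[', '{', '"']

# Parsing first yields the actual list of dicts, ready for per-item validation:
parsed = json.loads(value)
print(parsed[0]["uri"])  # test.py
```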
```python
await session.commit()

if all_solutions_accepted_or_modified:
    asyncio.create_task(generate_hint(kai_ctx, client_id))
```
Fix unused awaitable warning.
The asyncio.create_task returns a Task that should be handled properly to avoid the pipeline failure:
```diff
 if all_solutions_accepted_or_modified:
-    asyncio.create_task(generate_hint(kai_ctx, client_id))
+    # Fire and forget task for generating hints in the background
+    task = asyncio.create_task(generate_hint(kai_ctx, client_id))
+    # Optionally add error handling
+    task.add_done_callback(
+        lambda t: log(f"Hint generation failed: {t.exception()}") if t.exception() else None
+    )
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 GitHub Actions: Check Types on PR
[error] 599-599: mypy: Value of type "Task[None]" must be used (unused-awaitable). Are you missing an await?
🤖 Prompt for AI Agents
In kai_mcp_solution_server/src/kai_mcp_solution_server/server.py at line 599,
the call to asyncio.create_task returns a Task object that is currently unused,
which can cause warnings or pipeline failures. Assign the result of
asyncio.create_task(generate_hint(kai_ctx, client_id)) to a variable or
otherwise ensure the Task is properly handled to avoid the unused awaitable
warning.
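The keep-a-reference-plus-done-callback pattern can be run end to end with plain asyncio. This is a sketch: `generate_hint_stub` and the `failures` list are illustrative stand-ins, not the server's actual coroutine or logger.

```python
import asyncio

failures: list[str] = []

async def generate_hint_stub() -> None:
    # Hypothetical stand-in for the real generate_hint coroutine.
    await asyncio.sleep(0)
    raise RuntimeError("boom")

def on_done(task: asyncio.Task) -> None:
    # Surface exceptions from fire-and-forget tasks instead of losing them.
    if task.exception() is not None:
        failures.append(f"Hint generation failed: {task.exception()}")

async def main() -> None:
    task = asyncio.create_task(generate_hint_stub())
    # Keeping a reference plus a done callback avoids both the unused-awaitable
    # warning and silently swallowed errors.
    task.add_done_callback(on_done)
    await asyncio.gather(task, return_exceptions=True)
    await asyncio.sleep(0)  # let pending done-callbacks run

asyncio.run(main())
print(failures)
```

Holding the reference also matters for correctness: the event loop only keeps weak references to tasks, so an unreferenced task can be garbage-collected mid-flight.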
Signed-off-by: JonahSussman <[email protected]>
Actionable comments posted: 1
♻️ Duplicate comments (2)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (2)
354-361: Unresolved issue: Direct change set mutation.

This code still directly modifies the `after` array elements in the change set, which was flagged in a previous review as potentially causing unintended side effects. The past review comment remains valid - consider creating a deep copy of the change set before making modifications to ensure the original data remains unchanged.
601-601: Unresolved issue: Unused awaitable warning.

The `asyncio.create_task` call still returns an unused Task object, which causes the static analysis warning mentioned in past reviews. The past review comment remains valid - the Task should be properly handled to avoid the unused awaitable warning.
🧹 Nitpick comments (1)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
593-596: Consider using 'in' operator for status comparison.

The static analysis suggestion to use the `in` operator is valid for improved readability.

```diff
-        if not (
-            solution.solution_status == SolutionStatus.ACCEPTED
-            or solution.solution_status == SolutionStatus.MODIFIED
-        ):
+        if solution.solution_status not in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED):
```
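The equivalence the suggestion relies on can be checked with a toy enum (a hypothetical subset of the server's `SolutionStatus`, not the real one):

```python
from enum import Enum, auto

class SolutionStatus(Enum):
    # Hypothetical subset of the server's status enum.
    ACCEPTED = auto()
    MODIFIED = auto()
    REJECTED = auto()

status = SolutionStatus.REJECTED

# Chained equality checks...
verbose = status == SolutionStatus.ACCEPTED or status == SolutionStatus.MODIFIED
# ...collapse to a single membership test:
concise = status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)

print(verbose == concise)  # True for any status value
```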
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (12 hunks)
- kai_mcp_solution_server/tests/mcp_client.py (3 hunks)
- kai_mcp_solution_server/ts-mcp-client/src/client.ts (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
kai_mcp_solution_server/ts-mcp-client/src/client.ts (1)
- kai_mcp_solution_server/ts-mcp-client/src/types/sdk.d.ts (1): Client (2-7)

kai_mcp_solution_server/tests/mcp_client.py (3)
- kai_mcp_solution_server/ts-mcp-client/src/types/sdk.d.ts (1): Client (2-7)
- kai_mcp_solution_server/src/kai_mcp_solution_server/dao.py (1): SolutionFile (86-88)
- kai/jsonrpc/util.py (1): model_dump (96-97)
🪛 Pylint (3.3.7)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py
[refactor] 248-248: Too many arguments (6/5)
(R0913)
[refactor] 248-248: Too many positional arguments (6/5)
(R0917)
[refactor] 295-295: Too many arguments (6/5)
(R0913)
[refactor] 295-295: Too many positional arguments (6/5)
(R0917)
[refactor] 476-476: Too few public methods (0/2)
(R0903)
[refactor] 593-594: Consider merging these comparisons with 'in' by using 'solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)'. Use a set instead if elements are hashable.
(R1714)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Run e2e test (macos-13, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-22.04-arm, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (macos-latest, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-24.04, bash, ChatOpenAI, kai-test-generation)
🔇 Additional comments (11)
kai_mcp_solution_server/ts-mcp-client/src/client.ts (2)
241-271: LGTM! Clean function rename and payload structure update.

The function correctly implements the new `accept_file` tool interface with proper error handling and logging. The payload structure with a `solution_file` object containing `uri` and `content` aligns with the server-side expectations.

406-407: Proper test suite integration.

The test suite call is correctly updated to use the new `runAcceptFile` function, maintaining consistency with the tool interface changes.

kai_mcp_solution_server/tests/mcp_client.py (2)

167-186: Consistent implementation of the new accept_file tool.

The function properly implements the new `accept_file` tool interface with correct payload structure using `solution_file.model_dump()`. The error handling and logging are appropriate.

421-429: Proper test suite integration with SolutionFile construction.

The test suite correctly constructs a `SolutionFile` instance and passes it to the new `_run_accept_file` function, maintaining consistency with the API changes.

kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (7)

26-26: Appropriate import addition for SolutionFile.

The import of `SolutionFile` from the DAO module is correctly added to support the new file acceptance functionality.

37-41: Helpful documentation addition.

The docstring with example DSNs for PostgreSQL and SQLite provides valuable guidance for configuration.

205-223: Clean separation of core logic and tool wrapper.

The refactoring properly separates the core `create_incident` function from its tool wrapper `tool_create_incident`, improving testability and reusability.

225-245: Useful batch incident creation functionality.

The new `tool_create_multiple_incidents` function provides efficient batch processing while properly reusing the core `create_incident` logic.

294-317: Consistent tool wrapper pattern.

The `tool_create_solution` wrapper follows the same clean pattern of delegating to core logic while handling context casting.

477-534: Enhanced metrics tracking with proper status counting.

The expanded `SuccessRateMetric` model and updated calculation logic properly track all solution statuses, providing comprehensive analytics.

563-614: Well-implemented file acceptance functionality.

The new `accept_file` function properly manages the `final_files` list with careful SQLAlchemy handling and triggers hint generation when appropriate. The tool wrapper follows the established pattern.
```python
# if not result:
#     raise ValueError("! No related solutions found")
```
🛠️ Refactor suggestion
Address commented out error handling.
The commented out ValueError for when no hint is found should be either removed permanently or properly handled. This temporary comment suggests incomplete refactoring.
```diff
-    # if not result:
-    #     raise ValueError("! No related solutions found")
```

If the server no longer raises errors for missing hints, remove this code. Otherwise, clarify the intended behavior.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In kai_mcp_solution_server/tests/mcp_client.py around lines 203 to 204, the
commented out ValueError for no related solutions found is incomplete error
handling. Decide if the server should raise an error when no hint is found; if
not, remove these commented lines permanently. If error raising is intended,
uncomment and properly handle the ValueError to ensure consistent behavior.
djzager left a comment
Needs reject_file 😉
Signed-off-by: JonahSussman <[email protected]>
Actionable comments posted: 1
♻️ Duplicate comments (2)
kai_mcp_solution_server/tests/mcp_client.py (1)
200-201: Address commented out error handling.

kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
683-683: Fix unused awaitable warning.
🧹 Nitpick comments (3)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (1)
34-34: Address the TODO: Implement similarity score matching

The TODO comment indicates that similarity score matching for file pairs is not yet implemented. This could lead to suboptimal diff generation when files are renamed or significantly restructured.
Would you like me to help implement the similarity score logic using difflib.SequenceMatcher or a similar approach?

kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (1)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (1)
311-323: Simplify the status update logic.

The elif chain can be simplified since each condition returns immediately.

```diff
 def update_solution_status(self) -> None:
     for file in self.after:
         if file.status == SolutionStatus.REJECTED:
             self.solution_status = SolutionStatus.REJECTED
             return
-        elif file.status == SolutionStatus.UNKNOWN:
+        if file.status == SolutionStatus.UNKNOWN:
             self.solution_status = SolutionStatus.UNKNOWN
             return
-        elif file.status == SolutionStatus.MODIFIED:
+        if file.status == SolutionStatus.MODIFIED:
             self.solution_status = SolutionStatus.MODIFIED
             return
-        elif file.status == SolutionStatus.PENDING:
+        if file.status == SolutionStatus.PENDING:
             self.solution_status = SolutionStatus.PENDING
             return
```

kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
674-678: Simplify status comparison.

```diff
-    if not (
-        solution.solution_status == SolutionStatus.ACCEPTED
-        or solution.solution_status == SolutionStatus.MODIFIED
-    ):
+    if solution.solution_status not in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (7 hunks)
- kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (1 hunk)
- kai_mcp_solution_server/src/kai_mcp_solution_server/db/type_decorators.py (1 hunk)
- kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (11 hunks)
- kai_mcp_solution_server/src/kai_mcp_solution_server/util.py (0 hunks)
- kai_mcp_solution_server/tests/data/2/sequence.json (1 hunk)
- kai_mcp_solution_server/tests/mcp_client.py (5 hunks)
- kai_mcp_solution_server/tests/mcp_loader_script.py (2 hunks)
- kai_mcp_solution_server/tools/alembic/env.py (1 hunk)
💤 Files with no reviewable changes (1)
- kai_mcp_solution_server/src/kai_mcp_solution_server/util.py
🧰 Additional context used
🧬 Code Graph Analysis (3)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/type_decorators.py (2)
- kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (2): SolutionChangeSet (66-68), SolutionFile (7-9)
- kai/jsonrpc/util.py (1): model_dump (96-97)
kai_mcp_solution_server/tests/mcp_client.py (2)
- kai/jsonrpc/util.py (1): model_dump (96-97)
- kai_mcp_solution_server/ts-mcp-client/src/types/sdk.d.ts (1): Client (2-7)
kai_mcp_solution_server/tools/alembic/env.py (1)
- kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (1): Base (90-96)
🪛 Pylint (3.3.7)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py
[refactor] 7-7: Too few public methods (0/2)
(R0903)
[refactor] 66-66: Too few public methods (0/2)
(R0903)
[refactor] 79-79: Too few public methods (0/2)
(R0903)
[refactor] 84-84: Too few public methods (0/2)
(R0903)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py
[refactor] 312-323: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[refactor] 340-340: Too few public methods (1/2)
(R0903)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py
[refactor] 36-36: Too few public methods (1/2)
(R0903)
[refactor] 251-251: Too many arguments (7/5)
(R0913)
[refactor] 251-251: Too many positional arguments (7/5)
(R0917)
[refactor] 251-251: Too many local variables (23/15)
(R0914)
[refactor] 365-365: Too many arguments (7/5)
(R0913)
[refactor] 365-365: Too many positional arguments (7/5)
(R0917)
[refactor] 544-544: Too few public methods (0/2)
(R0903)
[refactor] 675-676: Consider merging these comparisons with 'in' by using 'solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)'. Use a set instead if elements are hashable.
(R1714)
🪛 Flake8 (7.2.0)
kai_mcp_solution_server/tools/alembic/env.py
[error] 21-21: module level import not at top of file
(E402)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Run e2e test (ubuntu-22.04-arm, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (windows-latest, cmd, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (macos-latest, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (macos-13, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-24.04, bash, ChatOpenAI, kai-test-generation)
🔇 Additional comments (21)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (2)
7-9: LGTM: Clean data model definition

The `SolutionFile` model is well-defined with appropriate fields for representing file content and location.

66-68: LGTM: Well-structured data models

The Pydantic models are appropriately designed for their purpose. The static analysis warnings about "too few public methods" are false positives - these are data models, not service classes.
Also applies to: 71-76, 79-81, 84-92
kai_mcp_solution_server/tools/alembic/env.py (1)
21-21: LGTM: Import path updated correctly

The import path change aligns with the database module reorganization. The trunk-ignore for E402 is appropriate since the import needs to be after the config setup.
kai_mcp_solution_server/tests/mcp_loader_script.py (2)
154-157: LGTM: Improved user experience

The updated prompt clearly explains the three input formats, making the tool more user-friendly.

173-188: LGTM: Well-implemented sequential tool call feature

The logic for handling JSON file sequences is well-structured with appropriate logging and user confirmation between calls. This will be valuable for testing batch operations.
kai_mcp_solution_server/src/kai_mcp_solution_server/db/type_decorators.py (2)
9-30: LGTM: Well-implemented type decorator

The `SolutionChangeSetJSON` decorator properly handles serialization/deserialization between Pydantic models and PostgreSQL JSON, with appropriate null handling.

33-60: LGTM: Robust list handling with good error validation

The `ListSolutionFileJSON` decorator handles complex cases well, including:
- Proper null handling
- Support for both string and direct object deserialization
- Graceful handling of single dict vs list scenarios
- Clear error messages for invalid input types
kai_mcp_solution_server/tests/mcp_client.py (4)
27-27: LGTM!

The import changes correctly reflect the module reorganization where Pydantic models were moved to `db.python_objects`.

132-143: LGTM!

The request structure correctly matches the refactored `create_solution` tool that now accepts separate `before` and `after` file lists.

164-183: LGTM!

The new `_run_accept_file` function correctly replaces the removed `update_solution_status` functionality and properly handles the `SolutionFile` object.

418-426: LGTM!

The test runner correctly uses the new `accept_file` functionality with a properly constructed `SolutionFile` instance.

kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (5)

17-17: LGTM!

The imports correctly support the new event-driven functionality and modular structure.

Also applies to: 40-48

95-95: LGTM!

The type annotation correctly maps the new `list[SolutionFile]` type to its JSON decorator.

241-263: Well-structured association tables!

The separate association tables for before/after files provide clear semantics and support the solution versioning workflow.

340-371: Excellent file versioning design!

The linked list structure with `next` relationships provides a clean way to track file versions over time while maintaining referential integrity.
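The linked-list idea can be sketched outside the ORM. This is a plain-Python illustration under stated assumptions: `FileVersion` and `append_version` are hypothetical stand-ins, not the actual `DBFile` model or its relationship handling.

```python
from __future__ import annotations
from dataclasses import dataclass

# Illustrative version-chain node; `next` points at the newer revision.
@dataclass
class FileVersion:
    uri: str
    content: str
    next: FileVersion | None = None

def append_version(head: FileVersion, content: str) -> FileVersion:
    """Walk to the newest version and link a new one only if content changed."""
    tail = head
    while tail.next is not None:
        tail = tail.next
    if tail.content == content:
        return tail  # identical content: reuse the current version
    tail.next = FileVersion(head.uri, content)
    return tail.next

head = FileVersion("app.py", "v1")
append_version(head, "v2")
append_version(head, "v2")  # no-op: content unchanged

versions = []
node: FileVersion | None = head
while node is not None:
    versions.append(node.content)
    node = node.next
print(versions)  # ['v1', 'v2']
```

The same walk gives full history from any head node, which is the referential-integrity benefit the review calls out.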
373-391: Smart use of event listeners!

The automatic status updates ensure consistency between file and solution statuses without requiring manual synchronization.
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (5)
40-44: Helpful DSN documentation!

The example DSNs for PostgreSQL and SQLite make configuration clearer for users.

208-248: Clean separation of concerns!

The tool wrapper pattern effectively separates the MCP interface from core business logic, improving testability and maintainability.

283-346: Robust file versioning implementation!

The logic correctly handles:
- Matching existing before files when content is identical
- Creating new versions when content differs
- Maintaining the linked list structure for version history
426-431: Clean diff generation!

Using the centralized `get_diff` function ensures consistent diff formatting across the codebase.

544-605: Comprehensive metrics tracking!

The enhanced metrics now provide detailed insights into all solution statuses, supporting better analysis of the system's performance.
Signed-off-by: JonahSussman <[email protected]>
Actionable comments posted: 5
🔭 Outside diff range comments (1)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
254-393: Simplify the complex file handling logic and address FIXMEs.

The `create_solution` function has become quite complex with intricate file handling logic. Several concerns:

- FIXME comment on line 329: "Something is fishy here" indicates unresolved issues
- Complex before/after file logic: The nested conditions for handling file content differences could lead to data integrity issues
- SQLAlchemy relationship handling: Creating new `DBFile` instances when content differs may cause caching problems

Consider breaking this function into smaller, focused functions:

- `_handle_before_files()`
- `_handle_after_files()`
- `_create_solution_record()`

This would improve testability and maintainability. Would you like me to propose a refactored implementation?

```diff
 # Try to match the before files. If you can't, create them.
 # Create the after files. Associate them with the before files.
 # Create the solution.
+# TODO: Extract file handling logic into separate functions for better maintainability

 db_before_files: set[DBFile] = set()
 for file in before:
     stmt = select(DBFile).where(
         DBFile.client_id == client_id,
         DBFile.uri == file.uri,
     )
     prev_before = (await session.execute(stmt)).scalar_one_or_none()

     if prev_before is None:
         next_before = DBFile(
             client_id=client_id,
             uri=file.uri,
             content=file.content,
             status=SolutionStatus.PENDING,
             solution_before=set(),
             solution_after=set(),
             next=None,
         )
         session.add(next_before)
         db_before_files.add(next_before)
     elif prev_before.content != file.content:
         next_before = DBFile(
             client_id=client_id,
             uri=file.uri,
             content=file.content,
             status=SolutionStatus.PENDING,
             solution_before=set(),
             solution_after=set(),
             next=None,
         )
         prev_before.status = SolutionStatus.PENDING
         prev_before.next = next_before
         session.add(next_before)
         db_before_files.add(next_before)
     else:
         db_before_files.add(prev_before)

 db_after_files: set[DBFile] = set()
 for file in after:
-    # FIXME: Something is fishy here...
+    # TODO: Clarify the intended behavior for after files
     next_after = DBFile(
```
♻️ Duplicate comments (1)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
779-779: Handle the unused awaitable properly. This line creates a fire-and-forget task but doesn't handle potential exceptions, which could cause silent failures.
The past review comment correctly identified this issue. Consider adding proper error handling:
```diff
  if all_solutions_accepted_or_modified:
-     asyncio.create_task(generate_hint_v2(kai_ctx, client_id))  # type: ignore[unused-awaitable, unused-ignore]
+     # Fire and forget task for generating hints in the background
+     task = asyncio.create_task(generate_hint_v2(kai_ctx, client_id))
+     task.add_done_callback(
+         lambda t: log(f"Hint generation failed: {t.exception()}") if t.exception() else None
+     )
```
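The done-callback pattern in the suggestion can be exercised end to end with stand-ins; `generate_hint` and `log` below are simulated placeholders for the server's functions, not the real implementations:

```python
import asyncio

# Collected messages stand in for the server's log() helper.
logged: list[str] = []


def log(msg: str) -> None:
    logged.append(msg)


async def generate_hint(client_id: str) -> None:
    # Simulated background job that fails, like a hint-generation error.
    raise RuntimeError(f"no solutions for {client_id}")


def _report(task: asyncio.Task) -> None:
    # Done-callbacks run even when the task raised; retrieving the exception
    # here prevents the silent-failure mode of a bare create_task().
    exc = task.exception()
    if exc is not None:
        log(f"Hint generation failed: {exc}")


async def main() -> None:
    task = asyncio.create_task(generate_hint("client-1"))
    task.add_done_callback(_report)
    # Give the fire-and-forget task and its callback a chance to run
    # before the loop closes.
    await asyncio.sleep(0)
    await asyncio.sleep(0)


asyncio.run(main())
print(logged)  # ['Hint generation failed: no solutions for client-1']
```

Retrieving the exception in the callback also suppresses asyncio's "Task exception was never retrieved" warning.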
🧹 Nitpick comments (5)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/parser.py (1)
21-28: Consider explicitly handling unimplemented language cases. The current implementation silently returns an empty `DiffableDict()` for unsupported languages. Consider raising a `NotImplementedError` for the XML case to make it clear that it's not yet implemented.

```diff
  match language:
      case Language.JAVA:
          return _extract_java_info(
              ts.Parser(ts.Language(ts_java.language())).parse(
                  content.encode("utf-8")
              )
          )
+     case Language.XML:
+         raise NotImplementedError("XML parsing is not yet implemented")

  return DiffableDict()
```

kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/base.py (1)
41-49: Simplify the equality check using `all()`. The current implementation uses a for loop that can be simplified.

```diff
  def __eq__(self, o: object) -> bool:
      if isinstance(o, type(self)):
          if set(self.keys()) != set(o.keys()):
              return False
-         for key in self.keys():
-             if self[key] != o[key]:
-                 return False
-         return True
+         return all(self[key] == o[key] for key in self.keys())
      return False
```

kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (1)
52-53: Make the similarity threshold configurable. The hardcoded threshold of 0.8 might not be suitable for all use cases.

```diff
  def associate_files(
-     before: list[SolutionFile], after: list[SolutionFile]
+     before: list[SolutionFile], after: list[SolutionFile], similarity_threshold: float = 0.8
  ) -> dict[tuple[str, str], tuple[SolutionFile, SolutionFile]]:
      # ... existing code ...
-     if best_similarity < 0.8:
+     if best_similarity < similarity_threshold:
          break
```

kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (1)
375-391: Remove commented debug print statements. The commented print statements appear to be debug code that should be removed before merging.

```diff
  if isinstance(target, DBSolution):
-     # print(f"DBSolution target", file=sys.stderr)
-     # print(f"  Solution {target.id}", file=sys.stderr)
-     # print(f"  Before status: {target.solution_status}", file=sys.stderr)
      target.update_solution_status()
-     # print(f"  After status: {target.solution_status}", file=sys.stderr)
      return

  if target.solution_before or target.solution_after:
-     # print(f"DBFile target: {target.uri}", file=sys.stderr)
-     # If the file is part of a solution, we need to update the solution status
      for solution in target.solution_before.union(target.solution_after):
-         # print(f"  Solution {solution.id}", file=sys.stderr)
-         # print(f"  Before status: {solution.solution_status}", file=sys.stderr)
          solution.update_solution_status()
-         # print(f"  After status: {solution.solution_status}", file=sys.stderr)
```

kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (1)
770-774: Simplify status comparison using the `in` operator. The static analysis correctly suggests using the `in` operator for cleaner comparison.

```diff
  if not (
-     solution.solution_status == SolutionStatus.ACCEPTED
-     or solution.solution_status == SolutionStatus.MODIFIED
+     solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)
  ):
      all_solutions_accepted_or_modified = False
```
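On the `associate_files` similarity threshold raised in the nitpicks above: the PR's similarity metric isn't shown here, but `difflib.SequenceMatcher` is one common way to compute such a ratio, and it illustrates why a cutoff like 0.8 separates small edits from full rewrites. A sketch under that assumption:

```python
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the sequences are identical.
    return SequenceMatcher(None, a, b).ratio()


before = "public class App { void run() {} }"
after_minor = "public class App { void run() { log(); } }"
after_rewrite = "def main():\n    print('hello')"

THRESHOLD = 0.8  # the hardcoded cutoff the review suggests making configurable

print(similarity(before, after_minor) >= THRESHOLD)    # a small edit stays above
print(similarity(before, after_rewrite) >= THRESHOLD)  # a full rewrite falls below
```

Making the threshold a parameter (as the suggested diff does) lets callers tune how aggressively before/after files are paired.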
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`kai_mcp_solution_server/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (8)
- `kai_mcp_solution_server/pyproject.toml` (1 hunks)
- `kai_mcp_solution_server/requirements.txt` (1 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/base.py` (1 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py` (1 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/parser.py` (1 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py` (7 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py` (1 hunks)
- `kai_mcp_solution_server/src/kai_mcp_solution_server/server.py` (12 hunks)
✅ Files skipped from review due to trivial changes (2)
- kai_mcp_solution_server/requirements.txt
- kai_mcp_solution_server/pyproject.toml
🧰 Additional context used
🧬 Code Graph Analysis (3)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/parser.py (2)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/base.py (2)
- `DiffableDict` (38-72)
- `DiffableSummary` (7-31)

kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py (1)
_extract_java_info(236-411)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/base.py (1)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py (10)
- `to_dict` (22-26), (58-65), (101-111), (168-180), (214-220)
- `diff` (28-37), (67-76), (113-141), (182-206), (222-233)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (2)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py (3)
- `SolutionChangeSet` (99-101)
- `SolutionFile` (7-9)
- `SolutionStatus` (104-109)

kai_mcp_solution_server/src/kai_mcp_solution_server/db/type_decorators.py (2)

- `ListSolutionFileJSON` (33-60)
- `SolutionChangeSetJSON` (9-30)
🪛 Ruff (0.11.9)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/base.py
45-48: Use return all(self[key] == o[key] for key in self.keys()) instead of for loop
Replace with return all(self[key] == o[key] for key in self.keys())
(SIM110)
🪛 Pylint (3.3.7)
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py
[refactor] 239-239: Too many local variables (26/15)
(R0914)
[refactor] 239-239: Too many branches (25/12)
(R0912)
[refactor] 239-239: Too many statements (79/50)
(R0915)
[refactor] 236-236: Too many statements (81/50)
(R0915)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/python_objects.py
[refactor] 7-7: Too few public methods (0/2)
(R0903)
[refactor] 12-12: Too many local variables (17/15)
(R0914)
[refactor] 99-99: Too few public methods (0/2)
(R0903)
[refactor] 112-112: Too few public methods (0/2)
(R0903)
[refactor] 117-117: Too few public methods (0/2)
(R0903)
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py
[refactor] 39-39: Too few public methods (1/2)
(R0903)
[refactor] 254-254: Too many arguments (7/5)
(R0913)
[refactor] 254-254: Too many positional arguments (7/5)
(R0917)
[refactor] 254-254: Too many local variables (23/15)
(R0914)
[refactor] 369-369: Too many arguments (7/5)
(R0913)
[refactor] 369-369: Too many positional arguments (7/5)
(R0917)
[refactor] 459-459: Too many local variables (18/15)
(R0914)
[refactor] 625-625: Too few public methods (0/2)
(R0903)
[refactor] 771-772: Consider merging these comparisons with 'in' by using 'solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)'. Use a set instead if elements are hashable.
(R1714)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py
[refactor] 331-331: Too few public methods (1/2)
(R0903)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Run e2e test (macos-latest, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (windows-latest, cmd, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (macos-13, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-22.04-arm, bash, ChatOpenAI, kai-test-generation)
- GitHub Check: Run e2e test (ubuntu-24.04, bash, ChatOpenAI, kai-test-generation)
🔇 Additional comments (5)
kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py (1)
310-316: LGTM! The solution status update logic is well-implemented. The method correctly prioritizes non-accepted statuses and defaults to accepted when all files are accepted.
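The priority rule described here can be modeled in a few lines. The sketch below is an inference from the review text (any non-accepted file status wins; an all-accepted set yields `ACCEPTED`), not the actual `update_solution_status` implementation:

```python
from enum import Enum


class SolutionStatus(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    MODIFIED = "modified"
    UNKNOWN = "unknown"


def solution_status(file_statuses: list[SolutionStatus]) -> SolutionStatus:
    # Hypothetical precedence: the first non-accepted file status wins;
    # only an all-ACCEPTED file set yields an ACCEPTED solution.
    for status in file_statuses:
        if status != SolutionStatus.ACCEPTED:
            return status
    return SolutionStatus.ACCEPTED


print(solution_status([SolutionStatus.ACCEPTED, SolutionStatus.ACCEPTED]))
# SolutionStatus.ACCEPTED
print(solution_status([SolutionStatus.ACCEPTED, SolutionStatus.PENDING]))
# SolutionStatus.PENDING
```

A pure function like this is trivial to unit-test, which is one reason separating status derivation from ORM state pays off.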
kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py (1)
267-269: Verify escape sequence handling. The code appears to be replacing literal backslash-n and backslash-t strings rather than actual newline and tab characters. If you intend to remove newlines and tabs, use proper escape sequences.
If you want to remove actual newlines and tabs:
```diff
- .replace("\\n", "", -1)
- .replace("\\t", "", -1)
+ .replace("\n", "", -1)
+ .replace("\t", "", -1)
```

If you want to remove escaped sequences in the source code, the current implementation is correct.
Also applies to: 329-331
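The distinction the comment draws is easy to verify in isolation: `"\\n"` in Python source is a two-character string (backslash plus `n`), while `"\n"` is a single newline character, so the two `.replace` variants target different things:

```python
# tree-sitter exposes the raw source bytes, so decoded text contains real
# newlines/tabs, plus literal backslash-n sequences only where the Java
# source itself had escaped characters inside string literals.
java_snippet = 'void run() {\n\tSystem.out.println("line1\\nline2");\n}'

# Replacing the two-character sequence backslash+n keeps real newlines:
kept_newlines = java_snippet.replace("\\n", "").replace("\\t", "")

# Replacing actual control characters flattens the layout instead:
flattened = java_snippet.replace("\n", "").replace("\t", "")

print("\n" in kept_newlines)   # True  - real newlines survive
print("\\n" in flattened)      # True  - the escaped \n in the string literal survives
```

So the PR's current code strips escape sequences out of Java string literals, while the suggested change would strip the source's actual line structure; which is wanted depends on what the AST diff should consider significant.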
kai_mcp_solution_server/src/kai_mcp_solution_server/server.py (3)
4-4: LGTM: Import addition for stderr output. The `sys` import is appropriately added to support stderr output in the hint generation functions.
43-47: LGTM: Helpful documentation for database DSN examples. The documentation clearly shows example DSNs for different database types, improving developer experience.
459-534: LGTM: AST-based hint generation with good separation. The `generate_hint_v2` function provides an improved approach using AST diffs instead of textual diffs. The separation from v1 allows for A/B testing of hint generation strategies.

Minor suggestions for consistency:

- Consider using the same logging approach (`log()`) instead of mixing with `print(..., file=sys.stderr)`
- The commented debug prints could be removed if no longer needed
```python
def _extract_java_info(root: ts.Tree) -> DiffableSummary:
    cursor = root.walk()

    def traverse(node: ts.Node) -> DiffableSummary:
        match node.type:
            case "modifiers":
                annotations = DiffableDict[str, JAnnotation]()
                for child in node.children:
                    match child.type:
                        case "marker_annotation" | "annotation":
                            annotation_name = ""
                            params = ""
                            for annotation_child in child.children:
                                match annotation_child.type:
                                    case "identifier":
                                        if annotation_child.text is None:
                                            raise ValueError(
                                                "Annotation identifier has no text"
                                            )

                                        annotation_name = annotation_child.text.decode(
                                            "utf-8"
                                        )
                                    case "annotation_argument_list":
                                        if annotation_child.text is None:
                                            raise ValueError(
                                                "Annotation argument list has no text"
                                            )

                                        params = (
                                            annotation_child.text.decode("utf-8")
                                            .replace("\\n", "", -1)
                                            .replace("\\t", "", -1)
                                        )
                            annotation = JAnnotation(annotation_name, params)
                            annotations[str(hash(annotation))] = annotation
                return annotations
            case "field_declaration":
                name = ""
                type = ""
                annotations = DiffableDict[str, JAnnotation]()
                for field_child in node.children:
                    match field_child.type:
                        case "type_identifier" | "generic_type":
                            if field_child.text is None:
                                raise ValueError("Field type has no text")

                            type = field_child.text.decode("utf-8")

                        case "variable_declarator":
                            for var_child in field_child.children:
                                if var_child.text is None:
                                    raise ValueError("Variable declarator has no text")

                                if var_child.type == "identifier":
                                    name = var_child.text.decode("utf-8")

                        case "modifiers":
                            field_child_info = traverse(field_child)
                            if isinstance(field_child_info, DiffableDict):
                                annotations = field_child_info
                return JVariable(name=name, typ=type, annotations=annotations)
            case "method_declaration":
                name = ""
                body = ""
                params = ""
                annotations = DiffableDict[str, JAnnotation]()
                for mt_child in node.children:
                    match mt_child.type:
                        case "identifier":
                            if mt_child.text is None:
                                raise ValueError("Method identifier has no text")

                            name = mt_child.text.decode("utf-8")

                        case "modifiers":
                            anns = traverse(mt_child)
                            if isinstance(anns, DiffableDict):
                                annotations = anns

                        case "formal_parameters":
                            if mt_child.text is None:
                                raise ValueError("Method parameters have no text")

                            params = mt_child.text.decode("utf-8")

                        case "block":
                            if mt_child.text is None:
                                raise ValueError("Method block has no text")

                            body = (
                                mt_child.text.decode("utf-8")
                                .strip()
                                .replace("\\n", "", -1)
                                .replace("\\t", "", -1)
                            )
                return JMethod(
                    name=name, annotations=annotations, body=body, parameters=params
                )
            case "class_declaration":
                name = ""
                fields = DiffableDict[str, JVariable]()
                methods = DiffableDict[str, JMethod]()
                interfaces: set[str] = set()
                super_class: str = ""
                annotations = DiffableDict[str, JAnnotation]()

                for class_child in node.children:
                    match class_child.type:
                        case "modifiers":
                            mods = traverse(class_child)
                            if isinstance(mods, DiffableDict):
                                annotations = mods
                        case "identifier":
                            if class_child.text is None:
                                raise ValueError("Class identifier has no text")

                            name = class_child.text.decode("utf-8")
                        case "superclass":
                            if class_child.text is None:
                                raise ValueError("Superclass has no text")

                            super_class = class_child.text.decode("utf-8")
                        case "super_interfaces":
                            interfaces = set()

                            for i in class_child.children:
                                if i.type != "," and i.text is not None:
                                    interfaces.add(i.text.decode("utf-8"))

                        case "class_body":
                            for cb_child in class_child.children:
                                match cb_child.type:
                                    case "field_declaration" | "method_declaration":
                                        cb_info = traverse(cb_child)
                                        match cb_info:
                                            case cb_info if isinstance(
                                                cb_info, JVariable
                                            ):
                                                fields[str(hash(cb_info))] = cb_info
                                            case cb_info if isinstance(
                                                cb_info, JMethod
                                            ):
                                                methods[str(hash(cb_info))] = cb_info
                return JClass(
                    name=name,
                    fields=fields,
                    methods=methods,
                    annotations=annotations,
                    interfaces=interfaces,
                    super_class=super_class,
                )

        classes = DiffableDict[str, DiffableSummary]()
        imports = set()
        for child in node.children:
            match child.type:
                case "import_declaration":
                    if child.text is None:
                        raise ValueError("Import declaration has no text")

                    imports.add(
                        child.text.decode("utf-8")
                        .replace("import ", "", -1)
                        .rstrip(";")
                    )
                case _:
                    cb_info = traverse(child)
                    if isinstance(cb_info, JClass):
                        classes[cb_info.name] = cb_info
        return JFile(classes=classes, imports=imports)

    if cursor.node is None:
        return DiffableDict()

    return traverse(cursor.node)
```
🛠️ Refactor suggestion
Consider refactoring the large parsing function.
The _extract_java_info function is very complex with deep nesting and many branches. This makes it difficult to maintain and test.
Would you like me to suggest a refactored version that breaks down the parsing logic into smaller, more manageable functions for each node type (e.g., _parse_annotation, _parse_method, _parse_class, etc.)?
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 239-239: Too many local variables (26/15)
(R0914)
[refactor] 239-239: Too many branches (25/12)
(R0912)
[refactor] 239-239: Too many statements (79/50)
(R0915)
[refactor] 236-236: Too many statements (81/50)
(R0915)
🤖 Prompt for AI Agents
In kai_mcp_solution_server/src/kai_mcp_solution_server/ast_diff/java.py from
lines 236 to 411, the _extract_java_info function is overly complex with deep
nesting and multiple branches, making it hard to maintain and test. Refactor
this function by extracting the parsing logic for each node type into separate
smaller functions such as _parse_annotation, _parse_method, _parse_class, and so
on. Then, replace the corresponding code blocks in _extract_java_info with calls
to these helper functions to improve readability and modularity.
```python
# FIXME: This should be automatic, but its not
incident.solution.update_solution_status()

# TODO: Make this cleaner
metric.counted_solutions += 1
metric.accepted_solutions += int(
    incident.solution is not None
    and incident.solution.solution_status == SolutionStatus.ACCEPTED
)
metric.rejected_solutions += int(
    incident.solution is not None
    and incident.solution.solution_status == SolutionStatus.REJECTED
)
metric.modified_solutions += int(
    incident.solution is not None
    and incident.solution.solution_status == SolutionStatus.MODIFIED
)
metric.pending_solutions += int(
    incident.solution is not None
    and incident.solution.solution_status == SolutionStatus.PENDING
)
metric.unknown_solutions += int(
    incident.solution is not None
    and incident.solution.solution_status == SolutionStatus.UNKNOWN
)
```
💡 Verification agent
🧩 Analysis chain
Address the manual solution status update and improve efficiency.
The FIXME comment indicates that automatic status updates aren't working as expected, requiring manual intervention.
```diff
  # FIXME: This should be automatic, but its not
- incident.solution.update_solution_status()
+ # TODO: Investigate why SQLAlchemy isn't automatically updating solution status
+ incident.solution.update_solution_status()
```

Additionally, consider optimizing the status counting logic:
```diff
- # TODO: Make this cleaner
  metric.counted_solutions += 1
- metric.accepted_solutions += int(
-     incident.solution is not None
-     and incident.solution.solution_status == SolutionStatus.ACCEPTED
- )
- metric.rejected_solutions += int(
-     incident.solution is not None
-     and incident.solution.solution_status == SolutionStatus.REJECTED
- )
- metric.modified_solutions += int(
-     incident.solution is not None
-     and incident.solution.solution_status == SolutionStatus.MODIFIED
- )
- metric.pending_solutions += int(
-     incident.solution is not None
-     and incident.solution.solution_status == SolutionStatus.PENDING
- )
- metric.unknown_solutions += int(
-     incident.solution is not None
-     and incident.solution.solution_status == SolutionStatus.UNKNOWN
- )
+ if incident.solution is not None:
+     status = incident.solution.solution_status
+     if status == SolutionStatus.ACCEPTED:
+         metric.accepted_solutions += 1
+     elif status == SolutionStatus.REJECTED:
+         metric.rejected_solutions += 1
+     elif status == SolutionStatus.MODIFIED:
+         metric.modified_solutions += 1
+     elif status == SolutionStatus.PENDING:
+         metric.pending_solutions += 1
+     elif status == SolutionStatus.UNKNOWN:
+         metric.unknown_solutions += 1
```

Would you like me to investigate why the automatic status updates aren't working?
To understand why the status update isn’t automatically applied, let’s locate where update_solution_status is defined and how it’s used:
🏁 Script executed:
```bash
#!/bin/bash
# Find the definition and all references of update_solution_status in the repo
grep -R "def update_solution_status" -n kai_mcp_solution_server
grep -R "update_solution_status" -n kai_mcp_solution_server
```

Length of output: 1076
🏁 Script executed:
```bash
#!/bin/bash
# Show the update_solution_status implementation
sed -n '280,350p' kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py
# Show the auto_update_solution_status definition and its usage
sed -n '350,430p' kai_mcp_solution_server/src/kai_mcp_solution_server/db/dao.py
```

Length of output: 4984
Automate solution_status recalculation via ORM load listener and clean up manual calls
To ensure solution_status is always in sync without manual intervention, register a load listener (or use an ORM reconstructor) in db/dao.py, then remove the explicit call in the server and refactor the metrics logic:
• In src/kai_mcp_solution_server/db/dao.py (after the existing auto_update_solution_status):
```python
from sqlalchemy import event


@event.listens_for(DBSolution, "load")
def _auto_update_solution_status_on_load(target: DBSolution, context: Any) -> None:
    target.update_solution_status()
```

• In src/kai_mcp_solution_server/server.py, remove the manual update at line ~663:
```diff
- # FIXME: This should be automatic, but its not
- incident.solution.update_solution_status()
```

• Also in server.py, replace the status-counting block (lines 667–686) with:
```python
metric.counted_solutions += 1
if incident.solution:
    status = incident.solution.solution_status
    if status == SolutionStatus.ACCEPTED:
        metric.accepted_solutions += 1
    elif status == SolutionStatus.REJECTED:
        metric.rejected_solutions += 1
    elif status == SolutionStatus.MODIFIED:
        metric.modified_solutions += 1
    elif status == SolutionStatus.PENDING:
        metric.pending_solutions += 1
    elif status == SolutionStatus.UNKNOWN:
        metric.unknown_solutions += 1
```

This removes the need for a manual status update, leverages SQLAlchemy's load event to keep solution_status current, and keeps your metrics logic clear and efficient.
🤖 Prompt for AI Agents
In kai_mcp_solution_server/src/kai_mcp_solution_server/server.py around lines
662 to 686, remove the manual call to incident.solution.update_solution_status()
and replace the multiple int() increments for solution status counts with a
cleaner if-elif block that checks the solution_status and increments the
corresponding metric counter. Then, in src/kai_mcp_solution_server/db/dao.py,
add a SQLAlchemy load event listener for DBSolution that calls
update_solution_status() automatically on object load, ensuring solution_status
is always updated without manual calls.
```python
async def accept_file(
    kai_ctx: KaiSolutionServerContext,
    client_id: str,
    solution_file: SolutionFile,
) -> None:
    async with kai_ctx.session_maker.begin() as session:
        solutions_stmt = select(DBSolution).where(DBSolution.client_id == client_id)
        solutions = (await session.execute(solutions_stmt)).scalars().all()

        files_to_add: set[DBFile] = set()

        for solution in solutions:
            for file in solution.after:
                if file.uri != solution_file.uri:
                    continue

                if file.content == solution_file.content:
                    file.status = SolutionStatus.ACCEPTED
                    continue

                db_file = DBFile(
                    client_id=client_id,
                    uri=solution_file.uri,
                    content=solution_file.content,
                    status=SolutionStatus.MODIFIED,
                    solution_before=set(),
                    solution_after=set(),
                    next=None,
                )
                files_to_add.add(db_file)

        # NOTE: Doing it this way to avoid modifying solutions.after while iterating
        for file in files_to_add:
            file.solution_after.add(solution)
            session.add(file)

        await session.flush()

        all_solutions_accepted_or_modified = True
        for solution in solutions:
            # FIXME: There is some extreme weirdness going on here. The data in the
            # database does not update when the object updates, so we have to invalidate
            # it and refresh it. Also, sometimes the solution status is not updated
            # automatically?
            session.expire(solution)
            await session.refresh(solution)
            solution.update_solution_status()
            session.add(solution)
            await session.flush()

            # print(
            #     f"Solution {solution.id} status: {solution.solution_status}",
            #     file=sys.stderr,
            # )

            if not (
                solution.solution_status == SolutionStatus.ACCEPTED
                or solution.solution_status == SolutionStatus.MODIFIED
            ):
                all_solutions_accepted_or_modified = False

        await session.commit()

    if all_solutions_accepted_or_modified:
        asyncio.create_task(generate_hint_v2(kai_ctx, client_id))  # type: ignore[unused-awaitable, unused-ignore]
```
💡 Verification agent
🧩 Analysis chain
Address SQLAlchemy caching issues and improve transaction handling.
The accept_file function has concerning comments about SQLAlchemy behavior and includes complex workarounds for caching issues.
Key concerns:
- Lines 755-764: Manual session expiration/refresh suggests underlying data consistency issues
- Line 748: Modifying `solution.after` while potentially iterating could cause issues
- Complex file status management: The logic for handling content differences could lead to race conditions
Consider implementing proper transaction isolation and investigating the root cause of the SQLAlchemy caching issues:
```diff
  # NOTE: Doing it this way to avoid modifying solutions.after while iterating
  for file in files_to_add:
-     file.solution_after.add(solution)
+     # TODO: Verify this relationship assignment is correct
+     file.solution_after.add(solution)
      session.add(file)

  await session.flush()

  all_solutions_accepted_or_modified = True
  for solution in solutions:
-     # FIXME: There is some extreme weirdness going on here. The data in the
-     # database does not update when the object updates, so we have to invalidate
-     # it and refresh it. Also, sometimes the solution status is not updated
-     # automatically?
+     # TODO: Investigate why manual session refresh is required
+     # This suggests potential issues with SQLAlchemy session management
      session.expire(solution)
      await session.refresh(solution)
      solution.update_solution_status()
      session.add(solution)
      await session.flush()
```

Would you like me to investigate the SQLAlchemy session management issues that are causing these caching problems?
Let’s inspect the definitions of update_solution_status and DBSolution to see if the manual refresh is truly needed:
🏁 Script executed:
```bash
#!/bin/bash
# Locate the update_solution_status method
rg "def update_solution_status" -A5
# Inspect the DBSolution model for default values or hybrid properties
rg "class DBSolution" -A20
```

Length of output: 2644
Fix incorrect relationship linking and remove unnecessary session refresh
The core issue isn’t SQLAlchemy caching per se but how new files are associated to solutions—and then revisiting stale Python objects. In the current code:
- You collect `DBFile` instances into a `set` without tracking their originating `solution`.
- In the second loop you call `file.solution_after.add(solution)`, but `solution` is the last item from the previous loop, so every new file ends up linked only to that final solution.
- Because you never update the in-memory `solution.after` collections you originally iterated, you believe you need to `expire`/`refresh` the objects to see the new rows in the database.
Instead, you should:
- Track each `(solution, db_file)` pair when you create it.
- Directly append the new file to `solution.after` so the in-memory relationship stays in sync.
- Eliminate the manual `expire`/`refresh`; once you update the Python object in the session, SQLAlchemy knows about it.
Example diff (in server.py):
```diff
--- a/src/kai_mcp_solution_server/server.py
+++ b/src/kai_mcp_solution_server/server.py
@@ -743,24 +743,29 @@ async def accept_file(
         files_to_add: list[tuple[DBSolution, DBFile]] = []
         for solution in solutions:
             for file in solution.after:
                 if file.uri != solution_file.uri:
                     continue
                 if file.content == solution_file.content:
                     file.status = SolutionStatus.ACCEPTED
                     continue
                 db_file = DBFile(
                     client_id=client_id,
                     uri=solution_file.uri,
                     content=solution_file.content,
                     status=SolutionStatus.MODIFIED,
                     solution_before=set(),
                     solution_after=set(),
                     next=None,
                 )
-                files_to_add.add(db_file)
+                files_to_add.append((solution, db_file))

-        # NOTE: Doing it this way to avoid modifying solutions.after while iterating
-        for file in files_to_add:
-            file.solution_after.add(solution)
-            session.add(file)
+        # Attach each new file back to its solution in memory
+        for solution, new_file in files_to_add:
+            solution.after.add(new_file)
+            session.add(new_file)

         await session.flush()

         all_solutions_accepted_or_modified = True
         for solution in solutions:
-            # FIXME: There is some extreme weirdness going on here. The data in the
-            # database does not update when the object updates, so we have to invalidate
-            # it and refresh it. Also, sometimes the solution status is not updated
-            # automatically?
-            session.expire(solution)
-            await session.refresh(solution)
-            solution.update_solution_status()
-            session.add(solution)
-            await session.flush()
+            # In-session state is now correct, no need to expire/refresh
+            solution.update_solution_status()
+            session.add(solution)
+            await session.flush()

             if not (
                 solution.solution_status == SolutionStatus.ACCEPTED
                 or solution.solution_status == SolutionStatus.MODIFIED
             ):
```

With this change:
- Each `DBFile` is correctly linked to its originating solution.
- `solution.after` stays up to date in-memory, so `update_solution_status()` works on the live state.
- You can remove all the manual `expire`/`refresh` calls and the confusing comments.
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 771-772: Consider merging these comparisons with 'in' by using 'solution.solution_status in (SolutionStatus.ACCEPTED, SolutionStatus.MODIFIED)'. Use a set instead if elements are hashable.
(R1714)
🤖 Prompt for AI Agents
In kai_mcp_solution_server/src/kai_mcp_solution_server/server.py around lines
715 to 780, the issue is that new DBFile instances are added to a set without
tracking their originating solution, and later all are linked only to the last
solution, causing incorrect relationship linking. To fix this, track each
(solution, db_file) pair when creating new files, then directly append each new
file to its solution.after collection to keep the in-memory state consistent.
Remove the manual session.expire and session.refresh calls since updating the
Python objects in the session will keep SQLAlchemy aware of changes, eliminating
the need for those refreshes.
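The mislinking the agent describes is independent of SQLAlchemy and reproducible with plain objects: a `for` loop variable keeps its final value after the loop ends, so deferring the association step links every collected file to the last solution. A minimal reproduction, with the fixed pair-tracking pattern alongside:

```python
class Solution:
    def __init__(self, name: str) -> None:
        self.name = name
        self.after: set["File"] = set()


class File:
    def __init__(self, uri: str) -> None:
        self.uri = uri
        self.linked_to: set[Solution] = set()


solutions = [Solution("s1"), Solution("s2")]

# Buggy pattern: collect files in one loop, link them in a second loop.
files_to_add: list[File] = []
for solution in solutions:
    files_to_add.append(File(f"file-for-{solution.name}"))
for f in files_to_add:
    # `solution` is whatever the previous loop left behind: the LAST solution.
    f.linked_to.add(solution)

print({s.name for f in files_to_add for s in f.linked_to})  # {'s2'}

# Fixed pattern: track the originating solution with each new file.
pairs = [(s, File(f"file-for-{s.name}")) for s in solutions]
for s, f in pairs:
    f.linked_to.add(s)

print(sorted(s.name for _, f in pairs for s in f.linked_to))  # ['s1', 's2']
```

This is why the suggested fix records `(solution, db_file)` tuples at creation time rather than re-deriving the association later.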
djzager
left a comment
LGTM from the API perspective.
Summary by CodeRabbit
New Features
Refactor
Bug Fixes
Tests
Chores