
Conversation


@cyyeh cyyeh commented Feb 12, 2025

  • return retrieval table names in ask response
  • fix sql2answer markdown formatting
  • pin the qdrant-client version to match the server version

Summary by CodeRabbit

  • New Features

    • Reformatted SQL answer outputs to consistently avoid markdown artifacts.
    • Improved retrieval responses by clearly pairing table names with their schema details.
    • Enhanced ask responses with added context by including the list of referenced tables.
    • Refined table description metadata, offering clearer details on associated columns.
    • Pinned the qdrant-client dependency to 1.11.0 so it matches the server version.
  • Bug Fixes

    • Adjusted test assertions to include additional metadata for improved accuracy.

@cyyeh cyyeh added the module/ai-service and ci/ai-service labels Feb 12, 2025

coderabbitai bot commented Feb 12, 2025

Walkthrough

This pull request introduces multiple modifications. A new dependency (qdrant-client==1.11.0) is added to the project configuration. The system prompt in the SQL answer pipeline is updated to restrict markdown formatting. Enhancements in table description processing now include additional metadata such as table names and a restructured columns field. Retrieval functions are modified to return dictionaries associating table DDLs with table names. The Ask service and the question recommendation service are updated to pass and store table data accordingly. Test assertions have been updated to validate these changes.
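
To make the retrieval change concrete, here is a minimal sketch of the new result shape (a hedged illustration based on the summary above; the helper function and sample DDL are hypothetical, not the service's actual API):

```python
# Hypothetical sketch of the reshaped retrieval output described above.
# Before this PR, retrieval returned plain DDL strings; afterwards each
# result pairs the table name with its DDL so downstream services can
# report which tables were used.

def build_retrieval_results(tables: dict[str, str]) -> list[dict[str, str]]:
    return [
        {"table_name": name, "table_ddl": ddl}
        for name, ddl in tables.items()
    ]

results = build_retrieval_results({"user": "CREATE TABLE user (id INT, email TEXT)"})
table_names = [r["table_name"] for r in results]  # surfaced as retrieved_tables
table_ddls = [r["table_ddl"] for r in results]    # passed as context to SQL generation
```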

Changes

| File(s) | Change Summary |
| --- | --- |
| wren-ai-service/pyproject.toml | Added dependency: qdrant-client == 1.11.0. |
| wren-ai-service/src/pipelines/generation/sql_answer.py | Updated prompt string: added a prohibition on markdown code blocks and rephrased the Markdown string instruction. |
| wren-ai-service/src/pipelines/indexing/table_description.py, wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py | Modified table description processing: added a "name" key to metadata, restructured "columns" (list to comma-separated string), and updated test assertions to reflect the new metadata. |
| wren-ai-service/src/pipelines/retrieval/retrieval.py | Reformatted retrieval results: now returns dictionaries with table_name and table_ddl instead of plain DDL strings; updated token count aggregation accordingly and adjusted the return schema. |
| wren-ai-service/src/web/v1/services/ask.py, wren-ai-service/src/web/v1/services/question_recommendation.py | Revised web services: added a new optional field retrieved_tables to AskResultResponse; modified context handling by replacing document references with table DDLs. |

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant AS as AskService
    participant RM as Retrieval Module
    participant SQ as SQLGeneration Pipeline

    User->>AS: Send ask request
    AS->>RM: Invoke retrieval (e.g., check_using_db_schemas_without_pruning)
    RM-->>AS: Return table details (name & DDL)
    AS->>AS: Extract table names and aggregate table DDLs
    AS->>SQ: Forward request with table_ddls context
    SQ-->>AS: Return SQL response
    AS->>User: Respond with AskResultResponse (includes retrieved_tables)

Suggested reviewers

  • paopa

Poem

I’m a rabbit with a joyful hop,
Codes and tables now neatly swap.
New dependencies and prompts so clear,
Metadata and retrieval all appear.
My whiskers twitch with glee in the flow,
Hopping through changes as on I go!
🐰✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2ac542d and 246b2ce.

📒 Files selected for processing (1)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pytest
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (3)
wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py (3)

41-47: ⚠️ Potential issue

Fix test assertion to include 'columns' field.

The test is failing because the document content assertion is missing the 'columns' field that was added in the implementation.

Update the assertion to include the 'columns' field:

     assert document.content == str(
         {
             "name": "user",
-            "mdl_type": "MODEL",
             "description": "A table containing user information.",
+            "columns": ""
         }
     )
🧰 Tools
🪛 GitHub Actions: AI Service Test

[error] 41-41: Assertion error: The actual content of the document does not match the expected content. The 'columns' field is missing from the actual content.


76-82: ⚠️ Potential issue

Fix test assertion to include 'columns' field.

Similar to the previous test, this assertion is missing the 'columns' field.

Update the assertion to include the 'columns' field:

     assert document_1.content == str(
         {
             "name": "user",
-            "mdl_type": "MODEL",
             "description": "A table containing user information.",
+            "columns": ""
         }
     )
🧰 Tools
🪛 GitHub Actions: AI Service Test

[error] 76-76: Assertion error: The actual content of the document does not match the expected content. The 'columns' field is missing from the actual content.


86-92: ⚠️ Potential issue

Fix test assertion to include 'columns' field.

The assertion for document_2 also needs to be updated.

Update the assertion to include the 'columns' field:

     assert document_2.content == str(
         {
             "name": "order",
-            "mdl_type": "MODEL",
             "description": "A table containing order details.",
+            "columns": ""
         }
     )
🧹 Nitpick comments (1)
wren-ai-service/src/pipelines/generation/sql_answer.py (1)

34-34: Fix typo in output format instruction.

There's a missing space in "stringformat".

-Please provide your response in proper Markdown stringformat.
+Please provide your response in proper Markdown string format.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f7d4fd and 2ac542d.

⛔ Files ignored due to path filters (1)
  • wren-ai-service/poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • wren-ai-service/pyproject.toml (1 hunks)
  • wren-ai-service/src/pipelines/generation/sql_answer.py (1 hunks)
  • wren-ai-service/src/pipelines/indexing/table_description.py (3 hunks)
  • wren-ai-service/src/pipelines/retrieval/retrieval.py (2 hunks)
  • wren-ai-service/src/web/v1/services/ask.py (11 hunks)
  • wren-ai-service/src/web/v1/services/question_recommendation.py (2 hunks)
  • wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py (4 hunks)
🧰 Additional context used
🪛 GitHub Actions: AI Service Test
wren-ai-service/tests/pytest/pipelines/indexing/test_table_description.py

[error] 41-41: Assertion error: The actual content of the document does not match the expected content. The 'columns' field is missing from the actual content.


[error] 76-76: Assertion error: The actual content of the document does not match the expected content. The 'columns' field is missing from the actual content.


[error] 126-126: Assertion error: The actual content of the document does not match the expected content. The 'columns' field is missing from the actual content.

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: pytest
  • GitHub Check: pytest
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (12)
wren-ai-service/src/pipelines/generation/sql_answer.py (1)

30-30: LGTM!

The instruction to prevent markdown code block syntax is clear and appropriate.
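
For reference, a minimal sketch of what such a system-prompt constraint could look like (illustrative only; this is not the literal prompt text from sql_answer.py):

```python
# Illustrative only; not the literal prompt from sql_answer.py.
sql_to_answer_system_prompt = (
    "Answer the user's question based on the SQL query and its results.\n"
    "- Do NOT wrap the answer in markdown code blocks (no triple-backtick fences).\n"
    "- Provide your response in proper Markdown string format.\n"
)
```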

wren-ai-service/src/pipelines/indexing/table_description.py (2)

34-34: LGTM!

Adding the table name to document metadata improves searchability and context.


57-57: LGTM!

The changes to include column information and format it as a comma-separated string enhance the table description's completeness.

Also applies to: 71-71
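
As a rough illustration of the document content these comments describe, here is a sketch only; the real indexing code builds Haystack documents, and the input MDL structure assumed here (properties.description) is hypothetical:

```python
# Sketch of the table-description content discussed above: a "name" key,
# the description, and columns flattened into a comma-separated string.
# The input model structure (properties.description) is an assumption.

def build_table_description(model: dict) -> dict:
    columns = ", ".join(col["name"] for col in model.get("columns", []))
    return {
        "name": model["name"],
        "description": model.get("properties", {}).get("description", ""),
        "columns": columns,
    }

print(build_table_description({
    "name": "user",
    "properties": {"description": "A table containing user information."},
    "columns": [{"name": "id"}, {"name": "email"}],
}))
# {'name': 'user', 'description': 'A table containing user information.', 'columns': 'id, email'}
```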

wren-ai-service/src/web/v1/services/question_recommendation.py (1)

79-79: LGTM!

The changes to use table_ddls instead of documents align with the PR objectives and improve the clarity of what data is being passed to the SQL generation pipeline.

Also applies to: 87-87, 97-97

wren-ai-service/src/pipelines/retrieval/retrieval.py (4)

229-234: LGTM! Improved data structure to include table names.

The change enhances the data structure by including both table name and DDL, making it more informative and easier to track the source of each DDL.


241-246: LGTM! Consistent data structure across table types.

The same structured format is applied to metrics and views, maintaining consistency in how table information is returned.

Also applies to: 249-254


256-259: LGTM! Updated token count calculation.

The token count calculation is correctly updated to work with the new dictionary structure.


349-354: LGTM! Consistent data structure in filtered results.

The structured format is maintained when filtering results based on columns and tables needed.

Also applies to: 361-366, 369-374
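
A minimal sketch of the token-count adjustment mentioned above (the tokenizer is a placeholder; the real pipeline uses its own counting logic):

```python
# With results now shaped as dicts, the DDL text has to be read from the
# "table_ddl" key before counting tokens. _count_tokens is a stand-in for
# whatever tokenizer the service actually uses.

def _count_tokens(text: str) -> int:
    return len(text.split())  # placeholder tokenizer, illustration only

def total_ddl_tokens(retrieval_results: list[dict[str, str]]) -> int:
    return sum(_count_tokens(r["table_ddl"]) for r in retrieval_results)
```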

wren-ai-service/src/web/v1/services/ask.py (4)

97-97: LGTM! Added retrieved_tables field to response model.

The new optional field allows tracking which tables were used in generating the response.


314-315: LGTM! Extracted table information from retrieval results.

The code correctly extracts both table names and DDLs from the new structured retrieval results.


344-344: LGTM! Consistent propagation of table names.

Table names are consistently included in all response states (planning, generating, correcting, finished, failed).

Also applies to: 372-372, 382-382, 440-440, 478-478, 495-495


358-358: LGTM! Updated context passing.

The code correctly passes table DDLs as contexts to various pipeline stages.

Also applies to: 396-396, 410-410, 446-446
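
Putting the ask-service comments together, here is a simplified sketch of how the new field and contexts might be threaded through (the Pydantic model is reduced to the relevant field; names follow the review, everything else is hypothetical):

```python
from typing import Optional

from pydantic import BaseModel


class AskResultResponse(BaseModel):
    status: str
    retrieved_tables: Optional[list[str]] = None  # new optional field per the review


retrieval_results = [
    {"table_name": "user", "table_ddl": "CREATE TABLE user (id INT, email TEXT)"},
]
table_names = [r["table_name"] for r in retrieval_results]
table_ddls = [r["table_ddl"] for r in retrieval_results]

response = AskResultResponse(status="finished", retrieved_tables=table_names)
# table_ddls would then be passed as context to the SQL generation pipeline.
```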

@cyyeh cyyeh marked this pull request as draft February 12, 2025 22:06
@cyyeh cyyeh marked this pull request as ready for review February 13, 2025 02:43
@cyyeh cyyeh requested a review from paopa February 13, 2025 02:46

@paopa paopa left a comment


LGTM

@paopa paopa merged commit 97b110e into main Feb 13, 2025
10 checks passed
@paopa paopa deleted the chore/ai-service/minor-updates branch February 13, 2025 02:57