Conversation

Contributor

@louis-jan louis-jan commented Jul 22, 2025

Summary

This PR fixes an issue where the app continued to show models that had been manually deleted from the filesystem. Previously, the app relied on a cached models list, which went stale when users removed model files directly. This PR also updates the model hub to surface appropriate recommended models for download.

Changes

🔧 Model List Refresh

  • Before: App used cached models list that didn't reflect filesystem changes
  • After: App now calls llama.cpp list() function to get current model state
  • Models that are manually deleted are now immediately removed from the UI
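As a rough sketch of the reconciliation described above (not the actual implementation, which lives in the llama.cpp provider and the web-app hooks; the `Model` type and `refreshModelList` name are assumed):

```typescript
// Hypothetical sketch: rather than trusting the cached models list,
// reconcile it against what llama.cpp's list() actually reports on disk.
type Model = { id: string; path: string }

function refreshModelList(cached: Model[], onDisk: Model[]): Model[] {
  const diskIds = new Set(onDisk.map((m) => m.id))
  // Drop cached entries whose files were manually deleted...
  const kept = cached.filter((m) => diskIds.has(m.id))
  // ...and pick up any on-disk models the cache has not seen yet.
  const keptIds = new Set(kept.map((m) => m.id))
  const discovered = onDisk.filter((m) => !keptIds.has(m.id))
  return [...kept, ...discovered]
}
```

With this shape, a model whose file was removed from disk disappears from the returned list on the next refresh, which is the behavior the PR describes.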

🚀 Model Hub Enhancement

  • Enhanced model hub to display proper recommended models for download
  • Improved model discovery and recommendation logic

Testing

  • Verified models list updates after manual deletion
  • Confirmed model hub shows appropriate recommendations
  • Tested model loading with refreshed list

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to a863241 in 1 minute and 40 seconds. Click for details.
  • Reviewed 53 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. autoqa/tests/new-user/1-user-start-chatting.txt:3
  • Draft comment:
    Test instructions updated to use qwen3-0.6B. Ensure this matches the recommended model in the hub.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. web-app/src/hooks/useModelProvider.ts:75
  • Draft comment:
    Conditional assignment uses 'provider.persist ? provider?.models : mergedModels'. Verify that using cached models when persist is true does not reintroduce stale model data after manual deletion.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 30% vs. threshold = 50%). The comment raises a valid concern: when persist is true, provider.models is used directly, bypassing the filtering logic, which could let deleted models reappear. However, it is phrased as a verification request rather than a direct statement of the issue, which violates the review rules; it is also possible this is intended behavior for persisted providers. Still, the underlying concern about bypassing deletion filtering is concrete and actionable rather than speculative, so the comment should be rewritten to state the issue directly instead of asking for verification.
3. web-app/src/routes/hub/index.tsx:39
  • Draft comment:
    Default model quantizations were changed to remove the '.gguf' extension. Confirm the matching logic in the download button aligns with this update.
  • Reason this comment was not posted:
    Comment looked like it was already resolved.
4. web-app/src/services/providers.ts:22
  • Draft comment:
    A TODO was added regarding checking chat_template for tool call support. Consider adding more context or addressing this soon.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold of 50%.
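Draft comment 2 above can be illustrated with a minimal sketch (type names are assumed; this mirrors the quoted conditional, not the actual `useModelProvider` hook):

```typescript
// Hypothetical illustration of the concern in draft comment 2: when
// `persist` is true, the cached `provider.models` is returned verbatim,
// bypassing the merged (filesystem-reconciled) list, so a manually
// deleted model could linger in the UI.
type Model = { id: string }
type Provider = { persist: boolean; models: Model[] }

function selectModels(provider: Provider, mergedModels: Model[]): Model[] {
  // Mirrors 'provider.persist ? provider?.models : mergedModels'.
  return provider.persist ? provider.models : mergedModels
}
```

If persisted providers are meant to retain their cached list, this is intended behavior; otherwise the persist branch would also need the deletion filtering applied.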

Workflow ID: wflow_OaMI8s0LLtB0x2NH


@louis-jan louis-jan merged commit 3e30c61 into release/v0.6.6 Jul 23, 2025
34 of 35 checks passed
@louis-jan louis-jan deleted the fix/app-should-refresh-local-model-list-onload branch July 23, 2025 01:36
@github-project-automation github-project-automation bot moved this to QA in Jan Jul 23, 2025
@github-actions github-actions bot added this to the v0.6.6 milestone Jul 23, 2025

Development

Successfully merging this pull request may close these issues.

fix: llama.cpp provider should use list() result instead of cached models list