Reviewer's Guide by Sourcery

This pull request adds support for switching between available local AI models in the Ollama settings page and the chat prompt interface. It involves modifications to the frontend UI and Bloc logic to fetch and display local models, and backend changes to interact with the Ollama service, manage model selection persistence, and update AI settings accordingly. A new database table is introduced to track local model types. No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes
🥷 Ninja i18n – 🛎️ Translations need to be updated

| lint rule | new reports | level | link |
|---|---|---|---|
| Missing translation | 29 | warning | contribute (via Fink 🐦) |
Hey @appflowy - I've reviewed your changes - here's some feedback:
Overall Comments:
- The logic for determining the active model in `ai_manager.rs::get_available_models` appears complex; consider opportunities for simplification.
- Review the naming consistency between the backend (`global_model`) and frontend (`selectedModel`) concepts for clarity.
- The `local_ai` feature flag added in `flowy-ai/Cargo.toml` does not seem to conditionally compile code; clarify its purpose or remove it.
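On the reviewer's last point: a Cargo feature flag only affects the build when some code is gated on it with `cfg(feature = ...)`. A minimal, self-contained sketch of what that gating looks like (hypothetical function name; the real flag lives in `flowy-ai/Cargo.toml`):

```rust
// In Cargo.toml you would declare:
//   [features]
//   local_ai = []

// Compiled only when built with `--features local_ai`.
#[cfg(feature = "local_ai")]
fn local_ai_status() -> &'static str {
    "local_ai enabled"
}

// Compiled when the feature is off; without such gated items,
// the feature flag has no effect on the build.
#[cfg(not(feature = "local_ai"))]
fn local_ai_status() -> &'static str {
    "local_ai disabled"
}

fn main() {
    println!("{}", local_ai_status());
}
```

If no `#[cfg(feature = "local_ai")]` attribute appears anywhere in the crate, the flag is dead configuration, which is what the review comment is flagging.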
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟡 Complexity: 2 issues found
- 🟢 Documentation: all looks good
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
```rust
pub async fn get_available_models(&self, source: String) -> FlowyResult<AvailableModelsPB> {
```
issue (complexity): Consider extracting the active model selection logic and preference updating into helper functions to reduce nesting and improve code clarity.
You can reduce nesting by extracting distinct concerns into helper methods. For example, you could extract two helper functions: one to determine the active (server/default) model from the available models, and another to update stored preferences if needed. This isolates the decision logic and minimizes the nested conditions in get_available_models.
Step 1: Extract active model selection

Create a helper that selects the active model from a given list. For instance:

```rust
fn select_active_model(
    &self,
    all_models: &[AIModel],
    server_active_model: &AIModel,
    user_selected_model: Option<AIModel>,
) -> AIModel {
    // Use the user-selected model if available and valid, otherwise default to server_active_model
    if let Some(user_model) = user_selected_model {
        if all_models.iter().any(|m| m.name == user_model.name) {
            return user_model;
        }
    }
    // Fallback: use the server active model if it's available in the list
    if all_models.iter().any(|m| m.name == server_active_model.name) {
        return server_active_model.clone();
    }
    // Otherwise, return the default
    AIModel::default()
}
```

Step 2: Refactor get_available_models using early returns and helpers
Update your method to call the above helper instead of inline nested conditions:

```rust
pub async fn get_available_models(&self, source: String) -> FlowyResult<AvailableModelsPB> {
    if self.user_service.is_local_model().await? {
        return self.get_local_available_models().await;
    }

    // Fetch server models and add the local model if enabled
    let mut all_models: Vec<AIModel> = self
        .get_server_available_models()
        .await?
        .into_iter()
        .map(AIModel::from)
        .collect();
    if self.local_ai.is_enabled() {
        let setting = self.local_ai.get_local_ai_setting();
        all_models.push(AIModel::local(setting.chat_model_name, "".to_string()));
    }
    if all_models.is_empty() {
        return Ok(AvailableModelsPB {
            models: vec![],
            global_model: AIModelPB::default(),
        });
    }

    // Get the server's active model
    let server_active_model = self
        .get_workspace_select_model()
        .await
        .map(|m| AIModel::server(m, "".to_string()))
        .unwrap_or_else(|_| AIModel::default());

    let user_selected_model = self.get_active_model(&source).await;
    let active_model = self.select_active_model(&all_models, &server_active_model, user_selected_model);

    // Update the stored preference if it has changed
    if active_model.name != self.get_active_model(&source).await.map(|m| m.name).unwrap_or_default() {
        if let Err(err) = self.update_selected_model(source.clone(), active_model.clone()).await {
            error!("[Model Selection] failed to update selected model: {}", err);
        }
    }

    Ok(AvailableModelsPB {
        models: all_models.into_iter().map(AIModelPB::from).collect(),
        global_model: AIModelPB::from(active_model),
    })
}
```

Actionable Steps:
- Extract the active model selection logic into a helper (as shown above).
- Use early returns to handle empty or local-only cases.
- Delegate update of stored preferences to keep the main flow clear.
These changes isolate different concerns, reduce nesting, and improve maintainability while keeping the functionality intact.
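The precedence the helper encodes (user choice, then server active model, then default) can be exercised in isolation. A simplified, dependency-free sketch of the same logic, with a hypothetical stand-in `Model` type in place of `AIModel`:

```rust
// Hypothetical stand-in for AIModel, enough to show the precedence rules.
#[derive(Clone, Debug, PartialEq)]
struct Model {
    name: String,
}

// Same decision order as the suggested helper: user choice wins if still
// available, then the server's active model, then a default.
fn select_active_model(
    all: &[Model],
    server_active: &Model,
    user_selected: Option<Model>,
) -> Model {
    if let Some(user) = user_selected {
        if all.iter().any(|m| m.name == user.name) {
            return user;
        }
    }
    if all.iter().any(|m| m.name == server_active.name) {
        return server_active.clone();
    }
    Model { name: "default".to_string() }
}

fn main() {
    let models = vec![
        Model { name: "llama3".into() },
        Model { name: "mistral".into() },
    ];
    let server = Model { name: "llama3".into() };
    // A user-selected model that is no longer installed falls back to the
    // server's active model.
    let picked = select_active_model(&models, &server, Some(Model { name: "gone".into() }));
    println!("{}", picked.name); // prints "llama3"
}
```

Because the helper is a pure function over its inputs, each fallback branch can be unit-tested without touching the user service or local AI plugin.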
```rust
let cloned_local_ai = Arc::clone(&local_ai);
let cloned_user_service = Arc::clone(&user_service);
let ollama = ArcSwapOption::default();
let sys = get_operating_system();
```
issue (complexity): Consider extracting the nested logic inside the tokio::spawn closures into helper functions to reduce nesting and code duplication.
Consider extracting the nested logic inside the `tokio::spawn` closures into one or more helper functions. This would reduce the deep nesting and duplicate code in the state change handling.
For example, you could create a helper like:
```rust
async fn process_state_change(
state: RunningState,
cloned_user_service: Arc<dyn AIUserService>,
cloned_store_preferences: Weak<KVStorePreferences>,
cloned_llm_res: Arc<LocalAIResourceController>,
cloned_local_ai: Arc<OllamaAIPlugin>,
) {
if let Ok(workspace_id) = cloned_user_service.workspace_id() {
let key = local_ai_enabled_key(&workspace_id.to_string());
let enabled = if let Some(sp) = cloned_store_preferences.upgrade() {
sp.get_bool(&key).unwrap_or(true)
} else {
warn!("[AI Plugin] store preferences is dropped");
return;
};
let (plugin_downloaded, lack_of_resource) =
if !matches!(state, RunningState::UnexpectedStop { .. }) && enabled {
let downloaded = is_plugin_ready();
let resource_lack = cloned_llm_res.get_lack_of_resource().await;
(downloaded, resource_lack)
} else {
(false, None)
};
let plugin_version = if matches!(state, RunningState::Running { .. }) {
match cloned_local_ai.plugin_info().await {
Ok(info) => Some(info.version),
Err(_) => None,
}
} else {
None
};
let new_state = RunningStatePB::from(state);
chat_notification_builder(
APPFLOWY_AI_NOTIFICATION_KEY,
ChatNotification::UpdateLocalAIState,
)
.payload(LocalAIPB {
enabled,
plugin_downloaded,
lack_of_resource,
state: new_state,
plugin_version,
})
.send();
}
}
```

Then simplify your spawn block to:
```rust
tokio::spawn(async move {
    while let Some(state) = running_state_rx.next().await {
        process_state_change(
            state,
            Arc::clone(&cloned_user_service),
            cloned_store_preferences.clone(),
            Arc::clone(&cloned_llm_res),
            Arc::clone(&cloned_local_ai),
        )
        .await;
    }
});
```

This refactoring centralizes the state change processing logic and reduces nesting, preserving functionality and simplifying the control flow.
Feature Preview
PR Checklist
Summary by Sourcery
Introduce support for selecting a default local AI model (Ollama) from a list of available models discovered on the user's machine.
New Features:
Enhancements:
- Use the Ollama client library (`ollama-rs`) to interact with the local instance for model discovery.
- Rename `selected_model` to `global_model` and `chat_model_name` to `global_chat_model` for better clarity.
- … `global_model` field.
- … `global_model` field.
- Add a new database table, `local_ai_model_table`, to track local model types.