Sync/upstream 2025 catch up #194
Open
ldc861117 wants to merge 42 commits into volcengine:main from ldc861117:sync/upstream-2025-catch-up
Conversation
This commit addresses a critical regression where the application crashes when invalid model settings are submitted. The fix includes:
- Modifying the method to ensure the existing client is not replaced if the new client fails to initialize.
- Updating the function to handle reinitialization failures and return a descriptive error message to the user.
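The rollback behavior described above can be sketched as follows. This is a minimal Python illustration only; the names (LLMClient, reinitialize, _build_client) are hypothetical, since the commit message elides the actual method names.

```python
class LLMClient:
    """Hypothetical wrapper; only the keep-the-old-client-on-failure idea is from the commit."""

    def __init__(self) -> None:
        self._client = None

    def reinitialize(self, settings: dict) -> None:
        try:
            new_client = self._build_client(settings)  # may raise on invalid settings
        except Exception as exc:
            # Leave the existing, working client in place and surface a readable error.
            raise ValueError(f"Failed to apply model settings: {exc}") from exc
        self._client = new_client  # swap only after successful initialization

    def _build_client(self, settings: dict):
        if not settings.get("model_id"):
            raise ValueError("model_id is required")
        return object()  # stand-in for the real provider client
```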
This commit implements comprehensive support for custom VLM and Embedding model configurations, including local providers that don't require API keys.

Key Changes:
- Enhanced LLMProvider enum with Ollama, LocalAI, LlamaCPP, and Custom providers
- Added is_api_key_optional() method for provider-aware API key validation
- Unified validation and rollback mechanism for both VLM and Embedding clients
- Improved settings.py to delegate API key validation to LLMClient
- Added comprehensive LLM Configuration Guide with examples
- Provided Ollama example configuration file

Benefits:
- Support for local LLM providers without API keys (Ollama, LocalAI, etc.)
- Graceful rollback on reinitialization failures
- More flexible and extensible provider support
- Better error messages and validation

Fixes: API key validation preventing use of local models
Addresses: User request for custom VLM and embedding model configuration

Co-authored-by: Droid <[email protected]>
feat: flexible LLM configuration with optional API keys
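As a concrete illustration of the provider-aware validation, a minimal sketch of an LLMProvider enum with is_api_key_optional() might look like the following; members beyond those listed in the commit message (OPENAI, DOUBAO) are assumptions.

```python
from enum import Enum


class LLMProvider(str, Enum):
    OPENAI = "openai"      # hosted, key required (assumed member)
    DOUBAO = "doubao"      # hosted, key required (assumed member)
    OLLAMA = "ollama"
    LOCALAI = "localai"
    LLAMACPP = "llamacpp"
    CUSTOM = "custom"

    def is_api_key_optional(self) -> bool:
        """Local or self-hosted providers can run without an API key."""
        return self in {
            LLMProvider.OLLAMA,
            LLMProvider.LOCALAI,
            LLMProvider.LLAMACPP,
            LLMProvider.CUSTOM,
        }


def validate_api_key(provider: LLMProvider, api_key: str | None) -> None:
    """Provider-aware check: only hosted providers must supply a key."""
    if not api_key and not provider.is_api_key_optional():
        raise ValueError(f"API key is required for provider '{provider.value}'")
```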
Enhancements:
- ✅ Check for config.yaml existence before starting
- ✅ Provide helpful hints for Ollama configuration
- ✅ Flexible virtual environment handling (optional .venv)
- ✅ Verify backend started successfully before launching frontend
- ✅ Auto-install dependencies if missing
- ✅ Check for frontend node_modules
- ✅ Colored output for better readability
- ✅ More robust error handling and cleanup

This makes the startup experience smoother and catches configuration issues early with helpful guidance.
enhance: Improve start-dev.sh with better validation and user experience
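start-dev.sh itself is a shell script; purely for illustration, the kind of preflight checks it performs could be expressed in Python roughly as below. The paths and messages are assumptions.

```python
from pathlib import Path
import sys


def preflight() -> None:
    # Fail fast if the main config is missing, with a hint toward the Ollama example config.
    if not Path("config.yaml").exists():
        sys.exit("config.yaml not found; copy the Ollama example config and adjust it first.")
    # The virtual environment is optional; warn rather than abort.
    if not Path(".venv").exists():
        print("No .venv found; using the system Python environment.")
    # Frontend dependencies are installed automatically if missing.
    if not Path("frontend/node_modules").exists():  # assumed path
        print("frontend/node_modules missing; dependencies will be installed.")


if __name__ == "__main__":
    preflight()
```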
This commit fixes the frontend validation issues preventing users from using local LLM providers without API keys.

Changes:
- Remove required validation for API Key fields in custom mode
- Update placeholder text to indicate API keys are optional for local providers
- Change initialization check from apiKey to modelId (more reliable)
- Improve user experience with clearer messaging

Fixes: Users can now configure Ollama and other local providers without being blocked by API key validation in the frontend.
Related: Backend already supports optional API keys for local providers.
fix(frontend): Allow empty API keys for local providers (Ollama/LocalAI)
- Add .env.example template with comprehensive examples
- Update .gitignore to exclude .env files from version control
- Modify config.yaml to use environment variable substitution
- Add python-dotenv dependency
- Update CLI to automatically load .env files on startup
- Enhance start-dev.sh to check for .env file presence
- Create comprehensive ENV_CONFIGURATION.md documentation

Benefits:
- Keep API keys and sensitive data secure (not in version control)
- Easy to switch between configurations (Ollama, OpenAI, Doubao, etc.)
- Follow 12-factor app best practices
- Support fallback values for embedding config

Usage:
1. cp .env.example .env
2. Edit .env with your configuration
3. ./start-dev.sh
- Explain the difference between tracked and untracked config files
- Provide three setup options (.env, Ollama example, previous config)
- Document environment variable substitution syntax
- Include best practices for security and configuration management
- Add troubleshooting section for common issues
feat: Add .env file support for environment variable management
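A minimal sketch of the .env-plus-substitution flow described above: python-dotenv loads .env into the environment, and ${VAR} placeholders (with an optional :-default fallback) in config.yaml are resolved from it. The substitution syntax and file names are assumptions based on the commit messages, not the project's exact code.

```python
import os
import re

import yaml
from dotenv import load_dotenv  # python-dotenv

_VAR = re.compile(r"\$\{(?P<name>[A-Za-z0-9_]+)(?::-(?P<default>[^}]*))?\}")


def load_config(path: str = "config.yaml") -> dict:
    load_dotenv()  # pull variables from .env into the process environment
    raw = open(path, encoding="utf-8").read()

    def repl(match: re.Match) -> str:
        # Environment wins; otherwise fall back to the inline default (or empty string).
        return os.environ.get(match["name"], match["default"] or "")

    return yaml.safe_load(_VAR.sub(repl, raw))
```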
* docs: add github trending banner
* docs: add privacy protection
Merge/upstream features
Investigation of the "29 commits ahead, 10 commits behind" message:
- Analyzed all 10 upstream commits
- Found that 6 of 10 are the same features we already merged (different hashes)
- Identified 2 new minor features (code format + community best practices)
- Determined that no further merge action is needed
- Confirmed fork is production-ready with all critical features
- Verified all custom Ollama/LocalAI support is preserved
…ropagate to frontend

- backend: cli respects WEB_HOST/WEB_PORT when args not provided
- frontend: electron-vite loads root .env (envDir) and exposes VITE_WEB_HOST/PORT; axios uses them as defaults
- docs: update .env.example and ENV_CONFIGURATION.md
Feature/unified env web config
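The host/port resolution order described in the commit (explicit CLI args first, then WEB_HOST/WEB_PORT from the environment or .env, then defaults) could look roughly like this on the backend side; flag names and fallback values are assumptions.

```python
import argparse
import os


def resolve_host_port(argv: list[str] | None = None) -> tuple[str, int]:
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default=None)
    parser.add_argument("--port", type=int, default=None)
    args = parser.parse_args(argv)

    # CLI arguments take precedence; otherwise use WEB_HOST/WEB_PORT, then hard-coded defaults.
    host = args.host or os.environ.get("WEB_HOST", "127.0.0.1")
    port = args.port or int(os.environ.get("WEB_PORT", "1733"))
    return host, port
```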
Health check logic: changed from and to or. Config loading: config.yaml now correctly loads configuration from the .env file.
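A guess at what the and-to-or change means: the startup readiness probe is assumed to succeed if either the HTTP health endpoint responds or the port accepts connections, instead of requiring both. The endpoint, host, and port below are placeholders.

```python
import socket
import urllib.request


def backend_ready(host: str = "127.0.0.1", port: int = 1733) -> bool:
    http_ok = False
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/healthz", timeout=2) as resp:
            http_ok = resp.status == 200
    except OSError:
        pass

    try:
        with socket.create_connection((host, port), timeout=2):
            port_ok = True
    except OSError:
        port_ok = False

    return http_ok or port_ok  # previously the equivalent of: http_ok and port_ok
```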
… for core schema

This change introduces Alembic-based migrations with an initial database schema for core objects, supporting future evolution and idempotent setup. It also integrates automatic migration execution into CLI startup for reliable DB initialization in all environments.

- Add Alembic and SQLAlchemy 2.0+ as project dependencies
- Scaffold Alembic config, env.py, migrations directory, and core models
- Define initial migration for events, documents, chunks, entities, embeddings, jobs tables with indices and FKs (SQLite-optimized)
- Apply SQLite PRAGMA settings (WAL, busy_timeout, synchronous) on connect
- Integrate run_migrations on CLI startup (before serving)
- Add seed/fixture helpers and migration tests for core schema

No breaking changes; existing SQLite data is preserved and future-proofed.
…ration Add Alembic migrations and initial DB schema for core objects
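A minimal sketch of the two moving parts named above: running Alembic migrations at CLI startup and applying SQLite PRAGMAs on every connection. The alembic.ini location, database path, and PRAGMA values are assumptions.

```python
from alembic import command
from alembic.config import Config
from sqlalchemy import create_engine, event


def run_migrations(db_url: str, alembic_ini: str = "alembic.ini") -> None:
    cfg = Config(alembic_ini)
    cfg.set_main_option("sqlalchemy.url", db_url)
    command.upgrade(cfg, "head")  # idempotent: no-op when the schema is already current


engine = create_engine("sqlite:///data/core.db")  # assumed path


@event.listens_for(engine, "connect")
def _set_sqlite_pragmas(dbapi_conn, _connection_record):
    cur = dbapi_conn.cursor()
    cur.execute("PRAGMA journal_mode=WAL")
    cur.execute("PRAGMA busy_timeout=5000")
    cur.execute("PRAGMA synchronous=NORMAL")
    cur.close()
```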
…eus integration

This change introduces new operational endpoints for readiness and observability. The FastAPI app now exposes a /healthz endpoint with detailed component checks, and a /metrics endpoint compatible with Prometheus. The middleware records HTTP and pipeline metrics. All endpoints are documented and guarded by optional authentication if configured. Basic tests ensure endpoint and metric exposure, and Prometheus multi-process mode is supported for worker setups.

- Adds /healthz with DB, Chroma, and worker liveness checks
- Adds /metrics with HTTP and pipeline Prometheus metrics
- Middleware tracks request counters/histograms
- Endpoints are tested, and build/version info is shown

BREAKING CHANGE: Prometheus is now a required dependency and the endpoints /healthz and /metrics are reserved.
feat: add observability endpoints for healthz and Prometheus metrics
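For orientation, a pared-down sketch of the endpoints and middleware described above, using FastAPI and prometheus_client. The component probes are placeholders; the real change also covers Chroma/worker checks, optional auth, pipeline metrics, and Prometheus multi-process mode.

```python
from fastapi import FastAPI, Request, Response
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = FastAPI()
HTTP_REQUESTS = Counter("http_requests_total", "HTTP requests", ["path"])


@app.middleware("http")
async def count_requests(request: Request, call_next):
    # Minimal example of recording an HTTP metric per request path.
    HTTP_REQUESTS.labels(path=request.url.path).inc()
    return await call_next(request)


@app.get("/healthz")
def healthz() -> dict:
    checks = {"db": check_db(), "chroma": check_chroma()}  # placeholder probes
    return {"status": "ok" if all(checks.values()) else "degraded", "checks": checks}


@app.get("/metrics")
def metrics() -> Response:
    return Response(generate_latest(), media_type=CONTENT_TYPE_LATEST)


def check_db() -> bool:  # stand-in for a real SELECT 1 against SQLite
    return True


def check_chroma() -> bool:  # stand-in for a real Chroma heartbeat call
    return True
```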
Upstream features integrated:
- Screen recording functionality
- Document processor (MD/PDF/DOC support)
- Processor monitor and custom settings
- Port configuration update (8000 → 1733)
- UI improvements (todo deduplication, top bar)
- Better error handling and monitoring
- Performance improvements

Preserved from our fork:
✅ Ollama/LocalAI support (modelId-based initialization)
✅ Optional API keys for local providers
✅ .env file configuration system
✅ Flexible LLM provider setup
✅ Custom validation logic

This merge brings all upstream improvements while maintaining full compatibility with local LLM providers like Ollama.
This merge brings the branch up to date with the latest main branch changes. All conflicts were resolved by accepting upstream improvements. Ollama/LocalAI customizations remain intact in settings.tsx and llm_client.py.
…ation plan

Detailed analysis of current configuration management issues:
- Problem identification: scattered locations, unclear responsibilities, validation gaps
- Three-layer architecture proposal: system config, user config, runtime config
- Best practices for directory structure and file organization
- Prevention measures: validation, error handling, backup/restore
- Implementation roadmap: 4 phases from basic cleanup to Web UI
- Usage examples for different scenarios (Ollama, OpenAI, Hybrid)
- Comparison table: current vs improved state

This provides a clear path to reorganize configuration management and make the system more stable and user-friendly.
This is a very significant change. Could you please elaborate on what work was done?
Hi bro, please use standard branch names when submitting your PR, such as feat, and provide a detailed description of your code along with relevant screenshots of the effects.
Description
Please include a concise summary, in clear English, of the changes in this pull request. If it closes an issue, please
mention it here.
Closes: #(issue)
🎯 PRs Should Target Issues
Before you create a PR, please check whether there is an existing issue
for this change. If not, please create an issue before opening this PR, unless the fix is very small.
Not adhering to this guideline will result in the PR being closed.