Add deferred watchdog namespace tools #16181
Draft
friel-openai wants to merge 192 commits into openai:dev/friel/watchdog-runtime-and-prompts from
Conversation
This addresses openai#16038 The default `tui_app_server` path stopped surfacing MCP startup failures during cold start, even though the legacy TUI still showed warnings like `MCP startup incomplete (...)`. The app-server bridge emitted per-server startup status notifications, but `tui_app_server` ignored them, so failed MCP handshakes could look like a clean startup. This change teaches `tui_app_server` to consume MCP startup status notifications, preserve the immediate per-server failure warning, and synthesize the same aggregate startup warning the legacy TUI shows once startup settles.
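The aggregation described above can be sketched as a small pure function. This is an illustrative shape only, assuming a per-server status record; the type and function names are hypothetical, not the actual `tui_app_server` API.

```rust
// Hypothetical sketch: fold per-server MCP startup statuses into the same
// kind of aggregate warning the legacy TUI shows once startup settles.
// `McpStartupStatus` and `aggregate_startup_warning` are illustrative names.
struct McpStartupStatus {
    server: String,
    // `None` means the server's MCP handshake completed cleanly.
    error: Option<String>,
}

/// Returns `None` when every server started, otherwise one aggregate
/// warning listing the servers whose handshakes failed.
fn aggregate_startup_warning(statuses: &[McpStartupStatus]) -> Option<String> {
    let failed: Vec<&str> = statuses
        .iter()
        .filter(|s| s.error.is_some())
        .map(|s| s.server.as_str())
        .collect();
    if failed.is_empty() {
        None
    } else {
        Some(format!("MCP startup incomplete ({})", failed.join(", ")))
    }
}
```

The per-server failure warning would still be emitted immediately as each status notification arrives; this aggregate is only synthesized once all servers have reported.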
Fixes openai#16092

The app-server-backed TUI could accumulate ghost subagent entries in `/agent` after resume/backfill flows. Some of those rows were no longer live according to the backend, but still appeared selectable in the picker and could open as blank threads.

*Cause* — Unlike the legacy TUI behavior, `tui_app_server` was creating local picker/replay state for subagents discovered through metadata refresh and loaded-thread backfill, even when no real local session or transcript had been attached. That let stale ids survive in the picker as if they were replayable threads.

*Fix* — Stop creating empty local thread channels during subagent metadata hydration and loaded-thread backfill. When opening `/agent`, prune metadata-only entries that `thread/read` reports as terminally unavailable. When selecting a discovered subagent that is still live but not yet locally attached, materialize a real local session on demand from `thread/read` instead of falling back to an empty replay state.
## Why

The previous `codex-tools` migration steps moved the shared schema models, local-host specs, collaboration specs, and related adapters out of `codex-core`, but `core/src/tools/spec.rs` still contained a grab bag of pure utility tool builders. Those specs do not need session state or handler logic; they only describe wire shapes for tools that `codex-core` already knows how to execute. Moving that remaining low-coupling layer into `codex-tools` keeps the migration moving in meaningful chunks and trims another large block of passive tool-spec construction out of `codex-core` without touching the runtime-coupled handlers.

## What changed

- extended `codex-tools` to own the pure spec builders for:
  - code-mode `exec` / `wait`
  - `js_repl` / `js_repl_reset`
  - MCP resource tools `list_mcp_resources`, `list_mcp_resource_templates`, and `read_mcp_resource`
  - utility tools `list_dir` and `test_sync_tool`
- split those builders across small module files with sibling `*_tests.rs` coverage, keeping `src/lib.rs` exports-only
- rewired `core/src/tools/spec.rs` to call the extracted builders and deleted the duplicated core-local implementations
- moved the direct JS REPL grammar seam test out of `core/src/tools/spec_tests.rs` so it now lives with the extracted implementation in `codex-tools`
- updated `codex-rs/tools/README.md` so the documented crate boundary matches the new utility-spec surface

## Test plan

- `CARGO_TARGET_DIR=/tmp/codex-tools-utility-specs cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-utility-specs cargo test -p codex-core --lib tools::spec::`
- `just fix -p codex-tools -p codex-core`
- `just argument-comment-lint`

## References

- openai#15923
- openai#15928
- openai#15944
- openai#15953
- openai#16031
- openai#16047
- openai#16129
- openai#16132
- openai#16138
- openai#16141
…6204)

## Summary

A Windows-only snapshot assertion in the app-server MCP startup warning test compared the raw rendered path, so CI saw `C:\tmp\project` instead of the normalized `/tmp/project` snapshot fixture.

## Fix

Route that snapshot assertion through the existing `normalize_snapshot_paths(...)` helper so the test remains platform-stable.
Add a mailbox we can use for inter-agent communication. `wait` is now based on it and no longer takes a target.
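A minimal sketch of what a channel-backed mailbox like this could look like, assuming one queue per agent; all names here are illustrative, not the actual codex types.

```rust
// Hypothetical mailbox registry: each agent owns the receiving end of its
// own queue, and other agents deliver messages by name. A `wait`-style call
// then just blocks on the agent's own receiver -- no target parameter needed.
use std::collections::HashMap;
use std::sync::mpsc::{channel, Receiver, Sender};

struct Mailboxes {
    senders: HashMap<String, Sender<String>>,
}

impl Mailboxes {
    fn new() -> Self {
        Self { senders: HashMap::new() }
    }

    /// Register an agent and hand back the receiving end of its mailbox.
    fn register(&mut self, agent: &str) -> Receiver<String> {
        let (tx, rx) = channel();
        self.senders.insert(agent.to_string(), tx);
        rx
    }

    /// Deliver a message into another agent's mailbox by name.
    fn send(&self, to: &str, msg: &str) {
        if let Some(tx) = self.senders.get(to) {
            let _ = tx.send(msg.to_string());
        }
    }
}
```

Basing `wait` on the agent's own receiver is what removes the target argument: the waiter no longer names who it waits for, it just drains whatever lands in its mailbox.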
## Why

`core/src/tools/spec.rs` still owned the pure `tool_search` and `tool_suggest` spec builders even though that logic no longer needed `codex-core` runtime state. This change continues the `codex-tools` migration by moving the reusable discovery and suggestion spec construction out of `codex-core` so `spec.rs` is left with the core-owned policy decisions about when these tools are exposed and what metadata is available.

## What changed

- Added `codex-rs/tools/src/tool_discovery.rs` with the shared `tool_search` and `tool_suggest` spec builders, plus focused unit tests in `tool_discovery_tests.rs`.
- Moved the shared `DiscoverableToolAction` and `DiscoverableToolType` declarations into `codex-tools` so the `tool_suggest` handler and the extracted spec builders use the same wire-model enums.
- Updated `core/src/tools/spec.rs` to translate `ToolInfo` and `DiscoverableTool` values into neutral `codex-tools` inputs and delegate the actual spec building there.
- Removed the old template-based description rendering helpers from `core/src/tools/spec.rs` and deleted the now-dead helper methods in `core/src/tools/discoverable.rs`.
- Updated `codex-rs/tools/README.md` to document that discovery and suggestion models/spec builders now live in `codex-tools`.

## Test plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-discovery-specs cargo test -p codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discovery-specs cargo test -p codex-core --lib tools::handlers::tool_suggest::`
- `just argument-comment-lint`

## References

- openai#16154
- openai#15923
- openai#15928
- openai#15944
- openai#15953
- openai#16031
- openai#16047
- openai#16129
- openai#16132
- openai#16138
- openai#16141
## Why

`openai#16193` moved the pure `tool_search` and `tool_suggest` spec builders into `codex-tools`, but `codex-core` still owned the shared discoverable-tool model that those builders and the `tool_suggest` runtime both depend on. This change continues the migration by moving that reusable model boundary out of `codex-core` as well, so the discovery/suggestion stack uses one shared set of types and `core/src/tools` no longer needs its own `discoverable.rs` module.

## What changed

- Moved `DiscoverableTool`, `DiscoverablePluginInfo`, and `filter_tool_suggest_discoverable_tools_for_client()` into `codex-rs/tools/src/tool_discovery.rs` alongside the extracted discovery/suggestion spec builders.
- Added `codex-app-server-protocol` as a `codex-tools` dependency so the shared discoverable-tool model can own the connector-side `AppInfo` variant directly.
- Updated `core/src/tools/handlers/tool_suggest.rs`, `core/src/tools/spec.rs`, `core/src/tools/router.rs`, `core/src/connectors.rs`, and `core/src/codex.rs` to consume the shared `codex-tools` model instead of the old core-local declarations.
- Changed `core/src/plugins/discoverable.rs` to return `DiscoverablePluginInfo` directly, moved the pure client-filter coverage into `tool_discovery_tests.rs`, and deleted the old `core/src/tools/discoverable.rs` module.
- Updated `codex-rs/tools/README.md` so the crate boundary documents that `codex-tools` now owns the discoverable-tool models in addition to the discovery/suggestion spec builders.

## Test plan

- `cargo test -p codex-tools`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p codex-core --lib tools::handlers::tool_suggest::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p codex-core --lib tools::spec::`
- `CARGO_TARGET_DIR=/tmp/codex-core-discoverable-model cargo test -p codex-core --lib plugins::discoverable::`
- `just bazel-lock-check`
- `just argument-comment-lint`

## References

- openai#16193
- openai#16154
- openai#15923
- openai#15928
- openai#15944
- openai#15953
- openai#16031
- openai#16047
- openai#16129
- openai#16132
- openai#16138
- openai#16141
## Why

The Bazel-backed `argument-comment-lint` CI path had two gaps:

- Bazel wildcard target expansion skipped inline unit-test crates from `src/` modules because the generated `*-unit-tests-bin` `rust_test` targets are tagged `manual`.
- `argument-comment-mismatch` was still only a warning in the Bazel and packaged-wrapper entrypoints, so a typoed `/*param_name*/` comment could still pass CI even when the lint detected it.

That left CI blind to real linux-sandbox examples, including the missing `/*local_port*/` comment in `codex-rs/linux-sandbox/src/proxy_routing.rs` and typoed argument comments in `codex-rs/linux-sandbox/src/landlock.rs`.

## What Changed

- Added `tools/argument-comment-lint/list-bazel-targets.sh` so Bazel lint runs cover `//codex-rs/...` plus the manual `rust_test` `*-unit-tests-bin` targets.
- Updated `just argument-comment-lint`, `rust-ci.yml`, and `rust-ci-full.yml` to use that helper.
- Promoted both `argument-comment-mismatch` and `uncommented-anonymous-literal-argument` to errors in every strict entrypoint:
  - `tools/argument-comment-lint/lint_aspect.bzl`
  - `tools/argument-comment-lint/src/bin/argument-comment-lint.rs`
  - `tools/argument-comment-lint/wrapper_common.py`
- Added wrapper/bin coverage for the stricter lint flags and documented the behavior in `tools/argument-comment-lint/README.md`.
- Fixed the now-covered callsites in `codex-rs/linux-sandbox/src/proxy_routing.rs`, `codex-rs/linux-sandbox/src/landlock.rs`, and `codex-rs/core/src/shell_snapshot_tests.rs`.

This keeps the Bazel target expansion narrow while making the Bazel and prebuilt-linter paths enforce the same strict lint set.

## Verification

- `python3 -m unittest discover -s tools/argument-comment-lint -p 'test_*.py'`
- `cargo +nightly-2025-09-18 test --manifest-path tools/argument-comment-lint/Cargo.toml`
- `just argument-comment-lint`
…nai#16225)

- rework codex analytics crate to use reducer / publish architecture
- in anticipation of extensive codex analytics
## Why

Follow-up to openai#16106. `argument-comment-lint` already runs as a native Bazel aspect on Linux and macOS, but Windows is still the long pole in `rust-ci`. To move Windows onto the same native Bazel lane, the toolchain split has to let exec-side helper binaries build in an MSVC environment while still linting repo crates as `windows-gnullvm`. Pushing the Windows lane onto the native Bazel path exposed a second round of Windows-only issues in the mixed exec-toolchain plumbing after the initial wrapper/target fixes landed.

## What Changed

- keep the Windows lint lanes on the native Bazel/aspect path in `rust-ci.yml` and `rust-ci-full.yml`
- add a dedicated `local_windows_msvc` platform for exec-side helper binaries while keeping `local_windows` as the `windows-gnullvm` target platform
- patch `rules_rust` so `repository_set(...)` preserves explicit exec-platform constraints for the generated toolchains, keep the Windows-specific bootstrap/direct-link fixes needed for the nightly lint driver, and expose exec-side `rustc-dev` `.rlib`s to the MSVC sysroot
- register the custom Windows nightly toolchain set with MSVC exec constraints while still exposing both `x86_64-pc-windows-msvc` and `x86_64-pc-windows-gnullvm` targets
- enable `dev_components` on the custom Windows nightly repository set so the MSVC exec helper toolchain actually downloads the compiler-internal crates that `clippy_utils` needs
- teach `run-argument-comment-lint-bazel.sh` to enumerate concrete Windows Rust rules, normalize the resulting labels, and skip explicitly requested incompatible targets instead of failing before the lint run starts
- patch `rules_rust` build-script env propagation so exec-side `windows-msvc` helper crates drop forwarded MinGW include and linker search paths as whole flag/path pairs instead of emitting malformed `CFLAGS`, `CXXFLAGS`, and `LDFLAGS`
- export the Windows VS/MSVC SDK environment in `setup-bazel-ci` and pass the relevant variables through `run-bazel-ci.sh` via `--action_env` / `--host_action_env` so Bazel build scripts can see the MSVC and UCRT headers on native Windows runs
- add inline comments to the Windows `setup-bazel-ci` MSVC environment export step so it is easier to audit how `vswhere`, `VsDevCmd.bat`, and the filtered `GITHUB_ENV` export fit together
- patch `aws-lc-sys` to skip its standalone `memcmp` probe under Bazel `windows-msvc` build-script environments, which avoids a Windows-native toolchain mismatch that blocked the lint lane before it reached the aspect execution
- patch `aws-lc-sys` to prefer its bundled `prebuilt-nasm` objects for Bazel `windows-msvc` build-script runs, which avoids missing `generated-src/win-x86_64/*.asm` runfiles in the exec-side helper toolchain
- annotate the Linux test-only callsites in `codex-rs/linux-sandbox` and `codex-rs/core` that the wider native lint coverage surfaced

## Patches

This PR introduces a large patch stack because the Windows Bazel lint lane currently depends on behavior that upstream dependencies do not provide out of the box in the mixed `windows-gnullvm` target / `windows-msvc` exec-toolchain setup.

- Most of the `rules_rust` patches look like upstream candidates rather than OpenAI-only policy. Preserving explicit exec-platform constraints, forwarding the right MSVC/UCRT environment into exec-side build scripts, exposing exec-side `rustc-dev` artifacts, and keeping the Windows bootstrap/linker behavior coherent all look like fixes to the Bazel/Rust integration layer itself.
- The two `aws-lc-sys` patches are more tactical. They special-case Bazel `windows-msvc` build-script environments to avoid a `memcmp` probe mismatch and missing NASM runfiles. Those may be harder to upstream as-is because they rely on Bazel-specific detection instead of a general Cargo/build-script contract.
- Short term, carrying these patches in-tree is reasonable because they unblock a real CI lane and are still narrow enough to audit. Long term, the goal should not be to keep growing a permanent local fork of either dependency.
- My current expectation is that the `rules_rust` patches are less controversial and should be broken out into focused upstream proposals, while the `aws-lc-sys` patches are more likely to be temporary escape hatches unless that crate wants a more general hook for hermetic build systems.

Suggested follow-up plan:

1. Split the `rules_rust` deltas into upstream-sized PRs or issues with minimized repros.
2. Revisit the `aws-lc-sys` patches during the next dependency bump and see whether they can be replaced by an upstream fix, a crate upgrade, or a cleaner opt-in mechanism.
3. Treat each dependency update as a chance to delete patches one by one so the local patch set only contains still-needed deltas.

## Verification

- `./.github/scripts/run-argument-comment-lint-bazel.sh --config=argument-comment-lint --keep_going`
- `RUNNER_OS=Windows ./.github/scripts/run-argument-comment-lint-bazel.sh --nobuild --config=argument-comment-lint --platforms=//:local_windows --keep_going`
- `cargo test -p codex-linux-sandbox`
- `cargo test -p codex-core shell_snapshot_tests`
- `just argument-comment-lint`

## References

- openai#16106
…#16286)

## Summary

`ExternalAuthRefresher` was still shaped around external ChatGPT auth: `ExternalAuthTokens` always implied ChatGPT account metadata even when a caller only needed a bearer token. This PR generalizes that contract so bearer-only sources are first-class, while keeping the existing ChatGPT paths strict anywhere we persist or rebuild ChatGPT auth state.

## Motivation

This is the first step toward openai#15189. The follow-on provider-auth work needs one shared external-auth contract that can do both of these things:

- resolve the current bearer token before a request is sent
- return a refreshed bearer token after a `401`

That should not require a second token result type just because there is no ChatGPT account metadata attached.

## What Changed

- change `ExternalAuthTokens` to carry `access_token` plus optional `ExternalAuthChatgptMetadata`
- add helper constructors for bearer-only tokens and ChatGPT-backed tokens
- add `ExternalAuthRefresher::resolve()` with a default no-op implementation so refreshers can optionally provide the current token before a request is sent
- keep ChatGPT-only persistence strict by continuing to require ChatGPT metadata anywhere the login layer seeds or reloads ChatGPT auth state
- update the app-server bridge to construct the new token shape for external ChatGPT auth refreshes

## Testing

- `cargo test -p codex-login`

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16286).
* openai#16288
* openai#16287
* __->__ openai#16286
## Summary

`AuthManager` and `UnauthorizedRecovery` already own token resolution and staged `401` recovery. The missing piece for provider auth was a bearer-only mode that still fit that design, instead of pushing a second auth abstraction into `codex-core`. This PR keeps the design centered on `AuthManager`: it teaches `codex-login` how to own external bearer auth directly so later provider work can keep calling `AuthManager.auth()` and `UnauthorizedRecovery`.

## Motivation

This is the middle layer for openai#15189. The intended design is still:

- `AuthManager` encapsulates token storage and refresh
- `UnauthorizedRecovery` powers staged `401` recovery
- all request tokens go through `AuthManager.auth()`

This PR makes that possible for provider-backed bearer tokens by adding a bearer-only auth mode inside `AuthManager` instead of building parallel request-auth plumbing in `core`.

## What Changed

- move `ModelProviderAuthInfo` into `codex-protocol` so `core` and `login` share one config shape
- add `login/src/auth/external_bearer.rs`, which runs the configured command, caches the bearer token in memory, and refreshes it after `401`
- add `AuthManager::external_bearer_only(...)` for provider-scoped request paths that should use command-backed bearer auth without mutating the shared OpenAI auth manager
- add `AuthManager::shared_with_external_chatgpt_auth_refresher(...)` and rename the other `AuthManager` helpers that only apply to external ChatGPT auth so the ChatGPT-only path is explicit at the call site
- keep external ChatGPT refresh behavior unchanged while ensuring bearer-only external auth never persists to `auth.json`

## Testing

- `cargo test -p codex-login`
- `cargo test -p codex-protocol`

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/openai/codex/pull/16287).
* openai#16288
* __->__ openai#16287
## Summary

Fixes openai#15189. Custom model providers that set `requires_openai_auth = false` could only use static credentials via `env_key` or `experimental_bearer_token`. That is not enough for providers that mint short-lived bearer tokens, because Codex had no way to run a command to obtain a bearer token, cache it briefly in memory, and retry with a refreshed token after a `401`. This PR adds that provider config and wires it through the existing auth design: request paths still go through `AuthManager.auth()` and `UnauthorizedRecovery`, with `core` only choosing when to use a provider-backed bearer-only `AuthManager`.

## Scope

To keep this PR reviewable, `/models` only uses provider auth for the initial request in this change. It does **not** add a dedicated `401` retry path for `/models`; that can be follow-up work if we still need it after landing the main provider-token support.

## Example Usage

```toml
model_provider = "corp-openai"

[model_providers.corp-openai]
name = "Corp OpenAI"
base_url = "https://gateway.example.com/openai"
requires_openai_auth = false

[model_providers.corp-openai.auth]
command = "gcloud"
args = ["auth", "print-access-token"]
timeout_ms = 5000
refresh_interval_ms = 300000
```

The command contract is intentionally small:

- write the bearer token to `stdout`
- exit `0`
- any leading or trailing whitespace is trimmed before the token is used

## What Changed

- add `model_providers.<id>.auth` to the config model and generated schema
- validate that command-backed provider auth is mutually exclusive with `env_key`, `experimental_bearer_token`, and `requires_openai_auth`
- build a bearer-only `AuthManager` for `ModelClient` and `ModelsManager` when a provider configures `auth`
- let normal Responses requests and realtime websocket connects use the provider-backed bearer source through the same `AuthManager.auth()` path
- allow `/models` online refresh for command-auth providers and attach the provider token to the initial `/models` request
- keep `auth.cwd` available as an advanced escape hatch and include it in the generated config schema

## Testing

- `cargo test -p codex-core provider_auth_command`
- `cargo test -p codex-core refresh_available_models_uses_provider_auth_token`
- `cargo test -p codex-core test_deserialize_provider_auth_config_defaults`

## Docs

- `developers.openai.com/codex` should document the new `[model_providers.<id>.auth]` block and the token-command contract
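The token-command contract above is small enough to sketch directly. This is an illustrative sketch only, assuming the documented contract (token on `stdout`, exit `0`, surrounding whitespace trimmed); the function name and error shape are hypothetical, not the actual `codex-login` implementation.

```rust
// Hypothetical sketch of running a configured token command: spawn it,
// require a zero exit status, and trim whitespace from stdout before
// treating the result as a bearer token.
use std::process::Command;

fn fetch_bearer_token(program: &str, args: &[&str]) -> Result<String, String> {
    let output = Command::new(program)
        .args(args)
        .output()
        .map_err(|e| format!("failed to spawn token command: {e}"))?;
    if !output.status.success() {
        return Err(format!("token command exited with {}", output.status));
    }
    // Per the contract, leading/trailing whitespace (including the trailing
    // newline most CLIs print) is trimmed before the token is used.
    let token = String::from_utf8_lossy(&output.stdout).trim().to_string();
    if token.is_empty() {
        return Err("token command printed an empty token".to_string());
    }
    Ok(token)
}
```

A real implementation would also enforce `timeout_ms`, cache the token in memory for `refresh_interval_ms`, and re-run the command after a `401`; those pieces are omitted here for brevity.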
Fix the end-of-turn watcher dying unexpectedly
Adds this:
```rust
properties.insert(
"fork_turns".to_string(),
JsonSchema::String {
description: Some(
"Optional MultiAgentV2 fork mode. Use `none`, `all`, or a positive integer string such as `3` to fork only the most recent turns."
.to_string(),
),
},
);
```
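The accepted `fork_turns` values from the schema description above could be parsed along these lines. This is a hypothetical sketch; the enum and function names are assumptions, not the actual MultiAgentV2 types.

```rust
// Illustrative parser for the `fork_turns` string values the schema
// describes: "none", "all", or a positive integer string such as "3".
#[derive(Debug, PartialEq)]
enum ForkTurns {
    None,
    All,
    Recent(u32), // fork only the N most recent turns
}

fn parse_fork_turns(raw: &str) -> Result<ForkTurns, String> {
    match raw {
        "none" => Ok(ForkTurns::None),
        "all" => Ok(ForkTurns::All),
        other => match other.parse::<u32>() {
            Ok(n) if n > 0 => Ok(ForkTurns::Recent(n)),
            _ => Err(format!("invalid fork_turns value: {other:?}")),
        },
    }
}
```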
---------
Co-authored-by: Codex <noreply@openai.com>
I noticed that https://github.com/openai/codex/actions/workflows/rust-ci-full.yml started failing on my own PR, openai#16288, even though CI was green when I merged it. Apparently, it introduced a lint violation that was [correctly!] caught by our Cargo-based clippy runner, but not our Bazel-based one. My next step is to figure out the reason for the delta between the two setups, but I wanted to get us green again quickly, first.
The TUI’s `/feedback` flow was still uploading directly through the local feedback crate, which bypassed app-server behavior such as auth-derived feedback tags like `chatgpt_user_id` and made TUI feedback handling diverge from other clients. It also meant that remote TUI sessions failed to upload the correct feedback logs and session details.

Testing: Manually tested the `/feedback` flow and confirmed that it didn't regress.
Run the DB clean-up more frequently and include an incremental `VACUUM` in it
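Assuming the database here is SQLite, an incremental vacuum looks roughly like the following; this is a generic sketch of SQLite's mechanism, not the exact pragmas or page budget used in this change.

```sql
-- Incremental vacuum requires auto_vacuum to be INCREMENTAL; this setting
-- only takes effect on a new database or after a full VACUUM rewrites it.
PRAGMA auto_vacuum = INCREMENTAL;

-- During the periodic clean-up: release up to 1000 free pages back to the
-- filesystem without the full-database rewrite a plain VACUUM performs,
-- which is why it is cheap enough to run frequently.
PRAGMA incremental_vacuum(1000);
```

Calling `incremental_vacuum` with no argument (or `0`) frees the entire freelist in one pass; a bounded page count keeps each clean-up short.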
- add event for thread initialization - thread/start, thread/fork, thread/resume
- feature flagged behind `FeatureFlag::GeneralAnalytics`
- does not yet support threads started by subagents

PR stack:
- --> [[telemetry] thread events openai#15690](openai#15690)
- [[telemetry] subagent events openai#15915](openai#15915)
- [[telemetry] turn events openai#15591](openai#15591)
- [[telemetry] steer events openai#15697](openai#15697)
- [[telemetry] queued prompt data openai#15804](openai#15804)

Sample extracted logs in Codex-backend

```
INFO | 2026-03-29 16:39:37 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bf7-9f5f-7f82-9877-6d48d1052531 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774827577 |
INFO | 2026-03-29 16:45:46 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3b84-5731-79d0-9b3b-9c6efe5f5066 product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=resumed subagent_source=None parent_thread_id=None created_at=1774820022 |
INFO | 2026-03-29 16:45:49 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3bfd-4cd6-7c12-a13e-48cef02e8c4d product_surface=codex product_client_id=CODEX_CLI client_name=codex-tui client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=forked subagent_source=None parent_thread_id=None created_at=1774827949 |
INFO | 2026-03-29 17:20:29 | codex_backend.routers.analytics_events | analytics_events.track_analytics_events:398 | Tracked analytics event codex_thread_initialized thread_id=019d3c1d-0412-7ed2-ad24-c9c0881a36b0 product_surface=codex product_client_id=CODEX_SERVICE_EXEC client_name=codex_exec client_version=0.0.0 rpc_transport=in_process experimental_api_enabled=True codex_rs_version=0.0.0 runtime_os=macos runtime_os_version=26.4.0 runtime_arch=aarch64 model=gpt-5.3-codex ephemeral=False thread_source=user initialization_mode=new subagent_source=None parent_thread_id=None created_at=1774830027 |
```

Notes
- `product_client_id` gets canonicalized in codex-backend
- subagent threads are addressed in a following pr
## Summary

- prioritize newly surfaced review comments ahead of CI and mergeability handling in the PR babysitter watcher
- keep `--watch` running for open PRs even when they are currently merge-ready so later review feedback is not missed
## Summary

- Replace the separate external auth enum and refresher trait with a single `ExternalAuth` trait in the login auth flow
- Move bearer token auth behind `BearerTokenRefresher` and update `AuthManager` and app-server wiring to use the generic external auth API
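The unified trait could have roughly the following shape. This is an illustrative sketch of the design described in the summary, not the actual `codex-login` API; the method signatures and the `BearerTokenRefresher` internals are assumptions.

```rust
// Hypothetical shape of a single external-auth trait: one contract that
// covers both resolving the current token before a request and producing
// a refreshed token after a 401.
trait ExternalAuth {
    /// Resolve the current bearer token before a request is sent.
    fn resolve(&self) -> Option<String>;
    /// Return a refreshed bearer token after a `401`.
    fn refresh(&self) -> Option<String>;
}

// A bearer-only implementation that holds a token in memory.
struct BearerTokenRefresher {
    token: String,
}

impl ExternalAuth for BearerTokenRefresher {
    fn resolve(&self) -> Option<String> {
        Some(self.token.clone())
    }

    fn refresh(&self) -> Option<String> {
        // A real refresher would re-run its token source (for example a
        // configured command) here instead of returning the cached value.
        Some(self.token.clone())
    }
}
```

With one trait, `AuthManager` and the app-server wiring can hold a `dyn ExternalAuth` regardless of whether the source is ChatGPT-backed or bearer-only, which is what removes the separate enum.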
Summary
- `watchdog` namespace for watchdog-only parent-management tools (`spawn_agent`, `send_input`, `wait_agent`, `close_agent`, `list_agents`) at top level
- `spawn_mode` removed from spawn prompts/events in this branch; `[agents.$role]` config migration is handled in "Force forked agents to inherit parent model settings" #16055

Testing