
feat(channels): add personal WeChat channel support #786

Closed

xinuxZ wants to merge 81 commits into RightNow-AI:main from xinuxZ:feature/personal-wechat

Conversation

@xinuxZ
Contributor

@xinuxZ xinuxZ commented Mar 22, 2026

Summary

This PR adds a new wechat channel for personal WeChat accounts in OpenFang.

Unlike wecom, which targets WeCom / enterprise messaging, this channel is designed for personal WeChat usage.

Personal WeChat is far more common in many real-world communication scenarios. Compared with tools such as Feishu or DingTalk, personal WeChat setup is also much simpler for individual users.

Why

OpenFang already supports several messaging and collaboration channels, but personal WeChat support is especially useful because:

  • The iLink Bot API provides an official QR-code login flow for connecting a personal WeChat account to bot services, which makes this integration practical and straightforward.
  • Personal WeChat is one of the most widely used communication tools.
  • It covers many direct, user-facing daily communication scenarios.
  • It is easier for individual users to set up than enterprise-focused tools.
  • It lowers the barrier for connecting OpenFang to common personal communication workflows.

What’s Included

This PR adds:

  • a new wechat channel using the standard OpenFang channel config style
  • QR-based login for personal WeChat authorization
  • persistence of authorized account state for reuse after restart
  • dashboard support for completing the WeChat setup flow
  • bridge startup and runtime wiring for the new channel

Implementation

  • adds the wechat channel wiring
  • adds QR login session start / status handling
  • persists authenticated WeChat account state locally
  • exposes the setup flow through the dashboard
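The QR login session handling above can be sketched as a small state machine. This is an illustrative sketch only: the names (QrLoginStatus, next_action) are assumptions and do not reflect the PR's actual API.

```rust
// Hypothetical sketch of the QR login session states; names are
// illustrative, not the PR's real types.
#[derive(Debug, Clone, PartialEq)]
enum QrLoginStatus {
    WaitingScan,                 // QR code shown, not yet scanned
    Scanned,                     // scanned on the phone, awaiting confirmation
    Confirmed { token: String }, // authorized; token to persist in state_dir
    Expired,                     // QR code timed out; a new session must start
}

/// Decide what the setup flow should do after each status poll.
fn next_action(status: &QrLoginStatus) -> &'static str {
    match status {
        QrLoginStatus::WaitingScan | QrLoginStatus::Scanned => "keep polling",
        QrLoginStatus::Confirmed { .. } => "persist account state",
        QrLoginStatus::Expired => "start a new QR session",
    }
}

fn main() {
    let s = QrLoginStatus::Confirmed { token: "tok".into() };
    println!("{}", next_action(&s)); // persist account state
}
```

Persisting the token on Confirmed is what allows the "reuse after restart" behavior listed above.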

Configuration

The new channel follows the existing OpenFang channel convention and does not introduce a separate config style.

Example:

[channels.wechat]
default_agent = "assistant"
allowed_users = []
state_dir = "./wechat-state"

Secrets can continue to be managed through the standard ~/.openfang/secret.env workflow.

Notes

This channel is specifically for personal WeChat.

It is different from wecom:

  • wecom is for enterprise / organizational messaging
  • wechat is for personal WeChat accounts

The goal of this PR is to make personal WeChat a first-class channel option in OpenFang.

Testing

Build / checks

  • cargo build --workspace --lib
  • cargo test --workspace

Manual verification

  • configured channels.wechat successfully
  • started OpenFang with the WeChat channel enabled
  • completed QR login through the setup flow
  • verified dashboard-based WeChat setup works
  • verified authorized account state can be reused after restart

@xinuxZ
Contributor Author

xinuxZ commented Mar 22, 2026

I hope this PR gets merged soon. Personal WeChat is still the preferred option for most users, compared to enterprise tools like Feishu, DingTalk, or WeChat Work.

Thanks!

@xinuxZ
Contributor Author

xinuxZ commented Mar 22, 2026

In China, the personal version of WeChat doesn't have an official API. The fact that Tencent is willing to provide API support is remarkable—it shows an unprecedented level of commitment from Tencent to the agent ecosystem. OpenClaw can work with personal WeChat accounts, and I hope OpenFang will support this as well.

Some extra context: This is a huge deal. With WeChat personal APIs opening up, AI agents can plug straight into the chat habits of 1.2 billion people—this totally changes the game for how agents are deployed. If OpenFang supports this, it’ll be a major differentiator.


@marcoziti

Supporting this hugely impactful platform would be a big boost to the OpenFang community.

@Jengro777

Great job

@jaberjaber23
Member

Reviewed. This is a well-built WeChat implementation via iLink Bot API. Two must-fix items before merge: (1) Replace the unsafe std::env::remove_var and set_var calls with app-level config state since they run in async multi-threaded Tokio runtime. (2) Set restrictive file permissions (0600) on account.json which stores the bot token.
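The second must-fix item can be sketched with std only: set the restrictive mode at file creation, before any token bytes are written. The path and payload here are illustrative; Unix-only (`OpenOptionsExt::mode`).

```rust
// Sketch: write account.json with 0600 permissions so the stored bot
// token is readable by the owner only. Path/payload are illustrative.
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::{OpenOptionsExt, PermissionsExt};

fn write_account_state(path: &str, json: &[u8]) -> std::io::Result<()> {
    let mut f = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .mode(0o600) // applied at creation, before any bytes land on disk
        .open(path)?;
    f.write_all(json)
}

fn main() -> std::io::Result<()> {
    write_account_state("account.json", b"{\"bot_token\":\"...\"}")?;
    let mode = std::fs::metadata("account.json")?.permissions().mode();
    println!("{:o}", mode & 0o777); // 600
    Ok(())
}
```

Note that `.mode(0o600)` only applies when the file is created; if the file may already exist with looser permissions, follow up with `std::fs::set_permissions`.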

@xinuxZ
Contributor Author

xinuxZ commented Mar 27, 2026

@jaberjaber23
I've fixed the issues you mentioned.

jaberjaber23 and others added 17 commits March 28, 2026 17:52
…AI#820, RightNow-AI#848, RightNow-AI#826, RightNow-AI#836)

- RightNow-AI#834: Remove 3 decommissioned Groq models (gemma2-9b-it, llama-3.2-1b/3b-preview)
- RightNow-AI#805: Ollama streaming parser now checks both reasoning_content and reasoning fields
- RightNow-AI#820: Browser Hand checks python3 before python, fix optional dep logic
- RightNow-AI#848: Hand continuous interval changed from 60s to 3600s to prevent credit waste
- RightNow-AI#826: Doctor command no longer reports all_ok when provider key is rejected
- RightNow-AI#836: WebSocket tool events now include tool call ID for concurrent call correlation

All 825+ tests passing. Verified live with daemon.
…AI#823, RightNow-AI#767, RightNow-AI#802, RightNow-AI#816)

- RightNow-AI#845: Model fallback chain now retries with fallback_models on ModelNotFound
- RightNow-AI#844: Heartbeat skips idle agents that never received a message (no more crash loops)
- RightNow-AI#823: Doctor --json outputs clean JSON to stdout, tracing to stderr, BrokenPipe handled
- RightNow-AI#767: Workflows page scrollable with flex layout fix
- RightNow-AI#802: Model dropdown handles object options (no more [object Object] for Ollama)
- RightNow-AI#816: Spawn wizard provider dropdown loads dynamically from /api/providers (43 providers)

All 829+ tests passing. Live tested with daemon.
…AI#856, RightNow-AI#770, RightNow-AI#774, RightNow-AI#851/RightNow-AI#808, RightNow-AI#785)

- RightNow-AI#825: Doctor now surfaces blocked workspace skills count in injection scan
- RightNow-AI#828: Skill install detects Git URLs (https://, git@) and clones before install
- RightNow-AI#856: Custom model names preserved — user-defined models take priority over builtins
- RightNow-AI#770: Dashboard WS streaming now triggers Alpine.js reactivity via splice()
- RightNow-AI#774: tool_use.input always normalized to JSON object (fixes Anthropic API errors)
- RightNow-AI#851/RightNow-AI#808: Global skills loaded for all agents; workspace skills properly override globals
- RightNow-AI#785: Gemini streaming SSE parser handles \r\n line endings (fixes empty response loop)

All 2,186 tests passing. Live tested with daemon.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Bumps [openssl](https://github.com/rust-openssl/rust-openssl) from 0.10.75 to 0.10.76.
- [Release notes](https://github.com/rust-openssl/rust-openssl/releases)
- [Commits](rust-openssl/rust-openssl@openssl-v0.10.75...openssl-v0.10.76)

---
updated-dependencies:
- dependency-name: openssl
  dependency-version: 0.10.76
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
Bumps [clap_complete](https://github.com/clap-rs/clap) from 4.5.66 to 4.6.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](clap-rs/clap@clap_complete-v4.5.66...clap_complete-v4.6.0)

---
updated-dependencies:
- dependency-name: clap_complete
  dependency-version: 4.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Without this attribute, serde treats a missing `result` field as a
deserialization error even though `Option<T>` implies the field is
optional.  Some Claude CLI versions emit the response in `content` or
`text` rather than `result`; the silent parse failure caused the
driver to fall through to a plain-text read which could be empty,
triggering the "model returned an empty response" guard in the agent
loop.

Closes RightNow-AI#295.

Co-Authored-By: Claude <[email protected]>
…ssistant content

Newer Claude CLI versions (≥2.x) emit assistant responses inside a nested
`message.content[].text` structure in stream-json events, rather than a
flat `content` string.

Add ClaudeMessageBlock and ClaudeAssistantMessage structs, plus a new
`message` field on ClaudeStreamEvent, so the stream handler can extract
text from both layouts.

Refs: RightNow-AI#295
…urrent drain

When complete() called child.wait() before reading stdout/stderr, large
responses (>64 KB) caused a deadlock: the subprocess blocked on write()
because the OS pipe buffer was full, and wait() never returned.

Fix by spawning two tokio tasks to drain stdout/stderr concurrently with
child.wait(), then collecting after the process exits.

Also inject HOME from home_dir() so the CLI finds ~/.claude/credentials
when OpenFang runs as a service, and set stdin to null so the CLI does
not stall waiting for interactive input.

Refs: RightNow-AI#295
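The drain-before-wait fix described in this commit can be sketched with std threads in place of tokio tasks; the principle is identical. The shell command is only there to produce more output than the OS pipe buffer holds.

```rust
// Sketch of the deadlock fix: drain stdout/stderr concurrently, and only
// then wait() on the child. Calling wait() first would deadlock once the
// ~64 KB pipe buffer fills.
use std::io::Read;
use std::process::{Command, Stdio};
use std::thread;

fn run_and_drain() -> std::io::Result<usize> {
    let mut child = Command::new("sh")
        .arg("-c")
        .arg("head -c 200000 /dev/zero | tr '\\0' 'x'") // ~200 KB of output
        .stdin(Stdio::null())   // never stall on interactive input
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()?;

    let mut out = child.stdout.take().unwrap();
    let mut err = child.stderr.take().unwrap();
    // Drain both pipes while the process is still running...
    let t_out = thread::spawn(move || {
        let mut b = Vec::new();
        out.read_to_end(&mut b).map(|_| b)
    });
    let t_err = thread::spawn(move || {
        let mut b = Vec::new();
        err.read_to_end(&mut b).map(|_| b)
    });
    // ...and only then wait for exit and collect.
    let status = child.wait()?;
    let stdout = t_out.join().unwrap()?;
    t_err.join().unwrap()?;
    assert!(status.success());
    Ok(stdout.len())
}

fn main() {
    println!("{}", run_and_drain().unwrap()); // 200000
}
```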
Mirror the same environment fixes applied to complete(): inject HOME so
the CLI locates ~/.claude/credentials when running as a service, and set
stdin to null so the process does not block on interactive input.

Refs: RightNow-AI#295
…in stream()

Claude CLI ≥2.x emits type=assistant events where the response text is
inside message.content[{"type":"text","text":"..."}] rather than a flat
content string. The old handler only checked event.content, so every
token was silently dropped and streaming always returned an empty response.

The handler now checks the flat content field first (backward-compatible),
then falls back to joining all text blocks from message.content[].

Refs: RightNow-AI#295
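The flat-first, nested-fallback extraction this commit describes can be sketched with plain structs (no serde); the struct shapes echo the ClaudeMessageBlock idea but are illustrative, not the PR's actual types.

```rust
// Sketch: prefer the flat `content` string (old CLI), else join the text
// blocks from message.content[] (CLI >= 2.x). Types are illustrative.
struct Block { kind: String, text: Option<String> }
struct Event {
    content: Option<String>,     // old flat layout
    message: Option<Vec<Block>>, // new nested layout
}

fn extract_text(ev: &Event) -> String {
    if let Some(flat) = &ev.content {
        if !flat.is_empty() {
            return flat.clone(); // backward-compatible path
        }
    }
    ev.message
        .as_deref()
        .unwrap_or(&[])
        .iter()
        .filter(|b| b.kind == "text")
        .filter_map(|b| b.text.as_deref())
        .collect::<Vec<_>>()
        .join("")
}

fn main() {
    let ev = Event {
        content: None,
        message: Some(vec![
            Block { kind: "text".into(), text: Some("Hel".into()) },
            Block { kind: "text".into(), text: Some("lo".into()) },
        ]),
    };
    println!("{}", extract_text(&ev)); // Hello
}
```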
Add sanitize_gemini_turns() to enforce Gemini's strict turn-ordering
constraints after message history is trimmed. This merges consecutive
same-role turns, drops orphaned functionCall/functionResponse parts,
and removes empty turns. Also adds #[serde(default)] on GeminiContent.parts
and fixes two tests that were missing required ToolResult messages.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
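The "merges consecutive same-role turns" step of sanitize_gemini_turns() can be sketched like this; (role, text) pairs stand in for the real Gemini turn structures, so names here are assumptions.

```rust
// Sketch of merging consecutive same-role turns, as required by Gemini's
// strict alternating turn ordering. Real turns carry parts, not strings.
fn merge_same_role(turns: Vec<(String, String)>) -> Vec<(String, String)> {
    let mut out: Vec<(String, String)> = Vec::new();
    for (role, text) in turns {
        match out.last_mut() {
            // Same role as the previous turn: fold the text in.
            Some((last_role, last_text)) if *last_role == role => {
                last_text.push('\n');
                last_text.push_str(&text);
            }
            _ => out.push((role, text)),
        }
    }
    out
}

fn main() {
    let turns = vec![
        ("user".to_string(), "a".to_string()),
        ("user".to_string(), "b".to_string()),
        ("model".to_string(), "c".to_string()),
    ];
    let merged = merge_same_role(turns);
    println!("{}", merged.len()); // 2
}
```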
- test_sanitize_drops_orphaned_function_call
- test_sanitize_keeps_valid_function_call_response_pair
- test_sanitize_drops_orphaned_function_response
- test_sanitize_merges_consecutive_same_role
- test_sanitize_empty_input

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
- Add NVIDIA_API_KEY to .env.example
- Add 3 new ZeroClaw-recommended models: meta/llama-3.3-70b-instruct,
  nvidia/llama-3.3-nemotron-super-49b-v1.5,
  nvidia/llama-3.1-nemotron-ultra-253b-v1
- Add nemotron, nemotron-super, nemotron-ultra aliases
- Add NVIDIA NIM provider section to docs/providers.md (provider RightNow-AI#21)
- Add NVIDIA NIM models to Model Catalog table
- Add aliases to Aliases table
- Add NVIDIA NIM to Environment Variables Summary
- Update provider/model/alias counts

Closes RightNow-AI#787

Co-authored-by: ilteoood <[email protected]>
Agent-Logs-Url: https://github.com/ilteoood/openfang/sessions/7944f50c-bf8e-47ca-a936-cc6c562e36ca
lc-soft and others added 29 commits March 28, 2026 17:55
…sitives

When agents are loaded from persistent storage on daemon startup, their
last_active timestamp reflects when they were last active before the
previous shutdown. If the daemon was down for longer than the heartbeat
timeout (default 180 s), the first heartbeat tick immediately marks every
restored agent as unresponsive and triggers crash recovery — even though
all agents just started and haven't had a chance to run.

Fix: stamp last_active = Utc::now() alongside the state = Running reset
in the restore loop. This is consistent with how new agent spawns work
(they also set last_active to now) and gives each restored agent a clean
baseline from which the heartbeat can accurately track responsiveness.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
- Widen agent detail modal from 600px to 700px to better accommodate
  longer model names and the fallback chain editor
- Restructure the Fallbacks section in the Info tab: content div is now
  a column flex container (gap:6px) with margin-left:16px to create a
  clear visual column between the label and its content
- Prevent long provider/model badge strings from overflowing the right
  edge of the modal (word-break:break-all; white-space:normal on badge)
- Add flex-shrink:0 to the × delete button so it never gets squashed
  when a badge is long
- Wrap the "+ Add" button in a div so it stays left-aligned (column
  flex would otherwise stretch a bare button to full width)
- Replace margin-top with gap-based spacing on the fallback edit form

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
…terations

Two related issues with autonomous Hand agents:

1. heartbeat_interval_secs was hardcoded at 30s (the AutonomousConfig default)
   for all Hands, with no way to override it from HAND.toml. For agents that
   make long LLM calls, 30s causes false-positive recovery triggers during
   normal operation. Add heartbeat_interval_secs to HandAgentConfig so each
   Hand can declare an appropriate interval.

2. The researcher Hand shipped with max_iterations = 80 and a system prompt
   instructing exhaustive research (50+ sources). This combination was designed
   for cloud LLMs with 200K context windows. On any model with a 32K or smaller
   context window, 80 iterations × growing history guarantees context overflow
   before the task completes. Reduce to 25, which is sufficient for thorough
   research within a 32K budget.

researcher/HAND.toml changes:
- max_iterations: 80 → 25
- heartbeat_interval_secs: 120 (new field; 30s default was triggering false
  recovery during normal multi-minute LLM calls)

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
Add POST /api/cron/jobs/{id}/run endpoint that triggers a cron job
immediately without waiting for its next scheduled fire time. The job
executes asynchronously in the background and its status can be polled
via the existing /status endpoint.

Key changes:
- Extract per-job execution logic from the inline cron tick loop into
  a reusable `cron_run_job()` method on OpenFangKernel, called by both
  the background scheduler and the new API endpoint
- Add `reserve_run()` on CronScheduler to pre-advance next_run for
  overdue jobs before spawning manual runs, preventing duplicate
  execution from the scheduler tick (only advances when next_run <= now
  to avoid skipping imminent scheduled runs)
- Fix dashboard scheduler.js to call the correct cron API endpoint
  instead of the legacy /api/schedules/ path
- Document all cron/scheduler endpoints in api-reference.md

Partially addresses upstream issue RightNow-AI#634.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
The previous sequence — get_job (read lock), check enabled, reserve_run
(write lock) — had a TOCTOU window where another request could disable
or delete the job between the check and the reservation.

Replace with CronScheduler::try_claim_for_run() which holds a single
DashMap write lock for the existence check, enabled guard, and next_run
advancement. Returns a typed ClaimError (NotFound | Disabled) so the
route handler maps directly to HTTP status codes.
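The single-lock claim pattern can be sketched with a std Mutex in place of DashMap: existence check, enabled guard, and next_run advancement all happen inside one critical section, so no other request can interleave. Field names are illustrative.

```rust
// Sketch of try_claim_for_run(): one lock acquisition covers the check
// and the mutation, closing the TOCTOU window described above.
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Debug, PartialEq)]
enum ClaimError { NotFound, Disabled }

struct Job { enabled: bool, next_run: u64 }

fn try_claim_for_run(
    jobs: &Mutex<HashMap<String, Job>>,
    id: &str,
    now: u64,
) -> Result<(), ClaimError> {
    let mut map = jobs.lock().unwrap(); // single critical section
    let job = map.get_mut(id).ok_or(ClaimError::NotFound)?;
    if !job.enabled {
        return Err(ClaimError::Disabled);
    }
    if job.next_run <= now {
        job.next_run = now + 60; // only advance jobs that are overdue
    }
    Ok(())
}

fn main() {
    let jobs = Mutex::new(HashMap::from([
        ("a".to_string(), Job { enabled: true, next_run: 0 }),
        ("b".to_string(), Job { enabled: false, next_run: 0 }),
    ]));
    assert_eq!(try_claim_for_run(&jobs, "a", 100), Ok(()));
    assert_eq!(try_claim_for_run(&jobs, "b", 100), Err(ClaimError::Disabled));
    assert_eq!(try_claim_for_run(&jobs, "c", 100), Err(ClaimError::NotFound));
    println!("ok");
}
```

The typed error maps naturally onto HTTP 404 (NotFound) and 409/403 (Disabled) in the route handler.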

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
- GET /api/cron/jobs: show actual {jobs: [...], total} wrapper and
  document the ?agent_id query filter
- POST /api/cron/jobs: fix status code to 201 Created, show the actual
  {result: "<stringified-json>"} response shape
- GET /api/cron/jobs/{id}/status: show full JobMeta structure with
  nested job object, one_shot, last_status, consecutive_errors

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
- record_failure() now only recomputes next_run when the job is already
  overdue (next_run <= now), preserving the scheduled fire time when a
  manual run fails before the job's natural next_run
- Remove premature job.last_run update in scheduler.js — the job runs
  asynchronously so last_run should only reflect the server-side
  completion timestamp on the next data refresh

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Rebased on latest main (f1ca527) after codebase changes. This is a
fresh submission after PR RightNow-AI#22 was closed as stale.

## Why This Feature

Enables enterprise GCP deployments using existing service accounts
instead of requiring separate Gemini API keys. Many organizations
already have GCP infrastructure and prefer OAuth-based auth.

## What's New

- VertexAIDriver with full streaming support
- OAuth 2.0 token caching (50 min TTL) with auto-refresh via gcloud
- Auto-detection of project_id from service account JSON
- Security: tokens stored with Zeroizing<String>
- Provider aliases: vertex-ai, vertex, google-vertex
- Compatible with new ContentBlock::provider_metadata field
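The 50-minute token cache in the list above can be sketched with std::time alone. This is a minimal sketch under assumed names; the real driver wraps tokens in Zeroizing<String> and refreshes via gcloud.

```rust
// Sketch of a TTL token cache: reuse a cached token while it is younger
// than 50 minutes, otherwise call the refresh closure.
use std::time::{Duration, Instant};

const TTL: Duration = Duration::from_secs(50 * 60);

struct CachedToken { token: String, fetched_at: Instant }

fn get_or_refresh(cache: &mut Option<CachedToken>, fetch: impl Fn() -> String) -> String {
    match cache {
        Some(c) if c.fetched_at.elapsed() < TTL => c.token.clone(),
        _ => {
            // e.g. shell out to `gcloud auth print-access-token` here
            let token = fetch();
            *cache = Some(CachedToken { token: token.clone(), fetched_at: Instant::now() });
            token
        }
    }
}

fn main() {
    let mut cache = None;
    let t1 = get_or_refresh(&mut cache, || "tok-1".to_string());
    let t2 = get_or_refresh(&mut cache, || "tok-2".to_string()); // within TTL: cached
    println!("{} {}", t1, t2); // tok-1 tok-1
}
```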

## Testing

- 6 unit tests passing
- Clippy clean (no warnings)
- End-to-end tested with real GCP service account + gemini-2.0-flash
- Both streaming and non-streaming paths verified

## Usage

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/sa.json
# Set provider=vertex-ai, model=gemini-2.0-flash in config.toml
Route SemanticStore remember/recall operations to the memory-api gateway
(PostgreSQL + pgvector + Jina AI embeddings) when backend=http is configured.

- Add backend, http_url, http_token_env fields to MemoryConfig
- Create http_client module with MemoryApiClient (reqwest::blocking)
- Add HTTP dispatch to SemanticStore with graceful SQLite fallback
- Wire MemoryConfig through MemorySubstrate::open() and kernel boot
- Add reqwest as optional dependency behind http-memory feature flag

Sessions, KV store, and knowledge graph remain local SQLite.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
Add a Python-based code review agent powered by LangChain that
integrates with OpenFang via the A2A (Agent-to-Agent) protocol.

- agent.py: Core review logic with structured Chinese SYSTEM_PROMPT
  covering 6 dimensions (correctness, security, performance,
  maintainability, testing, style) and 4 severity levels
- server.py: FastAPI server exposing A2A-compatible endpoints
  (/.well-known/agent.json and /a2a JSON-RPC)
- workflow.json: OpenFang workflow definition for the review pipeline
- config.example.toml: Example A2A config for ~/.openfang/config.toml
- Supports OpenAI, DeepSeek, and Ollama backends

Made-with: Cursor
Add WebSocket long-connection receive mode for the Feishu/Lark adapter
as an alternative to webhook callbacks. WebSocket mode is enabled by
default, requiring no public IP or domain.

- FeishuConnectionMode enum (Webhook/WebSocket) with mode dispatch
- Protobuf binary frame parsing (prost) based on Feishu pbbp2 protocol
- Auto-reconnect, ping/pong heartbeat, ACK, multi-part payload combine
- handle_data_frame reuses parse_event() pipeline (dedup, group filter)
- FeishuMode config enum with bridge-layer adapter creation per mode
Replace the custom JSON-RPC + stdio/SSE transport layer with the rmcp
SDK (crate 'rmcp').  This gives us spec-compliant Streamable-HTTP
transport, automatic Mcp-Session-Id tracking, SSE stream parsing, and
content-type negotiation out of the box while deleting ~300 lines of
hand-rolled plumbing.

Key changes:
- Add rmcp dependency with transport feature
- Replace McpTransportHandle enum with rmcp RunningService
- Replace manual JSON-RPC send_request/send_notification with rmcp client calls
- Add custom HTTP headers support for authenticated remote MCP servers
- Simplify tool discovery and invocation through rmcp's typed API
- Add missing budget_config field to AppState in all 3 test files
- Fix redundant closures and unwrap_or_else in openfang-memory semantic.rs
- Fix needless_borrow in openfang-api routes.rs (toml::from_str)
- Update parse_researcher_hand test to match new max_iterations = 25
- Update tar to 0.4.45 to fix RUSTSEC-2026-0067 and RUSTSEC-2026-0068
- Apply cargo fmt fixes in ws.rs and feishu.rs
Add generic MQTT 3.1.1/5.0 support for IoT and messaging integration:

- MqttConfig with broker_url, TLS, QoS, auth via env vars
- MqttAdapter implementing ChannelAdapter trait
- Support for text and JSON {"text": "..."} payloads
- Command messages via /command args syntax
- Auto-reconnect with exponential backoff
- Message chunking for long responses

Configuration example:
  [channels.mqtt]
  broker_url = "tcp://broker.hivemq.com:1883"
  subscribe_topic = "openfang/inbox"
  publish_topic = "openfang/outbox"
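The "message chunking for long responses" bullet above can be sketched as a byte-budget split that never cuts through a UTF-8 character. The size limit here is illustrative, not the adapter's actual setting.

```rust
// Sketch: split a long reply into chunks of at most max_bytes, keeping
// char boundaries intact so multi-byte UTF-8 is never split.
fn chunk_message(text: &str, max_bytes: usize) -> Vec<String> {
    let mut chunks = Vec::new();
    let mut current = String::new();
    for ch in text.chars() {
        if current.len() + ch.len_utf8() > max_bytes {
            chunks.push(std::mem::take(&mut current));
        }
        current.push(ch);
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

fn main() {
    let chunks = chunk_message(&"x".repeat(10), 4);
    println!("{}", chunks.len()); // 3 (4 + 4 + 2 bytes)
}
```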
Bumps [toml](https://github.com/toml-rs/toml) from 0.8.2 to 0.9.12+spec-1.1.0.
- [Commits](toml-rs/toml@toml-v0.8.2...toml-v0.9.12)

---
updated-dependencies:
- dependency-name: toml
  dependency-version: 0.9.12+spec-1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
Bumps [governor](https://github.com/boinkor-net/governor) from 0.8.1 to 0.10.4.
- [Release notes](https://github.com/boinkor-net/governor/releases)
- [Changelog](https://github.com/boinkor-net/governor/blob/master/release.toml)
- [Commits](boinkor-net/governor@v0.8.1...v0.10.4)

---
updated-dependencies:
- dependency-name: governor
  dependency-version: 0.10.4
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
, RightNow-AI#867, RightNow-AI#824, RightNow-AI#833, RightNow-AI#766)

- RightNow-AI#875: Install script uses robust sed parsing instead of fragile cut for version detection
- RightNow-AI#872: Session endpoint returns full tool results (removed 2000-char truncation)
- RightNow-AI#867: agent_send/agent_spawn get 600s timeout (was 120s), regular tools keep 120s
- RightNow-AI#824: Doctor workspace skills count uses direct return value from load_workspace_skills
- RightNow-AI#833: Model switching respects provider via new find_model_for_provider() lookup
- RightNow-AI#766: Closed as resolved by combined heartbeat fixes (v0.5.3 + merged PRs)

All tests passing. Live tested with daemon.
, RightNow-AI#752, RightNow-AI#772, RightNow-AI#661)

- RightNow-AI#771: Fix Qwen tool_calls orphaning after context overflow. Added safe drain boundaries
  in compactor and context_overflow to avoid splitting tool pairs. Added missing
  validate_and_repair call in streaming loop.
- RightNow-AI#811: LINE webhook signature now uses raw request bytes (not re-serialized JSON) for
  HMAC. Channel secret is trimmed. Debug logging added for mismatches.
- RightNow-AI#752: Local skill install now hot-reloads kernel via POST /api/skills/reload. TUI skill
  list fixed to parse wrapper object. ClawHub install also triggers reload.
- RightNow-AI#772: exec_policy mode=full now bypasses approval gate for shell_exec tools. Non-shell
  tools like file_delete still respect approval settings.
- RightNow-AI#661: Closed as resolved by RightNow-AI#770 splice() reactivity fix and RightNow-AI#836 tool ID fix.

All tests passing. 10 files changed, 436 insertions.
@xinuxZ xinuxZ closed this Mar 28, 2026