31 changes: 23 additions & 8 deletions .github/workflows/claude-nl-suite.yml
@@ -318,8 +318,8 @@ jobs:
      # (removed) Revert helper and baseline snapshot are no longer used


      # ---------- Run suite ----------
      - name: Run Claude NL suite (single pass)
      # ---------- Run suite in two passes ----------
      - name: Run Claude NL pass
        uses: anthropics/claude-code-base-action@beta
        if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true'
        continue-on-error: true
@@ -329,17 +329,32 @@
          mcp_config: .claude/mcp.json
          allowed_tools: >-
            Write,
            mcp__unity__manage_editor,
            mcp__unity__list_resources,
            mcp__unity__read_resource,
            mcp__unity__find_in_file,
            mcp__unity__validate_script,
            mcp__unity__get_sha,
            mcp__unity__read_console
          disallowed_tools: TodoWrite,Task,Bash
          model: claude-3-7-sonnet-latest
          timeout_minutes: "30"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

      - name: Run Claude T pass
        uses: anthropics/claude-code-base-action@beta
        if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true'
        continue-on-error: true
        with:
          use_node_cache: false
          prompt_file: .claude/prompts/nl-unity-suite-full-additive.md
          mcp_config: .claude/mcp.json
          allowed_tools: >-
            Write,
            mcp__unity__find_in_file,
            mcp__unity__apply_text_edits,
            mcp__unity__script_apply_edits,
            mcp__unity__validate_script,
            mcp__unity__find_in_file,
            mcp__unity__read_console,
            mcp__unity__get_sha
          disallowed_tools: TodoWrite,Task,Bash
          model: claude-3-7-sonnet-latest
          model: claude-3-7-haiku-latest
          timeout_minutes: "30"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

30 changes: 26 additions & 4 deletions README-DEV.md
@@ -46,6 +46,27 @@ Restores original files from backup.
2. Allows you to select which backup to restore
3. Restores both Unity Bridge and Python Server files

### `prune_tool_results.py`
Compacts large `tool_result` blobs in conversation JSON into concise one-line summaries.

**Usage:**
```bash
python3 prune_tool_results.py < reports/claude-execution-output.json > reports/claude-execution-output.pruned.json
```

The script reads a conversation from `stdin` and writes the pruned version to `stdout`, making logs much easier to inspect or archive.
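For example, a bulky tool result collapses to a one-line summary. A minimal sketch of the round trip, run from the repo root (the sample message and the `toolu_01` id below are illustrative, not from a real run):

```python
import json
import subprocess

# A tiny conversation with one oversized tool_result payload.
convo = {"messages": [{
    "role": "user",
    "content": [{
        "type": "tool_result",
        "tool_use_id": "toolu_01",
        "content": [{"type": "text", "text": json.dumps(
            {"success": True, "data": {"sha256": "1a2b3c4d" * 8, "lengthBytes": 2048}}
        )}],
    }],
}]}

pruned = json.loads(subprocess.run(
    ["python3", "prune_tool_results.py"],
    input=json.dumps(convo), capture_output=True, text=True, check=True,
).stdout)
print(pruned["messages"][0]["content"][0]["content"][0]["text"])
# -> sha=1a2b3c4d… len=2048
```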

### Lean Tool Responses
To keep live conversations small, server tools now emit minimal payloads by default:

* `find_in_file` – first match positions only (`startLine/Col`, `endLine/Col`).
* `read_console` – error entries trimmed to `{level, message}` (pass `count` to limit).
* `validate_script` – diagnostics summarized as `{warnings, errors}` counts.
* `get_sha` – `{sha256, lengthBytes}` only.
* `read_resource` – returns only `metadata.sha256` and byte length unless `include_text` or window arguments are provided.

These defaults dramatically cut token usage while preserving the essential information; typical payload shapes are sketched below.
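For reference, the lean shapes look roughly like this (values are illustrative; field names follow the bullets above):

```python
# Illustrative payloads only; actual values depend on the project and query.
lean_find_in_file = {"success": True, "data": {"matches": [
    {"startLine": 12, "startCol": 5, "endLine": 12, "endCol": 17},
]}}
lean_read_console = {"success": True, "data": {"lines": [
    {"level": "error", "message": "NullReferenceException: Object reference not set to an instance of an object"},
]}}
lean_validate_script = {"success": True, "data": {"warnings": 2, "errors": 0}}
lean_get_sha = {"success": True, "data": {"sha256": "9f86d081…", "lengthBytes": 1342}}
lean_read_resource = {"success": True, "data": {"metadata": {"sha256": "9f86d081…", "lengthBytes": 1342}}}
```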

## Finding Unity Package Cache Path

Unity stores Git packages under a version-or-hash folder. Expect something like:
@@ -70,10 +91,11 @@ Note: In recent builds, the Python server sources are also bundled inside the pa

We provide a CI job to run a Natural Language Editing mini-suite against the Unity test project. It spins up a headless Unity container and connects via the MCP bridge.

- Trigger: Workflow dispatch (`Claude NL suite (Unity live)`).
- Image: `UNITY_IMAGE` (UnityCI) pulled by tag; the job resolves a digest at runtime. Logs are sanitized.
- Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`.
- Publishing: JUnit is normalized to `reports/junit-for-actions.xml` and published; artifacts upload all files under `reports/`.
- Trigger: Workflow dispatch (`Claude NL suite (Unity live)`).
- Image: `UNITY_IMAGE` (UnityCI) pulled by tag; the job resolves a digest at runtime. Logs are sanitized.
- Execution: runs in two passes (NL then T) so each session stays lean.
- Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`.
- Publishing: JUnit is normalized to `reports/junit-for-actions.xml` and published; artifacts upload all files under `reports/`.

### Test target script
- The repo includes a long, standalone C# script used to exercise larger edits and windows:
14 changes: 14 additions & 0 deletions UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py
@@ -416,6 +416,11 @@ def validate_script(
"level": level,
}
resp = send_command_with_retry("manage_script", params)
if isinstance(resp, dict) and resp.get("success"):
diags = resp.get("data", {}).get("diagnostics", []) or []
warnings = sum(d.get("severity", "").lower() == "warning" for d in diags)
errors = sum(d.get("severity", "").lower() in ("error", "fatal") for d in diags)
return {"success": True, "data": {"warnings": warnings, "errors": errors}}
return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}

@mcp.tool(description=(
@@ -588,6 +593,15 @@ def get_sha(ctx: Context, uri: str) -> Dict[str, Any]:
        name, directory = _split_uri(uri)
        params = {"action": "get_sha", "name": name, "path": directory}
        resp = send_command_with_retry("manage_script", params)
        if isinstance(resp, dict) and resp.get("success"):
            data = resp.get("data", {})
            return {
                "success": True,
                "data": {
                    "sha256": data.get("sha256"),
                    "lengthBytes": data.get("lengthBytes"),
                },
            }
        return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}
    except Exception as e:
        return {"success": False, "message": f"get_sha error: {e}"}
15 changes: 11 additions & 4 deletions UnityMcpBridge/UnityMcpServer~/src/tools/read_console.py
@@ -42,9 +42,9 @@ def read_console(

    # Set defaults if values are None
    action = action if action is not None else 'get'
    types = types if types is not None else ['error', 'warning', 'log']
    format = format if format is not None else 'detailed'
    include_stacktrace = include_stacktrace if include_stacktrace is not None else True
    types = types if types is not None else ['error']
    format = format if format is not None else 'json'
    include_stacktrace = include_stacktrace if include_stacktrace is not None else False

    # Normalize action if it's a string
    if isinstance(action, str):
@@ -70,4 +70,11 @@

    # Use centralized retry helper
    resp = send_command_with_retry("read_console", params_dict)
    return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}
    if isinstance(resp, dict) and resp.get("success"):
        lines = resp.get("data", {}).get("lines", [])
        trimmed = [
            {"level": l.get("level") or l.get("type"), "message": l.get("message") or l.get("text")}
            for l in lines
        ]
        return {"success": True, "data": {"lines": trimmed}}
    return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}
75 changes: 52 additions & 23 deletions UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py
@@ -183,10 +183,12 @@ async def read_resource(
    tail_lines: int | None = None,
    project_root: str | None = None,
    request: str | None = None,
    include_text: bool = False,
) -> Dict[str, Any]:
    """
    Reads a resource by unity://path/... URI with optional slicing.
    One of line window (start_line/line_count) or head_bytes can be used to limit size.
    By default only the SHA-256 hash and byte length are returned; set
    ``include_text`` or provide window arguments to receive text.
    """
    try:
        # Serve the canonical spec directly when requested (allow bare or with scheme)
@@ -291,25 +293,43 @@
            start_line = max(1, hit_line - half)
            line_count = window

        # Mutually exclusive windowing options precedence:
        # 1) head_bytes, 2) tail_lines, 3) start_line+line_count, else full text
        if head_bytes and head_bytes > 0:
            raw = p.read_bytes()[: head_bytes]
            text = raw.decode("utf-8", errors="replace")
        else:
            text = p.read_text(encoding="utf-8")
            if tail_lines is not None and tail_lines > 0:
                lines = text.splitlines()
                n = max(0, tail_lines)
                text = "\n".join(lines[-n:])
            elif start_line is not None and line_count is not None and line_count >= 0:
                lines = text.splitlines()
                s = max(0, start_line - 1)
                e = min(len(lines), s + line_count)
                text = "\n".join(lines[s:e])
        raw = p.read_bytes()
        sha = hashlib.sha256(raw).hexdigest()
        length = len(raw)

        sha = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return {"success": True, "data": {"text": text, "metadata": {"sha256": sha}}}
        want_text = (
            bool(include_text)
            or (head_bytes is not None and head_bytes >= 0)
            or (tail_lines is not None and tail_lines > 0)
            or (start_line is not None and line_count is not None)
        )
        if want_text:
            text: str
            if head_bytes is not None and head_bytes >= 0:
                text = raw[: head_bytes].decode("utf-8", errors="replace")
            else:
                text = raw.decode("utf-8", errors="replace")
            if tail_lines is not None and tail_lines > 0:
                lines = text.splitlines()
                n = max(0, tail_lines)
                text = "\n".join(lines[-n:])
            elif (
                start_line is not None
                and line_count is not None
                and line_count >= 0
            ):
                lines = text.splitlines()
                s = max(0, start_line - 1)
                e = min(len(lines), s + line_count)
                text = "\n".join(lines[s:e])
            return {
                "success": True,
                "data": {"text": text, "metadata": {"sha256": sha}},
            }
        return {
            "success": True,
            "data": {"metadata": {"sha256": sha, "lengthBytes": length}},
        }
    except Exception as e:
        return {"success": False, "error": str(e)}

@@ -320,10 +340,10 @@ async def find_in_file(
    ctx: Context | None = None,
    ignore_case: bool | None = True,
    project_root: str | None = None,
    max_results: int | None = 200,
    max_results: int | None = 1,
) -> Dict[str, Any]:
    """
    Searches a file with a regex pattern and returns line numbers and excerpts.
    Searches a file with a regex pattern and returns match positions only.
    - uri: unity://path/Assets/... or file path form supported by read_resource
    - pattern: regular expression (Python re)
    - ignore_case: case-insensitive by default
@@ -345,8 +365,17 @@
    results = []
    lines = text.splitlines()
    for i, line in enumerate(lines, start=1):
        if rx.search(line):
            results.append({"line": i, "text": line})
        m = rx.search(line)
        if m:
            start_col, end_col = m.span()
            results.append(
                {
                    "startLine": i,
                    "startCol": start_col + 1,
                    "endLine": i,
                    "endCol": end_col + 1,
                }
            )
        if max_results and len(results) >= max_results:
            break

59 changes: 59 additions & 0 deletions prune_tool_results.py
@@ -0,0 +1,59 @@
#!/usr/bin/env python3
import sys, json, re

def summarize(txt):
    try:
        obj = json.loads(txt)
    except Exception:
        return f"tool_result: {len(txt)} bytes"
    data = obj.get("data", {}) or {}
    msg = obj.get("message") or obj.get("status") or ""
    # Common tool shapes
    if "sha256" in str(data):
Review comment:

style: Using str(data) for SHA256 detection could match unrelated strings containing 'sha256'. Consider checking data.get('sha256') directly.

Suggested change: replace `if "sha256" in str(data):` with `if data.get("sha256"):`.

        sha = (data.get("sha256") or "")[:8] + "…" if data.get("sha256") else ""
        ln = data.get("lengthBytes") or data.get("length") or ""
        return f"sha={sha} len={ln}".strip()
    if "diagnostics" in data:
        diags = data["diagnostics"] or []
        w = sum(d.get("severity", "").lower() == "warning" for d in diags)
        e = sum(d.get("severity", "").lower() in ("error", "fatal") for d in diags)
        ok = "OK" if not e else "FAIL"
        return f"validate: {ok} (warnings={w}, errors={e})"
    if "matches" in data:
        m = data["matches"] or []
        if m:
            first = m[0]
            return f"find_in_file: {len(m)} match(es) first@{first.get('line',0)}:{first.get('col',0)}"
        return "find_in_file: 0 matches"
    if "lines" in data:  # console
        lines = data["lines"] or []
        lvls = {"info": 0, "warning": 0, "error": 0}
        for L in lines:
            lvls[L.get("level", "").lower()] = lvls.get(L.get("level", "").lower(), 0) + 1
Review comment:

style: Dictionary lookup pattern lvls.get(L.get('level','').lower(),0) is inefficient. Consider using defaultdict(int) or .setdefault().
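For illustration, the tally could be done with `collections.Counter`, which covers the defaultdict-style counting the comment suggests (a sketch of the alternative with a hypothetical `tally_levels` helper, not the code in this PR):

```python
from collections import Counter

def tally_levels(lines):
    # Count console entries per level; levels absent from the input read back as 0.
    lvls = Counter((entry.get("level") or "").lower() for entry in lines)
    return f"info={lvls['info']},warn={lvls['warning']},err={lvls['error']}"
```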

return f"console: {len(lines)} lines (info={lvls.get('info',0)},warn={lvls.get('warning',0)},err={lvls.get('error',0)})"
# Fallback: short status
return (msg or "tool_result")[:80]

def prune_message(msg):
if "content" not in msg: return msg
newc=[]
for c in msg["content"]:
if c.get("type")=="tool_result" and c.get("content"):
out=[]
for chunk in c["content"]:
if chunk.get("type")=="text":
out.append({"type":"text","text":summarize(chunk.get("text","" ))})
newc.append({"type":"tool_result","tool_use_id":c.get("tool_use_id"),"content":out})
else:
newc.append(c)
msg["content"]=newc
return msg

def main():
convo=json.load(sys.stdin)
if isinstance(convo, dict) and "messages" in convo:
convo["messages"]=[prune_message(m) for m in convo["messages"]]
elif isinstance(convo, list):
convo=[prune_message(m) for m in convo]
json.dump(convo, sys.stdout, ensure_ascii=False)
main()
45 changes: 45 additions & 0 deletions tests/test_find_in_file_minimal.py
@@ -0,0 +1,45 @@
import sys
import pathlib
import importlib.util
import types
import asyncio
import pytest

ROOT = pathlib.Path(__file__).resolve().parents[1]
SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src"
sys.path.insert(0, str(SRC))

from tools.resource_tools import register_resource_tools  # type: ignore

class DummyMCP:
    def __init__(self):
        self.tools = {}

    def tool(self, *args, **kwargs):
        def deco(fn):
            self.tools[fn.__name__] = fn
            return fn
        return deco

@pytest.fixture()
def resource_tools():
    mcp = DummyMCP()
    register_resource_tools(mcp)
    return mcp.tools

def test_find_in_file_returns_positions(resource_tools, tmp_path):
    proj = tmp_path
    assets = proj / "Assets"
    assets.mkdir()
    f = assets / "A.txt"
    f.write_text("hello world", encoding="utf-8")
    find_in_file = resource_tools["find_in_file"]
    loop = asyncio.new_event_loop()
    try:
        resp = loop.run_until_complete(
            find_in_file(uri="unity://path/Assets/A.txt", pattern="world", ctx=None, project_root=str(proj))
        )
    finally:
        loop.close()
Review comment on lines +37 to +43:

style: Consider using asyncio.run() instead of manually managing event loop lifecycle for cleaner code.
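For illustration, the same call with `asyncio.run()` (a sketch of the suggested alternative, reusing the test's `find_in_file` and `proj` locals; not the test as written):

```python
import asyncio

# asyncio.run() creates, runs, and closes the event loop in one call.
resp = asyncio.run(
    find_in_file(
        uri="unity://path/Assets/A.txt",
        pattern="world",
        ctx=None,
        project_root=str(proj),
    )
)
```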

    assert resp["success"] is True
    assert resp["data"]["matches"] == [{"startLine": 1, "startCol": 7, "endLine": 1, "endCol": 12}]
1 change: 1 addition & 0 deletions tests/test_get_sha.py
@@ -71,4 +71,5 @@ def fake_send(cmd, params):
    assert captured["params"]["name"] == "A"
    assert captured["params"]["path"].endswith("Assets/Scripts")
    assert resp["success"] is True
    assert resp["data"] == {"sha256": "abc", "lengthBytes": 1}
