feat(orchestrator): generic artifact system — output types, manifests, and collection service#798

Closed
frostebite wants to merge 6 commits into main from feature/generic-artifact-system

Conversation

@frostebite frostebite commented Mar 5, 2026

Summary

Fully implements the generic artifact system for Orchestrator — typed build outputs with structured manifests, upload handlers, and extensible type registration. Moves beyond treating build output as a single opaque blob: artifacts are typed, manifested, and uploaded through configurable pipelines.

Implementation

  • OutputTypeRegistry: Registry of 8 built-in output types (build, test-results, server-build, data-export, images, logs, metrics, coverage), with custom type registration via the comma-separated artifactCustomTypes input.
  • OutputService: Collects outputs from workspace directories and generates a JSON manifest with per-file SHA-256 hashes, file sizes, and metadata. Scans directories recursively and calculates sizes.
  • ArtifactUploadHandler: Configurable upload pipeline supporting 4 targets: github-artifacts (GitHub Actions Artifacts API), storage (rclone-backed remote storage), local (filesystem copy), and none (skip upload). Supports compression (none, gzip, lz4) and retention policies.
  • OutputManifest: TypeScript interfaces for the structured manifest: build GUID, timestamp, and typed entries with path/size/hash/metadata.
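The manifest shape described above can be sketched as TypeScript interfaces. This is an illustration inferred from the PR description (build GUID, timestamp, typed entries with path/size/hash/metadata); the actual declarations in output-manifest.ts may differ.

```typescript
// Hypothetical shape inferred from the PR description; actual field
// names in output-manifest.ts may differ.
interface OutputEntry {
  type: string; // output type name, e.g. "build"
  path: string; // path relative to the project root
  size?: number; // total size in bytes
  hash?: string; // SHA-256 of the file contents
  metadata?: Record<string, string>; // optional per-entry metadata
}

interface OutputManifest {
  buildGuid: string; // unique id for this build
  timestamp: string; // ISO-8601 generation time
  entries: OutputEntry[];
}

// Minimal construction example.
const manifest: OutputManifest = {
  buildGuid: 'abc-123',
  timestamp: new Date(0).toISOString(),
  entries: [{ type: 'build', path: 'build/StandaloneLinux64', size: 1024 }],
};
console.log(manifest.entries.length); // 1
```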

New action.yml inputs (7) and outputs (1)

Inputs:

  • artifactOutputTypes: Comma-separated list of output types to collect
  • artifactUploadTarget: Where to upload: github-artifacts, storage, local, none
  • artifactUploadPath: Destination path for upload (storage URI or local path)
  • artifactCompression: Compression: none, gzip, lz4
  • artifactRetentionDays: Retention period for uploaded artifacts, in days
  • artifactCustomTypes: Custom output type definitions (comma-separated name:path pairs)
  • artifactManifestPath: Path to write the output manifest JSON
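As an illustration of how the comma-separated name:path pairs of artifactCustomTypes might be parsed, here is a minimal sketch; the real parsing logic in OutputTypeRegistry is not shown in this PR excerpt, so the names and error handling here are assumptions.

```typescript
// Illustrative parser for "name:path" pairs; not the actual
// OutputTypeRegistry implementation.
interface CustomTypeDefinition {
  name: string;
  defaultPath: string;
}

function parseCustomTypes(input: string): CustomTypeDefinition[] {
  return input
    .split(',')
    .map((pair) => pair.trim())
    .filter(Boolean)
    .map((pair) => {
      // Split on the first colon only, so paths may contain colons.
      const separator = pair.indexOf(':');
      if (separator < 1) {
        throw new Error(`Invalid custom type definition: '${pair}' (expected name:path)`);
      }
      return {
        name: pair.slice(0, separator).trim(),
        defaultPath: pair.slice(separator + 1).trim(),
      };
    });
}

const parsed = parseCustomTypes('replays:Recordings/{platform},profiling:Profiler');
console.log(parsed[0]); // { name: 'replays', defaultPath: 'Recordings/{platform}' }
```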

Outputs:

  • artifactManifestPath: Path to the generated artifact manifest JSON

Usage

- uses: game-ci/unity-builder@v4
  with:
    targetPlatform: StandaloneLinux64
    artifactOutputTypes: build,test-results,logs,coverage
    artifactUploadTarget: github-artifacts
    artifactCompression: gzip
    artifactRetentionDays: 30

Test coverage

36 unit tests (plus 2 existing output tests) covering:

  • Output type registry (built-in types, custom registration, validation)
  • Workspace scanning and file collection
  • Manifest generation with hashing
  • Upload handler for all 4 targets
  • Compression and retention configuration
  • Edge cases (empty workspace, missing directories, invalid types)
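The per-file SHA-256 hashing mentioned under manifest generation can be sketched with Node's built-in crypto module; whether OutputService buffers or streams file contents is an assumption here.

```typescript
// Minimal per-file SHA-256 sketch; the real OutputService implementation
// (streaming vs. buffered) may differ.
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

function hashFile(filePath: string): string {
  const contents = fs.readFileSync(filePath);
  return crypto.createHash('sha256').update(contents).digest('hex');
}

// Demo against a throwaway file in the OS temp directory.
const demoFile = path.join(os.tmpdir(), 'artifact-hash-demo.txt');
fs.writeFileSync(demoFile, 'hello');
console.log(hashFile(demoFile));
// sha256("hello") = 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```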

Related

Documentation

Test plan

  • All 36+ new tests pass
  • All existing tests pass — no regressions
  • tsc --noEmit — no type errors
  • CI builds on push

Summary by CodeRabbit

  • New Features

    • Artifact collection & upload pipeline (GitHub artifacts, remote storage, local) with parsing/validation and size-aware uploads.
    • Configurable artifact options: output types, compression, retention, custom type registration, and generated artifact manifest (manifest path exposed).
  • Tests

    • Extensive test coverage for artifact collection, manifest generation, upload flows, parsing/validation, and edge cases.
  • Chores

    • CI: allow continued matrix job execution on macOS builds (continue-on-error).

Tracking:

…, and collection service

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

coderabbitai bot commented Mar 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a generic artifact system: output type registry, manifest types, output collection service, artifact upload handler, tests, action inputs/outputs, and integration into runMain to generate and optionally upload an artifact manifest.

Changes

  • Barrel & manifest types (src/model/orchestrator/services/output/index.ts, src/model/orchestrator/services/output/output-manifest.ts): new barrel export and interfaces OutputManifest and OutputEntry.
  • Output type registry (src/model/orchestrator/services/output/output-type-registry.ts): new OutputTypeDefinition and OutputTypeRegistry with built-ins, custom registration, parsing, lookup, and reset.
  • Output collection service (src/model/orchestrator/services/output/output-service.ts): new OutputService.collectOutputs(...) that resolves types, substitutes {platform}, stats files/dirs, computes sizes, and builds and optionally writes the manifest.
  • Artifact upload implementation (src/model/orchestrator/services/output/artifact-upload-handler.ts): new ArtifactUploadHandler with config parsing, file collection, GitHub-artifacts / storage / local upload flows, chunking, recursive copy, and typed upload results.
  • Tests (src/model/orchestrator/services/output/artifact-service.test.ts): extensive tests covering the registry, OutputService, ArtifactUploadHandler, and many FS/exec scenarios (mocks).
  • Action inputs, model wiring & integration (action.yml, src/index.ts, src/model/build-parameters.ts, src/model/input.ts): adds artifact-related action inputs/outputs, Input getters, BuildParameters fields, and runMain integration to collect/upload artifacts and set artifactManifestPath.
  • CI workflow tweak (.github/workflows/build-tests-mac.yml): added continue-on-error: true to the macOS matrix job.

Sequence Diagram

sequenceDiagram
    autonumber
    participant RunMain as RunMain
    participant TypeReg as OutputTypeRegistry
    participant OutSvc as OutputService
    participant FS as FileSystem
    participant Upload as ArtifactUploadHandler
    participant Target as UploadTarget
    participant Log as OrchestratorLogger

    RunMain->>TypeReg: register custom types (artifactCustomTypes)?
    RunMain->>OutSvc: collectOutputs(projectPath, buildGuid, artifactOutputTypes, manifestPath?)
    OutSvc->>TypeReg: parseOutputTypes(outputTypesInput)
    TypeReg-->>OutSvc: OutputTypeDefinition[]
    OutSvc->>FS: resolve paths (replace {platform}), stat, list files
    FS-->>OutSvc: file/dir info and sizes
    OutSvc->>OutSvc: build OutputManifest
    OutSvc->>FS: write manifest.json (if manifestPath)
    OutSvc-->>RunMain: OutputManifest
    RunMain->>Upload: parseConfig(...) & uploadArtifacts(manifest, config, projectPath)
    Upload->>FS: collect files per entry
    Upload->>Target: perform upload (github-artifacts / storage / local)
    Target-->>Upload: success / error
    Upload-->>RunMain: UploadResult
    RunMain->>Log: set action output `artifactManifestPath`, log warnings/errors

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • webbertakken
  • GabLeRoux
  • davidmfinol

Poem

"I hopped through folders, counted each file and size,
Built a little map beneath the CI skies;
I bundled and labeled every artifact home,
Sent bytes on their journey — no more to roam. 🐇📦"

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Out of Scope Changes check (⚠️ Warning): The modification to build-tests-mac.yml (continue-on-error flag) is a minor workflow adjustment that does not align with the PR's artifact system scope. Resolution: remove the continue-on-error: true addition from .github/workflows/build-tests-mac.yml, or move it to a separate PR.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 66.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (3 passed)

  • Title check: The title clearly and concisely summarizes the main change: introducing a generic artifact system for Orchestrator with output types, manifests, and collection services.
  • Description check: The PR description is comprehensive and well-structured, covering changes, new inputs/outputs, implementation details, test coverage, and related issues, though it omits the Checklist section from the template.
  • Linked Issues check: The PR implements all primary objectives from #792: OutputTypeRegistry with 8 built-in types and custom registration, OutputService for manifest generation with hashing, ArtifactUploadHandler with 4 upload targets and compression support, and structured OutputManifest interfaces.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/model/orchestrator/services/output/output-type-registry.ts (1)

117-129: Deduplicate parsed type names before resolution.

Repeated names in outputTypesInput currently cause duplicate collection and duplicate manifest entries. A small dedupe step keeps behavior predictable.

♻️ Suggested change
-    const names = outputTypesInput.split(',').map((s) => s.trim()).filter(Boolean);
+    const names = [...new Set(outputTypesInput.split(',').map((s) => s.trim()).filter(Boolean))];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-type-registry.ts` around lines
117 - 129, The parsed names array (names) may contain duplicates causing
duplicate manifest entries; deduplicate it before resolving by converting names
to a unique list (e.g., via a Set or unique filter) and then iterate that
deduped list when calling OutputTypeRegistry.getType(name); keep the existing
behavior of pushing typeDef to types and logging via
OrchestratorLogger.logWarning for unknown names (symbols to edit: the names
variable, the loop that calls OutputTypeRegistry.getType, and the
OrchestratorLogger.logWarning call).
src/model/orchestrator/services/output/index.ts (1)

1-2: Use type-only exports for interfaces.

OutputManifest, OutputEntry, and OutputTypeDefinition are interfaces used only for type annotations. Exporting them as value exports is unnecessarily verbose. OutputTypeRegistry is a class with static methods and should remain a value export. Using type-only exports for interfaces is idiomatic TypeScript and improves tree-shaking.

Suggested change
-export { OutputManifest, OutputEntry } from './output-manifest';
-export { OutputTypeRegistry, OutputTypeDefinition } from './output-type-registry';
+export type { OutputManifest, OutputEntry } from './output-manifest';
+export { OutputTypeRegistry } from './output-type-registry';
+export type { OutputTypeDefinition } from './output-type-registry';
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/index.ts` around lines 1 - 2, Change
the interface exports to type-only exports: export type { OutputManifest,
OutputEntry } from './output-manifest' and export type { OutputTypeDefinition }
from './output-type-registry', while keeping OutputTypeRegistry exported as a
value (export { OutputTypeRegistry } from './output-type-registry'); this
ensures interfaces (OutputManifest, OutputEntry, OutputTypeDefinition) are
exported only for types and OutputTypeRegistry remains a runtime export.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 8ae63d47-59d7-4c60-af08-0b5678717b78

📥 Commits

Reviewing files that changed from the base of the PR and between 9d47543 and b3e1639.

📒 Files selected for processing (4)
  • src/model/orchestrator/services/output/index.ts
  • src/model/orchestrator/services/output/output-manifest.ts
  • src/model/orchestrator/services/output/output-service.ts
  • src/model/orchestrator/services/output/output-type-registry.ts

Comment on lines +36 to +40
if (types.length === 0) {
  OrchestratorLogger.log('[Output] No output types declared, skipping collection');

  return manifest;
}

⚠️ Potential issue | 🟠 Major

Do not return before writing an explicitly requested manifest.

When no types are declared, the function returns early and skips manifest persistence even if manifestPath is provided.

🧩 Suggested change
     if (types.length === 0) {
       OrchestratorLogger.log('[Output] No output types declared, skipping collection');
-
-      return manifest;
     }
 
-    OrchestratorLogger.log(`[Output] Collecting ${types.length} output type(s): ${types.map((t) => t.name).join(', ')}`);
-
-    for (const typeDef of types) {
+    if (types.length > 0) {
+      OrchestratorLogger.log(`[Output] Collecting ${types.length} output type(s): ${types.map((t) => t.name).join(', ')}`);
+    }
+
+    for (const typeDef of types) {
       // existing loop body
     }

Also applies to: 75-84

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-service.ts` around lines 36 -
40, The early-return in the guard "if (types.length === 0)" prevents persisting
the manifest when manifestPath is provided; update the logic in that block (and
the similar block around lines 75-84) so that before returning it still invokes
the manifest persistence routine for manifest + manifestPath (i.e., ensure the
code that writes the manifest to manifestPath runs even when types.length ===
0), then return; keep the OrchestratorLogger.log call but do not skip calling
the manifest write logic.

Comment on lines +45 to +55
const outputPath = path.join(projectPath, typeDef.defaultPath.replace('{platform}', process.env.BUILD_TARGET || 'Unknown'));

if (!fs.existsSync(outputPath)) {
  OrchestratorLogger.log(`[Output] No output found for '${typeDef.name}' at ${outputPath}`);
  continue;
}

const entry: OutputEntry = {
  type: typeDef.name,
  path: typeDef.defaultPath,
};

⚠️ Potential issue | 🔴 Critical

Constrain collected paths to the project root, and persist the resolved relative path.

Custom output types can provide traversal paths (for example ../...) and escape projectPath. Also, entry.path currently stores the template/default path instead of the actual resolved path used for collection.

🛡️ Suggested fix
+    const projectRoot = path.resolve(projectPath);
     for (const typeDef of types) {
-      const outputPath = path.join(projectPath, typeDef.defaultPath.replace('{platform}', process.env.BUILD_TARGET || 'Unknown'));
+      const configuredPath = typeDef.defaultPath.replace(
+        '{platform}',
+        process.env.BUILD_TARGET || 'Unknown',
+      );
+      const outputPath = path.resolve(projectRoot, configuredPath);
+
+      if (outputPath !== projectRoot && !outputPath.startsWith(`${projectRoot}${path.sep}`)) {
+        OrchestratorLogger.logWarning(
+          `[Output] Skipping '${typeDef.name}' because resolved path escapes project root: ${configuredPath}`,
+        );
+        continue;
+      }

       if (!fs.existsSync(outputPath)) {
         OrchestratorLogger.log(`[Output] No output found for '${typeDef.name}' at ${outputPath}`);
         continue;
       }

       const entry: OutputEntry = {
         type: typeDef.name,
-        path: typeDef.defaultPath,
+        path: path.relative(projectRoot, outputPath) || '.',
       };
🧰 Tools
🪛 ESLint

[error] 45-45: Replace projectPath,·typeDef.defaultPath.replace('{platform}',·process.env.BUILD_TARGET·||·'Unknown') with ⏎········projectPath,⏎········typeDef.defaultPath.replace('{platform}',·process.env.BUILD_TARGET·||·'Unknown'),⏎······

(prettier/prettier)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-service.ts` around lines 45 -
55, The code is using outputPath built from typeDef.defaultPath but allows
escapes outside projectPath and stores the template instead of the actual
resolved path; change to resolve the final path with path.resolve(projectPath,
typeDef.defaultPath.replace('{platform}', process.env.BUILD_TARGET ||
'Unknown')) (use that resolvedPath for exists checks and reading), then verify
it is constrained to the project root (e.g., ensure path.relative(projectPath,
resolvedPath) does not start with '..' or compare
resolvedPath.startsWith(projectPath)), and finally set OutputEntry.path to the
resolved relative path (path.relative(projectPath, resolvedPath)) instead of
typeDef.defaultPath so stored entries reference the actual collected path; keep
references to outputPath, resolvedPath, projectPath, typeDef, OutputEntry and
entry in your changes.
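The root-containment check recommended above can be demonstrated in isolation; isInsideRoot and the example paths are illustrative, not code from the PR.

```typescript
// Illustrative containment check: a candidate path, resolved against a
// root, must not escape that root via '..' segments or absolute paths.
import * as path from 'path';

function isInsideRoot(root: string, candidate: string): boolean {
  const resolvedRoot = path.resolve(root);
  const resolved = path.resolve(resolvedRoot, candidate);
  const relative = path.relative(resolvedRoot, resolved);
  // The root itself is allowed; anything starting with '..' or absolute
  // (possible on Windows across drives) has escaped.
  return relative === '' || (!relative.startsWith('..') && !path.isAbsolute(relative));
}

console.log(isInsideRoot('/project', 'build/StandaloneLinux64')); // true
console.log(isInsideRoot('/project', '../secrets')); // false
```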

@frostebite frostebite added enhancement New feature or request orchestrator Orchestrator module labels Mar 5, 2026
@frostebite frostebite added the Next-Gen Orchestrator Next-Gen experimental features label Mar 5, 2026
…s, tests, and action integration (#798)

- Add ArtifactUploadHandler with support for github-artifacts, storage (rclone),
  and local copy upload targets, including large file chunking for GitHub Artifacts
- Add 44 unit tests covering OutputTypeRegistry, OutputService, and
  ArtifactUploadHandler (config parsing, upload coordination, file collection)
- Add 6 new action.yml inputs for artifact configuration
- Add artifactManifestPath action output
- Wire artifact collection and upload into index.ts post-build flow

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
src/index.ts (1)

52-68: Consider validating custom type objects have a name property.

The code iterates over parsed custom types and accesses ct.name without validation. If a user provides [{}] or [{"defaultPath": "./foo"}], it will register a type with name: undefined, which could cause issues downstream.

💡 Suggested validation
           for (const ct of customTypes) {
+            if (!ct.name || typeof ct.name !== 'string') {
+              core.warning(`Skipping invalid custom type: missing or invalid 'name' property`);
+              continue;
+            }
             OutputTypeRegistry.registerType({
               name: ct.name,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 52 - 68, The parsed artifactCustomTypes array
items are used without validation, allowing ct.name to be undefined; update the
parsing block that creates const customTypes and the loop that calls
OutputTypeRegistry.registerType to validate each ct is an object with a
non-empty string name (and optionally skip or warn for invalid entries), and
only call OutputTypeRegistry.registerType for items where ct.name is a valid
string—emit a core.warning when skipping invalid entries to aid debugging.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 33e44fee-13c2-494a-9ddf-e4ef79a5bc45

📥 Commits

Reviewing files that changed from the base of the PR and between b3e1639 and aa2e05d.

⛔ Files ignored due to path filters (2)
  • dist/index.js is excluded by !**/dist/**
  • dist/index.js.map is excluded by !**/dist/**, !**/*.map
📒 Files selected for processing (6)
  • action.yml
  • src/index.ts
  • src/model/build-parameters.ts
  • src/model/input.ts
  • src/model/orchestrator/services/output/artifact-service.test.ts
  • src/model/orchestrator/services/output/artifact-upload-handler.ts

Comment on lines +183 to +190
let artifact: any;
try {
  const artifactModule = '@actions/artifact';
  // eslint-disable-next-line @typescript-eslint/no-require-imports
  artifact = require(artifactModule);
} catch {
  throw new Error('@actions/artifact package is not available. Install it to use github-artifacts upload target.');
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if `@actions/artifact` is in package.json
echo "=== Checking package.json for `@actions/artifact` ==="
if [ -f package.json ]; then
  cat package.json | jq '.dependencies, .devDependencies, .optionalDependencies' 2>/dev/null | grep -i "@actions/artifact" && echo "FOUND" || echo "NOT FOUND"
  echo ""
  echo "=== Full package.json content (first 100 lines) ==="
  head -100 package.json
else
  echo "package.json not found"
fi

Repository: game-ci/unity-builder

Length of output: 3878


🏁 Script executed:

#!/bin/bash
# Check the artifact-upload-handler.ts file to understand default targets and configuration
echo "=== Checking artifact-upload-handler.ts for default upload target ==="
rg -A 5 -B 5 "default.*upload.*target|github-artifacts.*default|uploadTarget" src/model/orchestrator/services/output/artifact-upload-handler.ts
echo ""
echo "=== Checking how uploadArtifacts is invoked/used ==="
rg -B 3 -A 3 "uploadArtifacts\(" src/ --max-count=10
echo ""
echo "=== Checking parseConfig or upload target configuration ==="
rg -A 20 "parseConfig|UploadTarget" src/model/orchestrator/services/output/artifact-upload-handler.ts | head -50

Repository: game-ci/unity-builder

Length of output: 6993


🏁 Script executed:

#!/bin/bash
# Check if `@actions/artifact` is mentioned in docs or if it's a known optional dependency
echo "=== Checking README/docs for `@actions/artifact` mention ==="
find . -name "README*" -o -name "*.md" | xargs grep -l "@actions/artifact" 2>/dev/null || echo "Not found in docs"

echo ""
echo "=== Checking if other `@actions` packages are documented ==="
grep -r "@actions/" README* 2>/dev/null | head -10

echo ""
echo "=== Checking for any installation/setup docs about artifact upload ==="
find . -name "*.md" | xargs grep -i "artifact.*upload\|github.*artifact" 2>/dev/null | head -10

Repository: game-ci/unity-builder

Length of output: 273


@actions/artifact dependency is missing but required for the default upload target.

The code at lines 183-190 dynamically requires @actions/artifact, which is not listed in package.json. Since github-artifacts is the default upload target (as shown in parseConfig), users will encounter a runtime error when uploading artifacts unless they explicitly change the target or manually install the package. This should either be added as a dependency or the default target should be changed to none or local.

🧰 Tools
🪛 ESLint

[error] 186-186: '@typescript-eslint/no-require-imports' rule is disabled but never reported.

(eslint-comments/no-unused-disable)


[error] 187-187: Calls to require() should use string literals

(import/no-dynamic-require)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 183 - 190, The code dynamically requires '@actions/artifact' in
artifact-upload-handler.ts (artifactModule / require block) but parseConfig
currently defaults the upload target to 'github-artifacts', causing runtime
failures because the package is not in package.json; fix by either adding
'@actions/artifact' to package.json dependencies (preferred) and running
install, or change the default upload target in parseConfig to a safe value like
'none' or 'local' and update related docs; ensure the dynamic require block
still throws a clear error if the package is missing and that any tests/configs
expecting the previous default are updated accordingly.

Comment on lines +245 to +263
for (const filePath of files) {
  const fileSize = fs.statSync(filePath).size;

  if (currentChunkSize + fileSize > chunkSize && currentChunkFiles.length > 0) {
    await ArtifactUploadHandler.uploadSingleChunk(
      artifactClient,
      `${baseName}-part${chunkIndex}`,
      currentChunkFiles,
      rootDirectory,
      config,
    );
    chunkIndex++;
    currentChunkFiles = [];
    currentChunkSize = 0;
  }

  currentChunkFiles.push(filePath);
  currentChunkSize += fileSize;
}

⚠️ Potential issue | 🟡 Minor

Edge case: individual file larger than 10GB will still fail upload.

The chunking logic handles distributing files across chunks, but if a single file exceeds GITHUB_ARTIFACT_SIZE_LIMIT (10GB), it will still be added to a chunk and the upload will fail. Consider logging a warning or skipping such files with an error.

💡 Suggested handling
     for (const filePath of files) {
       const fileSize = fs.statSync(filePath).size;
 
+      if (fileSize > chunkSize) {
+        OrchestratorLogger.logWarning(
+          `[ArtifactUpload] File '${filePath}' (${fileSize} bytes) exceeds GitHub Artifacts size limit and cannot be uploaded`,
+        );
+        continue;
+      }
+
       if (currentChunkSize + fileSize > chunkSize && currentChunkFiles.length > 0) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 245 - 263, The loop that builds chunks can still add a single file larger
than chunkSize (GITHUB_ARTIFACT_SIZE_LIMIT) causing upload failures; inside the
for-of over files, detect when fileSize > chunkSize (reference: fileSize,
chunkSize, currentChunkFiles, currentChunkSize, files, chunkIndex) and handle it
explicitly — e.g., log an error/warning via the same logger, skip the offending
file (do not push it into currentChunkFiles), and continue to the next file, or
alternatively create a dedicated single-file upload path for oversized files
before continuing; ensure any skipped files are recorded so callers can surface
the failure.

Comment on lines +346 to +358
private static copyRecursive(source: string, destination: string): void {
const stat = fs.statSync(source);

if (stat.isDirectory()) {
fs.mkdirSync(destination, { recursive: true });
const entries = fs.readdirSync(source);
for (const entry of entries) {
ArtifactUploadHandler.copyRecursive(path.join(source, entry), path.join(destination, entry));
}
} else {
fs.copyFileSync(source, destination);
}
}

⚠️ Potential issue | 🟠 Major

Bug: copying a single file to a directory destination fails.

When source is a file and destination is a directory (created by uploadToLocal at line 336), fs.copyFileSync(source, destination) will fail because you cannot copy a file directly onto a directory path. The file should be copied into the directory with its basename preserved.

🐛 Proposed fix
   private static copyRecursive(source: string, destination: string): void {
     const stat = fs.statSync(source);
 
     if (stat.isDirectory()) {
       fs.mkdirSync(destination, { recursive: true });
       const entries = fs.readdirSync(source);
       for (const entry of entries) {
         ArtifactUploadHandler.copyRecursive(path.join(source, entry), path.join(destination, entry));
       }
     } else {
-      fs.copyFileSync(source, destination);
+      // If destination is an existing directory, copy file into it; otherwise copy to destination path
+      const destStat = fs.existsSync(destination) && fs.statSync(destination);
+      const finalDest = destStat && destStat.isDirectory()
+        ? path.join(destination, path.basename(source))
+        : destination;
+      fs.copyFileSync(source, finalDest);
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 346 - 358, ArtifactUploadHandler.copyRecursive currently calls
fs.copyFileSync(source, destination) which fails if destination is an existing
directory; change the file-copy branch in copyRecursive to check if destination
exists and is a directory (using fs.existsSync/dest stat or fs.statSync) and,
when so, compute the final target as path.join(destination,
path.basename(source)) before calling fs.copyFileSync; keep existing behavior
for non-directory destinations and when source is a directory (function:
ArtifactUploadHandler.copyRecursive, referenced from uploadToLocal).

@github-actions

github-actions bot commented Mar 5, 2026


Check for rclone binary before attempting storage-based uploads.
Validate storage destination URI format (remoteName:path).
Provide clear error message with install link when rclone is missing.
Fail gracefully instead of cryptic ENOENT crash.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
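The checks listed above could be sketched as a pre-flight helper. This is a minimal illustration only — the function name, the destination regex, and the error wording are assumptions, not the shipped implementation:

```typescript
import { execSync } from 'child_process';

// Hypothetical pre-flight helper: validate the destination shape and
// fail with a clear message if rclone is missing, instead of a later ENOENT.
function assertRcloneAvailable(destination: string): void {
  // Expected rclone destination shape: remoteName:path
  if (!/^[\w-]+:.+$/.test(destination)) {
    throw new Error(
      `[ArtifactUpload] Invalid storage destination '${destination}'; expected 'remoteName:path'`,
    );
  }
  try {
    // Throws if the binary is not on PATH
    execSync('rclone version', { stdio: 'ignore' });
  } catch {
    throw new Error(
      '[ArtifactUpload] rclone is not installed or not in PATH. ' +
        'Install it from https://rclone.org/install/ to use storage-based artifact upload.',
    );
  }
}
```

Calling such a helper at the top of the storage-upload path would surface both failure modes (bad URI, missing binary) before any files are touched.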

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
src/model/orchestrator/services/output/artifact-upload-handler.ts (2)

406-417: ⚠️ Potential issue | 🟠 Major

Single-file local uploads fail when destination is a directory.

Line 416 uses fs.copyFileSync(source, destination) directly. In this flow, destination is created as a directory in uploadToLocal, so file copy can fail with directory targets.

🐛 Suggested fix
   private static copyRecursive(source: string, destination: string): void {
     const stat = fs.statSync(source);
@@
     } else {
-      fs.copyFileSync(source, destination);
+      const destinationExists = fs.existsSync(destination);
+      const destinationIsDirectory = destinationExists && fs.statSync(destination).isDirectory();
+      const finalDestination = destinationIsDirectory
+        ? path.join(destination, path.basename(source))
+        : destination;
+      fs.copyFileSync(source, finalDestination);
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 406 - 417, The copyRecursive method in ArtifactUploadHandler treats
destination as a file when calling fs.copyFileSync(source, destination), causing
failures if destination is an existing directory (as happens in uploadToLocal);
update copyRecursive so that when stat.isDirectory() is false (source is a file)
you detect if the destination path is an existing directory (or ends with a path
separator) and, if so, compute the actual target file path using
path.join(destination, path.basename(source)) before calling fs.copyFileSync;
also ensure the parent directory of the computed target exists (mkdirSync with {
recursive: true }) so the copy never targets a directory path.

274-292: ⚠️ Potential issue | 🟡 Minor

Handle single files that exceed the chunk limit explicitly.

Lines 274-292 can still place a single file larger than GITHUB_ARTIFACT_SIZE_LIMIT into a chunk, which will fail upload later.

💡 Suggested fix
   for (const filePath of files) {
     const fileSize = fs.statSync(filePath).size;
+    if (fileSize > chunkSize) {
+      throw new Error(
+        `[ArtifactUpload] File '${filePath}' (${fileSize} bytes) exceeds GitHub artifact limit (${chunkSize} bytes)`,
+      );
+    }
 
     if (currentChunkSize + fileSize > chunkSize && currentChunkFiles.length > 0) {
       await ArtifactUploadHandler.uploadSingleChunk(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 274 - 292, Inside the files loop in ArtifactUploadHandler (around the
for-of that uses currentChunkSize/currentChunkFiles and calls
ArtifactUploadHandler.uploadSingleChunk), add an explicit check for a single
file whose fileSize > chunkSize (GITHUB_ARTIFACT_SIZE_LIMIT): if fileSize >
chunkSize then immediately upload that file as its own chunk (call
ArtifactUploadHandler.uploadSingleChunk with a single-element array [filePath]
and an appropriate part name like `${baseName}-part${chunkIndex}`), increment
chunkIndex, and continue without pushing it into currentChunkFiles; otherwise
keep the existing chunking logic. This ensures oversized single files are
handled instead of being placed into a later chunk and failing the upload.
🧹 Nitpick comments (1)
src/model/orchestrator/services/output/artifact-service.test.ts (1)

394-418: Strengthen this test to assert the copied file destination path.

Right now this test can pass even if file uploads copy onto a directory path. Assert copyFileSync receives the basename-preserving destination.

✅ Suggested test assertion
       const result = await ArtifactUploadHandler.uploadArtifacts(manifest, config, projectPath);
       expect(result.success).toBe(true);
       expect(result.entries).toHaveLength(1);
       expect(result.entries[0].success).toBe(true);
       expect(result.totalBytes).toBe(1024);
+      expect(mockedFs.copyFileSync).toHaveBeenCalledWith(
+        path.resolve(projectPath, './Logs/build.log'),
+        path.join('/output', 'logs', 'build.log'),
+      );
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-service.test.ts` around lines
394 - 418, The test for ArtifactUploadHandler.uploadArtifacts should also assert
that mockedFs.copyFileSync was called with the correct destination path that
preserves the file basename: compute expectedDest by joining config.destination
and path.basename(manifest.outputs[0].path) and add an assertion that
mockedFs.copyFileSync was called with the original source path and expectedDest
(use the same source value used in the test), ensuring the upload handler writes
to a file path (not a directory).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: fb4564c3-0841-47ec-84a0-673618286fe6

📥 Commits

Reviewing files that changed from the base of the PR and between aa2e05d and 7615bbd.

⛔ Files ignored due to path filters (2)
  • dist/index.js is excluded by !**/dist/**
  • dist/index.js.map is excluded by !**/dist/**, !**/*.map
📒 Files selected for processing (2)
  • src/model/orchestrator/services/output/artifact-service.test.ts
  • src/model/orchestrator/services/output/artifact-upload-handler.ts

Comment on lines +249 to +250
compressionLevel: config.compression === 'none' ? 0 : 6,
});

⚠️ Potential issue | 🟠 Major

lz4 is accepted in config but currently behaves as a no-op variant of gzip.

Lines 459-463 accept 'lz4', but lines 249 and 317 map all non-none values to the same compression level, so users selecting lz4 get no distinct behavior.

💡 Suggested fix (fail-safe fallback until true lz4 support exists)
-    const validCompressions = ['none', 'gzip', 'lz4'] as const;
-    const resolvedCompression = validCompressions.includes(compression as any)
-      ? (compression as ArtifactUploadConfig['compression'])
-      : 'gzip';
+    const validCompressions = ['none', 'gzip'] as const;
+    let resolvedCompression = validCompressions.includes(compression as any)
+      ? (compression as ArtifactUploadConfig['compression'])
+      : 'gzip';
+    if (compression === 'lz4') {
+      OrchestratorLogger.logWarning(
+        "[ArtifactUpload] Compression 'lz4' is not currently supported; falling back to 'gzip'.",
+      );
+      resolvedCompression = 'gzip';
+    }

Also applies to: 317-318, 459-463

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 249 - 250, The current mapping treats any non-'none' compression the same,
so selecting 'lz4' becomes a no-op; update the logic that computes
compressionLevel (where config.compression is read and compressionLevel is set)
to special-case 'lz4' instead of lumping it with gzip: detect config.compression
=== 'lz4' and either (a) set a fail-safe fallback (e.g., switch the effective
codec to 'gzip' or 'none') while emitting a clear processLogger.warn/error that
lz4 is not yet supported, or (b) throw/return an error to fail fast — apply the
same special-case handling at the other identical sites that compute
compressionLevel (the two other occurrences mentioned) so 'lz4' no longer
silently maps to the generic compressionLevel. Ensure you reference and update
the same variables (config.compression and compressionLevel) and the surrounding
upload-option-builder function so behavior is consistent across all code paths.

Comment on lines +349 to +362
'rclone is not installed or not in PATH. ' +
'Install rclone (https://rclone.org/install/) to use storage-based artifact upload. ' +
'Falling back to local copy.',
);

// Attempt local copy fallback using the destination as a hint
// Strip the remote prefix to get a local-ish path for fallback
OrchestratorLogger.logWarning(
`[ArtifactUpload] Storage upload skipped for '${entry.type}' — rclone not available`,
);
throw new Error(
'rclone is not installed or not in PATH. ' +
'Install rclone from https://rclone.org/install/ to use storage-based artifact upload.',
);

⚠️ Potential issue | 🟡 Minor

The rclone-missing path logs a fallback that is never executed.

Lines 349-352 say “Falling back to local copy”, but lines 359-362 immediately throw. This is operationally misleading.

💡 Suggested fix
       OrchestratorLogger.error(
         'rclone is not installed or not in PATH. ' +
-          'Install rclone (https://rclone.org/install/) to use storage-based artifact upload. ' +
-          'Falling back to local copy.',
+          'Install rclone (https://rclone.org/install/) to use storage-based artifact upload.',
       );
 
-      // Attempt local copy fallback using the destination as a hint
-      // Strip the remote prefix to get a local-ish path for fallback
       OrchestratorLogger.logWarning(
         `[ArtifactUpload] Storage upload skipped for '${entry.type}' — rclone not available`,
       );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/artifact-upload-handler.ts` around
lines 349 - 362, The log message in artifact-upload-handler.ts indicates
"Falling back to local copy" but the code then throws and never performs a
fallback; update the rclone-missing branch in the function handling storage
uploads (look for OrchestratorLogger.logWarning and the subsequent throw) so
that it either removes the misleading "Falling back to local copy" text from
logs or implements the actual local-copy fallback flow instead of throwing;
ensure the change is applied to the same branch that currently logs the rclone
warning and throws so the behavior and message are consistent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (2)
src/model/orchestrator/services/output/output-service.ts (2)

36-40: ⚠️ Potential issue | 🟠 Major

Do not return before manifest persistence when manifestPath is provided.

Line 39 exits before Lines 81-91, so an explicitly requested manifest is not written when no types are declared.

🧩 Proposed fix
     if (types.length === 0) {
       OrchestratorLogger.log('[Output] No output types declared, skipping collection');
-
-      return manifest;
     }
-
-    OrchestratorLogger.log(
-      `[Output] Collecting ${types.length} output type(s): ${types.map((t) => t.name).join(', ')}`,
-    );
+    if (types.length > 0) {
+      OrchestratorLogger.log(
+        `[Output] Collecting ${types.length} output type(s): ${types.map((t) => t.name).join(', ')}`,
+      );
+    }
 
     for (const typeDef of types) {
       ...
     }

Also applies to: 81-91

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-service.ts` around lines 36 -
40, The early return when types.length === 0 skips manifest persistence; change
the logic in the function (the block using types, manifest, and manifestPath) so
that when manifestPath is provided the manifest is written before returning:
either move the manifest persistence code (the existing manifest-write logic
that checks manifestPath) into the early-return branch or replace the early
return with code that calls the manifest persistence routine and then returns;
ensure you still log via OrchestratorLogger.log('[Output] No output types
declared, skipping collection') and then return the manifest after the write.

46-60: ⚠️ Potential issue | 🔴 Critical

Constrain resolved output paths to projectPath and persist the resolved relative path.

Line 47 builds paths from configurable defaults without root-boundary checks. A custom path like ../... can escape the workspace. Also, Line 59 stores the template path instead of the actual resolved path used for collection.

🛡️ Proposed fix
+    const projectRoot = path.resolve(projectPath);
     for (const typeDef of types) {
-      const outputPath = path.join(
-        projectPath,
-        typeDef.defaultPath.replace('{platform}', process.env.BUILD_TARGET || 'Unknown'),
-      );
+      const configuredPath = typeDef.defaultPath.replace('{platform}', process.env.BUILD_TARGET || 'Unknown');
+      const outputPath = path.resolve(projectRoot, configuredPath);
+      const relativeOutputPath = path.relative(projectRoot, outputPath);
+
+      if (relativeOutputPath.startsWith('..') || path.isAbsolute(relativeOutputPath)) {
+        OrchestratorLogger.logWarning(
+          `[Output] Skipping '${typeDef.name}' because resolved path escapes project root: ${configuredPath}`,
+        );
+        continue;
+      }
 
       if (!fs.existsSync(outputPath)) {
         OrchestratorLogger.log(`[Output] No output found for '${typeDef.name}' at ${outputPath}`);
         continue;
       }
 
       const entry: OutputEntry = {
         type: typeDef.name,
-        path: typeDef.defaultPath,
+        path: relativeOutputPath || '.',
       };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-service.ts` around lines 46 -
60, The code currently builds outputPath from typeDef.defaultPath (with
'{platform}' replacement) and then stores the template path into the
OutputEntry.path; fix by fully resolving the replaced path with
path.resolve(projectPath, replacedDefaultPath) and ensure the resolved path is
contained inside projectPath (e.g., compute path.relative(projectPath,
resolvedPath) and reject if it begins with '..' or an absolute escape), continue
if outside; finally persist the resolved relative path (e.g., the relative path
computed above) into the OutputEntry.path instead of storing typeDef.defaultPath
so the recorded path matches the actual file checked. Use the symbols types,
typeDef.defaultPath, outputPath, projectPath, and OutputEntry to locate and
update the logic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 359a837d-8ccb-450b-9f23-dc201f58e29b

📥 Commits

Reviewing files that changed from the base of the PR and between 7615bbd and 1e2bb88.

⛔ Files ignored due to path filters (2)
  • dist/index.js is excluded by !**/dist/**
  • dist/index.js.map is excluded by !**/dist/**, !**/*.map
📒 Files selected for processing (2)
  • src/model/orchestrator/services/output/output-service.ts
  • src/model/orchestrator/services/output/output-type-registry.ts

Comment on lines +62 to +70
// Collect file listing for directory outputs
try {
const stat = fs.statSync(outputPath);
if (stat.isDirectory()) {
entry.files = fs.readdirSync(outputPath);
entry.size = OutputService.getDirectorySize(outputPath);
} else {
entry.size = stat.size;
}

⚠️ Potential issue | 🟠 Major

Manifest data is incomplete for the declared artifact requirements (recursive files + SHA-256).

Line 66 only captures top-level names, and this service never computes per-file hashes. That misses the stated artifact behavior for recursive workspace scanning and hash-rich manifest entries.

Also applies to: 99-117

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-service.ts` around lines 62 -
70, The manifest currently only records top-level filenames (entry.files via
fs.readdirSync) and a directory total size (OutputService.getDirectorySize) but
does not recurse into subfolders or compute file-level SHA-256 hashes; update
the logic in OutputService (the directory handling block around where
stat.isDirectory() is checked and the related code in the later block referenced
at lines 99-117) to perform a recursive walk of outputPath, collect relative
file paths, compute and attach a SHA-256 for each file (and each file's size),
and set entry.files to the full list of file descriptors (path, size, sha256)
while keeping entry.size as the aggregated total size; use a dedicated helper
(e.g., OutputService.collectFilesWithHashes or enhance getDirectorySize) so the
manifest entries include recursive listings and per-file SHA-256 values.
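The recursive, hash-rich collection described above can be sketched as follows. The helper name `collectFilesWithHashes` and the `FileDescriptor` shape are illustrative assumptions, not the actual OutputService API:

```typescript
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as path from 'path';

interface FileDescriptor {
  path: string; // relative to the scanned root
  size: number;
  sha256: string;
}

// Hypothetical helper: recursively walk a directory, returning a
// per-file descriptor with relative path, size, and SHA-256 digest.
function collectFilesWithHashes(root: string, current: string = root): FileDescriptor[] {
  const results: FileDescriptor[] = [];
  for (const entry of fs.readdirSync(current)) {
    const fullPath = path.join(current, entry);
    const stat = fs.statSync(fullPath);
    if (stat.isDirectory()) {
      results.push(...collectFilesWithHashes(root, fullPath));
    } else {
      const sha256 = crypto.createHash('sha256').update(fs.readFileSync(fullPath)).digest('hex');
      results.push({ path: path.relative(root, fullPath), size: stat.size, sha256 });
    }
  }
  return results;
}
```

The aggregate entry.size could then be derived by summing the descriptor sizes, keeping a single walk for both listing and totals.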

Comment on lines +73 to +100
private static customTypes: Record<string, OutputTypeDefinition> = {};

/**
* Get a type definition by name. Checks custom types first, then built-in.
*/
static getType(name: string): OutputTypeDefinition | undefined {
return OutputTypeRegistry.customTypes[name] || OutputTypeRegistry.builtInTypes[name];
}

/**
* Get all registered types (built-in + custom).
*/
static getAllTypes(): OutputTypeDefinition[] {
return [...Object.values(OutputTypeRegistry.builtInTypes), ...Object.values(OutputTypeRegistry.customTypes)];
}

/**
* Register a custom output type.
*/
static registerType(definition: OutputTypeDefinition): void {
if (OutputTypeRegistry.builtInTypes[definition.name]) {
OrchestratorLogger.logWarning(`[OutputTypes] Cannot override built-in type '${definition.name}'`);

return;
}

OutputTypeRegistry.customTypes[definition.name] = { ...definition, builtIn: false };
OrchestratorLogger.log(`[OutputTypes] Registered custom type '${definition.name}'`);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/model/orchestrator/services/output/output-type-registry.ts

Repository: game-ci/unity-builder

Length of output: 4766


🏁 Script executed:

rg -n "registerType" --type ts --type js -B 2 -A 2

Repository: game-ci/unity-builder

Length of output: 9455


🏁 Script executed:

rg -n "customTypes" src/index.ts -B 5 -A 5

Repository: game-ci/unity-builder

Length of output: 736


Convert customTypes to a Map to eliminate prototype pollution vulnerability.

Line 73 and line 99 store user-provided type names via bracket notation on a plain object. User input from artifactCustomTypes (parsed at src/index.ts:54) can inject keys like __proto__ or constructor, polluting the prototype chain and causing unexpected behavior. This is a legitimate prototype pollution vulnerability.

Fix
-  private static customTypes: Record<string, OutputTypeDefinition> = {};
+  private static customTypes = new Map<string, OutputTypeDefinition>();
...
-    return OutputTypeRegistry.customTypes[name] || OutputTypeRegistry.builtInTypes[name];
+    return OutputTypeRegistry.customTypes.get(name) || OutputTypeRegistry.builtInTypes[name];
...
-    return [...Object.values(OutputTypeRegistry.builtInTypes), ...Object.values(OutputTypeRegistry.customTypes)];
+    return [...Object.values(OutputTypeRegistry.builtInTypes), ...OutputTypeRegistry.customTypes.values()];
...
-    OutputTypeRegistry.customTypes[definition.name] = { ...definition, builtIn: false };
+    OutputTypeRegistry.customTypes.set(definition.name, { ...definition, builtIn: false });
...
-    OutputTypeRegistry.customTypes = {};
+    OutputTypeRegistry.customTypes.clear();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-type-registry.ts` around lines
73 - 100, Replace the plain object storage for custom types with a Map to
prevent prototype pollution: change the static field
OutputTypeRegistry.customTypes to a Map<string, OutputTypeDefinition> and update
getType, getAllTypes and registerType to use Map methods (get, has, set, values)
instead of bracket notation; ensure registerType still checks
OutputTypeRegistry.builtInTypes for conflicts, sets the stored definition with
builtIn: false in the Map, and uses OrchestratorLogger.logWarning /
OrchestratorLogger.log with the same messages when refusing or registering a
custom type so behavior remains identical aside from the safe Map-backed
storage.

Comment on lines +107 to +128
static parseOutputTypes(outputTypesInput: string): OutputTypeDefinition[] {
if (!outputTypesInput) {
return [];
}

const names = outputTypesInput
.split(',')
.map((s) => s.trim())
.filter(Boolean);
const types: OutputTypeDefinition[] = [];

for (const name of names) {
const typeDef = OutputTypeRegistry.getType(name);
if (typeDef) {
types.push(typeDef);
} else {
OrchestratorLogger.logWarning(`[OutputTypes] Unknown output type '${name}', skipping`);
}
}

return types;
}

⚠️ Potential issue | 🟡 Minor

Deduplicate parsed output types to prevent duplicate manifest entries.

If outputTypesInput contains repeats (e.g., build,build), lines 118-121 push duplicates, causing repeated scanning and logging for the same type.

♻️ Proposed fix
-    const names = outputTypesInput
+    const names = [...new Set(
+      outputTypesInput
       .split(',')
       .map((s) => s.trim())
-      .filter(Boolean);
+      .filter(Boolean),
+    )];
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
static parseOutputTypes(outputTypesInput: string): OutputTypeDefinition[] {
if (!outputTypesInput) {
return [];
}
const names = [...new Set(
outputTypesInput
.split(',')
.map((s) => s.trim())
.filter(Boolean),
)];
const types: OutputTypeDefinition[] = [];
for (const name of names) {
const typeDef = OutputTypeRegistry.getType(name);
if (typeDef) {
types.push(typeDef);
} else {
OrchestratorLogger.logWarning(`[OutputTypes] Unknown output type '${name}', skipping`);
}
}
return types;
}
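The dedup shape the suggestion proposes, extracted into a self-contained sketch. `parseNames` is a hypothetical helper standing in for the name-parsing half of `parseOutputTypes`; a `Set` drops repeats while preserving first-seen order:

```typescript
// Parse a comma-separated input into unique, trimmed, non-empty names.
function parseNames(input: string): string[] {
  return [...new Set(
    input
      .split(',')
      .map((s) => s.trim())
      .filter(Boolean),
  )];
}

console.log(parseNames('build,build, test-results ,,build')); // ["build", "test-results"]
```

Because `Set` iteration follows insertion order, the output order still matches the order the user wrote the types in, minus the repeats.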
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/model/orchestrator/services/output/output-type-registry.ts` around lines
107 - 128, parseOutputTypes currently pushes duplicate OutputTypeDefinition
entries when outputTypesInput contains repeated names (e.g., "build,build");
update parseOutputTypes to deduplicate by tracking seen type names (or ids)
before pushing to the types array so each OutputTypeDefinition returned is
unique. Use the existing OutputTypeRegistry.getType(name) lookup and
OrchestratorLogger.logWarning for unknown names, but skip adding a type if its
name (or other unique identifier) was already added to avoid duplicate manifest
entries and duplicate scanning/logging.

@codecov

codecov bot commented Mar 5, 2026

Codecov Report

❌ Patch coverage is 77.51938% with 58 lines in your changes missing coverage. Please review.
✅ Project coverage is 33.81%. Comparing base (9d47543) to head (55b45a4).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ...strator/services/output/artifact-upload-handler.ts | 67.06% | 55 Missing ⚠️ |
| ...del/orchestrator/services/output/output-service.ts | 94.00% | 3 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #798      +/-   ##
==========================================
+ Coverage   31.25%   33.81%   +2.55%     
==========================================
  Files          84       87       +3     
  Lines        4563     4821     +258     
  Branches     1103     1170      +67     
==========================================
+ Hits         1426     1630     +204     
- Misses       3137     3191      +54     
| Files with missing lines | Coverage Δ |
| --- | --- |
| src/model/build-parameters.ts | 90.00% <ø> (ø) |
| src/model/input.ts | 89.75% <100.00%> (+0.79%) ⬆️ |
| ...chestrator/services/output/output-type-registry.ts | 100.00% <100.00%> (ø) |
| src/model/orchestrator/workflows/async-workflow.ts | 27.77% <ø> (ø) |
| ...rchestrator/workflows/build-automation-workflow.ts | 10.44% <ø> (ø) |
| ...del/orchestrator/services/output/output-service.ts | 94.00% <94.00%> (ø) |
| ...strator/services/output/artifact-upload-handler.ts | 67.06% <67.06%> (ø) |

... and 1 file with indirect coverage changes


Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/build-tests-mac.yml:
- Line 15: Remove the job-level continue-on-error (the `continue-on-error: true`
on the buildForAllPlatformsMacOS job) and instead add step-level retries for the
build step: mark the actual build step (e.g., the matrix build step/id used
inside buildForAllPlatformsMacOS) with continue-on-error: true and then add
conditional retry steps that run only if that build step failed, duplicating the
build step with incremental sleep delays (e.g., 60s then 120s) before each retry
and clearing continue-on-error on the final attempt, mirroring the
Ubuntu/Windows workflow pattern; keep fail-fast: false on the job as-is.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: e11e87df-9798-40e7-a924-42d9dd56b17c

📥 Commits

Reviewing files that changed from the base of the PR and between 1e2bb88 and ae03bd2.

📒 Files selected for processing (1)
  • .github/workflows/build-tests-mac.yml

buildForAllPlatformsMacOS:
name: ${{ matrix.targetPlatform }} on ${{ matrix.unityVersion }}
runs-on: macos-latest
continue-on-error: true

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's look at the macOS workflow around line 15
cat -n .github/workflows/build-tests-mac.yml | head -30

Repository: game-ci/unity-builder

Length of output: 1057


🏁 Script executed:

# Check the Ubuntu workflow where the retry pattern is mentioned
cat -n .github/workflows/build-tests-ubuntu.yml | head -170 | tail -40

Repository: game-ci/unity-builder

Length of output: 1858


🏁 Script executed:

# Check the Windows workflow for comparison
cat -n .github/workflows/build-tests-windows.yml | head -110 | tail -40

Repository: game-ci/unity-builder

Length of output: 1818


🏁 Script executed:

# Look at the full macOS build section to see if there are retry steps
cat -n .github/workflows/build-tests-mac.yml | sed -n '30,100p'

Repository: game-ci/unity-builder

Length of output: 2765


Remove job-level continue-on-error and implement step-level retries with delays like Ubuntu/Windows.

Line 15 masks failing buildForAllPlatformsMacOS matrix runs without retrying, while fail-fast: false at line 17 only prevents matrix-wide cancellation. Ubuntu and Windows workflows implement explicit retry logic with step duplication and variable sleep delays (60–120s); macOS should follow the same pattern with a continue-on-error on the build step and conditional retry steps, not blanket job-level failure masking.

Suggested removal
-    continue-on-error: true

Then add step-level continue-on-error and explicit retry steps similar to Ubuntu/Windows.
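A sketch of the step-level retry shape the review describes. Step names, ids, the `uses: ./` action reference, and the 60s/120s sleeps are illustrative assumptions mirroring the described Ubuntu/Windows pattern, not a copy of the actual workflow:

```yaml
steps:
  - name: Build (attempt 1)
    id: build-1
    continue-on-error: true
    uses: ./
    with:
      targetPlatform: ${{ matrix.targetPlatform }}

  - name: Wait before retry
    if: steps.build-1.outcome == 'failure'
    run: sleep 60

  - name: Build (attempt 2)
    id: build-2
    if: steps.build-1.outcome == 'failure'
    continue-on-error: true
    uses: ./
    with:
      targetPlatform: ${{ matrix.targetPlatform }}

  - name: Wait before final retry
    if: steps.build-2.outcome == 'failure'
    run: sleep 120

  - name: Build (final attempt)
    if: steps.build-2.outcome == 'failure'
    uses: ./
    with:
      targetPlatform: ${{ matrix.targetPlatform }}
```

Only the final attempt lacks `continue-on-error`, so a matrix run fails only after all retries are exhausted; if attempt 1 succeeds, the later steps' `outcome` conditions evaluate false and they are skipped.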

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/build-tests-mac.yml at line 15, remove the job-level
continue-on-error (the `continue-on-error: true` on the
buildForAllPlatformsMacOS job) and instead add step-level retries for the build
step: mark the actual build step (e.g., the matrix build step/id used inside
buildForAllPlatformsMacOS) with continue-on-error: true and then add conditional
retry steps that run only if that build step failed, duplicating the build step
with incremental sleep delays (e.g., 60s then 120s) before each retry and
clearing continue-on-error on the final attempt, mirroring the Ubuntu/Windows
workflow pattern; keep fail-fast: false on the job as-is.

The orchestrator-develop branch no longer exists. Update all fallback
clone commands and test fixtures to use main instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@frostebite
Copy link
Member Author

Closing — all orchestrator code has been extracted to the standalone game-ci/orchestrator repository.

Content from this PR (generic artifact system — output types, manifests, collection service) is fully present in the orchestrator repo. See PR #819 for the extraction.


Labels

enhancement New feature or request Next-Gen Orchestrator Next-Gen experimental features orchestrator Orchestrator module

Projects

None yet

Development

Successfully merging this pull request may close these issues.

feat: Generic Artifact System — multi-type output management, manifests, and processing pipelines

1 participant