Describe the bug
When deploying a Prompt Studio project as an API Deployment, the workflow executes successfully through all stages (SOURCE → INITIALIZE → COMPILE → BUILD → RUN), and the prompt output is correctly generated and visible under “Combined Output” in Prompt Studio. However, the API Deployment consistently fails at the final step with the error:
Final output processing failed:
The resulting payload shows execution_status: "ERROR" and the result array contains:
{
  "file": "document_xxx",
  "status": "Failed",
  "result": null,
  "error": "Final output processing failed:",
  "metadata": null
}
The prompt output itself is valid JSON/text and is correctly written to workflow storage. The failure appears to occur only inside the API Deployment wrapper.
To reproduce
- Install Unstract OSS using the official Docker Compose stack (default configuration).
- Create a Prompt Studio project that extracts structured fields from a PDF (e.g. "provider": "BRENNTAG").
- Test the prompt manually → valid output is produced.
- Click Deploy as API and create a new deployment.
- Send a PDF to POST /deployment/api/mock_org/<deployment>/ with a valid API key.
- Poll the returned status_api endpoint once.
- The workflow completes RUN, writes the prompt output to storage, then FINALIZE fails with:
Final output processing failed:
- API returns:
{
  "execution_status": "ERROR",
  "result": [
    {
      "status": "Failed",
      "result": null,
      "error": "Final output processing failed:",
      "metadata": null
    }
  ]
}
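For anyone scripting this repro, the failure signature can be detected from the polled payload. The sketch below is a minimal helper, assuming only the payload shape shown in the ERROR response above; the function name and the sample dict are mine, not part of the Unstract API.

```python
# Minimal helper to detect the reported FINALIZE failure in a polled
# execution payload. The payload shape mirrors the ERROR response shown
# above; everything else here is an illustrative assumption.

def is_finalize_failure(payload: dict) -> bool:
    """Return True if the execution errored with 'Final output processing failed'."""
    if payload.get("execution_status") != "ERROR":
        return False
    for item in payload.get("result") or []:
        error = item.get("error") or ""
        if error.startswith("Final output processing failed"):
            return True
    return False

# Sample payload matching the response documented in this report.
failing = {
    "execution_status": "ERROR",
    "result": [
        {
            "status": "Failed",
            "result": None,
            "error": "Final output processing failed:",
            "metadata": None,
        }
    ],
}

print(is_finalize_failure(failing))  # True
```

Running this against the actual status_api response of a deployment exhibiting the bug should print True, while a healthy COMPLETED execution should print False.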
Expected behavior
The API Deployment should return the prompt output generated by the workflow. Since the LLM execution succeeds and the prompt output is correctly stored, the API Deployment should assemble and return the final structured result instead of failing.
Environment details
- Unstract Open Source Edition
- Version: latest OSS (Docker images pulled November 2025)
- Host: Windows 11 (Docker Desktop)
- Containers: unstract-backend, unstract-frontend, unstract-worker, unstract-worker-file-processing, unstract-platform-service, etc.
- API used: /deployment/api/mock_org/<deployment>/
Additional context
- Issue reproduces whether the prompt output is json or text.
- LLM profile (qwen2.5-3b-instruct) runs without errors.
- Workflow logs show: “Prompt studio project's output written successfully to workflow's storage”.
- Calling the same status_api endpoint twice produces 406 Not Acceptable with {"status": "ERROR", "message": "Result already acknowledged"}, which is expected and unrelated.
Screenshots