Conversation

@chadbailey59
Contributor

I'm working on adding some custom FrameProcessor testing capability to the pipecat CLI. This PR adds a few things to help with that.

The goal is to be able to define input frames and optional expected output frames in JSON, like this:

{
  "input_frames": [
    {"type": "TextFrame", "text": "hello world"},
    {"type": "TextFrame", "text": "testing 123"}
  ],
  "expected_output": [
    {"type": "TextFrame", "text": "HELLO WORLD"},
    {"type": "TextFrame", "text": "TESTING 123"}
  ]
}
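
For concreteness, here's a rough sketch of how a spec like this might be loaded and turned into frame objects. The frames_from_spec helper and the FRAME_TYPES mapping are hypothetical (not part of this PR), and I'm assuming pipecat's TextFrame accepts its fields as keyword arguments.

# Illustrative loader only; frames_from_spec is not the PR's API.
import json

from pipecat.frames.frames import TextFrame

# Assumed mapping from the JSON "type" string to a frame class.
FRAME_TYPES = {"TextFrame": TextFrame}

def frames_from_spec(path: str) -> list:
    """Turn a JSON test spec like the one above into a list of input frames."""
    with open(path) as f:
        spec = json.load(f)
    frames = []
    for entry in spec["input_frames"]:
        frame_cls = FRAME_TYPES[entry["type"]]
        # Every key except "type" becomes a constructor argument, e.g. text="hello world".
        kwargs = {k: v for k, v in entry.items() if k != "type"}
        frames.append(frame_cls(**kwargs))
    return frames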

The existing run_test function requires you to define expected output frames and has test assertions built in. This PR effectively re-implements run_test as run_test_from_file, moving the assertion step into a separate function. That makes expected_output optional, so you can run a FrameProcessor without knowing what output to expect while you're still building it.
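
To show the shape of that split without leaning on pipecat internals, here is a deliberately simplified stand-in (nothing below is the PR's code; the "processor" is just a callable): running collects output, and asserting is a separate, optional step.

def run_collect(process, input_frames):
    # Run the stand-in "processor" over each input and collect the output.
    return [process(frame) for frame in input_frames]

def assert_matches(received, expected):
    # Separate assertion step; skipped entirely when no expected output is given.
    assert len(received) == len(expected), f"expected {len(expected)} frames, got {len(received)}"
    for got, want in zip(received, expected):
        assert got == want, f"{got!r} != {want!r}"

# While building the processor, just run and inspect:
received = run_collect(str.upper, ["hello world", "testing 123"])
print(received)  # ['HELLO WORLD', 'TESTING 123']

# Once expected output is known, validate as a second step:
assert_matches(received, ["HELLO WORLD", "TESTING 123"])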

The custom validator also adds a bit more utility, such as validating field contents instead of just frame types. It also starts to build out a small testing DSL: appending _contains to a string field tests for substring containment instead of equality, e.g. {"type": "TextFrame", "text_contains": "TESTING"} would match the second frame in the example above.
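
For the _contains idea, here is a minimal sketch of what per-field matching could look like. This is my illustration of the described behavior, not the validator's actual code, and the attribute lookup via getattr is an assumption about how frames expose their fields.

def frame_matches(expected: dict, actual) -> bool:
    """Check one expected-frame entry from the JSON spec against an actual frame.

    Keys other than "type" are compared against attributes of the frame;
    a key ending in "_contains" does a substring check instead of equality.
    """
    if type(actual).__name__ != expected["type"]:
        return False
    for key, want in expected.items():
        if key == "type":
            continue
        if key.endswith("_contains"):
            field = key[: -len("_contains")]
            if want not in getattr(actual, field, ""):
                return False
        elif getattr(actual, key, None) != want:
            return False
    return True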

There's some mild duplication in creating a separate run_test_from_file function, but it's just very simple pipeline creation, and I think that's worth it to avoid disturbing the existing run_test function. I could be convinced otherwise, though.

@codecov

codecov bot commented Oct 27, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
