
[WIP] test: integration test implementation with mock service worker (msw) #44

Draft
waqasshaukat wants to merge 2 commits into ws/chat-refactor from ws/chat-integration-test

Conversation

@waqasshaukat
Contributor

No description provided.

@cloudflare-workers-and-pages

cloudflare-workers-and-pages bot commented Nov 29, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

Status: ✅ Deployment successful! (View logs)
Name: ai
Latest Commit: fbed583
Preview URLs: Commit Preview URL, Branch Preview URL
Updated (UTC): Dec 01 2025, 10:30 AM

@socket-security

socket-security bot commented Nov 29, 2025

Review the following changes in direct dependencies. Learn more about Socket for GitHub.

Diff   Package      Supply Chain Security  Vulnerability  Quality  Maintenance  License
Added  msw@2.12.3   94                     100            100      95           100

View full report

Contributor

@Muneeb147 left a comment

Thanks for taking the initiative @waqasshaukat.

I'd like to point out some concerns in the test design and flow.

Picking one case as an example: we have mocked our own app routes ("api/ymax", "api/chat", ...) and are returning mocked responses directly from the app handlers. That isn't right, and it doesn't count as an integration test, because our actual code path is never exercised.

Why do we mock?
We mock the third-party calls that are not related to us (or not in our control), for example the AiChatLLM call and the McpCall. We should mock just those two calls; the rest of the code path of api/ymax etc. should be exercised by our testing pipeline.

Right now the suite is only testing our mocked responses, so it will always pass even if someone breaks the api/ymax handler. Let's first decide what the goal is and what we actually want to test before making any further effort on this PR.
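The distinction the comment draws can be sketched in a few lines. This is a minimal hypothetical example (the names handleYmaxRequest, LLMCall, and mockLLM are illustrative stand-ins, not the PR's actual code): only the external LLM boundary is stubbed, while our own handler logic still runs and is actually tested.

```typescript
// The "third-party" boundary we do not control (e.g. the AiChatLLM call):
type LLMCall = (prompt: string) => Promise<string>;

// Our own code path -- this is what the integration test should cover.
async function handleYmaxRequest(
  prompt: string,
  callLLM: LLMCall,
): Promise<{ reply: string }> {
  if (prompt.trim() === "") {
    throw new Error("empty prompt"); // real validation logic worth testing
  }
  const raw = await callLLM(prompt);
  return { reply: raw.trim() }; // real post-processing logic worth testing
}

// Only the external dependency is mocked; handleYmaxRequest runs for real,
// so a bug in its validation or post-processing would still fail the test.
const mockLLM: LLMCall = async () => "  mocked completion  ";

handleYmaxRequest("hello", mockLLM).then((res) => {
  console.log(res.reply);
});
```

With msw the idea is the same: the request handlers should intercept the outbound third-party URLs, not our own app routes.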

Comment on lines +69 to +72
const actualModels = [...MODELS].sort();
const expectedModels = [...EXPECTED_MODELS_IN_PROVIDERS].sort();

expect(actualModels).toEqual(expectedModels);
Contributor

Why do we need this test? We are comparing one constant list of models with another (in the tests). Every time a new model is added or an existing one is updated, the test will fail unnecessarily and the developer will just update the test list.

BUT, if we strictly want to allow ONLY these models: 'gpt-4.1-mini', 'claude-4-5-sonnet', 'qwen-qwq', 'grok-3-mini', and don't want to allow developers to add or update models, then this test might make sense. I don't think there is such a restriction on models, though.
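One alternative to duplicating the model list in the test is to assert invariants that should hold no matter which models are configured. A hypothetical sketch (MODELS and validateModels are stand-ins, not this PR's code):

```typescript
// Stand-in for the app's real MODELS constant:
const MODELS: string[] = [
  "gpt-4.1-mini",
  "claude-4-5-sonnet",
  "qwen-qwq",
  "grok-3-mini",
];

// Invariant checks that survive adding or renaming models:
// non-empty, no duplicates, ids follow a consistent format.
function validateModels(models: string[]): string[] {
  const errors: string[] = [];
  if (models.length === 0) errors.push("model list is empty");
  if (new Set(models).size !== models.length) errors.push("duplicate model ids");
  for (const m of models) {
    if (!/^[a-z0-9][a-z0-9.-]*$/.test(m)) errors.push(`malformed model id: ${m}`);
  }
  return errors;
}

console.log(validateModels(MODELS).length === 0 ? "valid" : "invalid");
```

Such a test catches genuine mistakes (duplicates, typos in ids) without failing every time the model catalog legitimately changes.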

Comment on lines +75 to +80
it('should have all SAMPLE_MODELS exist in actual providers', () => {
const sampleModelValues = Object.values(SAMPLE_MODELS);

sampleModelValues.forEach(model => {
expect(MODELS).toContain(model);
});
Contributor

This seems to be a repetition of the test above.

@waqasshaukat waqasshaukat marked this pull request as draft December 4, 2025 06:09

Labels: None yet
Projects: None yet
2 participants