Describe the bug
When using the local LLM ai/gpt-oss, the model cannot generate a valid prototype and fails every time. In particular (see the validation sketch after this list):
- It cannot correctly generate the `next_question_value` that points to the number of the next question.
- It uses the wrong `answer_type`, e.g., "text" for an address question.
- It adds extraneous properties, e.g., options on questions that don't need them.
- It adds an empty "submit" question at the end.
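Until generation is reliable, these failure modes can at least be caught mechanically. Below is a minimal Python sketch of such a check, under assumptions about the prototype structure: a flat list of question dicts, the hypothetical keys `question_number` and `question_text`, and `"submit"` as an `answer_type` value. Only `next_question_value` and `answer_type` are named in this issue; everything else is illustrative.

```python
# Minimal sketch of a post-generation check for the failure modes above.
# The prototype shape (list of question dicts, question_number,
# question_text, options, "submit" answer_type) is assumed for illustration.

def validate_prototype(questions: list[dict]) -> list[str]:
    errors = []
    known_numbers = {q.get("question_number") for q in questions}
    for q in questions:
        num = q.get("question_number")
        # next_question_value must point at a question that actually exists.
        nxt = q.get("next_question_value")
        if nxt is not None and nxt not in known_numbers:
            errors.append(f"question {num}: next_question_value {nxt} points nowhere")
        # Free-text answer types should not carry an options list.
        if q.get("answer_type") == "text" and q.get("options"):
            errors.append(f"question {num}: extraneous options on a text question")
        # Reject the empty trailing "submit" question the model tends to add.
        if q.get("answer_type") == "submit" and not q.get("question_text"):
            errors.append(f"question {num}: empty submit question")
    return errors
```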
Steps to reproduce
Run the Docker Compose script with the local LLM ai/gpt-oss.
How should the bug be fixed?
Options include:
- Try out different models.
- Improve the prompt and schema to coax the model, taking care not to tune them for a local LLM to the detriment of the Azure LLM.
- Split the generation step into multiple steps (e.g., generate the questions, then put them in order), as sketched below. This is a larger piece of work, but it is likely necessary anyway to generate larger prototypes.
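For the third option, a hedged sketch of what the split might look like. The LLM call is abstracted as a callable that returns parsed JSON; the function name, prompts, and two-step breakdown are illustrative, not existing code in this repo.

```python
from typing import Callable

# Illustrative two-step pipeline: generate question content first, then
# assign ordering/routing in a second, smaller LLM task.

def build_prototype(brief: str, ask_json: Callable[[str], list[dict]]) -> list[dict]:
    # Step 1: ask only for question content (question_text, answer_type,
    # options where relevant) -- no routing for the model to get wrong yet.
    questions = ask_json(
        "List the questions a form for this service needs, as JSON objects "
        f"with question_text and answer_type:\n{brief}"
    )
    # Step 2: a separate task over the fixed list -- assign each question a
    # number and its next_question_value.
    return ask_json(
        "Number these questions and set next_question_value for each, "
        f"returning the complete JSON list:\n{questions}"
    )
```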
Device, OS and browser used
No response
Additional context (if applicable)
Screenshots (if applicable)
No response