The local LLM is incapable of generating a valid prototype #154

@cmenon12

Description

Describe the bug

When using the local LLM ai/gpt-oss, the model cannot generate a valid prototype and fails every time. Specifically (a sketch of checks for each failure mode follows this list):

  • It cannot correctly generate a next_question_value that points to the number of the next question.
  • It uses the wrong answer_type, e.g., "text" for an address question.
  • It adds extraneous properties, e.g., options on questions that don't need them.
  • It adds an empty "submit" question at the end.
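
For illustration, here is a minimal sketch of structural checks that would flag each of these failures. The field names (next_question_value, answer_type, options) come from the symptoms above; everything else about the schema (a list of question dicts, the exact set of answer types) is an assumption:

```python
# Minimal sketch, assuming a prototype is a list of question dicts with
# "number", "title", "answer_type", "next_question_value" and (optionally)
# "options". The field names come from the symptoms above; the rest of the
# schema is an assumption.

ALLOWED_ANSWER_TYPES = {"address", "text", "date", "selection"}  # assumed set


def validate_prototype(questions: list[dict]) -> list[str]:
    """Return human-readable problems found in a generated prototype."""
    problems = []
    numbers = {q.get("number") for q in questions}
    for q in questions:
        n = q.get("number")
        # next_question_value must reference a question that actually exists.
        nxt = q.get("next_question_value")
        if nxt is not None and nxt not in numbers:
            problems.append(f"question {n}: next_question_value {nxt} does not exist")
        # answer_type must be a known type.
        if q.get("answer_type") not in ALLOWED_ANSWER_TYPES:
            problems.append(f"question {n}: unknown answer_type {q.get('answer_type')!r}")
        # options only belongs on selection-style questions.
        if "options" in q and q.get("answer_type") != "selection":
            problems.append(f"question {n}: extraneous options property")
        # an empty trailing "submit" question is not a real question.
        if q.get("title", "").strip().lower() == "submit" and not q.get("answer_type"):
            problems.append(f"question {n}: empty submit question")
    return problems
```

A validator along these lines could also be used to reject and retry a generation before it reaches the user.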

Steps to reproduce

Run the Docker Compose script with the local LLM ai/gpt-oss.

How should the bug be fixed?

Options include:

  • Try out different models.
  • Improve the prompt and schema to coax the model, taking care not to tune them for the local LLM to the detriment of the Azure LLM.
  • Split the generation step into multiple steps (e.g., generate the questions, then put them in order); see the sketch after this list. This is a larger piece of work but is likely necessary anyway to generate larger prototypes.
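
As a rough sketch of that third option: the pipeline could generate questions without any routing fields, ask the model only for an ordering, and then fill in numbering and next_question_value deterministically in code. The llm_call helper, the prompts, and the field names here are hypothetical stand-ins, not the project's actual API:

```python
# Hypothetical two-step pipeline; llm_call, the prompts, and the field
# names are illustrative stand-ins, not the project's actual API.
from typing import Callable


def generate_prototype(description: str, llm_call: Callable) -> list[dict]:
    # Step 1: ask only for the questions, with no routing fields, so the
    # model cannot produce dangling next_question_value references.
    questions = llm_call(
        "List the form questions for: " + description
        + " Return a JSON list of objects with 'title' and 'answer_type' only."
    )
    # Step 2: ask only for an ordering -- a permutation of indices is a much
    # smaller task for a local model than cross-referencing question numbers.
    order = llm_call(
        "Order these questions sensibly and return the zero-based indices: "
        + str([q["title"] for q in questions])
    )
    questions = [questions[i] for i in order]
    # Step 3: numbering and routing are mechanical, so do them in code
    # rather than trusting the model to emit consistent references.
    for i, q in enumerate(questions, start=1):
        q["number"] = i
        q["next_question_value"] = i + 1 if i < len(questions) else None
    return questions
```

Splitting the work this way would also remove the model's opportunity to append an empty "submit" question, since routing and any terminal step are owned by code.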

Device, OS and browser used

No response

Additional context (if applicable)

Follow-on from #148 and #152.

Screenshots (if applicable)

No response
