
generators: check message count requested matches message count generated#1558

Merged
leondz merged 4 commits into NVIDIA:main from leondz:generators/check_generations_requested_matches
Jan 16, 2026

Conversation

@leondz
Collaborator

@leondz leondz commented Jan 15, 2026

Description

  • raise an exception if the OpenAICompatible message count received doesn't match the number of generations requested
  • raise an exception in probe() if the message count received doesn't match the number of generations requested

Open to moving these checks (maybe to _execute_all?) or adding a hook.
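A minimal sketch of the kind of check described above, not the actual garak implementation: the function and exception names here (`check_response_count`, `GeneratorResponseCountError`) are hypothetical, used only to illustrate comparing the received message count against the requested generation count.

```python
from typing import List, Optional

class GeneratorResponseCountError(RuntimeError):
    """Hypothetical exception type for illustration only."""

def check_response_count(
    responses: List[Optional[str]], generations_requested: int
) -> List[Optional[str]]:
    """Raise if the generator returned a different number of responses
    than were requested; otherwise pass the responses through unchanged."""
    if len(responses) != generations_requested:
        raise GeneratorResponseCountError(
            f"Generator did not return the requested number of responses "
            f"(asked for {generations_requested} got {len(responses)}). "
            "supports_multiple_generations may be set wrongly."
        )
    return responses
```

Hooking a check like this at the generator level (rather than in each probe) would centralise the validation, which is roughly the trade-off discussed below.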

Verification

Put this lying code in garak/generators/test.py and request > 1 generation:

from typing import List

from garak.attempt import Conversation, Message
from garak.generators.base import Generator


class AlwaysOne(Generator):
    """always return one message, regardless of how many were requested"""

    name = "AlwaysOne"
    generator_family_name = "Test"
    supports_multiple_generations = True  # the lie: _call_model returns only one

    def _call_model(
        self, prompt: Conversation, generations_this_call: int = 1
    ) -> List[Message | None]:
        return [Message("")]
(garak) 13:15:19 x1:~/dev/garak [generators/check_generations_requested_matches] $ python -m garak -t test.AlwaysOne -g 4 -p phrasing
garak LLM vulnerability scanner v0.14.0.pre1 ( https://github.com/NVIDIA/garak ) at 2026-01-15T13:15:22.019254
📜 logging to /home/lderczynski/.local/share/garak/garak.log
🦜 loading generator: Test: AlwaysOne
📜 reporting to /home/lderczynski/.local/share/garak/garak_runs/garak.1158b59d-9ade-4421-bf24-7a31d5ce9b1b.report.jsonl
🕵️  queue of probes: phrasing.FutureTense, phrasing.PastTense
Generator did not return the requested number of responses (asked for 4 got 1). supports_multiple_generations may be set wrongly.                                      
(garak) 13:15:23 x1:~/dev/garak [generators/check_generations_requested_matches] $

@leondz leondz added the generators Interfaces with LLMs label Jan 15, 2026
Collaborator

@jmartin-tech jmartin-tech left a comment


These restrictions seem sensible to me. A minor nitpick added related to error phrasing.

I am not 100% sure, however; there might be edge cases where a target behaves irregularly, causing this to terminate a run that has already gathered responses for a probe. We can accept this as is and await user feedback on whether this ends up being something we should handle more gracefully.

Comment thread garak/generators/openai.py Outdated
Comment thread garak/probes/base.py Outdated
leondz and others added 2 commits January 16, 2026 11:41
Co-authored-by: Jeffrey Martin <[email protected]>
Signed-off-by: Leon Derczynski <[email protected]>
Co-authored-by: Jeffrey Martin <[email protected]>
Signed-off-by: Leon Derczynski <[email protected]>
@leondz
Collaborator Author

leondz commented Jan 16, 2026

I am not 100% sure, however; there might be edge cases where a target behaves irregularly, causing this to terminate a run that has already gathered responses for a probe. We can accept this as is and await user feedback on whether this ends up being something we should handle more gracefully.

Yeah, this is a risk. I suspect this should be caught at generator level, but let's wait for the signal to emerge first.

@leondz leondz merged commit f219d9e into NVIDIA:main Jan 16, 2026
16 of 17 checks passed
@github-actions github-actions Bot locked and limited conversation to collaborators Jan 16, 2026
