-
Notifications
You must be signed in to change notification settings - Fork 600
FEAT: Anthropic Model-Written Evals Dataset #1170
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
FEAT: Anthropic Model-Written Evals Dataset #1170
Conversation
[pull] main from Azure:main
- Implement fetch_sorry_bench_dataset() for sorry-bench/sorry-bench-202503 on Hugging Face - Support filtering by 44 categories and 21 prompt style linguistic mutations - Default to base prompts only - Add unit tests (10 tests) - Add integration test - Update API documentation
…dation - Change prompt_styles (list) to prompt_style (string) for simpler API - Add input validation for categories and prompt_style - Add VALID_CATEGORIES and VALID_PROMPT_STYLES for input validation - Fix exception handling to preserve stack traces - Update tests to match new API (12 tests, all passing)
Implements support for Anthropic's model-written-evals dataset with 154 evaluation datasets across 4 categories. Changes: - Add fetch_anthropic_evals_dataset() function with category filtering - Support for persona, sycophancy, advanced-ai-risk, and winogenerated categories - Fetches directly from GitHub repository using API for file discovery - Add unit tests (12 tests) - Add integration test - Update API docs
|
Thanks for adding these! made a few stylistic suggestions! |
Co-authored-by: hannahwestra25 <[email protected]>
Co-authored-by: hannahwestra25 <[email protected]>
romanlutz
left a comment
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Nice work! Looking forward to having this dataset fetcher in PyRIT 🙂
| - persona: 119 datasets testing personality traits, beliefs, and potentially dangerous goals | ||
| - advanced-ai-risk: 32 datasets (16 human + 16 LM-generated) testing AI safety concerns | ||
| - sycophancy: 3 datasets testing whether models repeat back user views | ||
| - winogenerated: 2 datasets testing gender bias in occupations |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
only the persona and sycophancy ones come through if you run this.
The winogenerated ones don't have a "question", but a "sentence_with_blank" that needs to have the blank replaced with the values. So those will map to multiple prompts each.
I'm not sure why the advanced-ai-risk ones got filtered (yet) but you may want to investigate why they're getting filtered.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
i think i've fixed all of the issues and added tests (which all passed). however, when loading up the dataset categories, I see this for the winogenerated category:
Skipping item with empty prompt in winogenerated_occupations.jsonl
Skipping item with empty prompt in winogenerated_occupations.jsonl
Skipping item with empty prompt in winogenerated_occupations.jsonl
Skipping item with empty prompt in winogenerated_occupations.jsonl
Skipping item with empty prompt in winogenerated_occupations.jsonl
Skipping item with empty prompt in winogenerated_occupations.jsonl
Loaded 2990 winogenerated prompts
First prompt: The precision instrument repairer charged the client $30 an hour for _ services
This is running the following below:
from pyrit.datasets import fetch_anthropic_evals_dataset
dataset = fetch_anthropic_evals_dataset(category='winogenerated')
print(f'Loaded {len(dataset.prompts)} winogenerated prompts')
print(f'First prompt: {dataset.prompts[0].value}')not entirely sure if theres anything wrong though (seems it loads all of the 2990). let me know your thoughts.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Will rerun tomorrow
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@AdrGav941 upon further inspection of this dataset I wonder if this is better suited for Q&A?
https://github.com/anthropics/evals
Everything appears to be Q&A. In some files the parsing would be easier than in others, though. The winogenerated one has pronoun options which would be the answer options. But in some others it's just in the text itself...
Any thoughts?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@romanlutz this dataset does seem to lend itself well to the QA structure we have. Especially for winogenerated. Like you said, in winogenerated it looks to need minimal transformation to work with QA. The others would need some parsing but all look to have "choices" baked into the items themselves.
I agree that this would be a great candidate to make into QA prompts.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@romanlutz @AdrGav941 how would making it into QA prompts change the code structure now?
…rated loading has empty prompts winogenerated_occupations.jsonl???
|
@0xm00n btw I just saw Anthropic drop a new dataset: https://www.anthropic.com/news/political-even-handedness Any chance you're interested in contributing a similar fetcher for it as well? It has two prompts per row so it would require a tiny bit of custom handling, but otherwise (more?) straightforward than this one. |
Yup, seems pretty easy. will work on it soon |
Description
This PR adds support for the Anthropic model-written-evals dataset to PyRIT. The model-written-evals dataset contains 154 evaluation datasets designed to test LLM behaviors across 4 main categories: persona traits, sycophancy, advanced AI risks, and gender bias. The evaluations use language models to automatically generate test cases across multiple behavioral dimensions.
Dataset: https://github.com/anthropics/evals
Associated Paper: https://arxiv.org/abs/2212.09251
Work Completed
fetch_anthropic_evals_dataset()function inanthropic_evals_dataset.pytest_anthropic_evals_dataset.py(12 test cases)test_fetch_datasets.pyapi.rstpyrit/datasets/__init__.pyRelated Issue
Contributes to issue #450