
Hi, I'm Richard Ren 👋

I'm a fourth-year undergraduate in the Jerome Fisher M&T Program at the University of Pennsylvania, specializing in AI safety and transparency research. My work focuses on evaluating and understanding large language models, particularly in the areas of:

  • Model evaluations and benchmarking
  • Steering model behaviors
  • AI transparency and interpretability

🔬 Research Highlights

  • Co-authored the most comprehensive empirical meta-analysis of AI safety benchmarks to date
  • Work presented at the UK AI Safety Institute
  • Research cited by OpenAI's Superalignment Fast Grants page
  • Techniques incorporated into major open-source projects (llama.cpp, vLLM)

📚 Publications

AI Safety & Machine Learning

Physics & Applied ML

🔗 Connect With Me

🧪 Research Style

I'm an experimentalist at heart with a quick, iterative, and empirically driven approach. I prefer exploring promising new research directions over work that fits cleanly into pre-existing areas.

Pinned Repositories

  1. centerforaisafety/mask — Code for evaluating AI systems on the MASK honesty benchmark. (Python)

  2. centerforaisafety/safetywashing — Measuring correlations between safety benchmarks and general AI capabilities benchmarks. (Python)